
    Can you make an AI that isn’t ableist?

    IBM researcher Shari Trewin on why bias against disability is much harder to squash than discrimination based on gender or race.

    Artificial intelligence has a well-known bias problem, particularly when it comes to race and gender. You may have seen some of the headlines: facial recognition systems that fail to recognize black women, or automated recruiting tools that pass over female candidates.

    But while researchers have tried hard to address some of the most egregious issues, there’s one group of people they have overlooked: those with disabilities. Take self-driving cars. Their algorithms rely on training data to learn what pedestrians look like so the vehicles won’t run them over. If the training data doesn’t include people in wheelchairs, the technology could put those people in life-threatening danger.

    For Shari Trewin, a researcher on IBM’s accessibility leadership team, this is unacceptable. As part of a new initiative, she is now exploring new design processes and technical methods to mitigate machine bias against people with disabilities. She talked to us about some of the challenges—as well as some possible solutions.

    The following has been edited for length and clarity.

    Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?

    Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.
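
    To make that contrast concrete, here is a minimal sketch in Python (all type names are hypothetical, not drawn from any real system) of the modeling gap Trewin describes: gender is often stored as one categorical field, while a faithful record of disability needs open-ended kinds, degrees of severity, and a status that can change over time.

    ```python
    from dataclasses import dataclass, field
    from enum import Enum

    class Gender(Enum):
        # The "simple variable with a small number of possible values" pattern.
        FEMALE = "female"
        MALE = "male"
        NONBINARY = "nonbinary"

    @dataclass
    class DisabilityRecord:
        # What a faithful representation would need instead.
        kind: str            # open-ended: vision, hearing, mobility, cognitive, ...
        severity: float      # continuous, not a handful of categories
        permanent: bool      # some disabilities are temporary
        since: str           # status can begin (or end) at any point in life

    @dataclass
    class Person:
        gender: Gender
        disabilities: list[DisabilityRecord] = field(default_factory=list)  # zero or many

    # Example: a temporary, moderate vision impairment (values invented).
    p = Person(gender=Gender.FEMALE,
               disabilities=[DisabilityRecord("low vision", severity=0.4,
                                              permanent=False, since="2021-06")])
    ```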


    About one in five people in the US currently have a disability of some kind. So it’s really prevalent but hard to pin down into a simple variable with a small number of possible values. There might be a system that discriminates against blind people but not against deaf people. So testing for fairness becomes much harder.

    Disability information is also very sensitive. People are much more reluctant to reveal it than gender or age or race information, and in some situations it’s even illegal to ask for this information. So a lot of times in the data you’re much less likely to know anything about disabilities that a person may or may not have. That also makes it much harder to know if you have a fair system.

    I wanted to ask you about that. As humans, we've decided that the best way to avoid disability discrimination is not to reveal disability status. Why wouldn't that hold true for machine-learning systems?


    Yeah, that’s the first thing people think of: if the system doesn’t know anything about individuals’ disability status, surely it will be fair. But the problem is that the disability often impacts other bits of information that are being fed into the model. For example, say I am a person who uses a screen reader to access the web, and I’m doing an online test for a job application. If that test program isn’t well designed and accessible to my screen reader, it’s going to take me longer to navigate around the page before I can answer the question. If that time isn’t taken into consideration in assessing me, then anybody who’s using that same tool with a similar disability is at a systematic disadvantage—even if the system doesn’t know that I’m blind.
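
    A toy simulation (all numbers invented for illustration) shows the mechanism Trewin describes: even when disability status is withheld from the model, a timing-derived feature carries it into the outcome.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    uses_screen_reader = rng.random(n) < 0.05   # hidden attribute: never given to the "model"
    skill = rng.normal(70, 10, n)               # true ability, identical distribution in both groups
    # An inaccessible test page adds navigation time only for screen-reader users.
    minutes = rng.normal(30, 5, n) + np.where(uses_screen_reader, 15, 0)

    efficiency = skill / minutes                            # the feature the system actually scores on
    hired = efficiency > np.quantile(efficiency, 0.8)       # hypothetical top-20% cutoff

    for label, mask in [("screen reader", uses_screen_reader),
                        ("no screen reader", ~uses_screen_reader)]:
        print(f"{label:17s} selection rate: {hired[mask].mean():.2%}")
    # Despite identical underlying skill, the screen-reader group's selection rate
    # is far lower: the timing feature acts as a proxy for the status the model never saw.
    ```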

    So if there are so many different nuances to disability, is it actually possible to achieve fairness?

    I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.

    What would work a lot better would be a method that combines machine learning with some additional solution, like logical rules that are implemented in a layer above. There are also some situations where more attention to gathering a more diverse data set would definitely help. Some people are experimenting with techniques where you take out the core of the data and try to train for the outliers. Others are experimenting with different learning techniques that might optimize better for outliers rather than the norm.
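
    Here is a minimal sketch of that rules-above-the-model idea (the scoring function, rule, and numbers are hypothetical): a deterministic policy layer neutralizes a signal the model would otherwise misuse, rather than trusting it to generalize from a handful of outliers.

    ```python
    MEDIAN_MINUTES = 30.0  # hypothetical cohort median used to neutralize timing

    def learned_score(features: dict) -> float:
        # Stand-in for a trained model: rewards correctness, penalizes time taken.
        return features["correct"] * 2.0 - features["minutes"] * 0.1

    def rule_adjusted_score(features: dict, accommodations: set) -> float:
        # Logical rule layered above the model: for declared screen-reader users,
        # timing on this test is not a valid signal, so replace it with the median
        # instead of hoping the model learned that from a few outlier examples.
        if "screen_reader" in accommodations:
            features = dict(features, minutes=MEDIAN_MINUTES)
        return learned_score(features)

    print(rule_adjusted_score({"correct": 40, "minutes": 45.0}, {"screen_reader"}))  # 77.0
    print(rule_adjusted_score({"correct": 40, "minutes": 45.0}, set()))              # 75.5
    ```

    The point of the design is that the rule encodes policy explicitly, so it holds even for cases the training data never covered.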

    I think it’s only when you start thinking about disability that you start thinking about the diversity of individuals and the importance of outliers. If you don’t have enough gender diversity in your data set, you can fix that. It’s not so easy to fix disability diversity.

    How do you get over the problem of people being private about their disability status?

    Yeah, in order to test a system for fairness, you need some data. And people with disabilities providing that data is a social good, but it’s a personal risk. People with disabilities are often easily identified even in anonymous data, just because they’re so unique. So how do we mitigate that? We’re still figuring that out.

    What are your greatest concerns about this problem?

    Oftentimes AI systems are optimizing something that is not the wellbeing of the people who are affected by the decisions. That impact needs to have much more prominence in the design process, so that we’re not just introducing a system that looks at how much money we’re saving or how efficiently we’re processing people. We need new ways of measuring systems that incorporate the aspect of impact on the end users, especially if it’s a disadvantaged group.

    How would we do that?

    Testing for fairness is one way of measuring that impact. Including the disadvantaged group in the design process and hearing their concerns is another. Even explicitly including some metric for stakeholder satisfaction that you could measure through interviews or surveys—that sort of thing.
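
    One concrete form such a fairness test could take, sketched below with invented audit data, is a per-subgroup selection-rate ratio in the style of the four-fifths rule. As Trewin notes, disability demands one check per subgroup; a system can pass for deaf users while failing blind users.

    ```python
    from collections import defaultdict

    # (disability subgroup, was selected) from a hypothetical consented audit sample
    outcomes = [
        ("none", True), ("none", False), ("none", True), ("none", True),
        ("blind", False), ("blind", False), ("blind", True),
        ("deaf", True), ("deaf", True), ("deaf", False),
    ]

    by_group = defaultdict(list)
    for group, selected in outcomes:
        by_group[group].append(selected)

    reference = sum(by_group["none"]) / len(by_group["none"])  # non-disabled selection rate
    for group, picks in sorted(by_group.items()):
        rate = sum(picks) / len(picks)
        # A ratio below 0.8 would flag this subgroup under the four-fifths rule.
        print(f"{group:5s} rate={rate:.2f}  ratio vs. reference={rate / reference:.2f}")
    ```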

    What are the things that you’re excited about in this area of research?

    AI technologies are already changing the world for people with disabilities by providing them with new capabilities, like applications that tell you what’s in your field of view when you point your phone’s camera at it.

    I think that if we do it right, there’s a real opportunity for AI systems to improve on previous human-only systems. There’s a lot of discrimination and bias and misunderstanding of people with disabilities in society today. If we can find a way to produce AI systems that eliminate that kind of bias, then we can start to change the treatment of people with disabilities and reduce discrimination.
