Considerations for AI Fairness for People with Disabilities

In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to affect the fair treatment of people with disabilities both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas — employment, education, public safety, and healthcare — identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We then discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human-centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.

Focus: AI and Disability/Outliers
Source: AI Matters
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: N/A
Learn Tags: Bias, Design/Methods, Disability, Framework, Inclusive Practice
Summary: This paper recommends reviewing AI systems for their potential impact on the user in their broader context of use, and including people with disabilities when sourcing data to build models and when testing, to create a more inclusive and robust system.