Why is fairness to people with disabilities a different problem from fairness concerning other protected attributes like race and gender?

Disability status is much more diverse and complex in the ways that it affects people. A lot of systems will model race or gender as a simple variable with a small number of possible values. But when it comes to disability, there are so many different forms and different levels of severity. Some of them are permanent, some are temporary. Any one of us might join or leave this category at any time in our lives. It’s a dynamic thing.

I think the more general challenge for the AI community is how to handle outliers, because machine-learning systems—they learn norms, right? They optimize for norms and don’t treat outliers in any special way. But oftentimes people with disabilities don’t fit the norm. The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities.
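
To make that "judged by similarity" point concrete, here is a toy sketch (not from the source; the data, the feature names, and the choice of a k-nearest-neighbour model are all invented for illustration). A model trained almost entirely on "typical" examples can only answer for an atypical person by analogy to the nearest typical people it has seen:

```python
# Hypothetical illustration: a nearest-neighbour model trained mostly on
# "typical" users assigns an atypical (outlier) user whatever label its
# nearest typical neighbours happen to have.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Majority of training data clusters around a "norm" (say, typing speed and
# error rate); labels reflect outcomes observed for that majority.
typical = rng.normal(loc=[60, 0.2], scale=[5, 0.05], size=(500, 2))
typical_labels = (typical[:, 0] > 55).astype(int)  # "qualified" if fast enough

model = KNeighborsClassifier(n_neighbors=5).fit(typical, typical_labels)

# An outlier user, e.g. someone using assistive input, far from the norm.
outlier = np.array([[15.0, 1.5]])

# The model still answers by analogy to its nearest "typical" neighbours,
# even though it has never seen anyone like this user.
print(model.predict(outlier))        # label borrowed from the majority
print(model.kneighbors(outlier)[0])  # distances show how far the analogy stretches
```

Whatever label comes back, it is borrowed from the people the system considers most similar, which is exactly the failure mode described above when nobody similar was ever in the training data.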

Source: Can you make an AI that isn’t ableist?

See also:

Design is Tested at the Edges: Intersectionality, The Social Model of Disability, and Design for Real Life