By Alexandra Reeve Givens, Executive Director
I’m proud to announce the launch of a new multi-year project at Georgetown’s Institute for Tech Law & Policy on algorithmic fairness and the rights of people with disabilities.
Supported by the Ford Foundation, the project is designed to analyze the impact of algorithmic decision-making on people with disabilities—in employment, benefits determinations, and other settings where AI-driven decision-making touches people’s lives. The project will assess specific areas of risk, analyze gaps in existing legal and policy protections, and forge cross-disciplinary collaborations to center the perspectives of people with disabilities in efforts to develop algorithmic fairness and accountability. You can read more about our approach below.
This project seeks to make an important contribution to the growing conversation around fairness, accountability, and transparency in machine learning. Despite increasing focus on ethics in AI, few AI scholars or policy experts are considering the unique risks and impacts of algorithmic decision-making for the millions of individuals affected by disability. In turn, disability rights advocates and regulators are just starting to consider how machine learning may impact this community.
This troubling gap exists even though people with disabilities are, in many ways, disproportionately vulnerable to the threats of algorithmic bias. Accommodation requirements, gaps in an individual’s employment history, or the need for flexible work shifts may all cause automated systems to penalize individuals. Programs that evaluate employees based on sentiment analysis may down-rate those whose expressions vary from an algorithm’s perceived “norm.” Job screening programs that rely on timed answers may penalize candidates who rely on assistive technologies. These are but a handful of examples.
While the emerging literature around AI bias gives some useful context for disability rights, the legal and policy framework for people with disabilities requires specific study. For example, the Americans with Disabilities Act (ADA) requires employers to adopt “reasonable accommodations” for qualified individuals with a disability. But what is a “reasonable accommodation” in the context of machine learning and AI? How will the ADA’s unique standard interact with case law and scholarship about AI bias against other protected groups? When the ADA governs what questions employers can ask about a candidate’s disability, how should we think about inferences from data the employer otherwise collects?
The Institute’s program on algorithmic fairness for people with disabilities seeks to address these questions. In partnership with other civil society organizations and our Project Advisory Committee, it will foster collaborative engagement, conduct legal and policy analysis, and produce materials to shape employer practices, inform potential enforcement actions, and empower individuals to know their rights.
Our initial step is to hire a Program Director who brings deep experience in disability rights. We are eager to find a talented leader who can direct the project’s research agenda, lead stakeholder engagement, and oversee and execute the project work in collaboration with me and the Institute’s other team members. Individuals who are personally affected by disability are particularly encouraged to apply. The hiring notice is available here. Please share it widely!
Our second step is to continue developing our Project Advisory Committee, which is currently in formation. The committee consists of disability rights experts, including people living with disability themselves, AI experts, and other individuals who will help inform the project work. It will be finalized later this year, after the Program Director has been hired and has had an opportunity to weigh in. If you have recommendations for the committee or would like to be considered, please contact us using this form.
Finally, this project will operate through extensive collaboration with other stakeholders, including individuals and organizations focused on disability rights, those at the intersection of disability rights and the rights of other marginalized communities, and those working on algorithmic fairness and accountability. The project’s key goal is to foster information exchange and knowledge sharing between these communities, with an eye to prioritizing issues, inspiring collaborations, and developing actionable work. We are committed to a consciously intersectional approach, working to center the experience of multiply-marginalized communities and advance equity for the most marginalized individuals.
Are you an experienced disability rights lawyer who cares about these issues, or do you know someone who is? Please review the Job Posting for our Program Director and apply! Applications will be considered on a rolling basis starting June 3, 2019.
Do you have thoughts about this project, or would you like to keep up to date on the latest developments? Please join our mailing list or contact us using this form.
We look forward to engaging with you in this work.