Description
We discuss issues of Artificial Intelligence (AI) fairness for people with disabilities, with examples drawn from our research on human-computer interaction (HCI) for AI-based systems for people who are Deaf or Hard of Hearing (DHH). In particular, we discuss: the need to include data from people with disabilities in training sets; the lack of interpretability of AI systems; the ethical responsibilities of access-technology researchers and companies; the need for appropriate evaluation metrics for AI-based access technologies (to determine whether they are ready to be deployed and whether users can trust them); and the ways in which AI systems influence human behavior and shape the set of abilities users need to interact successfully with computing systems.
Date of creation, presentation, or exhibit
10-1-2019
Document Type
Conference Proceeding
Department, Program, or Center
School of Information (GCCIS)
Recommended Citation
Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth. 2020. Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing. SIGACCESS Access. Comput., 125, Article 4 (October 2019), 1 page. DOI: https://doi.org/10.1145/3386296.3386300
Campus
RIT – Main Campus
Comments
© 2019. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM SIGACCESS Accessibility and Computing, http://dx.doi.org/10.1145/3386296.3386300.