Abstract
To compare methods of displaying speech-recognition confidence of automatic captions, we analyzed eye-tracking and response data from deaf or hard of hearing participants viewing videos.
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Publication Date
2017
Document Type
Article
Department, Program, or Center
School of Information (GCCIS)
Recommended Citation
Rathbun, Kevin; Berke, Larwan; Caulfield, Christopher; Stinson, Michael; and Huenerfauth, Matt, "Eye Movements of Deaf and Hard of Hearing Viewers of Automatic Captions" (2017). Journal on Technology and Persons with Disabilities, 5, 130-140. Accessed from
https://repository.rit.edu/article/1882
Campus
RIT – Main Campus
Comments
© 2017 The authors and California State University, Northridge