Description
American Sign Language (ASL) is a visual-gestural language used by many people who are deaf or hard of hearing. In this paper, we design a visual recognition system based on action recognition techniques to recognize individual ASL signs. Specifically, we focus on recognition of words in videos of continuous ASL signing. The proposed framework combines multiple signal modalities because ASL includes gestures of both hands, body movements, and facial expressions. We have collected a corpus of RGB + depth videos of multi-sentence ASL performances from both fluent signers and ASL students; this corpus has served as the source of training and testing sets for the multiple evaluation experiments reported in this paper. Experimental results demonstrate that the proposed framework can automatically recognize ASL signs.
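To illustrate one way such multi-modality fusion can work, here is a minimal sketch in Python assuming a late-fusion design: per-modality feature vectors (hands, body, face) are concatenated and matched against per-sign templates. All names, dimensions, and the nearest-template classifier are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature dimensions; the paper's
# actual descriptors and fusion scheme may differ.
DIMS = {"hands": 64, "body": 32, "face": 16}
SIGNS = ["HELLO", "THANK-YOU", "NAME"]

def extract_features(clip, dim):
    """Stand-in for a per-modality feature extractor
    (e.g., motion descriptors over an RGB + depth clip)."""
    return rng.standard_normal(dim)  # placeholder features

def fuse(features):
    """Late fusion: concatenate per-modality feature vectors."""
    return np.concatenate([features[m] for m in DIMS])

# One fused template per sign (in practice, learned from training clips).
templates = {
    s: fuse({m: extract_features(None, d) for m, d in DIMS.items()})
    for s in SIGNS
}

def recognize(clip):
    """Classify a query clip by nearest fused template."""
    query = fuse({m: extract_features(clip, d) for m, d in DIMS.items()})
    return min(templates, key=lambda s: np.linalg.norm(query - templates[s]))

print(recognize(clip=None))  # prints one of SIGNS
```

A late-fusion design like this keeps each modality's feature pipeline independent, which makes it easy to add or drop signal streams; other fusion points (early feature fusion, score-level fusion) are equally plausible readings of the abstract.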
Date of creation, presentation, or exhibit
9-2016
Document Type
Conference Paper
Department, Program, or Center
Information Sciences and Technologies (GCCIS)
Recommended Citation
C. Zhang, Y. Tian and M. Huenerfauth, "Multi-modality American Sign Language recognition," 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, 2016, pp. 2881-2885. doi: 10.1109/ICIP.2016.7532886
Campus
RIT – Main Campus
Comments
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.