This research was initiated to enhance a video-based eye tracker's ability to detect small eye movements. Chaudhary and Pelz (2019) created an excellent foundation with their motion tracking of iris features to detect small eye movements [1], in which they used classical handcrafted feature-extraction methods such as the Scale-Invariant Feature Transform (SIFT) to match features across iris image frames. They extracted features from eye-tracking videos and then applied a patented approach [2] of tracking the geometric median of the feature distribution. This approach excludes outliers, and velocity is approximated by scaling the frame-to-frame displacement by the sampling rate. To detect microsaccades (small, rapid eye movements that occur during fixation), thresholding of the estimated velocity was used in [1]. Our goal is to build a robust mathematical model of the 2D feature distribution used in the patented method [2]. To this end, we worked in two steps. First, we studied a large number of recent deep learning approaches alongside classical handcrafted feature extractors such as SIFT, extracted features from the eye-tracker videos collected by the Multidisciplinary Vision Research Lab (MVRL), and identified the best matching process for the RIT-Eyes dataset [3], with the goal of making feature extraction as robust as possible. Second, we showed that deep learning methods detect more feature points in iris images and that frame-by-frame matching of the extracted features is more accurate than with the classical approach.
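The pipeline summarized above can be sketched in a minimal form: take the per-frame displacements of matched iris features, estimate the frame-to-frame shift robustly via the geometric median (computed here with Weiszfeld's algorithm, a standard choice, not necessarily the one in [2]), convert it to velocity by scaling by the sampling rate, and threshold. The synthetic displacement data, the 500 Hz sampling rate, and the threshold value are illustrative assumptions, not figures from [1] or [2].

```python
import numpy as np

def geometric_median(points, n_iter=100, eps=1e-7):
    """Weiszfeld's algorithm: the geometric median minimises the sum of
    Euclidean distances to the points, making it robust to outlier matches."""
    m = points.mean(axis=0)                # start from the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(points - m, axis=1)
        d = np.where(d < eps, eps, d)      # guard against division by zero
        w = 1.0 / d
        m_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

# Synthetic per-frame displacements of matched iris features (pixels):
# most features agree on a small shift; a few mismatches are gross outliers.
rng = np.random.default_rng(0)
inliers = rng.normal(loc=[0.3, 0.1], scale=0.05, size=(40, 2))
outliers = rng.normal(loc=[5.0, -4.0], scale=1.0, size=(3, 2))
disp = np.vstack([inliers, outliers])

shift = geometric_median(disp)             # robust frame-to-frame shift (px)
sampling_rate = 500.0                      # Hz, illustrative
velocity = np.linalg.norm(shift) * sampling_rate  # px/s

# Illustrative velocity threshold for flagging a candidate microsaccade.
is_microsaccade = velocity > 100.0
```

In practice the displacements would come from matching SIFT (or learned) descriptors between consecutive iris frames; the geometric median keeps the handful of bad matches from corrupting the velocity estimate, which a simple mean would not.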

Library of Congress Subject Headings

Eye tracking--Data processing; Digital video--Data processing; Machine learning; Pattern recognition systems; Biometric identification

Degree Name

Applied Statistics (MS)


Advisor

Robert Parody

Advisor/Committee Member

Ernest Fokoue

Advisor/Committee Member

Jeff Pelz


Campus

RIT – Main Campus