This research thesis explored the way in which people look at images from different semantic categories (e.g., handshake versus non-handshake) and directly related those results to computational approaches for automatic image classification. Although many eye-tracking experiments have been performed, to our knowledge this was the first study to specifically compare eye movements across categories and to link category-specific eye-tracking results to automatic image classification techniques. The hypothesis was that the eye movements of human observers differ for images in different semantic classes (e.g., handshakes versus others), and that this information can be used effectively in automatic techniques. We presented eye-tracking experiments that showed the variations in eye movements (i.e., fixations and saccades) across individuals for images in different categories. We then examined how empirical data obtained from eye-tracking studies such as this one, for a specific class, can be integrated with computational frameworks.

The experiment consisted of 5 sets of 50 images each. Ten subjects (6 male and 4 female, all with normal or corrected-to-normal vision) were tracked while viewing these 250 images, and the results were analyzed to determine whether different subjects' eye movements are correlated when they view similar images. I discuss the key ideas of the experiment (eye tracking, vision terminology, and the experimental setup) and present the results in a general sense. Alexandro Jaimes was responsible for the computational development of the software; I provided him with the raw data and performed some rudimentary analysis.
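The abstract mentions analyzing whether different subjects' eye movements are correlated on similar images. As a minimal illustrative sketch only (not the analysis actually used in the study), one common approach is to bin each subject's fixation points into a coarse grid over the image and correlate the resulting fixation-density maps; the grid size, image dimensions, and sample fixation coordinates below are invented for illustration.

```python
# Hypothetical sketch: correlating two subjects' fixation patterns on the
# same image. This is NOT the study's actual method; all parameters and
# data are invented for illustration.
import numpy as np

def fixation_map(fixations, width, height, grid=8):
    """Bin (x, y) fixation points into a grid x grid density map,
    normalized so the bins sum to 1."""
    counts = np.zeros((grid, grid))
    for x, y in fixations:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        counts[row, col] += 1
    return counts / max(counts.sum(), 1)

def fixation_correlation(fix_a, fix_b, width, height):
    """Pearson correlation between two subjects' fixation maps."""
    a = fixation_map(fix_a, width, height).ravel()
    b = fixation_map(fix_b, width, height).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Two made-up subjects, both fixating near the image center:
subj1 = [(310, 245), (320, 250), (150, 90)]
subj2 = [(305, 240), (330, 260), (500, 400)]
print(fixation_correlation(subj1, subj2, width=640, height=480))
```

A higher correlation across subject pairs for images within one semantic category than across categories would be one way to quantify the category-specific viewing patterns the thesis hypothesizes.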

Publication Date


Document Type

Not listed.


Note: imported from RIT's Digital Media Library running on DSpace to RIT Scholar Works in February 2014. Senior project.


RIT – Main Campus