Abstract
Eye movement patterns are known to differ depending on the task given while viewing a stimulus, but less is known about how these patterns change as a function of expertise. When a particular visual pattern is viewed, a particular sequence of eye movements is executed; this sequence is called a scanpath. In this work we attempt to answer the question, “Do art novices and experts look at paintings differently?” If they do, we should be able to discriminate between the two groups by applying machine learning to their scanpaths. This can be done with algorithms for Multi-Fixation Pattern Analyses (MFPA), a family of machine learning algorithms for making inferences about people from their gaze patterns. MFPA and related approaches have been widely used to study viewing behavior during visual tasks, but earlier approaches used only gaze position (x, y), fixation duration, and temporal order, not the actual visual features in the image.
In this work, we extend MFPA algorithms to use visual features in order to answer a question that has been overlooked by most earlier studies: if there is a difference between experts and novices, how different are their viewing patterns, and do these differences exist for both low- and high-level image features? To address this, we combined MFPA with a deep Convolutional Neural Network (CNN). Instead of converting a trial’s 2-D fixation positions into Fisher Vectors, we extracted image features surrounding the fixations using a deep CNN and turned them into Fisher Vectors for a trial. The Fisher Vector is an image representation obtained by pooling local image features; it is frequently used as a global image descriptor in visual classification. We call this approach MFPA-CNN. While CNNs have previously been used to recognize and classify objects in paintings, this work goes a step further to study human perception of paintings. Ours is the first attempt to use MFPA and CNNs to study subjects’ viewing patterns in the field of art.
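The per-trial encoding described above can be illustrated with a minimal sketch. This is not the thesis code: it assumes local descriptors (standing in for CNN features extracted around fixations) are encoded with a simplified Fisher Vector that uses only the gradients with respect to the Gaussian means, followed by the standard power- and L2-normalization. The function and variable names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a trial's local descriptors as a simplified Fisher Vector:
    gradients of the GMM log-likelihood w.r.t. the component means only."""
    X = np.atleast_2d(descriptors)
    N, D = X.shape
    gamma = gmm.predict_proba(X)          # N x K soft assignments
    mu = gmm.means_                       # K x D
    sigma = np.sqrt(gmm.covariances_)     # K x D (diagonal covariance)
    w = gmm.weights_                      # K mixture weights
    # Whitened deviations of each descriptor from each component mean.
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # N x K x D
    fv = (gamma[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    fv = fv.ravel()                       # length K * D
    # Power- and L2-normalization, standard for Fisher Vector encodings.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Toy demo with random "CNN features" (8-D) standing in for real ones.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 8))   # features pooled across many trials
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(train_feats)
trial_feats = rng.normal(size=(12, 8))    # features around one trial's fixations
fv = fisher_vector(trial_feats, gmm)      # fixed-length vector for the trial
print(fv.shape)                           # (32,) = n_components * feature_dim
```

Because every trial, regardless of its number of fixations, maps to a vector of fixed length K × D, the resulting Fisher Vectors can be fed directly to a standard classifier to separate novices from experts.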
If our approach succeeds in differentiating novices from experts, with and without instructions, when both low- and high-level CNN image features are used, we can demonstrate that novices and experts view art differently. The outcome of this study can then be used to further investigate which image features the subjects concentrate on. We expect this work to influence further research in image perception and experimental aesthetics.
Library of Congress Subject Headings
Machine learning; Visual perception--Data processing; Eye--Movements--Data processing; Expertise--Data processing
Publication Date
12-19-2018
Document Type
Thesis
Student Type
Graduate
Degree Name
Imaging Science (MS)
Department, Program, or Center
Chester F. Carlson Center for Imaging Science (COS)
Advisor
Elena Fedorovskaya
Advisor/Committee Member
Christopher Kanan
Advisor/Committee Member
Andrew M. Herbert
Recommended Citation
Kapisthalam, Sanjana, "An Application of Deep-Learning to Understand Human Perception of Art" (2018). Thesis. Rochester Institute of Technology. Accessed from
https://repository.rit.edu/theses/9970
Campus
RIT – Main Campus
Plan Codes
IMGS-MS