Abstract

Adversarial examples pose a serious threat to the reliability of deep neural networks by subtly manipulating inputs to induce incorrect outputs. Despite progress in the field, convolutional neural networks (CNNs) remain vulnerable to such perturbations. This research proposes a novel approach to improving the adversarial robustness of CNNs by incorporating human viewing behavior into both training and testing. An eye-tracking dataset, collected during experiments in which participants learned to classify novel objects from the Greeble dataset, was used to capture how humans learn previously unseen object categories and to establish a baseline for comparison with models trained from scratch. The study compares model saliency with human fixation maps and introduces a padded patch-based training pipeline together with gaze-informed testing modes: three training modes and six testing modes, all designed to reflect human viewing strategies. Models are trained on full images, scaled patches, or padded patches, and tested on corresponding variants, including gaze-informed scaled and padded patches and heatmap-overlaid inputs. This design mimics how human attention is limited to a narrow visual angle rather than the entire field of view. Experimental results reveal that participants employed markedly different viewing patterns while examining the same images, suggesting that no single viewing strategy is universally adopted. Nevertheless, training with inputs that simulate a narrow visual angle (padded patch-based training) proved more effective than training on full images. Models tested on patches maintain stable accuracy even as the strength of adversarial perturbations increases. Notably, all models achieve their best performance when evaluated on heatmap-overlaid images and gaze-informed padded patches. These findings suggest that aligning model inputs with human visual attention can improve both performance and adversarial robustness at inference time.
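
As a rough, hypothetical illustration of the padded patch idea described above, the sketch below crops a square window around a human fixation point and zero-pads the rest of the image so the input dimensions stay unchanged, simulating a narrow visual angle. The function name, patch size, and fixation coordinates are illustrative assumptions, not the thesis' actual pipeline.

# Minimal sketch (assumed, not the thesis' exact implementation): build a
# "padded patch" input by keeping only a patch_size x patch_size window
# centered on a gaze fixation and zeroing out the rest of the image.
import numpy as np

def gaze_padded_patch(image, fixation_xy, patch_size=64):
    """Return an array of the same shape as `image` in which everything
    outside a patch_size x patch_size window around the fixation is zero."""
    h, w = image.shape[:2]
    x, y = fixation_xy
    half = patch_size // 2

    # Clamp the window to the image bounds.
    top, bottom = max(0, y - half), min(h, y + half)
    left, right = max(0, x - half), min(w, x + half)

    padded = np.zeros_like(image)
    padded[top:bottom, left:right] = image[top:bottom, left:right]
    return padded

# Example usage: restrict a 224x224 RGB image to a 64-pixel patch around a
# (hypothetical) fixation point before feeding it to the CNN.
img = np.random.rand(224, 224, 3).astype(np.float32)
patch_input = gaze_padded_patch(img, fixation_xy=(120, 90), patch_size=64)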

Library of Congress Subject Headings

Neural networks (Computer science)--Security measures; Convolutions (Mathematics); Computer vision; Gaze

Publication Date

8-8-2025

Document Type

Thesis

Student Type

Graduate

Degree Name

Artificial Intelligence (MS)

College

Golisano College of Computing and Information Sciences

Advisor

Cory Merkel

Advisor/Committee Member

Nathan Cahill

Advisor/Committee Member

Matthew Wright

Campus

RIT – Main Campus

Plan Codes

AI-MS
