Abstract

Deep learning systems have brought about a paradigm shift in artificial intelligence, often surpassing human performance across a wide range of tasks. Nonetheless, serious concerns remain about their robustness when deployed in the real world. Multiple studies have shown that deep learning models tend to latch onto biases present in their training data rather than truly solving the intended tasks. Given how pervasive this issue is across datasets and tasks, researchers have proposed a variety of techniques to improve bias resilience. However, the evaluation protocols used in prior work leave many open questions about these techniques' true robustness, and the primary goal of this dissertation is to explore those questions. Specifically, we conduct studies with more comprehensive evaluation protocols to determine whether these systems are right for the right reasons and generalize to realistic forms of bias. Beyond these investigations, we also advance method development, designing methods that are simpler than prior approaches while matching or even surpassing the state of the art. Overall, the dissertation makes progress toward building bias-resilient systems and delineates promising directions for future research in the field.

Library of Congress Subject Headings

Neural networks (Computer science); Deep learning (Machine learning); Robust control

Publication Date

12-8-2023

Document Type

Dissertation

Student Type

Graduate

Degree Name

Imaging Science (Ph.D.)

Department, Program, or Center

Chester F. Carlson Center for Imaging Science

College

College of Science

Advisor

Christopher Kanan

Advisor/Committee Member

Qi Yu

Advisor/Committee Member

Nathan Cahill

Campus

RIT – Main Campus

Plan Codes

IMGS-PHD