The discovery of adversarial examples, data points that are easily recognized by humans but that fool artificial classifiers with ease, is relatively new in the world of machine learning. Corruptions imperceptible to the human eye are often sufficient to fool state-of-the-art classifiers. Resolving this problem has been the subject of a great deal of research in recent years as Deep Neural Networks become more prevalent in everyday systems. To this end, we propose InfoMixup, a novel method to improve the robustness of Deep Neural Networks without significantly degrading performance on clean samples. Our work focuses on the domain of image classification, a popular target in contemporary literature due to the proliferation of Deep Neural Networks in modern products. We show that our method achieves state-of-the-art improvements in robustness against a variety of attacks under several measures.

Library of Congress Subject Headings

Image processing--Digital techniques; Machine learning; Digital images--Classification; Neural networks (Computer science)

Publication Date


Document Type


Student Type


Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)


Advisor

Cory E. Merkel

Advisor/Committee Member

Andres Kwasinski

Advisor/Committee Member

Matthew Wright


Campus

RIT – Main Campus

Plan Codes