The enormous success and popularity of deep convolutional neural networks for object detection have prompted their deployment in many real-world applications. However, their performance in the presence of hardware faults or damage that could occur in the field has not been well studied. This thesis explores the resiliency of six popular network architectures for image classification, AlexNet, VGG16, ResNet, GoogleNet, SqueezeNet and YOLO9000, when subjected to various degrees of failure. We introduce failures into a deep network by dropping a percentage of the weights at each layer, and then assess the effects of these failures on classification performance. We estimate the fitness of each weight and drop weights in order from least fit to most fit. Finally, we determine the ability of the network to self-heal and recover its performance by retraining its healthy portions after partial damage. We retrain the healthy portion using different optimizers and measure the time and resources required for retraining. We also reduce the number of parameters in GoogleNet and VGG16 to the size of SqueezeNet and retrain them with varying percentages of the dataset; this procedure can also serve as a network pruning method.
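The layer-wise weight-dropping procedure described above can be sketched as follows. This is a minimal NumPy illustration, not the thesis's implementation: it assumes weight magnitude as the "fitness" score (the thesis may use a different criterion), and the function name and signature are hypothetical.

```python
import numpy as np

def drop_weights(layer_weights, fraction):
    """Zero out a fraction of the least-fit weights in each layer.

    Assumption: absolute magnitude stands in for the fitness measure,
    so the smallest-magnitude weights are dropped first.
    """
    damaged = []
    for w in layer_weights:
        flat = w.flatten()  # flatten() returns a copy; originals are untouched
        k = int(fraction * flat.size)
        if k > 0:
            # indices of the k least-fit (smallest-magnitude) weights
            idx = np.argsort(np.abs(flat))[:k]
            flat[idx] = 0.0
        damaged.append(flat.reshape(w.shape))
    return damaged
```

Applying the function at increasing `fraction` values (e.g. 0.1, 0.2, ..., 0.9) and re-evaluating classification accuracy after each pass would reproduce the kind of degradation sweep the abstract describes.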

Library of Congress Subject Headings

Neural networks (Computer science); Convolutions (Mathematics); Optical pattern recognition; Image processing; Fault-tolerant computing; Self-organizing systems

Publication Date


Document Type


Student Type


Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)


Advisor

Andreas Savakis

Advisor/Committee Member

Andres Kwasinski

Advisor/Committee Member

Dhireesha Kudithipudi


Campus

RIT – Main Campus

Plan Codes