Abstract

Biological neurons are known to be far more energy efficient than modern computers, with the human brain consuming only about 20 watts. An Artificial Neural Network (ANN) can take hours or days to train on high-performance, energy-hungry GPUs, so networks that achieve similar performance at a fraction of the energy cost are highly sought after. Spiking Neural Networks (SNNs), which mimic biological neurons more closely than ANNs, are a potential solution to the energy problem in neural networks. By converting an ANN to an SNN and applying energy-saving methods and architectures, the resulting SNN can achieve similar performance at a fraction of the energy cost. Artificial Neurovascular Networks (ANVNs), which mimic the vascular system's distribution of energy to neurons, aim to reduce the energy consumption of ANNs and SNNs. The goal of this thesis is to reduce the energy consumption of SNNs by reducing their spiking activity while maximizing accuracy. First, simple preliminary methods to decrease spiking activity in the SNN at no cost to accuracy were explored, with mediocre results. Next, an ANVN was designed and implemented to be trained both sequentially and simultaneously with an ANN (later converted to an SNN) on the MNIST and CIFAR-10 datasets. The simultaneously trained ANVN reached baseline accuracy with fewer average spikes per inference, and both variants outperformed the simple methods for reducing network activity. The simultaneously trained ANVN was then augmented with a root bank and leaf reservoir, and the resulting SNN achieved even fewer average spikes per inference at baseline accuracy. Lastly, the simulation time was shortened to determine whether the ANVN remained effective. Overall, the ANVN method achieved fewer average spikes per inference at baseline accuracy than the simple methods (weight scaling and the preliminary work). It also demonstrated a tradeoff in which accepting a small loss in accuracy relative to the baseline further decreased the average spikes per inference significantly.
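For context on the weight-scaling baseline the abstract compares against: ANN-to-SNN conversion commonly normalizes each layer's weights by that layer's peak activation on a small calibration batch, so that spiking rates stay within a usable range. Below is a minimal sketch of this standard data-based normalization step (in the style of Diehl et al., 2015); the function name, the calibration inputs, and the simple per-layer maximum are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def normalize_weights(weights, activations):
    """Data-based weight normalization for ANN-to-SNN conversion.

    weights[l]     -- weight matrix of ANN layer l
    activations[l] -- layer l's outputs on a small calibration batch

    Each layer is rescaled by the ratio of the previous layer's peak
    activation to its own, so post-conversion firing rates stay bounded.
    """
    scaled, prev_max = [], 1.0
    for W, A in zip(weights, activations):
        layer_max = float(np.max(A))           # peak activation of this layer
        scaled.append(W * prev_max / layer_max)
        prev_max = layer_max
    return scaled
```

After normalization, firing thresholds can be held fixed and each spiking neuron's rate roughly approximates the corresponding ReLU activation; scaling weights further down lowers spike counts at the cost of accuracy, which is the kind of tradeoff the abstract describes.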

Library of Congress Subject Headings

Neural networks (Computer science)--Energy consumption; Machine learning; Computer architecture

Publication Date

12-2024

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering

College

Kate Gleason College of Engineering

Advisor

Cory Merkel

Advisor/Committee Member

Sathwika Bavikadi

Advisor/Committee Member

Alexander Ororbia

Campus

RIT – Main Campus

Plan Codes

CMPE-MS
