"Training wheels for encoder networks," Proceedings of the 1996 Soft Computing (SOCO '96) conference, The International Computer Science Conventions. Held at the University of Reading, Whiteknights, Reading, England, 26-28 March 1996.

Abstract

We develop a new approach to training encoder feed-forward neural networks and apply it to two classes of problems. Our approach is to initially train the network on a related, relatively easy-to-learn problem, and then gradually replace the training set with harder problems until the network learns the problem we originally intended to solve. The problems we address are modifications of the common N-2-N encoder network problem with N exemplars, the unit vectors e_k in N-space. Our first modification is to use objects consisting of paired 1's (e_k + e_{k+1}, with subscripts taken mod N). This requires an N-2-N net to organize the images of the exemplars in 2-space, ordered around a circle. Our second modification is to use patterns consisting of two objects, each a pair of adjacent 1's, where the two objects must be separated from each other. This problem can be learned by an N-4-N network, which must organize the images of the exemplars in 4-space in the form of a Möbius strip. The easy-to-learn problem in both cases replaces the two-ones signal e_k + e_{k+1} with a block signal of length B: e_k + e_{k+1} + ... + e_{k+B-1}. In several cases, our method allowed us to train networks that otherwise fail to train; in some other cases, it proved to be ten times faster than training without it.
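The training-wheels schedule described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the names `block_signals`, `Encoder`, and `train_with_wheels`, the network sizes, and all hyperparameters (learning rate, epochs per stage) are assumptions. The sketch builds the block-signal exemplars e_k + ... + e_{k+B-1} (indices mod N) and trains an N-2-N sigmoid autoencoder while the block length B shrinks from an easy starting value down to the hard two-ones problem (B = 2).

```python
import numpy as np

def block_signals(N, B):
    """Exemplars: B adjacent 1's starting at position k, indices taken mod N."""
    X = np.zeros((N, N))
    for k in range(N):
        X[k, [(k + j) % N for j in range(B)]] = 1.0
    return X

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Encoder:
    """N-2-N feed-forward autoencoder trained by full-batch gradient descent."""
    def __init__(self, N, hidden=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (N, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, N))
        self.b2 = np.zeros(N)

    def forward(self, X):
        H = sigmoid(X @ self.W1 + self.b1)   # hidden images of the exemplars
        Y = sigmoid(H @ self.W2 + self.b2)   # reconstruction
        return H, Y

    def train(self, X, epochs, lr=0.2):
        for _ in range(epochs):
            H, Y = self.forward(X)
            # MSE loss; backpropagate through both sigmoid layers
            dY = (Y - X) * Y * (1.0 - Y)
            dH = (dY @ self.W2.T) * H * (1.0 - H)
            self.W2 -= lr * H.T @ dY
            self.b2 -= lr * dY.sum(axis=0)
            self.W1 -= lr * X.T @ dH
            self.b1 -= lr * dH.sum(axis=0)
        return float(np.mean((Y - X) ** 2))

def train_with_wheels(N=8, B_start=4, epochs_per_stage=500):
    """Curriculum: start with wide blocks (easy), shrink B down to 2 (hard)."""
    net = Encoder(N)
    for B in range(B_start, 1, -1):          # B_start, B_start-1, ..., 2
        loss = net.train(block_signals(N, B), epochs_per_stage)
    return net, loss
```

The same scaffolding would apply to the two-object N-4-N problem by swapping in a pattern generator for separated pairs and widening the hidden layer to 4.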

Publication Date

Many of the training-wheels experiments were first investigated by my graduate students: David Cox [2], Kathy Rainero [7], and Sanjay Raghavendra [6]. Jeff Pink [5] experimented with the adaptive learning rate technique and introduced me to it. Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works in February 2014.

Document Type


Department, Program, or Center

Chester F. Carlson Center for Imaging Science (COS)


RIT – Main Campus