We present a novel training algorithm for a feed-forward neural network with a single hidden layer of nodes (i.e., two layers of connection weights). Our algorithm can train networks for hard problems, such as the classic two-spirals problem. The weights in the first layer are determined using a quasirandom number generator. These weights are frozen---they are never modified during the training process. The second layer of weights is trained as a simple linear discriminator using methods such as the pseudo-inverse, optionally refined by iteration. We also study the problem of reducing the hidden layer: pruning low-weight nodes and a genetic-algorithm search for good subsets of nodes.
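The scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the choice of a Halton sequence for the quasirandom generator, tanh hidden units, and the weight-scaling constant are all assumptions for the sake of a runnable example. The frozen first layer maps inputs into a fixed hidden representation; the second layer is then a linear least-squares fit computed with the pseudo-inverse.

```python
import numpy as np

def halton(n, dims):
    """Generate n points of the Halton quasirandom sequence in [0,1)^dims.

    (The Halton sequence is one common quasirandom generator; the paper
    does not specify which generator it uses.)
    """
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    out = np.empty((n, dims))
    for d in range(dims):
        base = primes[d]
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:                 # radical-inverse of k in this base
                f /= base
                r += f * (k % base)
                k //= base
            out[i, d] = r
    return out

def train(X, y, n_hidden=64, scale=4.0):
    """Freeze quasirandom first-layer weights; solve the second layer."""
    d = X.shape[1]
    # First layer (weights + bias): quasirandom, mapped from [0,1) to
    # [-scale/2, scale/2); these are never modified afterwards.
    W1 = scale * (halton(n_hidden, d + 1) - 0.5)
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append bias input
    H = np.tanh(Xb @ W1.T)                          # frozen hidden layer
    Hb = np.hstack([H, np.ones((len(H), 1))])       # append hidden bias
    # Second layer: linear discriminator via the pseudo-inverse
    # (min-norm least-squares solution).
    W2 = np.linalg.pinv(Hb) @ y
    return W1, W2

def predict(X, W1, W2):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Hb = np.hstack([np.tanh(Xb @ W1.T), np.ones((len(X), 1))])
    return Hb @ W2
```

Because only the second layer is trained, the whole procedure reduces to one linear solve; on a toy problem such as XOR, the sign of the linear output recovers the class labels. Pruning low-weight hidden nodes, as studied in the paper, would amount to dropping columns of the hidden matrix and re-solving.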

Date of creation, presentation, or exhibit



"Using Quasirandom Numbers in Neural Networks," Proceedings of the International ICSC Symposium on Fuzzy Logic (1995), A50-A56. ICSC Academic Press. Held at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, May 26-27, 1995. The authors wish to thank Alex Mirzaoff and Eastman Kodak Company for support of this project. ISBN: 390-64-5400-2

Document Type

Conference Paper

Department, Program, or Center

Chester F. Carlson Center for Imaging Science (COS)


RIT – Main Campus