Current methods of speech intelligibility estimation rely on the subjective judgements of trained listeners. Accurate and unbiased intelligibility estimates are subject to a number of procedural and methodological constraints, including the need for large pools of listeners and a wide variety of stimulus materials. Recent research findings, however, have shown a strong relationship between speech intelligibility estimates and selected acoustic speech parameters that appear to determine the intelligibility of speech. These findings suggest that such acoustic speech parameters could be used to derive computer-based speech intelligibility estimates, obviating the procedural and methodological constraints typically associated with such estimates. The relationship between speech intelligibility estimates and acoustic speech parameters is complex and nonlinear in nature. Artificial neural networks have proven capable, in general speech recognition, of dealing with complex and unspecified nonlinear relationships. The purpose of this study was to explore the possibility of using artificial neural networks to make speech intelligibility estimates. Sixty hearing-impaired speakers, whose measured speech intelligibility ranged from 0 to 99%, served as subjects in this study. In addition to measuring speech intelligibility, the speech of these subjects was digitally analyzed to obtain six acoustic speech parameters that have been found to critically differentiate English phonemes. The subjects were divided into two subgroups. One subgroup was used to train a variety of back-propagation neural networks, and the other was used to test the ability of the neural networks to make accurate speaker-independent speech intelligibility estimates. The artificial neural network that proved most efficacious for making speaker-independent speech intelligibility estimates employed a bipolar squash function and scaled values of the speech parameters.
Compared to listener judgements, the overall accuracy of the network's speech intelligibility estimates was a respectable 83%. These findings suggest that with expanded subject populations and more acoustic speech parameters, it might be possible to create a practical computer-based tool capable of objectively determining speech intelligibility.
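The general recipe the abstract describes can be sketched as follows. This is an illustrative sketch only: the thesis does not publish its exact network topology, learning rate, or training data here, so the layer sizes, synthetic data, and parameter names below are assumptions. It shows a back-propagation network whose hidden layer uses a bipolar squash function (tanh, range −1 to 1), fed min-max-scaled acoustic parameters, regressing an intelligibility score between 0 and 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_bipolar(x, lo, hi):
    """Min-max scale each acoustic parameter into the bipolar range [-1, 1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# Synthetic stand-in data: 60 "speakers" with 6 acoustic parameters each,
# and an intelligibility score (0..1) made a noisy function of the parameters.
X_raw = rng.uniform(0.0, 100.0, size=(60, 6))
y = np.clip(X_raw.mean(axis=1) / 100.0 + rng.normal(0, 0.05, 60), 0, 1)
y = y.reshape(-1, 1)

lo, hi = X_raw.min(axis=0), X_raw.max(axis=0)
X = scale_bipolar(X_raw, lo, hi)

# One hidden layer with tanh (the bipolar squash function); a sigmoid
# output unit keeps the predicted intelligibility inside (0, 1).
W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                      # bipolar squash
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, out

lr = 0.1
losses = []
for epoch in range(2000):
    h, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))       # mean-squared error
    # Back-propagate: sigmoid derivative at the output,
    # tanh derivative (1 - h^2) at the hidden layer.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)
```

Scaling the inputs into the same bipolar range as the squash function keeps the hidden units away from their saturated regions early in training, which is one plausible reason the scaled-input, bipolar-activation configuration performed best in the study.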

Library of Congress Subject Headings

Automatic speech recognition--Research; Speech, Intelligibility of--Evaluation--Data processing; Neural networks (Computer science)

Publication Date


Document Type


Department, Program, or Center

Computer Science (GCCIS)


Not Listed


Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works. Physical copy available through The Wallace Library at RIT: TK7895.S65 K574 1990


RIT – Main Campus