Analysing auditory representations for sound classification with self-organising neural networks

Spevak C.; Polfreman R.
DAFx-2000 - Verona
Three different auditory representations—Lyon's cochlear model, Patterson's gammatone filterbank combined with Meddis' inner hair cell model, and mel-frequency cepstral coefficients (MFCC)—are analysed in connection with self-organising maps to evaluate their suitability for a perceptually justified classification of sounds. The self-organising maps are trained with a uniform set of test sounds preprocessed by the auditory representations. The structure of the resulting feature maps and the trajectories of the individual sounds are visualised and compared with one another. While the MFCC representation proved to be very efficient, the gammatone model produced the most convincing results.
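The pipeline described above—preprocess each sound into feature frames, train a self-organising map on the pooled frames, then trace each sound's trajectory of best-matching units across the map—can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the grid size, decay schedule, and the random toy data standing in for preprocessed frames (e.g. MFCC vectors) are assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organising map on feature frames.

    data: (n_frames, n_features) array of preprocessed sound frames.
    Returns the trained weight grid of shape (*grid, n_features).
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1])) * 0.1
    # Node coordinates, used for the Gaussian neighbourhood function.
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Exponentially decaying learning rate and neighbourhood radius.
            frac = step / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            sigma = sigma0 * np.exp(-3.0 * frac)
            # Nodes near the BMU are pulled toward the input frame x.
            dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            g = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * g * (x - weights)
            step += 1
    return weights

def trajectory(weights, frames):
    """Map each frame of a sound to its BMU, giving a path across the map."""
    h, w, _ = weights.shape
    return [np.unravel_index(
        np.argmin(np.linalg.norm(weights - f, axis=-1)), (h, w))
        for f in frames]

# Toy stand-in for two preprocessed test sounds (hypothetical 13-dim frames).
rng = np.random.default_rng(1)
sound_a = rng.normal(-1.0, 0.3, size=(40, 13))
sound_b = rng.normal(+1.0, 0.3, size=(40, 13))
som = train_som(np.vstack([sound_a, sound_b]))
path_a, path_b = trajectory(som, sound_a), trajectory(som, sound_b)
# Perceptually distinct sounds should trace paths in distinct map regions.
```

Comparing such trajectories for sounds preprocessed by each of the three auditory front-ends is the basis on which the representations are evaluated.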