Recognition of Acoustic Events Using Masked Conditional Neural Networks.

Fady Medhat, David Chesmore, John Robinson

Automatic feature extraction using neural networks has achieved remarkable success for images, but for sound recognition these models are usually modified to fit the nature of the multi-dimensional temporal representation of the audio signal in spectrograms. This may not efficiently harness the time-frequency representation of the signal. The Conditional Neural Network (CLNN) takes into consideration the interrelation between temporal frames, and the Masked Conditional Neural Network (MCLNN) extends the CLNN by enforcing systematic sparseness over the network's weights using a binary mask. The masking allows the network to learn about frequency bands rather than individual bins, mimicking a filterbank used in signal transformations such as MFCC. Additionally, the mask is designed to consider various combinations of features, which automates the feature hand-crafting process. We applied the MCLNN to the Environmental Sound Recognition problem using the Urbansound8k, YorNoise, ESC-10 and ESC-50 datasets. The MCLNN has achieved competitive performance compared to state-of-the-art Convolutional Neural Networks and hand-crafted feature approaches.
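The band-limited masking described above can be illustrated with a minimal sketch. The function, parameter names (`bandwidth`, `overlap`), and the band-placement rule below are illustrative assumptions, not the paper's exact mask construction: each hidden unit is assigned a contiguous band of input frequency bins, with successive bands shifted so that neighbouring bands overlap, and the binary mask is applied element-wise to the weight matrix.

```python
import numpy as np

def band_mask(n_in, n_hidden, bandwidth, overlap):
    """Binary mask giving each hidden unit a contiguous band of input bins.

    Band start positions shift by (bandwidth - overlap) per hidden unit,
    mimicking overlapping filterbank bands. Illustrative scheme only; the
    MCLNN paper defines its own mask pattern.
    """
    mask = np.zeros((n_in, n_hidden), dtype=np.float32)
    shift = bandwidth - overlap
    for j in range(n_hidden):
        start = (j * shift) % n_in
        for k in range(bandwidth):
            mask[(start + k) % n_in, j] = 1.0  # wrap around the bin axis
    return mask

# Element-wise masking of the weight matrix enforces the systematic sparseness:
# each hidden unit only connects to the bins inside its band.
rng = np.random.default_rng(0)
W = rng.standard_normal((40, 10)).astype(np.float32)   # 40 bins, 10 hidden units
masked_W = W * band_mask(40, 10, bandwidth=8, overlap=4)
```

In training, the mask stays fixed while the underlying weights are learned, so the band structure is preserved throughout optimization.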
