Spiking models of auditory perception

In classical connectionism, information is conveyed by the firing rate of neurons. Spiking neuron models offer a dimension beyond rate: synchrony. Synchronous spike trains are more effective than uncorrelated ones at driving the responses of target neurons. Because neurons can encode their inputs in sequences of precisely timed spikes, input similarity translates into synchronous spiking, which downstream neurons can easily detect. The dual properties of synchronization and coincidence detection lead to a new computing paradigm, in which neurons perform a similarity operation rather than a summation. Because synaptic plasticity favors correlated groups of neurons, synchrony-based computation should play an important role in developed neural circuits.

Neural correlations have been demonstrated in early sensory systems, but their computational role remains unclear. In auditory perception, the fine temporal structure of sounds is thought to play an important role, in particular for pitch perception and the spatial localization of sounds. It has long been proposed that the auditory system exploits the structure of neural correlations to infer information about these properties, but how this computation is physiologically implemented remains unknown.

In this project, I propose to investigate synchrony-based computation and learning in the auditory system, using computational neural modeling. The expected impact of the project is 1) the development of spike-based neural network theory, 2) a better understanding of the role of neural synchronization in auditory perception, and 3) industrial applications (music transcription, auditory scene analysis) and medical applications (stimulation procedures for cochlear implants) based on neural simulation technology.
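The claim that synchronous spikes drive a target neuron more effectively than the same spikes spread out in time can be illustrated with a minimal sketch (this is a hypothetical toy model, not the project's actual simulations): a leaky integrate-and-fire neuron receives ten input spikes, either all at once or evenly spread over time, and only the synchronous input reaches threshold.

```python
# Toy illustration (assumed parameters, not from the project): a leaky
# integrate-and-fire (LIF) neuron receiving ten identical input spikes,
# either synchronized or spread out in time.
import numpy as np

def lif_spike_count(input_spike_times, dt=0.1, T=100.0,
                    tau=10.0, v_thresh=1.0, v_reset=0.0, w=0.3):
    """Count output spikes of an LIF neuron driven by pulse inputs.

    Each input spike instantly increments the membrane potential by w;
    between inputs the potential decays with time constant tau (ms).
    """
    steps = int(T / dt)
    drive = np.zeros(steps)            # summed input weight per time bin
    for t in input_spike_times:
        drive[int(t / dt)] += w
    v, spikes = 0.0, 0
    for i in range(steps):
        v += -v * (dt / tau) + drive[i]   # Euler step of leaky integration
        if v >= v_thresh:                  # threshold crossing -> spike
            spikes += 1
            v = v_reset
    return spikes

# Ten presynaptic spikes, same total input in both conditions
sync = [50.0] * 10                      # perfectly synchronous
desync = list(np.linspace(10, 90, 10))  # same spikes, spread out

print(lif_spike_count(sync))    # prints 1: synchronous input crosses threshold
print(lif_spike_count(desync))  # prints 0: dispersed input decays subthreshold
```

With these parameters the dispersed inputs never accumulate past roughly half of threshold, because the membrane leak erases each contribution before the next arrives; this is the sense in which a coincidence-detecting neuron responds to input similarity (here, temporal coincidence) rather than to a plain sum of inputs.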

PI: Romain Brette

Grant period: 01/09/2009 - 31/10/2014