Victor Benichoux 1,2 and Romain Brette 1,2*

1 Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France
2 Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France

The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.

The frequency analysis performed by the cochlea is often modeled by a bank of band-pass filters (Patterson, 1994; Irino and Patterson, 2001; Lopez-Poveda and Meddis, 2001; Zilany and Bruce, 2006). These models derive from physiological measurements in the basilar membrane (Recio et al., 1998) or in the auditory nerve (Carney et al., 1999), and/or from psychophysical measurements (e.g., detection of tones in noise maskers; Glasberg and Moore, 1990), and even though existing models share key ingredients, they differ in many details. Models of auditory processing are used in a variety of contexts: in psychophysical studies, to design experiments (Gnansia et al., 2009) and interpret behavioral results (Meddis and O’Mard, 2006; Jepsen et al., 2008; Xia et al., 2010); in computational neuroscience, to understand the auditory system with neural modeling (Fontaine and Peremans, 2009; Goodman and Brette, 2010; Xia et al., 2010); and in engineering applications, as a front end to machine hearing algorithms (Lyon, 2002), for example speech recognition (Mesgarani et al., 2006) or sound localization (May et al., 2011).
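The vectorization idea described in the abstract can be made concrete with a few lines of NumPy. When a filterbank is simulated sample by sample, the only loop that has to run in the interpreter is the loop over samples; the state update of all frequency channels at a given sample is a single vectorized array operation, so the interpretation overhead is shared by thousands of channels. The sketch below is illustrative only and is not the Brian Hears implementation: the function name `bandpass_filterbank` and the choice of a simple two-pole resonator per channel are assumptions made for this example.

```python
import numpy as np

def bandpass_filterbank(sound, fs, center_freqs, bandwidths):
    """Apply a bank of second-order band-pass resonators to a mono sound,
    updating all channels at each sample with vectorized NumPy operations.

    Illustrative sketch only: the resonator design and the function name
    are assumptions, not the filters or API used by Brian Hears.
    """
    # One coefficient per channel, computed once (vectors of length n_channels)
    r = np.exp(-np.pi * bandwidths / fs)          # pole radius from bandwidth
    theta = 2.0 * np.pi * center_freqs / fs       # pole angle from center frequency
    a1 = -2.0 * r * np.cos(theta)
    a2 = r ** 2
    b0 = 1.0 - r                                  # rough gain normalization

    out = np.zeros((len(sound), len(center_freqs)))
    y1 = np.zeros(len(center_freqs))              # y[n-1] for every channel
    y2 = np.zeros(len(center_freqs))              # y[n-2] for every channel
    for n, x in enumerate(sound):
        # A single vectorized expression updates all channels for this sample,
        # so the interpreted loop runs over samples only, never over channels.
        y = b0 * x - a1 * y1 - a2 * y2
        out[n] = y
        y2, y1 = y1, y
    return out

# Usage: 3000 channels spanning roughly the range of human hearing (20 Hz to 20 kHz)
fs = 44100.0
t = np.arange(0, 0.05, 1.0 / fs)                           # 50 ms of signal
sound = np.sin(2.0 * np.pi * 1000.0 * t)                   # 1 kHz test tone
cfs = np.logspace(np.log10(20.0), np.log10(20000.0), 3000)
responses = bandpass_filterbank(sound, fs, cfs, bandwidths=0.1 * cfs)
print(responses.shape)                                     # (2205, 3000)
```

With this structure the Python-level work grows with the number of samples but not with the number of channels, which is why a 3000-channel filterbank remains practical in an interpreted language. For comparison, defining a filterbank through the Brian Hears interface itself looks roughly like the following. This sketch follows the publicly documented quick-start of the `brian2hears` package (the current packaging of Brian Hears); the names `Gammatone`, `erbspace`, `whitenoise`, and `process` are taken from that documentation rather than from the article, so treat them as assumptions about the exact API.

```python
# Sketch based on the documented brian2hears quick-start, not on code from the article.
from brian2 import Hz, kHz, ms
from brian2hears import Gammatone, erbspace, whitenoise

sound = whitenoise(100 * ms)              # 100 ms of white noise as input
cf = erbspace(20 * Hz, 20 * kHz, 3000)    # 3000 center frequencies on an ERB scale
fb = Gammatone(sound, cf)                 # gammatone filterbank, one filter per channel
output = fb.process()                     # array of shape (samples, channels)
```

In both sketches the per-channel update is independent of the other channels, which is also what lets the computation map naturally onto graphics processing units, with channels distributed across parallel threads.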