ASA Researchers

I have listed, below, the names and areas of research of some men and women whose interests relate in some way to auditory scene analysis, and have grouped them under several topic headings. Within each topic, the list is alphabetical by surname of researcher. Because researchers often move from one institution or organization to another, I haven’t included their current affiliations, although much of the descriptive material is adapted from their current websites. Each entry begins with a search expression, enclosed in angle brackets, consisting of the name of the researcher and one or more terms. The expression can be pasted into a search engine to find up-to-date information on that researcher (the expressions were tested on Google in September, 2010). [Note: Google ignores the commas and angle brackets in these expressions.]

If you would like me to make any corrections to your entry or would like me to remove it, please contact me at: al(dot)Bregman(at)mcgill(dot)ca

If you are doing research in an area related to auditory scene analysis and would like your name and research topics included, please send me an e-mail with your name and a description of your research, in the format shown below.

Auditory perception, psychophysics, hearing

< Bob Carlyon, auditory perception >

Carlyon's psychoacoustic research has spanned a wide range of topics in human hearing, e.g., perceptual segregation of concurrent sounds (grouping and streaming) and the effects of attention on auditory streaming. His research has most recently been focused on the problem of how we can listen to one voice in the presence of interfering sounds, such as other talkers. It incorporates behavioural and electrophysiological experiments with normal-hearing listeners, and studies of hearing by deaf patients fitted with a cochlear implant. He uses the resulting knowledge to study ways in which we can improve speech understanding by people with hearing loss, and, in particular, by deaf people who have been fitted with a cochlear implant.


< Laurent Demany, auditory psychophysics >

Demany is a psychophysicist interested in the temporal aspects of auditory perception and memory: auditory scene analysis (e.g., perceptual fusion of tones separated by an octave); pitch change detection; detection of continuous versus discrete frequency changes; the perceptual binding of successive sounds; the speed of perception of pitch; rhythm perception; perception of frequency peaks and troughs in wide frequency modulations; the perceptual consequences of cochlear damage; the role of memory in auditory perception; and the role of attention in auditory memory.


< Pierre Divenyi, auditory perception >

Divenyi has done research in the psychoacoustics of temporal processing, pattern perception, localization, the precedence effect, and auditory scene analysis, and has also studied the perceptual consequences of the dynamics of speech. His interest in the "cocktail-party effect" has led him to investigate auditory functions that correlate with its loss in elderly individuals. He is the editor of two recent books, Speech Separation by Humans and Machines and Dynamics of Speech Production and Perception. He was the organizer and director of a NATO Advanced Study Institute and of several international and interdisciplinary symposia and workshops.


< Brian R. Glasberg, auditory perception >

Glasberg has studied the perception of sound in both normally hearing and hearing-impaired people. He also works on the development and evaluation of hearing aids, especially digital hearing aids. He has done a great deal of research on the shape of the filters in the human auditory system and their role in the masking of various types of sound in people with normal hearing and with cochlear impairment, as well as on the effects of masker interaural correlation in binaural comodulation masking release.


< Joseph Hall, auditory >

Hall's research has included these topics: across-channel spectral processing; comodulation masking release (upon which he and his colleagues did the pioneering research); spectro-temporal processing in normal-hearing and cochlear hearing-impaired adults and in children; informational masking release; and the spectral integration of speech bands.


< Stephen Handel, hearing >

Handel is interested in principles of organization that affect vision, hearing of environmental sounds, and speech perception, and has published two important books about sound, Listening: An Introduction to the Perception of Auditory Events (1993) and Perceptual Coherence: Hearing and Seeing (2006). He has also done empirical research on the perception and segmentation of rhythmic patterns in audition and in other modalities.


< Ervin R. Hafter, auditory space perception >

Hafter does psychophysical research with the goal of modeling processes of hearing, with a focus on binaural processing. He has studied the neural enhancement of acoustic onsets, binaural adaptation, source segregation and its role in auditory distance perception, auditory scene analysis and cancellation of echoes. He also studies the interaction of auditory and visual cues, how space perception is dominated by visual cues, and how attention is divided between simultaneous auditory and visual signals.


< Bob Lutfi, auditory perception >

Lutfi is particularly interested in how one's ability to detect and recognize complex sounds is affected by both lawful and random variations in sound, as occur in nature. A goal of his research is the development of mathematical models for predicting detection and recognition performance by listeners under various conditions of signal uncertainty in the presence of masking sounds. He also conducts research on the perception of complex sounds by hearing-impaired listeners, and on the auditory abilities of children. He has worked on sound source identification (e.g., auditory detection of hollowness), acoustic cues for auditory motion, computational auditory scene analysis, and auditory masking.


< Josh McDermott, auditory perception >

McDermott studies computational audition, auditory scene analysis, natural sound statistics, and music perception. He conducts experiments on human listeners, using results from computational audio to motivate new experimental work and using experimental results to develop new algorithms for processing sound. Recent work has centered on sound segregation and audio representation, particularly on how the brain segregates and represents real-world sounds.


< Brian C.J. Moore, auditory perception >

Moore studies the following topics: mechanisms of normal hearing and hearing impairments; relationship of auditory abilities to speech perception; design of signal processing hearing aids for sensorineural hearing loss; fitting of hearing aids to suit the individual; electrical stimulation as a means of restoring hearing to the totally deaf; design and specification of high-fidelity sound-reproducing equipment; development of models of auditory perception, especially loudness perception. His textbook, An Introduction to the Psychology of Hearing, now in its 5th edition (as of Sept. 2010), is the standard to which all other textbooks on hearing must be compared.


< Yoshitaka Nakajima, auditory perception >

Nakajima's principal contribution to our understanding of how sound is mentally organized is his development of a "grammar" (i.e., the cognitive definition of a well-formed auditory event) that applies to simple auditory events such as tones. He has used, as evidence for the theory, new auditory illusions that he and his research colleagues have discovered, such as (a) the illusory lengthening of tones preceded by a noise burst, and (b) the "gap transfer" illusion. He has also studied time perception, such as the effects of extraneous events on the perception of the duration of a silent interval defined by auditory markers.


< Andrew Oxenham, auditory perception >

Oxenham has studied (a) the relation between speech reception in complex backgrounds and psychoacoustic measures of cochlear nonlinearity, (b) the role of peripheral auditory filter bandwidth in the detection of a sinusoidal signal in a complex tone, (c) temporal models of pitch perception, (d) the role of F0 cues in segregating voices, as revealed by a noise-vocoder technique that simulates cochlear-implant processing, (e) the nature of the different mechanisms responsible for deriving pitch from low-order resolved harmonics and high-order unresolved ones, and (f) estimates of human cochlear tuning at low levels.


< Daniel Pressnitzer, hearing >

Pressnitzer's research is focused on perceptual organization in hearing, with a special interest in time: the temporal structure of sound and sound scenes, and the neural bases of their perception. Current projects include investigations of auditory memory, comparison of perceptual bistability in the auditory and visual modalities, music perception with cochlear implants, recognition of natural sound sources, computational models of hearing based on spike timing information, and comparisons of auditory change detection with visual change blindness.


< Brian Roberts, auditory perception >

Roberts carries out research on auditory scene analysis (ASA) and grouping. His research employs a variety of psychophysical techniques to investigate the cues used by the human auditory system for ASA. Particular interests are the role of harmonic relations and other kinds of spectral pattern in the perceptual organization of concurrent acoustic elements, and the acoustic properties that determine the perceptual streaming of sequences of sounds. Also of interest are: the neural bases of auditory grouping phenomena; how the effects of wideband inhibition may produce patterns of behaviour confusable with those produced by more cognitive grouping mechanisms; auditory streaming in cochlear implant listeners; categorization and identification of sounds; and the constraints imposed by auditory grouping mechanisms on the perception of speech.


< Leon van Noorden, music and movement >

Van Noorden's 1975 doctoral thesis, Temporal coherence in the perception of tone sequences, was a seminal work in the development of our knowledge about the sequential and simultaneous integration of tones. Since then he has studied perception of the musical pulse, the "tempo map", the neuroscience of rhythm perception, musical cues for patients with Parkinson's disease, the perception and bodily expression of music (such as walking to music), and the development of synchronization skills in children.


< William A. Yost, auditory psychoacoustics >

Yost is former Director of the Parmly Hearing Institute, and former Director of the Interdisciplinary Neuroscience Minor at Loyola University Chicago. His studies have used psychophysical methods to investigate many aspects of localization and its role in auditory image perception and analysis [auditory scene analysis]: pitch perception, localization, the precedence effect, iterated rippled noise, pitch strength, the role of binaural processing in solving the "cocktail party problem", loudness recalibration, modulation detection interference, echo suppression and its breakdown in auditory processing of sound sources. He has also studied whether binaural processing is synthetic or analytic.


Auditory neuroscience

< Claude Alain, auditory scene analysis >

Alain does research in cognitive neuroscience, focusing on the brain processes that mediate perception and cognition of auditory patterns and events, specifically short-term memory and selective attention. He uses a combination of neuroimaging techniques (e.g., ERPs, MEG, and fMRI) to study how brain areas work together when attention is directed to a particular sound identity and/or sound location in the auditory field. He has developed a way to use a component of the event-related potential (ERP) to determine whether a multi-component sound is heard as a single fused sound or as two or more separate components.


< Albert S. Feng, auditory scene analysis animals >

Feng's current research focuses on determining the mechanisms underlying extraction of signals in complex auditory scenes, using the frog and bat auditory systems as models. Male frogs produce advertisement calls in large choruses, and females must localize and identify the callers based on the spectro-temporal characteristics of their vocalizations. Echolocating bats rely on analysis of echoes of their sonar emissions to determine the location and identity of objects along their flight paths, and to discriminate prey from obstacles, as well as stationary from moving objects. The current focus is on determining the roles of neural oscillation in time domain information processing. Dr. Feng is also active in translational research, e.g., advanced hearing aid technologies with the ability to extract sound embedded in noise, and biomolecular high-resolution cochlear implants.


< Yonatan I. Fishman, auditory cortex >

Fishman studies neural mechanisms of pitch perception and auditory scene analysis in primate auditory cortex. Parallel interests include translational research involving human clinical populations aimed at bridging explanatory and methodological gaps between neurophysiology of complex sound processing and auditory scene analysis in animal models and humans.


< Stephen Grossberg, neural models >

Grossberg studies bottom-up and top-down neural processes in audition, speech, and language. He develops brain models of vision and visual object recognition; audition, speech, and language; development; attentive learning and memory; cognitive information processing; reinforcement learning and motivation; cognitive-emotional interactions; navigation; sensory-motor control and robotics; and mental disorders. These models, ranging from perception to action, involve many parts of the brain and multiple levels of brain organization, from individual spikes and their synchronization to cognition. He also carries out analyses of the mathematical dynamics of neural systems and transfers biological neural models to applications in neuromorphic engineering and technology.


< Nina Kraus, auditory neuroscience >

Kraus studies the biological bases of speech and music, perception of speech in noise, musical experience, dyslexia, auditory training, neural plasticity, aging, and hearing in noise. She investigates the neurobiology underlying speech and music perception and learning-associated brain plasticity. She studies normal listeners throughout the lifespan, clinical populations (poor readers; autism; hearing loss), auditory experts (musicians) and an animal model. Her method of assessing the brain's encoding of sounds has been adapted as BioMARK (biological marker of auditory processing), a commercial product that helps educators and clinicians better diagnose learning disabilities.


< Adrian KC Lee, brain imaging >

Lee studies human brain imaging, auditory attention, auditory scene analysis, and neuroengineering. His research goals are: (1) to understand how normal listeners can seamlessly segregate sound in a multi-source environment, e.g., in a crowded restaurant; (2) to map the spatiotemporal dynamics of the cortical network involved in attending to and analyzing the different acoustical signals in the auditory scene; and (3) to combine neuroscience knowledge with state-of-the-art engineering approaches in order to design better assistive listening devices that can enable users to communicate more effectively in a "cocktail party" environment.


< Christophe Micheyl, auditory >

Micheyl studies phenomena of auditory perception and their neural basis, including: (a) the role of pitch relations and harmonicity in auditory scene analysis, (b) auditory stream segregation in humans, songbirds, and other animals, (c) separation of concurrent sounds, (d) the role of auditory cortex in the formation of auditory streams, (e) separation of concurrent complex tones, (f) the development of mathematical models of perception and of its relationship with neural response, (g) signal-detection-theory analyses, (h) perceptual learning, (i) tinnitus, (j) auditory after-effects, (k) the auditory continuity illusion, (l) hearing impairment and cochlear implants, (m) auditory efferents and otoacoustic emissions, and (n) perceptual correlates of central auditory plasticity.


< Kalle Palomäki, auditory neuroscience >

Palomäki studies spatial sound and speech in two ways: (1) magnetoencephalographic (MEG) brain measurements of spatial localization and speech perception, and (2) construction of computational auditory scene analysis models, which exploit spatial cues and other cues that are robust in reverberant environments. The MEG research has studied spatial stimuli in the auditory cortex: (a) processing of sound-source location with realistic spatial stimuli, (b) spatial processing of speech vs. non-speech stimuli, and (c) processing of a range of spatial location cues in the auditory cortex. In the auditory-modeling part of the work, he has constructed models for the recognition of speech in the presence of interference.


< Joel Snyder, auditory cognitive neuroscience >

Snyder investigates auditory and visual perception and cognition. Research questions include: How do listeners perceive objects and events in complex environments, such as a crowded party or a forest filled with other animals? Furthermore, how does the ability to perceive in such situations change during normal aging, and how is this ability impacted by mental illness? His research employs the measurement techniques of experimental psychology (perceptual judgments and sensory-motor tasks) and of cognitive neuroscience (event-related brain potentials, magnetoencephalography, and magnetic resonance imaging). Specific topics of research are: 1) auditory scene analysis, 2) perception and production of musical rhythm, 3) comparison of auditory and visual perception, 4) effects of aging on auditory processing, and 5) perceptual abnormalities in schizophrenia.


< Lucas Spierer, auditory plasticity >

Spierer studies auditory spatial processing, temporal order judgment, electrical neuroimaging, functional magnetic resonance imaging, transcranial magnetic stimulation, and neuropsychological investigations of post-lesional and learning-induced cortical plasticity.


< Elyse Sussman, auditory scene analysis brain >

Sussman's research is in the field of cognitive neuroscience and is focused on understanding the neural bases of auditory information processing in adults and children. Her research uses a combination of recordings of human brain activity – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – in conjunction with measures of behavioral performance, to specify the processes and brain structures that contribute to the organization, storage and perception of a coherent sound environment.


Biology of auditory scene analysis

< Cynthia Moss, bats auditory scene analysis >

Moss' research is directed at understanding auditory information processing and sensorimotor integration in vertebrates via the study of hearing and perceptually-guided behavior in the echolocating bat. The research combines acoustical, psychophysical, perceptual, computational and neurophysiological studies, with the goal of developing integrative theories on brain-behavior relations in animal systems. Current behavioral studies focus on the processing of dynamic acoustic signals for the perception of auditory scenes, and deal with the problem of auditory scene analysis. Current neurophysiological experiments focus on the functional organization of the bat's superior colliculus, a midbrain structure implicated in the coordination of multimodal sensory inputs and goal-directed motor behaviors.


< Richard R. Fay, fish auditory scene analysis >

Fay's research focuses on the mechanisms of the nervous system that synthesize perceptions of sound sources. He studies fish to investigate these questions because they have simple and primitive vertebrate auditory systems, and because he is able to carry out both behavioral (psychophysical) and single-cell neurophysiological experiments under comparable acoustic conditions in the laboratory. His experiments have investigated the perceptions and neural representations of pitch, timbre, temporal pattern, stream segregation, and sound source location in goldfish and toadfish. He has written on the evolution of hearing in vertebrates, both in terms of their inner ears and their processing of sound. Fay is also a series editor for the Springer Handbook of Auditory Research, published by Springer-Verlag, New York.


< Terry Takahashi, owl auditory neuroscience >

Takahashi's research is on the synthesis and use of the owl's auditory space map for localization and identification of concurrent sounds in cluttered acoustical environments. This includes studies that establish its independence of echo threshold and echo delay, its level of spatial acuity (approximating the resolving power of space-specific neurons), the role of head saccades, auditory spatial discrimination, the contribution of level-difference cues to spatial receptive fields in the barn owl's inferior colliculus, and other topics.


Computational auditory scene analysis (CASA)

Computational auditory scene analysis (CASA) is the attempt to program computers to solve the ASA problem. Since CASA is a growing field, the following list of researchers is not exhaustive, and new names will be added from time to time.


< Jon Barker, computational hearing >

Barker works on CHiME (Computational Hearing in Multisource Environments), an EPSRC project that aims to develop a framework for computational hearing in multisource environments (use the search expression "CHiME EPSRC Barker" to find it). The approach operates by exploiting two levels of processing that combine to simultaneously separate and interpret sound sources. The first level exploits the continuity of sound-source properties to clump the acoustic mixture into fragments of energy belonging to individual sources. The second level uses statistical models of specific sound sources to separate fragments belonging to the acoustic foreground (i.e., the 'attended' source) from fragments belonging to the background. CHiME will build a demonstration system simulating a speech-driven home-automation application operating in a noisy domestic environment.
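As a toy illustration of this two-level idea (my own sketch in Python, not the CHiME code, and with a made-up foreground model), level 1 below clumps contiguous high-energy time-frequency cells into fragments, and level 2 scores each fragment against a simple statistical model of the attended source:

import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
spec = rng.random((64, 200))               # stand-in for a log-magnitude spectrogram
spec[20:25, 50:120] += 2.0                 # an energetic region produced by some source

# Level 1: continuity-based clumping -- connected regions of dominant energy become fragments.
mask = spec > np.percentile(spec, 90)
fragments, n_fragments = label(mask)       # each fragment receives an integer id

# Level 2: a hypothetical foreground model -- a Gaussian over the frequency channels
# that the attended source is expected to occupy.
fg_mean, fg_std = 22.0, 4.0
def foreground_score(frag_id):
    freqs, _ = np.nonzero(fragments == frag_id)
    return np.mean(-((freqs - fg_mean) ** 2) / (2 * fg_std ** 2))

foreground_ids = [i for i in range(1, n_fragments + 1) if foreground_score(i) > -2.0]
print(n_fragments, "fragments found;", len(foreground_ids), "assigned to the foreground")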


< Guy J. Brown, computational auditory scene analysis >

Brown's main research interest is Computational Auditory Scene Analysis (CASA), which aims to build machine systems that mimic the ability of human listeners to segregate complex mixtures of sound, and to separate speech from background sound. He also has interests in reverberation-robust automatic speech recognition, auditory-motivated techniques for sonar signal processing and music technology.


< Alain de Cheveigné, computational auditory scene analysis >

De Cheveigné's main research interests are in hearing (pitch perception, auditory scene analysis) and speech processing (fundamental frequency estimation, voice separation). He belongs to the Perception et Cognition Musicale group of the Ircam-CNRS joint research unit, affiliated to both Ircam and CNRS (his employer). Much of his work has been done in Japan, mainly at ATR's Human Information Processing Research Laboratories. He contributed an important chapter, The Cancellation Principle in Acoustic Scene Analysis, to the book Speech Separation by Humans and Machines, edited by Pierre Divenyi.


< Daniel P. W. Ellis, CASA >

Ellis is the principal investigator of the Laboratory for Recognition and Organization of Speech and Audio (LabROSA) at Columbia University, which he founded in 2000. Research at the lab focuses on extracting information from sound in many domains and guises. Sound carries information; that is why we and other animals have evolved a sense of hearing. But the useful information in sound – what events are happening nearby, and where they are – can appear in a convoluted and variable form, and, worse still, is almost always mixed together with sound from other, simultaneous sources. The problem of extracting high-level, perceptually relevant information from sound is therefore complex, and it is the focus of Ellis's research. The goal of LabROSA is to develop and apply signal processing and machine learning techniques across the wide range of audio signals commonly encountered in daily life, including extracting many kinds of information from speech signals, music recordings, and environmental or 'ambient' sounds.


< Bernard Mont-Reynaud, (acoustics OR audio) >

Mont-Reynaud is an expert in audio and visual signal processing. His pioneering work at Stanford with doctoral student David Mellinger in the 1980s showed that concepts from visual signal processing could be applied to audio to derive properties that were useful in auditory scene analysis. This work was one of the early studies in CASA. He is now with a company in Silicon Valley concerned with audio, music, and pattern recognition.


< Richard M. Stern, binaural auditory >

Stern is involved in research concerned with improving the robustness of SPHINX, Carnegie Mellon’s large-vocabulary continuous-speech recognition system, with respect to acoustical distortion resulting from sources such as background noise, competing talkers, change of microphone, and room reverberation. Several different strategies are being used to address these problems, including the use of representations of the speech waveform that are based on the processing of sounds by the human auditory system. This research includes both psychoacoustical measurements to determine how we hear complex sounds, and the development of mathematical models that use optimal communication theory to relate the results of these experiments to the neural coding of sounds by the auditory system. Much of this work has been concerned with the localization of sound and other aspects of binaural perception.


< DeLiang Wang, computational auditory scene analysis >

Wang does research on machine implementations of biologically plausible neural computation for auditory and visual analysis (including segmentation, recognition, and generation), based on psychological and neurobiological data from human and animal perception as well as on computational considerations. A fundamental aspect of perception is scene analysis and segmentation: the ability to group elements of a perceived scene or sensory field into coherent clusters or objects. Wang's group focuses on solving this problem by means of large networks of coupled neural oscillators. His recent work focuses on developing models and algorithms for computational auditory scene analysis (CASA) that incorporate analyses of pitch, location, amplitude and frequency modulation, onset/offset, rhythm, and so on.


Music: Perception, cognition & technology

< James W. Beauchamp, technology music >

Beauchamp studies computer music; analysis, synthesis, and perception of musical sounds; wind acoustics; pitch detection; musical sound separation; a method of multiple wavetable synthesis called "Spectral Dynamic Synthesis"; and methods for testing listeners' abilities to discriminate between original acoustic sounds and synthetic replicas. He works on the continued development of the C/Unix-based software packages SNDAN, for sound spectrum analysis, and Music 4C, for musical score synthesis.


< David Huron, music >

Huron's early research centered on the study of the perceptual foundations of melody and voice-leading, using his Humdrum pattern-finding software to scan a database of musical scores. He has also done research on sensory dissonance, musical similarity, and musical expectation. His work on expectation is chronicled in the book Sweet Anticipation: Music and the Psychology of Expectation, published by MIT Press. Supplementary material related to the book (including sound examples) is also available online. His current research interests focus on better understanding how music evokes emotion. Why is music so enjoyable? What precisely are its mental attractions?


< Guy Madison, music psychology >

Madison's current research projects [2010] include: (1) Physiological reactions to music and other sound patterns. (2) Music preferences, in particular as a function of repeated exposure. (3) Synchronization and timing in music: (a) how people co-ordinate their movements (including the voice) with other events with a high level of precision, as they do in musical ensembles. The results are used to construct models that describe and simulate human behaviour. (b) The relation between timing (the ability to judge time and to control one's behaviour in time) and cognitive ability (executive functions, psychometric intelligence). (c) Children with ADHD, in particular with respect to timing, reaction time, inhibition, and attention. (d) The experience of groove or "swing" in music; what is its function and biological basis, and how is it related to properties of the sound signal? (e) The human rhythmic ability and its biological basis: adaptation, ethology, neurology, and comparisons with other species (evolutionary psychology, comparative psychology).


< Stephen McAdams, music perception cognition >

McAdams is interested in auditory perception and cognition in everyday and musical listening. Topics of particular interest are: 1) the mechanisms of auditory analysis of complex scenes with multiple sources of sound, 2) the perception of the timbre of musical instruments, 3) the perception, recognition and identification of vibrating objects in the environment, and 4) the perception of musical materials and forms, particularly in naturalistic conditions like sitting in a concert. The primary emphasis of the research is on psychophysical techniques capable of quantifying the relations between the properties of vibrating objects, acoustic signals or complex messages and their perceptual results. A long-term goal is to provide empirical data that will allow the integration of lower- and higher-level auditory processes.


Speech Perception

< Deniz Baskent, speech perception >

Baskent studies speech perception, especially in hearing-impaired listeners, including (a) the interaction of audition and vision, (b) phonemic restoration in sensorineural hearing loss, (c) perceptual restoration of amplitude-compressed speech, (d) the use of information from genetic algorithms to assist in fitting hearing aids and cochlear implants, (e) whether information transmission by cochlear implants is limited by the listener's peripheral auditory system, and (f) frequency-place compression and expansion in cochlear implant listeners. She has also done research on robotic sonar sensors and on map building from range data with mathematical morphology.


< Valter Ciocca, speech perception >

Ciocca's research has focused on three themes in speech and hearing sciences: (1) Auditory scene analysis (ASA): how the perception of phonetic identity and pitch is affected by the presence of extraneous sounds. He has compared ASA abilities in normally hearing adults, children, and adults with hearing impairments (particularly cochlear implant users), to understand how the brain processes mixtures of sounds, and how this differs in individuals with hearing disorders and as the brain develops. (2) The perception and production of speech by typical speakers and by those with cerebral palsy, cleft lip and palate, or hearing impairments, with the goal of understanding the articulatory strategies used by individuals with disordered speech production systems, and how listeners are able to recognize their speech. (3) Pitch perception and production (as involved in lexical tones and intonation in Cantonese) in adults and in typically developing children, as well as in children and individuals with communication disorders. His current projects study (a) the perception of the illusory continuation of interrupted sounds through louder noise and through short gaps of silence by listeners with normal hearing and by cochlear implant users, and (b) the auditory processing abilities of individuals with autistic savant syndrome.


< Christopher Darwin, speech perception >

Darwin, now retired, has been a leader in the study of auditory perception in relation to speech perception, more specifically of auditory grouping and the "cocktail-party" effect. You can find his home page using the above search expression and listen to a BBC Radio 4 program to which he made a contribution on the cocktail-party problem. Use the following search expression to find his selected publications as pdfs (including difficult-to-obtain material): < Chris Darwin's Selected Publications >.


Acoustic technology

< Ramani Duraiswami, auditory >

Duraiswami studies microphone arrays, the measurement of the head-related transfer function (HRTF), which describes how the spectrum at the ear changes with the position of the source, the computation of HRTFs, and the creation of virtual auditory spaces.
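As a rough illustration of how an HRTF is used to create a virtual auditory space (my own sketch in Python, with made-up impulse responses standing in for measured HRTFs): a mono signal is convolved with the left- and right-ear impulse responses measured for the desired source direction, and the two results are presented to the two ears.

import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(0, 0.5, 1.0 / fs)
mono = 0.5 * np.sin(2 * np.pi * 440 * t)           # a 440-Hz tone as the source signal

# Placeholder impulse responses (not real HRTF measurements): a source off to the
# right reaches the right ear slightly earlier and slightly louder than the left ear.
hrir_right = np.zeros(256); hrir_right[0] = 1.0
hrir_left = np.zeros(256);  hrir_left[30] = 0.6    # about 0.68 ms later and attenuated

left = fftconvolve(mono, hrir_left)[:len(mono)]
right = fftconvolve(mono, hrir_right)[:len(mono)]
binaural = np.stack([left, right], axis=1)         # two-channel rendering of the virtual source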


< Masataka Goto, music audio >

Goto works on the creation of (1) technology that assists active musical listening, e.g., (a) LyricSynchronizer: Automatic Synchronization of Lyrics with CD Recordings; (b) INTER: An Instrument Equalizer for CD Recordings; (c) MusicSun: Artist Discovery Interface Using Audio-Based, Web-Based, and Word-Based Similarities; and (d) a real-time beat tracking system for musical acoustic signals; (2) tools for exploring music on the Web, e.g., MusicSun, a graphical user interface for discovering artists, in which artists are recommended on the basis of one or more artists selected by the user and the recommendations are computed by combining three different aspects of similarity; and (3) speech-recognition interfaces for music information retrieval.


< Lloyd Watts, Audience >

Watts works in audio signal processing. He has worked on (a) real-time, high-resolution simulation of the auditory pathway, with application to cell-phone noise reduction, (b) objective measures for the characterization of the basic functioning of noise reduction algorithms, (c) automatic speech recognition, and (d) other topics. His company, Audience, has developed chips for the reduction of cell-phone noise, using strategies based on auditory neurophysiology and auditory scene analysis.


Auditory environments and architecture

< Sophie Arkette, philosophy auditory perception >

Arkette works in the field of acoustic ecology: understanding the role of sound as a component of urban experience, the phenomenological approach to the urban experience, altering that experience through deliberate design (e.g., creating spaces that enhance one’s auditory awareness), and harmonic structure and pitch perception.


< Barry Blesser, aural architecture >

Blesser is considered one of the grandfathers of the digital audio revolution. He invented and developed the first commercial digital reverberation system, the EMT-250 in 1976, helped start Lexicon in 1971, published the landmark paper, "Digital Processing of Audio Signals" in 1978, co-chaired the 1st International Conference on Digital Audio in 1980, and was an adviser to the US Justice Department on the Watergate Tapes in 1974. More recently, he authored Spaces Speak, Are You Listening? Experiencing Aural Architecture (2006). He studies spatial perception, aural architecture, the auditory experience of place, soundscapes, eventscapes, acoustic engineering, and sensory architecture.






