Nature, Vol.374, No.6522, 537-539, 1995
A Common Neural Code for Frequency-Modulated and Amplitude-Modulated Sounds
MOST naturally occurring sounds are modulated in amplitude or frequency; important examples include animal vocalizations and species-specific communication signals in mammals, insects, reptiles, birds and amphibians(1-9). Deciphering the information from amplitude-modulated (AM) sounds is a well-understood process, requiring a phase locking of primary auditory afferents to the modulation envelopes(10-12). The mechanism for decoding frequency modulation (FM) is not as clear because the FM envelope is flat (Fig. 1). One biological solution is to monitor amplitude fluctuations in frequency-tuned cochlear filters as the instantaneous frequency of the FM sweeps through the passband of these filters. This view postulates an FM-to-AM transduction whereby a change in frequency is transmitted as a change in amplitude(13,14). This is an appealing idea because, if such transduction occurs early in the auditory pathway, it provides a neurally economical solution to how the auditory system encodes these important sounds. Here we illustrate that FM and AM sounds must be transformed into a common neural code in the brain stem. Observers can accurately determine whether the phase of an FM presented to one ear is leading or lagging, by only a fraction of a millisecond, the phase of an AM presented to the other ear. A single intracranial image is perceived, the spatial position of which is a function of this phase difference.
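The FM-to-AM transduction idea described above can be illustrated numerically. The sketch below (an illustration of the principle, not a model from the paper; the sample rate, carrier frequency, modulation parameters and filter band are arbitrary choices) passes a flat-envelope FM tone through a fixed band-pass filter standing in for a frequency-tuned cochlear channel. As the instantaneous frequency sweeps through the passband, the filter output acquires a fluctuating amplitude envelope, even though the input envelope is constant.

```python
# Sketch of FM-to-AM transduction by a fixed band-pass filter.
# All parameters below are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000.0                      # sample rate (Hz)
t = np.arange(int(fs)) / fs       # 1 s of signal

# Sinusoidal FM: 1 kHz carrier, 200 Hz frequency deviation, 4 Hz modulation rate.
fc, dev, fmod = 1000.0, 200.0, 4.0
phase = 2 * np.pi * fc * t + (dev / fmod) * np.sin(2 * np.pi * fmod * t)
fm_tone = np.cos(phase)           # flat envelope: instantaneous amplitude ~1 throughout

# Fixed band-pass "cochlear" filter near the upper edge of the 800-1200 Hz sweep.
sos = butter(4, [1100, 1300], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, fm_tone)

# Envelope of the filter output now fluctuates at the modulation rate:
# the frequency change has been transmitted as an amplitude change.
env_out = np.abs(hilbert(filtered))
env_in = np.abs(hilbert(fm_tone))
```

Comparing the peak-to-trough ratio of `env_in` (near unity) with that of `env_out` (large) shows the transduction: a filter tuned away from part of the sweep converts frequency excursions into amplitude excursions.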