{{ExpPsy}}
 
{{Hearing}}
'''Hearing''' is one of the traditional five [[sense]]s, and refers to the ability to detect [[sound]]. In humans and other vertebrates, hearing is performed primarily by the [[auditory system]]: [[sound]] is detected by the [[ear]] and transduced into nerve impulses that are perceived by the [[brain]].
   
 
Like touch, audition requires sensitivity to the movement of molecules in the world outside the organism. Both hearing and touch are types of mechanosensation.<ref>Kung C., "A possible unifying principle for mechanosensation," ''Nature'', 436(7051):647–54, 2005 Aug 4.</ref>
 
   
 
== Hearing in animals ==
 
 
Not all sounds are normally audible to all animals. Each species has a range of normal hearing for both loudness (amplitude) and pitch ([[frequency]]). Many animals use sound in order to communicate with each other and hearing in these species is particularly important for survival and reproduction. In species using sound as a primary means of communication, hearing is typically most acute for the range of pitches produced in calls and speech.
   
 
Frequencies capable of being heard by humans are called [[Audio frequency|audio]] or [[sonic]]. Frequencies higher than audio are referred to as [[ultrasound|ultrasonic]], while frequencies below audio are referred to as [[infrasound|infrasonic]]. Some [[microbat|bats]] use ultrasound for [[Animal echolocation|echolocation]] while in flight. [[Dog]]s are able to hear ultrasound, which is the principle of 'silent' [[dog whistle]]s. [[Snake]]s sense infrasound through their bellies, and [[whale]]s, [[giraffe]]s and [[elephant]]s use it for communication.
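
As a simple illustration of these conventional bands, the sketch below classifies a frequency relative to the nominal 20 Hz–20 kHz human range quoted in this article; the cut-offs are nominal conventions rather than sharp physiological limits, and the example frequencies are only rough typical values.

<syntaxhighlight lang="python">
# Illustrative sketch: classify a frequency relative to the nominal
# human hearing range (20 Hz - 20 kHz, as quoted in this article).
def classify_frequency(hz):
    if hz < 20:
        return "infrasonic"
    elif hz <= 20_000:
        return "audio (sonic)"
    else:
        return "ultrasonic"

print(classify_frequency(10))      # infrasonic (elephant rumbles fall near this region)
print(classify_frequency(440))     # audio (sonic) - concert A
print(classify_frequency(45_000))  # ultrasonic (typical of bat echolocation calls)
</syntaxhighlight>
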
 
 
 
 
 
   
 
The physiology of hearing in vertebrates is not yet fully understood. The molecular mechanism of sound transduction within the [[cochlea]] and the processing of sound by the brain (the [[auditory cortex]]) are two areas that remain largely unknown.
 
 
   
== Hearing in humans ==
 
Humans can generally hear sounds with frequencies between 20 [[Hertz|Hz]] and 20 [[kHz]]. Human hearing is able to discriminate small differences in loudness (intensity) and pitch (frequency) over that large range of audible sound. This healthy human range of frequency detection varies significantly with age, occupational hearing damage, and gender; some individuals are able to hear pitches up to 22 kHz and perhaps beyond, while others are limited to about 16 kHz. The ability of most adults to hear sounds above about 8 kHz begins to [[deteriorate]] in early middle age.<ref>http://www.nytimes.com/2006/06/12/technology/12ring.html?ex=1307764800&en=2b80d158770dccdf&ei=5088&partner=rssnyt&emc=rss</ref>
 
   
=== Mechanism ===
{{main|Auditory system}}
Human hearing takes place by a complex mechanism involving the transformation of [[sound wave]]s into [[nerve impulse]]s.
   
==== Outer ear ====
{{main|Outer ear}}
The visible portion of the outer ear in humans is called the auricle or the [[pinna]]. It is a convoluted cup that arises from the opening of the ear canal on either side of the head. The auricle helps direct sound to the ear canal. Both the auricle and the ear canal amplify and guide sound waves to the tympanic membrane or [[eardrum]].
   
 
In humans, amplification of sound by the outer ear ranges from 5 to 20 dB for frequencies within the speech range (about 1.5–7 kHz). Since the shape and length of the human external ear preferentially amplify sound in the speech frequencies, the external ear also improves the signal-to-noise ratio for speech sounds.<ref>Brugge, J. F. and Howard, M. A. (2002). Hearing. In ''Encyclopedia of the Human Brain''. Elsevier, pp. 429–448. ISBN 0-12-227210-2.</ref>
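
Because the decibel is a logarithmic unit, a gain expressed in dB corresponds to a sound-pressure amplitude ratio of 10<sup>dB/20</sup>. The minimal sketch below simply restates that standard relationship for the 5–20 dB outer-ear gain quoted above.

<syntaxhighlight lang="python">
def db_to_pressure_ratio(gain_db):
    """Convert a gain in decibels (SPL) to a sound-pressure amplitude ratio."""
    return 10 ** (gain_db / 20)

# The 5-20 dB outer-ear gain quoted above corresponds roughly to a
# 1.8x to 10x increase in sound-pressure amplitude at the eardrum.
for gain in (5, 10, 20):
    print(f"{gain} dB -> pressure x{db_to_pressure_ratio(gain):.1f}")
</syntaxhighlight>
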
==== Middle ear ====
{{main|Middle ear}}
The eardrum is stretched across the front of a bony air-filled cavity called the middle ear. Just as the tympanic membrane is like a drum head, the middle ear cavity is like a drum body.
   
Much of the middle ear's function in hearing is to convert the sound waves in the air surrounding the body into vibrations of the fluid within the [[cochlea]] of the inner ear. Sound waves move the tympanic membrane, which moves the [[ossicles]], which move the fluid of the cochlea.
==== Inner ear ====
{{main|Inner ear}}
The cochlea is a snail-shaped, fluid-filled chamber, divided along ''almost'' its entire length by a membranous partition. The cochlea propagates mechanical signals from the middle ear as waves in fluid and membranes, and then transduces them into nerve impulses that are transmitted to the brain. The [[vestibular system|vestibular]] part of the inner ear is responsible for the sensations of balance and motion.
   
==== Central auditory system ====
{{Expand-section|date=January 2007}}
This sound information, now re-encoded as nerve impulses, travels down the [[auditory nerve]] and through parts of the [[brainstem]] (for example, the [[cochlear nucleus]] and [[inferior colliculus]]), where it is further processed at each waypoint. The information eventually reaches the [[thalamus]], and from there it is relayed to the cortex. In the [[human brain]], the [[primary auditory cortex]] is located in the [[temporal lobe]].
   
=== Representation of loudness, pitch, and timbre ===
Nerves transmit information through discrete electrical impulses known as "[[action potential]]s." As the [[loudness]] of a sound increases, the rate of action potentials in the auditory nerve fibre increases. Conversely, at lower sound intensities (low loudness), the rate of action potentials is reduced.
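
This rate code can be caricatured with a saturating rate-intensity function. The sketch below is a toy illustration only; the threshold, slope, spontaneous rate, and maximum rate are invented values, not measurements from any real auditory nerve fibre.

<syntaxhighlight lang="python">
def firing_rate(level_db, threshold_db=20, max_rate=250, slope=8):
    """Toy rate-intensity function (spikes per second).
    All parameter values are illustrative, not physiological measurements."""
    if level_db <= threshold_db:
        return 10.0  # nominal spontaneous firing rate
    rate = 10.0 + slope * (level_db - threshold_db)
    return min(rate, max_rate)  # firing rate saturates at high intensities

for level in (10, 30, 50, 70, 90):
    print(level, "dB ->", firing_rate(level), "spikes/s")
</syntaxhighlight>
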
Different repetition rates and spectra of sounds, that is, [[Pitch (psychophysics)|pitch]] and [[timbre]], are represented on the auditory nerve by a combination of rate-versus-place and temporal-fine-structure coding. That is, different frequencies cause a maximum response at different places along the [[organ of Corti]], while different repetition rates of low enough pitches (below about 1500 Hz) are represented directly by repetition of neural firing patterns (known also as ''volley'' coding).
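
The rate-versus-place ("tonotopic") component of this code is often summarised with Greenwood's frequency-position function, an empirical fit relating position along the basilar membrane to characteristic frequency. The sketch below uses the commonly quoted human constants as an assumption for illustration.

<syntaxhighlight lang="python">
def greenwood_frequency(x):
    """Greenwood frequency-position function for the human cochlea.

    x is the position along the basilar membrane as a fraction of its
    length, measured from the apex (0.0) to the base (1.0).
    Constants (A = 165.4 Hz, a = 2.1, k = 0.88) are the commonly cited
    human values; treat them as illustrative assumptions.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map near the apex, high frequencies near the base,
# consistent with the place coding described above.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> ~{greenwood_frequency(x):.0f} Hz")
</syntaxhighlight>
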
Loudness and duration of sound (within small time intervals) may also influence pitch to a small extent. For example, for sounds higher than 4000 Hz, as loudness increases, the perceived pitch also increases.
=== Localization of sound ===
{{main|sound localization}}
   
 
Humans are normally able to hear a variety of sound frequencies, from about 20 Hz to 20 kHz. Our ability to estimate where a sound is coming from, called sound localization, depends both on the hearing ability of each of the two ears and on the exact quality of the sound. Since each ear lies on an opposite side of the head, a sound will reach the closest ear first, and its amplitude will be larger in that ear.
The shape of the [[pinna]] (outer ear) and of the head itself result in frequency-dependent variation in the amount of attenuation that a sound receives as it travels from the sound source to the ear; furthermore, this variation depends not only on the azimuthal angle of the source, but also on its elevation. This variation is described as the [[head-related transfer function]], or HRTF. As a result, humans can locate sound in both azimuth and elevation. Most of the brain's ability to localize sound depends on interaural (between-ear) intensity differences and interaural temporal or phase differences. In addition, humans can also estimate the distance of a sound source, based primarily on how reflections in the environment modify the sound, for example as in room reverberation.
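
A common back-of-the-envelope model of the interaural time difference treats the head as a rigid sphere (Woodworth's approximation). In the sketch below, the head radius and the speed of sound are assumed textbook values used only for illustration.

<syntaxhighlight lang="python">
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the ITD (in seconds)
    for a distant source at the given azimuth (0 deg = straight ahead).
    Head radius ~8.75 cm and c = 343 m/s are assumed textbook values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# The ITD grows from 0 at the midline to roughly 0.6-0.7 ms for a source
# directly to one side, which is the range of time cues the brain uses.
for az in (0, 30, 60, 90):
    print(f"{az:>2} deg -> {interaural_time_difference(az) * 1e6:.0f} microseconds")
</syntaxhighlight>
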
[[Human echolocation]] is a technique used by some [[Blindness|blind]] humans to navigate within their environment by listening for echoes of the click or tap sounds that they emit.
=== Hearing and language ===
Human beings develop [[spoken language]] within the first few years of life, and hearing impairment can prevent not only the ability to talk but also the ability to understand the spoken word. By the time it becomes apparent that a severely hearing-impaired (deaf) child has a hearing deficit, problems with communication may already have caused difficulties within the family and hindered social skills, unless the child is part of a Deaf community where sign language is used instead of spoken language (see [[Deaf Culture]]). In many developed countries, hearing is therefore evaluated during the newborn period in an effort to prevent the inadvertent isolation of a deaf child in a hearing family. Although sign language is a full means of [[communication]], [[literacy]] depends on understanding [[speech]]: in the great majority of written languages, the sound of the [[word]] is coded in symbols. An individual who hears and learns to speak and read will retain the ability to read even if hearing becomes too impaired to hear voices, but a person who never heard well enough to learn to speak is rarely able to read proficiently.<ref>Morton, C. C. and Nance, W. E. (2006). Newborn hearing screening--a silent revolution. ''New England Journal of Medicine'', 354(20): 2151–64.</ref> Most evidence points to early identification of hearing impairment as key if a child with very insensitive hearing is to learn spoken language.
Listening also plays an important role in learning a [[second language]].
   
== Hearing tests ==
Hearing can be measured by behavioral tests using an [[audiometer]]. Electrophysiological tests of hearing can provide accurate measurements of hearing thresholds even in unconscious subjects. Such tests include the auditory brainstem response (ABR), otoacoustic emissions, and electrocochleography (ECochG). Technical advances in these tests have allowed hearing screening for infants to become widespread.
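
Behavioral audiometry typically estimates a threshold with an adaptive staircase, for example a "down 10 dB after a response, up 5 dB after a miss" rule. The sketch below is a simplified illustration of that idea, not a clinical protocol; the simulated listener and all level values are invented for the example.

<syntaxhighlight lang="python">
def staircase_threshold(true_threshold_db, start_db=40, reversals_needed=6):
    """Toy 'down 10 dB after a yes, up 5 dB after a no' staircase.

    true_threshold_db stands in for the listener; in a real test the
    response comes from the patient. All values here are illustrative,
    not a clinical audiometry protocol.
    """
    level, reversals, going_down = start_db, [], None
    while len(reversals) < reversals_needed:
        heard = level >= true_threshold_db           # simulated deterministic response
        if going_down is not None and heard != going_down:
            reversals.append(level)                  # direction change: record a reversal
        going_down = heard
        level += -10 if heard else 5
    return sum(reversals) / len(reversals)           # estimate = mean of reversal levels

print(staircase_threshold(true_threshold_db=25))     # about 21 dB for this toy listener
</syntaxhighlight>
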
   
== Hearing underwater ==
Hearing thresholds and the ability to localize sound sources are reduced underwater, where the speed of sound is greater than in air. Underwater hearing occurs by [[bone conduction]], and localization of sound appears to depend on differences in amplitude detected by bone conduction.<ref>Shupak, A., Sharoni, Z., Yanir, Y., Keynan, Y., Alfie, Y. and Halpern, P. (2005). Underwater hearing and sound localization with and without an air interface. ''Otology & Neurotology'', 26(1): 127–30.</ref>
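
One reason localization degrades underwater is that interaural time differences shrink, because sound travels roughly four to five times faster in water than in air. The sketch below compares the two media using commonly quoted sound speeds and an assumed ear spacing; all three numbers are illustrative assumptions.

<syntaxhighlight lang="python">
# Illustrative comparison of the largest interaural time difference in air
# vs. water, assuming c_air ~343 m/s, c_water ~1480 m/s and an ear spacing
# of ~0.2 m (all assumed round numbers, not measurements).
EAR_SPACING_M = 0.2
for medium, speed in (("air", 343.0), ("water", 1480.0)):
    max_itd_us = EAR_SPACING_M / speed * 1e6
    print(f"max ITD in {medium}: ~{max_itd_us:.0f} microseconds")
</syntaxhighlight>
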
== References ==

<references/>
*Gelfand, S. A. (2004). ''Hearing: An Introduction to Psychological and Physiological Acoustics'' (4th ed.). New York: Marcel Dekker.
*Moore, B. C. (2004). ''An Introduction to the Psychology of Hearing'' (5th ed.). London: Elsevier Academic Press.
*Yost, W. A. (2000). ''Fundamentals of Hearing: An Introduction'' (4th ed.). San Diego: Academic Press.
 
== See also ==
 
* [[Absolute threshold of hearing]]
* [[Audiograms in mammals]]
* [[Audiometry]]
* [[Auditory brainstem response]] (ABR) test
* [[Auditory illusion]]
* [[Auditory perception]]
* [[Auditory scene analysis]]
* [[Auditory system]]
* [[Bone conduction]]
* [[Cochlear implant]]
* [[Equal-loudness contour]]
* [[Hearing impairment]]
* [[Human echolocation]]
* [[Missing fundamental]]
* [[Music]]
* [[Music and the brain]]
* [[Presbycusis]]
* [[Tinnitus]]
* [[Ultrasonic hearing]]
 
   
 
{{Sensory_system}}
 
   
<!--
[[bn:শ্রবণ]]
[[ca:Oïda]]
[[cs:Sluch]]
[[da:Høresans]]
[[de:Auditive Wahrnehmung]]
[[es:Audición]]
[[eo:Aŭdo]]
[[eu:Entzumen]]
[[fr:Ouïe]]
[[it:Udito]]
[[ko:청각]]
[[lt:Klausa]]
[[he:שמיעה]]
[[nl:Horen]]
[[pl:Słuch]]
[[pt:Audição]]
[[ru:Слух]]
[[simple:Hearing]]
[[fi:Kuuloaisti]]
[[sv:Hörsel]]
[[zh:听觉]]
-->
 
 
 
{{enWP|Hearing (sense)}}
 
 
[[Category:Hearing| ]]
 
[[Category:Sound]]
