Your Sense of Hearing
Ears are for hearing and balance. Both involve complex translations of vibrations into nerve impulses that the brain can interpret, as sound in one case and as changes in position and pressure in the other.
The following is excerpted from A Primer on Hearing.
How sensitive is hearing?
Extraordinarily so. The ear can detect a sound wave so small that it moves the eardrum just one angstrom, less than the diameter of a hydrogen molecule. Murray Sachs, director of biomedical engineering, likes to say that if there were nothing between you and the airport 10 miles away, and if there were no other sounds and nothing for sound to reflect from–then theoretically, you could hear a piece of chalk drop at the airport.
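One way to put the airport claim in rough numbers is the inverse-square law, which the article does not spell out: each doubling of distance costs a spreading sound about 6 dB of pressure level. A minimal sketch, taking 10 miles as roughly 16,000 m:

```python
import math

def attenuation_db(distance_m):
    """Drop in sound pressure level (dB) from 1 m out to `distance_m`,
    assuming simple free-field inverse-square spreading.
    (Illustrative only: real air also absorbs sound, especially high frequencies.)"""
    return 20 * math.log10(distance_m)

# Ten miles is roughly 16,000 m:
print(round(attenuation_db(16_000)))  # ~84 dB of geometric spreading loss
```

Even before air absorption, the chalk drop loses some 84 dB on the way, which is why the example marks a theoretical limit of sensitivity rather than an everyday experience.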
What does hearing do for us?
It helps humans communicate by hearing and understanding speech, and helps other species by hearing its less elaborate cousin, vocalization. “More generally,” says Eric Young, director of the Johns Hopkins Center for Hearing Sciences, “it’s our far sense. It notifies us of things we can’t see but that may be important, be it a prowler or the baby whimpering.” Hearing does that by being extraordinarily sensitive, and also by being able to compute where a sound is in space.
What the nervous system gets is two streams of sound, one in the left ear and one in the right; it then calculates the sound’s time of arrival at each ear, and the difference reveals roughly where the sound is in space (to within about 1° of a circle). (“Ha! The left ear got it sooner, so it’s off to the left, about there.”) Compared with vision, human hearing locates objects crudely. “But it’s good enough,” says Young, “that you can turn your eyes toward the object and try to find it.”
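The time-of-arrival calculation Young describes can be sketched with the standard interaural-time-difference model (my gloss, not the article’s). The 0.21 m ear spacing and 343 m/s speed of sound below are assumed round figures:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C (assumed)
EAR_SPACING = 0.21       # m, an assumed average distance between the ears

def bearing_from_itd(delta_t_s):
    """Estimate a sound's bearing from the interaural time difference,
    using the simple model delta_t = (d / c) * sin(angle).
    Returns degrees off the midline, toward the ear that heard it first."""
    ratio = SPEED_OF_SOUND * delta_t_s / EAR_SPACING
    ratio = max(-1.0, min(1.0, ratio))  # clamp rounding error into asin's domain
    return math.degrees(math.asin(ratio))

# A sound arriving 0.3 ms earlier at the left ear:
print(round(bearing_from_itd(0.0003)))  # ~29 degrees off to that side
```

Under these assumed numbers, the roughly 1° resolution the article mentions corresponds to timing differences on the order of 10 microseconds, which is what makes the brain stem’s computation remarkable.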
What good is earwax?
It does unpleasant things to insect intruders.
How does hearing work?
Mechanically, it’s like a Swiss watch. Any engineer would be proud to have invented a device of such precision.
You can think of the system as a relay race, except that the baton keeps transforming into something else: Energy enters the ear (see diagram) in the form of a sound wave, to be converted at the eardrum into mechanical vibrations of the middle-ear bones (the ossicles, the smallest bones in the body). These mechanical vibrations become pressure waves in the fluid of the inner ear (the cochlea), and the waves bend bundles of the cilia (Latin for hairs) of what are called hair cells. Each time cilia bend, hair cells start electrical signals firing toward the brain.
Moreover, even as it performs this hey-presto change-o, the ear mechanically boosts the signal by some 25 decibels in our best range of hearing.
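Decibels are logarithmic, so a 25 dB boost is larger than it may sound. A quick conversion (not from the article) into a raw amplitude ratio:

```python
def db_to_pressure_ratio(db):
    """Convert a decibel gain to a sound-pressure amplitude ratio
    (dB = 20 * log10(ratio), so ratio = 10 ** (dB / 20))."""
    return 10 ** (db / 20)

# The article's ~25 dB middle-ear boost, as a raw amplitude ratio:
print(round(db_to_pressure_ratio(25), 1))  # ~17.8x the pressure amplitude
```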
How does the brain manage to get all the subtleties of sound and speech out of vibrations alone?
The auditory system does more work before the cortex gets involved than the other senses do. Smell sensations go directly from receptor to olfactory bulb, and signals for sight and touch make three stops before they reach the cortex. In hearing, though, there are five waystations, clusters of nerve cells that Young calls ‘calculational centers’, right in the brain stem.
The brain stem is the stemlike structure that connects the spinal column with the cerebral hemispheres, and its processing starts almost from scratch: the sound wave that enters your ear is inchoate. It might include bagpipe droning, trees rustling, air conditioner hissing, keyboard clicking, ambulance ululating, fax beeping, several people talking, and more. The nervous system must pick this jumble apart so you can tell one sound from another and pay attention to what matters–the person you’re talking to, let’s say.
Step one is pitch, which is handled by the hair cells in the cochlea. (Young people with normal hearing have about 15,000 in each ear.) The cells are arranged rather like a piano keyboard, on a long narrow membrane that spirals the length of the (spiral) cochlea, and each hair cell is sensitive to a particular frequency at a particular loudness. At one end of the membrane, hair cells react to high-pitched sounds, at the other to low ones, in between to in between.
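The keyboard-like frequency map along the cochlear membrane is often modeled with the Greenwood function, which the article does not name; a sketch using the standard human parameters (A = 165.4, a = 2.1, k = 0.88):

```python
def greenwood_frequency(x):
    """Greenwood's map from position along the basilar membrane to best
    frequency, with standard human parameters. `x` is the fractional
    distance from the apex (0, low pitches) to the base (1, high pitches)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Apex responds to low pitches, base to high ones:
for x in (0.0, 0.5, 1.0):
    print(f"{x:.1f} -> {greenwood_frequency(x):,.0f} Hz")
```

The endpoints come out near 20 Hz and 20 kHz, roughly the conventional limits of human hearing, which is why this function is a common stand-in for the cochlea’s piano-keyboard arrangement.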
Then nuclei in the brain stem take over, to locate the source of the sounds in space (as discussed) and to sort all those hundreds of tones into units by timbre, families of resonance. Between those two distinctions, we all know, seemingly without effort, that one set of sounds represents a bagpipe and another set footsteps.
Auditory signals also get sharper, because the clever brain stem deletes a clutter of echoes before they ever reach awareness. As your friend’s voice and piano playing bounce off the walls, fireplace, and ceiling, a processing center recognizes the echoes as duplicates because they arrive a tad later. It deletes all but the original signal–a neat trick, given the complexity of the sound.
We do hear echoes like halloos at the Grand Canyon, of course. That’s because they come at longer intervals, so the brain stem construes them as separate sounds and sends them on to conscious awareness.
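The behavior described above, fusing near-simultaneous copies of a sound while letting long-delayed ones through, is known in the hearing literature as the precedence effect. A toy sketch, with an assumed 50 ms window separating “room reflection” from “distinct echo” (the window size is illustrative, not from the article):

```python
def suppress_echoes(arrival_times_ms, window_ms=50):
    """Toy model of the precedence effect: copies of a sound arriving
    within `window_ms` of an earlier accepted copy are treated as
    reflections and dropped; later arrivals pass through as echoes."""
    heard = []
    for t in sorted(arrival_times_ms):
        if not heard or t - heard[-1] > window_ms:
            heard.append(t)
    return heard

# Direct sound at 0 ms, room reflections at 5/12/30 ms, a canyon echo at 900 ms:
print(suppress_echoes([0, 5, 12, 30, 900]))  # [0, 900]
```

The room reflections fuse into the original, while the Grand Canyon halloo, arriving far outside the window, survives as a separate sound, matching the article’s account.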
New and unfamiliar sounds do not get deleted, however. On the contrary, they tend to attract our attention, as you may have noticed the first time you heard an icemaker dropping ice cubes into the bin. In such a case, the brain stem may even trigger the motor cortex, making you jump and look around–a startle reaction, which is reflexive; the conscious mind is not involved.
The brain stem also handles the first steps of understanding speech: it ascertains that a particular series of sounds is speech. Then it deletes all but the sounds that matter to meaning in the hearer’s native language. Such sounds (for example oh and ah, puh and tuh) are called phonemes.
There are at least 60 phonemes, depending on what sounds you count (English uses 40-some), and by the time a child is 6 months old, its brain is already specialized for its own language. The classic example is English vs. Japanese. Native English speakers have a notoriously hard time with spoken Japanese, because the meaning of individual English words does not depend on rising and falling inflections, as Japanese meanings can.
Many Japanese speakers, conversely, cannot distinguish between L and R, because that distinction does not exist in their language. “Of course they can hear R,” says Stewart Hulse, a psychologist in Arts & Sciences whose field is auditory processing. “If you test them: ‘Is this sound, ruh, the same as this sound, luh?’, they’ll say no. They can hear it. But they can’t hear R in spoken language, because their brain stem has thrown it out, before conscious awareness. It’s almost impossible to hear these things.”
By the time a sound arrives at the cortex, then, it has been analyzed for pitch, timbre, salience, and where it comes from, at a minimum.
What happens once the signals reach the cortex?
More processing. In general, the cortex is arranged in anatomical columns, literally stacks of cells that work together to store, decode, and process information (a discovery made in the somatosensory cortex by Hopkins’s great neuroscientist emeritus, Vernon Mountcastle).
At the point where sound signals reach the auditory cortex, columns initially correspond to frequencies reported by the hair cells. A single tone may activate a large area of cortex, though, in ways that are only murkily understood. Suffice it to say that as complex patterns of firing develop, the rest of the cortex gets involved, comparing the patterns with stored templates to tell you, ‘Oh! That’s just the refrigerator. Pay no attention.’
Music is thought to be processed in the right hemisphere, language in the left, both in structures that evolved from the auditory cortex itself. Note that the auditory cortex reports to the language center, not the other way around.
If you sit quietly and catalog the sounds around you, you may be surprised at how very many signals are out there. Yet what you consciously hear depends on which sounds you pay attention to, if any. If you’re reading, you may feel you hear nothing. If you’re deep in conversation, you hear the other person’s voice. But you won’t be aware of the icemaker’s clatter unless you’ve never heard it before; attention suppresses stimuli that are non-salient. Otherwise we would all go mad.
Context helps in the work of integrating signals, too. Next time you are listening to someone who mumbles or has a strong accent, notice how much it helps if you have some idea what the person is going to say, or at least what the topic is.
Is the auditory system especially fragile?
Actually, the ear protects itself well. The outer ear keeps the eardrum warm and out of harm’s way, while the middle ear can dampen most sounds that are loud enough to hurt the all-important hair cells. And when hair cells do get overexercised, they tend to quit for a time. That’s why the universe seems muted right after a loud concert.
Probably because of continued insults, however, hearing problems seem to be more widespread in the industrialized world than elsewhere. In the U.S., the major causes of hearing loss are the loss of hair cells and otosclerosis, a stiffening of the middle ear.
Loss of hair cells is permanent. (“You have all the hair cells you’ll ever have at birth,” says Young.) It mainly affects soft sounds and high frequencies, the range where women and children tend to speak.
From A Primer on Hearing
Hypothesis: Opening muscles and nerve receptors in the ears increases sensitivity despite lost hair cells.
Hypothesis: Opening ears has a positive effect on nearsightedness.
Hearing and listening are associated with the throat (5th) chakra, which governs communication and creativity. No wonder we get a lump in our throat when we feel challenged to express ourselves!
Sound travels more quickly through warm air than through cold air.
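That temperature dependence is commonly approximated with a linear formula; a small sketch (the 331.3 + 0.606·T approximation is a textbook figure for dry air, not from this document):

```python
def speed_of_sound_air(temp_c):
    """Approximate speed of sound in dry air (m/s) at `temp_c` degrees C,
    using the common linear approximation c = 331.3 + 0.606 * T."""
    return 331.3 + 0.606 * temp_c

print(round(speed_of_sound_air(0), 1))   # 331.3 m/s in freezing air
print(round(speed_of_sound_air(30), 1))  # 349.5 m/s on a hot day
```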
Chakras Chart: shows each chakra’s corresponding sense, area of consciousness, color vibration, musical vibration, gland, nerve, body system, and element – The Brofman Foundation for the Advancement of Healing