In the most recent University of California, Berkeley College of Letters and Science newsletter, we came across the highlights of a Cal professor’s research into the brain’s remarkable ability to pay attention to certain sounds.
“It’s like when you focus on one voice at a cocktail party,” says Michael DeWeese, a Berkeley professor of physics. “Your brain has top-down executive control that can direct your attention to sounds you want to focus on despite all the distracting sounds in your environment.” DeWeese is working out the neurological mechanisms behind selective auditory attention.
So how does our brain filter out background noise and allow us to focus our attention on relevant auditory stimuli?
The brain is thought to modulate attention by altering neural behavior. Just as aspirin can increase the amount of stimulus required to make a neuron pass along pain messages, neuromodulator molecules such as acetylcholine can make some neurons more or less likely to relay information about sound stimuli. “There is some change in the internal cell processing of signals,” DeWeese says. “In addition, the transmission of sensory information is gated at the circuit level.” These changes likely occur within many of the neurons in a given circuit, and to different degrees in different brain regions.
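To make the gating idea a little more concrete, here is a minimal sketch in Python. It is our illustration, not a model from DeWeese’s lab: a simple leaky integrate-and-fire neuron whose spiking threshold stands in for the neuromodulatory state. The function name simulate_neuron and all of the parameter values are hypothetical; the point is only that shifting the threshold determines whether the same weak, sound-driven input gets relayed as spikes or gated out.

```python
# A minimal sketch (not DeWeese's model): a leaky integrate-and-fire neuron
# whose spiking threshold is shifted by a hypothetical "neuromodulation" level.
# Lowering the threshold lets a weak input through; raising it gates the same
# input out, illustrating how a modulator could make a neuron more or less
# likely to relay a sound-driven signal.

import numpy as np

def simulate_neuron(input_current, threshold, dt=0.001, tau=0.02):
    """Return spike times for a leaky integrate-and-fire neuron."""
    v = 0.0                      # membrane potential (arbitrary units)
    spikes = []
    for step, i_in in enumerate(input_current):
        # leaky integration: decay toward rest, plus the input drive
        v += dt * (-v / tau + i_in)
        if v >= threshold:       # threshold crossing -> spike, then reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

# the same weak, noisy "sound-driven" input in both conditions
rng = np.random.default_rng(0)
drive = 1.2 + 0.3 * rng.standard_normal(1000)    # one second at dt = 1 ms

attended   = simulate_neuron(drive, threshold=0.02)  # modulator lowers threshold
unattended = simulate_neuron(drive, threshold=0.04)  # higher threshold gates input

print(f"spikes when 'attended':   {len(attended)}")
print(f"spikes when 'unattended': {len(unattended)}")
```

Run as written, the lower-threshold neuron fires dozens of times while the higher-threshold one stays nearly silent, even though both receive identical input. Real neuromodulation acts through many mechanisms at once, but the toy model captures the circuit-level gating DeWeese describes.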
…
Encoding sounds efficiently, and ignoring those deemed unimportant, offers strong evolutionary advantages. “It allows the brain to use those operations in a dynamical, smart way. You don’t want to waste your sensory processing resources on sounds that don’t matter,” DeWeese says.
Comprehension of speech in noise is a skill that frequently improves after Fast ForWord training, even though the Fast ForWord programs don’t include any exercises specifically geared toward that skill. Ann Osterling, a pediatric speech-language pathologist with a private practice in Champaign, IL, says this is because Fast ForWord training improves the underlying skills needed to process speech in noise. Ann offers the following examples:
- the brain has been trained to hear each phoneme more clearly; for some kids, similar-sounding phonemes had “fuzzy” representations that are now sharper, so the brain recognizes them more easily
- the brain has been trained to process phonemes more rapidly, so it doesn’t have to spend as much time figuring out what each phoneme is
- the brain can remember more sounds and words in a row because it is processing more rapidly
- it is now easier for the brain to attend, and thus pick up the important message and filter out what isn’t
- there is improved ability to sustain attention for listening
- overall, the brain is more efficient at listening and understanding
As for Dr. DeWeese’s research, there are some exciting opportunities: “Understanding how the brain normally focuses on sounds could help scientists identify anomalies in those who have difficulty focusing their attention, such as patients with schizophrenia and attention deficit hyperactivity disorder (ADHD).” (The article also mentions that DeWeese’s findings could contribute to the design of hearing aids and hands-free devices that respond to nearby voices and deemphasize background noise, but we don’t think that’s nearly as cool.)