More than 30 million Americans suffer from hearing loss, and about 6 million wear hearing aids. While those devices can boost the intensity of sounds coming into the ear, they are often ineffective in loud environments such as restaurants, where you need to pick out the voice of your dining companion from background noise.
To do that, you need to be able to distinguish sounds with subtle differences. The human ear is exquisitely adapted for that task, but the underlying mechanism responsible for this selectivity has remained unclear. Now, new findings from MIT researchers reveal an entirely new mechanism by which the human ear sorts sounds, a discovery that could lead to improved, next-generation assistive hearing devices.
“We’ve incorporated into hearing aids everything we know about how sounds are sorted, but they’re still not very effective in problematic environments such as restaurants, or anywhere there are competing speakers,” says Dennis Freeman, MIT professor of electrical engineering, who is leading the research team. “If we knew how the ear sorts sounds, we could build an apparatus that sorts them the same way.”
In a 2007 Proceedings of the National Academy of Sciences paper, Freeman and his associates A.J. Aranyosi and lead author Roozbeh Ghaffari showed that the tiny, gel-like tectorial membrane, located in the inner ear, coordinates with the basilar membrane to fine-tune the ear’s ability to distinguish sounds. Last month, they reported in Nature Communications that a mutation in one of the proteins of the tectorial membrane interferes with that process.
It has been known for more than 50 years that sound waves entering the ear travel along the spiral-shaped, fluid-filled cochlea in the inner ear. Hair cells lining the ribbon-like basilar membrane in the cochlea translate those sound waves into electrical impulses that are sent to the brain. As sound waves travel along the basilar membrane, they “break” at different points, much as ocean waves break on the beach. The break location helps the ear to sort sounds of different frequencies.
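The frequency-to-place sorting described above is often approximated by the Greenwood function, a standard empirical model of the mammalian cochlea (it is not taken from the papers discussed here; the constants below are the commonly cited human fits). A minimal sketch:

```python
import math

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood's empirical frequency-position map for the human cochlea.

    x is the fractional distance along the basilar membrane from the apex
    (0 = apex, 1 = base); returns the characteristic frequency in Hz.
    A, a and k are the standard human-fit constants.
    """
    return A * (10 ** (a * x) - k)

# Low-frequency waves "break" near the apex; high frequencies near the base.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: ~{greenwood_frequency(x):,.0f} Hz")
```

Sweeping x from 0 to 1 spans roughly 20 Hz to 20 kHz, matching the range of human hearing.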
Until recently, the role of the tectorial membrane in this process was not well understood.
In their 2007 paper, Freeman and Ghaffari showed that the tectorial membrane carries waves that move from side to side, while up-and-down waves travel along the basilar membrane. Together, the two membranes activate enough hair cells that individual sounds are detected, but not so many that sounds can't be distinguished from one another.
Made of a special gel-like material not found elsewhere in the body, the entire tectorial membrane could fit inside a one-inch segment of human hair. It consists of three specialized proteins, making those proteins ideal targets for genetic studies of hearing.
One of those proteins, beta-tectorin (encoded by the TectB gene), was the focus of Ghaffari, Aranyosi and Freeman's recent Nature Communications paper. Collaborating with biologist Guy Richardson of the University of Sussex, the researchers found that in mice lacking the TectB gene, sound waves traveled neither as fast nor as far along the tectorial membrane as they do in normal membranes. With the tectorial membrane functioning improperly, sounds stimulate fewer hair cells, making the ear both less sensitive and overly selective.
Until the recent MIT studies on the tectorial membrane, researchers trying to come up with a model to explain the membrane’s role didn’t have a good way to test their theories, says Karl Grosh, professor of mechanical and biomedical engineering at the University of Michigan. “This is a very nice piece of work that starts to bring together the modeling and experimental results in a way that is very satisfying,” he says.
Mammalian hearing systems are extremely similar across species, which leads the MIT researchers to believe that their findings in mice are applicable to human hearing as well.
Most hearing aids consist of a microphone that picks up sound waves from the environment and a miniature amplifier and loudspeaker that boost those sounds and deliver them to the middle and inner ear. The basic design has been refined over the decades, but no one has overcome a fundamental problem: the devices amplify all sounds, including background noise, rather than selectively amplifying one person's voice.
Freeman believes that a model incorporating the interactions between the traveling waves of the tectorial and basilar membranes could improve our understanding of hearing mechanisms and lead to hearing aids with enhanced signal processing. Such a device could tune in to a specific range of frequencies, for example those of the voice you want to listen to, and amplify only those sounds.
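One conventional way to amplify only a chosen range of frequencies, shown here purely for illustration and not as Freeman's sorting algorithm, is a band-pass filter centered on the talker's band. This sketch uses the standard biquad band-pass design from Robert Bristow-Johnson's widely used Audio EQ Cookbook; the function names and parameter choices are this example's own:

```python
import math

def bandpass_biquad(fs, f0, q):
    """Normalized coefficients for a biquad band-pass filter (RBJ cookbook)
    with unity gain at center frequency f0 (Hz), sample rate fs (Hz)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)          # feed-forward taps
    a = (-2 * math.cos(w0) / a0, (1 - alpha) / a0)  # feedback taps
    return b, a

def filter_signal(x, b, a):
    """Direct-form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
                             - a1*y[n-1] - a2*y[n-2]."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[0]*y1 - a[1]*y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

fs = 16000
b, a = bandpass_biquad(fs, f0=1000, q=4)  # pass band centered at 1 kHz
in_band  = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]
off_band = [math.sin(2 * math.pi * 4000 * n / fs) for n in range(fs)]
# The 1 kHz tone passes nearly unchanged; the 4 kHz tone is attenuated.
print(rms(filter_signal(in_band, b, a)), rms(filter_signal(off_band, b, a)))
```

A fixed filter like this only selects a frequency band, which is precisely why Freeman's point matters: the ear's sorting mechanism does something far more adaptive than any static filter.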
Freeman, whose own hearing loss stems from working in a noisy factory as a teenager and from side effects of a medicine he was given for rheumatic fever, worked on hearing-aid designs 25 years ago. He was discouraged, however, that most new ideas for hearing-aid design offered no significant improvement, so he decided to conduct basic research in the area, reasoning that a better understanding of the ear would naturally lead to new approaches to hearing-aid design.
“We’re really trying to figure out the algorithm by which sounds are sorted, because if we could figure that out, we could put it into a machine,” says Freeman, who is a member of MIT’s Research Laboratory of Electronics and the Harvard-MIT Division of Health Sciences and Technology. His group’s recent tectorial membrane research was funded by the National Institutes of Health.