Dr. Richard Mooney, George Barth Geller Professor for Research in Neurobiology, Duke University

Our bodies react to sensory stimuli from our environment. For example, if we touch a hot surface, our sensory systems alert our motor centers to pull our hands back. However, there’s also a constant flow of information going from our motor centers to our sensory systems. “If we worked in a completely reactive way,” Dr. Rich Mooney said, “we wouldn’t be able to plan or anticipate anything. We would be relatively incapacitated in our ability to move about, navigate and accomplish even simple tasks.” The Mooney lab is studying the flow of information from motor systems to sensory systems, called corollary discharge, using mouse and bird models.

They mapped a neural circuit in the mouse brain from the secondary motor cortex (M2) to the auditory cortex. Some cells in M2 that project to the auditory cortex are active before and during movement and also send signals down to the brainstem, and possibly the spinal cord, to control movement.

Using microelectrodes and optical recording techniques, the Mooney lab found that the auditory cortex suppresses its response to sound when the animal moves, whether vocalizing, grooming, running on a treadmill, or executing other movements. They believe this suppression improves overall performance; if there is a predictable auditory consequence of movement, the brain can filter out that sound and be better able to respond to other unpredictable sounds in the environment, such as the footsteps of an approaching predator. A similar suppression occurs in the auditory cortex of humans during speech and other sound-generating behaviors.
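To make that filtering idea concrete, here is a small, purely illustrative Python sketch, not the lab's model or data: a movement-locked "predicted" sound is subtracted from the incoming signal, so a brief unexpected sound stands out in what remains. The waveforms, noise level, and timing are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: the auditory input is a mix of a predictable,
# self-generated sound (e.g., rhythmic footsteps) and a brief, unpredictable
# external sound (e.g., an approaching predator).
t = np.arange(0, 1000)
self_generated = np.sin(2 * np.pi * t / 50)      # movement-locked sound
external = np.zeros_like(self_generated)
external[600:620] = 2.0                          # brief unexpected sound
heard = self_generated + external + 0.1 * rng.standard_normal(t.size)

# A corollary-discharge-like prediction of the movement's acoustic consequence.
# Here we simply reuse the known movement-locked waveform; in the brain this
# prediction would have to be learned from experience.
predicted = np.sin(2 * np.pi * t / 50)

# Suppressing the predicted component leaves the unexpected sound standing out.
residual = heard - predicted
print("peak of raw input:      ", np.abs(heard).max().round(2))
print("peak of residual signal:", np.abs(residual).max().round(2))
print("unexpected sound occurs near sample", int(np.abs(residual).argmax()))
```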

“We learn throughout our lives that when we move, whether it’s walking along a path with crunchy leaves or tapping our fingers, there’s an acoustic consequence that can be readily predicted by the brain,” Mooney said. We are then better able to detect and react to a sound that falls outside our expectations.

The Mooney lab has a similar project with songbirds, which are among the few non-human animals that resemble humans in their capacity for vocal imitation. Juvenile songbirds learn and memorize their courtship songs by listening to adult songbirds singing nearby. The Mooney lab has found pathways going from song motor regions into the auditory forebrain of the bird that are analogous to the regions discovered in the mouse model.

“When you learn to speak, your brain generates motor commands that have acoustic consequences, and you start to learn the association between those motor commands and the acoustic consequence,” Mooney said. “You then begin to build a circuit that can make predictions and train the auditory system.” If we garble our speech, this predictive signal is presumably compared to the actual speech sound, resulting in an error signal that can be used for learning or error correction. That error signal can then propagate into the motor network and help correct the vocal motor command signal. Whether you are just beginning to learn language, or you’ve already mastered it, this predictive mechanism can be used to maintain a stable and accurate performance each time you speak.
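Here is a similarly hypothetical Python sketch of that predict-compare-correct loop. A fixed perturbation stands in for garbled speech; the mismatch between the predicted and the actual sound is the error signal, and it nudges the next motor command until the produced sound again matches the prediction. The scalar "sound," the perturbation size, and the learning rate are assumptions made for illustration only, not quantities from the study.

```python
# Hypothetical sketch of error-driven correction of a vocal motor command.
# The forward model predicts the acoustic consequence of the current command;
# a perturbation garbles the actual output, and the prediction error is used
# to update the motor plan until the error shrinks toward zero.

target = 1.0                 # intended acoustic output
perturbation = 0.4           # garbling: actual output is shifted from the command
perturbation_estimate = 0.0  # the motor system's running correction
learning_rate = 0.5

for step in range(10):
    command = target + perturbation_estimate       # compensate for estimated garbling
    predicted = command - perturbation_estimate    # forward model: expected sound
    actual = command - perturbation                # what actually comes out
    error = predicted - actual                     # mismatch detected by the auditory system
    perturbation_estimate += learning_rate * error # error signal corrects the motor plan
    print(f"step {step}: produced {actual:.3f}, prediction error {error:.3f}")
```

Run over a few iterations, the produced sound converges back to the intended target, which is the sense in which the predictive mechanism can "maintain a stable and accurate performance each time you speak."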

Originally published on June 1, 2017 by Duke Neurobiology. View the original story.
