When it comes to the question of how humans developed speech, there are two main factors to consider: our anatomy and our brains. Think of it kind of like hardware versus software — the structure of the vocal tract is what makes speech physically possible, and the brain contains the code that makes it a reality.
And in a study published today in the journal Science Advances, a team of researchers led by Tecumseh Fitch, a biologist at the University of Vienna, makes the case that the software, not the hardware, is what separates us from our closest relatives in the animal kingdom. As Nell Greenfieldboyce reported for NPR:
[Fitch] and his colleagues monitored a long-tailed macaque named Emiliano as he made a wide range of different gestures and sounds, including lip-smacks, yawns, chewing, coos, and grunts. Their special equipment took a rapid series of X-rays that allowed them to capture the full range of movement in the monkey’s vocal tract. Then they used computer models to explore its potential for generating speech.
The result: Anatomically speaking, Emiliano had the ability to make five different vowel sounds, each of which was distinguishable from the others (the researchers played the sounds to volunteers to see whether the human ear could tell them apart) — suggesting, the study authors wrote, “that the macaque vocal tract could easily produce an adequate range of speech sounds to support spoken language … Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy.”
“As soon as you had a brain that was ready to control the vocal tract,” Fitch told NPR, “the vocal tract of a monkey or nonhuman primate would be perfectly fine for producing lots and lots of words.” Fortunately for the wordless monkeys, though, a picture’s worth a thousand of them.