In a groundbreaking new study, Cambridge researchers have mapped out the neurobiological basis of a key aspect of human communication: intonation.
If you were to read out loud the words, “I’m absolutely delighted that Kate blamed Paul and Tessa Arnold” in a flat voice, with no rises or falls and placing equal weight on each syllable, you would quickly demonstrate the fundamental importance of intonation in human communication. Is Kate blaming Paul, while Tessa blames Arnold? Or is Kate blaming the Arnolds: Paul and Tessa? It would also be difficult to tell whether the speaker really is delighted, or whether they are being sarcastic. You would have suppressed a natural tendency to vary how high or low your voice is (pitch), to stress particular syllables, to pause where you would expect commas (rhythm), and to convey emphasis by varying volume. All of these elements constitute intonation.
Dr Brechtje Post of the Phonetics Laboratory in the Department of Theoretical and Applied Linguistics describes intonation as “the melody of language”. “It signals,” she explained, “how the speech stream is structured and what category of statement you are making. The word ‘now’, for example, can signify a question or an answer depending on intonation.”
However, intonation also signals how we feel. Different intonation patterns for ‘now’ can also express emotions such as triumph or frustration. “We call this function ‘paralinguistic’,” said Post. “It is thought to result from our primate inheritance, reflecting biologically driven codes that are now exploited to express attitudes and emotions universally across the languages of the world. It is distinct from the linguistic use of intonation, which is language specific.”
Since the linguistic meaning and the emotions of the speaker are conveyed by the same acoustic signals – mainly pitch – linguists have struggled to disentangle the relationship between them. “Linguists have long theorised that linguistic and paralinguistic information are crucially different, but evidence has been elusive,” said Post. “This suggests that they would have to be processed differently in the brain, but this had not been shown either – until now.”
With funding from the Economic and Social Research Council, Post and her co-investigator, neuroscientist Dr Emmanuel Stamatakis, conducted a four-year study combining experimental tasks with the latest MRI brain-scanning techniques. Native English-speaking participants within a specific age cohort were scanned while hearing test words and giving a yes/no response to either a linguistic question (‘Does this sound like a statement?’) or a paralinguistic question (‘Does this sound surprised?’). Distinct areas of their brains activated according to whether they were processing linguistic or paralinguistic meaning.
The researchers did indeed find that different frontal and temporal brain networks in both hemispheres contribute in different ways to the processing of intonational information.
“The network which is engaged in the linguistic interpretation of intonation is the same as that which supports abstraction and categorisation for other types of linguistic information, such as recognising consonants and vowels,” said Stamatakis. “We did not, however, expect the degree of overlap between these networks or that processing paralinguistic information involves a much more limited network.”
These findings confirm that neural processing of linguistic information in intonation is distinct from that of emotional or attitudinal information. This insight will aid the understanding of speech and comprehension deficits following, for example, stroke, and has potential applications in speech therapy. As for the implications for understanding intonation, the findings show that it is not merely a side effect of biological imperatives related to animal communication (for example, a high squeaky sound being associated with danger), but that at least some of it is integral to the structure of human language.
From: Tuning into the melody of speech, University of Cambridge, RESEARCH13