When most people use the phrase “the search for meaning,” they’re talking about the Big Questions of life, like “Is there a God?” or “Is there a purpose to all of this?” or as comedian Steven Wright once mused, “How do you know when you’re out of invisible ink?”
When neuroscientists talk about “the search for meaning,” they’re asking where the meanings of things – especially words – live in our brains. For example, if someone said the word “paard” to most of us, our ears would hear the sounds and pass them along to our brains. But there, unless we happened to speak Dutch, the sound would have no meaning for us. Say the word “horse,” on the other hand, and we’d know immediately what it means.
Both words, as it turns out, mean the same thing, if you are bilingual in English and Dutch. But if you speak only one of the languages, only one of those words has any meaning for you; the other is just a sound. So where in the human brain is the meaning of words stored? This was the Big Question pondered by Joao Correia and his colleagues at Maastricht University in the Netherlands.
A first step towards machines that can read minds?
More specifically, Correia was seeking to answer the question, “How do we represent the meaning of words independent of the language we are listening to?” He wanted to go beyond the representation of the sounds of the words, and find the “hub” or area of activity in the brain that gives the words meaning when we hear them.
To do this, he enlisted the aid of eight volunteers bilingual in Dutch and English and used functional magnetic resonance imaging (fMRI) to scan their brains as they listened to four one-syllable names of animals spoken in English: “bull,” “horse,” “shark,” and “duck.” Correia and his colleagues then monitored the patterns of activity in the left anterior temporal cortex (an area of the brain known to be associated with language and semantic tasks) and created an algorithm that could identify which word each subject had heard, based on the pattern of neural activity alone. This in itself was a kind of “mind reading”: the algorithm allowed them to tell which animal name a subject had heard just from the fMRI scans, without hearing the word themselves.
But were these now-recognizable patterns of brain activity tied only to the sounds of the English words, or to the words’ meanings? To find out, they repeated the experiment, this time speaking the same words to the participants in Dutch: “bul,” “paard,” “haai” and “eend.” The algorithm was still able to identify which animal’s name had been spoken. Even though it had been “trained” only on the patterns evoked by the English words, the Dutch words evoked the same patterns, so the algorithm could detect them as well.
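The logic of that train-on-one-language, test-on-the-other design can be illustrated with a toy simulation. Everything below is hypothetical: the voxel counts, noise levels, and the nearest-centroid classifier are stand-ins for the study’s actual data and decoding method, and the key assumption – that English and Dutch presentations of the same concept share an underlying activity pattern – is built in by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

concepts = ["bull", "horse", "shark", "duck"]
n_voxels = 50   # hypothetical size of the region of interest
n_trials = 20   # hypothetical trials per word

# Each concept gets one underlying activity pattern; by assumption,
# English and Dutch presentations of the same concept share it.
prototypes = {c: rng.normal(size=n_voxels) for c in concepts}

def simulate(concept, n):
    """Noisy fMRI-like responses around the concept's shared pattern."""
    return prototypes[concept] + rng.normal(scale=0.8, size=(n, n_voxels))

# "Train" on English trials: learn one mean pattern (centroid) per word.
english_trials = {c: simulate(c, n_trials) for c in concepts}
centroids = {c: x.mean(axis=0) for c, x in english_trials.items()}

def decode(pattern):
    """Nearest-centroid decoding: which learned word pattern is closest?"""
    return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

# Test on "Dutch" trials of the same concepts. Because the patterns here
# reflect meaning rather than sound, the English-trained decoder transfers.
dutch_trials = {c: simulate(c, n_trials) for c in concepts}
accuracy = np.mean([decode(p) == c
                    for c, trials in dutch_trials.items()
                    for p in trials])
print(f"cross-language decoding accuracy: {accuracy:.2f}")
```

In this idealized setup the decoder scores far above the 25% chance level for four words; in real fMRI data the shared signal is much weaker and the result correspondingly noisier, which is what makes the study’s above-chance transfer notable.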
Zoe Woodhead, a reviewer of the study from University College London, says of it, “This type of pattern recognition approach is a very exciting scientific tool for investigating how and where knowledge is represented in the brain. Words that mean the same thing in different languages activate the same set of neurons encoding that concept, regardless of the fact that the two words look and sound completely different.”
In terms of “mind reading,” Correia predicts that as fMRI technology becomes more sophisticated, it may be possible to identify a much greater number of words – and even whole sentences – from a person’s brain activity alone. But it’s not likely that we’ll see true mind-reading machines anytime soon, because the patterns that Correia and his colleagues identified were unique to each subject. The meanings of these words were stored in the same area of each subject’s brain, but in slightly different patterns. To build a machine that actually reads a person’s mind, given current technology, you’d have to scan that person’s brain as they worked their way through an entire dictionary, just to generate their personal “meaning map.”
But this research is potentially valuable in many other ways, because for the first time neuroscientists know where to look in the human brain for meaning. It may help doctors identify signs of awareness in unresponsive patients – telling whether someone is truly in a vegetative state or is internally conscious but unable to respond, as in locked-in syndrome. Being able to tell whether a person with brain damage is still capable of processing meaning may be the first step toward someday being able to reverse the damage itself.