Your brain is a prediction machine that is always on

Summary: The brain constantly acts like a prediction machine, continually comparing sensory information with internal predictions.

Source: Max Planck Institute

This finding is in line with a recent theory of how our brain works: it is a prediction machine, continually comparing the sensory information we take in (such as sights, sounds, and language) with internal predictions.

“This theoretical idea is extremely popular in neuroscience, but the existing evidence is often indirect and restricted to artificial situations,” says lead author Micha Heilbron.

“I would really like to understand precisely how this works and test it in different situations.”

Brain research on this phenomenon is usually done in an artificial setting, Heilbron explains. To evoke predictions, participants are asked to stare at a single pattern of moving dots for half an hour, or to listen to simple sound patterns such as ‘beep beep boop, beep beep boop’.

“Studies of this type do show that our brain can make predictions, but not that this always happens amid the complexity of everyday life. We’re trying to take it out of the lab setting. We are studying the same kind of phenomenon, how the brain handles unexpected information, but in natural situations that are much less predictable.”

Hemingway and Holmes

The researchers analyzed the brain activity of people listening to Hemingway or Sherlock Holmes stories. At the same time, they analyzed the texts of the books with computer models known as deep neural networks (in this case, the language model GPT-2). This allowed them to calculate, for each word, how unpredictable it was in its context.
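
Concretely, a word’s unpredictability can be expressed as its “surprisal”: the negative log-probability a language model assigns to the word given the preceding context. The sketch below is only an illustration of that idea, not the authors’ published analysis pipeline; it uses the publicly available GPT-2 model through the Hugging Face transformers library, and the example sentence is a placeholder. Because GPT-2 works on sub-word pieces, a word-level score can be obtained by summing the surprisal of its pieces.

# Illustrative sketch, not the authors' exact pipeline: per-token surprisal
# ("how unexpected was this token in context?") from GPT-2 via Hugging Face transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "After a long silence, the detective finally opened the door"
input_ids = tokenizer(text, return_tensors="pt")["input_ids"]

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, n_tokens, vocab_size)

# Log-probability of each token given all tokens before it.
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
targets = input_ids[:, 1:]
token_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

# Surprisal in bits: larger values mean the token was more unexpected in context.
surprisal_bits = -token_logp / torch.log(torch.tensor(2.0))

for tok, s in zip(tokenizer.convert_ids_to_tokens(targets[0].tolist()), surprisal_bits[0]):
    print(f"{tok!r:>12}  {s.item():6.2f} bits")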

For each word or sound, the brain creates detailed statistical expectations and is extremely sensitive to the degree of unpredictability: the brain’s response is strongest when a word is unexpected in context.

Image caption: Our brain is a prediction machine that is always on. Credit: AI-generated illustration via DALL-E, OpenAI – Micha Heilbron.

“By itself, this is not very surprising: after all, everyone knows that upcoming language can sometimes be predicted. Your brain sometimes automatically ‘fills in the blank’ and mentally finishes another person’s sentence, for example when they speak very slowly, stutter, or can’t think of a word. But what we have shown here is that this happens all the time. Our brain is constantly guessing at words; the predictive machinery is always on.”

More than software

“In fact, our brain does something comparable to speech recognition software. Speech recognizers using artificial intelligence also constantly make predictions and are guided by their expectations, just like the autocomplete feature on your phone.

“However, we did see a big difference: the brain does not only predict words, it makes predictions at many different levels, from abstract meaning and grammar to specific sounds.”

The continued interest from technology companies is understandable: they would like to use such insights to build better image and language recognition software, for example. But applications of this kind are not Heilbron’s main objective.

“I would really like to understand how our predictive machinery works at a fundamental level. I am now working with the same research setup, but for visual and auditory perception, such as music.”

About this neuroscience research news

Author: Press Office
Source: Max Planck Institute
Contact: Press Office – Max Planck Institute
Image: The image is credited to DALL-E, OpenAI – Micha Heilbron

Original Research: Closed access.
“A Hierarchy of Linguistic Predictions During Natural Language Understanding” by Micha Heilbron et al. PNAS

Abstract

A hierarchy of linguistic predictions during natural language understanding

Understanding spoken language requires transforming an ambiguous acoustic stream into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input.

However, the role of prediction in language processing remains disputed, with disagreement over both the ubiquity of prediction and its representational nature.

Here, we address both problems by analyzing brain recordings from participants listening to audiobooks and using a deep neural network (GPT-2) to accurately quantify contextual predictions.

First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we separate the model-based predictions across dimensions, revealing dissociable neural signatures of predictions about syntactic categories (parts of speech), phonemes, and semantics.

Finally, we show that high-level predictions (words) inform low-level predictions (phonemes), supporting hierarchical predictive processing.
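
As a toy illustration of that hierarchy (with invented probabilities and pronunciations, not the paper’s actual analysis), a high-level next-word distribution can be marginalized into a low-level prediction about the next phoneme: each possible next phoneme receives the summed probability of all candidate words whose pronunciation matches the phonemes heard so far.

# Toy illustration (not the paper's actual analysis) of how a high-level
# next-word prediction can inform a low-level prediction about the next phoneme.
from collections import defaultdict

# Hypothetical next-word distribution from a language model, with hand-written
# ARPAbet-style pronunciations (for illustration only).
next_word_probs = {
    "cat": (0.50, ["K", "AE", "T"]),
    "cap": (0.20, ["K", "AE", "P"]),
    "can": (0.20, ["K", "AE", "N"]),
    "dog": (0.10, ["D", "AO", "G"]),
}

def next_phoneme_distribution(heard_prefix):
    """P(next phoneme | phonemes heard so far), marginalizing over candidate words."""
    scores = defaultdict(float)
    for prob, phones in next_word_probs.values():
        # Keep only words consistent with what has been heard, and that continue further.
        if phones[:len(heard_prefix)] == heard_prefix and len(phones) > len(heard_prefix):
            scores[phones[len(heard_prefix)]] += prob
    total = sum(scores.values())
    return {ph: p / total for ph, p in scores.items()} if total else {}

# After hearing /K AE/, the word-level prior concentrates the phoneme prediction
# on /T/, /P/ and /N/; "dog" no longer contributes.
print(next_phoneme_distribution(["K", "AE"]))
# -> approximately {'T': 0.56, 'P': 0.22, 'N': 0.22}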

Together, these results underscore the ubiquity of prediction in language processing, demonstrating that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
