This post is also available in Dutch.
For decades, scientists have marveled at how effortlessly children acquire language. Give them just a few million words, a fraction of what AI models consume, and they're chatting away in no time. But how the brain develops this remarkable skill has remained largely a mystery. A new study led by Meta AI, in collaboration with the Rothschild Foundation Hospital in Paris, offers a rare glimpse into the developing brain. The team recorded brain activity from 46 children, teenagers, and adults with epilepsy who had electrodes implanted for medical reasons. As these participants listened to an audiobook of The Little Prince, the researchers tracked how their brains responded to each word and sound.
From Sound to Meaning: How Language Grows
What they found is nothing short of extraordinary. Even in children as young as two, the brain was already actively responding to speech, particularly to phonemes: the smallest building blocks of language, such as individual consonants and vowels. These responses were most prominent in the superior temporal gyrus (STG) and superior temporal sulcus (STS), key auditory areas known to process speech sounds. But as children aged, the neural picture became richer: the brain began responding not only to phonetic features but also to lexical information, such as word category (noun or verb) and frequency of use. These higher-level representations emerged in more distributed, associative cortical regions toward the front and sides of the brain. This stepwise expansion of language processing across the cortex shows that the brain doesn't learn language all at once. Instead, it builds language hierarchically, starting with fast, basic sound processing and gradually layering on more abstract, meaning-related features over time.
A Surprising Parallel in Artificial Intelligence
This developmental layering has a surprising parallel, not in a child, but in silicon: artificial intelligence models, particularly large language models, show a remarkably similar pattern. As these models are trained, they begin to process language in ways that resemble the human brain, and the deeper you go into their layers, the more "adult-like" their responses become. Interestingly, the study also found that different layers of the AI model mapped onto different brain regions. Early layers matched low-level auditory areas like the STG, while deeper layers aligned with frontal and associative regions involved in processing meaning, suggesting a layer-by-layer parallel in how both brains and models build up language.
In fact, the study showed that trained AI models could predict neural responses in both children and adults. And as training progressed, the models' internal representations of language aligned more closely with older, more linguistically mature brains.
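For readers curious about the mechanics, this kind of comparison belongs to a family of analyses often called encoding models: a regression is trained to predict neural activity from a model's internal activations, and the prediction accuracy on held-out data serves as an alignment score between model and brain. The sketch below illustrates that logic with synthetic data; the array sizes, the ridge regression, and the "true" driving layer are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a layer-wise encoding analysis on synthetic data.
# Real studies use actual LLM hidden states and intracranial recordings;
# everything here (dimensions, ridge regression, noise) is an assumption.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_dims, n_electrodes, n_layers = 2000, 128, 50, 12

# Stand-in for hidden states of each word at each model layer.
layer_activations = [rng.standard_normal((n_words, n_dims)) for _ in range(n_layers)]

# Stand-in neural responses, secretly driven by one "deep" layer plus noise.
true_weights = rng.standard_normal((n_dims, n_electrodes))
neural_responses = layer_activations[9] @ true_weights \
    + rng.standard_normal((n_words, n_electrodes))

def brain_score(X, Y):
    """Fit a ridge regression from activations to neural data and return the
    mean correlation between predicted and actual held-out responses."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    Y_hat = Ridge(alpha=1.0).fit(X_tr, Y_tr).predict(X_te)
    corrs = [np.corrcoef(Y_hat[:, e], Y_te[:, e])[0, 1] for e in range(Y.shape[1])]
    return float(np.mean(corrs))

# Score each layer: the synthetic "true" layer stands out.
for layer, X in enumerate(layer_activations):
    print(f"layer {layer:2d}: brain score = {brain_score(X, neural_responses):.3f}")
```

Running the sketch shows the score peaking at the layer that actually drives the synthetic responses. Asking which layer best predicts a given electrode is, in essence, how a layer-by-layer correspondence between a model and brain regions can be established.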
What This Means – And What Comes Next
Of course, the study has limitations. The participants were all French-speaking and undergoing clinical treatment, so the findings still need to be validated in larger and more diverse populations. Still, the insight is powerful: language learning isn't just fast and intuitive; it's deeply structured and traceable. By bridging neuroscience and artificial intelligence, this research opens a door to a richer understanding of how we become the speaking, thinking humans we are. And maybe, just maybe, it shows that the machines we build can reflect something deeply human back to us.
Author: Vivek
Buddy: Xuanwei
Editor: Siddharth
Translator: Wieger
Editor translation: Natalie