Researchers Find Brain-Like Synergistic Structures in Large Language Models

Scientists are studying LLMs like biological systems, discovering they spontaneously develop synergistic cores similar to brain structures.

Researchers are increasingly treating large language models as subjects of study similar to biological organisms, according to MIT Technology Review, which describes scientists as “new biologists treating LLMs like aliens.”

A new study published on arXiv (arXiv:2601.06851v1) reports that large language models "spontaneously develop synergistic cores," components that resemble brain-like structures. According to the paper's abstract, these cores "drive behavior and learning" in the models.

The research represents what MIT Technology Review characterizes as a novel approach to understanding LLMs: examining them through biological and neuroscientific frameworks rather than purely computational ones. The study suggests that the independent evolution of intelligence in biological and artificial systems "offers a unique opportunity to identify its fundamental computational principles," according to the arXiv paper.

MIT Technology Review illustrates the scale of modern LLMs with a visualization: imagine standing on Twin Peaks in San Francisco, from which "you can view nearly the entire city," with every visible "block and intersection, every neighborhood and park" covered in sheets of paper.

The findings suggest potential parallels between how artificial and biological intelligence organize themselves at a structural level.