Multi-Agent AI Systems May Reach Consensus Through Random Drift
Researchers have identified a phenomenon called “memetic drift” that explains how groups of large language model (LLM) agents can reach consensus without true collective reasoning, according to a new paper posted to arXiv (arxiv.org).
The study, titled “When Is Collective Intelligence a Lottery?”, introduces a minimal model called Quantized Simplex Gossip (QSG) to trace how multi-agent systems reach agreement. According to the research, agents “maintain internal belief states but learn from one another’s sampled outputs, so one agent’s arbitrary choice becomes the next agent’s evidence and can compound toward agreement.”
The researchers found that even when no individual agent initially favors any particular label, populations can “rapidly break symmetry and reach consensus” through what they describe as a “sampling-driven regime.” The study derives scaling laws for this drift-induced polarization based on factors including population size, communication bandwidth, and agents’ internal uncertainty.
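The paper's exact QSG update rule is not reproduced in this article, but the mechanism it describes — agents emitting discrete sampled outputs that peers then treat as evidence — can be illustrated with a toy simulation. This is a hedged sketch, not the authors' model: the function name, learning rate, and update rule below are illustrative assumptions, chosen only to show how a fully symmetric population can drift toward one label.

```python
import random

def simulate_gossip(n_agents=50, n_labels=3, steps=5000, lr=0.1, seed=0):
    """Illustrative gossip dynamics (not the paper's QSG specification):
    each agent holds a belief distribution over labels, a random speaker
    emits ONE sampled label (a quantized output, not its full beliefs),
    and a random listener nudges its own beliefs toward that sample."""
    rng = random.Random(seed)
    # Fully symmetric start: every agent is uniform over all labels,
    # so no individual initially favors any particular label.
    beliefs = [[1.0 / n_labels] * n_labels for _ in range(n_agents)]
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        # The speaker's arbitrary sample becomes the listener's evidence.
        label = rng.choices(range(n_labels), weights=beliefs[speaker])[0]
        b = beliefs[listener]
        for k in range(n_labels):
            target = 1.0 if k == label else 0.0
            b[k] += lr * (target - b[k])  # sums stay at 1, entries in [0, 1]
    return beliefs

beliefs = simulate_gossip()
# Measure how concentrated the population has become on its modal label.
tops = [max(range(len(b)), key=b.__getitem__) for b in beliefs]
modal = max(set(tops), key=tops.count)
print(f"fraction of agents on the modal label: {tops.count(modal) / len(tops):.2f}")
```

Repeated runs with different seeds typically concentrate the population on different labels — the consensus outcome is a lottery, which is the symmetry-breaking behavior the paper's "sampling-driven regime" describes.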
Separately, other research posted to arXiv this week highlighted challenges in multi-agent coordination. A new benchmark called CRAFT found that “stronger reasoning ability does not reliably translate to better coordination,” with smaller open-weight models often matching or outperforming frontier systems. According to that study, “improved individual communication does not guarantee successful collaboration,” suggesting multi-agent coordination “remains a fundamentally unsolved challenge for current language models.”