Three New Research Papers Address LLM Limitations and Performance Issues
Three recent papers published on arXiv examine critical challenges in large language model development and deployment.
According to one paper (arXiv:2511.12869v2), researchers have identified five fundamental limitations that bound the scaling benefits of large language models: hallucination, context compression, reasoning degradation, retrieval fragility, and multimodal constraints.
A separate paper (arXiv:2601.17917v1) introduces “Streaming-dLLM,” a technique designed to accelerate diffusion large language models through suffix pruning and dynamic decoding. According to the abstract, diffusion LLMs offer advantages over autoregressive models through parallel decoding and bidirectional attention, which the researchers claim enable “superior global coherence.”
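To make the decoding contrast concrete, the sketch below shows, in toy form, what parallel denoising with a pruned suffix window can look like. It is an illustrative Python mock-up, not Streaming-dLLM’s implementation: the `predict` stand-in, the confidence-based commit rule, and the fixed `window` parameter are all assumptions introduced for this example.

```python
# Toy sketch of diffusion-style parallel decoding with a pruned suffix window.
# Not the paper's code: predict(), the commit rule, and `window` are invented.
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def predict(tokens: list[str], i: int) -> tuple[str, float]:
    """Stand-in for a model forward pass: returns a token and a confidence."""
    random.seed(hash((tuple(tokens), i)) % (2**32))
    return random.choice(VOCAB), random.random()

def parallel_denoise(length: int, steps: int, window: int) -> list[str]:
    tokens = [MASK] * length
    for _ in range(steps):
        # Suffix pruning (toy form): only the first `window` still-masked
        # positions are active; the far suffix is skipped this step.
        masked = [i for i, t in enumerate(tokens) if t == MASK][:window]
        if not masked:
            break
        # Parallel decoding: propose a token for every active position at
        # once, then commit only the most confident half of the proposals.
        proposals = {i: predict(tokens, i) for i in masked}
        ranked = sorted(proposals, key=lambda i: proposals[i][1], reverse=True)
        for i in ranked[: max(1, len(ranked) // 2)]:
            tokens[i] = proposals[i][0]
    return tokens

print(" ".join(parallel_denoise(length=12, steps=8, window=6)))
```

The point of the toy is the shape of the loop: unlike an autoregressive decoder, which emits one token per forward pass left to right, each iteration here proposes tokens for many positions at once and defers the least confident ones, while the pruned window keeps far-off suffix positions out of each step’s computation.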
The third paper (arXiv:2601.18753v1) presents “HalluGuard,” a system that distinguishes between two types of LLM hallucinations. According to the researchers, hallucinations in high-stakes domains like healthcare, law, and scientific discovery typically stem from one of two sources: data-driven errors or reasoning-driven failures.
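The data-versus-reasoning split can be illustrated with a small diagnostic sketch. This is a hedged toy, not HalluGuard’s method: `supported_by_evidence`, `steps_are_consistent`, and the string-matching heuristics are invented placeholders standing in for whatever checks a real system would use.

```python
# Hedged toy illustrating the two hallucination sources the paper names.
# All function names and heuristics here are hypothetical, not HalluGuard's.

def supported_by_evidence(claim: str, evidence: list[str]) -> bool:
    """Crude grounding check: does any retrieved snippet contain the claim?"""
    return any(claim.lower() in doc.lower() for doc in evidence)

def steps_are_consistent(steps: list[str], conclusion: str) -> bool:
    """Crude reasoning check: does the conclusion reuse terms from the steps?"""
    mentioned = {w for step in steps for w in step.lower().split()}
    return any(w in mentioned for w in conclusion.lower().split())

def diagnose(claim: str, evidence: list[str], steps: list[str]) -> str:
    # Route a flagged output to one of the two failure sources the paper
    # distinguishes: ungrounded data vs. an inconsistent reasoning chain.
    if not supported_by_evidence(claim, evidence):
        return "data-driven hallucination (claim not grounded in sources)"
    if not steps_are_consistent(steps, claim):
        return "reasoning-driven hallucination (conclusion does not follow)"
    return "no hallucination flagged"

print(diagnose(
    claim="Drug X is approved for condition Y",
    evidence=["Trial NCT000 studied Drug X for condition Z"],
    steps=["Drug X was tested in one trial", "The trial targeted condition Z"],
))
```

A real system would replace both string-matching checks with model-based ones, but the routing structure mirrors the distinction the paper draws: test whether a claim is grounded in the data first, then whether the reasoning behind it holds together.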
All three papers were announced as cross-listed submissions in arXiv’s AI section, indicating their relevance to multiple research areas within artificial intelligence.