Three New Research Papers Examine Trust, Memory, and Medical Applications in AI Systems
Three new research papers on arXiv address distinct challenges in large language model (LLM) deployment and behavior.
According to arXiv paper 2511.22099v2, researchers are examining how compressing LLMs via low-rank factorization affects privacy, adversarial robustness, fairness, and ethics. The paper notes that “massive size hinders deployment in resource-constrained settings,” positioning model compression as a solution and highlighting low-rank factorization as one particular approach.
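To illustrate the core idea behind low-rank factorization (this is a generic sketch, not the paper's specific method): a dense weight matrix is approximated by the product of two thin matrices obtained from a truncated SVD, cutting the parameter count. The matrix size and retained rank below are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative sketch of low-rank factorization (not the paper's method):
# approximate a dense weight matrix W by a rank-k product A @ B via SVD.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # hypothetical layer weight matrix

k = 64  # retained rank (hypothetical choice)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]   # shape (512, k): left factor scaled by singular values
B = Vt[:k, :]          # shape (k, 512): right factor
W_approx = A @ B       # best rank-k approximation of W in the Frobenius norm

orig_params = W.size
compressed_params = A.size + B.size
print(f"params: {orig_params} -> {compressed_params} "
      f"({compressed_params / orig_params:.1%})")
```

Storing `A` and `B` in place of `W` here keeps one quarter of the parameters; in practice the rank is tuned per layer to trade accuracy against size.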
A second paper (arXiv:2602.00428v1) explores what researchers term the “Mandela Effect” in multi-agent systems—the phenomenon of collective false memories. According to the abstract, “Recent advancements in large language models (LLMs) have significantly enhanced the capabilities of collaborative multi-agent systems,” but these systems show “susceptibility” to shared misremembering among agents.
The third paper (arXiv:2508.15746v2) addresses medical applications, specifically developing an “End-to-End Agentic RAG System Training for Traceable Diagnostic Reasoning.” According to the researchers, LLM integration into healthcare faces constraints including “knowledge limitations, hallucinations, and a disconnect from Evidence-Based Medicine (EBM).” The paper proposes Retrieval-Augmented Generation (RAG) as a potential solution to these challenges.
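The basic RAG pattern the paper builds on can be sketched minimally (this is a generic illustration, not the paper's end-to-end agentic system): retrieve the most relevant evidence snippet for a query, then prepend it to the prompt so the model's answer is grounded in a traceable source. The corpus snippets and the bag-of-words cosine scoring below are assumptions made for the example.

```python
import math
from collections import Counter

# Minimal sketch of the retrieval step in Retrieval-Augmented Generation.
# Corpus and scoring are illustrative assumptions, not the paper's system.
corpus = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "ACE inhibitors are used to treat hypertension.",
    "Statins lower LDL cholesterol and cardiovascular risk.",
]

def vectorize(text):
    # Bag-of-words term counts (punctuation stripped, lowercased).
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by cosine similarity to the query and keep the top k.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

query = "first-line treatment for type 2 diabetes"
evidence = retrieve(query, corpus)
# The retrieved snippet would be prepended to the LLM prompt so the answer
# can cite traceable evidence rather than rely on parametric memory alone.
prompt = f"Evidence: {evidence[0]}\nQuestion: {query}"
```

Production RAG systems replace the toy scoring with dense embeddings and a vector index, but the grounding principle — answer from retrieved, citable evidence — is the same one the paper targets for evidence-based medicine.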
All three papers represent ongoing research efforts to improve LLM reliability, efficiency, and domain-specific applications.