New Research Examines Large Language Models' Vulnerabilities in Data Handling and Persuasion

Three new arXiv papers reveal LLMs struggle with distorted tabular data, can effectively promote conspiracies, and perform poorly on telecom tables.

Three recent papers on arXiv highlight significant limitations in large language models across different domains.

According to arXiv:2601.05009v1, researchers investigated how LLMs handle tabular data subjected to “semantic and structural distortions.” The study found that “LLMs lack an inherent ability to detect and correct subtle” distortions in otherwise canonical tabular representations.
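
To make the idea concrete, here is a minimal sketch of what such distortions might look like. The paper’s exact perturbations are not detailed in this summary, so the column reordering, header renaming, and unit change below are hypothetical illustrations of the general categories:

```python
import pandas as pd

# A canonical table, as it might appear verbatim in a prompt.
canonical = pd.DataFrame({
    "city": ["Berlin", "Madrid", "Oslo"],
    "population_m": [3.7, 3.3, 0.7],
    "country": ["Germany", "Spain", "Norway"],
})

# Hypothetical structural distortion: columns reordered and a header
# renamed, while the underlying facts stay unchanged.
structural = canonical[["country", "city", "population_m"]].rename(
    columns={"population_m": "pop (millions)"}
)

# Hypothetical semantic distortion: one value rewritten in a different
# unit, so a cell silently contradicts its column header.
semantic = canonical.copy()
semantic.loc[0, "population_m"] = 3_700_000  # raw count, not millions

for name, table in [("canonical", canonical),
                    ("structural", structural),
                    ("semantic", semantic)]:
    print(f"--- {name} ---")
    print(table.to_string(index=False))
```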

A separate study (arXiv:2601.05050v1) examined LLMs’ persuasive capabilities regarding misinformation. The research found that “large language models (LLMs) have been shown to be persuasive across a variety of context[s],” but questioned whether this persuasive power “advantages truth over falsehood, or if LLMs can promote misbeliefs just as easily as refuting them.” As the paper’s title puts it, LLMs “can effectively convince people to believe conspiracies.”

In the telecommunications sector, arXiv:2601.04202v1 introduced TeleTables, a benchmark for evaluating LLM performance on telecom table interpretation. While LLMs are “increasingly explored in the telecom industry to support engineering tasks, accelerate troubleshooting, and assist in interpreting complex technical documents,” the researchers note that “recent studies show that LLMs perform poorly on teleco[m]” tasks involving tabular data.
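
For readers unfamiliar with this style of benchmark, the sketch below shows how a table-interpretation evaluation loop might be scored. The item schema and the `ask_llm` stub are hypothetical; TeleTables’ actual format and scoring may differ:

```python
# Minimal sketch of a table-QA evaluation loop in the spirit of a
# benchmark like TeleTables. Everything here is illustrative.

def ask_llm(prompt: str) -> str:
    # Stub standing in for a real model call; replace with an API call
    # to the LLM under evaluation.
    return "20 MHz"

items = [
    {
        "table": "| parameter | value |\n|---|---|\n| bandwidth | 20 MHz |",
        "question": "What is the configured bandwidth?",
        "answer": "20 MHz",
    },
]

correct = 0
for item in items:
    prompt = (f"Given the table below, answer the question.\n\n"
              f"{item['table']}\n\nQuestion: {item['question']}")
    prediction = ask_llm(prompt)
    # Naive exact-match scoring; real benchmarks typically normalize
    # answers before comparison.
    correct += prediction.strip() == item["answer"]

print(f"Accuracy: {correct / len(items):.2%}")
```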

These findings collectively suggest that current LLMs face significant challenges in handling structured data, and that their persuasive abilities may spread misbeliefs as readily as they refute them.