Three New Research Papers Explore LLM Applications in Forecasting, Networks, and Political Bias

Researchers publish studies on enhancing LLMs for time-series forecasting, resource allocation in networks, and detecting political bias in language models.


Three research papers published on arXiv examine different aspects of large language model (LLM) applications and capabilities.

Time-Series Forecasting Enhancement

According to arXiv:2601.07903v1, researchers proposed a method for “Enhancing Large Language Models for Time-Series Forecasting via Vector-Injected In-Context Learning.” The paper frames time-series forecasting (TSF) as a key means of responding to changes in user behavior and usage patterns on the World Wide Web, and notes that in recent years, large language models for TSF (LLM4TSF) have emerged in this space.

Non-Terrestrial Network Resource Allocation

A second paper (arXiv:2601.08254v1) explores “Large Artificial Intelligence Model Guided Deep Reinforcement Learning for Resource Allocation in Non Terrestrial Networks.” According to the abstract, Large AI Models (LAMs) have been proposed for Non-Terrestrial Network (NTN) applications, offering better performance through strong generalization and reduced task-specific training. The researchers propose a Deep Reinforcement Learning (DRL) agent for this application.

Political Bias Detection

The third paper (arXiv:2601.08785v1) examines political biases in LLMs using parliamentary voting records. According to the researchers, “As large language models (LLMs) become deeply embedded in digital platforms and decision-making systems, concerns about their political biases have grown.” While substantial work has examined social biases such as gender and race, political bias has received less systematic study, a gap the paper aims to address.