Three new arXiv papers explore applications of large language models (LLMs) in three distinct domains: game-playing agents, information retrieval, and low-resource speech recognition.
TowerMind: LLMs as Game Agents
According to arXiv paper 2601.05899v1, researchers have introduced TowerMind, described as “a Tower Defence Game Learning Environment and Benchmark for LLM as Agents.” The abstract positions LLMs as “a promising paradigm for agents” and identifies “long-term planning and decision-making” as “core general-purpose capabilities for adapting to diverse scenarios and tasks.”
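The quoted abstract does not specify TowerMind's interface, but the general pattern of an LLM acting as a game agent can be sketched as an observe-decide-step loop over a text-serialized game state. The toy TowerDefenseEnv class and the llm_choose_action stub below are hypothetical stand-ins for illustration only, not the paper's environment or API.

```python
# Illustrative sketch of an LLM-as-agent loop for a tower-defence game.
# TowerDefenseEnv and llm_choose_action are hypothetical stand-ins.
import random
from dataclasses import dataclass


@dataclass
class TowerDefenseEnv:
    """Toy tower-defence state: gold to spend, waves survived, towers built."""
    gold: int = 100
    wave: int = 0
    towers: int = 0
    base_hp: int = 20

    def observe(self) -> str:
        # Serialize the game state as text so an LLM can reason over it.
        return (f"wave={self.wave} gold={self.gold} "
                f"towers={self.towers} base_hp={self.base_hp}")

    def step(self, action: str) -> tuple[str, bool]:
        # Apply the agent's chosen action, then simulate one enemy wave.
        if action == "build_tower" and self.gold >= 50:
            self.gold -= 50
            self.towers += 1
        self.wave += 1
        self.base_hp -= max(0, 5 - 2 * self.towers)  # towers absorb damage
        self.gold += 20                               # income per wave
        done = self.base_hp <= 0 or self.wave >= 10
        return self.observe(), done


def llm_choose_action(observation: str) -> str:
    # Placeholder policy. A real agent would prompt the LLM with the
    # observation plus a fixed action vocabulary and parse its reply.
    return random.choice(["build_tower", "save_gold"])


env = TowerDefenseEnv()
obs, done = env.observe(), False
while not done:
    obs, done = env.step(llm_choose_action(obs))
print("episode finished:", obs)
```

The text observation is the key design choice in loops like this: long-term planning shows up in how the agent trades current gold against future waves across many such steps.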
LLM2IR: Converting LLMs to Retrieval Systems
A cross-listed paper (arXiv:2601.05262v1) presents LLM2IR, which the authors describe as “an efficient unsupervised contrastive learning framework to convert any decoder-only large language model (LLM)” into an information retrieval (IR) system. The work addresses the limitation that “modern dense information retrieval (IR) models usually rely on costly large-scale pretraining,” according to the abstract.
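The quoted abstract does not detail the training objective, so as a rough sketch of one standard unsupervised contrastive setup for retrieval, the code below computes an InfoNCE loss with in-batch negatives over query and passage embeddings. The embed function is a placeholder for pooling hidden states from a decoder-only LLM; none of this is the LLM2IR implementation.

```python
# Sketch of contrastive training for retrieval: InfoNCE with in-batch
# negatives. `embed` is a placeholder for an LLM-based encoder.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64


def embed(texts: list[str]) -> np.ndarray:
    # Placeholder encoder producing random unit vectors; a real system
    # would pool hidden states from the decoder-only LLM instead.
    vecs = rng.standard_normal((len(texts), DIM))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def info_nce_loss(q: np.ndarray, d: np.ndarray, temperature: float = 0.05) -> float:
    # q[i] should score highest against its paired passage d[i]; every
    # other d[j] in the batch acts as a negative example.
    logits = (q @ d.T) / temperature              # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # cross-entropy on the diagonal


queries = ["what is dense retrieval?", "tower defence strategy"]
passages = ["Dense retrieval encodes text into vectors for nearest-neighbour search.",
            "Place towers at choke points to slow enemy waves."]
loss = info_nce_loss(embed(queries), embed(passages))
print(f"in-batch InfoNCE loss: {loss:.3f}")
```

In-batch negatives reuse the other passages in each batch as negative examples, which is what lets contrastive objectives of this kind be trained without labeled query-document pairs.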
Multimodal Learning for Low-Resource ASR
Another cross-listed paper (arXiv:2601.05707v1) focuses on automatic speech recognition (ASR) for underserved languages. According to the abstract, “ASR still covers only a small fraction of the world’s languages, mainly due to supervised data scarcity.” The research explores using “in-context learning (ICL) with large language models (LLMs)” to address this gap.
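The quoted abstract does not describe the exact ICL setup. Purely as an illustration of the few-shot prompting pattern often used around low-resource ASR, the sketch below pairs rough hypotheses from a seed ASR system with reference transcripts and asks an LLM to correct a new hypothesis. The example strings (Haitian Creole here), the FEW_SHOT_EXAMPLES data, and call_llm are invented placeholders, not the paper's method.

```python
# Hypothetical sketch of few-shot in-context learning for low-resource ASR
# post-processing: show (hypothesis, reference) pairs, then ask the LLM to
# correct a new hypothesis. All examples and `call_llm` are placeholders.
FEW_SHOT_EXAMPLES = [
    # (rough ASR hypothesis, human reference transcript) in the target language
    ("mwen ta renmen achte pen", "Mwen ta renmen achte pen."),
    ("ki kote mache a ye", "Ki kote mache a ye?"),
]


def build_icl_prompt(hypothesis: str) -> str:
    # Assemble the few-shot prompt: instruction, demonstrations, new input.
    lines = ["Correct these speech transcripts in Haitian Creole."]
    for hyp, ref in FEW_SHOT_EXAMPLES:
        lines.append(f"Hypothesis: {hyp}\nCorrected: {ref}")
    lines.append(f"Hypothesis: {hypothesis}\nCorrected:")
    return "\n\n".join(lines)


def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM API call.
    return "<LLM completion goes here>"


prompt = build_icl_prompt("eske ou ka ede m")
print(prompt)
print(call_llm(prompt))
```

The appeal of ICL in this setting is that the demonstrations stand in for supervised fine-tuning data, which is exactly what is scarce for most of the world's languages.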