Three New Papers Address LLM Applications in Ontology Alignment, Edge Computing, and Steering Vectors
Three recent papers on arXiv examine different aspects of large language model (LLM) research and applications.
According to arXiv paper 2508.08500v2, researchers are exploring the use of LLMs as “oracles” for ontology alignment, addressing the persistent challenge of producing high-quality mappings among input ontologies. The paper notes that although many automated methods exist for ontology alignment, human-in-the-loop approaches are often adopted during the alignment process to improve the quality of the resulting mappings.
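The idea of an LLM oracle can be illustrated with a toy sketch: the alignment system poses a yes/no question about a candidate mapping, in place of (or alongside) a human validator. This is not the paper's implementation; the prompt template and the `ask_oracle` stub below are hypothetical.

```python
def mapping_question(concept_a: str, concept_b: str) -> str:
    # Hypothetical prompt template: ask whether two ontology concepts match.
    return (
        f"Do the ontology concepts '{concept_a}' and '{concept_b}' "
        f"refer to the same real-world entity? Answer yes or no."
    )

def ask_oracle(question: str) -> bool:
    # Stub standing in for an LLM (or human) oracle call.
    # A real system would send `question` to a model and parse the reply.
    return "Author" in question and "Writer" in question

candidate = ("Author", "Writer")
verdict = ask_oracle(mapping_question(*candidate))
print(f"accept mapping {candidate}: {verdict}")
```

In a human-in-the-loop pipeline, such oracle calls are typically reserved for the mappings an automated matcher is least confident about, since each query has a cost.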
A second paper (arXiv:2408.10746v2) examines resource-efficient fine-tuning of personal LLMs using collaborative edge computing. According to the abstract, LLMs have enabled powerful applications at the network edge, including intelligent personal assistants. To address data privacy and security concerns, the research focuses on fine-tuning personal LLMs directly at the edge rather than relying on centralized infrastructure.
The third paper (arXiv:2602.06801v2) investigates the non-identifiability of steering vectors in LLMs. According to the authors, activation steering methods are widely used to control LLM behavior and are often interpreted as revealing meaningful internal representations. However, the paper challenges the assumption that steering directions are identifiable and unique, raising questions about the interpretability of these techniques.
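The non-identifiability concern can be made concrete with a small NumPy illustration (a toy model, not the paper's construction; the dimension, the linear "behavior readout" `w`, and both steering vectors are assumptions). Activation steering typically adds a vector to a hidden state; here, two visibly different vectors produce identical effects on a linear readout, so the behavioral outcome alone cannot pin down a unique steering direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                            # toy hidden-state dimension

h = rng.normal(size=d)           # a hidden activation
w = rng.normal(size=d)           # linear "behavior" readout direction
w /= np.linalg.norm(w)

v1 = 2.0 * w                     # one candidate steering vector
v2 = v1 + rng.normal(size=d)     # a different vector ...
v2 -= w * (w @ (v2 - v1))        # ... matched along the readout direction

# Both vectors steer the readout identically, yet they are not equal:
same_effect = np.isclose(w @ (h + v1), w @ (h + v2))
same_vector = np.allclose(v1, v2)
print(same_effect, same_vector)
```

Any component of a steering vector orthogonal to the directions that drive behavior is invisible to that behavior, which is one simple reason steering directions need not be unique.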