Two recent arXiv preprints investigate applications of large language models (LLMs) in cybersecurity contexts.
IoT Security Analysis
According to arXiv:2601.00559v1, researchers examined whether LLMs can identify security threats in smart home IoT platforms such as openHAB. The study focuses on Trigger Action Condition (TAC) rules that automate device behavior, where interactions among multiple rules can create “interaction threats”: unintended or unsafe behaviors emerging from implicit dependencies between rules. The paper asks whether LLMs can “outsmart static analysis tools” in detecting these security issues.
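To make the notion of an interaction threat concrete, the sketch below chains rules whose action implicitly matches another rule's trigger. The rule schema, rule names, and chain-detection logic are illustrative assumptions, not the paper's actual representation or method.

```python
# Illustrative sketch: find implicit rule chains among trigger-action
# rules, the kind of dependency that can produce an "interaction threat".
# The dict-based rule format here is an assumption for this example.

def find_chains(rules):
    """Return chains (lists of rule names) where one rule's action
    matches another rule's trigger, creating an implicit dependency."""
    chains = []

    def extend(path, last_action):
        for r in rules:
            if r["name"] in path:
                continue  # avoid revisiting a rule (no cycles)
            if r["trigger"] == last_action:
                new_path = path + [r["name"]]
                chains.append(new_path)
                extend(new_path, r["action"])

    for r in rules:
        extend([r["name"]], r["action"])
    return chains

# Hypothetical smart-home rules: motion turns on a light, and a lit
# light opens a window -- so motion indirectly opens the window.
rules = [
    {"name": "R1", "trigger": "motion_detected", "action": "light_on"},
    {"name": "R2", "trigger": "light_on", "action": "window_open"},
]
print(find_chains(rules))  # [['R1', 'R2']]
```

Static analysis tools typically enumerate such chains exhaustively, as above; the paper's question is whether an LLM can spot the unsafe ones (e.g., a chain ending in an unlock or open action) without explicit graph construction.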
Vulnerability Scoring Automation
A separate study (arXiv:2512.06781v2) investigates whether general-purpose LLMs can automate vulnerability scoring. According to the abstract, manual scoring processes such as assigning Common Vulnerability Scoring System (CVSS) scores are “resource-intensive” and “often influenced by subjective interpretation.” The research explores whether LLMs can assign scores directly from vulnerability descriptions, potentially standardizing what has traditionally been a manual, subjective process.
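For context on what such a pipeline must ultimately produce, the sketch below computes a CVSS v3.1 base score from its component metrics. The metric weights and the Roundup function follow the public CVSS v3.1 specification; the function name `cvss31_base` and the positional interface are our own assumptions, not anything from the paper.

```python
# CVSS v3.1 base-score computation. Weights and formulas come from the
# published CVSS v3.1 specification; the wrapper itself is illustrative.

WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"U": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope Unchanged
           "C": {"N": 0.85, "L": 0.68, "H": 0.5}},   # Scope Changed
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x):
    """CVSS v3.1 Roundup: smallest one-decimal value >= x,
    using integer arithmetic to avoid floating-point drift."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def cvss31_base(av, ac, pr, ui, scope, c, i, a):
    iss = 1 - ((1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i])
               * (1 - WEIGHTS["CIA"][a]))
    if scope == "U":
        impact = 6.42 * iss
    else:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][scope][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    if scope == "U":
        return roundup(min(impact + exploitability, 10))
    return roundup(min(1.08 * (impact + exploitability), 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(cvss31_base("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

An LLM-based scorer would need to map free-text vulnerability descriptions onto these categorical metrics; the arithmetic above is the deterministic part, and the metric assignment is where subjective interpretation enters.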
Both studies represent ongoing research into whether AI models can augment or replace traditional security assessment methods, though specific findings and conclusions were not detailed in the provided abstracts.