According to WIRED, the Department of Defense has alleged that AI developer Anthropic could manipulate its AI models during wartime scenarios. Company executives have pushed back against these claims, arguing that such manipulation is impossible.
The dispute centers on whether AI tools could be altered or sabotaged in the midst of military operations. The DoD appears concerned that remotely modifiable AI systems would pose a national security risk during a conflict, while Anthropic maintains that its technology does not allow for such interference.
The disagreement highlights growing tensions between AI developers and government agencies as artificial intelligence becomes increasingly integrated into defense applications. Neither the Department of Defense nor Anthropic provided detailed technical explanations for its position, according to WIRED's reporting. The episode underscores broader questions about AI safety and security, and about the relationship between private AI companies and military uses of their technology.