Justice Department Defends Penalty Against Anthropic Over Military AI Use Restrictions

DOJ says it lawfully penalized Anthropic for attempting to limit military use of its Claude AI models, according to a court filing.

The U.S. Justice Department has responded to Anthropic’s lawsuit by defending its decision to penalize the AI company over restrictions it attempted to place on military use of its Claude AI models, according to WIRED. The government stated in its response that the penalty was lawful and justified.

According to the filing reported by WIRED, the Justice Department argues that Anthropic cannot be trusted with warfighting systems because of the company's efforts to limit how the military could use its Claude AI technology. The department's position rests on the claim that those attempted restrictions on military applications themselves justified the government's punitive action.

The case highlights growing tension between AI companies seeking to control how their technologies are deployed and government interest in leveraging advanced AI systems for defense purposes. Anthropic has long positioned itself as a safety-focused AI company, but the Justice Department's response suggests that such positioning may conflict with government contracting requirements for unrestricted military use of AI technologies.