Over 100 Google AI Workers Call for Restrictions on Military AI Use

Google AI employees sent a letter to the company's chief scientist opposing Gemini's use in U.S. surveillance and autonomous weapons, referencing Anthropic's policies.

More than 100 Google AI employees have sent a letter to Jeff Dean, the company's chief scientist, opposing the use of Google's Gemini AI system for U.S. surveillance and certain autonomous weapons applications, according to The New York Times.

The letter echoes recent policy moves by Anthropic, another major AI company, suggesting growing concern among AI workers about military applications of their technology. According to the Times report, the employees are seeking to establish "red lines": clear boundaries on how their AI systems can be deployed in military contexts.

The letter specifically targets potential uses in surveillance operations and autonomous weapons systems, raising ethical concerns about AI-powered military technology. This internal pushback comes as major tech companies navigate increasing demand from defense agencies for AI capabilities.

The employee action continues a pattern of workforce activism at Google around military contracts, following previous controversies over defense-related AI projects. The reference to Anthropic in the letter's framing suggests employees are looking to other AI companies' policies as models for ethical guidelines.

Jeff Dean, as one of Google’s most senior AI leaders, would play a key role in any policy decisions regarding military applications of the company’s AI technology.