US govt says Anthropic AI an 'unacceptable risk' to military
Anthropic's Claude AI model has been in the spotlight in recent weeks, both for its alleged use in identifying targets for US bombing in Iran and for the company's refusal to allow its systems to be used to power mass surveillance in the United States or fully autonomous lethal weapons systems.
Justifying its decision to cut ties with Anthropic in response to a legal complaint from the firm, the Pentagon -- dubbed the Department of War (DoW) by the Trump administration -- said it "became concerned that allowing...