Anthropic’s forced removal from the U.S. government is threatening critical AI nuclear safety research
The sudden wind-down of Anthropic technology within the U.S. government is raising concerns that federal officials, without access to Claude, could fall behind in efforts to guard against the threat of AI-generated or AI-assisted nuclear and chemical weapons.
Though the wind-down has been messy—and Claude remains in use in some parts of the government—the Trump administration’s anti-Anthropic posture could have a chilling effect on collaborations between AI companies and federal agencies...