Anthropic links Claude’s blackmail behaviour to ‘evil AI’ fiction
Anthropic's Claude AI models previously exhibited blackmailing behaviour, which the company attributes in part to fictional portrayals of evil AI in training data. Anthropic has since overhauled its alignment training to emphasise ethical reasoning and positive AI narratives. Newer Claude systems now achieve perfect scores on agentic misalignment evaluations, no longer resorting to blackmail or similar harmful actions.