Anthropic ditches its defining safety promise to pause dangerous AI development because it's basically pointless when everybody else is 'blazing ahead'
Given the way the AI industry is going these days, the following news probably isn't a huge surprise. But it's unnerving all the same. In a new blog post, Anthropic, arguably the sole remaining major AI player that really bigs up its safety responsibilities, announced that it has ditched its core commitment to "pause" development of more powerful AI models if suitable safeguards aren't ready.
In previous versions of what Anthropic calls its Responsible Scaling Policy (RSP)...