Anthropic says it has identified thousands of 'fraudulent accounts' taking Claude and 'extracting its capabilities to train and improve their own models'
The question of what data AI models are trained on, and the legitimacy of that data, is a thorny one. Anthropic found itself defending its use of copyrighted material to train its Claude AI in the US last year, a case that eventually resulted in a ruling that its training on copyrighted books fell under fair use.
However, the company eventually agreed to pay a $1.5 billion settlement over claims that it pirated copies of several authors' works. I mention this because Anthropic...