Anthropic has accused Chinese AI firms DeepSeek, Moonshot AI, and MiniMax of attempting to steal trade secrets from its Claude chatbot, alleging they generated more than 16 million interactions through 24,000 fraudulent accounts. According to the company, the developers tricked Claude into revealing the internal reasoning behind its responses, then used that output to train their own models.

The alleged theft involved Chinese developers bypassing Anthropic's commercial ban on China, using proxy services and sprawling networks of fraudulent accounts to access Claude at scale. Anthropic says it traced the traffic's metadata to employees of DeepSeek and Moonshot AI, and warns that such campaigns are growing in both sophistication and scope.

The dispute echoes accusations OpenAI leveled against DeepSeek last year for exploiting its models. Anthropic underscores the industry-wide security risks posed by such illicit data harvesting, urging rapid coordination among AI firms, policymakers, and the global AI community to address the problem.

However, Anthropic faces accusations of hypocrisy on social media over its own controversial practices, including scraping vast amounts of internet data and copyrighted books without explicit permission to train Claude. Critics argue this record complicates the company's moral standing to condemn others for data exploitation.

The episode exposes broader challenges around AI data governance, intellectual property rights, and cross-border regulation in an increasingly competitive field. With China emerging as a significant AI player, the tensions highlight geopolitical risks for AI firms whose services are restricted in key markets.

Going forward, industry observers will watch for possible regulatory actions or collaborative frameworks aimed at preventing unauthorized AI model replication. The effectiveness of such measures may shape the competitive landscape and the ethical norms underpinning AI innovation globally.