Anthropic on Monday accused Chinese AI companies DeepSeek, Moonshot AI, and MiniMax of orchestrating coordinated campaigns to extract proprietary knowledge from its Claude language model. The accusation follows similar complaints from OpenAI about Chinese firms leveraging its technology without authorization.

The companies allegedly mounted "distillation attacks," flooding Claude with tailored prompts designed to elicit specific model capabilities. Despite access restrictions, the firms reportedly circumvented them through commercial proxies, operating tens of thousands of Claude accounts simultaneously to harvest responses at high volume.

Distillation allows smaller AI models to emulate the performance of larger, more advanced systems by extracting critical knowledge. This technique helps less-resourced teams build competitive AI solutions but raises questions about intellectual property and AI ethics.
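In its classic form (as described by Hinton and colleagues), distillation trains a small "student" model to match the temperature-softened output distribution of a larger "teacher." A minimal sketch of that soft-target objective is shown below; the function names and the NumPy implementation are illustrative, not drawn from any of the systems named in this article:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution,
    softened by the given temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    output distributions -- the core soft-target distillation loss.
    Higher temperatures expose more of the teacher's 'dark knowledge'
    about relative probabilities of wrong answers."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    # scale by T^2 so gradient magnitudes stay comparable across temperatures
    return float(kl * temperature ** 2)
```

Minimizing this loss over many teacher outputs is what lets a smaller model absorb a larger one's behavior, which is why large volumes of harvested responses are valuable.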

Anthropic said the collected Claude responses are used either directly as training data for local models or as reward signals for reinforcement learning, a resource-intensive method that lets models improve through trial and error without direct human feedback. Both uses potentially erode the original developers' market advantage.

This issue highlights ongoing tensions over AI technology transfer and IP protection between U.S. firms and Chinese competitors amid a global AI arms race. Legal and regulatory mechanisms remain unclear on how to address such large-scale distillation campaigns.

Looking ahead, industry watchers will track how the affected companies respond, whether through legal action or technological countermeasures, and whether governments tighten AI export controls or protections against unauthorized model replication to safeguard innovation.