
Anthropic Claims Chinese AI Firms 'Distilled' Claude to Train Their Models

  • Feb 24
  • 2 min read

Key Findings


  • Anthropic, the developer of the Claude AI chatbot, has accused several Chinese AI firms, including DeepSeek, MiniMax, and Moonshot AI, of attempting to "distill" Claude's capabilities to train their own models.

  • Distillation refers to the practice of training a new AI model by learning from the outputs of an existing model, rather than using the original training data.

  • Anthropic claims these Chinese firms engaged in coordinated, large-scale efforts to access Claude through over 24,000 fraudulent accounts, generating more than 16 million exchanges in violation of the platform's terms of service and regional access restrictions.

  • The company says the structured nature and volume of these interactions indicated systematic data collection rather than ordinary user behavior, with the firms targeting Claude's most differentiated capabilities, such as agentic reasoning, tool use, and coding.


Background


  • Distillation is a widely used technique in the AI industry: a new model learns from an existing system's outputs rather than being trained entirely from scratch (a minimal sketch follows this list).

  • While the process has legitimate uses, Anthropic argues that large-scale, automated querying designed to replicate a model's capabilities crosses into abuse.

  • The company claims the Chinese firms' activities involved bypassing platform safeguards and export restrictions, raising concerns about national security risks if the "illicitly distilled models" were to proliferate without necessary safeguards.
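
A minimal toy sketch of knowledge distillation, in PyTorch, to make the concept above concrete. The model sizes, temperature, and training loop are illustrative assumptions only; in the API setting described in this article, the "teacher outputs" would be text responses used as fine-tuning targets rather than raw logits.

```python
# Toy knowledge distillation: a small "student" network is trained to match the
# softened output distribution of a larger "teacher" network instead of the
# original labels. All models and data here are synthetic placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))  # assumed pre-trained
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))    # smaller model to train

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees richer signal

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for queries sent to the teacher
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher's responses are the only training signal
    student_logits = student(x)

    # KL divergence between softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```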


DeepSeek


  • Anthropic says DeepSeek accessed Claude over 150,000 times, focusing on reasoning tasks, rubric-based grading workflows, and attempts to generate policy-safe versions of sensitive queries.


Moonshot AI


  • Moonshot AI accounted for over 3.4 million exchanges with Claude, targeting the model's agentic reasoning, coding, data analysis, computer-use agents, and computer vision capabilities.


MiniMax


  • MiniMax generated the largest volume at over 13 million exchanges, with a focus on agentic coding and tool orchestration.


Anthropic's Response


  • Anthropic is developing detection systems to identify suspicious querying patterns associated with distillation attacks, including monitoring for unusual prompt sequences, automated request patterns, and attempts to harvest structured knowledge in bulk (an illustrative sketch follows this list).

  • The company argues that stronger technical controls and policy measures will be necessary as AI models become more capable and commercially valuable.
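
The following is a hypothetical illustration of the kind of usage-pattern heuristic such a system might combine. The thresholds, field names, and scoring rule are invented for this sketch and do not describe Anthropic's actual detection systems.

```python
# Hypothetical heuristic for flagging bulk-harvesting behavior from per-account
# usage statistics. All thresholds and fields are illustrative assumptions.
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class AccountUsage:
    account_id: str
    requests_per_hour: float             # sustained request rate
    inter_request_seconds: list[float]   # gaps between consecutive requests
    distinct_prompt_templates: int       # structurally different prompts observed

def looks_like_bulk_harvesting(usage: AccountUsage,
                               rate_threshold: float = 500.0,
                               jitter_threshold: float = 0.5,
                               template_threshold: int = 5) -> bool:
    """Flag accounts that combine very high volume with machine-like regularity."""
    high_volume = usage.requests_per_hour > rate_threshold
    # Human traffic tends to be bursty; near-constant gaps suggest automation.
    machine_like_timing = pstdev(usage.inter_request_seconds) < jitter_threshold
    # Bulk harvesting often reuses a small set of structured prompt templates.
    few_templates = usage.distinct_prompt_templates <= template_threshold
    return high_volume and (machine_like_timing or few_templates)

# Example: an account sending ~1,200 nearly identical, evenly spaced requests per hour.
suspicious = AccountUsage("acct-123", 1200.0, [3.0, 3.1, 2.9, 3.0, 3.0], 2)
print(looks_like_bulk_harvesting(suspicious))  # True
```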


Broader Implications


  • Security experts warn that the issue extends beyond major AI labs, as any organization building customized AI assistants or chatbots could face similar risks of having their proprietary knowledge replicated through prompting alone.

  • The disclosure underscores growing concern over the misuse of AI technology and the need for robust security measures to protect intellectual property and national security interests.


Sources


  • https://hackread.com/anthropic-china-ai-firms-distilled-claude-train-models/

  • https://thehackernews.com/2026/02/anthropic-says-chinese-ai-firms-used-16.html

  • https://cyberscoop.com/anthropic-accuses-chinese-labs-ai-distillation-cyber-risk/

  • https://www.reuters.com/world/china/chinese-companies-used-claude-improve-own-models-anthropic-says-2026-02-23/

  • https://www.linkedin.com/pulse/anthropic-alleges-massive-claude-distillation-campaign-henning-steier-bn3pe
