
Anthropic Alleges Model Distillation
Anthropic Says Rival AI Firms Queried Claude 16M+ Times, Alleges Terms Violations
Anthropic says DeepSeek, MiniMax, and Moonshot collectively prompted Claude more than 16 million times and used the outputs to train competing AI systems, in violation of its terms of service.
Anthropic says three AI companies (DeepSeek, MiniMax, and Moonshot) collectively sent more than 16 million prompts, through "approximately 24,000 fraudulent accounts," to its Claude model and then used those responses to train their own AI systems. According to Anthropic, that crosses a clear line in its terms of service, which prohibit using Claude’s outputs to build or improve competing foundation models.
The activity allegedly happened through Claude’s API. Anthropic says its internal monitoring flagged both the sheer volume and the pattern of requests. Sixteen million prompts is not casual experimentation. The company characterizes it as systematic extraction rather than normal developer usage.
What Anthropic Is Claiming
At the center of this is model distillation. In simple terms, one company queries a more advanced model, collects the answers, and uses that material as training data to help build or refine its own system. It is a known technical practice in AI research, but commercial platforms typically restrict it in their contracts.
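The mechanics described above can be sketched in a few lines of Python. This is a minimal, purely illustrative example: every function and field name here is hypothetical, and no real provider API is shown. It simply demonstrates the pattern of turning a "teacher" model's answers into supervised training examples for a "student" model.

```python
# Hypothetical sketch of API-based distillation (illustrative only).

def query_teacher(prompt: str) -> str:
    # Stand-in for a call to a hosted "teacher" model's API.
    # A real pipeline would send the prompt over HTTP and return the reply.
    return f"teacher answer to: {prompt}"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    # Each prompt/response pair becomes one supervised fine-tuning
    # example for training a smaller "student" model.
    return [{"instruction": p, "output": query_teacher(p)} for p in prompts]

dataset = build_distillation_dataset(
    ["What is an API?", "Explain model distillation."]
)
```

At scale, the same loop run millions of times produces a large instruction-tuning corpus, which is why commercial API terms typically forbid using outputs this way.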
Anthropic’s terms for Claude’s API state that customers cannot use outputs to create competing large language models. The company says the 16 million-plus prompts were used in a way that violates that condition. It has not publicly broken down how many prompts were attributed to each of the three companies.
Who Is Involved
DeepSeek, MiniMax, and Moonshot are all AI developers offering their own large language models and chatbot products. They operate in the same general market as Anthropic, building systems for consumers and enterprise clients. As of now, there has been no public admission of wrongdoing from the companies named.
Anthropic distributes Claude through paid API access and enterprise agreements. Like other major AI labs, it includes usage limits and competitive restrictions in its contracts. This dispute centers on those contractual boundaries, not on open source scraping or public web data.
What Happens Next
Anthropic has not announced a lawsuit, financial claim, or formal regulatory complaint at this stage. It also has not published the detailed logs behind the 16 million figure. For now, what exists is a direct allegation that large scale API access was used to support competing model training.
Sixteen million prompts is a concrete number. If accurate, it would rank among the largest publicly disclosed cases of alleged model distillation through a commercial API. Whether this turns into legal action or stays a contractual dispute behind closed doors is still unknown.
For now, the situation is straightforward: Anthropic says its Claude model was queried at massive scale and that the outputs were used in ways its terms do not allow. The companies named have not publicly accepted that characterization.
Published February 24, 2026 • Updated February 24, 2026