
Anthropic Alleges Model Distillation

Chrise · February 24, 2026 at 7 AM WAT

Anthropic Says Rival AI Firms Queried Claude 16M+ Times, Alleges Terms Violations

Anthropic says DeepSeek, MiniMax, and Moonshot collectively prompted Claude more than 16 million times and used the outputs to train competing AI systems, in violation of its terms of service.

Anthropic says that three AI companies (DeepSeek, MiniMax, and Moonshot) collectively sent more than 16 million prompts to its Claude model through "approximately 24,000 fraudulent accounts," then used the responses to train their own AI systems. According to Anthropic, that crosses a clear line in its terms of service, which prohibit using Claude's outputs to build or improve competing foundation models.

The activity allegedly happened through Claude’s API. Anthropic says its internal monitoring flagged both the sheer volume and the pattern of requests. Sixteen million prompts is not casual experimentation. The company characterizes it as systematic extraction rather than normal developer usage.
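Anthropic has not described how its monitoring works, but the kind of signal it mentions, unusual request volume per account, can be illustrated with a deliberately simple heuristic. This is a hypothetical sketch, not Anthropic's actual system; the account names and threshold are invented for illustration.

```python
from collections import Counter

def flag_heavy_users(request_log, threshold):
    """Flag accounts whose request volume exceeds a 'normal usage' threshold.

    request_log is a list of (account_id, prompt) tuples; the return value
    is the set of account ids whose total request count is above threshold.
    """
    counts = Counter(account for account, _ in request_log)
    return {account for account, n in counts.items() if n > threshold}

# Hypothetical log: one light account, one account hammering the API.
log = [("acct_a", "hello")] * 5 + [("acct_b", "extract")] * 50

print(flag_heavy_users(log, threshold=10))  # {'acct_b'}
```

A real detection pipeline would look at far more than raw counts (request patterns, prompt similarity, account creation signals), but volume alone already separates casual experimentation from systematic extraction at the scale the article describes.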

What Anthropic Is Claiming

At the center of this is model distillation. In simple terms, one company queries a more advanced model, collects the answers, and uses that material as training data to help build or refine its own system. It is a known technical practice in AI research, but commercial platforms typically restrict it in their contracts.
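The mechanics described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the distillation loop: query a "teacher" model, collect its answers, and package the pairs as training data for a "student" model. The `teacher_model` function here is a stand-in, not a real API client, and the field names are invented for the example.

```python
def teacher_model(prompt: str) -> str:
    # Stand-in for a call to a more capable model's API.
    return f"Teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Turn each (prompt, teacher output) pair into one supervised
    training example for the student model."""
    return [{"input": p, "target": teacher_model(p)} for p in prompts]

dataset = build_distillation_dataset(["What is 2 + 2?", "Define entropy."])
print(len(dataset))  # 2
```

In practice the prompt set would be large and varied, which is why distillation at scale shows up as millions of API calls rather than a handful, and why commercial terms of service single out this use of outputs.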

Anthropic’s terms for Claude’s API state that customers cannot use outputs to create competing large language models. The company says the 16-million-plus prompts were used in a way that violates that condition. It has not publicly broken down how many prompts are attributed to each of the three companies.

Who Is Involved

DeepSeek, MiniMax, and Moonshot are all AI developers offering their own large language models and chatbot products. They operate in the same general market as Anthropic, building systems for consumers and enterprise clients. As of now, there has been no public admission of wrongdoing from the companies named.

Anthropic distributes Claude through paid API access and enterprise agreements. Like other major AI labs, it includes usage limits and competitive restrictions in its contracts. This dispute centers on those contractual boundaries, not on open source scraping or public web data.

What Happens Next

Anthropic has not announced a lawsuit, financial claim, or formal regulatory complaint at this stage. It also has not published the detailed logs behind the 16 million figure. For now, what exists is a direct allegation that large scale API access was used to support competing model training.

Sixteen million prompts is a concrete number. If accurate, it would rank among the largest publicly disclosed cases of alleged model distillation through a commercial API. Whether this turns into legal action or stays a contractual dispute behind closed doors is still unknown.

For now, the story is straightforward: Anthropic says its Claude model was queried at massive scale and that the outputs were used in ways its terms do not allow. None of the companies named has publicly accepted that claim.

For more details, you'll find a link to Anthropic’s full statement below.

Tags

#ai-competition #ai-ethics #ai-models #anthropic #model-distillation

Related Links


Published February 24, 2026 · Updated February 24, 2026
