Anthropic this week revised its legal terms to clarify its policy forbidding the use of third-party harnesses with Claude subscriptions, as the AI biz attempts to shore up its revenue model.

Anthropic sells subscriptions to its Claude platform, which provides access to a family of machine learning models (e.g. Opus 4.6) and associated tools such as Claude Code, a web-based interface at Claude.ai, and the Claude Desktop application, among others.

Claude Code is a harness or wrapper – it integrates with the user's terminal and routes prompts to the available Claude model, in conjunction with other tools and a control loop that, together, make it what Anthropic calls an agentic coding tool.

Many other tools function as harnesses for models, such as OpenAI Codex, Google Antigravity, Manus (recently acquired by Meta), OpenCode, Cursor, and Pi (the harness behind OpenClaw).

Harnesses exist because interacting with a machine learning model by itself isn't a great user experience – you feed it a prompt and it returns a result. That's a single-turn interaction. Input and output. To create a product that people care about, model makers have added support for multi-turn interaction, memory of prior interactions, access to tools, orchestration to handle data flowing between those tools, and so on. Some of this support has been baked into model platforms, but some of it has been added through harness tooling.
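What a harness adds on top of a bare model can be sketched in a few lines: a control loop that keeps conversation memory and dispatches tool calls until the model produces a final answer. The names below (`fake_model`, `TOOLS`, `run_agent`) are illustrative stand-ins, not any vendor's actual API.

```python
# Stand-in tool registry: a real harness would expose shell, file I/O, etc.
TOOLS = {
    "add": lambda a, b: a + b,
}

def fake_model(history):
    """Pretend model: requests a tool once, then answers using its result."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "name": "add", "args": (2, 3)}
    result = next(m for m in history if m["role"] == "tool")["content"]
    return {"type": "answer", "content": f"The sum is {result}"}

def run_agent(prompt, max_turns=5):
    # Memory of prior interactions lives in this message history.
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):  # the control loop
        reply = fake_model(history)
        if reply["type"] == "answer":   # done: surface the final text
            return reply["content"]
        tool = TOOLS[reply["name"]]     # route the tool call and feed
        history.append({"role": "tool", "content": tool(*reply["args"])})
    raise RuntimeError("turn limit reached")

print(run_agent("what is 2 + 3?"))  # prints: The sum is 5
```

The loop is the harness; swap `fake_model` for a real model endpoint and `TOOLS` for real tool integrations and you have the skeleton of an agentic coding tool.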

This can pose a business problem for frontier model makers – they've invested billions to train sophisticated models, but they risk being disintermediated by gatekeeping intermediaries that build harnesses around their models and offer a better user experience.

One of the ways Anthropic has chosen to build brand loyalty is by selling tokens to subscription customers at a monthly price, with usage limits, that ends up being far cheaper than pay-as-you-go token purchases through the Claude API. Essentially, the economics resemble an all-you-can-eat buffet priced around certain usage expectations.
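The buffet math can be illustrated with a back-of-the-envelope comparison. The figures below are made up for illustration – they are not Anthropic's actual prices or limits.

```python
# Hypothetical numbers only, to show why a flat subscription can
# undercut metered API pricing for heavy users.
subscription_price = 100.0     # assumed flat monthly fee, USD
api_cost_per_mtok = 10.0       # assumed blended API cost per million tokens
tokens_used = 50_000_000       # a heavy user's monthly token consumption

api_cost = tokens_used / 1_000_000 * api_cost_per_mtok
print(f"API: ${api_cost:.2f} vs subscription: ${subscription_price:.2f}")
# prints: API: $500.00 vs subscription: $100.00
```

Under these assumed numbers the same workload costs 5x more via the API, which is the gap that made routing subscription credentials through third-party harnesses attractive.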

That practice, effectively a subsidy for subscribers, led to token arbitrage. Customers accessed Claude models through subscriptions linked to third-party harnesses because it cost less than doing the same work via an API key.

The AI biz's Consumer Terms of Service have forbidden the use of third-party harnesses, except with specific authorization, since at least February 2024. The contractual language in Section 3.7, which remains unchanged from that time, says as much – any automated access tool not officially endorsed is forbidden.

You may not access or use, or assist another person to access or use, our Services in the following ways:

Except when you are accessing our Services via an Anthropic API Key or where we otherwise explicitly permit it, to access the Services through automated or non-human means, whether via a bot, script, or otherwise.

Despite the presence of that passage for more than two years, a number of third-party tools have flouted the rule and allowed users to supply a Claude subscription account key.

The added rule explicitly states that OAuth authentication, the access method used by Claude Free, Pro, and Max tier subscribers, is only intended for Claude Code and Claude.ai (the web interface for Claude models).

"Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service," the updated legal compliance page says.

According to Anthropic, the update represents an attempt to clarify existing policy language and make it consistent throughout company documentation.

Anthropic appears to have decided to police its rules at the start of the year. In a January social media thread, Anthropic engineer Thariq Shihipar said the company had taken steps to prevent third-party tools from "spoofing the Claude Code harness."

"Third-party harnesses using Claude subscriptions create problems for users and are prohibited by our Terms of Service," he wrote. "They generate unusual traffic patterns without any of the usual telemetry that the Claude Code harness provides, making it really hard for us to help debug when they have questions about rate limit usage or account bans, and they have no other avenue for this support."

The prohibition proved unpopular enough to elicit a response from the competition. OpenAI's Thibault Sottiaux pointedly endorsed the use of Codex subscriptions in third-party harnesses.

After banning accounts for attempting to game its pricing structure, Anthropic has now clarified its legalese, as Shihipar indicated would happen, and makers of third-party harnesses are taking note.

On Thursday, OpenCode pushed code to remove support for Claude Pro and Max account keys and Claude API keys. The commit cites "anthropic legal requests." ®

