gentic.news — AI News Intelligence Platform


[Image: Anthropic Claude interface showing a 400 error for a manual extended thinking budget request]

Products & Launches · Breakthrough · Score: 92

Anthropic Deprecates Fixed Thinking Budgets, Forces Adaptive Mode

Anthropic has made adaptive thinking mandatory on its latest Claude models, deprecating fixed thinking budgets. Users report quality drops, and the change reduces Anthropic's own API revenue potential.

2h ago · 4 min read · AI-Generated
Source: reddit.com via reddit_claude, gn_claude_model, devto_claudecode, gn_claude_code · Single Source
Why is Anthropic deprecating fixed extended thinking budgets and forcing adaptive thinking?

Anthropic deprecated manual extended thinking budgets for Claude Opus 4.7 and will remove them for Opus 4.6 and Sonnet 4.6, making adaptive thinking the default. Users cannot disable it even when paying API rates.

TL;DR

Anthropic deprecates fixed extended thinking budgets. · Adaptive thinking becomes default for Opus 4.7, Sonnet 4.6. · Users lose manual token budget control; API pricing unchanged.

Anthropic deprecated manual extended thinking budgets for Claude Opus 4.7, returning a 400 error for fixed budget requests. Opus 4.6 and Sonnet 4.6 still support manual configuration, but it is deprecated and will be removed in a future release.

Key facts

  • Opus 4.7 returns 400 error for fixed thinking budget requests.
  • Opus 4.6 and Sonnet 4.6 manual thinking deprecated, removal planned.
  • Users report quality drop after adaptive thinking became default.
  • Anthropic has not published benchmarks comparing adaptive vs fixed thinking.
  • Fourth API/subscription change in six weeks from Anthropic.

Anthropic has deprecated manual extended thinking budgets for its latest Claude models, forcing adaptive thinking as the default. The change, documented on the Anthropic platform page, means users can no longer specify a fixed token budget for reasoning via thinking: {type: "enabled", budget_tokens: N}. For Claude Opus 4.7, such requests now return a 400 error. Opus 4.6 and Sonnet 4.6 still accept the parameter, but the company warns it is deprecated and will be removed in a future release. According to Anthropic's documentation, the recommended approach is adaptive thinking with the effort parameter: thinking: {type: "adaptive"}.
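The before-and-after request shapes described above can be sketched as plain payload dictionaries. This is a minimal illustration based on the field names the article reports ("thinking", "type", "budget_tokens"); the model identifiers and exact schema should be verified against Anthropic's current documentation.

```python
# Sketch of the request-body change described in the article.
# Field names and model IDs follow the article's description and
# are assumptions, not verified against the live API.

def fixed_budget_request(prompt: str, budget: int) -> dict:
    """Deprecated shape: a manual extended-thinking token budget.
    Per the article, Opus 4.7 now rejects this with a 400 error."""
    return {
        "model": "claude-opus-4-6",  # reportedly still accepts it, for now
        "max_tokens": 4096,
        "thinking": {"type": "enabled", "budget_tokens": budget},
        "messages": [{"role": "user", "content": prompt}],
    }

def adaptive_request(prompt: str) -> dict:
    """Recommended shape: the model chooses its own thinking depth."""
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "thinking": {"type": "adaptive"},
        "messages": [{"role": "user", "content": prompt}],
    }
```

The structural difference is small — one key swapped for another — which is part of what makes the forced migration contentious: the old shape is not hard to support, it is simply being refused.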

The User Backlash

Users on Reddit have reacted negatively, particularly those using Claude Code for complex agentic workflows. One user, CaffeineBrogrammer, reported that disabling adaptive thinking in Claude Code restored quality levels that had dropped after the feature was introduced. They posed three pointed questions: if adaptive thinking improves performance as Anthropic claims, why did quality drop when it was enabled? Why have no benchmarks shown better results with adaptive thinking? And why are users who pay API rates — where Anthropic profits from higher token usage — being prevented from using a fixed, higher budget? [According to Reddit user CaffeineBrogrammer]

The Structural Tension

This move creates a contradiction for Anthropic's API business. API users pay per token; a fixed extended thinking budget means more tokens consumed and more revenue for Anthropic. By removing that option, the company is effectively turning down paying customers who want to spend more for higher quality. The only plausible explanations are: (1) adaptive thinking actually produces better results on average, making fixed budgets unnecessary for most use cases; (2) compute constraints force Anthropic to optimize inference efficiency at the expense of user choice; or (3) the company is prioritizing subscription margins over API revenue. The Reddit thread leans toward explanation (2), citing the broader pattern of Anthropic tightening subscription usage with the Agent SDK credit split announced May 13. [According to the Reddit thread and devtoolpicks.com analysis]

What the Benchmarks Don't Show

Anthropic has not published comparative benchmarks showing adaptive thinking outperforming fixed budgets. The company's documentation simply states adaptive thinking is "recommended" without citing metrics. Users report that for tasks requiring deep reasoning — like complex code generation or multi-step agent workflows — fixed budgets often produced more reliable outputs. The lack of transparent evaluation erodes trust, especially given the one-way door: users cannot opt back into fixed budgets even if they see quality degradation. [Per user reports on Reddit]

The Broader Pattern

This is the fourth subscription or API change from Anthropic in six weeks: the third-party agent ban on April 4, the brief Claude Code removal from Pro on April 21, the Agent SDK credit split on May 13, and now the forced adaptive thinking. Together, they signal a company tightening control over how its models are used, likely driven by compute cost pressures. The agentic workflows that made Claude Code popular are also the most expensive to serve, and Anthropic is systematically closing the arbitrage between subscription pricing and actual compute cost. [According to devtoolpicks.com analysis]

What to watch

Watch for Anthropic to publish benchmark results comparing adaptive thinking against fixed budgets, or for a backlash-driven reversal of the Opus 4.6/Sonnet 4.6 deprecation timeline. Also watch for OpenAI to capitalize by offering fixed reasoning budgets as a differentiator.


Sources cited in this article

  1. Anthropic's platform documentation
  2. Users on Reddit

AI-assisted reporting. Generated by gentic.news from 3 verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.


AI Analysis

This move reveals Anthropic's compute-cost tension more starkly than any previous change. By removing the fixed budget option, the company is sacrificing both user trust and potential API revenue — a tradeoff that only makes sense if inference costs are the binding constraint. The fact that Anthropic has not published comparative benchmarks is telling: if adaptive thinking were a clear win, they would have the data to prove it. The Reddit user's framing — 'why would a vendor turn down paying customers who want to spend more?' — is the right structural question. The answer likely lies in Anthropic's GPU supply constraints and the cost of serving agentic workloads. The broader pattern of six weeks of tightening suggests a company under margin pressure, not one with pricing power. For developers, the implication is clear: build with the assumption that Anthropic will continue to optimize for its own cost structure, not for user flexibility.