Claude AI by Anthropic Expands Prompt Length Capability

Anthropic Goes Big on Context, Trying to Keep the Coders in Its Crowd

In a move that feels a bit like giving the AI a bigger backpack, Anthropic has expanded the amount of text developers can pack into a single Claude request. The Sonnet 4 model now sports a 1 million‑token window. That means you could feed it roughly 750,000 words—more than the entire Lord of the Rings saga—or 75,000 lines of code in one go. That's five times larger than before, and more than double what OpenAI's GPT‑5 offers.

Why This Matters

  • The new context size lets coders pass massive codebases or exhaustive documentation without needing to prune.
  • Scholars can now feed long scientific papers to the model for analysis or summarisation.
  • It gives Anthropic a tangible edge in its escalating rivalry with OpenAI.
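
In practice, "passing a massive codebase" just means concatenating your files into one long prompt. A minimal sketch of that prep step (the path‑tagging format and the 4‑characters‑per‑token heuristic are illustrative assumptions, not Anthropic's official tooling):

```python
from pathlib import Path

def build_codebase_prompt(root: str, exts=(".py", ".md")) -> str:
    """Concatenate every matching file under `root` into one prompt,
    tagging each chunk with its path so the model can cite locations."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"=== {path} ===\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

def rough_token_count(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English and code."""
    return max(1, len(text) // 4)
```

Before sending, check `rough_token_count(prompt)` against the 1 million‑token ceiling; the heuristic is coarse, but it's enough to know whether you're anywhere near the limit—or the pricier long‑context tier.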

Partner‑Enabled Growth

The larger window isn't confined to Anthropic's own API. The extended context will also roll out through cloud partners like Amazon Bedrock and Google Cloud's Vertex AI, so developers across the board get the same boost.

Enterprises, Coders, and the AI Showdown

The company has built its fortunes by feeding its models into popular coding assistants—think GitHub Copilot, Windsurf, and Anysphere's Cursor. Claude has become the "go‑to" model for many developers. But GPT‑5 is shaping up as a formidable rival, with attractive pricing and top‑notch coding chops. Notably, Anysphere CEO Michael Truell helped OpenAI unveil GPT‑5, and GPT‑5 is now the default model in Cursor for new users.

Inside Conversation

Brad Abrams, Claude’s product lead, told TechCrunch that AI coding platforms should find the new context extension a significant advantage. When queried about GPT‑5’s impact, he brushed it aside, saying he’s “really happy with the API business and the way it’s been growing.”

Unlike OpenAI—which fishes in consumer subscriptions to ChatGPT—Anthropic’s core is B2B, selling AI models to enterprises via API. This focus makes coding platforms a rock‑solid customer base and likely explains why the company is upping its game to outpace GPT‑5.

More Power, More Promos

Just last week, Anthropic unveiled Claude Opus 4.1, raising the bar for its AI coding prowess. Now, with the jump to 1 million tokens, it’s clear the company isn’t slowing down—if anything, it’s tossing more goodies on the table to keep the dev community glued to Claude.

Tech and VC heavyweights join the Disrupt 2025 agenda

Netflix, ElevenLabs, Wayve, Sequoia Capital, Elad Gil — just a few of the heavy hitters joining the Disrupt 2025 agenda. They’re here to deliver the insights that fuel startup growth and sharpen your edge. Don’t miss the 20th anniversary of TechCrunch Disrupt, and a chance to learn from the top voices in tech — grab your ticket now and save up to $600+ before prices rise.

More Context, More Fun: Why AI Needs a Bigger Brain

Ever wished your AI assistant could remember the last week of your brunch conversations instead of just the last few minutes? That’s the new trend in big‑context AI.

Why Bigger Means Better

  • Software chores thrive on detail. A model tasked with shipping a new feature performs far better when it can see the entire codebase, not just a snippet.
  • Long‑running projects benefit too. A bigger window lets Claude keep more of a project's history in view at once, so it's less likely to lose track of where it left off.
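
The flip side is that long‑running conversations still eventually outgrow even a million tokens. One common pattern (a generic sketch, not Anthropic's implementation) is to keep the original task plus the most recent turns that fit a token budget:

```python
def trim_history(messages, budget, count_tokens=lambda m: len(m) // 4):
    """Keep the first message (the task) plus the most recent messages
    that fit within `budget` tokens, preserving chronological order."""
    if not messages:
        return []
    head, tail = messages[0], messages[1:]
    budget -= count_tokens(head)
    kept = []
    for msg in reversed(tail):          # walk newest-first
        cost = count_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [head] + kept[::-1]          # restore chronological order

history = ["task: refactor auth module", "step 1 done", "step 2 done", "step 3 done"]
print(trim_history(history, budget=10))
# → ['task: refactor auth module', 'step 2 done', 'step 3 done']
```

The `count_tokens` default here is the same rough 4‑characters‑per‑token guess; in a real pipeline you'd swap in an exact tokenizer.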

Ultra‑Large Prompts: The Next Frontier?

Some companies are pushing the limits, claiming their models can handle astronomical prompts.

  • Google’s Gemini 2.5 Pro boasts a 2 million‑token window.
  • Meta’s Llama 4 Scout goes even further with a 10 million‑token horizon.

But Does Bigger Always Mean Better?

Not a sure thing. Research points to diminishing returns: past a certain length, models struggle to actually use everything in the prompt.

Anthropic is tackling this by not only enlarging Claude’s window but also sharpening its effective understanding—so it can sift out the useful bits from the noise, though how they do it remains hush‑hush.

Paying the Price for Power

Send Claude Sonnet 4 a prompt over 200,000 tokens and you'll notice a price hike:

  • Input: $6 per million tokens (up from $3).
  • Output: $22.50 per million tokens (up from $15).
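
At those rates the premium is easy to estimate. A quick back‑of‑the‑envelope sketch—the rates and the 200,000‑token threshold are taken from the figures above, so check Anthropic's current pricing page before budgeting:

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate a Claude Sonnet 4 request cost in USD, using the tiered
    rates quoted above (standard vs. prompts over 200K tokens)."""
    long_context = input_tokens > 200_000
    input_rate = 6.00 if long_context else 3.00      # $ per million input tokens
    output_rate = 22.50 if long_context else 15.00   # $ per million output tokens
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# A 500K-token prompt with a 10K-token reply:
print(round(estimate_cost(500_000, 10_000), 3))  # 3.225
```

So a single half‑million‑token request runs a few dollars—cheap for one analysis, but it adds up fast in an agent loop.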

Think of it as a premium gym membership for your AI—big workouts, higher cost.