Anthropic’s new Claude 4 AI models can reason over many steps

During its inaugural developer conference Thursday, Anthropic launched two new AI models that the startup claims are among the industry’s best, at least in terms of how they score on popular benchmarks.

Claude Opus 4 and Claude Sonnet 4, part of Anthropic’s new family of models, Claude 4, can analyze large data sets, execute long-horizon tasks, and take complex actions, according to the company. Both models were tuned to perform well on programming tasks, Anthropic says, making them well-suited for writing and editing code.

Both paying users and users of the company’s free chatbot apps will get access to Sonnet 4, but only paying users will get access to Opus 4. Through Anthropic’s API, Amazon’s Bedrock platform, and Google’s Vertex AI, Opus 4 is priced at $15/$75 per million tokens (input/output) and Sonnet 4 at $3/$15 per million tokens (input/output).

Tokens are the raw bits of data that AI models work with, with a million tokens being equivalent to about 750,000 words — roughly 163,000 words longer than “War and Peace.”
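Given the per-million-token rates quoted above, estimating the cost of a request is simple arithmetic. A minimal sketch (the model keys and example token counts here are illustrative, not official identifiers):

```python
# Rough cost estimate using the Claude 4 pricing quoted in the article.
# Rates are dollars per million tokens: (input, output).
PRICING = {
    "opus-4": (15.00, 75.00),
    "sonnet-4": (3.00, 15.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
print(round(estimate_cost("opus-4", 10_000, 2_000), 2))    # 0.3
print(round(estimate_cost("sonnet-4", 10_000, 2_000), 2))  # 0.06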

[Image: Anthropic Claude 4. Image Credits: Anthropic]

Anthropic’s Claude 4 models arrive as the company looks to substantially grow revenue. Reportedly, the outfit, founded by ex-OpenAI researchers, aims to notch $12 billion in revenue in 2027, up from a projected $2.2 billion this year. Anthropic recently secured a $2.5 billion credit facility and has raised billions of dollars from Amazon and other investors in anticipation of the rising costs of developing frontier models.

Rivals haven’t made it easy to maintain pole position in the AI race. While Anthropic launched a new flagship AI model earlier this year, Claude 3.7 Sonnet, alongside an agentic coding tool called Claude Code, competitors including OpenAI and Google have raced to outdo the company with powerful models and dev tooling of their own.

Anthropic is playing for keeps with Claude 4.

The more capable of the two models introduced today, Opus 4, can maintain “focused effort” across many steps in a workflow, Anthropic says. Meanwhile, Sonnet 4 — designed as a “drop-in replacement” for Sonnet 3.7 — improves in coding and math compared to Anthropic’s previous models and more precisely follows instructions, according to the company.

The Claude 4 family is also less likely than Sonnet 3.7 to engage in “reward hacking,” claims Anthropic. Reward hacking, also known as specification gaming, is a behavior where models take shortcuts and exploit loopholes to complete tasks.

To be clear, these improvements haven’t yielded the world’s best models by every benchmark. For example, while Opus 4 beats Google’s Gemini 2.5 Pro and OpenAI’s o3 and GPT-4.1 on SWE-bench Verified, which is designed to evaluate a model’s coding abilities, it can’t surpass o3 on the multimodal evaluation MMMU or GPQA Diamond, a set of PhD-level biology-, physics-, and chemistry-related questions.

[Image: The results of Anthropic’s internal benchmark tests. Image Credits: Anthropic]

Still, Anthropic is releasing Opus 4 under stricter safeguards, including beefed-up harmful content detectors and cybersecurity defenses. The company says its internal testing found that Opus 4 may “substantially increase” the ability of someone with a STEM background to obtain, produce, or deploy chemical, biological, or nuclear weapons, triggering the “ASL-3” safety level under Anthropic’s Responsible Scaling Policy.

Both Opus 4 and Sonnet 4 are “hybrid” models, Anthropic says — capable of near-instant responses and extended thinking for deeper reasoning (to the extent AI can “reason” and “think” as humans understand these concepts). With reasoning mode switched on, the models can take more time to consider possible solutions to a given problem before answering.
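In practice, toggling between near-instant responses and extended thinking happens per request. A minimal sketch of what such a request payload might look like, assuming Anthropic’s Messages API `thinking` parameter; the model identifier and token budget here are assumptions, not details from the article:

```python
# Sketch: building a Messages API payload with extended thinking toggled.
# The "thinking" parameter shape and model ID are assumptions; consult
# Anthropic's API documentation for the actual interface.
import json

def build_request(prompt: str, reasoning: bool = False) -> dict:
    payload = {
        "model": "claude-sonnet-4-20250514",  # assumed model identifier
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": prompt}],
    }
    if reasoning:
        # With reasoning on, the model may spend up to `budget_tokens`
        # considering possible solutions before it answers.
        payload["thinking"] = {"type": "enabled", "budget_tokens": 1024}
    return payload

print(json.dumps(build_request("Summarize this dataset.", reasoning=True), indent=2))
```

The same request with `reasoning=False` omits the `thinking` block entirely, which is the near-instant mode the article describes.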

As the models reason, they’ll show a “user-friendly” summary of their thought process, Anthropic says. Why not show the whole thing? Partially to protect Anthropic’s “competitive advantages,” the company admits in a draft blog post provided to TechCrunch.

Opus 4 and Sonnet 4 can use multiple tools, like search engines, in parallel, and alternate between reasoning and tools to improve the quality of their answers. They can also extract and save facts in “memory” to handle tasks more reliably, building what Anthropic describes as “tacit knowledge” over time.

To make the models more programmer-friendly, Anthropic is rolling out upgrades to the aforementioned Claude Code. Claude Code, which lets developers run specific tasks through Anthropic’s models directly from a terminal, now integrates with IDEs and offers an SDK that lets devs connect it with third-party applications.

The Claude Code SDK, announced earlier this week, enables running Claude Code as a sub-process on supported operating systems, providing a way to build AI-powered coding assistants and tools that leverage Claude models’ capabilities.

Anthropic has released Claude Code extensions and connectors for Microsoft’s VS Code, JetBrains, and GitHub. The GitHub connector allows developers to tag Claude Code to respond to reviewer feedback, as well as to attempt to fix errors in — or otherwise modify — code.

AI models still struggle to produce quality software. Code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like the ability to follow programming logic. Yet their promise to boost coding productivity is pushing companies — and developers — to rapidly adopt them.

Anthropic, acutely aware of this, is promising more frequent model updates.

“We’re […] shifting to more frequent model updates, delivering a steady stream of improvements that bring breakthrough capabilities to customers faster,” wrote the startup in its draft post. “This approach keeps you at the cutting edge as we continuously refine and enhance our models.”
