Apr 2, 2026

Applied AI

By Aaron Holmes

Thank you for reading the Applied AI newsletter! I’d love your feedback, ideas and tips: aaron@theinformation.com.

If you think someone else might enjoy this newsletter, please pass it forward or they can sign up here.

Hi! If you’re finding value in our Applied AI newsletter, I encourage you to consider subscribing to The Information. It contains exclusive reporting on the most important stories in tech, like this story about the AI startup OpenRouter approaching a $1.3 billion valuation. Save up to $250 on your first year of access.


Welcome back!

The cybersecurity field is buzzing about an upcoming Anthropic AI model that could be used by hackers to “exploit [security] vulnerabilities in ways that far outpace the efforts of defenders,” according to a draft blog post the company mistakenly made public.

Anthropic has been briefing cybersecurity researchers and is giving them early access to the model, which is said to be better than past models at generating and reviewing computer code. The AI firm is seeking feedback to make the model less dangerous in the hands of hackers. 

Wiz, the cloud cybersecurity company Google acquired last month, expects to evaluate the forthcoming model, given that it has tested prior Anthropic models, Wiz CTO Ami Luttwak said, calling it “a very important step to allow researchers access to understand what’s coming.”

“We now believe the new models are essentially the best cybersecurity researchers in the world, and that’s a problem,” Luttwak said, because it means hackers could use the models’ capabilities to find and exploit vulnerabilities.

Luttwak and other sellers of cybersecurity defenses say they’ve been surprised by how much AI models from Anthropic, OpenAI and other developers have improved at finding previously unknown flaws in computer code, known as zero-day vulnerabilities. Such capabilities may have aided a major hack of Axios, a popular piece of software for app developers, that was disclosed this week.

AI-Assisted Hacks

In a recent demonstration, Anthropic researchers asked the company’s Claude Code agent to review Ghost, an open source application that hosts more than 50,000 online newsletters.

Ghost had never reported a critical security vulnerability in its 13-year history, but within hours, Claude found a vulnerability that would let a hacker break into any Ghost user’s website and start editing it or stealing their personal information, Anthropic researcher Nicholas Carlini said at a security conference in early March.

To be sure, that’s a relatively basic web code problem and doesn’t mean AI models have undergone a reasoning breakthrough that lets them find deeper flaws in apps. Moreover, hackers have been successfully discovering such vulnerabilities for at least the past decade. In other words, the bad guys haven’t had trouble finding a way into corporate networks.

But AI can now help hackers exploit bugs faster by mapping out the network they’ve hacked, stealing the data and encrypting it before defenders have a chance to respond.

Undetected Movement

It’s common to shrug off warnings from Anthropic and other AI developers about the risks their models pose as a clever form of marketing. But some cybersecurity buyers have stopped scoffing.

“We’re aware there’s been a rise in use of AI by attackers,” said Scott Roberts, chief information security officer of software firm UiPath, who previously served as a security executive at Microsoft, Amazon and Coinbase. “I don’t want to minimize the importance of the evolution that’s happened—it’s like the day that electricity was invented.”

New AI threats are also hard for traditional security products to block, as those products were built for a world in which hackers didn’t have advanced AI at their disposal. For instance, researchers have found that leading AI models generally understand how to move through companies’ networks without being detected, according to Dan Lahav, founder of the AI security startup Irregular Security, which works with labs such as OpenAI and Anthropic to test their models’ hacking capabilities.

That could pose challenges for existing security scanners that aim to detect human hacker activity, according to some buyers of security products.

Using AI to Fight AI 

To fight the rising tide of AI-powered threats, CISOs and executives at cybersecurity firms such as Wiz and Tenzai say they’re using their own AI to rapidly discover and patch vulnerabilities in their code before a hacker’s AI can discover them.

The security tools defending against hackers’ misuse of Anthropic and OpenAI models are powered by the same models—with important exceptions. Cybersecurity companies developing such tools can request special access to “ungated” versions of the AI models, which differ from the versions sold to the public. The publicly sold versions refuse requests that are obviously hacking-related, whereas the ungated versions let security firms use the models to try to hack their customers and find the weaknesses in their defenses.

Separately, there are also questions about whether the corporate use of AI coding tools is introducing unforeseen vulnerabilities in companies’ code. For instance, it’s not clear whether Anthropic’s accidental leak of Claude Code’s source code this week was related to the company’s heavy use of AI-generated code, but developers poring over the leaked source code have noted its structure bore the hallmarks of AI-made code.

Anthropic said in a statement that the “root cause” of the leak was a “packaging issue caused by human error” but declined to say whether the human error involved the use of AI coding tools.

A message from Google Cloud

Gemini Enterprise for Customer Experience

In this focused, 20-minute briefing, you’ll learn how to remove the friction that slows down your business by deploying AI agents that can see, hear, and remember interactions across the entire customer lifecycle.

Watch now.

New From Our Reporters

True Value

For Blackstone, Private Credit Fears Miss the Big Picture

By Anita Ramaswamy


Exclusive

Startup That Helps Developers Pick AI Models Nears $1.3 Billion Valuation

By Julia Hornstein, Stephanie Palazzolo and Kevin McLaughlin


AI Agenda

An AI Storytelling Startup Is On Pace to Generate $100 Million in Annual Sales

By Juro Osawa

Upcoming Events

Thursday, April 9 — Inside the SaaSpocalypse: What Agents Mean for Software Businesses

Join The Information’s Kevin McLaughlin and Laura Bratton as they discuss the future of software businesses with Evan Skorpen, an investor at Lead Edge Capital, and Nimesh Mehta, CISO of National Life Group.

More details


Monday, April 27 — Financing the AI Revolution

Join The Information at the New York Stock Exchange on Monday, April 27, to hear from top executives and investors on how the rapid buildout of AI is reshaping tech, finance and capital markets.

More details


Wednesday, September 23 — AI Agenda Live SF 2026

Save the date for The Information’s annual AI Agenda Live in San Francisco, where top AI researchers, founders, investors and executives come together for a day of conversations about the breakthroughs and applications shaping the future of AI.

More details

Opportunities

Group subscriptions

Empower your teams to stay ahead of market trends with the most trusted tech journalism.

Learn more


Brand partnerships

Reach The Information’s influential audience with your message.

Connect with our team

About Applied AI

A franchise from The Information that takes you inside how businesses are using AI to automate all kinds of work.

Read the archives

Follow us
X
LinkedIn
Facebook