
AI company Anthropic is developing a new AI model more capable than any it has previously released, and has begun testing it with early access customers, the company said, following a data leak that revealed the model’s existence.
An Anthropic spokesperson said the new model represented “a step change” in AI performance and was “the most capable we’ve built to date.” The company said the model is currently being trialed by “early access customers.”
Descriptions of the model were inadvertently stored in a publicly accessible data cache and were reviewed by Fortune.
A draft blog post, available in an unsecured and publicly searchable data store prior to Thursday evening, said the new model is called “Claude Mythos” and that the company believes it poses unprecedented cybersecurity risks.
The same cache of unsecured, publicly discoverable documents revealed details of a planned, invite-only CEO summit in Europe that is part of the company’s drive to sell its AI models to large corporate customers.
In response to questions about the draft blog post, the company acknowledged training and testing a new model. “We’re developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity,” an Anthropic spokesperson said. “Given the strength of its capabilities, we’re being deliberate about how we release it. As is standard practice across the industry, we’re working with a small group of early access customers to test the model. We consider this model a step change and the most capable we’ve built to date.”

—Jeremy Kahn, Beatrice Nolan