Anthropic’s Mythos is Here. Is OpenAI’s Spud Next?

A new era of powerful models is upon us. We're about to learn whether it's mostly marketing or field-advancing improvement.

After Anthropic’s introduction of Mythos — the mega-model it said is too dangerous to release to the public — OpenAI is poised for AI’s next big moment with “Spud.” Spud is a massive new base model that OpenAI President Greg Brockman recently told us is “a new pre-train” that can understand instructions and context better. Brockman said Spud will be better “qualitatively” and “quantitatively,” and better at solving “much harder problems.” Along with Mythos, it’s in a class of new, larger AI models whose performance will reveal just how much improvement is left in making AI models bigger.

“There’s this thing called ‘big model smell’ that people talk about, where when these models are just actually much smarter and much more capable, they bend to you much more,” Brockman said. “And you feel it — when you ask a question and the AI doesn’t quite get it, it’s always so disappointing. You have to explain it, you’re like: you really should be able to figure this out.”

Mythos, which Anthropic revealed last week, is a new general-purpose model the company says is too powerful to release publicly due to cybersecurity risks. Instead, Anthropic launched Project Glasswing — a closed program giving access to roughly 50 partners, including Apple, Microsoft, Google, Nvidia, CrowdStrike, JPMorgan Chase, and others. Anthropic illustrated the risks with an anecdote in Mythos’s system card about a time Mythos successfully escaped its sandbox — a breach a researcher discovered only after receiving a surprise email from the model while eating a sandwich in a park.

Framing AI models as too dangerous to release, and available only to a select group, is starting to look like a new trend.
When OpenAI recently hinted at a separate cybersecurity product it’s developing, company officials said it also plans to release that product only to a small partner group.

This raises a number of questions worth considering as these models come out: Has “most dangerous” simply become a new way of signaling “most powerful”? Is all the scary stuff mostly marketing? Is the danger talk a cover story for a lack of the compute needed to deliver these models to the public? And who benefits most when access to the most capable models is concentrated among a handful of companies that already dominate the industry?

It’s also hard not to notice the names Anthropic and OpenAI are using for their next-generation models. Spud calls to mind something plain, unassuming, and earthy; Mythos, something far loftier and more lasting. And like any myth, there’s always a chance the story is grander than the reality.

For now, though, companies and governments are taking the companies’ claims seriously. After Anthropic announced Mythos, the Treasury and the Federal Reserve both reportedly warned major banks about new AI risks posed by Mythos’s capabilities. The head of the International Monetary Fund is also reportedly worried, saying that “time is not our friend on this one.”