TL;DR: Despite an ongoing legal battle with the Pentagon, Anthropic is quietly rebuilding its relationship with the Trump administration, and the government's appetite for its most powerful AI model appears to be winning out over politics. The detente underscores that the US doesn't want to, and can't afford to, sideline cutting-edge AI.

What happened: Just over a month after the Pentagon moved to block Anthropic's models over "supply chain risk" concerns, signs of a truce have emerged. On Friday, Anthropic CEO Dario Amodei sat down with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss how federal agencies could access Mythos, the company's most advanced model, which Anthropic says carries enormous cybersecurity risks. (So far, Mythos has been released only to a select group of organizations.) Both sides called the meeting productive. Meanwhile, the National Security Agency reportedly has access to Mythos Preview.

It's not personal: The shift is largely a matter of necessity. Anthropic's latest models are seen as best in class, particularly for cybersecurity, and walking away from that capability would mean giving up a competitive edge. According to reports from Axios, Wiles and Bessent both want the government to have a relationship with Anthropic. Bessent has reportedly also encouraged major bank CEOs to test Mythos, and a US official told Axios that "every agency" except the Pentagon wants access to Anthropic models. There's also a bigger strategic concern: sidelining a major player like Anthropic could put the US at a disadvantage in the broader AI race.

So much for "corporate murder": When the Pentagon fight started in February, some wondered whether the supply chain risk designation would be fatal for Anthropic. Hardly.
Not only did the company's annual revenue run rate recently increase to $30 billion (up from $9 billion at the end of last year) thanks to soaring demand for its products, it's also now valued at around $800 billion (basically the same as OpenAI).

Bottom line: Agencies other than the Pentagon are likely to expand testing and deployment of Anthropic models, especially in areas like cybersecurity and intelligence analysis. The government will probably end up with a split relationship, suing the AI company with one hand and using its tools with the other. Meanwhile, Anthropic is starting to look like an AI zombie: The Pentagon tried to kill it, and it came back stronger. —AC