Data centers are a distraction. The real fight is elsewhere.
A belated reckoning over A.I. oversight has begun
David Wallace-Wells
March 4, 2026

Last week, as New Scientist reported that leading A.I. models kept recommending nuclear strikes during war-game exercises, the Department of War tried to strong-arm Anthropic, its leading A.I. vendor, into backing down from the company's demand that its tools not be used for domestic surveillance or fully autonomous warfare. In general, I'm not an A.I. doomer who thinks existential risk or even thoroughgoing social disruption is right around the corner. But these developments didn't seem great.

Then the Pentagon proceeded to launch an attack on Iran, reportedly with the help of Claude, even though President Trump had banned its use just hours before. It’s possible that one of the first targets was an elementary school in which at least 175 people were killed. (Neither Israel nor the United States has claimed responsibility for the strike.) Simultaneously, the Pentagon negotiated a new contract with OpenAI, whose tools had been previously judged inferior to Anthropic’s but which appeared to Secretary of War Pete Hegseth to be a bit more accommodating when it came to red lines. None of that seemed great, either.

Dean Ball, a former White House policy adviser who wrote the Trump administration’s A.I. strategy, called the Pentagon’s turn on Anthropic “the first major public debate that is truly about where the proper locus of control over frontier A.I. should be.”

“Each of us gets to choose which futures we wish to fight against, which we can live with and which we will fight for,” Ball went on, sounding less like a policy wonk than like a proselytizer or political organizer. His much-circulated essay may mark the beginning of a belated reckoning about who should be in charge of the A.I. future that we are now routinely told is unnervingly imminent.

Perhaps, like a lot of people in Silicon Valley, you don’t really trust that the government understands the new tech well enough to helpfully shape it. Perhaps, like many Americans who regard Big Tech with growing skepticism, you don’t trust the heads of the major A.I. labs to design the future for human benefit, and find yourself put off by the way that so many of them talk casually about broad human redundancy. Perhaps you find it strange that A.I. advocates who once invoked the possibility of post-scarcity universal basic income are now more likely to refer to cancer treatment as the upside of the A.I. bargain for the average American — without sketching out the path to a cure in an especially informed or persuasive way.

Whatever ideological baggage you bring to the question of A.I. oversight, the political deadlock is striking, and has left us with a pretty strange, quite contemporary, very American status quo. The country is hugely anxious about what’s to come while at the same time seeming to lack real faith in anyone, or in any institution, to actually manage it. One common analogue for artificial intelligence is nuclear weapons. But it’s not like we just let J. Robert Oppenheimer and Edward Teller decide for themselves what to do with the bombs.

Several years ago, with the country reeling from its first encounter with ChatGPT, Congress staged a series of high-profile hearings on the regulatory landscape ahead, on the premise that, if A.I. was a world-changing technology, something should be done to establish guardrails around it. After a closed-door meeting with leading researchers and C.E.O.s, Chuck Schumer, then the Senate majority leader, reported that every person in the room had agreed that the government should play a role in regulating A.I. The question was what, exactly, that role would entail.

In the years since, the legislative answer has been almost nothing. An administration that could sometimes be caricatured as safetyist gave way to one more easily described as accelerationist. The major A.I. companies quickly grew so large and so important to the near future of the American economy that they began to seem not only too big to fail but perhaps so big that the government was scared to interfere with them. And now, partly in response, a genuinely democratic backlash is brewing; for the moment, though, it seems organized somewhat myopically around resistance to the construction of data centers.

You might think that, as Americans came to terms with A.I., using it more in their own lives and watching it become diffused throughout our economy and culture, it would seem less alien and more normal. In fact, the country has grown less excited about it and more concerned.

Probably that should not surprise us, given the speed of forecast improvements, the epochal rhetoric and the broader culture into which the technology has arrived. One way of understanding the past decade of populism is to see it as the worldwide howl of voters who feel that in the age of globalization and the new financial elite they are losing control over key aspects of their lives — their economic and social prospects, yes, but also the shape of their own nations and even the public understanding of gender, among other causes and gripes. A.I. arrives in that landscape like an all-encompassing symbol of people’s powerlessness, which is already here but is bound to grow worse, heralding a vision of the future in which much of the ordering of society has been handed over to robots operating in black boxes controlled by a small number of immensely wealthy people.

Increasingly, voters seem to be trying to take things into their own hands, rising up in opposition to the intrusion of A.I. infrastructure into their local communities. These fights are likely to produce environmental victories for those communities. But they are also a game of Whac-a-Mole, one that activists and anxious citizens are playing in part because, compared with diffuse fears about A.I., the fight against data centers is targeted and tangible.

“The Data Center Backlash Is Swallowing American Politics,” Jael Holzman, writing for the climate publication Heatmap, declared in November. “Nearly every week now across the U.S.,” she wrote, “Americans are protesting, rejecting, restricting or banning new data center development.” One month later, Senator Bernie Sanders of Vermont began calling for a moratorium on new construction, like the one the Denver City Council has considered. New York State is weighing a similar measure, one of a growing number of states to do so.

But you could spot the political effect well before all that in nearly every one of the off-year November elections — perhaps particularly in Georgia, where Democrats won two seats on the state’s public service commission, which oversees energy and electricity. Younger voters especially hated the building of data centers, Holzman noted, an interesting inversion of the conventional pattern in which younger people are both more tech-friendly and less opposed to change than their parents.

A year ago, when Heatmap surveyed registered voters across the country and asked whether they would support the construction of a data center near where they lived, the response was genuinely mixed — 44 percent said they’d support such a project, and 42 percent said they’d oppose it. But when the survey was repeated in February, the contrast was stark: 52 percent were now opposed, and only 28 percent supportive. The net approval of data centers had dropped 26 points in a year. That’s more ground than Donald Trump has lost, in what is now widely considered a catastrophic year for his popularity (not to mention the health of the democratic project).

This all makes NIMBY opposition to data centers look like a political winner at the local level. But can it address more nebulous anxieties about a technology that is now casually compared to the industrial revolution, which led to many decades of turmoil and strife before life expectancy and living standards began to trend upward, or the invention of the printing press, which produced — among other things — generations of religious civil war across Europe?

Personally, I find these comparisons a bit grand, and am skeptical that A.I. will bring about two hundred years’ worth of economic and cultural transformation in just a couple of decades, as Microsoft’s Satya Nadella, who is among Silicon Valley’s more careful C.E.O.-prognosticators, has suggested it might. But I also don’t know that slowing the rate of data center construction will stop A.I. from introducing machine hallucinations into the conduct of war; from destabilizing employment patterns by bringing about widespread layoffs, and contorting surviving kinds of work, too; from polluting the internet with slop; from interfering with education; from making dating an exchange between chatbots; and from turning what we call writing into something extruded by prompting, and moviemaking or music into things done entirely within a machine.

Perhaps more to the point: I don’t know how it will address fears that a small group of tech oligarchs are working feverishly to design a future in which many of the rest of us might be rendered functionally obsolete. “The cultural and economic impact of A.I. is going to be the biggest issue in politics over the next decade,” Senator Chris Murphy of Connecticut said in December, expressing what has become an even more common refrain in the couple of months since. “I think we have not a clue,” said Sanders in announcing his data center bill. “We are totally unprepared for what is coming,” he added, predicting “massive job loss” and widespread “cognitive decline.”

I want to repeat: I’m not an A.I. doomer. I don’t think an overlord is arising out of silicon in the next 18 months, and I don’t expect “most, if not all,” white-collar jobs to disappear in that time either, as Microsoft A.I.’s Mustafa Suleyman has suggested. The hype fog is thick, if not quite impenetrably so. I’m heartened to see a few political leaders doing their best to peer through it, with Senator Brian Schatz of Hawaii promising legislation to address the risk of labor loss, and Representative Ro Khanna of California recently laying out several principles for A.I. governance in his Silicon Valley home district.

But this is a very fast-moving technology, and it doesn’t seem likely that any meaningful A.I. legislation will become law without the White House changing hands. We are now just as far from that possibility as we are from those first A.I. hearings on Capitol Hill way back in 2023, which tells us that an awful lot can change in three years. In the past three, we’ve gone from casual users freaking out about their first encounters with ChatGPT to the Pentagon staging industrial-strategy-level fights over whether fully autonomous A.I.s can be deployed in war zones without any human oversight. In the next three? Those exponential curves may not bring us to a new godhead, but the genie doesn’t exactly look like it’s going back in the bottle, either. And those hoping to play a more active role in shaping the future that it’s conjuring will probably have to do more than stop ground from being broken for a few new data centers.
