Scroll through any tech news feed and you'll find two dominant narratives about AI. On one side, venture capitalists like Marc Andreessen publish manifestos declaring that "intelligence is the ultimate engine of progress" and promising technology will liberate the human soul. On the other, respected researchers sign open letters warning of "extinction risk" and comparing AI development to nuclear weapons proliferation.
Both camps share more than their certainty. They're drowning out everyone else, including plenty of smart AI researchers who don't subscribe to either vision. These researchers show up to conferences, publish papers, and do the slow work of studying how these systems actually behave in the real world and how they are likely to evolve. But their perspectives rarely break through because they lack the dramatic appeal of salvation or extinction. They're stuck arguing for something far less quotable: that we might be looking at a technology that's genuinely important but ultimately ordinary.
What ‘normal’ actually means
A paper published last spring by Princeton researchers Arvind Narayanan and Sayash Kapoor offers a useful framework. Their argument is deceptively simple: what if AI is just normal technology?
Normal doesn't mean insignificant. The printing press, electricity, and the internet all fundamentally changed the world. But they did it piecemeal, over decades, through messy processes of adoption that gave societies time to respond. Factory owners didn't immediately understand how to harness electric power. It took years of experimentation with layouts, worker training, and production processes before productivity gains materialized. The technology was revolutionary, but the revolution was gradual.
This framing cuts against both the utopian and dystopian visions. It suggests we don't need to prepare for superintelligent AI taking over the world or plan for college graduates getting jobs on spaceships by 2035, as OpenAI's Sam Altman recently suggested. Instead, we need to think about discrimination in hiring algorithms, the erosion of the free press, labor displacement in specific industries, and all the other problems that come with any powerful new tool.
That's a long list, but these are the kinds of problems we've solved before. We developed food safety regulations when industrialized production created new risks. We made commercial aviation remarkably safe through decades of crash investigations, pilot training standards, and maintenance requirements. None of it was easy or fast, but we did it. If AI follows this pattern, we're not flying blind. We can do the hard, slow work of applying lessons we've already learned.
After the gold rush
The problem is that this framing doesn't serve the AI industry's interests. Tech that will take decades to pan out won't help you raise billions in venture capital this quarter. And it doesn't offer a get-out-of-jail-free card for companies that want to skip safety testing on the grounds that we're in an arms race with China. The extreme narratives are useful precisely because they justify extreme responses, whether that's showering AI labs with cash or exempting them from oversight. The boring middle justifies nothing except careful, deliberate work.
That work can still account for the extremes. We stress-test banks for financial crises that may never come. We write earthquake codes for cities that might not shake for decades. Planning for normal doesn't mean ignoring tail risks. It means not planning only for them. It also means asking the question that doesn't raise money: what if we bet on apocalypse or utopia and neither shows up? Then we've spent years having the wrong conversation.
The middle path requires something harder than prophecy. It requires patience, empiricism, and the willingness to admit we don't know how this plays out. That's not a satisfying story. But it might be the right one.
—Jackie Snow, Contributing Editor