Good morning. Here’s the latest:
More news is below. But first, we examine how A.I. is being used against itself.
Prompt wars

Artificial intelligence isn’t just for drafting essays and searching the web. It’s also a weapon. And on the internet, both the good guys and the bad guys are already using it.

Offense: Bots and algorithms perpetrate much of the world’s cybercrime. Con artists use them to generate deepfakes and phishing scams. Want malware to steal someone’s data? A chatbot can write the code. Bots also cook up disinformation. As Israel and Iran fired missiles at each other last month, they also flooded the internet with A.I.-powered propaganda.

Defense: Cybersecurity companies use A.I. to intercept malicious traffic and patch software vulnerabilities. Last week, Google announced that one of its bots had found a flaw that cybercriminals wanted to exploit in code used by billions of computers — likely the first time A.I. has managed such a feat.

Cybersecurity used to be slow and laborious. Human hackers would concoct new attacks, and then security companies would tweak their defenses to parry them. But now, that cat-and-mouse game moves at the speed of A.I. And the stakes couldn’t be higher: Cybercrime is expected to cost the world more than $23 trillion per year by 2027, according to data from the F.B.I. and the International Monetary Fund. That’s more than the annual economic output of China.

Today, I explain what the arrival of A.I. hacking means for the internet — and the billions who use it every day.

The siege

The newest cybercriminals are robots. They write with flawless grammar and code like veteran programmers. They solve problems in seconds that have vexed people for years.

Malicious emails used to be riddled with typos and errors, so spam filters could spot and snag them. That strategy doesn’t work anymore. With generative A.I., anyone can craft bespoke, grammatical scams. Since ChatGPT launched in November 2022, phishing attacks have increased more than fortyfold. Deepfakes, which mimic photos, videos and audio of real people, have surged more than twentyfold.

Because commercial chatbots have guardrails to prevent misuse, unscrupulous developers built spinoffs for cybercrime. But even the mainstream models — ChatGPT, Claude, Gemini — are easy to outsmart, said Dennis Xu, a cybersecurity analyst at Gartner, a research and business advisory firm. “If a hacker can’t get a chatbot to answer their malicious questions, then they’re not a very good hacker,” he told me.

Google, which makes Gemini, said criminals (often from Iran, China, Russia and North Korea) used its chatbots to scope out victims, create malware and execute attacks. OpenAI, which makes ChatGPT, said criminals used its chatbots to generate fake personas, spread propaganda and write scams. “If you look at the full life cycle of a hack, 90 percent is done with A.I. now,” said Shane Sims, a cybersecurity consultant.

Here’s something odd: Attacks aren’t necessarily getting smarter. Sandra Joyce, who leads the Google Threat Intelligence Group, told me she hadn’t seen any “game-changing incident where A.I. did something humans couldn’t do.” But cybercrime is a numbers game, and A.I. makes scaling easy. Strike enough times, and some hits are bound to land.
The fortress

What makes A.I. good on offense — finding patterns in heaps of data — also makes it good on defense. Walk into any big cybersecurity conference, and virtually every vendor is pitching a new A.I. product. Algorithms analyze millions of network events per second; they catch bogus users and security breaches that take people weeks to spot.

Because A.I. is so quick on offense, a mere human can’t play good defense anymore. “They’re going to be outnumbered 1,000 to 1,” said Ami Luttwak, co-founder of the cybersecurity company Wiz.

Algorithms have been around for decades, but humans still manually check compliance, search for vulnerabilities and patch code. Now, cyber firms are automating all of it. That’s what Google said its bot had done. Others are on the way. Microsoft said that its Security Copilot bot made engineers 30 percent faster, and considerably more accurate.

There’s a risk, though: A.I. still makes mistakes, and when it has more power, the errors can be much bigger. A well-meaning bot may try to block traffic from a specific threat and instead block an entire country.

Related: Robots are taking over food delivery, carting fried chicken through Chicago streets and parachuting Panera strawberry lemonade to Charlotte, N.C., The Wall Street Journal reports.
Israel-Hamas War
Epstein Investigation
Politics
International
Other Big Stories
Opinions

The Democratic Party is in shambles. To unify, its midterm candidates should campaign against the Republican budget bill, James Carville writes.

Here’s a column by Ezra Klein on divisions among American Jews.
A.I. friend: Chatbots can get scary if you suspend your disbelief. One woman didn’t — and wound up in a relationship that was strangely, helpfully real.

Trader Joe’s: The grocery chain has no store in London, but its tote bags are all over the city.

Toupee Queen: This woman is working to change the way people talk about men’s hair loss.

Travel 101: Learn some of the local language in your destination before you fly. Here’s how.

For the lucky few: Some airlines and credit card companies are stocking their elite lounges with caviar, sushi bars and big-name chefs.

Metropolitan Diary: Champagne on the subway.

Lives Lived: Peter Phillips was a vanguard figure in the British Pop Art movement of the 1960s who drew from his working-class background to incorporate images of automotive parts, pinups and film sirens in paintings that captured postwar culture’s swirl of sex and consumerism. He died at 86.
Golf: Scottie Scheffler won the Open Championship in Northern Ireland by four shots to capture his second major this year.

N.F.L.: JC Tretter, who was one of the favorites to take over as interim head of the N.F.L. Players Association, resigned from the organization.
TV shows like “The X-Files” have taught audiences to invest in conspiracy theories over the years, the critic James Poniewozik argues. “They didn’t create the breakdown of public trust,” he writes, “but they played it all out on TV.” Read more about how conspiracy thrillers fueled our politics.

More on culture