Researchers disagree about the speed of gen AI adoption. But one thing’s clear: The tech is increasingly everywhere
Hello and welcome to Eye on AI! In this newsletter… Elon Musk is reportedly raising xAI funding at a $40 billion valuation… Microsoft’s GitHub Copilot goes beyond OpenAI, striking deals with Anthropic and Google… AI ‘slop’ floods Medium… Apple Intelligence gets mixed reviews… U.S. finalizes rules to curb AI investments in China
A research paper making the rounds on social media claims that 40% of U.S. adults have used generative AI at work or at home, which it presents as evidence that people are rapidly adopting generative AI tools. Indeed, the survey found generative AI adoption outpacing PC adoption in the early 1980s, when just 20% of people had used the then-new computers three years after they were introduced.
The paper, titled “The Rapid Adoption of Generative AI” and published last month by the National Bureau of Economic Research, comes from researchers at the Federal Reserve Bank of St. Louis, Vanderbilt University, and Harvard Kennedy School. It was based on responses from just over 5,000 people, said to constitute a representative sample of the U.S. population.
But Arvind Narayanan, a Princeton University computer science professor and coauthor of the recently released AI Snake Oil, called the paper a “hype case study” in a post on the social media service X. The 40% figure, he pointed out, would include someone who simply asked ChatGPT to write a limerick once in the past month. The paper itself reported that only 0.5% to 3.5% of work hours involved generative AI assistance, that only 24% of workers had used it at least once in the week before being surveyed, and that only one in nine used it every workday.
“Compared to what AI boosters were predicting after ChatGPT was released, this is a glacial pace of adoption,” Narayanan wrote, adding that people spending thousands of dollars on early PCs were not just using them once a month.
Based on what I see in my own world, I agree with the more tempered view of AI adoption. Among the people I know, most are not using generative AI tools at all. Many don’t even know what I’m talking about if I mention a tool other than ChatGPT. My husband, in fact, is among the few who would fall under the “super user” category that the Washington Post reported on yesterday: people who regularly use tools such as ChatGPT, Google’s Gemini, or Anthropic’s Claude to learn new skills, create reports, analyze data, and research topics.
But I believe this will change relatively quickly. That’s because it’s becoming nearly impossible for consumers to avoid generative AI text, image, audio, and video tools. If you use Google, you’re seeing AI Overviews with every search. Meanwhile, for months, Google Docs has been prompting me to use its AI assistant. Apple Intelligence just arrived for the iPhone, while Microsoft Copilot is in everything from Word to Excel. As for Meta, its AI assistant is inescapable on Facebook, Instagram, and WhatsApp. Every day, consumers break down and give one of these tools a try and, intrigued by the results, may try again for another task.
I’ll give you an example: One of the most common questions people ask me is whether I use generative AI tools. The answer is yes, but so far I have only found them helpful for specific tasks. For example, headlines for articles are always hard to write, and sometimes I just want some feedback. For a long time, I would paste a tentative headline into ChatGPT, ask the AI to suggest a few other options, and then edit from there.
The results were okay, but not always great. Finally, a few months ago I tried a new tack: I asked ChatGPT, “What do you think of this headline?” The responses have been consistently satisfying. For this essay, for example, I asked ChatGPT what it thought of the following: “Most people you know aren’t using generative AI. But nearly all are still experiencing it.”
ChatGPT responded with thoughts about what was working in my headline (contrast, intrigue, and relatability) as well as suggestions for refinement (something punchier with improved flow). It offered a few other options that I didn’t love, so I went back and forth with it a few times. Each time it got me closer to what I wanted (though my editor and I finally went in a different direction altogether).
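If you wanted to script that same feedback loop rather than use the ChatGPT app, a minimal sketch with the OpenAI Python SDK might look like this. The model name is an assumption for illustration; any chat-capable model would do, and this is not the exact setup described above.

```python
# Minimal sketch: ask a chat model for feedback on a draft headline.
# The model name ("gpt-4o-mini") is an assumption; swap in whichever
# chat-capable model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

headline = ("Most people you know aren't using generative AI. "
            "But nearly all are still experiencing it.")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": f"What do you think of this headline?\n\n{headline}"},
    ],
)

print(response.choices[0].message.content)
```

From there, you can keep the conversation going by appending the model's reply and your follow-up to the messages list, mirroring the back-and-forth described above.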
Whether generative AI reaches the rapid, mass adoption that companies and investors predicted remains to be seen. Some tools have been criticized as “half-baked” (as the New York Times said about Apple Intelligence’s new features yesterday). Plenty of generative AI products will bite the dust because users did not find them, well, useful. But good luck avoiding generative AI right now—it’s already everywhere, whether you are using it yet or not. Companies are crossing their fingers that you’ll give their tools a try, again and again.
With that, here’s more AI news.
Sharon Goldman sharon.goldman@fortune.com @sharongoldman
Request your invitation for the Fortune Global Forum in New York City on Nov. 11-12. Speakers include Honeywell CEO Vimal Kapur and Lumen CEO Kate Johnson, who will discuss AI’s impact on work and the workforce. Qualtrics CEO Zig Serafin and Eric Kutcher, McKinsey’s senior partner and North America chair, will discuss how businesses can build the data pipelines and infrastructure they need to compete in the age of AI.
Elon Musk's xAI is reportedly seeking funding at a $40 billion valuation. Just a few months ago, the startup was valued at $24 billion, when it raised $6 billion. A source said xAI hopes to raise several billion dollars in the new funding round—though the talks are in the very early stages, according to the Wall Street Journal.
Microsoft's GitHub Copilot goes beyond OpenAI. There's another sign that Microsoft's relationship with OpenAI, in which it's a big investor, is becoming less exclusive. GitHub Copilot, one of Microsoft's most successful AI products, used for code generation, will now give developers the choice of using models from Anthropic and Google rather than only OpenAI's models. “There is no one model to rule every scenario, and developers expect the agency to build with the models that work best for them,” GitHub CEO Thomas Dohmke said in a blog post.
AI 'slop' is flooding Medium. “Slop” has become the term of choice to describe low-quality AI-generated content and images. Think clickbait articles, blog posts stuffed with keywords, and photos of people with six fingers. According to Wired, this kind of slop has flooded the blogging platform Medium, far more than other websites. An analysis by AI detection startup Pangram Labs, commissioned by Wired, found that 47% of nearly 275,000 Medium posts sampled over a six-week period were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” said Pangram CEO Max Spero.
Apple Intelligence is here, but people want it to be smarter. Most publications reporting on Apple Intelligence’s new iPhone features reacted much like the Verge, which said “like most AI on smartphones so far, it’s mostly underwhelming.” I have a Samsung Galaxy, so I couldn’t test Apple Intelligence myself, but it’s clear that more features are coming down the pike: In yesterday’s launch announcement, Apple teased features that haven’t debuted yet, including using Siri to take action in apps. For now, users can take advantage of features including AI summarization, email tweaking, and call transcription.
U.S. finalizes rules to curb AI investments in China. The Biden administration announced yesterday that it's finalizing rules to restrict U.S. investments in China’s AI and other tech sectors deemed potential threats to national security, according to Reuters. Initially proposed by the U.S. Treasury in June, the rules follow an executive order signed by President Joe Biden in August 2023, targeting three critical areas: semiconductors and microelectronics, quantum information technologies, and specific AI systems.
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago
Dec. 2-6: AWS re:Invent, Las Vegas
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)
The potential of LLMs for less-used languages. In February, Cohere for AI, the nonprofit research lab established by Cohere in 2022, unveiled Aya, an open-source large language model supporting 101 languages. That's more than twice the number of languages covered by existing open-source models. The researchers also released the Aya dataset, a corresponding collection of human annotations from a community of "language ambassadors" worldwide. This matters because one of the obstacles for less common languages is that there is less source material to train AI models on.
Last week, the organization released Aya Expanse, another family of models meant to help researchers close the AI language gap. "We are a small lab, and this builds on years of dedicated research to connect the world with language," said Sara Hooker, a former Google Brain researcher who has led Cohere for AI since it was founded.
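For the curious, here is a minimal sketch of what trying one of these models might look like with the Hugging Face transformers library. The repository ID below is an assumption based on Cohere for AI's public releases; check the model hub for the exact name, hardware requirements, and license terms before using it.

```python
# Sketch: load an Aya model and ask for a translation into a
# lower-resource language. The repo ID is assumed, not verified here.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-expanse-8b"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user",
             "content": "Translate to Swahili: The library opens at nine."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```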
Could an AI model predict a "gray swan" weather event like a Category 5 tropical cyclone? AI models have been used for years to improve weather and climate predictions. But could an AI model predict a "gray swan" extreme weather event, like a Category 5 tropical cyclone? These events are possible but so rare that they are often absent from the training dataset. That was the challenge researchers from the University of Chicago wanted to take on in a recent paper: They trained two versions of an AI model, one with all data included and another with Category 3-5 tropical cyclones removed.
The researchers wanted to know whether the AI model with the cyclones removed could extrapolate from weaker weather events present in the training set to stronger, unseen weather extremes. Unfortunately, the answer was no. "Our work demonstrates that novel learning strategies are needed for AI weather/climate models to provide early warning or estimated statistics for the rarest, most impactful weather extremes," the researchers wrote.
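To make the setup concrete, here is a toy sketch of that kind of ablation using synthetic data and an off-the-shelf regressor. It is purely illustrative: the synthetic "storms," features, thresholds, and model are assumptions for this sketch, not the paper's data, architecture, or code.

```python
# Toy ablation: train one model on all storm samples and another with the
# strongest storms (Category 3-5) removed, then compare errors on held-out
# Category 5 cases. Everything here is synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
features = rng.normal(size=(n, 4))                      # stand-in predictors
wind = 60 + 40 * features[:, 0] + 10 * rng.normal(size=n)  # target wind speed (kt)
category = np.digitize(wind, [64, 83, 96, 113, 137])       # rough Saffir-Simpson bins

train = rng.random(n) < 0.8
test_cat5 = (~train) & (category >= 5)                   # held-out Category 5 cases

# Model A: trained on all training samples, strong storms included.
model_all = GradientBoostingRegressor().fit(features[train], wind[train])

# Model B: Category 3-5 storms removed from training, as in the ablation.
weak_only = train & (category < 3)
model_ablated = GradientBoostingRegressor().fit(features[weak_only], wind[weak_only])

for name, m in [("all data", model_all), ("Cat 3-5 removed", model_ablated)]:
    err = np.abs(m.predict(features[test_cat5]) - wind[test_cat5]).mean()
    print(f"{name}: mean abs error on Category 5 cases = {err:.1f} kt")
```

Run this and the ablated model underestimates the strongest cases, because tree-based regressors cannot predict beyond the range they were trained on, which loosely mirrors the extrapolation failure the researchers describe.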