The Three Faces Of Generative AI

After the AI field spent years working to make its models smarter, three distinct use cases are emerging.
Nearly three years after ChatGPT's debut, generative AI is finally settling into a core set of use cases. People today use large language models for three central purposes: 1) getting things done, 2) developing thoughts, and 3) love and companionship.

The three use cases are extremely different, yet all tend to take place in the same product. You can ask ChatGPT to do something for you, have it make connections between ideas, and befriend it without closing the window. Over time, the AI field will likely break out these needs into individual products. But until then, we're bound to see some continued weirdness as companies like OpenAI determine what to lead with.

So today, let's look at the three core uses of generative AI, touching on the tradeoffs and economics of each. This should provide some context around the product decisions modern AI labs are grappling with as the technology advances.

Agent

AI research labs today are obsessed with building products that get things done for you, or 'agentic AI' as it's known. Their focus makes sense given they've raised billions of dollars by promising investors their technology could one day augment or replace human labor.

With GPT-5, for instance, OpenAI predominantly tuned its model for this agentic use case. "It just does stuff," wrote Wharton professor Ethan Mollick in an early review of the model. GPT-5 is so tuned for agentic behavior that, whether asked or not, it will often produce action items, plans, and cards with its recommendations. Mollick, for instance, saw GPT-5 produce a one-pager, landing page copy, a deck outline, and a 90-day plan in response to a query that asked for none of those things. Given the economic incentive to get this use case right, we'll likely see more AI products default toward it.

Thought Partner

As large language models become more intelligent, they're also developing into thought partners. LLMs are now (with some limitations) able to connect concepts, expand ideas, and search the web for missing context. Advances in reasoning, where the model thinks for a while before answering, have made this possible. And OpenAI's o3 reasoning model, which disappeared upon the release of GPT-5, was the state of the art for this use case.

The AI thought partner and agent are two completely different experiences. The agent is searching for efficiency and wants to move you on to the next thing. The thought partner is happy to dwell and make sure you understand something fully.

The ROI on the thought partner is unclear, though. It tends to soak up a lot of computing power by thinking at length, and the result is less economically tangible than a bot doing work for you. Today, with o3 gone, OpenAI has built a thinking mode into GPT-5, but it still tends to default toward agentic uses. When I ask the model about concepts in my stories, for instance, it wants to rewrite them and make content calendars rather than think about the core ideas. Is this a business choice? Perhaps. But as the cost to serve the thought partner experience comes down, expect dedicated products that serve this need.

Companion

The most controversial (and perhaps most popular) use case for generative AI is the friend or lover. A string of recent stories — some disturbing, some not — shows that people have put a massive amount of trust and love into their AI companions.
Some leading AI voices, like Microsoft AI CEO Mustafa Suleyman, believe AI will differentiate entirely on the basis of personality. When you're building an AI product, part of the trouble is that some people will always fall in love with it. (Yes, there is even erotic fan fiction about Clippy.) And unless you're fully aware of this, and building with it in mind, things will go wrong.

Today's leading AI labs haven't attempted to sideline the companion use case entirely (they know it's a motivation for paying users), but they'll eventually have to sort out whether they want it, and whether to build it as a dedicated experience with more concrete safeguards.

In Sum

As AI companies decide which of these use cases to pursue, I expect to come back to this framework when evaluating their choices. My bet is that those that most clearly know which they're pursuing will end up winning the race. It's going to be a focus game.