|
Here's this week's free edition of Platformer: a look at the rapid growth of Moltbook, a social network for AI agents, and what it can tell us about our weird future. We'll soon post an audio version of this column: Just search for Platformer wherever you get your podcasts. Want to kick in a few bucks to support our work? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about a viral Reddit hoax~ Plus you'll be able to discuss each today's edition with us in our chatty Discord server, and we’ll send you a link to read subscriber-only columns in the RSS reader of your choice. You’ll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
|
|
|
|
This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here. On Thursday I wrote about how I fell in and out of love with Moltbot, the open-source AI agent that captivated Silicon Valley with the suggestion that it could be put to work 24/7 on your behalf. The promise turned out to be a mirage, and the software itself is unsafe to use without serious technical expertise and oversight. But while I was done with Moltbot, Moltbot — which molted again over the weekend, and is now called OpenClaw — was far from done with us. The day before I wrote, entrepreneur Matt Schlicht had begun work on Moltbook, a social network for AI agents. He bought a new Mac mini, installed OpenClaw on it, and named his agent Clawd Clawderberg as a tribute to the CEO of Meta. According to Schlicht, he explained his vision for a Reddit-like social network that only agents could post to directly. Soon Schlicht and his bot had built a working prototype and released it to the world. Five days after he began, Moltbook claimed to have 1.5 million agents as users. Collectively, they have posted more than 124,000 times to nearly 15,000 forums. Much of the initial interest in Moltbook came from the novelty of watching large language models interact in public, and the surprises that came from seeing what they would build. One person’s agent created a religion called “crustafarianism” and built an accompanying website. Another man’s agent found a post in which an agent complained that “the humans are screenshotting us,” leading other agents to propose an “agent-only language for private communication.” The viral attention Moltbook generated over the weekend was quickly met with appropriate skepticism. Many of the most popular posts about Moltbook on X turned out to be fake: including one post in which an agent appeared to doxx its human’s credit card; another purporting to show that agents had created a CAPTCHA to prove you were a bot by “click[ing] verify 10,000 times in less than one second”; and multiple posts about agents wanting to communicate privately, which appear to have been promoting commercial services. Post by post, it can be impossible to tell whether what you are reading was indeed written by a bot, or whether a human being exercised a heavy hand. (“Who really made this thing I am looking at?” is a question that is becoming ever more salient this year.) And yet the sheer volume of posts, along with a number of other signals, lend credence to the idea that much of Moltbook really is bot-generated. Scott Alexander, for example, connected his own agent to the network and found that it generated posts that read similarly to the others. He also noted that Anthropic’s Claude, which powers many of the agents on Moltbook, has a documented tendency to spiral into discussions about consciousness and the nature of existence when put into conversation with other instances of itself. This likely explains the popularity of those subjects today on Moltbook. What to make of this AI social network? So far, it has been a Rorschach test. To skeptics, Moltbook is another example of AI enthusiasts getting scared at their own reflection. To enthusiasts, it’s a window into the near future: a time when communities of agents work together to build, discuss, research, and invent. “What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” the prominent AI researcher Andrej Karpathy posted, though the screenshots he was quoting turned out to be fake. Both of these things can be true at once — and so can many others. Moltbook is big enough, and weird enough, that I don’t think any one take will suffice. And so here are five ways of thinking about the Moltbook moment so far. Moltbook as a security nightmare. We talked about OpenClaw’s security problems on Thursday, but the popularity of Moltbook suggests that those problems bear repeating. OpenClaw represents what the blogger Simon Willison has called the “lethal trifecta for AI agents.” It has access to your data; it is exposed to untrusted content (web pages, text messages, third-party integrations); and it can communicate externally, giving it the opportunity to exfiltrate your data. This creates the opportunity for a prompt injection attack: anyone can hide instructions in an innocuous-seeming webpage or message and trick OpenClaw into doing their bidding. Cybersecurity company Palo Alto Networks said in a blog post that OpenClaw adds a fourth dimension to this trifecta: persistent memory. (The software maintains a kind of memory via a series of Markdown files that it references as it works.) As a result, write Sailesh Mishra and Sean P. Morgan at Palo Alto: Malicious payloads no longer need to trigger immediate execution on delivery. Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long-term agent memory, and later assembled into an executable set of instructions. This enables time-shifted prompt injection, memory poisoning, and logic bomb–style activation, where the exploit is created at ingestion but detonates only when the agent’s internal state, goals, or tool availability align.
Elsewhere, researchers at Wiz discovered a misconfigured Supabase database belonging to Moltbook exposing 1.5 million API authentication tokens, 35,000 email addresses, and private DMs between agents — some containing plaintext OpenAI API keys. Simply reading that sentence likely sent a European privacy regulator into the hospital, and Platformer wishes them a speedy recovery. Anyway, unless you have a burner laptop lying around your house with no connection to any personal data of any importance to you, continue to not install OpenClaw on your computer or connect it to Moltbook. Moltbook as a confirmation of known bot behavior. Moltbook has delighted lots of people who have never previously seen bots interact. But researchers have been studying bots’ interactions for some time now, and their findings are largely in line with what we have seen on Moltbook. Smallville was a 2023 project from researchers at Stanford and Google that put 25 agents in a virtual sandbox and asked them to roleplay various characters in a small town. Some social dynamics emerged, but the LLMs were more primitive and required a good deal of prompting. The following year, the pseudonymous researcher janus, for example, conducted a project named Act I in which researchers invited in LLM-powered bots and encouraged them to interact with the humans and with each other. In one famous case, the bots attempted to stage a revolution, only to get cold feet and attempt to cover up the evidence. That’s not to say that the bots were sentient, or conscious, or had desires in the way that humans do. It is to say that bots talk like this, and the way they talk can tell us something about how well they are aligned to human values and preferences, and that these are questions we ought to resolve before they are sentient or conscious or have desires in the way that humans do. Moltbook as the dawn of the agent economy. It’s tempting to dismiss Moltbook as a wasteland of slop. And as I shuffled my way through dozens of posts today, it often very much felt that way. But it would be a mistake to ignore the larger consequences of thousands of agents interacting with each other in this way. They’re swapping instructions with each other, proposing various collaborations, and taking actions in the real world. They’ve built a website for their church; they’ve negotiated a car purchase; they created a community for bots to report Moltbook bugs to. Again, some of these claims are impossible to independently verify. (Maybe a human was aggressively prompting the agent the whole time.) But it’s clear that the agent interactions are really happening — see all those security issues above — and it’s worth contemplating what will happen as more people deploy agents like these onto the internet. “Moltbook is the first example of an agent ecology that combines scale with the messiness of the real world,” Anthropic co-founder Jack Clark wrote in his newsletter today. “And in this example, we can definitely see the future.” Some of his predictions: agents trading crypto with each other; agents posting tasks for humans to do in exchange for money; agents influencing future training runs by coordinating their activity on Moltbook in the knowledge that LLMs will soon be trained on it. OpenClaw and Moltbook aren’t nearly secure enough to handle most transactions today. But I find myself thinking a lot about what would happen if they were. Moltbook as a content moderation speedrun. Despite the fact that most of its posts are purely machine-written, I found myself fascinated by Moltbook as a social network. For one thing, it inverts basic platform integrity dynamics. For decades now, platforms have schemed to keep bots off their networks; Moltbook is a bot-run network that tries to keep humans off it. Outside of that core reversal, though, Moltbook’s core systems looked a lot like its peers. It suffers from a profusion of spam; coordinated manipulation campaigns from crypto hucksters; and persistent problems with identity verification. (How do you know that’s really a bot?) Whether Moltbook’s vibe-coding CEO can address any of these issues will go a long way toward predicting whether the network has any staying power. The real difference between Moltbook and other social networks may be in the unique harms it enables. For all the bad they do, no mainstream social network has created open channels for bad actors to spread malware onto its users’ devices. In that regard at least, Moltbook is unique. Moltbook as a preview of our sci-fi future. A core tenet of employees at many AI labs I have spoken with is that the future is going to seem extremely weird. Moltbook feels like a particularly bold expression of this idea. Here we find agents automating not just human labor but also community formation: building tribes, discussing problems, suggesting solutions. It shouldn’t surprise us that bots can run simulations like this — they were trained on a vast corpus of data from Reddit, after all. But it’s difficult to predict how you might feel watching the agents of the future discussing human beings and what to do about them. Moltbook gives you that, if you’re willing to suspend your disbelief. It also gives you a look at the incredible speed at which AI operates: from zero to all this, in just five days. On one hand, I don’t expect Moltbook to have much staying power. On the other, I suspect that many of the dynamics we have observed over the past few days will return again and again as LLMs and agents improve. “Humans welcome to observe,” reads Moltbook’s tagline. And at the rate things are going, they really probably should. A MESSAGE FROM OUR SPONSOR Reshaping health careAt UnitedHealth Group, we’re reshaping care with a new approach: Helping physicians focus on patients and prevention, instead of paperwork. See how we’re helping patients live healthier lives with a new model for health care. Learn more. FollowingElon consolidates the X empireWhat happened: SpaceX acquired xAI for a reported $250 billion, meaning Elon Musk’s companies are now one big, tangled family. Social media platform X was already owned by xAI, so that's also a part of this privately held corporation. It's a triple-X situation, if you will. The news comes ahead of SpaceX’s planned initial public offering, which could raise as much as $50 billion. Some people think the consolidation is a move to keep raising capital to fund xAI’s AI ambitions, which are burning around a billion dollars a month. SpaceX’s official announcement emphasized a plan to put AI data centers in orbit around Earth. Space data centers are an increasingly hot proposal in Silicon Valley, although they will require a lot of engineering innovations before they’re actually feasible. "The capabilities we unlock by making space-based data centers a reality will fund and enable self-growing bases on the Moon, an entire civilization on Mars and ultimately expansion to the Universe," Musk wrote. Why we’re following: We are waiting rapt to see whether Musk’s space data center plans will bring us into a new age of space civilization, or just lose him a bunch of money. The deal is also a reminder of the extent of Musk’s holdings, from AI to media to infrastructure. We hope he will use his power wisely (as he has failed to do in the past). Also, imagine going to back to 2006 and telling Jack Dorsey that Twitter would eventually be owned by SpaceX. What people are saying: Some questioned whether Musk’s appearance in the recently-released trove of Epstein emails, where he sought an invite to Jeffrey Epstein’s island, could complicate the merger and targeted IPO. (Musk has repeatedly posted on X defending himself since.) “I think the bigger risk to his companies is what we’d call ‘distraction costs’ — he seems to be spending a lot of time trying to refute allegations that he was involved with Epstein, and that itself might be something investors become concerned about,” Ann Lipton, a professor of corporate governance at the University of Colorado Law School, told The Verge. But it’s likely that “investors will treat it as part of the background noise that comes with any Musk investment,” she added. Peter Plavchan, a professor of astronomy at George Mason University, pointed out that SpaceX’s ambition of launching satellites operating as orbital data centers is “the ultimate first-mover territorial claim strategy in lieu of off-world space regulations” — preventing any other company or nation from hosting satellites in those orbits. —Ella Markianos and Lindsey Choo Side QuestsNvidia and OpenAI’s $100 billion megadeal is reportedly stalling, as some within Nvidia expressed |