Here's this week's free edition of Platformer: a look inside Meta as the company gets hit with a 1-2 punch of AI-related surveillance and mass layoffs, and an exploration of what it means for white-collar work in general. We'll soon post an audio version of this column: just search for Platformer wherever you get your podcasts, including Spotify and Apple. Want to support more independent reporting like this? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent piece about the potential end of the Meta Oversight Board. Plus you'll be able to discuss today's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
This is a column about AI. My fiancé works at Anthropic. See my full ethics disclosure here.

Having your every click, tap, pause, and scroll monitored has long been part of the bargain of using Facebook and Instagram. Now it's part of the bargain of working there, too.

Reuters reported this week that Meta is installing software on the computers of U.S.-based employees that captures their mouse movements, clicks, keystrokes, and occasional snapshots of the contents of their screens. The program, called the Model Capability Initiative, or MCI, is meant to train AI agents to perform computer tasks more like humans do. In an internal memo, Meta CTO Andrew Bosworth described a future in which agents "primarily do the work" while employees "direct, review and help them improve."

Meta says the data collected will not be used in performance reviews, and that safeguards are in place for "sensitive content." Still, the move provoked deep concerns among employees I've spoken with, and in screenshots of internal discussions obtained by Platformer. (Sources earlier reported on some of the messages.)

They asked how the company would avoid capturing users' personally identifying information, or their own health- or finance-related data, particularly given that the tool is allowed to observe them on Gmail. ("Gmail is an approved context so if you have concerns it may be best not to check personal email on your work computer," Bosworth responded.)

They asked whether the program had been subjected to a privacy review and what safeguards, if any, had been put into place to prevent data misuse. ("This project completed a privacy review," Bosworth said. "Not sure 'what kind' you mean but, the usual kind?")

And when one employee asked if there was any way to opt out, Bosworth took the opportunity to remind them who is in charge. "No there is no opt out on your work provided laptop," he said.

(Technically, there is one way to opt out: relocate to Europe.
European privacy laws and worker protections prevent invasive tracking of the sort represented by MCI, and so Meta can't implement it there. It turns out GDPR really was about more than just cookie banners.)

Meta contractors have long labored under much worse conditions. In 2019 I began writing about the lives of Facebook content moderators, whose work was closely monitored by automated systems and who could be fired for making just a few errors in a week. Data labelers and model raters for Meta and other companies operate under similar levels of surveillance and job precarity.

MCI, by contrast, has been presented to employees as relatively benign: a silent observer that will record their workplace actions to help build systems to deliver on Meta's new mission of "personal superintelligence."

"If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus," a company spokesman told me. "To help, we're launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models."

For years, tech companies have asked contractors to behave like machines so that machines can learn to behave like people. Now Meta is asking its own full-time employees, who once occupied the top of the digital labor hierarchy, to do the same.

There is a word for this in the history of work: Taylorism. A century ago, managers hovered over factory workers with stopwatches, breaking down skilled labor into measurable motions so it could be standardized, sped up, and assigned to cheaper workers. Last year I visited an Amazon fulfillment center and saw that logic at work: automated systems told workers what to pick, pack, and route, monitored their pace, and were poised to intervene should they fall behind.

Meta's MCI is not a stopwatch, exactly.
But it reflects the same impulse: make knowledge work legible to AI systems, capture it, optimize it, and automate it. Initially, most Meta employees won't feel any effects from the system at all. If it works, though, eventually it might replace them.

None of this comes as a surprise, really. In June 2025, Meta paid $14.3 billion for a 49% stake in Scale AI and installed its co-founder and CEO, Alexandr Wang, as the head of its new superintelligence team. Scale built its business on harvesting workflow data from contractors.

"For a lot of the capabilities that we want to build into the models, the biggest blocker is actually a lack of data," Wang told an interviewer from Andreessen Horowitz in 2024. "There's no pool of really valuable agent data that's just sitting around anywhere. And so we have to figure out how to produce really high quality data."

MCI appears to be one such effort to figure it out.

At the same time Meta ratchets up monitoring of its workforce, it is also shrinking it. The company confirmed today that it will lay off 10 percent of the workforce — about 8,000 people — as part of a continued push for "efficiency" as it looks to spend up to $135 billion this year in its buildout of AI infrastructure. It also will not fill 6,000 open positions.

Those cuts will bring Meta's headcount down to just above where it was at the end of 2023, when a year of cuts slashed its ranks by more than 20,000 people. But among employees I've spoken with, rumors are rampant that much bigger cuts are coming. Mark Zuckerberg laid out a relevant vision of the future on the company's most recent earnings call: "We're starting to see projects that used to require big teams now be accomplished by a single very talented person," he said.

Meta will not be the last company to install MCI-like systems on workers' devices to help build systems that might one day replace them.
With the most accessible stores of human-written text already heavily mined for model training, fears of a "data wall" are driving more companies to find ways to generate their own unique data sets. And it seems that one way to do that is to bring the logic of blue-collar labor management into white-collar jobs that were once defined by their autonomy, judgment, and trust.

The result is that the people who were once entrusted with building the machine have now become raw materials for it. At Meta, that used to be what the users were for. Now it's what the employees are for, too.

On the podcast this week: Kevin and I discuss Tim Cook's tenure at Apple. Then, Andrew Yang joins us to talk about being too early to the idea of universal basic income and why it's making a comeback. And finally, some HatGPT.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Following
SpaceX gears up for its IPO

What happened: In a new S-1 filing viewed by Reuters, SpaceX appears to be moving away from its namesake and toward the hottest thing in Silicon Valley today — AI for businesses.

In the new filing, SpaceX estimates its total addressable market could be worth as much as $28.5 trillion. Of that staggering figure, the space-turned-AI company estimates that more than 90 percent could come from AI services, and more specifically from AI for enterprises. A TAM estimate can help investors evaluate a company's potential, but offers no guarantee of how well it will actually perform. So while SpaceX brags about identifying the largest actionable TAM "in human history" in its filing, it still has a long way to go to get there.

Why we're following: Elon Musk has been on an AI consolidation spree. SpaceX acquired xAI (which already owned X) for a reported $250 billion in February. On Tuesday, the company said it had an agreement giving it the right to acquire AI startup Cursor for $60 billion, or to pay $10 billion as a kind of break-up fee. Microsoft, which has been trying to gain traction with its AI coding tools, reportedly considered buying Cursor before the SpaceX announcement, though it later chose not to proceed.

Of note: the pseudo-acquisition of Cursor isn't yet a real acquisition because of the impending IPO, a source told Bloomberg. A major acquisition would mean updated filings and financials, and would potentially delay the offering.

The Cursor acquisition would give SpaceX a significant leg up in the AI coding market — 67 percent of Fortune 500 companies use its tech, Fortune reported. Then again, Cursor currently has access to Anthropic's Claude models — and xAI doesn't. Will Anthropic cut access to Cursor, which is one of its largest customers? What will Cursor customers do if it does? While we wait to find out, SpaceX is targeting a summer IPO at a valuation of $2 trillion. That would make it the biggest IPO ever.
What people are saying: At Stratechery, Ben Thompson thinks the deal makes sense: since Elon basically decided to dissolve and restart xAI, he needs someone to use all the data centers he's built. So it makes sense to get an AI coding startup to do it. "SpaceXAI has a ton of compute, and no one to use it, either for R&D or inference," Thompson writes. "There is really obvious synergy between SpaceXAI and Cursor: the former has compute, and the latter has a product, data, and a decent amount of distribution for the use case that is most important for AI."

Bloomberg columnist Matt Levine had a colorful explanation of why SpaceX couldn't yet acquire Cursor. "There's an IPO! In like two months! It's bad enough that the SpaceX IPO became Also The xAI And Twitter IPO in February, but making it also the Cursor IPO now is too much." He added the Cursor deal could be a good way to get talent who would otherwise be skeptical about working for xAI. "If you're leaving your startup to go work for Musk, a famously demanding and mercurial boss, you will want to get cashed out of your startup. Selling for $60 billion is a good deal; going to work for him on spec for a few months is not." But, Levine said, "Of course Musk does change his mind a lot. It would be very funny if he sours on Cursor by July and walks away from the deal, and they make $10 billion for three months' work." —Lindsey Choo and Ella Markianos

Side Quests

The White House accused China of stealing tech from US AI labs on an industrial scale.
An in-depth examination of how AI-generated CSAM is overwhelming law enforcement teams.
Anthropic outspent OpenAI in Q1 2026 in their largest lobbying quarter yet; Anthropic spent $1.6 million, and OpenAI spent $1 million. (Why? Did something happen?)
OpenAI has reportedly briefed federal agencies and Five Eyes allies on its new cyber product. (Who's "fear-based marketing" now!)
Chinese cybersecurity firm 360 Digital Security said it developed an AI agent that has discovered 1,000 previously unknown vulnerabilities.
Kalshi suspended three Congressional candidates from Minnesota, Texas and Virginia amid allegations of insider trading.
More than half of the world's nations could have tech capable of hacking into the UK's infrastructure, UK intelligence warned.
London's police force can continue using facial recognition to identify suspects, a judge ruled.
Apple fixed a bug that allowed police to extract iPhone and iPad messages that were deleted or had disappeared.
Turkish lawmakers passed a bill that restricts social media access for those under 15.
Los Angeles became the first major school district to restrict students' use of laptops and tablets in class.
Australia asked gaming platforms including Roblox and Minecraft to detail their child safety measures.
New gas projects linked to 11 US data centers could create more greenhouse gasses than entire countries, a review found.
Environmentalists in Brazil are pushing back on TikTok's planned $9.5 billion data center on the country's coast.
Major corporations including Apple and Amazon are pushing back on tightening emissions reporting rules.
A look at how Microsoft appears to be turning its back on carbon removal.