Tech Brew // Morning Brew // Update
Plus, how to fool AI in 20 minutes or less.

We hope your Thursday is better than Mark Zuckerberg’s Wednesday. Meta’s CEO took the witness stand in Los Angeles yesterday during a landmark trial examining social media’s impact on young users. Facing a jury for the first time, Zuckerberg joked that he was poorly media trained, pushed back on claims that Instagram is addictive, defended beauty filters on the app, and said his company was no longer prioritizing “time spent” goals on the platform. That came days after Instagram head Adam Mosseri suggested that using social media for 16 hours a day doesn’t qualify as addiction. Maybe to him, it’s just a really ambitious screen time goal.

While Meta fights scrutiny over its platforms, it’s also doubling down on technology that raises a whole new set of surveillance issues. Keep reading for more.

In today's newsletter:

  • Meta is quietly working on a controversial update for its Ray-Ban smart glasses.
  • A tech journalist scammed ChatGPT and Gemini in 20 minutes.
  • A picture is worth 1,000 words. But a new photo of Sam Altman and Dario Amodei might be worth a million.

—Whizy Kim, Diba Bijari, Saira Mueller, and Alex Carr

THE DOWNLOAD

Meta Ray-Ban Glasses

Meta

TL;DR: If you’ve ever squinted at someone from across the bar and thought, “Wait, did I go to college with them?” Meta wants to help—and will likely open a can of worms in the process. The company is reportedly working on facial recognition for its Meta Ray-Ban smart glasses, reviving a technology it previously shut down over privacy concerns. This time, Meta is betting that both the product and the moment are finally in its favor.

What happened: The New York Times first reported that Meta is working on facial recognition software, internally dubbed “Name Tag,” that would allow Ray-Ban wearers to identify people nearby and see information tied to their Meta profiles. The company says it will take a “thoughtful approach” to evaluating the feature and hasn’t announced an official launch, but reports say it could arrive as soon as this year. That would mark a notable reversal for a business that once argued the technology had become too risky to continue.

Context, please: This isn’t Meta’s first rodeo. It actually shut down Facebook’s original facial recognition system in 2021, citing “growing societal concerns.” Along the way, Meta faced years of legal scrutiny, including a $5 billion Federal Trade Commission fine over privacy violations. The company also agreed to biometric data settlements totaling more than $2 billion in Illinois and Texas.

What’s changed: A few things. First, Meta’s Ray-Ban glasses have quietly become one of the company’s rare hardware hits, selling millions of units (Zuck’s courthouse entourage even showed up wearing them at trial yesterday). Facial recognition technology would make Meta’s glasses far more powerful—allowing its AI assistant to recognize people in real time and provide context automatically. Also worth noting: The political and regulatory environment has shifted under the Trump administration, which has signaled it could be less aggressive on privacy enforcement.

What’s next: Glasses that recognize people raise painfully obvious questions, including whether people can be identified without consent and whether that data could eventually be accessed by law enforcement. (Reminder: Customs and Border Protection agents wore Meta glasses during raids last year.) And even in a friendlier regulatory environment, consumer backlash is still a risk. Just look at the reaction to the Amazon Ring Super Bowl ad, which led to an outcry about AI-powered surveillance. Meta has spent years betting on how much privacy people are willing to trade for convenience. Now, with AI glasses gaining traction and the environment shifting in its favor, it’s betting the answer might be: more than before. —AC

Related story: Meta’s also got plans to launch a smartwatch later this year.

Presented by Got Print

A stylized image with the words bug report.

Apple “fixed” screenshots (they weren’t broken)

Apple has a problem: It can’t leave well enough alone.

Case in point: iPhone screenshots.

For years, the process was second nature. Press two buttons (lock and volume up). The screen flashes, and the screenshot appears, ready to save, edit, or delete. The physical buttons haven’t changed—but, for some reason, the interface after capturing has.

Previously, there was a clear “Done” button in the top-left corner. Now it’s been replaced by an “X.” That would be fine if the two led to the same outcome, but the latest update offers less. “Done” gave you options: save the photo, copy and delete it, or delete it outright. Pressing “X,” meanwhile, deletes it immediately. To access save options, you now have to tap a checkmark in the top-right corner, on the opposite side of the screen.

This isn’t an impossible adjustment. The tech whizzes reading this newsletter will adapt quickly. The issue isn’t difficulty; it’s necessity. iPhone features rely on muscle memory, the unspoken efficiency that lets you capture and share a screenshot in seconds without thinking. When familiar buttons move for (seemingly) vibes-based reasons, isn’t it just creating inconvenience? What’s the measurable purpose?

My take: When something works and creates no friction, that’s usually a sign its UX has reached maturity. Changing it for the sake of visual novelty, without functional improvements to justify the shift, doesn’t feel like progress. —DB

If you have a funny, strange, or petty rant about technology or the ways people use (and misuse) it, fill out this form and you may see it featured in a future edition.

THE ZEITBYTE

Hand puppeteering AI Chatbot text box

Adobe Stock, Getty Images

It turns out you can gaslight an AI chatbot in the same amount of time it takes to order DoorDash. You don’t need to be a hacker or an IT pro. You don’t need hours to push a model past its guardrails. All you need is a blog post and a dream. That’s exactly what BBC reporter Thomas Germain just demonstrated. He published a post on his personal website claiming he was the world’s top hot dog-eating tech journalist. Within 24 hours, both ChatGPT and Google’s Gemini confidently reported Germain was the Joey Chestnut of tech reporting. (Anthropic’s Claude still managed to call BS.)

The vulnerability is almost too simple: When these models don’t have much training data on a topic, they search the web and trust the first results they find, like your uncle who texts you dubious health claims he read online. AI bots don’t stop to ask whether it makes sense that there would even be a ranking of tech journalists snarfing down hot dogs or to confirm that the South Dakota International Hot Dog Championship—which Germain cites in his blog—exists. One SEO strategist told Germain that this could be “a Renaissance for spammers”—the AI-era reincarnation of the junk sites that polluted Google for decades.

This wouldn’t be so concerning if humans were more skeptical. According to Pew, people who see an AI Overview at the top of a search page are less likely to scroll down to click on link results and very unlikely to click any sources cited in the AI summary. We wish that were fake news. —WK

Chaos Brewing Meter: /5

A stylized image with the words open tabs.

*A message from our sponsor.

Readers’ most-clicked story was about what ChatGPT thinks about your state.

SHARE THE BREW

Share The Brew

Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.

Your referral count: 0

Click to Share

Or copy & paste your referral link to others:
techbrew.com/r/?kid=ee47c878

ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011