Tech Brew // Morning Brew // Update
Meta considered features such as a Santa bot and a Tamagotchi pet.

The Streisand Effect strikes again. Microsoft banned the word "Microslop" from its official Copilot Discord server this week, autoblocking any message containing the term as an "inappropriate" phrase. Users immediately started spamming workarounds like "Microsl0p."

So moderators banned accounts, locked parts of the server, and hid the message history before seemingly giving up and dropping the ban. Big mistake. The server exploded with "Microslop" spam, with one of the top suggestions in #server-ideas being to delete the entire thing.

This comes two months after CEO Satya Nadella blogged that "we need to get beyond the arguments of slop vs sophistication." Mission accomplished.

Also in today's newsletter:

  • Meta identified kids in the 5–11 age range as an “unmet need.”
  • AI can unmask your anonymous accounts in record time.
  • ChatGPT’s uninstall spike after the Pentagon deal (and what it’s doing about it).

—Carlin Maine, Whizy Kim, Molly Liebergall, Saira Mueller, and Alex Carr

THE DOWNLOAD

Young child using a phone

Getty Images

TL;DR: New Mexico is nearly a month into a trial that accuses Meta of failing to protect young users from predators, based in part on internal communications from the past decade. In one of these documents, shared exclusively with Tech Brew, Meta cited market research on children as young as 3 years old and brainstormed toy-like product features to attract preteen users.

What happened: In a draft presentation from 2016, titled “The Teens Team,” Meta included a chart on the social media usage of kids aged 3 through 15 based on publicly available market research, and identified the 5–11 range—where online activity was lowest—as an “unmet need.” A few slides later, Meta listed a “Santa messaging bot,” a “Tamagotchi Pet,” and a “Kid Video App” as ideas for engaging users under 13 (who are infamously not teens), potentially via a whole new child-focused platform: “For U13 kids, we should build Facebook One,” reads another slide. The presentation also listed growing time spent among U13 kids as a goal.

As a Meta attorney clarified in court yesterday, Facebook One was never actually created. The company’s only U13 product to date is Messenger Kids, a parentally controlled messaging platform launched at the end of 2017. While Instagram and Facebook prohibit kids under 13 from signing up, Meta reportedly estimated in 2015 that 4 million of them found their way in anyway (potentially including the plaintiff in the current California social media trial, who said she got on Instagram at age 9 during the mid-2010s).

In a statement to Tech Brew, a Meta spokesperson reiterated: “We don’t allow people under 13 to use Instagram or Facebook. These concepts, which came from brainstorms over a decade ago, were never launched. They did help inform the 2017 launch of Messenger Kids, a dedicated, parent-managed service designed for kids to connect with family and friends.”

Why this matters: Though Meta decided against Facebook One and those toy-like features, the fact that they were considered at all speaks to the company’s attitude toward underage engagement and adds to a growing pile of internal communications that seemingly run counter to Meta’s public messaging on youth safety. An internal 2018 Meta document from the California trial reportedly stated: “If we wanna win big with teens, we must bring them in as tweens.”

Fast-forward to the trial in New Mexico—Meta is not only arguing that it takes adequate steps to protect teen users, like building supervision tools for parents, but also that it’s not the only party responsible for their safety. Meta’s attorney said in opening statements seen on Courtroom View Network that the onus also lies with families, schools, state officials, and the child predators misusing Instagram and Facebook (yes, really).

What’s next: The trial is expected to last a few more weeks. Unlike in California, Meta CEO Mark Zuckerberg isn’t set to testify. New Mexico’s attorney general is seeking millions to hundreds of millions of dollars in civil penalties and said he wants to see Meta “broadly” implement “effective” age verification, which many tech companies are facing pressure to do (Apple recently rolled out age verification for apps rated 18+ in a few states and countries). New Mexico’s lawsuit is the first standalone, state-led case against Meta to reach a jury trial in the US, and there’s no telling yet how it’ll play out.

Generally speaking, meaningful regulation on child safety is a lengthy process in the US, even if there’s a wider federal push. After the National Highway Traffic Safety Administration set its first federal standards in 1971, it took 14 years for every state and the District of Columbia to pass laws requiring child safety seats. But we all know tech companies like to move fast and break things. Maybe this time they can fix things. —ML

Presented By iHerb

A stylized image with the words bug report.

@ me maybe (if it’s urgent)

The remote-work era offers many perks, like a better work-life balance (or the ability to work in sweatpants on the couch). But it also comes with its downsides. For example, Tech Brew reader James from Ft. Lauderdale, Florida, is tired of the incessant pinging of work notifications on their company’s communications platform, which constantly (and often needlessly) interrupts the workday.

People need to learn how to properly tag in Microsoft Teams. Some of my coworkers tag me and take 2-5 minutes to write a several-sentence message, usually when I'm in the middle of doing something else. Of course because they tagged me, I assume it's urgent and requires my immediate attention so I stop what I'm doing. Then it turns out to be something that could wait or they could look up the answer to. Sometimes it's something that takes a while to do or I wouldn't know. Then I spend 5-10 minutes trying to remember what I was doing before. I can't just ignore the tag because one out of 50 times it's something that actually can't wait.

Nothing's worse than dropping what you're doing and rushing to read your colleague's end-of-day message only to find out it could have waited until tomorrow. So don’t ruin a coworker’s train of thought—think before you @ them. —CM

If you have a funny, strange, or petty rant about technology or the ways people use (and misuse) it, fill out this form and you may see it featured in a future edition.

THE ZEITBYTE

Social media anonymous user profile

Tech Brew Design

Bad news if you’ve ever used a burner account to rant about your relationship on Reddit: AI is making it dramatically easier to identify pseudonymous users online. A paper published in February by researchers at ETH Zurich and Anthropic found that LLMs are far better and faster than human investigators at "deanonymizing" internet accounts, because they can easily crawl through tens of thousands of posts and match “identity signals” to a known person. Across several experiments, up to 68% of users were correctly linked to their other accounts based on nothing but publicly posted text.

Your identity can be narrowed down based on a surprisingly small number of details and quirks—a prior study found that just a ZIP code, birth date, and gender can uniquely identify 87% of Americans. Previously, sifting through the ocean of unstructured text online to connect those signals to a real person required hours of painstaking work by a skilled investigator. AI can now do that at scale in minutes.
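Why does such a small set of details pin people down? A quick back-of-envelope calculation shows the intuition: there are far more possible combinations of ZIP code, birth date, and gender than there are Americans, so most combinations match at most one person. The round figures below are assumptions for illustration, not numbers from the cited study:

```python
# Back-of-envelope sketch: ZIP + birth date + gender as a quasi-identifier.
# All figures are rough assumptions, not data from the research discussed above.
us_population = 330_000_000
zip_codes = 42_000           # approximate number of US ZIP codes
birth_dates = 365 * 80       # distinct birth dates over an ~80-year lifespan
genders = 2

combinations = zip_codes * birth_dates * genders
people_per_combo = us_population / combinations

print(f"{combinations:,} possible combinations")
print(f"{people_per_combo:.2f} people per combination on average")
# With roughly 2.5 billion combinations and only 330 million people,
# the average combination matches well under one person -- which is
# why such triples so often identify someone uniquely.
```

The actual 87% figure comes from empirical census analysis, but the arithmetic explains why it isn’t surprising.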

Here's how it can work: Maybe you reference the city you live in, let slip that you have two dogs, and post a passionate defense of Madame Web across two unrelated accounts. None of those details are identifying on their own, but the combination of just a few mundane facts narrows the field fast. In one experiment, the researchers scraped references to career history, location, and even the programming languages mentioned on Hacker News accounts, then had an AI agent autonomously search the web to find matches on LinkedIn. It correctly identified 67% of users.

The implications are grim: easier stalking and doxxing of activists and whistleblowers, government surveillance of dissidents, hyperpersonalized scams, or hypertargeted ads based on a throwaway joke you made on Economic Job Market Rumors (not only an actual forum, but one cited in the ETH Zurich paper). The only surefire way to stay anonymous online may be to stop posting altogether—a strategy Satoshi Nakamoto adopted about 15 years ahead of the curve. —WK

Chaos Brewing Meter: /5

A stylized image with the words open tabs.

  • Wellness delivered: Skip the long grocery lines. iHerb ships vitamins, supplements, and other wellness essentials straight to your door. They’ve even got products that are tested for purity and quality standards to boost your wellness routine.*

*A message from our sponsor.

Readers’ most-clicked story was about Anthropic's three AI fluency tips in its recent education report. (You can find them in the “Developing your own AI fluency” box.)

SHARE THE BREW


Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.

Your referral count: 0

Click to Share

Or copy & paste your referral link to others:
techbrew.com/r/?kid=ee47c878

ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011