Tech Brew // Morning Brew // Update
Plus, we test new Gemini features in Google Docs.

Just in time to piss everyone off at the Oscars, Hollywood’s most controversial nonperson, AI actor Tilly Norwood, has released a music video for a song called “Take the Lead.” It’s an anthem about, drumroll, the creative power of AI and includes subtle lyrics like “AI is not the enemy.” For some reason, the video is also littered with pink flamingos. In one scene, a distressed-looking Norwood faces down a CAPTCHA test (relatable). At the end, an out-of-frame hater throws a brick at her inflatable home, with the anti-AI slur “CLANKER” scrawled across it.

The song was created with AI music generation app Suno (whose CEO, if you’ll recall, said that making music wasn’t enjoyable), but the video was made with the help of 18 humans, including motion capture by Norwood’s flesh-and-blood creator, Eline Van der Velden. Norwood won’t be winning any awards this Sunday, but to be fair, she also won’t be losing any sleep over it.

Also in today's newsletter:

  • AI-generated code hits a speed bump.
  • Meet the people training AI models to dig their own professional graves.
  • Another day, another Waymo in a place it probably shouldn’t be.

—Whizy Kim, Saira Mueller, and Alex Carr

THE DOWNLOAD


TL;DR: A recent string of outages at Amazon’s website and cloud services has revived questions about the risks of AI-assisted coding. Amazon publicly disputes claims that AI-generated code was responsible, but internal discussions and reports suggest generative AI tools may have played a role, underscoring a challenge facing the whole tech industry: how to manage the risks of letting AI write code.

What happened: Yesterday, Amazon engineers devoted a team meeting to “a deep dive” on a recent slew of outages, according to the Financial Times. A memo urging attendance initially referenced “GenAI-assisted changes” and “GenAI tools,” but CNBC reported that the bullet point was later scrubbed from the document ahead of the meeting.

Despite all the cloak-and-dagger, the outages themselves have been quite public. The FT reported last month that Amazon’s coding agent Kiro was responsible for multiple Amazon Web Services outages—including one in December when Kiro took down AWS for 13 hours after “deleting and then recreating” part of its environment. Amazon published a whole blog post disputing that account, blaming the problems on humans rather than AI. Separately, another store outage was tied to Amazon’s coding assistant Q, according to an internal document obtained by Business Insider.

The company is now reportedly tightening oversight, requiring senior engineers to sign off on AI-assisted changes from junior developers.

An Amazon spokesperson disputed the reported new code review process. “As part of normal business, the meeting will include a review of the availability of our website and app as we focus on continual improvement,” they said of yesterday’s meeting.

AI code is everywhere now: The debate over Amazon’s outages comes as software development becomes one of the most widely adopted uses of generative AI. Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have both said that AI is already writing around 30% of their new code. Some top engineers at Anthropic and OpenAI claim they no longer even write code. They just review AI output.

The tongue-in-cheek term “vibe coding” has also caught on for a reason: AI-assisted programming has a reputation for playing faster and looser than the traditionally exacting process of writing code. The result? Quicker development but messier code.

That’s created a new bottleneck: Companies can generate far more software than they can easily review. (This week, Anthropic released a new AI code-review tool to catch bugs in AI-generated code before it reaches production.)

That speed can come with risks: “This sort of issue will become more prevalent because right now, when a human operator acts, they do things with an understanding of the overall environment and knowledge of what they should and should not do,” said Brent Ellis, a principal analyst at research firm Forrester. “An AI, however, will use whatever resources it has access to in order to try to achieve the goal it is given.”

Still, some analysts say it’s too early to conclude that AI-generated code will lead to more outages overall.

“The bar for AI code is the human error rate,” Constellation Research Principal Analyst Holger Mueller said. But, he noted, AI code could create more widespread security vulnerabilities and errors if all of the big cloud companies are drawing from the same AI coding platform.

The bottom line: AI coding has quickly become the norm at many of the biggest tech companies. But the Amazon outages offer a glimpse of the trade-off: When AI lets companies generate far more code, they also have to figure out how to review and control it.


SIGNAL OR NOISE

I let Google's AI rifle through my Drive. Here's what happened

Google rolled out new Gemini features for Docs yesterday (among other Drive products), promising to pull data from across your Google Workspace to do everything from drafting documents from scratch to refining sections and even building project plans. I've been testing them for the past day, curious to see if AI could actually synthesize information across my (admittedly extensive) collection of documents and emails to create something useful.

My first test: "Create a list of all the projects I have in the works." On my personal Drive, which has very few documents, it pulled from ancient Gmail threads (a 2019 website relaunch for a job I no longer work at and my 2022 green card process). Not exactly "in the works." But on my work Drive? Different story. It generated a solid overview of current projects and even pulled action items from things like my 1-on-1 docs, prioritized by urgency.

The real magic happened when I asked it to refine what it generated about the products we’re currently testing for this newsletter. It had pulled info from an old doc instead of our current tracking sheet. I told it to use the new sheet instead, and it struck through the old content and added fresh data (broken down by each tester) from the correct source, letting me accept or reject the changes. Genuinely useful.


For test two, I asked for a weekly team meeting agenda for work. It generated a one-page draft with discussion prompts pulled from our documented pain points. Less impressive: It scheduled the meeting for Tuesday at 10am (right in the middle of our newsletter production process) and left a key person off the attendee list despite suggesting a part of the meeting involving her work. Calendar integration would've helped here, but that's not available yet.

My final test: a full project plan with SWOT analysis for a hypothetical podcast launch. I gave it access to both my Drive and the web. It created a clean milestone schedule with owners and dependencies. The real problem was the SWOT analysis, which only pulled from internal Drive data despite having web access enabled. When I asked it to refine it with external factors, it added generic entries like "Influencer Collaborations" and "External Risk Mitigation"—clearly not from actual web research.

The Good: If you’re also someone who documents everything, this is genuinely helpful as a starting point. The refinement feature is the standout—being able to point it toward specific sources and see tracked changes makes iteration fast. If you're willing to iterate on prompts, these features genuinely save time on first drafts and busywork.

The Bad: It's only as good as your documentation, and vague prompts can get vague results. It's not clear how useful web search as a source actually is. It can pull outdated info even if it has access to newer data in a different format. It’s currently only available to Gemini Alpha business customers and Google AI Pro & Ultra subscribers.

The Verdict: Signal—with caveats. If your documentation is sparse or you expect it to magically know context without explicit direction, you'll be disappointed. —SM

Disclosure: Companies may send us products to test, but they never pay for our opinions. Our recommendations are unbiased and unfiltered, and Tech Brew may earn a commission if you buy through our links.

If you have a gadget you love, let us know and we may feature it in a future edition.

THE ZEITBYTE


They say those who can't do, teach. But in the new white-collar gig economy, those who can do are forced to teach anyway—except their student is a large language model, and the classroom is a computer stuffed with surveillance software that stops paying them the moment they stop typing.

A sweeping new investigation from The Verge and New York Magazine profiles the laid-off lawyers, scientists, and screenwriters who are now producing data for AI training companies—writing ideal prompts and responses, trying to stump models, and grading outputs. One such firm is Mercor, a $10 billion company founded by three then-19-year-olds (now the world's youngest self-made billionaires), but a growing industry of data vendors is competing to harvest professional expertise.

It’s a business that’s found eager recruits. Hiring is at a low in the US, and $45 per hour for copywriting training data looks pretty good when your actual copywriting career got automated. But the work is brutal and erratic—projects launch and vanish without warning, and the workload keeps increasing while the pay decreases. Some workers even admit to using AI to hit their deadlines, something these companies forbid. It's common to wake up one morning and find you've been terminated without explanation.

The magic AI box that can instantly spit out answers has always run on human labor, much of it outsourced to countries where workers earn as little as $2 an hour. What The Verge's reporting reveals is that this kind of gig work is creeping up the income ladder to knowledge workers. And they know what’s on the horizon: After their human know-how is extracted and they no longer have anything to teach the AI, they’re out of a job again. As one Emmy-winning documentary maker put it, "I'm being handed a shovel and told to dig my own grave." —WK

Chaos Brewing Meter: /5

OPEN TABS


Readers’ most-clicked story was about the new, very cheap MacBook announcement.
