I’ve reported on cybersecurity for years. From Obama to Trump, here are three cautionary tales on the government’s adoption of emerging tech.

ProPublica

Dispatches

March 28, 2026

In this week’s Dispatches: As the Trump administration rushes toward AI, ProPublica reporter Renee Dudley distills three lessons from her reporting into the federal government’s adoption of emerging technology over the years.

 

As a cybersecurity reporter at ProPublica, much of my work over the past two years has focused on how the federal government and its IT contractors, like Microsoft, have navigated major technological transitions. The one now in the news every day is artificial intelligence.

Renee Dudley, ProPublica reporter

This emerging technology has its grip on everyone: Home users, corporations and the federal government are all rushing to use it. President Donald Trump and his Cabinet say AI will transform the nation, making us more prosperous, efficient and secure — if only we can adopt it fast enough. 

 

But this messaging isn’t new. President Barack Obama’s administration used nearly identical language a decade and a half ago as the U.S. barreled into the technological revolution of cloud computing.

 

I’ve studied how the federal government has handled — and mishandled — this transition over the past two decades, and my reporting offers some cautionary tales and valuable lessons as policymakers encourage the use of AI and federal agencies adopt the technology.

Lesson 1: There’s no such thing as a free lunch

Then: In the early 2020s, a series of cyberattacks linked to Russia, China and Iran left the federal government reeling. The Biden administration called on major tech companies to help the U.S. bolster its defenses. In response, Microsoft CEO Satya Nadella pledged to give the government $150 million in technical services to help upgrade its digital security. The company also offered a “free” security upgrade for government customers.

 

Now: Last year, the Trump administration announced a raft of agreements with tech companies that were meant to help federal agencies “purchase enterprise AI tools at government-friendly pricing.” Agencies could use OpenAI’s ChatGPT for $1. Google’s Gemini for 47 cents. Grok by xAI for 42 cents. The administration hoped that the low-cost pricing would make it “easier for federal teams to acquire powerful AI capabilities … to enhance mission delivery and operational efficiency.”

 

The takeaway: Be wary of freebies. Our investigation into Microsoft’s seemingly straightforward commitment revealed a more complex, profit-driven agenda. After installing the upgrades, federal customers would be effectively locked in, because shifting to a competitor after the free trial would be cumbersome and costly. At that point, the customer would have little choice but to pay the higher subscription fees. The plan worked: One former Microsoft salesperson told me “it was successful beyond what any of us could have imagined.” In response to questions about the commitment, Microsoft has said its “sole goal during this period was to support an urgent request by the Administration to enhance the security posture of federal agencies who were continuously being targeted by sophisticated nation-state threat actors.”

 

Agencies looking to buy AI tools at discounted rates today must consider how the costs might balloon down the road. The General Services Administration warns that AI “usage costs can grow quickly without proper monitoring and management controls” and advises agencies to “set usage limits and regularly review consumption reports.”

 

Read the full investigation

Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They Approved It Anyway.

 

Lesson 2: Oversight programs are only as effective as their resources

Then: In the Obama era, the federal government shifted its sensitive information and computing needs to data centers owned and operated by private companies. Acknowledging the potential risks, the administration created the Federal Risk and Authorization Management Program, or FedRAMP, in 2011 to help ensure the security of the cloud computing services that it was encouraging U.S. agencies to use.

 

But in my recent investigation of the program, I found it was no match for Microsoft, which effectively wore down the FedRAMP team over five years as the company sought the program’s seal of approval for a major cloud offering known as GCC High. Despite serious reservations about its cybersecurity, FedRAMP ultimately authorized the product, in part because it lacked the resources to keep going. In response to questions, Microsoft told me: “We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary.”

 

Now: Today, this tiny outpost within the General Services Administration has even fewer resources to oversee the cloud technology on which the government relies — including AI. FedRAMP says it now operates “with an absolute minimum of support staff” and “limited customer service.” The program was an early target of the Trump administration’s Department of Government Efficiency. 

 

The takeaway: FedRAMP, which a 2024 White House memo said “must be an expert program that can analyze and validate the security claims” of cloud providers, is now little more than a rubber stamp for the tech industry, former employees told me. As federal agencies adopt AI tools that draw upon reams of sensitive information, the implications of this downsizing for federal cybersecurity are far-reaching. A GSA spokesperson defended the program and said FedRAMP now “operates with strengthened oversight and accountability mechanisms.”

Lesson 3: “Independent” reviews are only so independent

Then: The government has long relied on so-called third-party assessors to verify the security claims made by cloud service providers like Microsoft and Google. In theory, these firms are supposed to be independent experts that offer a recommendation to FedRAMP on whether a product meets federal standards. But in practice, their independence has an asterisk: They are paid by the companies they are evaluating.

 

My recent investigation found that this setup creates an inherent conflict of interest. In the case of Microsoft’s GCC High, two assessors recommended the product despite being unable to fully vet it, according to a former FedRAMP reviewer. One of those firms did not respond to my questions and the other denied this account.

 

FedRAMP, we found, is well aware of how the financial arrangement between the cloud companies and their assessors can distort official findings about cybersecurity problems. The program even created a “back channel” to encourage assessors to share concerns they might not otherwise raise in their official reports for fear of angering their tech clients and losing business.

 

Now: With FedRAMP reduced to being a “paper pusher,” as one former GSA official put it, these third-party assessment firms have taken on even more importance in the vetting process. In response to questions from ProPublica, the GSA said that FedRAMP’s system “does not create an inherent conflict of interest for professional auditors who meet ethical and contractual performance expectations.” It did not respond to questions about the program’s back channel.

 

The takeaway: The pendulum has essentially swung back to the pre-FedRAMP era, when each federal agency was individually responsible for vetting the products it used. The GSA told me that FedRAMP’s job is “to ensure agencies have sufficient information to make these risk decisions.” The problem is that agencies often lack the staff and resources to do thorough reviews, which means the whole system is leaning on the claims of the cloud companies and the assessments of the third-party firms they pay to evaluate them.

 

More From Our Newsroom

 

Utah Bans Polygraph Tests for Those Reporting Sexual Assault

The Horrors That Could Lie Ahead if Vaccines Vanish

An OB-GYN Was Repeatedly Accused of Sexual Misconduct. The State Medical Board Let Him Keep Practicing.

Trump Called My Neighbors “Paid Agitators.” This Is Who They Really Are.

This Sheriff Says His Department Eliminated Racial Bias. Data Shows Otherwise.

 
 
Find us on Facebook, Threads, Instagram, TikTok, X (Twitter) and Mastodon.

Was this email forwarded to you from a friend? Subscribe.

 


 
 

ProPublica

155 Ave of the Americas, 13th Floor

New York, NY 10013