Google Will Teach You To Write For AI, But Not How To Keep AI From Lying

Google's tech writing courses improve clarity and structure, but they stop short of teaching the practices needed to make AI-generated answers accurate and reliable.

There's something reassuring about Google offering free courses on tech writing. You can almost picture the outcome, right? Engineers take the course, and, voilà: less ambiguity, more clarity. Sentences become shorter, paragraphs stop wandering, and instructions begin to resemble actual instructions. Google's online courses walk through the fundamentals clearly, and they include advice on working with large language models (LLMs). If you follow their guidance, your writing will likely improve. Your docs should become easier to read, your intent harder to misinterpret. It feels like progress. And it is. But only up to a point.

The Subtle Shift No One Can Ignore

Tech docs have a new reader. Not a person skimming for answers while jumping between online meetings (and distracted by a steady stream of Slack notifications), but a system that retrieves, assembles, and delivers those answers on demand. The system does not read the way humans do. Instead, it predicts, reconstructs, and responds. It takes what you wrote and turns it into something else. Google's guidance helps with that process: clear sentences and logical, consistent organization give the model something to work with. But clarity alone does not determine what the model produces. That is where the gap begins.

When The Model Fills In The Blanks

Large language models are good at producing language that sounds correct. They are less reliable when the source material is incomplete or inconsistent. Unlike a human reader, a model with missing context does not pause and ask for clarification. It proceeds full steam ahead. Researchers at Stanford University have shown that models can generate citations that look real but do not exist: the titles and authors seem credible at first glance, yet the sources are fabricated and, far too often, completely detached from reality. Other research shows that LLMs can explain what code does while missing why it was designed that way, because the reasoning was never captured in the docs to begin with. Reporting from The New York Times has identified cases where AI assistants combine outdated and current information into a single answer without signaling the conflict.
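To make that last failure mode concrete, here is a minimal sketch in Python of a naive retrieval step. Everything in it is invented for illustration (the toy data, the scoring, the field names); it is not any real assistant's pipeline. The point is what it never checks: whether the chunks it stitches together describe the same version.

```python
# Toy illustration of a naive retrieval-and-assembly step.
# A version field exists in the metadata, but nothing below consults it.

from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    version: str  # present, but ignored by the pipeline


def score(query: str, chunk: Chunk) -> int:
    """Crude relevance: count query words that appear in the chunk."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in chunk.text.lower())


def build_context(query: str, chunks: list[Chunk], k: int = 2) -> str:
    """Concatenate the top-k matching chunks into one context string.

    Nothing here flags that the selected chunks may describe different
    versions, so outdated and current instructions can end up side by
    side in a single answer.
    """
    top = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
    return "\n\n".join(c.text for c in top)


docs = [
    Chunk("Set the timeout with the --timeout flag.", version="2.0"),
    Chunk("Set the timeout in config.yaml; --timeout was removed.", version="3.0"),
]

print(build_context("how do I set the timeout", docs))
```

Both chunks match the question, so both land in the context, and the model is handed two contradictory instructions with no signal that they conflict.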
What Clear Writing Can And Cannot Do

I think Google's guidance on clear sentences and organizing content is fairly solid.
We know these things are required to produce usable docs. But they are not enough if we hope they will yield reliable AI output. A polished sentence may still lack context. A neatly structured page may still leave out key constraints. And a clear explanation may apply only to one version, audience, or configuration without ever saying so.
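One way to start closing that gap is to make those constraints machine-checkable. The sketch below is hypothetical: it assumes doc sections carry front-matter-style metadata, and the field names (version, audience) are invented, not a standard. The idea is simply to refuse to index a section that never states what it applies to.

```python
# Hypothetical check: reject doc sections that never declare their scope.
# The required fields are illustrative; substitute whatever constraints
# matter for your docs (version, audience, configuration, platform, ...).

REQUIRED_FIELDS = ("version", "audience")


def missing_constraints(section: dict) -> list[str]:
    """Return the constraint fields a section fails to declare."""
    return [f for f in REQUIRED_FIELDS if not section.get(f)]


sections = [
    {"title": "Configure timeouts", "version": "3.0", "audience": "admins"},
    {"title": "Configure timeouts"},  # clear prose, but no stated scope
]

for s in sections:
    gaps = missing_constraints(s)
    if gaps:
        print(f"{s['title']!r}: declare {', '.join(gaps)} before indexing")
```

A human reader might infer the missing scope from surrounding pages; a retrieval system will happily serve the unscoped section to the wrong audience.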