Good morning. AI is moving fast, but many companies still have not decided who should own the job of turning that momentum into measurable business value.
At Fortune’s Modern CFO dinner in San Francisco last Thursday, sponsored by Deloitte and ServiceNow, Melissa Valentine, a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence, delivered a clear message to CFOs: they have a narrowing window to take command of AI value creation.
Valentine pointed to a recent Harvard Business Review article by the founders of the Return on AI Institute, citing survey findings that underscore this opening. Only 2% of the C-suite executives surveyed said CFOs were charged with capturing value from AI. Yet when CFOs were responsible, 76% reported generating substantial value, well ahead of other functions. Laks Srinivasan, coauthor of that report, told me that finance chiefs are uniquely positioned to define, evaluate, fund, and measure AI initiatives, then apply that framework across the company.
Valentine, a tenured associate professor of management science and engineering at Stanford’s School of Engineering, told the room of finance chiefs that CFOs have a strategic opening to lead on AI if they are willing to quantify the value and be accountable for it. She argued that generative AI is moving out of its experimental phase and into something CFOs know well: systematic measurement. Two years ago, she said, rigorous accountability would have been premature. Today, it’s essential.
On the question of guardrails, Valentine pointed to a recent incident in which Anthropic inadvertently exposed internal source code for its Claude coding tool, offering a rare public glimpse into how frontier AI labs protect their models. She called attention to the concept of “harness engineering,” the infrastructure surrounding models to make them usable and safe, including secondary AI systems designed to monitor primary ones. Her advice to CFOs: study that architecture, because leaders must understand whether the system around a model is robust enough to govern, monitor, and trust at enterprise scale.
That example reinforced a broader point in Valentine’s remarks: the requirements for safe, production-grade AI are fundamentally different from those for everyday employee experimentation. She drew a sharp distinction between two very different forms of AI transformation. One begins at the frontline, where employees use tools such as Gemini or NotebookLM and discover practical applications through experimentation. The other is driven from the top, where production-grade use cases demand robust data infrastructure, engineering rigor, and governance. Both matter. Each requires a distinct operating model.
The main takeaway for finance leaders is that AI accountability is becoming a CFO-level competency. As AI moves from novelty to operating imperative, the executives who impose discipline will be the ones best positioned to capture its value.
Sheryl Estrada
sheryl.estrada@fortune.co