- In today’s CEO Daily: UL Solutions CEO Jennifer Scanlon talks with Fortune’s Diane Brady about the company’s new UL 3115 standard
- The big leadership story: How to earn $18.4 million without working a single day
- The markets: It’s bad out there
- Plus: All the news and watercooler chat from Fortune.
Good morning. For more than 120 years, UL has put its mark on products from tree lights to toaster cords to convey a promise: This won’t kill you. Last week, for the first time, the $3 billion-a-year safety science company issued a new certification for AI-embedded products. As UL Solutions CEO Jennifer Scanlon told me: “Innovation without safety is failure.”
Rarely has there been a technology that’s evolved so fast with so little oversight. (The patchwork of emerging state laws adds to the confusion.) This week, the spotlight is on OpenClaw, the autonomous virtual agent that’s spawned a new craze in China. It got a shout-out from Nvidia CEO Jensen Huang during his developers conference this week, where he announced NemoClaw and declared the OpenClaw framework to be “the next ChatGPT.”
Can private-sector safety standards do what Washington has not: provide guardrails for fast-moving technologies with potentially profound consequences? The UL mark already goes on about 22 billion products worldwide every year. This latest standard, UL 3115, evaluates whether an AI-enabled product is safe, robust and well-governed, with a “human in control” throughout a product’s lifecycle. “Whether or not there’s government regulation around this, our customers are coming to us because they need broader protections and assurances,” Scanlon told me. “They’re clamoring to have at least a standard that they can adhere to that gives them the confidence in how they’re getting out in front of their customers.”
UL’s expertise is in functional safety. As Scanlon puts it: “When you turn the radio on in your car, you do not want your brakes to slam. So how is that embedded software being tested and proven? They’re embedding AIs in toys. How do we know those toys are safe for kids?”
That’s why UL’s AI Center of Excellence set out to apply its safety protocols to the new world of AI-embedded physical products. “We start with an outline of investigation, which is a precursor to safety. That’s our engineers and scientists working with customers to understand what they’re worried about, what they believe the challenges are—and then we come at it from the scientific perspective, which is: what else should you worry about?”
“In the case of AI‑embedded products, they started thinking about: How transparent is the algorithm? How much bias is built into those algorithms? What’s the veracity of the training data? And if some of that training data is not true, how do you eliminate it from the learning model? And what type of human oversight and verification—that essential final check—is in place? What are those processes?”
Thus far, two products have been AI‑certified: Qcells’ Energy Management System, an AI-enabled control engine for data centers, and the Omniconn Platform 4.0, a smart building solution. It’s one part of the puzzle in a world where leaders are trying to match speed with safety.
Contact CEO Daily via Diane Brady at diane.brady@fortune.com