The Maddening Struggle to Make Robo-Cars Safe—and Prove It

Startup Zoox is just one of many self-driving developers looking for the best way to address the complex issue of safety.

Here’s the deal, says Mark Rosekind. He’s sitting in a large, white conference room, deep within Zoox’s sprawling offices in Foster City, California, halfway between Palo Alto and San Francisco. Around him, some 400 workers clack away on computers, or roll out yoga mats in the central “town hall” space, or tend to the startup’s fleet of self-driving, golf-cart-on-steroids prototypes. The deal is that in spite of all this work—work that has put autonomous vehicles on the streets of cities around the world—regulators don’t know how to ensure the potentially life-saving technology won’t instead make roads more dangerous.

Actually, nobody does.

“A company might think it’s OK because it checks some box,” says Rosekind, whose job is to help Zoox solve this puzzle. Maybe its robocar has amassed 50 million miles of data, or has executed a perfect three-point turn, or reliably pulls over when a wailing police car appears behind it. All good stuff. But is it enough?

“The reality is, you don’t know if that’s really going to create more safety for the vehicle or for the public,” adds Rosekind, folding his lengthy body into a rolling chair and gesticulating over the wide conference table. “I'm trying to stay calm,” he explains, almost apologetic.

The problem stems from self-driving tech’s novel and potentially lethal nature. Even the pros can have trouble understanding how their cars perceive their surroundings and make decisions. And that lack of clarity is scary when you’re building two-ton machines that move at speed among fragile beings. Autonomy has already killed 49-year-old Elaine Herzberg, who was crossing the street in Tempe, Arizona, when she was hit by a self-driving Uber SUV.

In an effort to make its technology safe, Zoox has brought on Mark Rosekind (left), who headed up the National Highway Traffic Safety Administration, and aviation industry and safety engineering veteran Gonzalo Rey.


Given that testing on public streets is the only way to make these vehicles ready for the real world, it’s incumbent on this young industry to guarantee that these things are safe. So can these companies come up with best practices based on so little information? And critically, are they willing to share what they’ve learned from their mistakes, so others don’t make them too?

Increasingly, industry leaders say serious discussions about safety are sorely needed. “Most of what I’d heard in press and at events with autonomous vehicle people was political rhetoric: ‘We won’t hurt anybody; this will be safer than a person.’ But there is no engineering to back that up,” says Stefan Seltz-Axmacher, CEO and cofounder of robotic truck startup Starsky Robotics. “In the world of aviation and wider automation, safety engineering is a real discipline.”

The good news is that more firms are making serious investments in “safety culture”—a concept rooted in the idea that humans, like machines, can be organized optimally to create any kind of outcome. That could be profit or efficiency. Or it could be keeping people alive.

“‘Safety culture’ is when you don’t let things slide,” says Philip Koopman, who studies safety in autonomy at Carnegie Mellon University. “If your self-driving system does something unexpected, just one time, you drill down and you don’t stop until you figure out why, and how to stop it happening again.” This sounds simple, but tracking down every last little, sometimes inconsequential bug takes a heap of time and money. You can find this sort of safety culture at work in factories, the oil industry, and hospitals. But the best example—the one especially relevant to a human-toting technology—comes from the sky.

Since the late 1960s, the American airline industry has cut its fatality rate in half. Until an engine blew on a Southwest flight this spring, killing a woman, no one had died on an American commercial jet in eight years. The impressive record has a few explanations, ones that can be replicated. For one, internal auditors oversee many elements of aircraft construction and programming to ensure a particular level of safety. For two, the industry makes great use of checklists—a way to ensure that everyone is paying attention and staying on task. And for three, airlines and aircraft designers don’t compete on safety. They share knowledge. In the US, a secure third-party contractor facilitates the sharing of data on everyone’s mistakes between airlines and aircraft designers. If a plane crashes, the entire sector finds out why and gets the information it needs before the same thing happens to someone else.

To deliver on their talk about this sort of safety culture, self-driving companies have turned to a classic Silicon Valley trick: poaching safety talent. Waymo hired a former chair of the National Transportation Safety Board. Starsky Robotics hired its first head of safety in the spring, a drone and aviation industry vet with Federal Aviation Administration experience. Uber signed on another former NTSB chair, a man with a background in aviation, to advise the company on its “safety culture.” And Zoox has Rosekind, who headed up the National Highway Traffic Safety Administration and has three decades of experience in human fatigue and transportation. There’s another aviation veteran, too: Gonzalo Rey, Zoox’s vice president of system design, who most recently managed 1,200 workers as engineering lead and then CTO for aerospace company Moog.

Meanwhile, companies and observers are floating more ideas about maintaining safety—which are also bids to keep the industry in the public’s good graces. The companies building AVs could all agree to use the sort of vehicle engineering safety standards that have already been devised for electronics in the automotive industry, with some tweaks for self-driving. (The standard, called ISO 26262, establishes a framework for building and documenting software safety systems.) There also have been rumblings about creating some sort of platform that would allow self-driving developers to share data and learnings, like they do in aviation.

And there's talk about improving the information that some companies have shared in documents called “voluntary safety self-assessments.” The federal government politely requested that companies start putting out these letters last fall, with details on how their engineers approach safety. (As of this week, seven companies have published VSSAs.) But thus far, they’ve been criticized as glossy brochures, better for marketing than informing. (Seltz-Axmacher calls them “marketing copy.”)

As for some sort of test that a company might pass in the future? At Zoox, they like the idea of asking autonomous cars to do exactly what normal ones can’t: show that their sensor and computer setups can eliminate 94 percent of crashes—the share that NHTSA attributes to human error. “Everybody's kind of waving their hands trying to make new things up, but nobody's actually showing how they might apply their systems to get the level of safety right,” says Rosekind, still animated. Self-driving is new, and it will require new ways of thinking about transportation, as well as labor and public space. Safety, on the other hand, doesn't have to start from scratch.

