wait, what? —

Elon Musk promises big new Tesla Autopilot upgrade, but is it legal?

But the announcement leaves us with many more questions than answers.

Big googly eyes have been photoshopped onto the windshield of a Tesla sedan.

Tesla CEO Elon Musk is famous for his use of Twitter, where he often engages with his almost 24 million followers. Sometimes it gets him in trouble—as in the case of the infamous "420" tweet that landed both him and Tesla with $20 million fines each. Sometimes it gets him dates, as was the case with a bizarre theory involving an artificial intelligence that some people believe will one day torture digital replicas of people who knew about the intelligence but failed to help usher it into existence. (Yes, really.) And sometimes, Musk just uses Twitter to tell the world what his engineers have cooking.

On Sunday evening, a few hours before appearing on 60 Minutes, Musk engaged in the last of these. First, he just wanted to remind Tesla owners about the most recent Autopilot update, Navigate on Autopilot:

But then came more momentous news:

What Musk is promising sounds like a big upgrade to the functionality of the system, consistent with Tesla's approach of reaching full autonomy (SAE Level 5) incrementally. Such an upgrade would, in effect, advance Autopilot from SAE Level 2 (a driver assist, with the human remaining in control) to SAE Level 3, a conditionally automated system in which the driver presumably kicks back and the car assumes all responsibility for situational awareness until it tells the driver it's time to take back control.

Of course, it is entirely possible that Tesla intends Autopilot to remain at Level 2. But Musk's description of a system that can stop for stop signs and traffic lights and negotiate traffic at roundabouts with no driver input does sound far more autonomous than a system where the human occupant is still responsible for situational awareness.

Certainly, Musk has made no secret of his desire to see his vehicles achieve full autonomy. Although the company recently dropped "full self-driving" as an option, he has repeatedly promised a driverless coast-to-coast demonstration run.

The announcement was evidently popular among his followers—at the time of writing, the tweet has 48,000 likes. But reading it left me with far more questions than answers. Like, has the NHTSA or any of the state DMVs responsible for licensing drivers in this country signed off on this? I asked Tesla this, but the company has yet to reply. It was also not forthcoming on a more concrete timeline for the rollout of this new feature.

Who’s liable if it crashes?

Neither did Tesla tell us whether it would accept liability for any of its vehicles operating under this new autonomous mode—something that many think is an absolute necessity when asking the public to trust their lives to a company's software. Volvo CEO Håkan Samuelsson set the tone for this conversation several years ago, stating, "When you drive manually, the driver is responsible. When it's automatic, we as the manufacturer are liable. If you're not ready to make such a statement, you're not ready to develop autonomous solutions."

Other OEMs have followed suit, and it's the reason they all plan to first deploy their autonomous vehicles as fleets that they own and self-insure. Notably, Musk has rejected this approach in the past: a curious stance to take since it's hard to arrive at the conclusion that a human who isn't driving a car should be liable if it crashes just because that human is the car's registered owner. And surely, no insurance company would issue a human owner such a policy without good data on the incidence of crashes per mile of such a system. Unfortunately, Tesla also declined to tell Ars how many millions (or billions) of miles of testing the system has under its belt to date.

That's actually quite important. A 2016 study from the RAND Corporation showed that a vehicle would need to be driven for 275 million failure-free miles to demonstrate, with 95% confidence, a fatality rate no higher than that of human drivers (roughly 1.09 fatalities per 100 million miles). At this point, Tesla owners will often chime in to point out that the company has been gathering "shadow data" from the hundreds of thousands of cars it has already sold and that somehow this gives Tesla a massive competitive edge.
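For the curious, here's a minimal sketch of the back-of-envelope version of that calculation, assuming the RAND report's human-driven fatality rate of about 1.09 per 100 million miles and treating fatal failures as rare, independent events (a simple Poisson model); the variable names are illustrative, not RAND's actual methodology:

    import math

    # Human-driven fatality rate cited in RAND's "Driving to Safety" report:
    # roughly 1.09 fatalities per 100 million vehicle miles.
    human_fatality_rate = 1.09 / 100_000_000  # fatalities per mile

    # Under a Poisson model of rare failures, observing zero fatalities over
    # `miles` of driving supports (at 95% confidence) a rate no worse than the
    # human one only when exp(-rate * miles) <= 0.05.
    confidence = 0.95
    required_miles = -math.log(1 - confidence) / human_fatality_rate

    print(f"Failure-free miles required: {required_miles / 1e6:.0f} million")
    # -> Failure-free miles required: 275 million

Even a fleet logging a million test miles a day would need the better part of a year of flawless driving to clear that bar, which is why the size and quality of the validation data matter so much.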

Do not overestimate the power of shadow data

However, it's far from clear that Tesla's data is as valuable as some claim. For one thing, this data is only valuable as training information for neural nets once it has been properly annotated. That's a laborious, time-consuming process that requires every road sign, lane marker, car, pedestrian, and other relevant object to be categorized for each frame of video. As Josh Sacks notes, the cost of this annotation far exceeds the cost of collecting the road imagery, probably by two orders of magnitude.

Waymo published a paper earlier on Monday that appears to show the limitations of Tesla's approach of relying on neural networks trained on such data. Waymo's effort appears to be far ahead of the competition, with millions of real-world miles and billions of simulated miles of testing, yet the paper concluded that "simple imitation of a large number of expert demonstrations is not enough to create a capable and reliable self-driving technology."

Edge cases abound, and there's no substitute for real-world testing. Yet we know Tesla told California regulators that it did no autonomous-vehicle testing on the state's public roads in 2017 and logged only 550 such miles in 2016.

Sam Abuelsamid, a senior research analyst at Navigant Research, is also skeptical of Musk's announcement. "As usual, Elon's tweet is heavy on hype and short on details. Is this a conditionally automated system (Level 3) where the driver must still be present and be capable of taking over control when the system encounters situations that it cannot handle? If so, that must be made absolutely clear and there should absolutely be a robust driver monitor system. It should never be allowed to be activated if the driver is not alert and responsive. I suspect that this will not be the case for Tesla, since no Model S or X has anything beyond a steering angle sensor, which is clearly inadequate," he told me.

What about redundancy?

Abuelsamid also pointed out the lack of hardware and software redundancy that would be necessary for a truly autonomous vehicle (SAE Level 5 in the jargon).

Back in October 2016 when Elon announced Autopilot V2, the first sentence was "Basic news is that all cars exiting the factory have hardware necessary for level 5 autonomy, so that's in terms of cameras, compute power; it's in every car we make. On the order of 2,000 cars a week are shipping now with level 5—literally meaning hardware capable of full self-driving for driver-less capability." First and foremost, by definition L5 means the car can operate without human intervention anywhere under any circumstances. That means it must be fully fail operational without a human in the vehicle. That requires levels of hardware and software redundancy and diversity that [are] not present in any Tesla vehicle. It also requires multiple sensor modalities and the ability to keep those sensors clean so they can "see." Again, this is something every Tesla ever built lacks.

Finally, Abuelsamid was concerned about the product liability angle. "From my conversations with lawyers of late, I think that whether a company publicly acknowledges that it will take responsibility or not for the failures of automated driving systems, product liability will be the means that will ensure that they are, in fact, liable. One thing that will be critical is making sure that Tesla doesn't try to use forced arbitration clauses to prevent going to court. No customer should ever buy such a system where the manufacturer has not accepted full liability for any failings of the system."

This post will be updated with any replies from Tesla to our questions as we receive them.
