Tesla Autopilot ‘Self-Driving’ Possibly Getting More Aggressive In Evasive Maneuvering Which Could Be A Hidden Sign About Level 5

A recent news story reported that a Tesla on Autopilot managed to avoid striking a deer standing in the middle of a highway lane by undertaking an aggressive swerving maneuver.

Thankfully, no one was injured, neither those inside the Tesla nor the deer, and we can all relish that this story had a happy ending.

What makes this story especially newsworthy is that the evasive maneuver was conducted in a seemingly aggressive or blatantly assertive manner by Autopilot, which until now has seemed more subdued and unlikely to perform harsh driving actions.

Some wonder whether this kind of emphatic driving activity might be tied to Elon Musk’s recent bold proclamation that Tesla’s Autopilot is supposedly nearing the topmost level of autonomy, Level 5, which he boasted about at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, claiming that he is “extremely confident that level 5, or essentially complete autonomy, will happen very quickly.”

Musk also attached a proposed timetable to his claim: “At Tesla, I feel like we are very close to level 5 autonomy. We will have the basic functionality for level 5 autonomy complete this year.” Similar predictions have been made by Musk before, and each has in turn come and gone without the vaunted Level 5 being revealed by Tesla (see my prior coverage at this link here and this link here).

The slew of capabilities needed to reach Level 5 has not yet been directly revealed or demonstrated by Musk and Tesla; thus, if they really are close to Level 5 in the sense of having it ready within the next five months or so (meaning by the end of this year), there is very little overt indication of such progress.

Could the reportedly aggressive evasive maneuvering be a hidden showcase of the advances toward Level 5, somehow a telltale clue?

Let’s take a moment to consider how humans react in such circumstances, and then reflect on what we would hope that any AI-based computer driving system would do or will end up doing in such instances (for additional details on this, see my in-depth analysis at this link here).

So, quick, there’s a deer in the roadway up ahead, what are you going to do?

We have all encountered those scary moments when an obstruction suddenly appears in front of the vehicle, causing our hands to tense up on the steering wheel as our minds race wildly to decide what action to take.

Maybe you should jam on the brakes.

But, if the distance to the upcoming object is too short, slamming on the brakes might not stop the car in time to keep from hitting whatever is in the way. Furthermore, depending upon the traffic behind you, abrupt braking could cause other cars to ram into your vehicle, either harming you and your passengers or inexorably pushing you into the very thing you were trying to avoid hitting.
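
To get a rough sense of why braking alone may not suffice, here is a minimal back-of-the-envelope sketch in Python; the reaction time and deceleration figures are illustrative assumptions of mine, not measured values, and real stopping distances vary with tires, road surface, and the braking system:

```python
# Rough stopping-distance estimate: reaction distance plus braking distance.
# The reaction time and deceleration values are assumed round numbers.

def stopping_distance_ft(speed_mph: float,
                         reaction_time_s: float = 1.5,
                         decel_ft_s2: float = 20.0) -> float:
    """Estimate total stopping distance in feet."""
    speed_ft_s = speed_mph * 5280 / 3600                 # mph to feet per second
    reaction_dist = speed_ft_s * reaction_time_s         # distance before brakes engage
    braking_dist = speed_ft_s ** 2 / (2 * decel_ft_s2)   # kinematics: v^2 / (2a)
    return reaction_dist + braking_dist

# At 65 mph, the car travels well over 300 feet before coming to rest
# under these assumptions, much of it before braking even begins.
print(f"{stopping_distance_ft(65):.0f} feet")  # about 370 feet
```

Under these assumed numbers, an object appearing closer than a few hundred feet simply cannot be braked for in time, which is why swerving enters the picture.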

Okay, instead of braking, perhaps it would be wiser to swerve the car.

Now you have to choose between swerving to the left of the object or the right of the object.

One question is whether a swerve in either direction will bring you into even more danger. Perhaps swinging to the left will force you into head-on traffic, risking an injurious or even fatal frontal collision. Swerving over to the right might take you to the edge of the roadway, possibly causing the car to fly into a ditch or go over a cliff.

There is much more to the calculus.

The object itself needs to be assessed in many ways.

I mentioned that the obstruction in this case was a deer.

Suppose it was a horse, or a dog, or a squirrel; would any of those variations change what evasive action you are likely to choose?

Sure, they might very well impact the mental equations as to braking versus swerving.

A squirrel might be small enough that you can take a chance and just roll over the creature, in the hope that there is sufficient clearance to avoid harming it. And, though this might seem callous, most people would likely rate hitting a squirrel as not quite the same ethical dilemma as hitting a deer (I realize that some would argue they are both equally precious).

Don’t forget that the obstruction might be in motion and able to continue in motion, which also provides fodder for considering what to do.

It could be that when you opt to swerve to the left, the object scampers to the right and thus there is less need to radically go to the left because the clearance has widened. When that happens, you typically breathe a sigh of relief and are appreciative that fate dealt the hand in that manner.

Regrettably, the object might alternatively and mistakenly turn to the left, moving further into your path; although you were trying to avoid it, the object itself is now making the situation worse. This is likely to cause a curse word or two to come from your lips as you get irked at the animal for having made the wrong choice.

Yet another factor involves the size and heft of the obstruction.

A small-sized animal is going to cause less destruction and endangerment to your vehicle, and though of course you would be personally devastated at having hit the creature, at least you know that you and your passengers will likely survive relatively unharmed.

The problem of striking a larger beast is that besides harming the animal, there’s a high chance that the action can do substantial damage to your car and simultaneously cause you to lose control of the vehicle. It could be that upon striking the animal, you lose your steering, or the physics forces the car to head into oncoming traffic or over into an embankment.

Besides considering the role of animals, the obstruction could be something inanimate, perhaps a large piece of furniture that dropped off the back of a truck and has been sitting in the lane, awaiting a car to come along and deal with it. From time to time, you’ve probably seen debris from a smashed couch or an easy chair that was struck by other traffic and gradually dashed into countless bits and pieces.

An inanimate object does not necessarily need to be standing still.

I’ve seen with my own eyes a wheelchair that was slowly rolling back and forth on a freeway, narrowly missed by passing cars, and upon each near strike, the wheelchair would suddenly and erratically roll in one direction or another.

All in all, whenever there is something in the roadway up ahead, and you need to consider what to do, there is a complex series of mental contortions that need to be undertaken.

The rub is that this needs to be mentally worked out in split seconds.

Unless you are lucky enough to spot something in the roadway from a great distance, there is usually very little time to consider the multitude of options, and you are forced into a nearly instantaneous selection of what to do.

You can be caught so completely off-guard that, in essence, you do not decide at all and merely ram directly into the obstruction.

The act of striking the obstruction might also be a deliberately derived approach, based on weighing all the factors and reaching the conclusion that the least dangerous way to cope involves proceeding unabated and hoping for the best.

Why take you through all the agony and angst that human drivers have to endure when driving a car?

Because AI-based driving systems are gradually going to be making the same kinds of choices.

Do you know how your AI-based driving system is making these types of arduous, life-deciding selections?

Probably not.

Automakers and self-driving tech firms are not revealing the proprietary means by which their software arrives at such decisions.

As a human inside such a vehicle, you are assuming that whatever the automation does will be the “right” choice.

Of course, you are betting your life on that assumption.

It is quite a high-stakes bet on something you know nothing about: how the decision is being calculated, whether all factors are being considered, or whether the approach is extremely simplistic and takes only one or two factors into account.

Along those lines, realize too that for any given situation, there is not necessarily only one way to proceed.

Each of the choices involves probabilities and uncertainties, which are confounding to human drivers, and likewise an AI-based driving system must assess the chances of what might occur and what might result (for more on uncertainties while driving, see my analysis at this link here).

If you are driving, you might assess that if a deer is walking across the lane from left to right and is already in motion, the probability that the deer will continue on that path is presumably relatively strong. You do not know that to one-hundred percent certainty. It is a presumed best guess.
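
To make that probabilistic weighing concrete, here is a minimal sketch of an expected-risk comparison between candidate maneuvers. Every probability and severity weight below is invented purely for illustration; nothing suggests Tesla or any automaker computes things this way:

```python
# Illustrative expected-risk comparison between candidate maneuvers.
# All probabilities and severity weights are made up for the example.

maneuvers = {
    # maneuver: list of (probability, severity) pairs for its possible outcomes
    "brake_hard": [(0.6, 0.0),    # stop in time, no harm
                   (0.3, 5.0),    # strike the deer at reduced speed
                   (0.1, 7.0)],   # rear-ended by trailing traffic
    "swerve_left": [(0.5, 0.0),   # clean miss
                    (0.3, 9.0),   # head-on conflict with oncoming traffic
                    (0.2, 4.0)],  # clip the deer while passing
    "proceed": [(1.0, 6.0)],      # certain strike at full speed
}

def expected_risk(outcomes):
    # Expected severity: sum of probability * severity over the outcomes.
    return sum(p * s for p, s in outcomes)

for name, outcomes in maneuvers.items():
    print(f"{name}: expected risk {expected_risk(outcomes):.2f}")
print("lowest:", min(maneuvers, key=lambda m: expected_risk(maneuvers[m])))
```

Note how the ranking of maneuvers would flip if the probabilities shifted, which is exactly what happens when the deer changes direction mid-crossing.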

The reality could turn out that the deer hadn’t yet noticed the car, and upon realizing that the car is bearing down toward it, the deer reactively tries to retreat whence it came; thus, it turns to the left and tries to dash back in that direction.

Thus, your initial guess about the deer continuing to the right was incorrect.

Upon detecting that the deer is now moving to the left, you might recalibrate your choices, and even if you had already started to swerve left, maybe you now change your mind and opt to aim to the right of the deer.

This is an intricate dance that involves a real-time reassessment of the factors and either large-scale adjustments or micro-adjustments about the heading of the car, the speed of the car, and the like.
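
That dance can be pictured as a sense-decide-act loop that re-estimates the obstacle's trajectory on every cycle and revises the plan accordingly. The sketch below is a hypothetical structure of my own, with a crude linear extrapolation standing in for real trajectory prediction; it is not a depiction of any production system:

```python
# Hypothetical sense-decide-act loop: re-plan the evasive maneuver each cycle
# as new observations of the obstacle arrive. Purely a structural sketch.

def predict_next(observed_positions):
    # Crude linear extrapolation from the last two observed (x, y) positions.
    (x0, y0), (x1, y1) = observed_positions[-2:]
    return (2 * x1 - x0, 2 * y1 - y0)

def choose_maneuver(predicted, lane_center=0.0):
    # Steer away from the side of the lane the obstacle is headed toward.
    return "swerve_left" if predicted[0] > lane_center else "swerve_right"

# Simulated observations: the deer drifts right, then doubles back to the left.
observations = [(0.5, 30.0), (1.0, 25.0), (0.2, 20.0)]
for i in range(2, len(observations) + 1):
    plan = choose_maneuver(predict_next(observations[:i]))
    print(f"cycle {i - 1}: plan = {plan}")
# Output: the plan flips from swerve_left to swerve_right once the deer
# reverses course, mirroring the recalibration described above.
```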

Throughout all of this, you are somewhat shaping your choices by how radical an action you are willing to take.

Novice drivers are often timid in these situations, fearful of possibly rolling the car or unsure of how the car will react to dramatic inflections in steering or braking. The opposite can be true too: some novices do not realize that a sharp twist of the steering wheel at high speed could cause the car to become unstable, and so they unknowingly put the vehicle into greater danger than a more measured action would.

Returning to the AI-based driving systems, consider what you want such a driving system to do.

Would you prefer that the automation select a radical maneuver, which might have advantages and also disadvantages, or take a more muted approach, which also will have its advantages and disadvantages?

Undoubtedly, some people would say that as passengers in such a vehicle they would want the AI to select only the seemingly milder maneuver rather than risk the overall safety of the car and its occupants, while other people might insist that radical maneuvers are fine as a means to hopefully increase the odds of avoiding the obstruction.

This all turns out to be more than a mathematical exercise as it involves ethical decisions that humans seem to make almost subconsciously, and yet we do make those decisions, every day that we are on the roads and driving a car.

The public-at-large has not yet especially considered the AI Ethics aspects of how AI systems are making potential life-and-death decisions involving us, though this serious matter has gradually been surfacing in other ways, such as the recent qualms about AI-based facial recognition systems that might have embedded racial biases, and likewise for more mundane AI, such as a system that decides whether to grant someone a car loan (for my discussion about AI Ethics, see the link here).

For the use case of AI-based driving systems, we are heading toward a moment in time that will determine how the public and regulators cope with self-driving systems that make crucial real-time decisions, while none of us might know what method or approach is being used.

Some say they don’t care how the cars do it, just as long as there are no car crashes and no human injuries or deaths.

The notion of zero fatalities for self-driving cars is farfetched, thoroughly misleading, and outright false, as I have repeatedly exhorted (see my indication that zero fatalities have zero chance, at this link here), and we must all realize that the physics involved in the motion of multi-ton cars will not always lend itself to averting a crash.

If a deer pops out of the bushes and surprisingly steps onto a highway lane while a car is moving at 65 miles per hour, and the gap is just a few feet, there is no chance of swerving or stopping in time to prevent hitting the deer; at that speed, the car covers roughly 95 feet every second.

And, not wanting to seem overly doom-and-gloom, if it were a human that decided to step onto the highway, they too would be in a bad way, even though the circumstance involved a human instead of a deer.

Here’s a question to ponder: Do we want AI-based true self-driving cars to be making radical driving maneuvers or do we want something more modest, and if so, who decides this?

Let’s unpack the matter and see.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
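
For reference, here is a paraphrased summary of the SAE J3016 taxonomy that these level numbers come from, expressed as a simple lookup; the one-line descriptions are my own condensations of the standard:

```python
# Paraphrased summary of the SAE J3016 driving-automation levels.

SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: automation helps with steering or speed, not both",
    2: "Partial automation: combined steering and speed, human must supervise",
    3: "Conditional automation: system drives, human must take over on request",
    4: "High automation: no human driver needed within a defined operating domain",
    5: "Full automation: no human driver needed anywhere a person could drive",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```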

There is not yet a true self-driving car at Level 5; we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out, see my indication at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Maneuvering

For a Level 2 car (a Tesla on Autopilot is currently considered Level 2), one of the greatest concerns for these types of semi-autonomous vehicles is that the human and the automation might end up at odds with each other when making driving choices.

As earlier pointed out about deciding what to do about a deer in the roadway, the automation might ascertain that swerving is the maneuver to undertake, and yet suppose the human driver at the wheel believes it is better to not swerve and instead brake, or perhaps just proceed ahead and possibly strike the animal.

The usual answer is that if the human driver wants to do something else, all they need to do is override the automation.

That seems to solve any questions on this matter.

Unfortunately, this simplistic and rather flippant answer does not solve things.

Imagine that the automation has already begun to swerve to the left. The human driver now has a much different situation than they did a split second earlier. The car is already now amid a choice that the human driver did not make and presumably (we are assuming for the moment) believed was unwise.

This is what happens when you co-share the driving task.

It is akin to having another driver sitting next to you that has full access to the driving controls. When split-second decisions need to be made, you do not have time to chat with each other about what choice is best. Instead, each of you is going to make a rather instantaneous choice.

The problem then becomes that one of you enacts a choice before the other has done so, and now that the first choice is underway, it puts the other driver into a further pickle.

Anyone who says the human driver can merely take over the controls is not being realistic about those split-second life-or-death moments that arise when driving a car. There are of course lots of situations in which there might be enough time for the human to overrule the automation and take a course of action, yet this does not mean or imply that such ample opportunity will always be the case.
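
To illustrate why a mid-maneuver override is so messy, here is a hypothetical arbitration sketch; the half-second override window and the control logic are entirely invented for the example and do not depict Tesla’s or anyone else’s actual system:

```python
# Hypothetical human/automation control arbitration for a Level 2 system.
# The override window and logic are invented; no automaker's actual design.

import time

OVERRIDE_WINDOW_S = 0.5  # assumed time to countermand a maneuver cleanly

def resolve_control(automation_action, human_action, maneuver_started_at):
    """Decide whose input controls the car at this instant.

    Once the automation's maneuver is underway, a late human override does
    not restore the original situation: the human inherits whatever
    trajectory the automation has already committed the car to.
    """
    elapsed = time.monotonic() - maneuver_started_at
    if human_action is None:
        return automation_action, "automation acting alone"
    if elapsed <= OVERRIDE_WINDOW_S:
        return human_action, "clean override before the maneuver took hold"
    return human_action, "late override: human now steering mid-swerve"

start = time.monotonic()
print(resolve_control("swerve_left", None, start))
print(resolve_control("swerve_left", "brake", start))        # within the window
print(resolve_control("swerve_left", "brake", start - 1.0))  # too late
```

The point of the sketch is the third case: the override succeeds mechanically, yet the human is now managing a swerve they never chose.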

Consider too how the automation is making these choices.

We don’t know, since Tesla, and likewise other automakers, are not revealing the approaches being used.

It could be a straightforward mathematical equation, and if so, what factors are being used?

How does the equation figure the odds of striking versus not striking the obstruction? What about the risks of a subtle versus a radical maneuver, and what about the value assigned to those inside the car versus the animal, and so on?

Some argue that we should just let the Machine Learning (ML) and Deep Learning (DL) determine what to do.

ML/DL is a computational pattern-matching mechanism that uses past data to try to find patterns that can be invoked for making later decisions. If we feed in lots of instances of evasive maneuvers, the pattern matching will try to calculate how to react to future such occasions.

Do not be misled into believing that ML/DL solves the matter, since the nature of the data and cases used to train the ML/DL (often large datasets fed into an artificial neural network) might create a false semblance of what to do.

Suppose that most or all prior instances were “solved” via the act of braking; the calculations would then statistically tend toward braking in future situations. There isn’t any kind of common-sense reasoning involved on the part of the automation.
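
A toy example of that statistical tendency: with training cases dominated by braking, even a simple nearest-neighbor pattern matcher will recommend braking for almost any new scenario. The scenario features and cases below are invented for illustration and bear no relation to any real training set:

```python
# Toy nearest-neighbor "learner" over past evasive-maneuver cases.
# Features are invented: (distance to obstacle in feet, obstacle speed in ft/s).
# Because the history is dominated by braking, so are the predictions.

past_cases = [
    ((200.0, 0.0), "brake"),
    ((150.0, 2.0), "brake"),
    ((120.0, 1.0), "brake"),
    ((180.0, 3.0), "brake"),
    ((90.0, 4.0), "brake"),
    ((40.0, 5.0), "swerve"),  # the lone non-braking case
]

def predict(features):
    # 1-nearest-neighbor: copy the action from the most similar past case.
    def dist_sq(case):
        (d, s), _ = case
        return (d - features[0]) ** 2 + (s - features[1]) ** 2
    return min(past_cases, key=dist_sq)[1]

# A close-range scenario where swerving might genuinely be the better call
# still maps to "brake" unless it happens to land nearest the lone outlier.
print(predict((100.0, 3.0)))  # -> brake
print(predict((45.0, 5.0)))   # -> swerve, only because it sits on the outlier
```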

Conclusion

If indeed the Tesla Autopilot is getting more aggressive in evasive maneuvers, we should be wondering why.

For example:

· Has the use of ML/DL led the Autopilot to gradually, over time, mathematically calculate that a radical maneuver is best?

· In all cases, or only in certain kinds of situations?

· Are the AI developers and engineers who devised the Autopilot opting to ratchet up the radical-maneuver facet, and if so, what kind of testing was done, and to what degree is this a safer method of driving?

· Does this foretell a piece-at-a-time revealing of their efforts to achieve Level 5?

The usual answer for Tesla and other Level 2 providers is that it doesn’t matter what the automation does, since the human driver is the final arbiter and fully responsible for the driving of the car. Though that seems perhaps sensible on the surface, we will undoubtedly eventually find out via our legal system, through lawsuits entailing injuries and deaths involving Level 2 cars, whether this kind of shifting of the blame is societally acceptable (for my predictions about such lawsuits, see the link here).

In the case of a Level 5, there will no longer be a human driver in-the-loop and the AI will be expected to make driving decisions autonomously.

Musk asserted at the WAIC event that “I think there are no fundamental challenges remaining for Level 5 autonomy.” Furthermore, he described the situation this way: “There are many small problems. And then there’s the challenge of solving all those small problems and putting the whole system together and just keep addressing the long tail of problems.”

Sometimes the devil is in the details; getting to Level 5 is more than simply cobbling together lots of pinpoint features, and requires a comprehensive, cohesive whole that works together in harmony.

That Tesla really is close to Level 5 seems rather hard to believe, and the postulated timetable of Autopilot being Level 5 ready by the end of this year is unsubstantiated by hard facts, seemingly yet another fanciful claim by Musk.

Overall, if you are expecting to get an Autopilot upgrade to Level 5 from Santa Claus this coming December, be prepared for a likely disappointment in this year’s gift department.
