Silicon Valley Event On Machine Learning Tackles The Latest Riddles Vexing AI Self-Driving Cars

There’s a child’s riddle that asks you to indicate what can be held in your left hand and yet cannot be held in your right hand.

Take a moment to ponder this riddle.

Your first thought might be that anything that could be held in your left hand should also be able to be held in your right hand, assuming of course that there’s no trickery involved.

One trick might be that you could hold your right hand in your left hand, but that you cannot presumably “hold” your right hand in your right hand since your right hand is your right hand.

Another trick might be that your right hand is perchance weaker than your left hand, thus if an object was heavy, potentially you could hold it in your left hand, but you could not do so with your less powerful right hand.

If we eliminate all the trickery potential answers, what else remains?

Supposedly, the “answer” is that you can hold your right elbow in your left hand, but you cannot hold your right elbow in your right hand.

Some would object to the alleged answer since it seems unfair to single out the elbow and you could presumably argue that other areas of the right arm might also be unreachable by your right hand. Thus, the claimed answer is only one of potentially multiple answers.

And there are some people so limber that they could indeed hold their right elbow in their right hand, though this kind of contortionist capability is admittedly few and far between.

What kinds of lessons can we learn from this simple riddle?

One notable lesson is that a riddle might not be answered by a simple answer, even if the riddle itself seems quite simple.

Moreover, harder riddles are bound to have even more complicated potential answers.

Another lesson is that we tend to assume that a riddle must have only one right answer.

Perhaps this is due to growing up in an education system that focuses on always arriving at the one right answer. By training and habit, we are cognitively shaped to assume that whenever a question is asked, there must be one and only one right answer.

Those multiple-choice tests that you used to take were mentally warping you into believing that there must be one answer from the set given, and therefore all of life must somehow have singular answers to pressing questions.

I admit that when I was a university professor, I usually made sure to include among the multiple choices the ever-daunting “all of the above” and also the worrisome “none of the above” as potential choices.

The bane of existence for students is having to put aside the search for the one right answer and realize that it could be all of them or it could be none of them. I don’t know whether I should be happy to have caused such torment, or whether it was good for their psyche and cognitive development (by the way, in a meta-analysis, you could argue that any such test question still has only one right answer, i.e., “all of the above” is itself a single answer).

All this talk about riddles brings up the fact that there are many riddles hidden within much of what we do in industry, riddles that we are all on a quest to solve or at least resolve.

Consider this aspect: for the advent of true self-driving cars, there are a number of crucial riddles that still need to be figured out, along with new riddles that we’ve not yet surfaced.

Let’s unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
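If you like to see a taxonomy laid out concretely, here is a minimal sketch in Python of the level distinctions as framed above (the names and the helper function are purely illustrative, not any official API):

```python
from enum import IntEnum

class DrivingLevel(IntEnum):
    """Driving automation levels as commonly framed (after SAE J3016)."""
    PARTIAL = 2      # semi-autonomous; human co-shares the driving task (ADAS)
    CONDITIONAL = 3  # semi-autonomous; human must stay ready to intervene
    HIGH = 4         # AI drives on its own within a bounded operating domain
    FULL = 5         # AI drives on its own anywhere a human driver could

def requires_human_driver(level: DrivingLevel) -> bool:
    """Levels 2 and 3 co-share the driving; Levels 4 and 5 do not."""
    return level <= DrivingLevel.CONDITIONAL
```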

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite the human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Some Thorny Riddles

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

There are quite a number of riddles that exist and others that will emerge related to the nature of AI driving systems.

Those who aren’t directly involved in the autonomous car field are at times puzzled that those within the driverless car realm would have anything to be puzzled about.

On the surface, the quest is presumably rather straightforward, namely, create an AI system that can drive a car.

Period, drop the mic.

Yes, at the thirty-thousand-foot level, perhaps you can say it is a simple matter, but as we saw with the left-hand and right-hand riddle, seeming simplicity at the core does not necessarily lead to simplicity in the answer.

A recent AI conference put on by the Silicon Valley group known as ValleyML.ai included a number of fascinating and illuminating sessions about AI and Machine Learning (ML) in a slew of areas and managed to reveal numerous riddles facing AI/ML.

The Valley Machine Learning and Artificial Intelligence group encompasses AI/ML companies, researchers, startups, business leaders, non-profits, and others that are interested in AI/ML (see the link here).

Program Chair for the conference was Dr. Kiran Gunnam, Distinguished Engineer of Machine Learning & Computer Vision at Western Digital. Kudos goes to him and the ValleyML.ai team for a great event.

I’ll focus on one particular panel session that concentrated on AI self-driving cars.

The session was entitled “Collaboration for Safety of Autonomous Vehicles.”

The panel was ably and professionally chaired by John Currie, Director of Business Development, Mobility, UL, who served as moderator and contributor to the discussion.

The esteemed panelists were experts in various facets of AI self-driving cars, consisting of:

· Miguel Acosta, Chief of the Autonomous Vehicles Branch, California DMV

· Sagar Behere, Senior Manager, Highly Automated Driving, Toyota Research Institute (TRI)

· Benjamin Lewis, Director, Automotive & Mobility Strategic Partnerships, Liberty Mutual Insurance

· Liam Pedersen, Deputy Director, Robotics, Renault Nissan Mitsubishi, Alliance Innovation Lab Silicon Valley

· Mike Wagner, CEO of Edge Case Research

A lot of ground was covered during the invigorating panel session.

To keep this analysis herein succinct, I’ll cover just two selected subtopics and showcase the riddles contained within them as exemplars of what the self-driving car industry is grappling with.

Infrastructure And Self-Driving Cars

Here’s an intriguing riddle that offers plenty of discussion and debate.

Should our roadway infrastructure be changed to accommodate self-driving cars, or should self-driving cars be expected to cope with the roadway infrastructure as it exists and as is customarily experienced by human drivers?

Allow a moment of elaboration.

Some believe that our roadway infrastructure ought to be changed or adapted to better suit the needs of self-driving cars.

For example, detecting curbs can at times be a difficult task for AI driving systems. If our curbs were higher, painted a special color, or otherwise amplified in some manner, it would be easier and more likely that the sensors of the self-driving car and the AI system could detect and deal with the borders and boundaries of streets and sidewalks (for aspects of street scene free-space detection, see my discussion at the link here).
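To make the curb example tangible, here’s a minimal sketch of why a taller curb helps: a simple height-band filter over lidar points gains detection margin as the curb rises further above road-surface noise (the point format and the thresholds are assumptions for illustration only):

```python
import numpy as np

def find_curb_candidates(points: np.ndarray,
                         min_height: float = 0.10,
                         max_height: float = 0.30) -> np.ndarray:
    """Keep lidar points whose height above the road plane falls in a
    typical curb band. points is an (N, 3) array of (x, y, z) in meters,
    vehicle frame, z up. A taller curb widens the gap between road-noise
    returns and min_height, making detection more reliable."""
    heights = points[:, 2]
    mask = (heights >= min_height) & (heights <= max_height)
    return points[mask]
```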

Another example involves making right turns at blind intersections.

When a self-driving car tries to make a right turn and cannot via its sensors readily detect what’s around the corner, there’s a heightened risk that the AI will opt to proceed with the turn and then abruptly discover that a pedestrian is in the street or maybe a bicyclist is stationary there and happens to be in the wrong spot at the wrong time.

To lessen the chances of hitting someone or something, it would be handy if there was an electronic device mounted on a nearby pole or building that could use its own sensors to broadcast a message to all nearby self-driving cars (this is sometimes referred to as edge computing, see discussion at this link). The message might be a forewarning that there’s someone just around the blind corner, or it might be that the coast is clear, and the AI can proceed without delay.
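As a rough sketch of what such a V2I advisory might carry (the fields and the freshness window below are assumptions, not a standardized message format):

```python
import time
from dataclasses import dataclass

@dataclass
class CornerAdvisory:
    """Hypothetical payload broadcast by a pole-mounted blind-corner sensor."""
    intersection_id: str
    occupied: bool     # True if someone is detected just around the corner
    confidence: float  # sensor's self-reported confidence, 0.0 to 1.0
    timestamp: float   # epoch seconds when the reading was taken

def is_fresh(msg: CornerAdvisory, max_age_s: float = 0.5) -> bool:
    """A stale advisory should be treated as no advisory at all."""
    return (time.time() - msg.timestamp) <= max_age_s
```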

Both of these examples highlight the value of adjusting the infrastructure to aid the advent of self-driving cars.

In the use case of the curbs, the change might be a traditional type of modification involving raising the height of curbs or painting them with a highly visible color, while in the case of the blind corner the change is actually the addition of an electronic device (these kinds of devices would be communicating via V2I or Vehicle-to-Infrastructure electronic messaging).

This all seems quite sensible.

Of course, there is a notable cost involved in such sweeping infrastructure changes and additions.

Imagine the tremendous cost if all across the United States there was an effort to raise the height of curbs.

It would be an astronomical price.

Suppose that electronic devices for corner presence detection were placed on all blind intersections throughout the country.

Again, a likely stiff price.

One argument is that as a society we should not need to bear the cost of changing the infrastructure to allow self-driving cars to be sufficiently able to drive our roads. In short, if a human can drive and not need heightened curbs or corner pedestrian detection devices, gosh darn it the AI driverless car should not need it either.

Indeed, some say that the automakers and self-driving tech firms are being “lazy” or taking the easy way out by trying to change the infrastructure. Put your head down into your software and hardware and get it cranking so that no such alterations are needed, shout some pundits.

On the other hand, there is already overwhelming agreement that our existing roadway infrastructure is in bad shape and desperately needs massive repairs and an overhaul. At the federal level, there have been various regulatory bills and discussions about what the price tag might be and how to best undertake the needed changes (see my discussion at the link here).

As such, if we are going to be modifying or changing things anyway, some point out that we might as well go ahead and include aspects that could aid the emergence of self-driving cars. Thus, rather than going out of our way to do so, these alterations and new additions would simply be part-and-parcel infused into the large basket of infrastructure alterations.

Furthermore, presumably many of the changes would be helpful to human drivers too. In that manner, you’d be getting two benefits for the price of one, so to speak.

Yet another plus would be that the cost of AI self-driving cars could possibly be lessened if there was sufficient V2I established.

Here’s the logic.

If every AI self-driving car has to be outfitted with special sensors to try and gauge what’s around a blind corner, using various trickery such as computational periscopy and high-priced devices, the cost of each self-driving car is presumably increased.

But if there were electronic devices at street corners that did this for the self-driving cars, such expensive on-board equipment wouldn’t be needed, and essentially the added cost of those street corner devices would be divided out across all the millions of driverless cars that might someday be on our roads.
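The amortization argument is easy to see with some back-of-the-envelope figures (every number below is hypothetical, chosen merely to show the shape of the math):

```python
# All numbers are hypothetical, purely to illustrate the amortization argument.
onboard_sensing_cost = 2_000      # extra per-car cost for around-the-corner sensing ($)
corner_device_cost = 10_000       # one pole-mounted V2I unit, installed ($)
instrumented_corners = 500_000    # blind intersections fitted with devices
fleet_size = 50_000_000           # driverless cars sharing the benefit

infrastructure_total = corner_device_cost * instrumented_corners  # $5.0 billion
per_car_share = infrastructure_total / fleet_size                 # $100 per car

print(f"${per_car_share:,.0f} per car vs ${onboard_sensing_cost:,} on-board")
# -> $100 per car vs $2,000 on-board
```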

That seems sensible, though immediately one might be worried that the self-driving cars could become overly reliant on the V2I, and if somehow a street corner device was broken or faltering, the AI driverless car would not be able to safely navigate by itself.

The counter-argument is that AI self-driving cars would be expected to figure out whether or not a V2I device was present and functioning, and if not, the AI would proceed on a precautionary basis, going very slowly and taking longer to make the turn, yet still ultimately making the turn.
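That precautionary fallback can be sketched as a simple decision rule, building on the hypothetical advisory above (the speeds and staleness window are illustrative):

```python
import time

def plan_right_turn_speed(advisory, creep_speed_mps: float = 1.0,
                          normal_speed_mps: float = 4.0,
                          max_age_s: float = 0.5) -> float:
    """With a fresh V2I advisory reporting a clear corner, commit to the turn
    at normal speed; if the device is absent, its message is stale, or someone
    is detected around the corner, fall back to a slow precautionary creep,
    still ultimately making the turn, just far more cautiously."""
    fresh = advisory is not None and (time.time() - advisory.timestamp) <= max_age_s
    if fresh and not advisory.occupied:
        return normal_speed_mps
    return creep_speed_mps
```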

In terms of the potential infrastructure cost, some emphasize that not every street corner and not every curb would need to be modified. Instead, wherever we anticipate self-driving cars to be most used, perhaps in downtown areas, the modifications that pertain especially to driverless cars would be made, and not need to be set up everywhere.

Like many riddles, we can go around and around trying to solve the riddle.

At this juncture, there hasn’t been any resolution or “solving” of the infrastructure riddle and it remains an active and at times contentious debate.

You’ve now become part of the riddle-solving team.

Welcome to the club.

Machine Learning And Changing AI

Here’s another riddle that was bandied about during the panel session.

Should we be worried about Machine Learning (ML) that presumably will be changing the AI driving system over time, meaning that it won’t be the same “driver” at any point in time and might keep altering how it drives, or should we chalk this up to being no different than the nature of human drivers?

With Machine Learning and Deep Learning (DL), it is possible to have an AI system that changes and does things differently than it did before.

In one sense, this certainly would be handy.

We would likely want the AI driving systems to get better and better at driving. For each mile driven, there are bound to be new and novel situations, which the AI ought to be set up to adjust to and become better at handling.

If the AI was static and never changed, and assuming it wasn’t already all-encompassing and essentially all-knowing (which is highly unlikely), it would never gain or benefit from whatever arises and would presumably repeat the same driving snafus or inadequate driving actions over and over.

Well, we certainly know that humans seem to learn over time to be better drivers. Take a look at any newbie teenage driver and you can see substantive progress in their driving prowess over time (yes, there are some teenagers that don’t improve, but by and large they do).

In fact, some argue that whereas humans oftentimes end up with deteriorating driving skills as they reach their elderly years, the AI won’t weaken or diminish over time and will faithfully remain as capable as it ever was.

Seems like this riddle can be put to bed.

Not so fast!

First, it is an oversimplification to suggest that the ML/DL “learning” is any kind of equivalent to human learning.

They are radically different, at least that’s the case for now and the foreseeable future.

Humans use common-sense reasoning, for example, and by doing so are able to judge whether something learned is valuable or not, and also can grasp the context of a learned idea or action and usually apply it only in related and appropriate circumstances.

As I’ve repeatedly exhorted, there is no AI system as yet with common-sense reasoning of a caliber like humans.

In essence, the counter-argument about allowing DL/ML to self-learn is that it is completely unlike how humans learn and therefore the end result is not nearly as robust (and can be “riddled” with errors, pun intended).

Having an AI system that plays chess and learns over time is rather non-threatening since losing a chess game is not usually a life-or-death matter.

Having an AI driving system that learns over time has a huge life-or-death potential consequence since it is driving a multi-ton vehicle that can readily get into deadly crashes.

There is an ongoing discussion and heated debates about how much on-the-fly ML/DL ought to be allowed for self-driving cars (see my discussion at the link here).
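One commonly discussed middle ground, sketched below with entirely hypothetical names, is to let the fleet gather experience and train candidate models off-vehicle, promoting an updated “driver” only after it clears a fixed battery of safety scenarios:

```python
def maybe_promote(candidate_model, deployed_model, safety_scenarios,
                  required_pass_rate: float = 0.999):
    """Gate on-the-fly learning: a candidate trained from fleet experience
    replaces the deployed model only if it passes (nearly) every scenario in
    a fixed validation suite; otherwise the vetted model keeps driving."""
    passed = sum(1 for scenario in safety_scenarios if scenario.run(candidate_model))
    if passed / len(safety_scenarios) >= required_pass_rate:
        return candidate_model  # the "driver" changes, but only after vetting
    return deployed_model       # learning continues offline; behavior stays stable
```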

Please go ahead and add this riddle to your list of puzzles to be solved.

Conclusion

One of the handy benefits of conferences like the ValleyML.ai event is that they provide an opportunity to get onto the table a lot of the riddles that are confronting the AI community.

It takes a village to solve these riddles.

Isolated attempts to figure out the answers are likely to be insufficient and unable to scale.

Recall that the panel session was entitled “Collaboration for Safety of Autonomous Vehicles.”

The watchword there is collaboration.

Ultimately, all stakeholders will need to weigh in on these matters, and the sooner we make the riddles known, hopefully, the sooner and more elegantly the answers will emerge.
