
Breakthroughs In Roadway Edge Computing Are The Missing Link For Self-Driving Car Collaboration

Have you ever been driving a car and found yourself reaching a veritable state of sensory overload in the act of driving the vehicle?

It can readily happen.

Suppose you are driving your car in the rain, which right away ups the ante for performing the driving task.

With rain-slicked streets and a massive downpour obscuring your view of the traffic ahead, the odds are that you are having to strain to keep the car from slipping and sliding while mightily seeking to avoid hitting any nearby vehicles or pedestrians.

Pretend that you have your family in the car with you, along with some out-of-town guests that you are toting around, and your beloved family pet is squeezed in there too, a golden retriever that loves to go for car rides.

At any moment in time, one or more family members might be telling you to watch out for a big semi-trailer truck that is within inches of your car in the lane to your right. And, the guests you are hosting might be pointing out that you need to go slower and be more mindful of large puddles of water on the roadway.

Even your beloved dog gets into the act, barking every time that you make a sudden maneuver to avert a car crash due to other maniac drivers that are recklessly driving in the rain.

You, the driver of the car, must remain steadfast in driving the car, and somehow also receive these varied inputs from your car mates, ascertaining whether they are providing you with helpful info to ensure a safe drive, or maybe offering only verbal clutter that distracts you from performing the arduous driving chore.

There’s a ton of verbal messaging flying through the air in that car, making for a large amount of sensory input to be processed.

Plus, you are visually scanning the roadway, mentally processing the myriad of elements in the rainy environment.

Is that car to your left going to veer into your lane, perhaps doing so since they might not have a clear view of the traffic as a result of their rain-soaked side mirrors?

That pedestrian on the sidewalk appears to be poised to dart across the street, jaywalking, opting to get out of the rain quickly rather than doing the right thing and walking down to the proper crosswalk at the corner.

You’ve got your eyes on the signal at the next intersection, currently showing a green light but it might soon switch to yellow and then red.

Would it be better to gun it and try to make the green, squeaking into the intersection if the light goes to yellow and then red, or would it be safer to start slowing down in anticipation of the red light that will soon inevitably appear?

Maybe the car behind you will try to rush up on your car, wanting to make it through before the green light turns red, and if you opt to slow down, the other car might ram into yours.

Meanwhile, your “teammates” inside the car are all offering sage advice, some saying that you can make the green light by hammering down on the gas pedal, while others are urging you to get ready to halt at the intersection and are bracing for a sudden stop.

The golden retriever is offering his two cents too, whimpering about something, maybe over the green light versus red light dilemma, or it could be that he sees a stick outside the car and wants you to let him out of the car to go retrieve it.

Yikes!

A lot is going on.

Sensory Overload In Car Driving

Upon contemplating how often these kinds of sensory overload situations seem to arise, it is nearly a miracle that we don’t have more car accidents than we already do.

Whenever you get into your car and head out into traffic, the amount of info you’ll be getting during the course of a driving journey can range from being relatively modest to becoming a humongous deluge.

There’s essential info such as the status of traffic signals, along with numerous flashing signs that might be warning you to watch out for a flooded street or imploring you to slow down.

Other nearby cars are also a kind of signal or info that you are receiving since the behavior of those other cars and their drivers will shape where you can go and what you need to watch out for.

And, there are those pesky but treasured passengers, acting like back-seat drivers and inundating you with driving advice.

Speaking of driving advice, teenage novice drivers often find that they are unable to cope with an excessive amount of input when first learning to drive.

Astute parents discover that trying to aid their teenage novice driver by bombarding them with instructions when initially driving can be overwhelming and do more harm than good. They gradually realize that it is best to offer key points at key junctures, such as “watch out for that double-parked van,” rather than continually blathering about every minuscule detail of the driving task.

When we have true self-driving cars, the AI of the driverless car will be programmed to cope with complex driving situations and presumably be able to handle whatever volume of inputs comes its way.

There is a bit of cheating going on right now in that most of the existing tryouts of self-driving cars do not engage the human passengers in a dialogue about the driving task. Thus, the AI doesn’t have to deal as yet with vocal and insistent occupants that are offering varied opinions about which way to go or how to best drive the car.

Some believe that the automakers and self-driving tech firms won’t ever aim to allow occupants to offer driving tips and comments, though I’ve pointed out that this is something that humans are going to want to do and pretending to avoid the matter is like burying your head in the sand (for more on this, see the link here).

Here’s an interesting question to consider: Should we be worried about true self-driving cars being potentially overloaded with sensory inputs akin to how humans can get overloaded?

Yes, it’s a valid concern.

One aspect that few are yet discussing involves an overload of V2X (vehicle-to-everything) electronic communications.

Let’s unpack the matter and find out what it’s about.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so I’m not going to include them in this discussion about sensory overload (though for clarification, Level 2 and Level 3 could indeed be vulnerable to sensory overload too, thus this discussion is relevant even to semi-autonomous cars).

For semi-autonomous cars, it is equally important that I mention a disturbing aspect that’s been arising, namely that in spite of those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The V2X Overload Problem

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect that will be a boon to driverless cars is the advent of V2X electronic communications, which refers to the notion that the AI of the self-driving car will be able to receive and send electronic messages to other roadway-related entities.

There will be V2V (vehicle-to-vehicle) electronic transmissions, allowing a driverless car to send out messages to other nearby driverless cars (this is considered one type of V2X capability).

Imagine that a driverless car has encountered debris in the roadway and can quickly let other self-driving cars behind it know that there’s, say, a couch that’s plopped into the fast lane.

By using V2V, the AI of the driverless car that first spots the offending debris can spread the word to other self-driving cars. As a result, those other self-driving cars might opt to slow down or get out of the fast lane before they come upon the debris or exit early from the highway and avoid the blocked lane entirely.

Prevailing standards indicate that V2V uses a WiFi-like broadcast capability with a range of about 300 meters (there is ongoing discussion and debate about which protocols should dominate). If a driverless car transmits a V2V message to other nearby driverless cars, those cars could, in turn, aid in spreading the message by sending it along to other self-driving cars, and in approximately 5 to 7 hops the message could reach vehicles nearly a mile away.
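To get a sense of the arithmetic, here is a minimal back-of-the-envelope sketch in Python, assuming (purely as a simplification) that each relay hop extends the reach by the full 300-meter broadcast radius; the figures are illustrative and not drawn from any formal V2V standard.

```python
# Back-of-the-envelope sketch of multi-hop V2V reach.
# Assumption: each relay hop extends coverage by roughly the full
# 300-meter broadcast radius mentioned above (a simplification).
HOP_RANGE_METERS = 300
METERS_PER_MILE = 1609.34

for hops in range(5, 8):  # the 5-to-7 hop range discussed above
    reach_m = hops * HOP_RANGE_METERS
    print(f"{hops} hops -> ~{reach_m} m (~{reach_m / METERS_PER_MILE:.2f} miles)")
```

Under those assumptions, 5 hops lands at roughly 0.9 miles and 7 hops at roughly 1.3 miles, which is how a report can reach cars nearly a mile back within a handful of relays.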

I refer to this as enabling a potential form of automotive omnipresence, allowing any given driverless car to essentially piece together a bigger picture of what’s going on nearby, far beyond the normal limits of everyday vision and radar.

There will also be V2I (vehicle-to-infrastructure) electronic communications, yet another type of V2X.

We’ll eventually have our roadway infrastructure wired up with computers that can electronically give their status. A bridge that’s out of commission can use V2I to let any approaching driverless cars and trucks know that the bridge is closed to traffic.

Some believe we’ll also have V2P (vehicle-to-pedestrian) capabilities.

A pedestrian using their smartphone will be able to send a signal to self-driving cars coming down a street, telling those driverless cars that the pedestrian is intending to cross the street. This heads-up would be intended to reduce the chances that a pedestrian might be otherwise undetected and get hit by a driverless car.

Let’s focus on V2V.

At first thought, the idea of having other nearby cars let you know about local driving conditions seems to be a godsend.

Human drivers have to guess about roadway issues by spotting clues such as cars ahead that are all slowing down, perhaps due to debris in the lane, though it could be some other problem altogether that is causing the cars to hit their brakes.

Just imagine if all human drivers were equipped with walkie-talkies and could yell out whatever they spot while driving, providing a boatload of added driving info to other nearby drivers.

Sure, today there are social media-oriented platforms that allow human drivers to indicate limited amounts of roadway conditions, which then get displayed on GPS maps for others to see, but that’s a teensy-weensy kind of notification in comparison to what V2V promises.

In theory, all driverless cars will be equipped with V2V, and they will all abide by the same standardized protocols about the info that will be transmitted (there are ongoing discussions and deliberations on proposed protocols and approaches).

Whereas human drivers might variously choose to use a walkie-talkie or social media to let others know about the roadway status, presumably driverless cars will do so reliably and consistently. A human driver that might have been too lazy or unwilling to aid their fellow drivers is going to be replaced by AI systems in driverless cars that will programmatically seek to transmit what’s happening on the streets.

That’s great!

There you are, settled into a self-driving car on your way to work, and meanwhile, the AI of the self-driving car is engaging in a colossal electronic chitchat about the traffic situation, aiming to use that info to make your ride as efficient and effective as feasible.

There is a rub though.

Suppose you are on the freeway and your driverless car is surrounded by dozens upon dozens of other self-driving cars.

Keep in mind that there might be self-driving cars way up ahead of you.

There might be driverless cars some distance behind you.

There could be driverless cars right next to you.

If the freeway is elevated, there might be driverless cars below the freeway that are street driving.

More self-driving cars are sitting on the on-ramps leading into the freeway.

It’s a vast herd of driverless cars, and all of them are vying for attention by transmitting the roadway status via V2V and trying to ascertain the roadway status by receiving and interpreting V2V messages from other self-driving cars.

Picture this as though you are standing in a crowded bar at a wild party with lots of discussions going on; the din is so loud and overwhelming that you can barely understand the person standing next to you as they try to carry on a conversation.

A cacophony of V2V messages might be less helpful than we all assume it will be.

The AI that’s driving the self-driving car that you are peacefully residing in might be getting a barrage of messages from those dozens upon dozens of other driverless cars.

Some of those messages are likely irrelevant to the existing driving task. For example, the V2V coming from the cars that are below the freeway is unlikely to be pertinent to the driving actions underway on the freeway.

Meanwhile, those driverless cars that are beneath the freeway are potentially keenly interested in the V2V coming from the cars on the freeway since perhaps they are intending to soon get onto the freeway and they are trying to find out what the traffic is like.

Consider other facets of when the V2V might not be especially helpful.

If a couch fell onto a lane, the first self-driving car to spot it would presumably send out a V2V to let other nearby driverless cars know that the couch is blocking the lane. Those nearby driverless cars then transmit the message to other further away self-driving cars, allowing those driverless cars a mile back or more to become aware of the couch issue. Think of the messaging as rippling like a wave.

How many such messages might be sent?

Well, without any predetermined means of coordination, each time that any of the driverless cars gets the message it might opt to send it along to other self-driving cars, and simultaneously other driverless cars coming upon the debris will be sending out their own V2V messages saying the same thing (such messages shotgunning out in a fraction of a second, each).

Zillions of messages all about the same incident are potentially now flying here and there (this is a mesh network, each car being a node, and communicating on a peer-to-peer basis; see the standard SAE J2735 for messaging details).
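To make that rippling-and-flooding behavior concrete, here is a small hypothetical Python sketch of a peer-to-peer relay with basic duplicate suppression and a hop limit; the class names and message fields are invented for illustration and are not taken from the SAE J2735 message set.

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    msg_id: str          # unique ID so relays can recognize duplicates
    payload: str         # e.g., "debris: couch blocking the fast lane"
    hops_remaining: int  # hop limit to keep the flood bounded

class VehicleNode:
    """One car acting as a node in the peer-to-peer V2V mesh."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []     # other cars currently in broadcast range
        self.seen_ids = set()   # duplicate suppression

    def receive(self, msg):
        if msg.msg_id in self.seen_ids:
            return  # already handled this report; do not re-flood it
        self.seen_ids.add(msg.msg_id)
        print(f"{self.name} acts on: {msg.payload}")
        if msg.hops_remaining > 0:
            relayed = V2VMessage(msg.msg_id, msg.payload, msg.hops_remaining - 1)
            for neighbor in self.neighbors:
                neighbor.receive(relayed)

# Tiny demo: three cars in a chain, the first one spots the couch.
a, b, c = VehicleNode("car_a"), VehicleNode("car_b"), VehicleNode("car_c")
a.neighbors, b.neighbors = [b], [a, c]  # car_b can hear both car_a and car_c
a.receive(V2VMessage("evt-42", "debris: couch blocking the fast lane", hops_remaining=7))
```

Without the seen_ids check, car_a and car_b would bounce the same report back and forth until the hop limit ran out, which is precisely the duplicative chatter being described.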

Suppose that a highway patrol car happened to be near the dropped couch and quickly pushed it out of the way.

Will nearby driverless cars transmit that the lane is no longer blocked?

Well, they might not, since why announce that something is no longer blocked unless you realize that others might have prior awareness that it once was? It could be that the self-driving cars simply proceed now that the lane is clear and don’t necessarily update or rescind the earlier lane-is-blocked V2V messages.

Or, you might have all of them continuously transmitting whatever the current state of the driverless car is, once again emitting tons and tons of messages.
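One hedged way to cope with such stale reports, sketched below in Python, is to stamp each message with a validity window so that a lane-is-blocked report quietly ages out even if nobody ever sends an explicit all-clear; the field names and the 120-second window are assumptions made for the example, not a standardized scheme.

```python
import time

VALIDITY_SECONDS = 120  # assumed shelf life for a debris report

def is_still_relevant(report, now=None):
    """Discard reports whose validity window has lapsed."""
    now = time.time() if now is None else now
    return (now - report["sent_at"]) <= VALIDITY_SECONDS

stale = {"payload": "couch blocking the fast lane", "sent_at": time.time() - 300}
fresh = {"payload": "couch blocking the fast lane", "sent_at": time.time() - 30}
print(is_still_relevant(stale))  # False: old enough that the couch may be gone
print(is_still_relevant(fresh))  # True: recent enough to act upon
```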

And so on.

It could be chaotic messaging, ending up as a sensory overload and making it problematic for the AI system of each driverless car to figure out what’s useful and what’s not among the tsunami of messages.

Roadway Edge Computing

The overarching model for this use of V2V is one of no central coordination, relying instead on a distributed form of V2V messaging.

One breakthrough approach being pursued involves using roadway edge computing as a concentrator and disseminator, putting small computers at key points along the roadway infrastructure that could aid in receiving the plethora of messages and trying to sort them out accordingly (for more on the rise of edge computing for this purpose, see the link here).

Such edge devices would then seek to reduce duplicative messaging, along with labeling the messages and shunting along refined messages in a compact and selective manner.

Though the driverless cars are still potentially deluging the airwaves with messages, the roadway edge computer is figuring out how to categorize and streamline the messages and then transmitting them in a suitably smart fashion.

This would potentially cut down on the voluminous amount of V2V messages that any particular driverless car might have to closely examine. Using a filter, the AI might primarily look at the edge computer-generated messages and then selectively inspect V2V’s coming from other driverless cars as warranted.
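As a rough illustration of that filtering idea, here is a hypothetical Python sketch in which the car’s AI trusts the edge computer’s consolidated digest first, dips into the raw V2V stream only for urgent items the digest hasn’t covered, and falls back to the full raw stream if the edge node is unavailable (the fallback discussed a bit further below); the message fields and keyword list are invented for the example.

```python
def select_messages(edge_digest, raw_v2v, urgent_keywords=("collision", "wrong-way")):
    """Prefer the edge computer's deduplicated digest; sample raw V2V selectively."""
    if edge_digest is None:
        return list(raw_v2v)  # edge node unavailable: fall back to the full deluge

    chosen = list(edge_digest)  # trust the categorized, deduplicated feed first
    summarized = {m["event_id"] for m in edge_digest}
    for msg in raw_v2v:
        already_covered = msg["event_id"] in summarized
        urgent = any(k in msg["payload"] for k in urgent_keywords)
        if urgent and not already_covered:
            chosen.append(msg)  # inspect raw V2V messages only as warranted
    return chosen

digest = [{"event_id": "evt-42", "payload": "debris: couch blocking the fast lane"}]
raw = [
    {"event_id": "evt-42", "payload": "debris: couch blocking the fast lane"},
    {"event_id": "evt-77", "payload": "wrong-way driver reported ahead"},
]
print([m["event_id"] for m in select_messages(digest, raw)])  # ['evt-42', 'evt-77']
```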

The V2V sensory overload bombarding the AI of the driverless cars is potentially reduced.

One qualm is that there is a possible bottleneck due to the roadway edge computer, and if it is not quick enough or has a hiccup, the streamlined messaging will be undermined.

The counterargument is that the AI of the driverless cars would still be able to resort to examining the deluge of V2V messages, simply assuming there is no edge device available, until or unless the edge computer was able to resume proper operations.

Conclusion

Questions abound about the possibility of roadway edge computers:

· Who will pay for them to be put in place?
· Should this be done by the government or by private enterprises?
· How will the roadway edge computers be maintained?
· Will they be secure enough to avoid spoofing or cyber-hacking that could wreak havoc upon driverless cars that are relying upon those edge devices?
· Etc.

There’s a classic line that it is hard to solve a problem for which the problem itself has not yet emerged.

In other words, we often aren’t able to foresee new problems that will arise as a result of new innovations and technologies. As such, you aren’t aware of, nor motivated to try to solve, a problem that seemingly doesn’t yet exist.

Dealing with the collaboration among self-driving cars is not yet a problem, and only once we have thousands of them, ultimately millions of driverless cars on our public roadways, are we apt to become concerned about these messaging difficulties.

Anyway, go ahead and put this one in your thinking cap and let’s aim to solve a future problem before it becomes one.
