Supposing The FBI Secretly Ran A Fleet Of AI Self-Driving Cars To Track And Ultimately Nab Unsuspecting Criminals


Gotcha!

You might have seen the recent headlines blaring that the FBI secretly duped hundreds of alleged criminals by offering in the open marketplace one of those seemingly secure communications apps, akin to WhatsApp, Signal, Wickr Me, Viber, and similar encrypted messaging platforms. Would-be and actual crooks apparently could not resist the allure of using a form of communication that they believed was completely safeguarded from the authorities and altogether foolproof.

Those caught in the sting are likely feeling quite foolish now.

The very mechanism that was intended to protect them from getting caught turned out, in fact, to be their downfall.

Ironic.

Clever.

An enthralling plot that is certainly worthy of a movie or TV drama about the ways that crooks shoot themselves in the foot, as it were.

The FBI artfully carried out these sting operations by co-opting the Anom system, a somewhat popular encrypted mobile device and network service that had already become relatively prevalent for criminal activities. According to the FBI, the big bust took place worldwide and netted hundreds upon hundreds of arrests. The claim is that these were criminal participants in narcotics trafficking, organized crime, and a slew of illegal and dastardly activities.

Some have heralded this sly ploy of providing a type of honeypot to get potential evildoers on the hook.

The voluminous communications of those using the secure app were essentially secure from the rest of the world but completely insecure and readily readable by the FBI. One can imagine the puzzlement of the nabbed users, as they had likely already ascertained that their communications were seemingly encrypted and secure. They must have felt comfortable and confident that the encryption would rebuff any attempts by police or law enforcement, stopping those authorities cold and leaving no trail and no means of knowing what clandestine efforts were afoot.

Law enforcement was reportedly monitoring over 12,000 users spread globally across more than one hundred countries. Encrypted messages were easily decrypted since the FBI held all of the keys being used by the communications app. If the communications had not been encrypted, the presumed thieves surely would not have been using the app. The false sense of security that the messages were encrypted must have come as a relief, letting them carry on whatever bad deeds they wished to discuss via the app.
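To make that keys-in-hand point concrete, here is a minimal Python sketch (purely an illustration under assumed names, not the actual Anom design) of why messages that are genuinely encrypted in transit remain readily readable by whoever quietly holds the keys:

```python
# Hypothetical illustration only -- not how Anom was actually built.
# If the app operator (here, covertly the FBI) mints and retains the
# symmetric key, every "secure" message stays trivially readable to them.
from cryptography.fernet import Fernet

operator_key = Fernet.generate_key()   # key generated by the app operator
app_cipher = Fernet(operator_key)      # cipher baked into the handset app

# A crook sends what looks like a fully protected message.
ciphertext = app_cipher.encrypt(b"Shipment arrives at the docks at 3am")

# To any outside eavesdropper, the ciphertext is opaque gibberish...
print(ciphertext[:24], b"...")

# ...yet the operator, holding the same key, reads it at will.
print(Fernet(operator_key).decrypt(ciphertext).decode())
```

The encryption itself does exactly what it promises; the sting works because the wrong party controls the keys.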

Not everyone thinks this is a decidedly good thing. Some have expressed qualms that there might very easily be innocent users that have gotten swept into the large fishnet. Also, the question arises as to whether users that had no criminal activity underway were being screened and their messages examined, routinely, even though they were simply using the app for personal privacy purposes. Did this eagerness to get the bad actors inadvertently overextend into invading the privacy of others? This will all certainly play out as the cases are brought to court and the matter gets additional coverage.

Interestingly, some say that this might scare other criminals into avoiding the use of secure communications apps. Perhaps the lesson to be learned by potential crooks is not to trust any encrypted messaging platform. A retort to that contention is that this might simply shift the criminals toward using the most well-known platforms and deny startups any air time. In other words, the evildoers might assume that the prominent platforms would not be run by the FBI, while the FBI might try again to co-opt some smaller and lesser-known platform in a future similar sting.

Makes your head spin.

In any case, let’s leverage the concept of having law enforcement decide to co-opt some form of technological innovation and make use of it to try and nab the baddies of this planet.

What other type of high-tech could the FBI choose to use for such a crimefighting purpose?

The answer: Self-driving cars.

Say what?

Yes, you could envision a scenario in which the FBI or essentially any law enforcement agency might co-opt the use of AI-based true self-driving cars and use those AVs (autonomous vehicles) to track and arrest wrongdoers.

Keep in mind that self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage about AVs and especially self-driving cars, see the link here.

Here’s an intriguing question that is worth pondering: In what ways could law enforcement run a fleet of AI-based true self-driving cars to ensnare criminals?

Before jumping into the details, I’d like to further clarify what is meant when referring to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite human drivers continuing to post videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, no one should be misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Snagging Crooks

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad aspects that come into play on this topic.

I’ve previously provided insights about the use of self-driving cars for solving murder cases and other unsolved crimes, see the link here. That type of crime-solving approach would be an aboveboard use of self-driving cars as an aid in crimefighting. There isn’t anything especially sneaky or behind-the-scenes going on.

There is also the possibility of using self-driving cars as bait cars, namely having the AVs sitting around to see if a would-be car thief will take the bait and try to steal the car (see the link here). This admittedly is a slyer use of self-driving cars by law enforcement.

Our attention in this discussion is on the overarching notion that law enforcement might run a fleet of self-driving cars, doing so without telling anyone that the fleet is operated by law enforcement.

The underlying rationale would be to try and track and eventually arrest lawbreakers (without their awareness that this is happening). Once the big bust occurs of nabbing alleged criminals, the jig would be up and the odds of running another similar sting would likely be diminished.

It is somewhat of a one-trick pony, akin to the encrypted messaging sting.

To see how this might be undertaken, let’s sketch the key facets of this scenario.

First, the odds are that self-driving cars will be deployed in fleets. A large company that wants to get into ridesharing or ride-hailing might purchase a bunch of self-driving cars and operate them as a collective. Some pundits assert that rental car companies will do this, along with other firms that see a profitable opportunity by making money from the ridesharing of self-driving cars.

We will still have human-driven cars available for ridesharing purposes. There is a false narrative by some that we would have only self-driving cars on our roadways and completely do away with all human-driven cars. That is quite a wild stretch of the imagination. Besides the obvious aspect that the replacement or excising of all conventionally driven cars would take many decades to gradually take place, some human drivers insist they won’t give up their driving until the day that their cold dead hands are pried from the steering wheel.

Okay, so we will gradually witness the emergence of self-driving cars and they will oftentimes be organized into fleets. Self-driving cars will typically be listed on ridesharing networks. A person needing a ridesharing lift will use the desired network and see that there are human-driven ridesharing cars and there are also self-driving ridesharing cars available.

The self-driving cars might be listed on popular ridesharing networks or could be utilized exclusively by using a specialized app that the company operating the fleet makes available.

You won’t especially care which self-driving car is available for ridesharing. The odds are that you will mainly be choosing between a human-driven car and a self-driving car, and will not focus on which brand or model of self-driving car is being proffered. The exception to this rule of thumb is that the size of the vehicle and where it is able to go may be the key determinants in selecting a specific self-driving car offering.

A perceived advantage to using a self-driving car will be that the AI might be considered a safer driver than a human at the wheel of a ridesharing car. In theory, self-driving cars will always be driving in a legal manner, obeying the speed limits and other driving rules. The AI driving system will not get drunk, will not drive while distracted, and so on. The hope is that self-driving cars will overall be safer at driving than human drivers.

Another perceived advantage would be that human drivers can sometimes be a pain in the neck, while an AI driving system is ostensibly quiet and unintrusive. As a passenger, you don’t need to carry on a pleasant conversation with an AI driving system.

From a crook’s perspective, a human driver is a bit of a potential problem. The human driver could be a witness to the fact that a crook took a ride. This presumably would include an indication of where the crook was picked up, what they looked like, and where they were dropped off. In addition, the human driver could later attest to what the crook did while inside the vehicle, perhaps overhearing snippets of conversation.

In that sense, the perceived beauty of using an AI self-driving car would be that a crook would not have to worry about an overly curious human driver who keeps sticking their nose where it does not belong. There won’t be a human driver inside a self-driving car. The crook would be able to ride along and not be observed by a nosy human driver.

You might be thinking that the crook could just enlist the human driver into whatever criminal enterprise is taking place. Sure, that’s a possibility. The problem is that using a human driver can be a loose end. Will the human driver keep their mouth shut? Suppose the human driver turns and becomes a state’s witness? Worse still, what if the human driver is an undercover law enforcement officer?

Using a self-driving car cuts out all of those apprehensive misgivings about having a human driver.

Seems pretty convincing. Law-abiding people will be attracted to using self-driving cars. Criminals too will be attracted to using self-driving cars. A perfect convergence to possibly get the criminals to let down their guard. The self-driving car fleet operator would normally not be differentiating one type of rider from another. As long as the riders are paying to ride, that’s what counts. People will go where they want. People will do whatever they want while inside the autonomous vehicle (well, within limits, see my discussion at this link here).

In a manner of speaking, crooks might perceive self-driving cars in the same light as using a secure encrypted messaging platform. Self-driving cars would seem to provide a criminally handy form of transportation. No human driver. No human witnesses. Conduct your criminal activity while inside the self-driving car, having private discussions with your fellow lawbreakers.

No one would ever be the wiser.

Maybe yes, maybe no.

Imagine that the FBI realizes that self-driving cars could be a means to ensnare riders that are criminals such as organized crime participants, narcotics traffickers, etc.

One approach would be to try and convince an existing fleet operator to let the FBI be immersed in the fleet deployment. This could be dicey as a secretive arrangement and might end up tipping the hand of law enforcement since an insider of the fleet operator might spill the beans to the crooks.

It might be sounder for the FBI to put together its own fleet. This could be done from scratch. They go out and buy a slew of self-driving cars and then start listing them for use. Things are a bit more complicated due to the need to keep the self-driving cars properly maintained, properly serviced, updated with needed AI driving system patches, and the like. In any case, it is all doable.

One supposes that an alternative would be to approach a fledgling startup that is operating a fleet of self-driving cars and do a co-opt deal with that firm. This might be handier since the cover story is already established and this might lessen suspicions by any potential crooks that might decide to use those particular self-driving cars.

What could the FBI use those self-driving cars for?

The obvious aspect would be to keep track of where the riders are picked up and where they are dropped off. This would include the dates and times of the travel, along with other facets.
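As a rough illustration of how little it would take to assemble such a log (the field names here are invented for the sketch, not any fleet operator’s actual schema), even the bare trip metadata amounts to a tidy surveillance record:

```python
# Hypothetical ride log entry -- field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RideRecord:
    rider_id: str            # account or device identifier used to hail the ride
    pickup_location: tuple   # (latitude, longitude) of the pickup
    dropoff_location: tuple  # (latitude, longitude) of the dropoff
    pickup_time: datetime
    dropoff_time: datetime

ride_log = [
    RideRecord("rider-417", (33.77, -118.19), (34.05, -118.24),
               datetime(2021, 6, 10, 22, 15), datetime(2021, 6, 10, 23, 2)),
]
# Replaying such records over weeks would reveal routines, meeting spots, and patterns.
```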

But that’s not all, by a long shot.

Self-driving cars have a host of sensors that are used to detect the driving scene. This encompasses devices such as video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and other data-capturing sensors. Though the purpose of those sensors is presumably for discerning the driving environment, they collect a wide swath of data that has little to do with the act of driving per se.

I’ve described this as the “roving eye” problem, see my analysis at this link here.

When a self-driving car is driving down a normal neighborhood street, there is more than just video imagery about the roadway itself. The cameras are also capturing the activities taking place, such as someone mowing their lawn, kids playing outside a house, and pretty much anything else within visual range. Companies that are deploying fleets of self-driving cars can attempt to monetize this data, for example by potentially selling it to real estate firms that want to know the status of the housing in a given community.

This can be a potential privacy intrusion nightmare. Given that self-driving cars will be going back and forth throughout your community, it would be feasible to play back the collected data and start to piece together the daily activities of us all. Open questions exist about the ownership and use of this data.

For the FBI, the data from a self-driving car could be golden.

A criminal is picked up by a self-driving car being run by the FBI. Imagine that other allied crooks are there chatting with the criminal when the person gets into the self-driving car. This would be captured on video. When the crook arrives at the designated destination, the person gets out of the self-driving car and is met by other people. All of them are now captured on video. If you multiply out this type of video capturing, over time you could piece together the entire network of crooks involved in whatever lawless acts they are doing.
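In effect, each ride yields a set of people seen together on camera, and overlapping sightings stitch those sets into a network. Here is a minimal sketch of that idea in Python (the sighting data is invented purely for illustration):

```python
# Hypothetical sketch: build an association graph from per-ride video sightings.
from collections import defaultdict
from itertools import combinations

# Each entry lists the people identified on camera around a single ride.
sightings = [
    {"crook_A", "crook_B"},             # seen together at a pickup
    {"crook_B", "crook_C", "crook_D"},  # seen together at a dropoff
    {"crook_A", "crook_D"},
]

graph = defaultdict(set)
for group in sightings:
    for person, other in combinations(group, 2):
        graph[person].add(other)
        graph[other].add(person)

# Over time, the accumulated edges trace out the network of associates.
for person, associates in sorted(graph.items()):
    print(person, "->", sorted(associates))
```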

There is more fuel to be placed on this fire.

Self-driving cars will have inward-facing cameras too. This is to allow for catching riders that might decide to mar the interior of the vehicle by writing graffiti or tearing up the seats. A more positive-minded use would be for taking online courses and being able to do Zoom-like discussions while riding along in a self-driving car.

The conversations inside a self-driving car could entirely be captured on video. Think of the treasure trove of info. If the criminal has a fellow crook inside the self-driving car, their conversation would be entirely recorded. Even if the person is alone inside the self-driving car, their use of a personal smartphone could be detected and recorded.

Of course, all of this presumes that the criminals are lulled into believing that self-driving cars are secure and not being used to surveil them. Here’s how that would possibly be cleverly staged to hoodwink the baddies.

The odds are that once self-driving cars become prevalent, the general public will wise up that self-driving cars have this intrinsic capability of being a tattletale. You can bet that many people will refuse to ride in self-driving cars for that very reason. There is even a chance that communities might try to ban the use of self-driving cars or put hefty restrictions on what they can record when inside their locale.

So, expect that there will be some self-driving car fleets that will advertise and pledge that you won’t have your privacy intruded upon if you use their particular fleet. This might be a key differentiator for prospective riders. They will look at any listed self-driving cars on a ridesharing network and seek to use only the ones that have made that kind of promise.

Does this mean that the FBI is therefore fouled out by that kind of preference?

Nope.

They could presumably tout that their fleet is indeed the type that avoids using the non-driving-related data. Similar to how Anom touted that its encrypted messaging platform was secure, the FBI could ostensibly make the same kind of claim about the secretly run fleet of self-driving cars.

There might be additional under-the-table ways to especially lure the crooks they are targeting.

For example, possibly allow payment for the rides via some shady cryptocurrency. The FBI wouldn’t care whether the rides were paid for or were using an unreliable form of payment, since the subterfuge is a pretext to catch the baddies. The prices for the rides could also be set extremely low, below the prices of other competing self-driving car fleets.

The best dream for law enforcement would be that the criminals involved would somehow generally agree to use that specific fleet, more so than any other self-driving car fleet. Get the evil masterminds to standardize on a seemingly secure and well-guarded fleet of self-driving cars. Perhaps the undercover agents operating the fleet could offer a sweetly priced deal that also includes top-priority riding privileges and any other bells and whistles associated with using self-driving cars.

And then nab them.

Conclusion

None of this is going to happen unless self-driving cars arrive in a large-scale way. Thus, until there is ongoing and booming growth in the adoption of self-driving cars, this law enforcement gambit is not especially viable. Eventually, though, the FBI or some similar law enforcement agency running such a secretly arranged sting operation is a distinct possibility.

One last parting thought on this mind-bending topic.

When it comes time to bust the crooks, you could potentially have the self-driving cars go pick them up, acting in a normal fashion as though nothing untoward is about to happen. Those criminals would gladly get into the self-driving cars that they have been using routinely for months or years.

The next thing they know, the self-driving fleet all at once takes the unsuspecting rubes to a local FBI branch office, handing them directly over to the eagerly awaiting law enforcement agents.

Now that’s a form of special delivery.
