Tech News: When artificial intelligence facilitates crime

Artificial Intelligence (AI) is one of the important building blocks of the Fourth Industrial Revolution (4IR) or the age of “intelligentisation.”

Published Aug 14, 2020

By Louis Fourie

JOHANNESBURG - Artificial Intelligence (AI) is one of the important building blocks of the Fourth Industrial Revolution (4IR) or the age of “intelligentisation.”

The past few years have seen tremendous advances in machine learning, in which algorithms are built from data; in deep learning, which loosely simulates the human brain; and in the processing power and decreasing cost of fast computers.

Intelligent devices are therefore increasingly finding their way into our lives, whether as personal assistants such as Amazon Alexa, Google Home, Apple Siri or Samsung Bixby; satellite navigation; real-time language translation; biometric identification such as fingerprint, iris or facial recognition; or industrial process management and decision-making.

AI exploitation experiments

Unfortunately, this largely beneficial AI technology can also be misused and exploited for criminal purposes. In 2016, two computational social scientists, Seymour and Tully, used AI to convince social media users to click on a phishing link embedded in a mass-produced message.

The messages were very convincing because the content of each one was tailored to the intended individual, using machine learning techniques applied to users’ past behaviours and public profiles on social media. The real intention behind each message was thus skilfully concealed, with the result that a large number of people clicked on the link. Had an unsuspecting victim clicked on the phishing link and completed the web form in a real-world situation, the criminal would have obtained personal and private information that could be used for fraud and theft.
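
The underlying mechanism is ordinary personalisation: infer a target’s interests from their public posts, then match content to those interests. A minimal sketch of that idea with scikit-learn follows; the posts and topics are invented for illustration, and this is in no way the researchers’ actual code:

```python
# Toy illustration of interest inference from public posts.
# All posts and topics are invented; the point is only that standard
# personalisation machinery (the same kind recommenders use) can rank
# which subject a target is most likely to engage with.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_posts = [
    "Great trail run this morning, new personal best!",
    "Anyone recommend shoes for marathon training?",
    "Race-day nutrition tips appreciated",
]
candidate_topics = {
    "running": "running marathon training race shoes trail",
    "cooking": "recipe dinner baking kitchen ingredients",
    "finance": "stocks investing portfolio market returns",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(candidate_topics.values()) + user_posts)
topic_vecs = matrix[: len(candidate_topics)]
# The user's "profile" is simply the average of their post vectors
profile = np.asarray(matrix[len(candidate_topics):].mean(axis=0))

scores = cosine_similarity(profile, topic_vecs)[0]
best, score = max(zip(candidate_topics, scores), key=lambda t: t[1])
print(f"Inferred interest: {best} (similarity {score:.2f})")
```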

In the same year, three computer scientists, Martínez-Miranda, McBurney and Howard, simulated a trading market and found that, through reinforcement learning, AI trading agents could learn a market manipulation consisting of a set of deceptive false orders (order-book spoofing), resulting in a substantial profit for the owner of the agents.

The orders are placed with no intention of executing them; they are merely intended to manipulate the participants and prices in the marketplace. The same technique can be used to inflate stock prices through fraudulent promotion before selling to unsuspecting parties at the inflated prices (a pump-and-dump scheme, using social bots to spread disinformation that triggers algorithmic trading agents).
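
The dynamics are easy to see in a toy model. The sketch below (my own illustrative construction, not the researchers’ simulation) shows how large spoof bids distort a naive order-book imbalance signal of the kind simple algorithmic traders act on:

```python
# Toy order book: how spoof bids skew a naive imbalance signal.
# Illustrative only; real markets and spoofing surveillance are far more involved.

def imbalance(bids, asks):
    """Naive signal in [-1, 1]: positive values suggest buy pressure."""
    b = sum(qty for _, qty in bids)
    a = sum(qty for _, qty in asks)
    return (b - a) / (b + a)

# Genuine resting orders: (price, quantity)
bids = [(99.9, 50), (99.8, 40)]
asks = [(100.1, 45), (100.2, 55)]
print(f"Honest imbalance:  {imbalance(bids, asks):+.2f}")   # roughly balanced

# A spoofer posts large bids just below the best bid, never meaning to trade.
spoofed_bids = bids + [(99.7, 500), (99.6, 500)]
print(f"Spoofed imbalance: {imbalance(spoofed_bids, asks):+.2f}")  # strong fake buy pressure

# A naive momentum trader reading this signal would buy, pushing the price up;
# the spoofer then sells into the rise and cancels the fake bids.
```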

Both these theoretical experiments showed that AI poses a real and serious novel threat in the form of AI-facilitated crime. Although the intent behind most AI is beneficial, the threat is exacerbated because AI applications are increasingly interactive, autonomous and self-learning agents that can independently handle tasks previously requiring human intelligence and intervention.

Real AI threats

A few days ago, the results of research into the possible applications of AI and related technologies in the perpetration of crime were published in the journal Crime Science. AI threats were rated along four dimensions, namely harm (e.g. financial loss or undermining of trust), criminal profit (financial return, terror, reputational damage), achievability (readiness of the technology) and defeatability (measures to prevent, detect or render the crime unprofitable). The final result was twenty crimes rated according to these four dimensions, as can be seen in the figure below.

[Figure: AI threats, twenty crimes rated by harm, criminal profit, achievability and defeatability]

The above crimes were then organised into high, medium and low categories according to a combined threat severity score.
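
The published study does not reduce its rankings to a single public formula, but the idea of a combined score is easy to illustrate. In the sketch below, the weights and the example ratings are invented purely for illustration and are not taken from the study:

```python
# Illustrative combined-severity scoring; weights and ratings are invented,
# not taken from the Crime Science study.
WEIGHTS = {"harm": 0.3, "profit": 0.3, "achievability": 0.25, "defeatability": 0.15}

def severity(ratings: dict) -> float:
    """Weighted mean of 1-5 ratings. Higher defeatability should *lower*
    the threat, so that dimension is inverted before weighting."""
    adjusted = dict(ratings, defeatability=6 - ratings["defeatability"])
    return sum(WEIGHTS[d] * adjusted[d] for d in WEIGHTS)

crimes = {
    "audio/video impersonation": {"harm": 5, "profit": 5, "achievability": 4, "defeatability": 4},
    "burglar bots":              {"harm": 2, "profit": 2, "achievability": 2, "defeatability": 4},
}

for name, r in sorted(crimes.items(), key=lambda kv: -severity(kv[1])):
    print(f"{name}: {severity(r):.2f}")
```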

High threat AI crimes

• Audio/video impersonation. This was ranked as the number one threat. People have a strong tendency to believe audio and video evidence, but recent advances in deep learning, in particular the use of Generative Adversarial Networks (GANs) to generate convincing artificial (“deepfake”) content, have dramatically increased the opportunity to produce fake content (a minimal sketch of the GAN idea follows this list).

Credible and even interactive impersonations or identity forgery could be used to impersonate children in video calls to their elderly parents in order to gain access to funds; to obtain access to secure systems; and to produce fake videos of public figures speaking or acting deplorably in order to manipulate support. Although algorithmic detection of impersonation has achieved marginal success, it is unlikely to remain effective in the longer term because fake content spreads through many uncontrolled propagation routes.

• Driverless vehicles as weapons. In many countries, vehicles have for years been used as a delivery mechanism for explosives or as kinetic weapons. With fully autonomous, AI-controlled vehicles increasingly gaining traction, the threat of hacking into the driving system, despite extensive safety measures, and using vehicles for vehicular terrorism is very real. GPS targeting could steer a vehicle to its target, and machine vision could be used to target pedestrians.

• Tailored phishing. Most people are familiar with indiscriminate phishing attacks: generic messages, apparently from a trusted party, about a password reset, a frozen account and the like. The aim of these social engineering attacks is to collect secure information or to install malware when the unsuspecting victim clicks on the link. Until now, the conversion rate on these mass messages has been very low. However, AI can gather information from social networks or mimic the style of a trusted party to make a message appear more genuine, effectively automating spear-phishing attacks while simultaneously learning what works and adapting messages to maximise responses.

• Disrupting AI-controlled systems. As AI becomes an essential part of our daily lives in business, government and the home, the opportunities for exploitation multiply. Targeted disruption of power grids, traffic systems, food logistics, financial systems and other public infrastructure is becoming a real threat.

• Large-scale blackmail. Because AI can harvest information on a large scale (AI snooping) from social media or personal data sets (home hubs, email logs, browser history, hard drives, smartphone content, smart TVs), it could identify specific vulnerabilities of a prospective target and then tailor blackmail messages that threaten exposure of criminality, wrongdoing or embarrassing information based on those vulnerabilities.

• AI-authored fake news. AI can be used to create many versions of a fake story, apparently from multiple sources, to boost its visibility and credibility, and to personalise the content and presentation of the story to increase its impact. In sufficient quantity, fake news can shift attention away from true information.
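
For readers curious about the GAN machinery behind the deepfakes mentioned above, here is a minimal adversarial-training skeleton in PyTorch. It learns a toy one-dimensional distribution rather than faces or voices, but the generator-versus-discriminator loop is the same idea:

```python
# Minimal GAN skeleton (PyTorch): generator G learns to mimic a data
# distribution by fooling discriminator D. Toy 1-D Gaussian data, not images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generator maps noise to samples

    # Discriminator step: label real as 1, generated as 0
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make D label the fakes as real
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f} (target 3.0, 0.5)")
```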

Medium threat AI crimes

In the medium threat category, the research listed automated military robots in the hands of criminal or terrorist organisations; the sale of fraudulent services under the guise of AI; data poisoning, in which biases are deliberately introduced into machine-learning training data (sketched below); cyber-attacks based on machine learning; autonomous attack drones; online eviction from essential services through subtle, tailored AI attacks; the tricking of AI facial recognition systems through morphing attacks; and market bombing, in which financial or stock markets are manipulated via targeted patterns of trade to damage competitors, currencies or particular economic systems.
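
Data poisoning, in particular, is straightforward to demonstrate: corrupting a fraction of training labels measurably degrades a model. A toy sketch with scikit-learn on synthetic data (my own illustration, not from the study):

```python
# Toy data-poisoning demo: flipping training labels degrades a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")

# An attacker flips 30% of the training labels
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.3f}")
```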

Low threat AI crimes

In the low threat category, the report mentioned bias exploitation in prominent algorithms, such as YouTube recommendations being used to direct viewers to propaganda or Google rankings to elevate the profile of products; small autonomous burglar bots that could enter through small openings to retrieve keys or open doors for burglars; evading AI detection, for instance through adversarial perturbations that conceal pornographic material from automated detection (a minimal sketch follows below); AI-assisted stalking through the use of machine learning systems; and the forgery of music or art through AI.
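
The adversarial perturbation mentioned above exploits the fact that a small, carefully chosen nudge to an input can push it across a model’s decision boundary. A minimal sketch against a linear classifier on synthetic data (real attacks on deep detectors follow the same principle in far higher dimensions):

```python
# Minimal adversarial perturbation against a linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[:1]
orig = clf.predict(x)[0]
print("original prediction:", orig)

# Step each feature slightly against the model's weight vector (the
# linear-model analogue of FGSM's sign-of-gradient step) until the label flips.
w = clf.coef_[0]
direction = np.sign(w) * (1 if orig == 0 else -1)
x_adv, steps = x.copy(), 0
while clf.predict(x_adv)[0] == orig:
    x_adv = x_adv + 0.05 * direction
    steps += 1

print("perturbed prediction:", clf.predict(x_adv)[0])
print(f"flipped after {steps} steps; max per-feature change {np.abs(x_adv - x).max():.2f}")
```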

Our biggest AI threat

Our biggest threat regarding AI is thus not the science-fiction scenario of intelligent robots taking over the world, but rather much more sophisticated and creative uses of AI to facilitate everyday criminal activities. We live in an ever-changing world in which technology can be used for good or ill. It is imperative that we take note of the possible threats and act in time.

Professor Louis C H Fourie is a futurist and technology strategist.

BUSINESS REPORT
