Report explores current malicious uses of AI to better understand the future of cybercrime

The exponential growth of communication technologies has led to a rise in cybercrime. In the research paper “Malicious Uses and Abuses of Artificial Intelligence,” security provider Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Europol present the state of malicious and abusive uses of AI and ML technologies as of 2020, along with plausible future scenarios in which cybercriminals could exploit these technologies.

AI can bring enormous benefits to society and help solve some of the biggest challenges we currently face, but it also increases the risks of cybercrime.

The report provides law enforcement, policymakers, and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.

Edvardas Šileris, head of Europol’s Cybercrime Centre, said:

“AI promises the world more efficiency, automation and autonomy. At a time when the public is increasingly concerned about the possible misuse of AI, we need to be transparent about the threats, but also look at the potential benefits of AI technology. This report will not only help us anticipate potential malicious and abusive uses of AI, but also proactively prevent and mitigate these threats. This is how we can unlock the potential of AI and benefit from the positive use of AI systems.”

The paper warns that AI systems are being developed to improve the effectiveness of malware and disrupt anti-malware and facial recognition systems.

Martin Roesler, head of forward-looking threat research at Trend Micro, states:

“Cybercriminals have always been early adopters of the latest technologies, and AI is no different. As this report reveals, it is already being used to guess passwords, break CAPTCHAs and clone voices, and many other malicious innovations are on the way.”

Using AI for cybercrime

The first part of the report outlines various AI-based methods already employed by cybercriminals.

Malware

The AI-supported or AI-enhanced cyberattack techniques studied demonstrate that criminals are already taking steps to expand the use of AI. However, malware developers may be using AI in more obscure ways without being detected by researchers and analysts.

For example, a 2015 study demonstrated a system that could craft emails capable of bypassing spam filters. The approach uses a generative grammar to create a large set of phishing emails with a high degree of semantic quality; by probing filters with these variants, the system adapts to them and identifies content that is no longer detected.
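As a rough illustration of the generative-grammar idea (a toy sketch, not the study’s actual system), the following Python snippet composes many distinct phishing-style emails from a few interchangeable phrase sets; all phrases are invented placeholders:

```python
# Toy generative grammar: every combination of phrase sets yields a
# distinct, semantically plausible phishing-style email.
import itertools

GRAMMAR = {
    "greeting": ["Dear customer", "Hello", "Dear account holder"],
    "pretext":  ["we detected unusual activity on your account",
                 "your mailbox is almost full"],
    "action":   ["please verify your details", "click the link to confirm"],
    "urgency":  ["within 24 hours", "immediately"],
}

def generate_emails():
    # 3 * 2 * 2 * 2 = 24 distinct variants from four small phrase sets
    for greeting, pretext, action, urgency in itertools.product(*GRAMMAR.values()):
        yield f"{greeting}, {pretext}; {action} {urgency}."

for email in list(generate_emails())[:3]:
    print(email)
```

Each variant can then be tested against a filter; the ones that slip through reveal which phrasings the filter no longer detects.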

In 2017, at the Black Hat USA information security conference, researchers demonstrated how ML techniques could analyze years of data on Business Email Compromise (BEC) attacks, a form of cybercrime that uses email fraud to defraud organizations, in order to identify potential targets for future attacks.

This system leverages both data leaks and freely available social media information and can accurately predict whether an attack will be successful.
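A minimal sketch of that idea, with invented feature names and data, might look like the following; the actual Black Hat system was trained on years of real BEC data:

```python
# Hedged sketch: score potential BEC targets from leak- and social-media-
# derived features. Features, data, and labels are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [credentials leaked?, exec title public?, email format guessable?,
#           finance role visible online?]; label 1 = past attack succeeded.
X = np.array([[1, 1, 1, 1], [0, 1, 1, 0], [1, 0, 0, 0],
              [0, 0, 1, 0], [1, 1, 0, 1], [0, 0, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)
candidate = np.array([[1, 1, 1, 0]])  # a hypothetical new target
print("predicted success probability:", model.predict_proba(candidate)[0, 1])
```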

AI-supported password hacking

Cybercriminals are using ML to improve password-guessing algorithms. Tools such as Hashcat and John the Ripper compare the hashes of many variants of frequently used passwords against a target hash until they find the password that produces it.
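The core mechanic these tools automate fits in a few lines of Python; the wordlist, variant rules, and target hash below are illustrative only:

```python
# Minimal dictionary attack: hash common passwords and simple variants,
# then compare each against the target hash.
import hashlib

target_hash = hashlib.sha256(b"password1").hexdigest()  # hash to recover

common = ["123456", "password", "qwerty", "letmein"]
variants = [word + suffix for word in common for suffix in ["", "1", "!", "123"]]

for candidate in variants:
    if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
        print("recovered:", candidate)
        break
```

ML-based approaches replace the fixed variant rules with candidates learned from real leaked-password distributions.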

By leveraging neural networks and generative adversarial networks (GANs), cybercriminals can analyze large datasets of leaked passwords and generate variations tailored to their statistical distribution, making guesses more accurate and targeted.

For example, in an article listing a collection of open-source hacking tools, the report’s authors discovered AI-based software that analyzes large sets of passwords recovered from data leaks. The software improves its ability to guess passwords by training a GAN to learn how people tend to change and update them, most often by adding a letter or number.
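A heavily simplified, PassGAN-style sketch in PyTorch conveys the idea; the tiny architecture, fixed 8-character alphabet, and three-password “leak” are toy assumptions, not the tool the authors found:

```python
# Toy GAN over fixed-length lowercase/digit passwords (one-hot encoded).
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"
MAXLEN, NOISE = 8, 64

def encode(pw):
    # one-hot encode a password, padded/truncated to MAXLEN characters
    t = torch.zeros(MAXLEN, len(CHARS))
    for i, c in enumerate(pw[:MAXLEN]):
        t[i, CHARS.index(c)] = 1.0
    return t.flatten()

G = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(),
                  nn.Linear(256, MAXLEN * len(CHARS)))
D = nn.Sequential(nn.Linear(MAXLEN * len(CHARS), 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                     # real: (batch, MAXLEN*len(CHARS))
    b = real.size(0)
    fake = G(torch.randn(b, NOISE))
    # discriminator: push real toward 1, generated toward 0
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: try to make the discriminator output 1 on fakes
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

def sample(n):
    # decode generated vectors back into candidate password strings
    with torch.no_grad():
        out = G(torch.randn(n, NOISE)).view(n, MAXLEN, len(CHARS))
    return ["".join(CHARS[i] for i in row.argmax(dim=1)) for row in out]

real = torch.stack([encode(p) for p in ["password", "qwerty12", "letmein1"]])
train_step(real)
print(sample(3))  # gibberish until trained on a real leak for many steps
```

Trained at scale on real leaks, the generator’s samples begin to mirror human habits such as appending a digit or letter.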

In a February 2020 underground forum post, they also found a link to a GitHub repository hosting a password analysis tool that could process 1.4 billion credentials and generate password-variation rules.

Breaking CAPTCHAs with AI

The application of ML to break CAPTCHA security systems is frequently discussed on criminal forums. CAPTCHA images are commonly used on websites to thwart criminals when they attempt to automate attacks, such as the mass creation of new accounts.

According to the report, software that implements neural networks to solve CAPTCHAs is being tested on criminal forums.
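The building block of many such solvers is a small convolutional network that classifies individual, pre-segmented CAPTCHA characters. The PyTorch sketch below assumes 28x28 grayscale crops and 36 character classes; production solvers also handle segmentation, noise, and distortion:

```python
# Minimal CNN for classifying single CAPTCHA characters (a-z, 0-9).
import torch
import torch.nn as nn

class CaptchaCharCNN(nn.Module):
    def __init__(self, n_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 28 -> 14
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 14 -> 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):                  # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x).flatten(1))

model = CaptchaCharCNN()
logits = model(torch.randn(4, 1, 28, 28))  # four dummy character crops
print(logits.argmax(dim=1))                # predicted class indices
```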

Social engineering and AI

Now recognized as one of the biggest threats to corporate security, social engineering allows cybercriminals to gain legitimate and authorized access to confidential information.

The report cites discussions found on various underground forums about AI-based tools for improving social engineering tasks.

According to the report, a recognition tool named “Eagle Eyes” is claimed on the French forum Freedom Zone to be able to find all social media accounts associated with a specific profile. It uses facial recognition algorithms to match a user’s profiles registered under different names.
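The matching step such a tool would need can be approximated with the open-source face_recognition library; the image file names below are hypothetical:

```python
# Compare a known face against photos from differently named profiles.
import face_recognition

target = face_recognition.load_image_file("target.jpg")
target_enc = face_recognition.face_encodings(target)[0]  # assumes one face found

for path in ["profile_a.jpg", "profile_b.jpg"]:          # scraped profile photos
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings and face_recognition.compare_faces([target_enc], encodings[0])[0]:
        print(f"{path}: likely the same person")
```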

Another tool identified in the report enables real-time voice cloning: a voice recording of just five seconds is enough for a malicious actor to clone a target’s voice. A UK-based energy company was duped by one such tool and transferred nearly £200,000 to a Hungarian bank account after a cybercriminal used deepfake audio to impersonate the company’s CEO and authorize the payments.

Deepfakes and AI

Deepfakes involve the use of AI techniques to create or manipulate audio and visual content so that it appears authentic. A portmanteau of “deep learning” and “fake media,” deepfakes are particularly suited to disinformation campaigns because they are difficult to distinguish from legitimate content right away, even with technological solutions. Given the wide reach of the Internet and social media, deepfakes can reach millions of people around the world very quickly.

In fake videos, AI trained on numerous photos of a target can replace one person’s face in a sequence with another’s, and the results are convincing enough to fool many viewers.

Last May, a deepfake using Elon Musk’s face was posted on YouTube to trick people into sending Bitcoin and Ethereum cryptocurrency to cybercriminals.

Future uses of AI and ML for cybercrime

The report’s authors expect to see cybercriminals exploit AI in a variety of ways in the future with the goal of improving the scope and scale of their attacks, evading detection, and using AI as both an attack vector and an attack surface.

They anticipate that criminals will attack organizations via social engineering tactics. Cybercriminals can automate the early stages of an attack through content generation, improve business intelligence gathering, and speed up the rate at which potential victims are identified and business processes compromised. This will enable faster and more accurate fraud against businesses through a variety of attacks, including phishing and business email compromise (BEC) scams.

AI can also be abused to manipulate cryptocurrency trading practices. The authors point to a forum discussion about AI-powered bots trained on successful trading strategies from historical data in order to develop better predictions and trades.
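A toy version of such a bot, with synthetic prices standing in for real market data, could look like the following; the forum tools are presumably far more elaborate:

```python
# Train a model on windows of past returns to predict next-period direction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100   # synthetic price series

returns = np.diff(prices) / prices[:-1]
window = 5
X = np.array([returns[i - window:i] for i in range(window, len(returns))])
y = (returns[window:] > 0).astype(int)            # 1 = price went up

split = int(0.8 * len(X))
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:split], y[:split])
print("directional accuracy:", model.score(X[split:], y[split:]))
```

On a purely random series the accuracy hovers near chance, which is why such bots are trained on historical data with real, exploitable structure.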

Furthermore, AI could be used to harm or inflict physical damage on individuals in the future. The authors report that AI-powered facial recognition drones carrying a gram of explosive are currently under development. Designed to look like small birds or insects so as to appear unobtrusive, these drones can be used for micro-targeted or single-person bombing and operated via cellular Internet.

AI and ML technologies have many positive use cases, but they are also being used for criminal and malicious purposes. There is therefore an urgent need to understand the capabilities, scenarios, and attack vectors involved in order to be better prepared to protect systems, devices, and the general public from advanced attacks and abuse.

The three organizations make several recommendations to conclude the report:

  • Harness the potential of AI technology as a crime-fighting tool to ensure the sustainability of the cybersecurity and law enforcement industry
  • Pursue research to drive the development of defensive technologies
  • Promote and develop secure AI design frameworks
  • Defuse politically charged rhetoric about the use of AI for cybersecurity purposes
  • Leverage public-private partnerships and establish multidisciplinary expert groups

For more information: Malicious uses and abuses of artificial intelligence

Translated from Un rapport explore les utilisations malveillantes actuelles de l’IA pour mieux appréhender le futur de la cybercriminalité