Aug 23, 2023

Cybersecurity Risks of AI and LLMs (White Paper)

Author

Matthew Wise

The full version of the content can be found in our white paper, available here.

Welcome to the first blog post from the Emerging Risks and Implications of AI and LLMs series. In this post, we will focus on the cybersecurity risks that may arise from the adoption and usage of AI and LLMs. 

Our article provides a concise definition of cybersecurity and explores the list of attack surfaces and vectors. We discuss how AI and LLMs extend the attack surface and create new attack vectors in cybersecurity. We also examine how AI and LMMs can support malicious actors in exploiting established attack vectors by drawing on existing literature, publications, and our internal thoughts.

Cybersecurity Concise Definition

Cybersecurity is the practice of protecting electronic devices, networks, and sensitive information from unauthorized access, theft, damage, or other malicious attacks. It involves the use of various technologies, processes, and practices to secure digital assets against cyber threats, including viruses, malware, phishing, hacking, and identity theft. The primary goal of cybersecurity is to ensure the confidentiality, integrity, and availability of digital information—and to protect the privacy and safety of individuals and organizations in the digital realm. This is achieved through the implementation of technologies, methods, and policies that minimize the risk of cyber incidents that could result in data loss, financial loss, reputational damage, or other negative impacts.

Attack Surfaces and Vectors in Cybersecurity

Attack Surfaces

In cybersecurity, an attack surface refers to the sum of all the points or areas within an organization's digital infrastructure that an attacker can target or exploit. This includes all the hardware, software, and networks that are accessible to potential attackers. Examples of attack surfaces can include:

  1. Network infrastructure: All devices connected to the network, such as servers, routers, switches, and firewalls, represent potential entry points for attackers.
  2. Web applications: Web applications that interact with users over the internet can be exploited through vulnerabilities in the application code or web server configuration.
  3. Endpoint devices: Smartphones, tablets, laptops, and other mobile devices that connect to corporate networks and data may be vulnerable to attacks.
  4. Cloud infrastructure: Cloud services and applications that store and process data can be vulnerable to attacks, particularly if they are not properly secured.
  5. Internet of Things (IoT): Smart home devices, medical devices, and other IoT devices can be exploited if they have vulnerabilities or are not properly secured.
  6. Social engineering: Attackers can exploit employees' human vulnerabilities to gain access to sensitive information or systems through tactics such as phishing, and baiting.

Attack Vectors

An attack vector refers to the specific path or method that an attacker uses to exploit a vulnerability in an organization's digital infrastructure. Examples of attack vectors can include:

  1. Malware: Malicious software such as viruses, Trojans, scareware, and ransomware can be used to gain access to or damage an organization's digital assets.
  2. Password attacks: Attackers can use brute force methods to crack passwords or use stolen credentials to gain access to systems and data.
  3. Man-in-the-middle attacks: Attackers can intercept communications between two parties to steal sensitive information or modify data.
  4. Denial-of-service attacks: Attackers can flood a system with traffic to overload it and make it unavailable to legitimate users.
  5. SQL injection: Attackers can use SQL injection techniques to exploit vulnerabilities in web applications and gain access to databases or execute unauthorized commands.
  6. Cross-site scripting (XSS): Attackers can inject malicious code into websites to gain access to sensitive data or modify content.
  7. Remote code execution: Attackers can exploit vulnerabilities in software applications to execute malicious code on a target system.
  8. Zero-day exploits: Attackers can use unknown vulnerabilities in software or hardware to gain access to systems and data.
  9. Physical attacks: Attackers can physically access and tamper with hardware devices or steal sensitive information by intercepting devices during shipment.
  10. Supply chain attacks: Attackers can compromise the software or hardware supply chain to gain access to systems or steal sensitive information.
  11. Watering hole attacks: Attackers can compromise a website that is frequently visited by a target organization's employees and infect it with malware to gain access to systems and data.
  12. Phishing: Attackers can use phishing emails (pretending to be a trusted sender) to trick employees into divulging sensitive information or clicking on links that lead to malware downloads.
  13. Baiting: Attackers can create fake or attractive digital content, and distribute it through file-sharing sites or social media. When users download the content, they inadvertently install malware that can give the attacker access to their systems or data.

Emerging Cybersecurity Risks of AI and LLMs

AI and LLMs as Emerging Attack Surface

The integration of AI and LLMs into various systems and applications is becoming increasingly popular. From chatbots and virtual assistants to content creation, and business automation services—AI and LLMs are used to improve the efficiency and accuracy of various processes. In parallel, they are also becoming a new attack surface for cybercriminals to exploit, leading to the creation of new, more sophisticated and targeted attacks. One of the most significant threats to these integrated technologies is adversarial attacks.

Adversarial Attacks

Adversarial attacks are a type of attack where an attacker intentionally manipulates or alters data inputs to a machine learning algorithm with the goal of causing misclassification or other unintended behavior. These attacks can be particularly dangerous in applications where the accuracy of the machine learning model is critical, such as in medical diagnosis or autonomous vehicles. Adversarial attacks can cause the model to make incorrect decisions, which could have serious consequences.

There are several types of adversarial attacks, including:

  1. Poisoning attacks: An attacker intentionally inserts incorrect or malicious data into the training set of a machine learning model to manipulate its behavior.
  2. Evasion attacks: An attacker manipulates the input data to the model to cause it to misclassify or make incorrect decisions.
  3. Model stealing attacks: An attacker creates a new machine learning model by reverse-engineering an existing one using its output data, which can be used to extract sensitive information.
  4. Backdoor attacks: An attacker adds a hidden trigger to a model that can cause it to behave in a specific way when triggered, allowing for unauthorized access or data theft.
  5. Prompt injection attacks (LLMs): An attacker inserts a biased or malicious prompt into an LLM to manipulate its behavior and influence output.

In our white paper, available here, we also explore a newly defined attack, called indirect prompt injection.

AI and LLMs Supporting Malicious Actors in Exploiting Known and Existing Attack Vectors

Malware

The risk of a malicious actor using AI and LLM for malware is significant. AI and LLM can be used to create more sophisticated and effective malware that can evade traditional security measures, making it harder to detect and mitigate.

For example, AI and LLM can be used to create malware that learns and adapts to security measures in real time, making it more difficult to defend against. It can also be used to automate the process of designing and distributing malware, increasing the scale and frequency of attacks.

A proof of concept attack called Black Mamba has recently demonstrated the potential dangers of using AI for malicious purposes. This attack utilizes AI to dynamically modify benign code at runtime without relying on any command-and-control infrastructure, which allows the malware to evade automated security systems that are designed to detect such suspicious activity. The researchers behind the attack tested it against an “industry-leading” EDR system, resulting in zero alerts or detections[1].

Password attacks

AI can be also used to enhance password attacks. It accelerates the speed and efficiency of password cracking. Machine learning algorithms can also be trained on large datasets of stolen credentials to better identify patterns and create more effective attacks.

Briland Hitaj et al[2] in 2019 developed PassGAN, an AI-powered password attack tool that utilizes a Generative Adversarial Network (GAN) to automatically learn the distribution of real passwords from leaked data and generate high-quality password guesses. The tool has shown promising results in surpassing rule-based and state-of-the-art machine learning password-guessing tools in experiments conducted on two large password datasets. PassGAN achieved these results without any prior knowledge of common password structures or properties, making it a powerful tool for password attacks. When combined with the output of another password cracking tool called HashCat, PassGAN was able to match a significantly higher number (51%-73% more) of passwords compared to HashCat alone, demonstrating its ability to extract a considerable number of password properties that current state-of-the-art rules do not encode.

Man-in-the-middle attacks

Malicious actors could potentially use AI to improve the effectiveness of man-in-the-middle (MitM) attacks. For example, they could use machine learning algorithms to analyze and understand patterns in network traffic, allowing them to more effectively identify and target vulnerable communication channels.

Denial-of-service attacks

Malicious actors using AI and LLMs for Denial-of-Service (DoS) attacks can significantly increase the scale and effectiveness of the attack. They can use AI algorithms to identify vulnerabilities in target systems and launch coordinated and automated attacks from multiple sources, which makes it difficult for defenders to mitigate the attack. AI and LLM can also be used to create more sophisticated and complex attack patterns that can bypass traditional defense mechanisms. Moreover, attackers can use machine learning algorithms to generate realistic-looking traffic that is harder to detect and block by security systems[3].

SQL injection

AI and LLMs for SQL injection attacks can automate and streamline the process of exploiting SQL injection vulnerabilities in web applications. For example, an attacker can use AI and LLM to generate and test a large number of SQL injection payloads that are tailored to specific web applications, increasing the likelihood of success. They can also use AI and LLMs to analyze the structure of a database and extract sensitive information such as usernames, passwords, and credit card numbers. Additionally, AI and LLM can be used to obfuscate SQL injection payloads to evade detection by web application firewalls and other security measures.

The paper "A GAN-based Method for Generating SQL Injection Attack Samples"[4] by Dongzhe et al. proposes a solution to the problem of limited data availability for training classification models to detect SQL injections. The paper suggests using deep convolutional generative adversarial networks and genetic algorithms to generate additional SQL injection samples and improve the accuracy of detection models. However, it is important to consider that these methods could potentially be utilized by malicious actors to enhance their attack performance.

Phishing and Baiting

The risk of malicious actors using AI and LLM for phishing and baiting is significant. AI and LLM can be used to create more sophisticated and convincing phishing emails and fake content, making it harder for employees to identify them. For example, AI and LLM can be used to generate personalized content that looks like coming from a trusted source, with realistic language, tone, and formatting. As a result, employees may be more likely to click on links, download malware onto their system or provide sensitive information.

ChatGPT specifically can be misused by malicious actors to enhance their phishing emails with well-crafted language skills, which they may lack. By leveraging this chatbot, even novice cybercriminals can elevate their social engineering attacks by producing phishing emails that are coherent, conversational, and almost indistinguishable from genuine messages for free. Traditional telltale signs of a phishing email, such as misspelled words or clumsy grammar, are no longer sufficient to raise suspicion. While ChatGPT has implemented measures to prevent such misuse, a malicious actor can easily evade them by rephrasing their requests to avoid detection. Additionally, attackers can also use the tool to refine their existing phishing communications and produce advanced phishing emails that can deceive even the most tech-savvy users, leading to a surge in account takeover attacks[5].

Conclusion

This blog post highlights that AI and LLMs can increase productivity and efficiency across organizations and industries—however, they also create new attack vectors that malicious actors may exploit and new tools for them to achieve their aims. 

For additional details and access to a complete version of the table (Appendix) that compiles attack surfaces and attack vectors associated with AI and LLMs, along with examples and available resources, please refer to our white paper, which can be accessed at the following link.

We are Archipelo

At Archipelo, we recognize the critical importance of software supply chain security and compliance. Archipelo gives enterprises the ability to understand how their code is created—to verify code provenance and increase software security, integrity, and compliance. Archipelo provides proactive observability of security and compliance risks at the earliest stages of the SDLC—from research and design to development and deployment. The Archipelo platform strengthens software supply chain security by addressing the root source of many security and compliance issues: verifying code provenance before, during and after every commit and release.

Appendix

This is a preview version of the table. The full version can be found in our white paper, available here.

 

Attack Surface

Attack Vector

Example

Resources

AI/LLMs Risks

1

Integrated AI & Large Language Models

Poisoning 

An attacker intentionally inserts incorrect or malicious data into the training set of a machine learning model to manipulate its behavior.

https://www.forbes.com/sites/alexandralevine/2023/05/05/tiktok-bytedance-sensitive-words-suppression-china/?

Integrated AI/LLMs as attack surface

Resources

  1. “AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security”, Elizabeth Montalbano, 2023
    https://www.darkreading.com/endpoint/ai-blackmamba-keylogging-edr-security
  2. “PassGAN: A Deep Learning Approach for Password Guessing∗”, Hitaj et al., 2019
    https://arxiv.org/pdf/1709.00440.pdf
  3. “The Rise of Artificial Intelligence (AI) DDoS Attacks”, Cloudbric, 2021
    https://www.cloudbric.com/the-rise-of-artificial-intelligence-ddos-attacks/
  4. “A GAN-based Method for Generating SQL Injection Attack Samples”, Dongzhe Lu et al., 2022
    https://ieeexplore.ieee.org/document/9836726
  5. “ChatGPT is changing the phishing game”, Matt Caulfield, 2023
    https://www.securityinfowatch.com/cybersecurity/information-security/breach-det
    ection/article/53057705/chatgpt-is-changing-the-phishing-game

 

Archipelo Intelligent Code Provenance Platform for Software Supply Chain Security

Verify code provenance and increase security and compliance with Archipelo.

Contact Us