The full version of the content can be found in our white paper, available here.
Welcome to the first blog post from the Emerging Risks and Implications of AI and LLMs series. In this post, we will focus on the cybersecurity risks that may arise from the adoption and usage of AI and LLMs.
Our article provides a concise definition of cybersecurity and outlines common attack surfaces and vectors. We discuss how AI and LLMs extend the attack surface and create new attack vectors in cybersecurity. We also examine how AI and LLMs can support malicious actors in exploiting established attack vectors, drawing on existing literature, publications, and our own analysis.
Cybersecurity is the practice of protecting electronic devices, networks, and sensitive information from unauthorized access, theft, damage, or other malicious attacks. It involves the use of various technologies, processes, and practices to secure digital assets against cyber threats, including viruses, malware, phishing, hacking, and identity theft. The primary goal of cybersecurity is to ensure the confidentiality, integrity, and availability of digital information—and to protect the privacy and safety of individuals and organizations in the digital realm. This is achieved through the implementation of technologies, methods, and policies that minimize the risk of cyber incidents that could result in data loss, financial loss, reputational damage, or other negative impacts.
In cybersecurity, an attack surface refers to the sum of all the points or areas within an organization's digital infrastructure that an attacker can target or exploit. This includes all the hardware, software, and networks that are accessible to potential attackers. Examples of attack surfaces can include:
An attack vector refers to the specific path or method that an attacker uses to exploit a vulnerability in an organization's digital infrastructure. Examples of attack vectors can include:
The integration of AI and LLMs into various systems and applications is becoming increasingly popular. From chatbots and virtual assistants to content creation and business automation services, AI and LLMs are used to improve the efficiency and accuracy of various processes. In parallel, they are also becoming a new attack surface for cybercriminals to exploit, leading to new, more sophisticated, and more targeted attacks. One of the most significant threats to these integrated technologies is adversarial attacks.
Adversarial attacks are a type of attack where an attacker intentionally manipulates or alters data inputs to a machine learning algorithm with the goal of causing misclassification or other unintended behavior. These attacks can be particularly dangerous in applications where the accuracy of the machine learning model is critical, such as in medical diagnosis or autonomous vehicles. Adversarial attacks can cause the model to make incorrect decisions, which could have serious consequences.
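To make this concrete, below is a minimal sketch of one common adversarial technique, the fast gradient sign method (FGSM), which nudges an input in the direction that most increases a model's loss. It assumes a pretrained PyTorch image classifier is already available; the model, the epsilon budget, and the input tensors are illustrative placeholders rather than a real attack pipeline.

```python
# Minimal FGSM sketch (assumes a pretrained PyTorch classifier `model`,
# an input batch `x` with pixel values in [0, 1], and true labels `y_true`).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y_true, epsilon=0.03):
    """Return a copy of `x` perturbed to increase the classifier's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)  # loss against the true labels
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that raises the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range
```

A perturbation this small is typically imperceptible to a human, yet it can be enough to flip the model's prediction, which is what makes evasion attacks of this kind hard to spot in deployed systems.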
There are several types of adversarial attacks, including:
In our white paper, available here, we also explore a newly defined attack called indirect prompt injection.
The risk of malicious actors using AI and LLMs to build malware is significant. AI and LLMs can be used to create more sophisticated and effective malware that evades traditional security measures, making it harder to detect and mitigate.
For example, AI and LLMs can be used to create malware that learns and adapts to security measures in real time, making it more difficult to defend against. They can also be used to automate the design and distribution of malware, increasing the scale and frequency of attacks.
A recent proof-of-concept attack called Black Mamba has demonstrated the potential dangers of using AI for malicious purposes. The attack uses AI to dynamically modify benign code at runtime without relying on any command-and-control infrastructure, allowing the malware to evade automated security systems designed to detect such suspicious activity. The researchers behind the attack tested it against an “industry-leading” EDR system, resulting in zero alerts or detections[1].
AI can also be used to enhance password attacks, accelerating the speed and efficiency of password cracking. Machine learning algorithms can be trained on large datasets of stolen credentials to identify patterns and craft more effective attacks.
In 2019, Briland Hitaj et al.[2] developed PassGAN, an AI-powered password attack tool that uses a Generative Adversarial Network (GAN) to automatically learn the distribution of real passwords from leaked data and generate high-quality password guesses. The tool showed promising results, surpassing rule-based and state-of-the-art machine learning password-guessing tools in experiments on two large password datasets. PassGAN achieved these results without any prior knowledge of common password structures or properties, making it a powerful tool for password attacks. When combined with the output of another password-cracking tool called HashCat, PassGAN matched a significantly higher number (51%-73% more) of passwords than HashCat alone, demonstrating its ability to capture password properties that current state-of-the-art rules do not encode.
Malicious actors could potentially use AI to improve the effectiveness of man-in-the-middle (MitM) attacks. For example, they could use machine learning algorithms to analyze and understand patterns in network traffic, allowing them to more effectively identify and target vulnerable communication channels.
Malicious actors using AI and LLMs for Denial-of-Service (DoS) attacks can significantly increase the scale and effectiveness of those attacks. They can use AI algorithms to identify vulnerabilities in target systems and launch coordinated, automated attacks from multiple sources, making the attacks difficult for defenders to mitigate. AI and LLMs can also be used to create more sophisticated and complex attack patterns that bypass traditional defense mechanisms. Moreover, attackers can use machine learning algorithms to generate realistic-looking traffic that is harder for security systems to detect and block[3].
Malicious actors can use AI and LLMs to automate and streamline the exploitation of SQL injection vulnerabilities in web applications. For example, an attacker can use AI and LLMs to generate and test a large number of SQL injection payloads tailored to specific web applications, increasing the likelihood of success. They can also use AI and LLMs to analyze the structure of a database and extract sensitive information such as usernames, passwords, and credit card numbers. Additionally, AI and LLMs can be used to obfuscate SQL injection payloads to evade detection by web application firewalls and other security measures.
The paper "A GAN-based Method for Generating SQL Injection Attack Samples"[4] by Dongzhe et al. proposes a solution to the problem of limited data availability for training classification models to detect SQL injections. The paper suggests using deep convolutional generative adversarial networks and genetic algorithms to generate additional SQL injection samples and improve the accuracy of detection models. However, it is important to consider that these methods could potentially be utilized by malicious actors to enhance their attack performance.
The risk of malicious actors using AI and LLMs for phishing and baiting is significant. AI and LLMs can be used to create more sophisticated and convincing phishing emails and fake content, making them harder for employees to identify. For example, AI and LLMs can generate personalized content that appears to come from a trusted source, with realistic language, tone, and formatting. As a result, employees may be more likely to click on links, download malware onto their systems, or provide sensitive information.
ChatGPT specifically can be misused by malicious actors to give their phishing emails the well-crafted language they may otherwise lack. By leveraging the chatbot, even novice cybercriminals can elevate their social engineering attacks, producing, at no cost, phishing emails that are coherent, conversational, and almost indistinguishable from genuine messages. Traditional telltale signs of a phishing email, such as misspelled words or clumsy grammar, are no longer sufficient to raise suspicion. While ChatGPT has implemented measures to prevent such misuse, a malicious actor can easily evade them by rephrasing requests to avoid detection. Attackers can also use the tool to refine their existing phishing communications and produce advanced phishing emails that can deceive even the most tech-savvy users, leading to a surge in account takeover attacks[5].
This blog post highlights that AI and LLMs can increase productivity and efficiency across organizations and industries. However, they also create new attack vectors for malicious actors to exploit and new tools to help them achieve their aims.
For additional details and access to a complete version of the table (Appendix) that compiles attack surfaces and attack vectors associated with AI and LLMs, along with examples and available resources, please refer to our white paper, which can be accessed at the following link.
At Archipelo, we recognize the critical importance of software supply chain security and compliance. Archipelo gives enterprises the ability to understand how their code is created—to verify code provenance and increase software security, integrity, and compliance. Archipelo provides proactive observability of security and compliance risks at the earliest stages of the SDLC—from research and design to development and deployment. The Archipelo platform strengthens software supply chain security by addressing the root source of many security and compliance issues: verifying code provenance before, during and after every commit and release.
This is a preview version of the table. The full version can be found in our white paper, available here.
|   | Attack Surface | Attack Vector | Example | Resources | AI/LLMs Risks |
|---|----------------|---------------|---------|-----------|---------------|
| 1 | Integrated AI & Large Language Models | Poisoning | An attacker intentionally inserts incorrect or malicious data into the training set of a machine learning model to manipulate its behavior. |   | Integrated AI/LLMs as attack surface |
Archipelo helps organizations ensure developer security, resulting in increased software security and trust for your business.
Try Archipelo Now