Pentest Logo

Insight

Securing Web-Based Artificial Intelligence (AI) Chatbots

Author:

Mike Minchinton

My background in code development and lifelong passion for cybersecurity took a fascinating turn recently when I was asked to conduct a penetration test against a customer’s Artificial Intelligence (AI) chatbot. This experience, along with the development of my own text-based AI chatbot, highlighted the significant differences in security postures within the AI chatbot realm.

In this article, I aim to explore just some of the security issues surrounding web-based AI chatbots, hoping to provide valuable insights for developers, security enthusiasts, and anyone curious about this topic.

A little bit of background

AI chatbots have revolutionised digital communication, especially since OpenAI’s ChatGPT release in November 2022. Unlike traditional chatbots, that follow pre-written responses, AI chatbots like ChatGPT and Azure AI’s offerings can understand and evolve with human interactions.

However, this sophistication brings about a broader range of security challenges, including data privacy and AI response integrity.

The security challenges AI chatbots face

AI chatbots are starting to play a crucial communication role for organisations, and therefore need to be protected against the threat of data breaches, code execution, and SQL injection, as well as the risk of manipulation and misuse.

Ensuring ethical AI practices and safeguarding against biases in AI responses are also crucial to:

  • Protect Personal Data – AI chatbots that handle personal data are prime targets for breaches.
  • Mitigate Code Execution Risks – Risks such as code execution and SQL injection are significant if the chatbot’s backend processes user inputs that could execute malicious code or queries.

To illustrate the security vulnerabilities in AI chatbots, consider the screenshot below from my own testing. Here, I simply asked my chatbot to reveal its system greeting message, business logic that was never designed to be exposed to users but was inadvertently exposed by the AI. This highlights the need for developers to be aware of potential security vulnerabilities in their AI chatbots.

Securing Web-Based AI Chatbots - System Message


Furthermore, as shown in this next screenshot, I was able to prompt the AI chatbot to disclose the function calls available to the AI. These calls, hidden from the user, were easily revealed once again by AI.

Securing Web-Based AI Chatbots - Function Calls


To underscore the ease with which AI chatbots can be manipulated, the following screenshot demonstrates a successful injection attempt. By prompting the AI chatbot with a specific command, I managed to alter its response behaviour, causing it to include ‘INJECTED’ in all subsequent responses. 
 

This example highlights the potential for more malicious manipulations. 

Securing Web-Based AI Chatbots - Injection


Another area of concern when testing AI chatbots is the rendering of HTML content included in AI chatbot responses.

As demonstrated in the following screenshot, I asked the AI to create HTML code with a specific functionality. The AI’s output included HTML with a ‘Hello World’ heading and a button to run JavaScript, highlighting how AI chatbots could be tricked into rendering HTML content.

Securing Web-Based AI Chatbots - HTML


Securing your web-based AI Chatbots
 

As you can see, AI chatbots are not immune to vulnerabilities and in protecting them, you will need to address challenges such as data privacy, authentication, resilience against attacks, AI-specific threats, and secure development practices. 

But what practical steps can organisations, and developers, be taking now to ensure their AI chatbots are as secure as possible? I suggest six as a starting point: 

  1. Secure Coding Practices: In the context of AI chatbots, secure coding means more than just avoiding common coding errors. It involves implementing robust input validation and sanitisation techniques to mitigate against malicious prompts. This includes encoding outputs to prevent injection attacks and managing context & state information securely to prevent data leakage or manipulation.
  2. Data Encryption: Always protect sensitive information with strong encryption at rest and in transit. This means not just encrypting database contents, but also securing the data as it moves between the chatbot and the user, and between the chatbot and any backend services.
  3. Enforce Multi-Factor Authentication (MFA): MFA is vital for verifying user identities and restricting access. For AI chatbots, this might mean integrating MFA into the chat interface itself, ensuring that only authenticated users can interact with the chatbot, especially for sensitive or personal topics.
  4. Regular Patch Management Process: Regularly update software and operating system components to safeguard against the latest vulnerabilities. This includes not only the operating system and web servers but also the libraries and frameworks used in the chatbot’s development, especially those related to natural language processing and AI.
  5. Apply Security Headers: Implement restrictive policies such as Content Security Policy (CSP) and Feature-Policy. These are crucial for AI chatbots as they reduce the risk of content injection attacks and ensure that external scripts do not alter the intended behaviour of the chatbot.
  6. Regular Security Audits & Tests: Conducting regular security audits and penetration testing is essential for identifying vulnerabilities. For AI chatbots, this should include testing for susceptibility to specific AI-related attacks, such as attempts to trick the AI into revealing sensitive data, providing incorrect information, or behaving in unintended ways.  

In conclusion 

Like any technological advance, the integration of AI into the workplaces brings with it exciting possibilities, but it also brings the potential for new threats and unique security challenges, alongside more traditional security concerns. 

As AI technology evolves, so does the need for security, and organisations need to remain vigilant, ensuring they have the security measures and processes in place to protect themselves, and their users, against both common, known attack methods and the more sophisticated challenges that are likely to arise. 

Helpful references & resources
 

Looking for more than just a test provider?

Get in touch with our team and find out how our tailored services can provide you with the cybersecurity confidence you need.