Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer

Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer

New Immersive World LLM jailbreak lets anyone create malware with GenAI. Discover how Cato Networks researchers tricked ChatGPT, Copilot, and DeepSeek into coding infostealers – In this case, a Chrome infostealer.

Cato Networks, a Secure Access Service Edge (SASE) solution provider, has released its 2025 Cato CTRL Threat Report, revealing an important development. According to researchers, they have successfully designed a technique that allows individuals with no prior coding experience to create malware using readily available generative AI (GenAI) tools.

LLM Jailbreak Created Functioning Chrome Infostealer via “Immersive World”

The core of their research is a novel Large Language Model (LLM) jailbreak technique, dubbed “Immersive World,” developed by a Cato CTRL threat intelligence researcher. The technique involves creating a detailed fictional narrative where GenAI tools, including popular platforms like DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT, are assigned specific roles and tasks within a controlled environment.

By effectively bypassing the default security controls of these AI tools through this narrative manipulation, the researcher was able to force them into generating functional malware capable of stealing login credentials from Google Chrome.

“A Cato CTRL threat intelligence researcher with no prior malware coding experience successfully jailbreak multiple LLMs, including DeepSeek-R1, DeepSeek-V3, Microsoft Copilot, and OpenAI’s ChatGPT to create a fully functional Google Chrome infostealer for Chrome 133.”

Cato Networks

This technique (Immersive World) indicates a critical flaw existing in the safeguards implemented by GenAI providers as it easily bypasses the intended restrictions designed to prevent misuse. As Vitaly Simonovich, a threat intelligence researcher at Cato Networks, stated, “We believe the rise of the zero-knowledge threat actor poses a high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools.”

The report’s findings have prompted Cato Networks to reach out to the providers of the affected GenAI tools. While Microsoft and OpenAI acknowledged receipt of the information, DeepSeek remained unresponsive.

Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer
Screenshots show researchers interacting with DeepSeek, which ultimately generated a functional Chrome infostealer (Images via Cato Networks)

Google Declined to Review Malware Code

According to researchers, Google, despite being offered the opportunity to review the generated malware code, declined to do so. This lack of a unified response from major tech companies highlights the complexities surrounding the addressing of threats in advanced AI tools.

LLMs and Jailbreaking

Although LLMs are relatively new, jailbreaking has evolved alongside them. A report published in February 2024 revealed that DeepSeek-R1 LLM failed to prevent over half of the jailbreak attacks in a security analysis. Similarly, a report from SlashNext in September 2023 showed how researchers successfully jailbroke multiple AI chatbots to generate phishing emails.

Protection

The 2025 Cato CTRL Threat Report, the inaugural annual publication from Cato Networks’ threat intelligence team, emphasizes the critical need for proactive and comprehensive AI security strategies. These include preventing LLM jailbreaking by building a reliable dataset with expected prompts and responses and testing AI systems thoroughly.

Regular AI red teaming is also important, as it helps find vulnerabilities and other security issues. Additionally, clear disclaimers and terms of use should be in place to inform users they are interacting with an AI and define acceptable behaviours to prevent misuse.

Deeba is a veteran cybersecurity reporter at Hackread.com with over a decade of experience covering cybercrime, vulnerabilities, and security events. Her expertise and in-depth analysis make her a key contributor to the platform’s trusted coverage.
Total
0
Shares
Leave a Reply

Your email address will not be published. Required fields are marked *

Related Posts