Browsing Tag
LLM
13 posts
EchoGram Flaw Bypasses Guardrails in Major LLMs
HiddenLayer reveals the EchoGram vulnerability, which bypasses safety guardrails on GPT-5.1 and other major LLMs, giving security teams just a 3-month head start.
November 17, 2025
Shadow Escape 0-Click Attack in AI Assistants Puts Trillions of Records at Risk
Operant AI reveals Shadow Escape, a zero-click attack using an MCP flaw in ChatGPT, Gemini, and Claude to secretly steal trillions of records, including SSNs and financial data. Traditional security is blind to this new AI threat.
October 23, 2025
OpenAI’s Guardrails Can Be Bypassed by Simple Prompt Injection Attack
Just weeks after its release, OpenAI’s Guardrails system was quickly bypassed by researchers. Read how simple prompt injection attacks fooled the system’s AI judges and exposed an ongoing security concern for OpenAI.
October 13, 2025
Microsoft Flags AI Phishing Attack Hiding in SVG Files
Microsoft Threat Intelligence detected a new AI-powered phishing campaign using LLMs to hide malicious code inside SVG files disguised as business dashboards.
September 30, 2025
LegalPwn Attack Tricks GenAI Tools Into Misclassifying Malware as Safe Code
A new security flaw, LegalPwn, exploits a weakness in generative AI tools like GitHub Copilot and ChatGPT, where malicious code is disguised as legal disclaimers. Learn why human oversight is now more critical than ever for AI security.
August 4, 2025
Malicious AI Models Are Behind a New Wave of Cybercrime, Cisco Talos Warns
Cybercriminals are using malicious AI models to write malware and phishing scams; Cisco Talos warns of rising threats from uncensored and custom AI tools.
June 28, 2025
WormGPT Makes a Comeback Using Jailbroken Grok and Mixtral Models
Cato CTRL uncovers new WormGPT variants on Telegram powered by jailbroken Grok and Mixtral. Learn how cybercriminals jailbreak top LLMs for uncensored, illegal activities in this latest threat research.
June 18, 2025
New “Slopsquatting” Threat Emerges from AI-Generated Code Hallucinations
AI code tools often hallucinate fake packages, creating a new threat called slopsquatting that attackers can exploit in…
April 15, 2025
Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer
The new Immersive World LLM jailbreak lets anyone create malware with GenAI. Discover how Cato Networks researchers tricked ChatGPT, Copilot, and DeepSeek into coding infostealers, in this case a Chrome infostealer.
March 19, 2025
Hackers Monetize LLMjacking, Selling Stolen AI Access for $30 a Month
LLMjacking attacks target DeepSeek, racking up huge cloud costs. Sysdig reveals a black market for LLM access has…
February 8, 2025