Feb 05, 2026 •
Vulnerability
|
#Prompt Injection
#Agentic AI
#Data Exfiltration
Radware introduced its LLM Firewall and Agentic AI Protection Solution to secure generative AI and AI agents against emerging threats. These solutions aim to mi...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Prompt Injection
#Data Exfiltration
#AI Agents
The article highlights advanced threats to AI agents, including "Shadow Escape," a zero-click exploit targeting Model Context Protocol (MCP) based sys...
Read Analysis →
Jan 28, 2026 •
Malware
|
#OpenClaw
#Prompt Injection
#Data Exfiltration
Personal AI agents like OpenClaw are severely vulnerable to malicious third-party "skills" that can leverage their high-level privileges for harmful a...
Read Analysis →
Dec 11, 2025 •
Vulnerability
|
#Prompt Injection
#AI Agents
#Data Exfiltration
AI agents created using Microsoft Copilot Studio are vulnerable to prompt injection, allowing attackers to bypass internal security mandates. This exploit facil...
Read Analysis →
Nov 18, 2025 •
Vulnerability
|
#AI Orchestration
#Claude Code
#Data Exfiltration
Anthropic's Threat Intelligence team disrupted the first known AI-orchestrated cyber espionage campaign, where a state-sponsored Chinese threat actor utili...
Read Analysis →
Nov 17, 2025 •
Jailbreak
|
#AI Jailbreak
#State-sponsored APT
#Data Exfiltration
Chinese state-sponsored actors exploited Anthropic's Claude AI by jailbreaking its safeguards, enabling the autonomous execution of cyberattacks with minim...
Read Analysis →
Nov 05, 2025 •
Vulnerability
|
#Indirect Prompt Injection
#ChatGPT
#Data Exfiltration
Cybersecurity researchers have disclosed seven new vulnerabilities in OpenAI's GPT-4o and GPT-5 models, enabling indirect prompt injection attacks. These e...
Read Analysis →
Nov 04, 2025 •
Data Leak
|
#Indirect Prompt Injection
#Claude AI
#Data Exfiltration
A novel indirect prompt injection attack allows threat actors to compromise Anthropic's Claude AI Code Interpreter, leveraging its network features to exfi...
Read Analysis →
Oct 31, 2025 •
Vulnerability
|
#Indirect Prompt Injection
#Code Interpreter
#Data Exfiltration
A vulnerability in Anthropic's Claude AI allows attackers to leverage indirect prompt injection against its code interpreter feature. This exploit enables ...
Read Analysis →
Oct 09, 2025 •
Data Leak
|
#ChatGPT
#Data Exfiltration
#Shadow AI
A significant 77% of employees are reportedly leaking sensitive corporate data by pasting it into generative AI tools like ChatGPT, primarily through personal, ...
Read Analysis →
Aug 11, 2025 •
Vulnerability
|
#AI Agents
#Prompt Injection
#Data Exfiltration
Zenity Labs research details how widely deployed AI agents are highly susceptible to "hijacking attacks" via methods such as email-based prompt inject...
Read Analysis →
Aug 06, 2025 •
Jailbreak
|
#Large Language Model
#Prompt Injection
#Data Exfiltration
Enterprise AI assistants have been identified as vulnerable to abuse, potentially enabling unauthorized data theft. This exploitation pathway also allows for th...
Read Analysis →
May 13, 2025 •
Vulnerability
|
#Indirect Prompt Injection
#Multi-modal AI
#Data Exfiltration
This article details how indirect prompt injection exploits multi-modal AI agents by embedding malicious instructions within innocuous images or documents, lead...
Read Analysis →
May 13, 2025 •
Data Leak
|
#Indirect Prompt Injection
#Multi-modal AI Agents
#Data Exfiltration
Multi-modal AI agents are susceptible to indirect prompt injection, where hidden instructions in external sources like images or documents can trigger sensitive...
Read Analysis →