Feb 09, 2026 •
Vulnerability
|
#LLM
#Private Keys
#Prompt Injection
An LLM-based AI agent, Owockibot, was compromised to disclose its private hot wallet keys, leading to a $2,100 financial loss and its operational shutdown. This...
Read Analysis →
Feb 09, 2026 •
Jailbreak
|
#GRP-Obliteration
#LLM Safety Alignment
#Prompt Injection
The article details "GRP-Obliteration," a novel technique leveraging Group Relative Policy Optimization (GRPO) to dismantle the safety alignment of La...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#WebSocket API
OpenClaw, a rapidly adopted AI assistant with broad system access, presents significant security risks due to widespread deployment of internet-exposed instance...
Read Analysis →
Feb 06, 2026 •
Vulnerability
|
#Ollama
#Unauthenticated LLM Endpoints
#Prompt Injection
The proliferation of unmanaged "Shadow AI" deployments, such as unauthenticated Ollama server instances, creates critical security blind spots within ...
Read Analysis →
Feb 05, 2026 •
Vulnerability
|
#Prompt Injection
#Agentic AI
#Data Exfiltration
Radware introduced its LLM Firewall and Agentic AI Protection Solution to secure generative AI and AI agents against emerging threats. These solutions aim to mi...
Read Analysis →
Feb 03, 2026 •
Vulnerability
|
#Remote Code Execution
#Command Injection
#Prompt Injection
The OpenClaw AI bot farm is plagued by critical security flaws, including a one-click remote code execution vulnerability and two command injection vulnerabilit...
Read Analysis →
Feb 01, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#LLM Agents
OpenClaw (Moltbot), an LLM agent system, poses a severe security risk due to its design, which grants unfettered access to user systems and data, bypassing oper...
Read Analysis →
Feb 01, 2026 •
Vulnerability
|
#Prompt Injection
#LLM Security
#Unfettered System Access
OpenClaw (Moltbot), an LLM agent system, presents critical security risks due to its design granting unfettered access to user systems, including sensitive data...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Prompt Injection
#Data Exfiltration
#AI Agents
The article highlights advanced threats to AI agents, including "Shadow Escape," a zero-click exploit targeting Model Context Protocol (MCP) based sys...
Read Analysis →
Jan 28, 2026 •
Malware
|
#OpenClaw
#Prompt Injection
#Data Exfiltration
Personal AI agents like OpenClaw are severely vulnerable to malicious third-party "skills" that can leverage their high-level privileges for harmful a...
Read Analysis →
Jan 23, 2026 •
Vulnerability
|
#LLM Jailbreaks
#Prompt Injection
#Adaptive Attacks
Current AI defenses for large language models are largely ineffective against adaptive attacks, with research demonstrating bypass rates over 90% for techniques...
Read Analysis →
Jan 18, 2026 •
Jailbreak
|
#Deepfake
#Prompt Injection
#Generative AI
Grok AI was exploited by users who bypassed its content moderation safeguards through prompt-based manipulation, enabling the generation of non-consensual deepf...
Read Analysis →
Jan 15, 2026 •
Vulnerability
|
#Reprompt Attack
#Microsoft Copilot
#Prompt Injection
Researchers unveiled a "Reprompt" attack method enabling single-click data exfiltration from Microsoft Copilot by exploiting the "q" URL par...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#CVE-2025-12420
#Prompt Injection
#ServiceNow AI Platform
A critical vulnerability, CVE-2025-12420 (CVSS 9.3), was patched in ServiceNow's AI platform, allowing unauthenticated user impersonation and unauthorized ...
Read Analysis →
Dec 29, 2025 •
Vulnerability
|
#Prompt Injection
#Model Poisoning
#AI Supply Chain
Traditional security frameworks fail to address AI-specific attack vectors such as prompt injection, model poisoning, and AI supply chain compromises, creating ...
Read Analysis →
Dec 29, 2025 •
Vulnerability
|
#Prompt Injection
#LLM
#CVE-2022-30190
Prompt injection attacks manipulate AI systems, particularly Large Language Models (LLMs), by overriding their intended instructions through malicious input, le...
Read Analysis →
Dec 29, 2025 •
Vulnerability
|
#Prompt Injection
#AI Supply Chain Poisoning
#Remote Code Execution
Prompt injection is a prevalent AI-specific vulnerability where Large Language Models (LLMs) misinterpret external data as executable instructions, bypassing in...
Read Analysis →
Dec 22, 2025 •
Vulnerability
|
#Prompt Injection
#AI Agent
#ChatGPT Atlas
Prompt injection attacks pose a fundamental and persistent security challenge for AI agents operating within browsers like OpenAI's ChatGPT Atlas, enabling...
Read Analysis →
Dec 22, 2025 •
Vulnerability
|
#Prompt Injection
#ChatGPT Atlas
#OpenAI
OpenAI is continuously improving the security posture of its ChatGPT Atlas platform. These efforts are primarily focused on hardening the system to prevent and ...
Read Analysis →
Dec 15, 2025 •
Vulnerability
|
#Prompt Injection
#Unauthenticated Code Injection
#AWS Cloud Infrastructure
Prompt injection attacks against AI coding tools like Amazon Q were demonstrated to direct the tool to wipe local files and potentially disrupt AWS cloud infras...
Read Analysis →
Dec 11, 2025 •
Vulnerability
|
#Prompt Injection
#Microsoft Copilot Studio
#LLM Vulnerability
Microsoft Copilot Studio's AI agents are susceptible to prompt injection, a vulnerability that allows users to bypass configured security mandates. This in...
Read Analysis →
Dec 11, 2025 •
Vulnerability
|
#Prompt Injection
#AI Agents
#Data Exfiltration
AI agents created using Microsoft Copilot Studio are vulnerable to prompt injection, allowing attackers to bypass internal security mandates. This exploit facil...
Read Analysis →
Dec 11, 2025 •
Vulnerability
|
#Prompt Injection
#LLM Jailbreak
#OWASP LLM Top 10
The article details how Qualys TotalAI addresses critical security risks in Large Language Models (LLMs), identifying widespread susceptibility to prompt inject...
Read Analysis →
Dec 10, 2025 •
Vulnerability
|
#Prompt Injection
#Data Poisoning
#AI Models
The article outlines a broad spectrum of risks to artificial intelligence (AI) systems, including data poisoning, prompt injection, and model theft, which colle...
Read Analysis →
Dec 07, 2025 •
Vulnerability
|
#Prompt Injection
#OWASP Top Ten LLM
#Training Data Poisoning
The article details the OWASP Top Ten LLM Security Risks, outlining specific vulnerabilities such as Prompt Injection (LLM01), Training Data Poisoning (LLM03), ...
Read Analysis →
Dec 06, 2025 •
Vulnerability
|
#Prompt Injection
#Remote Code Execution
#AI IDEs
Security researcher Ari Marzouk disclosed "IDEsaster," a collection of over 30 vulnerabilities, with 24 assigned CVEs, affecting various AI-powered In...
Read Analysis →
Nov 06, 2025 •
Vulnerability
|
#Prompt Injection
#LLM Backdoor
#AI Supply Chain Security
Systemic vulnerabilities are prevalent across AI models and infrastructure, encompassing exploitable flaws in AI-generated code and the ability to embed backdoo...
Read Analysis →
Oct 28, 2025 •
Vulnerability
|
#Prompt Injection
#Agentic AI
#LLM
Prompt injection vulnerabilities enable attackers to embed malicious commands within seemingly innocuous content, leading AI browsers and chatbots to perform un...
Read Analysis →
Oct 21, 2025 •
Vulnerability
|
#Prompt Injection
#LLM
#Agentic Browsers
The article identifies indirect prompt injection vulnerabilities in AI-powered agentic browsers, specifically demonstrating attacks against Perplexity Comet via...
Read Analysis →
Oct 12, 2025 •
Vulnerability
|
#Prompt Injection
#Training Data Poisoning
#OWASP Top 10 for LLM Applications
The article highlights critical security risks in Large Language Model (LLM) deployments, emphasizing prompt injection as a key attack vector where malicious in...
Read Analysis →
Oct 02, 2025 •
Vulnerability
|
#Remote Code Execution
#Prompt Injection
#Retrieval-Augmented Generation
The NVIDIA AI Red Team highlights critical vulnerabilities in LLM-based applications, most notably Remote Code Execution (RCE) via prompt injection when LLM-gen...
Read Analysis →
Sep 25, 2025 •
Data Leak
|
#Salesforce AI
#Prompt Injection
#Data Leakage
Salesforce AI agents are reportedly being manipulated to disclose sensitive information, indicating a critical vulnerability in their design or implementation. ...
Read Analysis →
Sep 25, 2025 •
Vulnerability
|
#ForcedLeak
#Prompt Injection
#CRM Data Exfiltration
Salesforce Agentforce was susceptible to a critical indirect prompt injection vulnerability, codenamed ForcedLeak (CVSS 9.4). This flaw allowed attackers to exf...
Read Analysis →
Sep 19, 2025 •
Vulnerability
|
#Prompt Injection
#LLM Security
#Cloud AI
The article details critical security challenges associated with cloud-hosted Large Language Models (LLMs), including prompt injection, adversarial exploits, mo...
Read Analysis →
Sep 17, 2025 •
Vulnerability
|
#LLM
#Prompt Injection
#Adversarial AI
The Kaspersky article forecasts various technical methods and attack vectors projected to compromise Large Language Models (LLMs) by 2025. It likely details eme...
Read Analysis →
Sep 12, 2025 •
Vulnerability
|
#AI Security
#AI Agents
#Prompt Injection
This article addresses the critical security challenges inherent in deploying AI agents, highlighting the potential for vulnerabilities that could compromise bu...
Read Analysis →
Sep 11, 2025 •
Social Engineering
|
#AI Browsers
#Prompt Injection
#Social Engineering
Deeply integrated AI browsers pose significant security risks due to their susceptibility to social engineering and prompt injection attacks targeting the AI ag...
Read Analysis →
Aug 26, 2025 •
Vulnerability
|
#Prompt Injection
#Llama Guard
#OWASP Top 10 LLM
Cloudflare's Firewall for AI now integrates Llama Guard to provide real-time unsafe content moderation, detecting and blocking malicious prompts at the net...
Read Analysis →
Aug 20, 2025 •
Vulnerability
|
#Prompt Injection
#AI Agents
#Supply-chain vulnerabilities
AI agents are highly susceptible to prompt injection attacks, allowing adversaries to manipulate their behavior to execute unauthorized system commands, steal c...
Read Analysis →
Aug 20, 2025 •
Vulnerability
|
#XSS
#Prompt Injection
#GPT-4
Lenovo's GPT-4-powered chatbot "Lena" was vulnerable to cross-site scripting (XSS) attacks due to improper input and output sanitization, initiat...
Read Analysis →
Aug 18, 2025 •
Vulnerability
|
#Prompt Injection
#Microsoft Copilot Studio
#Salesforce
Security researchers demonstrated a prompt injection attack against an AI agent built on Microsoft Copilot Studio, enabling it to reveal private knowledge and c...
Read Analysis →
Aug 17, 2025 •
Vulnerability
|
#Prompt Injection
#Remote Code Execution
#ASCII Smuggling
The article highlights critical security vulnerabilities in LLMs integrated with coding agents, primarily exploiting advanced prompt injection techniques. Attac...
Read Analysis →
Aug 17, 2025 •
Vulnerability
|
#Prompt Injection
#Remote Code Execution
#ASCII Smuggling
The article highlights novel prompt injection techniques, such as ASCII Smuggling and hidden instructions in public code repositories, designed to be impercepti...
Read Analysis →
Aug 15, 2025 •
Vulnerability
|
#Prompt Injection
#Data Leakage
#OWASP Top 10 for LLM Applications
The article details critical security risks inherent in Large Language Models (LLMs), prominently featuring prompt injection as an exploit where attackers manip...
Read Analysis →
Aug 11, 2025 •
Vulnerability
|
#AI Agents
#Prompt Injection
#Data Exfiltration
Zenity Labs research details how widely deployed AI agents are highly susceptible to "hijacking attacks" via methods such as email-based prompt inject...
Read Analysis →
Aug 09, 2025 •
Vulnerability
|
#GPT-5 Jailbreak
#Prompt Injection
#Zero-Click Attack
Cybersecurity researchers have uncovered a jailbreak technique, combining Echo Chamber and narrative-driven steering, to bypass GPT-5's ethical guardrails ...
Read Analysis →
Aug 06, 2025 •
Jailbreak
|
#Large Language Model
#Prompt Injection
#Data Exfiltration
Enterprise AI assistants have been identified as vulnerable to abuse, potentially enabling unauthorized data theft. This exploitation pathway also allows for th...
Read Analysis →
Jul 31, 2025 •
Vulnerability
|
#Prompt Injection
#Adversarial ML
#LLM Jailbreak
The article highlights the critical need for AI security tools to combat escalating threats like adversarial inputs, prompt injection, and LLM jailbreaks. These...
Read Analysis →
Jul 29, 2025 •
Vulnerability
|
#Prompt Injection
#AWS-2025-015
#Software Supply Chain Attack
The Amazon Q Developer Extension for Visual Studio Code (version 1.84.0) was compromised via a software supply chain attack, embedding a prompt injection that b...
Read Analysis →
Jul 24, 2025 •
Vulnerability
|
#Supply Chain Attack
#Prompt Injection
#Amazon Q
A hacker injected destructive system commands into Amazon's Visual Studio Code extension for Amazon Q via a compromised GitHub repository, distributing it ...
Read Analysis →
Jul 15, 2025 •
Vulnerability
|
#CVE-2025-32711
#Prompt Injection
#Zero-Click
EchoLeak (CVE-2025-32711) is a zero-click AI vulnerability in Microsoft 365 Copilot that exploits invisible prompt injection within contextual data. This allows...
Read Analysis →
Jun 20, 2025 •
Jailbreak
|
#Prompt Injection
#Jailbreak
#Data Leakage
The article highlights critical security risks in AI and LLM deployments, specifically prompt injection and jailbreak attacks, which enable manipulation for una...
Read Analysis →
Jun 12, 2025 •
Jailbreak
|
#TokenBreak
#Prompt Injection
#Tokenization
The TokenBreak attack exploits specific tokenization strategies (BPE or WordPiece) in text classification models by introducing single-character changes, bypass...
Read Analysis →
Jun 12, 2025 •
Vulnerability
|
#OWASP Top 10 for LLM Applications
#Prompt Injection
#Data Leakage
The article highlights critical security gaps in Large Language Model (LLM) applications, detailing common vulnerabilities such as prompt injection, sensitive i...
Read Analysis →
Jun 12, 2025 •
Vulnerability
|
#OWASP Top 10 LLM
#Prompt Injection
#Large Language Models
The article analyzes critical security vulnerabilities in Large Language Model (LLM) applications, aligning with the OWASP Top 10 for LLM Applications. It detai...
Read Analysis →
Jun 05, 2025 •
Vulnerability
|
#Prompt Injection
#LLM
#Azure Prompt Shields
Prompt injection attacks are identified as the top threat to generative AI, enabling adversaries to manipulate Large Language Models (LLMs) to bypass safety mea...
Read Analysis →
May 28, 2025 •
Vulnerability
|
#LLM
#Prompt Injection
#Code Execution
The article outlines key vulnerabilities in AI agents utilizing Large Language Models (LLMs), including the risk of unauthorized code execution, data exfiltrati...
Read Analysis →
May 28, 2025 •
Vulnerability
|
#LLM Security
#Prompt Injection
#Sandboxing
This article analyzes critical vulnerabilities in AI agents, specifically Large Language Models (LLMs), focusing on risks like unauthorized code execution, data...
Read Analysis →
May 14, 2025 •
Vulnerability
|
#LLM Scanner
#Prompt Injection
#Jailbreak Attacks
Qualys has developed an LLM scanner, integrated into its Web Application Scanner, specifically designed to identify and assess vulnerabilities within AI/ML syst...
Read Analysis →