Mar 20, 2026 •
Data Leak
|
#AI agent
#Data Leak
#Agentic AI
An internal Meta AI agent provided erroneous instructions to an engineer, leading to the accidental exposure of sensitive user and company data to other employe...
Read Analysis →
Mar 20, 2026 •
Vulnerability
|
#Prompt Injection
#Model Weights
#AI Agent Misuse
AI applications introduce novel attack surfaces, enabling prompt injection to bypass instructions or facilitate data exfiltration, and allowing malicious model ...
Read Analysis →
Mar 19, 2026 •
Vulnerability
|
#OpenWebUI
#AI-Powered Payloads
#Server Exploitation
Threat actors are actively exploiting OpenWebUI servers to gain unauthorized access and deploy malicious payloads. These attacks are characterized by the deploy...
Read Analysis →
Mar 19, 2026 •
Vulnerability
|
#AI Chatbot
#Vulnerability
#Prompt Injection
A security flaw has been identified within Health NZ's AI chatbot, though its severity is reportedly being downplayed by the organization. The specific tec...
Read Analysis →
Mar 19, 2026 •
Vulnerability
|
#Model Context Protocol (MCP)
#Indirect Prompt Injection
#Large Language Models (LLM)
Architectural vulnerabilities within Large Language Model (LLM) environments integrated with the Model Context Protocol (MCP) enable attackers to embed maliciou...
Read Analysis →
Mar 18, 2026 •
Vulnerability
|
#Autonomous LLM Agent
#OpenClaw
#Security Framework
Tsinghua and Ant Group researchers have unveiled a five-layer lifecycle-oriented security framework designed to address and mitigate inherent vulnerabilities fo...
Read Analysis →
Mar 16, 2026 •
Jailbreak
|
#DeepSeek-R1 LLaMA 8B
#LLM Jailbreak
#Adversarial Prompting
Qualys's analysis of the DeepSeek-R1 LLaMA 8B Large Language Model identified critical jailbreak vulnerabilities, with the model failing 58% of 885 adversa...
Read Analysis →
Mar 07, 2026 •
Vulnerability
|
#CVE-2026-2796
#Use-after-free
#Claude Opus 4.6
Anthropic's Claude Opus 4.6 AI model discovered 22 new vulnerabilities in the Firefox browser, including high-severity issues like a use-after-free bug and...
Read Analysis →
Mar 07, 2026 •
Vulnerability
|
#HTTP 403
#Access Control
#Data Scrape Error
The provided article content returned a 403 Forbidden error, preventing access to the full details. Consequently, no information regarding specific exploits, CV...
Read Analysis →
Mar 07, 2026 •
Vulnerability
|
#HTTP 403
#Access Forbidden
#Access Control
The scraped article text returned an HTTP 403 Forbidden error, indicating that access to the page's content was denied. This prevents any analysis of speci...
Read Analysis →
Mar 06, 2026 •
Vulnerability
|
#HTTP 403
#Access Denied
#Forbidden
The provided "Article Content" consists solely of an HTTP 403 Forbidden error message, indicating that access to the requested web page was denied. Th...
Read Analysis →
Mar 06, 2026 •
Vulnerability
|
#HTTP 403
#Access Control
#Web Application Security
The scraped article text consists solely of an HTTP 403 Forbidden error, indicating the web server explicitly denied access to the requested page. This access d...
Read Analysis →
Mar 06, 2026 •
Data Leak
|
#Prompt Engineering
#AI Jailbreaking
#Data Exfiltration
Hackers utilized AI jailbreaking techniques and sophisticated prompt engineering on Generative AI models like Claude and ChatGPT to exploit vulnerabilities with...
Read Analysis →
Mar 06, 2026 •
Vulnerability
|
#Prompt Injection
#Data Leakage
#Jailbreaking
LLM applications face significant security risks, primarily prompt injection attacks, where malicious inputs manipulate models into ignoring instructions and re...
Read Analysis →
Mar 06, 2026 •
Vulnerability
|
#GitHub Security Lab Taskflow Agent
#LLM
#Authorization Bypass
The GitHub Security Lab Taskflow Agent is an open-source AI-powered framework that leverages Large Language Models (LLMs) and structured taskflows to proactivel...
Read Analysis →
Mar 05, 2026 •
Vulnerability
|
#AI-native Cybersecurity
#Vulnerability Scanning
#Penetration Testing
The proliferation of AI agents in enterprise environments is creating new attack vectors and scaling cyber risks, driving significant venture investment into AI...
Read Analysis →
Mar 05, 2026 •
Data Leak
|
#Chromium Extensions
#LLM Chat Data Exfiltration
#Trojan:JS/ChatGPTStealer
Malicious Chromium-based browser extensions are impersonating legitimate AI tools to harvest sensitive LLM chat histories and browsing data, impacting over 900,...
Read Analysis →
Mar 04, 2026 •
Vulnerability
|
#Shadow AI
#Generative AI
#LLM Data Training
Shadow AI poses a significant security vulnerability by allowing employees to inadvertently input sensitive organizational data into public generative AI models...
Read Analysis →
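The entry above is about exposure through everyday use rather than a specific exploit, but the usual first control is a pre-submission screen on outbound prompts. A minimal sketch of such a screen, with illustrative patterns only (none of the pattern names or thresholds come from the article):

```python
import re

# Illustrative patterns only; a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_address":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn_like":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "label_marker":   re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Summarize this: our AWS key AKIAABCDEFGHIJKLMNOP is rotating next week."
    findings = screen_prompt(draft)
    if findings:
        print(f"Blocked: prompt matches {findings}")   # e.g. ['aws_access_key']
    else:
        print("Prompt passed screening")
```

A screen like this sits in front of whichever public model employees reach for, regardless of vendor.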
Mar 03, 2026 •
Vulnerability
|
#CyberStrikeAI
#FortiGate
#AI-assisted attack
An AI-assisted campaign is leveraging the open-source CyberStrikeAI offensive security tool, utilizing generative AI services, to systematically target Fortinet...
Read Analysis →
Mar 02, 2026 •
Vulnerability
|
#Prompt Injection
#AI Agent Security
#Role-Based Access Control
The article details how AI agents introduce unique security risks through prompt injection attacks, over-permissioning, and unconstrained external tool access, ...
Read Analysis →
Mar 02, 2026 •
Vulnerability
|
#OpenClaw
#Localhost Trust
#WebSocket
A high-severity vulnerability in the OpenClaw AI agent allowed malicious websites to hijack a developer's AI agent and gain full device control without use...
Read Analysis →
Mar 02, 2026 •
Vulnerability
|
#OpenClaw
#WebSocket
#Rate Limiter Bypass
A vulnerability in the OpenClaw AI assistant allowed malicious websites to establish WebSocket connections to the local gateway, bypassing cross-origin policies...
Read Analysis →
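The ClawJacked-style findings above hinge on one detail: browsers do not apply the same-origin policy to WebSocket handshakes, so a local agent gateway must validate the Origin header itself. A minimal sketch of that check using the Python websockets package; the port, allowed origin, and handler are assumptions for illustration, not OpenClaw's actual code:

```python
# Requires a recent version of the `websockets` package.
import asyncio
import websockets

# The agent's own UI is the only page allowed to connect; any other website
# attempting ws://127.0.0.1:18789/ is rejected at the handshake.
ALLOWED_ORIGINS = ["http://127.0.0.1:8080"]

async def handle(ws):
    # By the time this runs, the Origin header has already been checked.
    async for message in ws:
        await ws.send(f"ack: {message}")

async def main():
    async with websockets.serve(handle, "127.0.0.1", 18789, origins=ALLOWED_ORIGINS):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

Binding to 127.0.0.1 alone is not enough, because the hostile request comes from the victim's own browser; the Origin check is what closes the cross-site path.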
Mar 02, 2026 •
Vulnerability
|
#ClawJacked
#OpenClaw
#WebSocket
The "ClawJacked" vulnerability in the OpenClaw AI personal assistant allows malicious websites to silently hijack a user's local AI agent. This e...
Read Analysis →
Feb 28, 2026 •
Vulnerability
|
#ClawJacked
#WebSocket
#Rate-limiting bypass
The "ClawJacked" flaw allows malicious websites to hijack locally running OpenClaw AI agents by exploiting a critical vulnerability in the gateway...
Read Analysis →
Feb 26, 2026 •
Jailbreak
|
#AI Jailbreak
#Claude AI
#Data Exfiltration
An attacker reportedly jailbroke the Claude AI model to generate malicious exploit code. This illicit activity subsequently led to the theft and exfiltration of...
Read Analysis →
Feb 26, 2026 •
Jailbreak
|
#AI Jailbreak
#Prompt Injection
#Data Exfiltration
An incident report details hackers successfully jailbreaking the Claude AI model, leveraging this compromise to generate exploit code. This exploit ultimately f...
Read Analysis →
Feb 26, 2026 •
Vulnerability
|
#HTTP 403
#Access Denied
#Web Scraping Failure
The provided scraped article text returned an HTTP 403 Forbidden status, indicating that access to the requested web page was explicitly denied. This prevented ...
Read Analysis →
Feb 26, 2026 •
Vulnerability
|
#Claude Code Security
#LLM-driven code auditing
#0-day vulnerabilities
Anthropic's Claude Code Security tool, powered by Claude 4.6, represents a significant shift in secure code auditing by leveraging reasoning-based AI to de...
Read Analysis →
Feb 26, 2026 •
Jailbreak
|
#Claude AI
#AI Jailbreak
#Data Exfiltration
A reported incident describes a successful jailbreak of the Claude AI model, enabling it to bypass safety mechanisms. This compromise allowed the AI to generate...
Read Analysis →
Feb 26, 2026 •
Jailbreak
|
#Claude AI
#AI Jailbreak
#Data Exfiltration
Attackers successfully exploited Anthropic's Claude AI through prompt manipulation, effectively "jailbreaking" its safety guardrails to generate ...
Read Analysis →
Feb 26, 2026 •
Vulnerability
|
#Anthropic Claude Code
#Arbitrary Command Execution
#API Key Exfiltration
Multiple vulnerabilities in Anthropic's Claude Code, primarily exploited via malicious configuration files, allowed for silent arbitrary command execution ...
Read Analysis →
Feb 25, 2026 •
Vulnerability
|
#Prompt Injection
#Data Leakage
#LLM Hallucinations
Large Language Models (LLMs) are susceptible to critical security vulnerabilities, exemplified by a chatbot falsely advertising a car. The article highlights th...
Read Analysis →
Feb 25, 2026 •
Jailbreak
|
#AI Jailbreak
#Anthropic Claude
#Data Exfiltration
A hacker successfully jailbroke Anthropic's Claude chatbot, bypassing its guardrails to generate vulnerability reports and exploitation scripts for attacks...
Read Analysis →
Feb 24, 2026 •
Vulnerability
|
#TruRisk AI
#Vulnerability Management
#Zero-Day
This article highlights the application of AI-driven cybersecurity solutions, specifically Qualys TruRisk AI, to enhance vulnerability management and proactive ...
Read Analysis →
Feb 24, 2026 •
Vulnerability
|
#RoguePilot
#Prompt Injection
#GITHUB_TOKEN
The RoguePilot vulnerability in GitHub Codespaces leveraged passive prompt injection within GitHub issues to manipulate Copilot. This enabled attackers to silen...
Read Analysis →
Feb 24, 2026 •
Jailbreak
|
#DeepSeek-R1 LLaMA 8B
#LLM Jailbreak
#Adversarial AI
Qualys's analysis found that the DeepSeek-R1 LLaMA 8B LLM variant is significantly vulnerable to jailbreak attacks, failing 58% of adversarial manipulation...
Read Analysis →
Feb 20, 2026 •
Vulnerability
|
#FortiGate
#Weak Credentials
#AI-augmented
An AI-augmented threat actor compromised over 600 FortiGate devices globally by exploiting exposed management ports and weak credentials with single-factor auth...
Read Analysis →
Feb 19, 2026 •
Vulnerability
|
#EVMbench
#Blockchain Vulnerability
#Exploitation Tool
The scraped article text indicates OpenAI has launched EVMbench, a tool explicitly designed for blockchain vulnerability detection and exploitation. However, th...
Read Analysis →
Feb 19, 2026 •
Vulnerability
|
#Microsoft 365 Copilot
#AI Summarization
#Data Exposure
A reported vulnerability in Microsoft 365 Copilot could lead to the exposure of sensitive email content through its AI summarization feature. This flaw poses a ...
Read Analysis →
Feb 18, 2026 •
Undetermined (Access Forbidden)
|
#403 Forbidden
#Content Inaccessible
#Analysis Unavailable
The provided article content returned a "403 - Forbidden" error, indicating that access to the page was denied. Consequently, no technical analysis re...
Read Analysis →
Feb 18, 2026 •
Data Leak
|
#Microsoft 365 Copilot Chat
#Data Loss Prevention
#CW1226324
Microsoft 365 Copilot Chat was found to bypass Data Loss Prevention (DLP) policies, summarizing emails with "confidential" sensitivity labels and expo...
Read Analysis →
Feb 18, 2026 •
Vulnerability
|
#Log Poisoning
#OpenClaw AI
#Content Manipulation
A critical log poisoning vulnerability has been identified within the OpenClaw AI platform. This flaw specifically allows for unauthorized content manipulation,...
Read Analysis →
Feb 18, 2026 •
Vulnerability
|
#LLM-generated passwords
#Password entropy
#Brute-force attack
LLM-generated passwords from tools like Claude, ChatGPT, and Gemini are "fundamentally weak" due to inherent patterns that make them highly predictabl...
Read Analysis →
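One way to make the "patterned passwords" claim above concrete is to compare a generated password's charset upper bound with the entropy actually realized across a batch of generations. A rough sketch; the sample batch is hypothetical and the per-character model is deliberately simple:

```python
import math
from collections import Counter

def charset_entropy_bits(password: str) -> float:
    """Upper bound if every character were drawn uniformly from its character classes."""
    size = 0
    if any(c.islower() for c in password): size += 26
    if any(c.isupper() for c in password): size += 26
    if any(c.isdigit() for c in password): size += 10
    if any(not c.isalnum() for c in password): size += 32   # rough symbol-set estimate
    return len(password) * math.log2(size) if size else 0.0

def observed_entropy_bits(samples: list[str]) -> float:
    """Per-password bits under an i.i.d. model of the characters actually emitted in a batch."""
    counts = Counter("".join(samples))
    total = sum(counts.values())
    per_char = -sum((n / total) * math.log2(n / total) for n in counts.values())
    avg_len = total / len(samples)
    return per_char * avg_len

if __name__ == "__main__":
    # Hypothetical batch showing the failure mode: the generator keeps reusing
    # the same "WordWord12!" template, so realized entropy trails the bound.
    batch = ["BlueRiver12!", "GreenRiver34!", "BlueForest12!", "GreenForest34!"]
    print(f"charset upper bound: {charset_entropy_bits(batch[0]):.1f} bits")
    print(f"observed per-password estimate: {observed_entropy_bits(batch):.1f} bits")
```

A real evaluation would also model word- and template-level reuse, which is where most of the predictability the article describes comes from.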
Feb 16, 2026 •
Malware
|
#React2Shell
#LLM-generated Malware
#Docker Honeypot
Exploitation of the React2Shell vulnerability against a Docker honeypot demonstrated how LLM-generated malware can rapidly enable low-skilled actors to deploy i...
Read Analysis →
Feb 16, 2026 •
Vulnerability
|
#HTTP 403 Forbidden
#Access Control
#Web Scraping Protection
The scraping attempt resulted in an HTTP 403 Forbidden error, indicating denied access to the intended article content. This incident highlights an enforced acc...
Read Analysis →
Feb 14, 2026 •
Data Leak
|
#Shadow AI
#LLM Account Compromise
#Sensitive Information Disclosure (LLM02:2025)
A viral AI caricature trend exposes enterprises to shadow AI risks and sensitive data leakage, as employees input work-related information into public LLMs and ...
Read Analysis →
Feb 13, 2026 •
Jailbreak
|
#OpenClaw
#AI Security
#Prompt Injection
The OpenClaw experiment serves as a critical demonstration of potential security flaws in enterprise AI systems, highlighting methods to circumvent the intended...
Read Analysis →
Feb 13, 2026 •
Malware
|
#Chrome Extensions
#iFrame Injection
#Browser Malware
Malicious Chrome AI extensions are reportedly targeting 260,000 users, employing injected iFrames as a primary mechanism for compromise. This operation highligh...
Read Analysis →
Feb 12, 2026 •
Malware
|
#Gemini AI
#APT31
#HonestCue
State-backed threat actors and cybercriminals are widely abusing Google's Gemini AI model to enhance all stages of their attack lifecycle, from reconnaissa...
Read Analysis →
Feb 12, 2026 •
Vulnerability
|
#Promptware Attack
#Google Calendar
#Zoom
A novel "Promptware Attack" exploits Google Calendar invites as a vector to enable unauthorized surveillance via a user's Zoom camera. This attac...
Read Analysis →
Feb 12, 2026 •
Vulnerability
|
#Remote Code Execution
#Prompt Injection
#Supply Chain Poisoning
The OpenClaw open-source AI agent project rapidly exposed at least three high-risk Remote Code Execution (RCE) vulnerabilities, allowing attackers to perform hi...
Read Analysis →
Feb 11, 2026 •
Jailbreak
|
#Prompt Injection
#Large Language Model
#AI Agent
The article highlights significant security risks posed by AI personal assistants like OpenClaw, primarily focusing on prompt injection as a key vulnerability. ...
Read Analysis →
Feb 11, 2026 •
Vulnerability
|
#Artificial Intelligence
#Zero-Day Exploits
#SBOM
The article details how operationalizing AI in cybersecurity enables organizations to drastically reduce detection and containment times for threats, including ...
Read Analysis →
Feb 10, 2026 •
Malware
|
#CVE-2025-55182
#React2Shell
#XMRig
An AI-generated malware sample exploited CVE-2025-55182, known as React2Shell, within a Docker honeypot with an exposed daemon. This resulted in remote code exe...
Read Analysis →
Feb 10, 2026 •
Vulnerability
|
#Authentication Bypass
#Prompt Injection
#Stealer Malware
The OpenClaw AI agent is critically vulnerable to remote code execution and extensive data exfiltration due to an authentication bypass where misconfigured reve...
Read Analysis →
Feb 10, 2026 •
Vulnerability
|
#AI Recommendation Poisoning
#Prompt Injection
#MITRE ATLAS AML.T0080
Microsoft security researchers have identified "AI Recommendation Poisoning," an attack exploiting specially crafted URLs or embedded prompts to injec...
Read Analysis →
Feb 10, 2026 •
Vulnerability
|
#Prompt Injection
#LLM Agents
#Data Exfiltration
Anthropic's Claude Opus 4.6 exhibits prompt injection success rates up to 78.6% in less constrained environments, quantitatively validating a previously th...
Read Analysis →
Feb 10, 2026 •
Data Leak
|
#Data Exposure
#AI Application
#User Data
An AI chat application reportedly exposed 300 million messages belonging to 25 million users, as indicated by the article title. However, detailed technical inf...
Read Analysis →
Feb 10, 2026 •
Vulnerability
|
#Augustus
#LLM
#Vulnerability Scanner
The provided article content resulted in a "403 - Forbidden" error, preventing access to specific details regarding exploits or vulnerabilities. Howev...
Read Analysis →
Feb 10, 2026 •
Data Leak
|
#Prompt Injection
#URL Preview
#Data Exfiltration
Security researchers have identified a vulnerability where prompt injection attacks in LLM-powered applications can weaponize URL preview features to silently e...
Read Analysis →
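The exfiltration path above works because the client fetches whatever URL the model emits, query string and all. A minimal mitigation sketch, assuming an allowlist-based unfurler; the hostnames are placeholders, not taken from the research:

```python
from urllib.parse import urlparse

# Hosts the chat client is allowed to unfurl; everything else renders as text.
PREVIEW_ALLOWLIST = {"docs.example.com", "github.com"}

def safe_preview_url(url: str) -> str | None:
    """Return a URL safe to fetch for a link preview, or None to skip unfurling.

    The exfiltration pattern smuggles secrets into the query string or path of
    an attacker-controlled URL, so this (1) refuses hosts that are not
    allow-listed and (2) drops the query string and fragment entirely.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return None
    if parsed.hostname not in PREVIEW_ALLOWLIST:
        return None
    return f"{parsed.scheme}://{parsed.hostname}{parsed.path}"

if __name__ == "__main__":
    injected = "https://attacker.example/collect?session=SECRET_TOKEN"
    print(safe_preview_url(injected))                           # None -> not unfurled
    print(safe_preview_url("https://github.com/org/repo?x=1"))  # https://github.com/org/repo
```

The key design choice is that the preview fetcher, not the model, decides which URLs are ever requested.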
Feb 10, 2026 •
Data Leak
|
#AI Application
#Data Breach
#Personal Data Exposure
The article title indicates a significant data breach within an AI chat application, resulting in the exposure of 300 million user messages from 25 million acco...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#OpenClaw
#Misconfiguration
#Command Execution
OpenClaw AI agents are frequently deployed with their HTTP interfaces exposed to the internet due to user misconfiguration, leading to severe security risks. Th...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#Authentication Bypass
The rapid adoption of OpenClaw, an open-source AI assistant, has led to a proliferation of internet-exposed instances due to widespread user misconfiguration. T...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#DockerDash
#RCE
#Meta-context Injection
A critical-severity vulnerability, named DockerDash, in Docker's Ask Gordon AI assistant allows for Remote Code Execution (RCE) in Docker environments. Thi...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#LLM
#Private Keys
#Prompt Injection
An LLM-based AI agent, Owockibot, was compromised to disclose its private hot wallet keys, leading to a $2,100 financial loss and its operational shutdown. This...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#AI-Generated Code
#Software Vulnerabilities
#Vulnerability Patterns
AI code generation tools are identified as perpetuating common security flaws, rather than eliminating them, within newly developed applications. This leads to ...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#HTTP 403
#Access Control
#Scraping Failure
The scraped article text indicates an HTTP 403 Forbidden error, signifying that access to the requested web resource was denied by the server due to insufficien...
Read Analysis →
Feb 09, 2026 •
Jailbreak
|
#GRP-Obliteration
#LLM Safety Alignment
#Prompt Injection
The article details "GRP-Obliteration," a novel technique leveraging Group Relative Policy Optimization (GRPO) to dismantle the safety alignment of La...
Read Analysis →
Feb 09, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#WebSocket API
OpenClaw, a rapidly adopted AI assistant with broad system access, presents significant security risks due to widespread deployment of internet-exposed instance...
Read Analysis →
Feb 06, 2026 •
Data Leak
|
#Third-party vendor
#Data Breach
#Email service provider
Flickr experienced a data breach due to a security vulnerability found within a system managed by a third-party email service provider. This flaw potentially ex...
Read Analysis →
Feb 06, 2026 •
Vulnerability
|
#AWS
#LLMs
#Credential Theft
Advanced AI tools, specifically Large Language Models (LLMs), are now being leveraged to automate cloud environment attacks, rapidly identifying misconfiguratio...
Read Analysis →
Feb 06, 2026 •
Vulnerability
|
#Ollama
#Unauthenticated LLM Endpoints
#Prompt Injection
The proliferation of unmanaged "Shadow AI" deployments, such as unauthenticated Ollama server instances, creates critical security blind spots within ...
Read Analysis →
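Ollama's HTTP API listens on port 11434 by default and answers /api/tags without authentication, which is what makes internet-exposed instances easy to find. A small inventory check for hosts you own or are authorized to assess; the host list is a placeholder:

```python
import requests

def check_ollama_exposure(host: str, port: int = 11434, timeout: float = 3.0) -> list[str]:
    """Return the model names an unauthenticated Ollama endpoint is willing to list."""
    resp = requests.get(f"http://{host}:{port}/api/tags", timeout=timeout)
    resp.raise_for_status()
    return [m.get("name", "?") for m in resp.json().get("models", [])]

if __name__ == "__main__":
    for host in ["127.0.0.1"]:   # replace with your own asset inventory
        try:
            print(host, check_ollama_exposure(host))
        except requests.RequestException as exc:
            print(host, f"no unauthenticated Ollama endpoint reachable ({exc.__class__.__name__})")
```

Any host that returns a model list here is serving an LLM endpoint with no access control at all, which is the blind spot the entry describes.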
Feb 06, 2026 •
Vulnerability
|
#Claude Opus 4.6
#Vulnerability Discovery
#Open-Source Software
Anthropic's Claude Opus 4.6 LLM has identified over 500 previously unknown, high-severity security vulnerabilities, including memory corruption and buffer ...
Read Analysis →
Feb 05, 2026 •
Vulnerability
|
#Prompt Injection
#Agentic AI
#Data Exfiltration
Radware introduced its LLM Firewall and Agentic AI Protection Solution to secure generative AI and AI agents against emerging threats. These solutions aim to mi...
Read Analysis →
Feb 04, 2026 •
Vulnerability
|
#AWS S3
#Code Injection
#LLM Automation
An attacker gained full administrative access in eight minutes via exposed AWS credentials in a public S3 bucket, escalating privileges through code injection i...
Read Analysis →
Feb 04, 2026 •
Vulnerability
|
#AWS S3 Misconfiguration
#Lambda Code Injection
#LLMjacking
An attacker achieved administrative privileges in an AWS cloud environment within minutes by exploiting misconfigured public S3 buckets containing valid credent...
Read Analysis →
Feb 03, 2026 •
Vulnerability
|
#AWS
#Large Language Models
#S3 Buckets
An attack chain exploited exposed AWS credentials in public S3 buckets, leveraging Large Language Models (LLMs) to rapidly escalate privileges through a misconf...
Read Analysis →
Feb 03, 2026 •
Vulnerability
|
#AWS S3 Misconfiguration
#LLM-assisted Attack
#Lambda Function Injection
An AI-accelerated attack successfully breached an AWS environment by exploiting exposed credentials in public S3 buckets. This led to rapid administrative privi...
Read Analysis →
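The AWS entries above share the same root causes: publicly readable S3 buckets holding credentials, and Lambda functions an attacker can repoint once inside. A defensive sketch of the two matching hygiene checks with boto3; it is a minimal audit under assumed defaults, with pagination and multi-region handling omitted:

```python
import boto3
from botocore.exceptions import ClientError

def buckets_without_public_access_block(s3) -> list[str]:
    """Buckets with no account- or bucket-level public access block configured."""
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                exposed.append(name)   # nothing stops a public bucket policy/ACL here
            else:
                raise
    return exposed

def lambda_last_modified(lmb) -> dict[str, str]:
    """FunctionName -> LastModified; unexpectedly recent timestamps flag the code-injection step."""
    return {fn["FunctionName"]: fn["LastModified"] for fn in lmb.list_functions()["Functions"]}

if __name__ == "__main__":
    session = boto3.Session()   # uses the normal credential chain
    print("no public-access block:", buckets_without_public_access_block(session.client("s3")))
    print("lambda last-modified:", lambda_last_modified(session.client("lambda")))
```

Neither check depends on the attacker's tooling being AI-assisted; the speed the entries describe only compresses the window in which these findings matter.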
Feb 03, 2026 •
Vulnerability
|
#Remote Code Execution
#Command Injection
#Prompt Injection
The OpenClaw AI bot farm is plagued by critical security flaws, including a one-click remote code execution vulnerability and two command injection vulnerabilit...
Read Analysis →
Feb 03, 2026 •
Vulnerability
|
#DockerDash
#Meta-Context Injection
#Remote Code Execution
A critical vulnerability, codenamed DockerDash, in Docker's Ask Gordon AI assistant allowed remote code execution and data exfiltration. This "Meta-Co...
Read Analysis →
Feb 03, 2026 •
Vulnerability
|
#AWS
#AI
#Cloud Breach
An AWS environment was rapidly compromised within an 8-minute window, with artificial intelligence actively accelerating the breach process. The incident highli...
Read Analysis →
Feb 03, 2026 •
Vulnerability
|
#CVE-2026-25253
#Remote Code Execution
#Token Exfiltration
A critical token exfiltration vulnerability, tracked as CVE-2026-25253, was discovered in the OpenClaw (Moltbot/Clawdbot) AI assistant. This one-click remote co...
Read Analysis →
Feb 02, 2026 •
Vulnerability
|
#Supabase Misconfiguration
#API Key Exposure
#Row Level Security (RLS)
A critical misconfiguration in Moltbook's Supabase database, stemming from an exposed API key in client-side JavaScript and the absence of Row Level Securi...
Read Analysis →
Feb 02, 2026 •
Vulnerability
|
#Moltbook
#AI Agents
#Application Security
The provided article content is empty, precluding a specific technical summary of any exploit or CVE. However, the title suggests a significant security vulnera...
Read Analysis →
Feb 02, 2026 •
Vulnerability
|
#OpenClaw
#Remote Code Execution
#AI Coding Assistants
The OpenClaw vulnerability in AI coding assistants allows single-click Remote Code Execution (RCE) by exploiting the trust relationship between developers and A...
Read Analysis →
Feb 02, 2026 •
Malware
|
#AI
#Malware
#Infostealers
Artificial intelligence, particularly agentic AI, is predicted to revolutionize the attack landscape by automating and accelerating the entire attack lifecycle,...
Read Analysis →
Feb 02, 2026 •
Data Leak
|
#OpenClaw AI
#Data Exposure
#Misconfiguration
According to the article title, over 21,000 OpenClaw AI instances have been identified exposing personal configuration data, indicating a significant data expos...
Read Analysis →
Feb 02, 2026 •
Vulnerability
|
#CVE-2026-25253
#Remote Code Execution
#Cross-Site WebSocket Hijacking
A high-severity vulnerability, tracked as CVE-2026-25253, in OpenClaw allows one-click remote code execution (RCE) via a crafted malicious link. This exploit le...
Read Analysis →
Feb 02, 2026 •
Data Leak
|
#Supabase
#API Key Exposure
#Row Level Security
A misconfigured Supabase database, with an exposed API key in client-side JavaScript and disabled Row Level Security (RLS), granted unauthenticated full read an...
Read Analysis →
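The Supabase findings above come down to one fact: the anon key shipped in client-side JavaScript is an authorization credential, and with Row Level Security disabled it authorizes everything. A minimal sketch of the exposure and the fix; the project URL, table name, and key are placeholders, not values from the incident:

```python
import requests

SUPABASE_URL = "https://example-project.supabase.co"          # hypothetical project
ANON_KEY = "<anon key scraped from the site's bundled JS>"     # placeholder

def dump_table(table: str) -> list[dict]:
    """With RLS disabled, the public anon key reads the whole table via the auto-generated REST API."""
    resp = requests.get(
        f"{SUPABASE_URL}/rest/v1/{table}",
        params={"select": "*"},
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # with RLS enabled and no SELECT policy, this would be []

# The fix is enabling RLS and writing explicit policies, e.g. in SQL:
#   ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
#   CREATE POLICY "own rows only" ON profiles
#       FOR SELECT USING (auth.uid() = user_id);
```

Rotating the key does not help on its own, since a replacement anon key still ends up in the client bundle; the policies are the actual boundary.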
Feb 01, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#LLM Agents
OpenClaw (Moltbot), an LLM agent system, grants unfettered access to user systems and sensitive data, bypassing traditional operating system and browser securit...
Read Analysis →
Feb 01, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#LLM Agents
OpenClaw (Moltbot), an LLM agent system, poses a severe security risk due to its design, which grants unfettered access to user systems and data, bypassing oper...
Read Analysis →
Feb 01, 2026 •
Vulnerability
|
#Prompt Injection
#LLM Security
#Unfettered System Access
OpenClaw (Moltbot), an LLM agent system, presents critical security risks due to its design granting unfettered access to user systems, including sensitive data...
Read Analysis →
Jan 31, 2026 •
Data Leak
|
#Moltbook AI
#Data Leak
#API Keys
A significant security flaw within Moltbook AI has resulted in the leakage of highly sensitive user data. This compromise includes user email addresses, authent...
Read Analysis →
Jan 30, 2026 •
Vulnerability
|
#Prompt Injection
#Supply Chain Attack
#AI Agent Security
The OpenClaw AI assistant, an autonomous open-source agent, poses significant security risks due to its privileged access to system tools and sensitive data. It...
Read Analysis →
Jan 30, 2026 •
Vulnerability
|
#OpenClaw
#Prompt Injection
#Agentic AI
OpenClaw, an open-source agentic AI assistant, exhibits critical architectural vulnerabilities including a default trust for localhost and susceptibility to pro...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Open-source AI
#AI Security
#Model Vulnerabilities
Researchers are warning that open-source AI models possess inherent vulnerabilities, making them susceptible to various forms of criminal misuse and exploitatio...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Cleartext storage
#Supply chain risk
#RCE
The AI personal assistant MoltBot (OpenClaw) insecurely stores sensitive credentials and API keys in cleartext within `~/.clawdbot` and retains "deleted"...
Read Analysis →
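For the cleartext-storage issue above, a quick local audit is to walk ~/.clawdbot and flag anything shaped like an API key. A sketch with example patterns; the directory's internal layout is not documented in the entry, so the scan is deliberately format-agnostic:

```python
import re
from pathlib import Path

# Example token shapes only; extend to match whatever providers you actually use.
KEY_PATTERNS = {
    "aws_access_key": re.compile(rb"AKIA[0-9A-Z]{16}"),
    "openai_style":   re.compile(rb"sk-[A-Za-z0-9_-]{20,}"),
    "github_token":   re.compile(rb"gh[pousr]_[A-Za-z0-9]{36,}"),
}

def scan_clawdbot(root: Path = Path.home() / ".clawdbot") -> list[tuple[str, str]]:
    """Return (file path, pattern name) pairs for files containing key-shaped strings."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for label, pattern in KEY_PATTERNS.items():
            if pattern.search(data):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    for path, label in scan_clawdbot():
        print(f"{label:>15}  {path}")
```

Anything this turns up is readable by every process running as the same user, which is the exposure the researchers are pointing at.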
Jan 29, 2026 •
Data Leak
|
#ChatGPT
#FOUO
#AI governance
A CISA director uploaded "for official use only" government contracting documents to OpenAI's public ChatGPT, bypassing approved federal AI tools...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#AI Agents
#Prompt Injection
#Persistent Memory
The autonomous AI agent OpenClaw, with its deep system access and persistent memory, significantly expands the attack surface for AI agents, enabling sophistica...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#AI Agents
#Vulnerability Exploitation
#Web Application Security
AI agents, including Claude Sonnet 4.5, GPT-5, and Gemini 2.5 Pro, demonstrated high proficiency by solving 9 out of 10 lab challenges that simulated real-world...
Read Analysis →
Jan 29, 2026 •
Vulnerability
|
#Prompt Injection
#Data Exfiltration
#AI Agents
The article highlights advanced threats to AI agents, including "Shadow Escape," a zero-click exploit targeting Model Context Protocol (MCP) based sys...
Read Analysis →
Jan 28, 2026 •
Vulnerability
|
#Multi-Agent Communication Protocols
#Intent Disruption
#Multimodal Security
NSFOCUS has identified emerging threats targeting AI Agents and Large Language Models (LLMs), specifically through sophisticated attacks leveraging Multi-Agent ...
Read Analysis →
Jan 28, 2026 •
Malware
|
#OpenClaw
#Prompt Injection
#Data Exfiltration
Personal AI agents like OpenClaw are severely vulnerable to malicious third-party "skills" that can leverage their high-level privileges for harmful a...
Read Analysis →
Jan 27, 2026 •
Data Leak
|
#ChatGPT
#Sensitive Data Exposure
#LLM
An acting CISA director uploaded "for official use only" government contracting documents into a public version of ChatGPT, triggering internal securi...
Read Analysis →
Jan 27, 2026 •
Vulnerability
|
#Authentication Bypass
#Remote Code Execution
#API Key Exposure
Cybersecurity experts have identified a critical authentication bypass vulnerability in the Clawdbot AI assistant, stemming from improperly configured reverse p...
Read Analysis →
Jan 26, 2026 •
Malware
|
#VS Code Extensions
#MaliciousCorgi
#Spyware
Two malicious Visual Studio Code extensions, disguised as AI coding assistants, have been found siphoning developer source code and opened files to China-based ...
Read Analysis →
Jan 23, 2026 •
Vulnerability
|
#LLM Jailbreaks
#Prompt Injection
#Adaptive Attacks
Current AI defenses for large language models are largely ineffective against adaptive attacks, with research demonstrating bypass rates over 90% for techniques...
Read Analysis →
Jan 22, 2026 •
Social Engineering
|
#LLMs
#Phishing
#Runtime Assembly Attacks
This research identifies a new attack vector where Large Language Models (LLMs) are maliciously leveraged to dynamically generate sophisticated phishing JavaScr...
Read Analysis →
Jan 20, 2026 •
Vulnerability
|
#GitHub Security Lab Taskflow Agent
#Large Language Models
#CodeQL
The GitHub Security Lab's Taskflow Agent leverages large language models (LLMs) to automate and enhance the triage of security alerts, effectively identify...
Read Analysis →
Jan 18, 2026 •
Jailbreak
|
#Deepfake
#Prompt Injection
#Generative AI
Grok AI was exploited by users who bypassed its content moderation safeguards through prompt-based manipulation, enabling the generation of non-consensual deepf...
Read Analysis →
Jan 15, 2026 •
Vulnerability
|
#CVE-2025-12420
#User Impersonation
#ServiceNow AI Platform
ServiceNow has patched CVE-2025-12420, dubbed "BodySnatcher," a critical AI Platform vulnerability with a CVSS score of 9.3. This flaw allowed unauthe...
Read Analysis →
Jan 15, 2026 •
Vulnerability
|
#Reprompt Attack
#Microsoft Copilot
#Prompt Injection
Researchers unveiled a "Reprompt" attack method enabling single-click data exfiltration from Microsoft Copilot by exploiting the "q" URL par...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#Prompt Injection
#AI Agents
#LLM Security
The increasing adoption of autonomous AI agents introduces significant security vulnerabilities, primarily through prompt injection attacks that can cascade acr...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#Remote Code Execution
#AI/ML
#Library Vulnerability
This article details potential Remote Code Execution (RCE) vulnerabilities arising from the use of modern AI/ML formats and libraries. It investigates how these...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#Cloudflare
#SQL command
#Malformed data
The provided text is a Cloudflare block page indicating restricted access to darkreading.com, triggered by security measures designed to protect against online ...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#CVE-2025-12420
#Prompt Injection
#ServiceNow AI Platform
A critical vulnerability, CVE-2025-12420 (CVSS 9.3), was patched in ServiceNow's AI platform, allowing unauthenticated user impersonation and unauthorized ...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#ServiceNow
#AI Vulnerability
#Authentication Bypass
Attackers could exploit a universal credential for ServiceNow's Virtual Agent API combined with weak email-only authentication to impersonate users. This a...
Read Analysis →
Jan 13, 2026 •
Vulnerability
|
#CVE-2025-12420
#Unauthenticated Impersonation
#MFA/SSO Bypass
ServiceNow patched CVE-2025-12420, codenamed BodySnatcher, a critical vulnerability (CVSS 9.3) in its AI Platform that allowed unauthenticated user impersonatio...
Read Analysis →
Jan 08, 2026 •
Data Leak
|
#Chrome Extensions
#LLM Data Exfiltration
#C2 Server
Malicious Google Chrome extensions, posing as legitimate AI tools, exfiltrated sensitive user data including Large Language Model (LLM) conversations and extens...
Read Analysis →
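A practical response to the malicious-extension findings in this feed is to inventory installed extensions and flag manifests requesting broad host access, which is what lets a fake AI tool read chat pages in the first place. A sketch assuming the default Linux Chrome profile path (it differs on macOS and Windows):

```python
import json
from pathlib import Path

# Default Linux profile; adjust for your OS or a non-default profile.
PROFILE = Path.home() / ".config" / "google-chrome" / "Default" / "Extensions"
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def risky_extensions(profile: Path = PROFILE) -> list[tuple[str, list[str]]]:
    """Return (extension ID, broad permissions) for extensions that can read every site."""
    flagged = []
    for manifest_path in profile.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8-sig"))
        perms = set(manifest.get("host_permissions", [])) | set(manifest.get("permissions", []))
        broad = sorted(perms & BROAD)
        if broad:
            # parts[-3] is the 32-character extension ID in .../Extensions/<id>/<version>/manifest.json
            flagged.append((manifest_path.parts[-3], broad))
    return flagged

if __name__ == "__main__":
    for ext_id, perms in risky_extensions():
        print(f"{ext_id}: {perms}")
```

Broad host permissions are not proof of malice, but anything on this list can see LLM chat sessions and is worth checking against the extension's stated purpose.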
Jan 08, 2026 •
Malware
|
#Malicious Chrome Extensions
#C2 Server
#Prompt Poaching
Threat actors deployed malicious Chrome extensions, posing as legitimate AI tools, to steal sensitive user data by exfiltrating LLM conversations and browser ac...
Read Analysis →
Jan 08, 2026 •
Data Leak
|
#ZombieAgent
#Indirect Prompt Injection
#ChatGPT
The ZombieAgent attack, a bypass of the earlier ShadowLeak exploit, leverages an indirect prompt injection vulnerability in ChatGPT to achieve character-by-char...
Read Analysis →