Mar 20, 2026 β’
Data Leak
|
#AI agent
#Data Leak
#Agentic AI
An internal Meta AI agent provided erroneous instructions to an engineer, leading to the accidental exposure of sensitive user and company data to other employe...
Read Analysis β
Mar 20, 2026 β’
Vulnerability
|
#Prompt Injection
#Model Weights
#AI Agent Misuse
AI applications introduce novel attack surfaces, enabling prompt injection to bypass instructions or facilitate data exfiltration, and allowing malicious model ...
Read Analysis β
Mar 19, 2026 β’
Vulnerability
|
#OpenWebUI
#AI-Powered Payloads
#Server Exploitation
Threat actors are actively exploiting OpenWebUI servers to gain unauthorized access and deploy malicious payloads. These attacks are characterized by the deploy...
Read Analysis β
Mar 19, 2026 β’
Vulnerability
|
#AI Chatbot
#Vulnerability
#Prompt Injection
A security flaw has been identified within Health NZ's AI chatbot, though its severity is reportedly being downplayed by the organization. The specific tec...
Read Analysis β
Mar 19, 2026 β’
Vulnerability
|
#Model Context Protocol (MCP)
#Indirect Prompt Injection
#Large Language Models (LLM)
Architectural vulnerabilities within Large Language Model (LLM) environments integrated with the Model Context Protocol (MCP) enable attackers to embed maliciou...
Read Analysis β
Mar 18, 2026 β’
Vulnerability
|
#Autonomous LLM Agent
#OpenClaw
#Security Framework
Tsinghua and Ant Group researchers have unveiled a five-layer lifecycle-oriented security framework designed to address and mitigate inherent vulnerabilities fo...
Read Analysis β
Mar 16, 2026 β’
Jailbreak
|
#DeepSeek-R1 LLaMA 8B
#LLM Jailbreak
#Adversarial Prompting
Qualys's analysis of the DeepSeek-R1 LLaMA 8B Large Language Model identified critical jailbreak vulnerabilities, with the model failing 58% of 885 adversa...
Read Analysis β
Mar 07, 2026 β’
Vulnerability
|
#CVE-2026-2796
#Use-after-free
#Claude Opus 4.6
Anthropic's Claude Opus 4.6 AI model discovered 22 new vulnerabilities in the Firefox browser, including high-severity issues like a use-after-free bug and...
Read Analysis β
Mar 07, 2026 β’
Vulnerability
|
#HTTP 403
#Access Control
#Data Scrape Error
The provided article content returned a 403 Forbidden error, preventing access to the full details. Consequently, no information regarding specific exploits, CV...
Read Analysis β
Mar 07, 2026 β’
Vulnerability
|
#HTTP 403
#Access Forbidden
#Access Control
The scraped article text returned an HTTP 403 Forbidden error, indicating that access to the page's content was denied. This prevents any analysis of speci...
Read Analysis β
Mar 06, 2026 β’
Vulnerability
|
#HTTP 403
#Access Denied
#Forbidden
The provided "Article Content" consists solely of an HTTP 403 Forbidden error message, indicating that access to the requested web page was denied. Th...
Read Analysis β
Mar 06, 2026 β’
Vulnerability
|
#HTTP 403
#Access Control
#Web Application Security
The scraped article text consists solely of an HTTP 403 Forbidden error, indicating the web server explicitly denied access to the requested page. This access d...
Read Analysis β
Mar 06, 2026 β’
Data Leak
|
#Prompt Engineering
#AI Jailbreaking
#Data Exfiltration
Hackers utilized AI jailbreaking techniques and sophisticated prompt engineering on Generative AI models like Claude and ChatGPT to exploit vulnerabilities with...
Read Analysis β
Mar 06, 2026 β’
Vulnerability
|
#Prompt Injection
#Data Leakage
#Jailbreaking
LLM applications face significant security risks, primarily prompt injection attacks, where malicious inputs manipulate models into ignoring instructions and re...
Read Analysis β
Mar 06, 2026 β’
Vulnerability
|
#GitHub Security Lab Taskflow Agent
#LLM
#Authorization Bypass
The GitHub Security Lab Taskflow Agent is an open-source AI-powered framework that leverages Large Language Models (LLMs) and structured taskflows to proactivel...
Read Analysis β
Mar 05, 2026 β’
Vulnerability
|
#AI-native Cybersecurity
#Vulnerability Scanning
#Penetration Testing
The proliferation of AI agents in enterprise environments is creating new attack vectors and scaling cyber risks, driving significant venture investment into AI...
Read Analysis β
Mar 05, 2026 β’
Data Leak
|
#Chromium Extensions
#LLM Chat Data Exfiltration
#Trojan:JS/ChatGPTStealer
Malicious Chromium-based browser extensions are impersonating legitimate AI tools to harvest sensitive LLM chat histories and browsing data, impacting over 900,...
Read Analysis β
Mar 04, 2026 β’
Vulnerability
|
#Shadow AI
#Generative AI
#LLM Data Training
Shadow AI poses a significant security vulnerability by allowing employees to inadvertently input sensitive organizational data into public generative AI models...
Read Analysis β
Mar 03, 2026 β’
Vulnerability
|
#CyberStrikeAI
#FortiGate
#AI-assisted attack
An AI-assisted campaign is leveraging the open-source CyberStrikeAI offensive security tool, utilizing generative AI services, to systematically target Fortinet...
Read Analysis β
Mar 03, 2026 β’
Vulnerability
|
#Prompt Injection
#AI Agent Security
#Role-Based Access Control
The article details how AI agents introduce unique security risks through prompt injection attacks, over-permissioning, and unconstrained external tool access, ...
Read Analysis β