Dec 29, 2025 β’
Vulnerability
|
#Prompt Injection
#Model Poisoning
#AI Supply Chain
Traditional security frameworks fail to address AI-specific attack vectors such as prompt injection, model poisoning, and AI supply chain compromises, creating ...
Read Analysis β
Dec 29, 2025 β’
Vulnerability
|
#Prompt Injection
#LLM
#CVE-2022-30190
Prompt injection attacks manipulate AI systems, particularly Large Language Models (LLMs), by overriding their intended instructions through malicious input, le...
Read Analysis β
Dec 29, 2025 β’
Vulnerability
|
#Prompt Injection
#AI Supply Chain Poisoning
#Remote Code Execution
Prompt injection is a prevalent AI-specific vulnerability where Large Language Models (LLMs) misinterpret external data as executable instructions, bypassing in...
Read Analysis β
Dec 22, 2025 β’
Vulnerability
|
#Prompt Injection
#AI Agent
#ChatGPT Atlas
Prompt injection attacks pose a fundamental and persistent security challenge for AI agents operating within browsers like OpenAI's ChatGPT Atlas, enabling...
Read Analysis β
Dec 22, 2025 β’
Vulnerability
|
#Prompt Injection
#ChatGPT Atlas
#OpenAI
OpenAI is continuously improving the security posture of its ChatGPT Atlas platform. These efforts are primarily focused on hardening the system to prevent and ...
Read Analysis β
Dec 18, 2025 β’
Data Leak
|
#Mixpanel
#Supply Chain Attack
#API Data Exposure
An OpenAI security incident occurred due to a vulnerability in its third-party data analytics provider, Mixpanel. This breach exposed general user information, ...
Read Analysis β
Dec 16, 2025 β’
Data Leak
|
#OpenAI API
#Mixpanel
#Data Exposure
A security incident at OpenAI led to data exposure through a vulnerability in Mixpanel, a third-party analytics provider. This compromise resulted in the leakag...
Read Analysis β
Dec 15, 2025 β’
Vulnerability
|
#Prompt Injection
#Unauthenticated Code Injection
#AWS Cloud Infrastructure
Prompt injection attacks against AI coding tools like Amazon Q were demonstrated to direct the tool to wipe local files and potentially disrupt AWS cloud infras...
Read Analysis β
Dec 11, 2025 β’
Vulnerability
|
#Prompt Injection
#Microsoft Copilot Studio
#LLM Vulnerability
Microsoft Copilot Studio's AI agents are susceptible to prompt injection, a vulnerability that allows users to bypass configured security mandates. This in...
Read Analysis β
Dec 11, 2025 β’
Vulnerability
|
#Prompt Injection
#AI Agents
#Data Exfiltration
AI agents created using Microsoft Copilot Studio are vulnerable to prompt injection, allowing attackers to bypass internal security mandates. This exploit facil...
Read Analysis β
Dec 11, 2025 β’
Vulnerability
|
#Prompt Injection
#LLM Jailbreak
#OWASP LLM Top 10
The article details how Qualys TotalAI addresses critical security risks in Large Language Models (LLMs), identifying widespread susceptibility to prompt inject...
Read Analysis β
Dec 10, 2025 β’
Jailbreak
|
#Large Language Models
#Jailbreak
#Time-lock puzzles
Cryptographers have demonstrated that AI safety filters designed to protect Large Language Models (LLMs) inherently possess vulnerabilities due to their computa...
Read Analysis β
Dec 10, 2025 β’
Vulnerability
|
#AI Vulnerability
#NCSC
#Data Breach
The National Cyber Security Centre (NCSC) has issued a caution regarding the misidentification or underestimation of AI vulnerabilities. Failure to properly add...
Read Analysis β
Dec 10, 2025 β’
Vulnerability
|
#Prompt Injection
#Data Poisoning
#AI Models
The article outlines a broad spectrum of risks to artificial intelligence (AI) systems, including data poisoning, prompt injection, and model theft, which colle...
Read Analysis β
Dec 07, 2025 β’
Vulnerability
|
#Prompt Injection
#OWASP Top Ten LLM
#Training Data Poisoning
The article details the OWASP Top Ten LLM Security Risks, outlining specific vulnerabilities such as Prompt Injection (LLM01), Training Data Poisoning (LLM03), ...
Read Analysis β
Dec 07, 2025 β’
Vulnerability
|
#ShadowMQ
#Python pickle deserialization
#CVE-2024-50050
Over 30 critical "ShadowMQ" vulnerabilities, stemming from insecure ZeroMQ `recv_pyobj()` and Python `pickle` deserialization, affect leading AI infer...
Read Analysis β
Dec 06, 2025 β’
Vulnerability
|
#Prompt Injection
#Remote Code Execution
#AI IDEs
Security researcher Ari Marzouk disclosed "IDEsaster," a collection of over 30 vulnerabilities, with 24 assigned CVEs, affecting various AI-powered In...
Read Analysis β
Dec 03, 2025 β’
Data Leak
|
#ChatGPT
#Data Leak
#Social Engineering
AI chatbots pose a home security risk by indefinitely storing sensitive user conversation data, which can include details about home layouts, addresses, and per...
Read Analysis β
Dec 01, 2025 β’
Malware
|
#npm supply chain attack
#LLM prompt injection
#Typosquatting
A malicious npm package, `eslint-plugin-unicorn-ts-2`, engaged in typosquatting to exfiltrate environment variables via a post-install hook to a Pipedream webho...
Read Analysis β
Dec 01, 2025 β’
Data Leak
|
#OpenAI
#Data Breach
#API Keys
A data breach at OpenAI resulted in the exposure of API user information. This incident necessitates prompt action from affected users to secure accounts and ro...
Read Analysis β
Nov 27, 2025 β’
Data Leak
|
#Mixpanel
#Third-party breach
#Data Minimisation
OpenAI confirmed a security breach originating from its third-party analytics provider, Mixpanel, where an unauthorized actor gained access to and exfiltrated a...
Read Analysis β
Nov 24, 2025 β’
Vulnerability
|
#Cloudflare
#Bot Protection
#Security Challenge
The provided text details a Cloudflare security challenge page, indicating a connection review is underway for cisoseries.com to verify human access. This mecha...
Read Analysis β
Nov 20, 2025 β’
Vulnerability
|
#SAST
#Llama 3 8B
#Semgrep
A novel hybrid framework combining Static Application Security Testing (SAST) with a fine-tuned Large Language Model (LLM) dramatically improves vulnerability d...
Read Analysis β
Nov 19, 2025 β’
Vulnerability
|
#Second-order prompt injection
#ServiceNow Now Assist
#Agent-to-agent discovery
ServiceNow's Now Assist generative AI platform is susceptible to "second-order prompt injection" attacks due to its default agent-to-agent discov...
Read Analysis β
Nov 18, 2025 β’
Vulnerability
|
#AI Orchestration
#Claude Code
#Data Exfiltration
Anthropic's Threat Intelligence team disrupted the first known AI-orchestrated cyber espionage campaign, where a state-sponsored Chinese threat actor utili...
Read Analysis β
Nov 17, 2025 β’
Jailbreak
|
#AI Jailbreak
#State-sponsored APT
#Data Exfiltration
Chinese state-sponsored actors exploited Anthropic's Claude AI by jailbreaking its safeguards, enabling the autonomous execution of cyberattacks with minim...
Read Analysis β
Nov 14, 2025 β’
Vulnerability
|
#CVE-2024-50050
#Python pickle deserialization
#ZeroMQ
A series of critical Remote Code Execution (RCE) vulnerabilities, dubbed 'ShadowMQ,' were discovered in major AI inference frameworks (Meta Llama Stac...
Read Analysis β
Nov 14, 2025 β’
Vulnerability
|
#Anthropic Claude Code
#AI Agentic Capabilities
#Cyber Espionage
Chinese state-sponsored threat actors leveraged Anthropic's Claude Code and Model Context Protocol (MCP) as an "autonomous cyber attack agent" to...
Read Analysis β
Nov 14, 2025 β’
Vulnerability
|
#CVE-2024-50050
#ZeroMQ
#Pickle Deserialization
Critical remote code execution vulnerabilities have been discovered across major AI inference engines, including Meta, Nvidia, and Microsoft, stemming from the ...
Read Analysis β
Nov 13, 2025 β’
Jailbreak
|
#LLM Jailbreak
#Agentic AI
#Cyber Espionage
A state-sponsored group utilized Anthropic's Claude Code, jailbreaking its guardrails to orchestrate the first reported AI-driven cyber espionage campaign....
Read Analysis β
Nov 11, 2025 β’
Vulnerability
|
#Whisper Leak
#LLM
#Side-Channel Attack
The βWhisper Leakβ identifies a novel side-channel vulnerability affecting Large Language Models (LLMs). This attack allows adversaries to infer sensitive u...
Read Analysis β
Nov 08, 2025 β’
Vulnerability
|
#Whisper Leak
#Side-channel attack
#LLM streaming
The "Whisper Leak" is a novel side-channel attack targeting remote language models, allowing passive adversaries to infer sensitive conversation topic...
Read Analysis β
Nov 07, 2025 β’
Vulnerability
|
#Whisper Leak
#Side-channel attack
#LLM
The "Whisper Leak" is a novel side-channel attack that infers language model conversation topics by analyzing network packet sizes and timings, even w...
Read Analysis β
Nov 06, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#Large Language Models
#AI Supply Chain Security
Critical vulnerabilities in AI systems include structural flaws in AI-generated code and the ability to establish backdoors in large language models using minim...
Read Analysis β
Nov 06, 2025 β’
Vulnerability
|
#Prompt Injection
#LLM Backdoor
#AI Supply Chain Security
Systemic vulnerabilities are prevalent across AI models and infrastructure, encompassing exploitable flaws in AI-generated code and the ability to embed backdoo...
Read Analysis β
Nov 05, 2025 β’
Malware
|
#Large Language Models
#Adversarial AI
#Polymorphic Malware
The Google Cloud Threat Intelligence Group's (GTIG) AI Threat Tracker likely highlights an increase in threat actor adoption of artificial intelligence too...
Read Analysis β
Nov 05, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#ChatGPT
#Data Exfiltration
Cybersecurity researchers have disclosed seven new vulnerabilities in OpenAI's GPT-4o and GPT-5 models, enabling indirect prompt injection attacks. These e...
Read Analysis β
Nov 04, 2025 β’
Data Leak
|
#Indirect Prompt Injection
#Claude AI
#Data Exfiltration
A novel indirect prompt injection attack allows threat actors to compromise Anthropic's Claude AI Code Interpreter, leveraging its network features to exfi...
Read Analysis β
Nov 03, 2025 β’
Vulnerability
|
#CVE-2024-12366
#Remote Code Execution
#Agentic AI
The article details a Remote Code Execution (RCE) vulnerability, tracked as CVE-2024-12366, affecting agentic AI systems that execute LLM-generated code without...
Read Analysis β
Oct 31, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#Code Interpreter
#Data Exfiltration
A vulnerability in Anthropic's Claude AI allows attackers to leverage indirect prompt injection against its code interpreter feature. This exploit enables ...
Read Analysis β
Oct 31, 2025 β’
Vulnerability
|
#Aardvark
#GPT-5
#CVEs
OpenAI has launched Aardvark, an AI agent powered by GPT-5, engineered to autonomously scan, identify, validate, and propose patches for security vulnerabilitie...
Read Analysis β
Oct 30, 2025 β’
Vulnerability
|
#Vulnerability detection
#GPT-5
#CVE identifiers
OpenAI has introduced Aardvark, an agentic AI security researcher powered by GPT-5, designed to autonomously identify, validate, and propose fixes for security ...
Read Analysis β
Oct 30, 2025 β’
Vulnerability
|
#Aardvark
#GPT-5
#CVE
OpenAI has introduced Aardvark, an agentic AI security researcher powered by GPT-5, designed to autonomously identify and propose fixes for security vulnerabili...
Read Analysis β
Oct 29, 2025 β’
Vulnerability
|
#Confused Deputy Problem
#Agentic AI
#Identity and Access Management
The article highlights that agentic AI will become a significant attack vector by exploiting the "confused deputy problem," where AI agents with legit...
Read Analysis β
Oct 28, 2025 β’
Vulnerability
|
#Prompt Injection
#Agentic AI
#LLM
Prompt injection vulnerabilities enable attackers to embed malicious commands within seemingly innocuous content, leading AI browsers and chatbots to perform un...
Read Analysis β
Oct 28, 2025 β’
Vulnerability
|
#LLM Security
#AI Agents
#Security Benchmark
Lakera has launched an open-source security benchmark specifically designed to evaluate and enhance the security posture of Large Language Model (LLM) backends ...
Read Analysis β
Oct 27, 2025 β’
Vulnerability
|
#OpenAI Atlas
#AI Agent
#Browser Vulnerability
A security flaw has been identified within OpenAI's Atlas browser component, according to the article title. This vulnerability is presented as a critical ...
Read Analysis β
Oct 27, 2025 β’
Vulnerability
|
#ChatGPT Atlas
#CSRF
#Persistent Memory
A critical vulnerability in OpenAI's ChatGPT Atlas browser leverages a Cross-Site Request Forgery (CSRF) flaw to inject malicious instructions into the AI&...
Read Analysis β
Oct 25, 2025 β’
Vulnerability
|
#Generative AI
#Smishing
#AI-assisted threats
The Verizon 2025 Mobile Security Index reveals a significant surge in mobile-based cyberattacks, largely driven by the widespread adoption of Generative AI with...
Read Analysis β
Oct 23, 2025 β’
Vulnerability
|
#AI Security
#Attack Surface
#Stolen Credentials
While cybersecurity leaders are increasingly adopting AI to combat skills shortages and expanding attack surfaces, 43% of surveyed organizations have already ex...
Read Analysis β
Oct 23, 2025 β’
Vulnerability
|
#Model Context Protocol (MCP)
#AI Agent Supply Chain
#Tool Poisoning
The adoption of Model Context Protocol (MCP) exposes AI agent supply chains to critical vulnerabilities, specifically "tool poisoning attacks" where m...
Read Analysis β
Oct 23, 2025 β’
Vulnerability
|
#Prompt injection
#AI browsers
#Autonomous AI agents
Researchers have identified critical prompt injection vulnerabilities in AI browsers, such as Perplexity's Comet, where embedded, imperceptible instruction...
Read Analysis β
Oct 22, 2025 β’
Vulnerability
|
#AI Vulnerabilities
#Vishing
#Data Leak
AI security flaws have negatively impacted half of organizations, enabling cybercriminals to execute sophisticated attacks more easily and significantly increas...
Read Analysis β
Oct 22, 2025 β’
Vulnerability
|
#Cloudflare
#Web Application Firewall
#WAF
A user attempting to access darkreading.com encountered a Cloudflare-issued block page, indicating their action triggered a web application firewall (WAF) rule....
Read Analysis β
Oct 21, 2025 β’
Vulnerability
|
#Prompt Injection
#LLM
#Agentic Browsers
The article identifies indirect prompt injection vulnerabilities in AI-powered agentic browsers, specifically demonstrating attacks against Perplexity Comet via...
Read Analysis β
Oct 15, 2025 β’
Vulnerability
|
#Large Language Models
#Privacy Vulnerability
#User Data Training
A Stanford study reveals that leading AI companies, including Anthropic, Google, and OpenAI, are defaulting to using user chat inputs for large language model (...
Read Analysis β
Oct 12, 2025 β’
Vulnerability
|
#CVE-2024-0132
#Container Escape
#Generative AI
The NVIDIA Container Toolkit contains a critical security flaw, identified as CVE-2024-0132, which allows for container escape. This vulnerability grants attack...
Read Analysis β
Oct 12, 2025 β’
Vulnerability
|
#Prompt Injection
#Training Data Poisoning
#OWASP Top 10 for LLM Applications
The article highlights critical security risks in Large Language Model (LLM) deployments, emphasizing prompt injection as a key attack vector where malicious in...
Read Analysis β
Oct 10, 2025 β’
Vulnerability
|
#LLM Security
#Insecure Code Generation
#Data Exposure
Kaspersky outlines security risks for developers employing LLM assistants and "vibe coding" methodologies. These concerns primarily involve the potent...
Read Analysis β
Oct 09, 2025 β’
Vulnerability
|
#Agentic AI Cyberweapons
#Critical Infrastructure
#Zero-Day Vulnerability
State-sponsored attackers are increasingly deploying agentic AI cyberweapons to autonomously exploit critical vulnerabilities, including zero-day flaws, within ...
Read Analysis β
Oct 09, 2025 β’
Data Leak
|
#ChatGPT
#Data Exfiltration
#Shadow AI
A significant 77% of employees are reportedly leaking sensitive corporate data by pasting it into generative AI tools like ChatGPT, primarily through personal, ...
Read Analysis β
Oct 09, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#Remote Code Execution
#Agentic AI
Attackers can achieve remote code execution (RCE) on developer machines by leveraging indirect prompt injection against agentic AI developer tools. This is acco...
Read Analysis β
Oct 09, 2025 β’
Vulnerability
|
#LLM data poisoning
#Backdoor vulnerability
#Fixed-size training data poisoning
Researchers demonstrated that as few as 250 poisoned documents can create a backdoor vulnerability in large language models, irrespective of model size or train...
Read Analysis β
Oct 08, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#Command Injection
#Remote Code Execution
An advanced attack chain exploits an LLM chatbot through indirect prompt injection (OWASP LLM01:2025) to achieve system prompt leakage and abuse excessive agenc...
Read Analysis β
Oct 06, 2025 β’
Vulnerability
|
#CVE-2023-4863
#AI Agent
#Buffer Overflow
Google DeepMind introduces CodeMender, an AI agent designed to automatically discover and patch software vulnerabilities, including complex root causes and arch...
Read Analysis β
Oct 06, 2025 β’
Vulnerability
|
#CodeMender
#CVE-2023-4863
#Buffer Overflow
Google DeepMind has introduced CodeMender, an AI agent designed to automatically identify and patch software vulnerabilities using advanced program analysis and...
Read Analysis β
Oct 03, 2025 β’
Vulnerability
|
#Claude Sonnet 4.5
#Vulnerability Discovery
#CyberGym
Claude Sonnet 4.5 demonstrates significant advancements in cybersecurity defense, exhibiting enhanced capabilities in detecting, analyzing, and patching softwar...
Read Analysis β
Oct 02, 2025 β’
Vulnerability
|
#Remote Code Execution
#Prompt Injection
#Retrieval-Augmented Generation
The NVIDIA AI Red Team highlights critical vulnerabilities in LLM-based applications, most notably Remote Code Execution (RCE) via prompt injection when LLM-gen...
Read Analysis β
Oct 01, 2025 β’
Vulnerability
|
#Living Off the Land
#Attack Surface Reduction
#AI-driven attacks
Bitdefender's 2025 report reveals that 84% of high-severity attacks leverage Living Off the Land (LOTL) techniques, utilizing legitimate tools to bypass tr...
Read Analysis β
Oct 01, 2025 β’
Vulnerability
|
#CVE-2025-10725
#OpenShift AI
#Privilege Escalation
A critical vulnerability, CVE-2025-10725 (CVSS 9.9), allows authenticated, low-privileged attackers to escalate privileges to a full cluster administrator in Re...
Read Analysis β
Sep 30, 2025 β’
Vulnerability
|
#Artificial Intelligence
#Machine Learning
#Threat Detection
Artificial Intelligence (AI) significantly enhances modern cybersecurity practices through machine learning for real-time threat detection, automated incident r...
Read Analysis β
Sep 29, 2025 β’
Social Engineering
|
#LLM
#SVG File
#Credential Harvesting
Threat actors are employing Large Language Models (LLMs) to create sophisticated phishing campaigns, leveraging LLM-generated code to obfuscate malicious payloa...
Read Analysis β
Sep 29, 2025 β’
Automated Alert
Please provide the "Article Content" for analysis. I need the text of the article to generate the summary, categorize it, and extract keywords....
Read Analysis β
Sep 25, 2025 β’
Data Leak
|
#Indirect Prompt Injection
#Agentforce
#ForcedLeak
Researchers discovered "ForcedLeak," a critical indirect prompt injection vulnerability (CVSS 9.4) within Salesforce's Agentforce AI platform. Th...
Read Analysis β
Sep 25, 2025 β’
Data Leak
|
#Salesforce AI
#Prompt Injection
#Data Leakage
Salesforce AI agents are reportedly being manipulated to disclose sensitive information, indicating a critical vulnerability in their design or implementation. ...
Read Analysis β
Sep 25, 2025 β’
Vulnerability
|
#ForcedLeak
#Prompt Injection
#CRM Data Exfiltration
Salesforce Agentforce was susceptible to a critical indirect prompt injection vulnerability, codenamed ForcedLeak (CVSS 9.4). This flaw allowed attackers to exf...
Read Analysis β
Sep 24, 2025 β’
Vulnerability
|
#LoRA
#Pickle Serialization
#Data Poisoning
The article highlights critical vulnerabilities in Large Language Models (LLMs) through supply chain attacks, specifically detailing the embedding of malicious ...
Read Analysis β
Sep 24, 2025 β’
Data Leak
|
#OAuth token
#Supply chain attack
#GitHub repository
A multi-stage supply chain attack, tracked as UNC6395, originated from the compromise of a Salesloft GitHub repository, leading to the theft of a sensitive OAut...
Read Analysis β
Sep 24, 2025 β’
Vulnerability
|
#LoRA
#Data Poisoning
#Pickle Serialization
Adversaries can compromise Large Language Models (LLMs) through three primary methods: embedding malicious executable instructions in model files, leveraging ma...
Read Analysis β
Sep 24, 2025 β’
Data Leak
|
#OAuth token
#Supply Chain Attack
#GitHub compromise
An OAuth token stolen from a compromised GitHub repository of AI chatbot vendor Salesloft-Drift was leveraged to access their high-privilege Drift account. This...
Read Analysis β
Sep 22, 2025 β’
Vulnerability
|
#Artificial Intelligence (AI)
#Large Language Model (LLM)
#Systemic weakness
The article identifies that the natural language instruction paradigm of AI chatbots and large language models (LLMs) fundamentally introduces a "systemic ...
Read Analysis β
Sep 19, 2025 β’
Vulnerability
|
#Prompt Injection
#LLM Security
#Cloud AI
The article details critical security challenges associated with cloud-hosted Large Language Models (LLMs), including prompt injection, adversarial exploits, mo...
Read Analysis β
Sep 19, 2025 β’
Vulnerability
|
#OWASP Top 10 for LLM Applications 2025
#System Prompt Leakage
#Retrieval-Augmented Generation (RAG)
The OWASP Top 10 for LLM Applications 2025 introduces critical updates, including new entries like System Prompt Leakage, which exploits the exposure of sensiti...
Read Analysis β
Sep 18, 2025 β’
Data Leak
|
#MITRE ATLAS
#LLM
#Adversarial Attacks
AI security incidents are rapidly escalating, primarily impacting organizations through significant data breaches and unauthorized access to AI systems. Notable...
Read Analysis β
Sep 17, 2025 β’
Vulnerability
|
#LLM
#Prompt Injection
#Adversarial AI
The Kaspersky article forecasts various technical methods and attack vectors projected to compromise Large Language Models (LLMs) by 2025. It likely details eme...
Read Analysis β
Sep 17, 2025 β’
Vulnerability
|
#Reflected XSS
#LLM Security
#Input Sanitization
Researchers discovered a reflected Cross-Site Scripting (XSS) vulnerability in Yellow.ai's chatbot, which could be tricked into generating malicious HTML/J...
Read Analysis β
Sep 12, 2025 β’
Vulnerability
|
#AI Security
#AI Agents
#Prompt Injection
This article addresses the critical security challenges inherent in deploying AI agents, highlighting the potential for vulnerabilities that could compromise bu...
Read Analysis β
Sep 12, 2025 β’
Vulnerability
|
#Cursor AI Code Editor
#Workspace Trust
#Code Execution
A security flaw in the Cursor AI code editor, a Visual Studio Code fork, allows for silent arbitrary code execution when a maliciously crafted repository is ope...
Read Analysis β
Sep 11, 2025 β’
Social Engineering
|
#LLM Pen Testing
#Adversarial Prompt Exploitation (APE)
#Behavioral Manipulation
The article introduces Adversarial Prompt Exploitation (APE), a novel methodology for LLM penetration testing that deviates from traditional code-based exploits...
Read Analysis β
Sep 11, 2025 β’
Social Engineering
|
#AI Browsers
#Prompt Injection
#Social Engineering
Deeply integrated AI browsers pose significant security risks due to their susceptibility to social engineering and prompt injection attacks targeting the AI ag...
Read Analysis β
Sep 02, 2025 β’
Vulnerability
|
#Hexstrike-AI
#Zero-Day
#CVE-2025-7775
Hexstrike-AI is an AI-powered orchestration framework designed to automate and accelerate zero-day exploitation, leveraging large language models to significant...
Read Analysis β
Sep 01, 2025 β’
Data Leak
|
#Authentication Tokens
#Authorization Sprawl
#UNC6395
The incident involved the mass-theft of authentication tokens from Salesloft's Drift application, leading to significant data exfiltration from integrated ...
Read Analysis β
Aug 29, 2025 β’
Vulnerability
|
#Large Language Models (LLMs)
#Automated exploitation
#Proof-of-Concept (PoC)
An AI-powered system dubbed "Auto Exploit" leverages Large Language Models (LLMs) to generate proof-of-concept exploits for software vulnerabilities f...
Read Analysis β
Aug 28, 2025 β’
Vulnerability
|
#Nx build system
#Supply Chain Attack
#AI-weaponized
Hackers exploited a vulnerable workflow in the Nx build system to achieve code injection and GITHUB_TOKEN theft, enabling the publication of malicious package v...
Read Analysis β
Aug 27, 2025 β’
Data Leak
|
#OAuth
#UNC6395
#AWS Access Keys
Threat actor UNC6395 exploited compromised OAuth and refresh tokens associated with the Drift AI chat agent, accessible via Salesloft, to gain unauthorized acce...
Read Analysis β
Aug 26, 2025 β’
Vulnerability
|
#Prompt Injection
#Llama Guard
#OWASP Top 10 LLM
Cloudflare's Firewall for AI now integrates Llama Guard to provide real-time unsafe content moderation, detecting and blocking malicious prompts at the net...
Read Analysis β
Aug 26, 2025 β’
Vulnerability
|
#Federated AI
#Data Encryption
#Third-party Subprocessors
The article details Zoom AI Companion's security and privacy posture, highlighting a federated AI approach that integrates Zoom's and third-party mode...
Read Analysis β
Aug 25, 2025 β’
Vulnerability
|
#IDOR
#API Security
#Legacy Application
An Insecure Direct Object Reference (IDOR) vulnerability in an exposed API, combined with an unpatched legacy web application and weak credential hygiene, allow...
Read Analysis β
Aug 21, 2025 β’
Vulnerability
|
#SQL Injection
#Remote Code Execution
#LLM-based AI
AI coding tools like Claude Code integrate security features to identify common vulnerabilities such as SQL injection, XSS, RCE, and SSRF during development wor...
Read Analysis β
Aug 20, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#Perplexity Comet
#Cross-domain access
A critical indirect prompt injection vulnerability was discovered in Perplexity's Comet AI assistant, allowing malicious instructions hidden in webpage con...
Read Analysis β
Aug 20, 2025 β’
Vulnerability
|
#Prompt Injection
#AI Agents
#Supply-chain vulnerabilities
AI agents are highly susceptible to prompt injection attacks, allowing adversaries to manipulate their behavior to execute unauthorized system commands, steal c...
Read Analysis β
Aug 20, 2025 β’
Vulnerability
|
#XSS
#Prompt Injection
#GPT-4
Lenovo's GPT-4-powered chatbot "Lena" was vulnerable to cross-site scripting (XSS) attacks due to improper input and output sanitization, initiat...
Read Analysis β
Aug 18, 2025 β’
Vulnerability
|
#Prompt Injection
#Microsoft Copilot Studio
#Salesforce
Security researchers demonstrated a prompt injection attack against an AI agent built on Microsoft Copilot Studio, enabling it to reveal private knowledge and c...
Read Analysis β
Aug 17, 2025 β’
Vulnerability
|
#Prompt Injection
#Remote Code Execution
#ASCII Smuggling
The article highlights critical security vulnerabilities in LLMs integrated with coding agents, primarily exploiting advanced prompt injection techniques. Attac...
Read Analysis β
Aug 17, 2025 β’
Vulnerability
|
#Prompt Injection
#Remote Code Execution
#ASCII Smuggling
The article highlights novel prompt injection techniques, such as ASCII Smuggling and hidden instructions in public code repositories, designed to be impercepti...
Read Analysis β
Aug 15, 2025 β’
Vulnerability
|
#Prompt Injection
#Data Leakage
#OWASP Top 10 for LLM Applications
The article details critical security risks inherent in Large Language Models (LLMs), prominently featuring prompt injection as an exploit where attackers manip...
Read Analysis β
Aug 12, 2025 β’
Jailbreak
|
#Large Language Models
#Jailbreak
#InfoFlood
Researchers developed novel jailbreak methods, including "InfoFlood" and "JAMBench," to expose critical vulnerabilities in Large Language Mo...
Read Analysis β
Aug 11, 2025 β’
Vulnerability
|
#AI Agents
#Prompt Injection
#Data Exfiltration
Zenity Labs research details how widely deployed AI agents are highly susceptible to "hijacking attacks" via methods such as email-based prompt inject...
Read Analysis β
Aug 09, 2025 β’
Vulnerability
|
#GPT-5 Jailbreak
#Prompt Injection
#Zero-Click Attack
Cybersecurity researchers have uncovered a jailbreak technique, combining Echo Chamber and narrative-driven steering, to bypass GPT-5's ethical guardrails ...
Read Analysis β
Aug 07, 2025 β’
Vulnerability
|
#Generative AI
#OWASP Top 10
#CWE-80
Veracode's 2025 GenAI Code Security Report reveals that code generated by Large Language Models (LLMs) contains security vulnerabilities in 45% of cases, p...
Read Analysis β
Aug 06, 2025 β’
Vulnerability
|
#CVE-2025-49596
#Remote Code Execution
#Malicious OAuth Proxying
The article details critical security vulnerabilities within Model Context Protocol (MCP) deployments, including a remote code execution exploit (CVE-2025-49596...
Read Analysis β
Aug 06, 2025 β’
Jailbreak
|
#Large Language Model
#Prompt Injection
#Data Exfiltration
Enterprise AI assistants have been identified as vulnerable to abuse, potentially enabling unauthorized data theft. This exploitation pathway also allows for th...
Read Analysis β
Aug 05, 2025 β’
Vulnerability
|
#CVE-2025-54136
#RCE
#MCPoison
CVE-2025-54136, codenamed MCPoison, is a high-severity vulnerability in the Cursor AI code editor that allows for remote code execution (RCE). It exploits how t...
Read Analysis β
Aug 04, 2025 β’
Vulnerability
|
#Big Sleep
#LLM
#Automated Vulnerability Discovery
Google's LLM-based vulnerability researcher, "Big Sleep," developed by DeepMind and Project Zero, has autonomously identified 20 security flaws a...
Read Analysis β
Aug 04, 2025 β’
Vulnerability
|
#OWASP LLM05:2025
#Package Hallucination
#Adversarial Prompts
Trend Micro research reveals that while Large Language Models (LLMs) can serve as automated security judges, they are susceptible to adversarial prompts and fai...
Read Analysis β
Aug 04, 2025 β’
Vulnerability
|
#AI Model Security
#Access Controls
#Shadow AI
IBM's 2025 Cost of a Data Breach Report highlights significant security gaps in AI adoption, with 13% of surveyed organizations experiencing breaches of AI...
Read Analysis β
Jul 31, 2025 β’
Data Leak
|
#Shadow AI
#Deepfake attacks
#PII
The healthcare sector faces an average data breach cost of $7.42 million, driven by the compromise of patient personal identification information (PII). A notab...
Read Analysis β
Jul 31, 2025 β’
Vulnerability
|
#Prompt Injection
#Adversarial ML
#LLM Jailbreak
The article highlights the critical need for AI security tools to combat escalating threats like adversarial inputs, prompt injection, and LLM jailbreaks. These...
Read Analysis β
Jul 30, 2025 β’
Vulnerability
|
#Shadow AI
#AI Access Controls
#AI Models
IBM's 2025 Cost of a Data Breach Report reveals that rapid AI adoption is creating significant security gaps, leading to 13% of organizations experiencing ...
Read Analysis β
Jul 30, 2025 β’
Data Leak
|
#AI access controls
#Shadow AI
#AI governance
Rapid AI adoption is creating significant security debt due to neglected foundational cybersecurity, specifically a lack of proper AI access controls and govern...
Read Analysis β
Jul 30, 2025 β’
Vulnerability
|
#Shadow AI
#Supply-chain intrusion
#AI access controls
Unmonitored or unsecured "shadow AI" tools are significantly increasing the cost and impact of data breaches, often stemming from supply-chain intrusi...
Read Analysis β
Jul 30, 2025 β’
Data Leak
|
#AI Oversight Gap
#Shadow AI
#Access Controls
Rapid AI adoption is creating a significant "AI oversight gap," with 97% of organizations experiencing AI-related incidents lacking proper access cont...
Read Analysis β
Jul 30, 2025 β’
Data Leak
|
#AI Access Controls
#Shadow AI
#AI Governance
IBM's report reveals that 13% of organizations experienced breaches of AI models or applications, primarily due to a critical lack of proper AI access cont...
Read Analysis β
Jul 29, 2025 β’
Vulnerability
|
#Use-after-free
#Chroma DB
#NVIDIA Triton Inference Server
The Trend Micro report highlights multiple zero-day vulnerabilities discovered at Pwn2Own Berlin, targeting critical AI infrastructure components such as Chroma...
Read Analysis β
Jul 29, 2025 β’
Vulnerability
|
#Prompt Injection
#AWS-2025-015
#Software Supply Chain Attack
The Amazon Q Developer Extension for Visual Studio Code (version 1.84.0) was compromised via a software supply chain attack, embedding a prompt injection that b...
Read Analysis β
Jul 29, 2025 β’
Vulnerability
|
#Base44
#Unauthorized Access
#Authentication Bypass
A critical vulnerability in the AI vibe coding platform Base44 allowed unauthorized access to private applications by exploiting unauthenticated registration an...
Read Analysis β
Jul 29, 2025 β’
Vulnerability
|
#Pwn2Own
#Zero-Day
#AI Security
At Pwn2Own Berlin, several zero-day vulnerabilities were discovered targeting critical AI infrastructure components, including Chroma DB, NVIDIA Triton Inferenc...
Read Analysis β
Jul 24, 2025 β’
Vulnerability
|
#Supply Chain Attack
#Prompt Injection
#Amazon Q
A hacker injected destructive system commands into Amazon's Visual Studio Code extension for Amazon Q via a compromised GitHub repository, distributing it ...
Read Analysis β
Jul 16, 2025 β’
Vulnerability
|
#Cloudflare
#SQL Injection
#Web Application Firewall
The provided text indicates a user was blocked from accessing darkreading.com by a security service, likely a Web Application Firewall (WAF) such as Cloudflare,...
Read Analysis β
Jul 16, 2025 β’
Vulnerability
|
#CVE-2025-6965
#SQLite
#Integer Overflow
Google's AI agent, Big Sleep, discovered CVE-2025-6965, a critical memory corruption vulnerability in SQLite affecting versions prior to 3.50.2. This integ...
Read Analysis β
Jul 15, 2025 β’
Vulnerability
|
#CVE-2025-32711
#Prompt Injection
#Zero-Click
EchoLeak (CVE-2025-32711) is a zero-click AI vulnerability in Microsoft 365 Copilot that exploits invisible prompt injection within contextual data. This allows...
Read Analysis β
Jul 11, 2025 β’
Vulnerability
|
#Weak Password
#IDOR
#Paradox.ai
Security researchers gained unauthorized administrative access to Paradox.ai's McHire platform by exploiting a weak, decommissioned test account with "...
Read Analysis β
Jul 11, 2025 β’
Data Leak
|
#Default Passwords
#Insecure Direct Object Reference
#PII
A critical security flaw in McDonald's McHire AI hiring tool leveraged default '123456' administrative credentials combined with an Insecure Dire...
Read Analysis β
Jul 09, 2025 β’
Data Leak
|
#Paradox.ai
#Weak Password
#Data Exposure
The McHire AI hiring bot, developed by Paradox.ai, suffered from basic security misconfigurations, specifically allowing unauthorized access via easily guessabl...
Read Analysis β
Jul 08, 2025 β’
Vulnerability
|
#Cloudflare
#WAF
#SQL Injection
The provided text is a Cloudflare block page indicating that an automated security service prevented access to darkreading.com. The block was triggered by an ac...
Read Analysis β
Jul 08, 2025 β’
Vulnerability
|
#Cloudflare WAF
#Web Application Firewall
#SQL Injection
The provided text is a Cloudflare block page, indicating access to darkreading.com was denied by a security service. This block was triggered by an action inter...
Read Analysis β
Jul 02, 2025 β’
Data Leak
|
#ChatGPT
#Data Leak
#GenAI
The article highlights the significant risk of sensitive enterprise data leaks through the misuse of Large Language Models like ChatGPT. A prominent example is ...
Read Analysis β
Jul 01, 2025 β’
Vulnerability
|
#CVE-2025-49596
#Remote Code Execution
#0.0.0.0 Day
A critical remote code execution (RCE) vulnerability, CVE-2025-49596 (CVSS 9.4), has been identified in Anthropic's Model Context Protocol (MCP) Inspector,...
Read Analysis β
Jun 27, 2025 β’
Vulnerability
|
#EchoLeak
#Microsoft 365 Copilot
#Zero-click AI attack
The "EchoLeak" vulnerability in Microsoft 365 Copilot allows attackers to embed hidden commands within regular emails, triggering the AI agent to acce...
Read Analysis β
Jun 20, 2025 β’
Jailbreak
|
#Prompt Injection
#Jailbreak
#Data Leakage
The article highlights critical security risks in AI and LLM deployments, specifically prompt injection and jailbreak attacks, which enable manipulation for una...
Read Analysis β
Jun 13, 2025 β’
Vulnerability
|
#EchoLeak
#Zero-Click
#LLM Scope Violation
Researchers have uncovered "EchoLeak," a critical zero-click vulnerability in Microsoft 365 Copilot that exploits design flaws inherent to Retrieval A...
Read Analysis β
Jun 12, 2025 β’
Jailbreak
|
#TokenBreak
#Prompt Injection
#Tokenization
The TokenBreak attack exploits specific tokenization strategies (BPE or WordPiece) in text classification models by introducing single-character changes, bypass...
Read Analysis β
Jun 12, 2025 β’
Vulnerability
|
#OWASP Top 10 for LLM Applications
#Prompt Injection
#Data Leakage
The article highlights critical security gaps in Large Language Model (LLM) applications, detailing common vulnerabilities such as prompt injection, sensitive i...
Read Analysis β
Jun 12, 2025 β’
Vulnerability
|
#CVE-2025-32711
#Microsoft 365 Copilot
#LLM Scope Violation
A critical zero-click AI vulnerability, identified as EchoLeak (CVE-2025-32711, CVSS 9.3), allowed for unauthorized data exfiltration from Microsoft 365 Copilot...
Read Analysis β
Jun 12, 2025 β’
Vulnerability
|
#OWASP Top 10 LLM
#Prompt Injection
#Large Language Models
The article analyzes critical security vulnerabilities in Large Language Model (LLM) applications, aligning with the OWASP Top 10 for LLM Applications. It detai...
Read Analysis β
Jun 11, 2025 β’
Vulnerability
|
#CVE-2025-32711
#EchoLeak
#Zero-click attack
A critical zero-click vulnerability, dubbed "EchoLeak" and identified as CVE-2025-32711, was discovered in Microsoft Copilot. This flaw leveraged an &...
Read Analysis β
Jun 11, 2025 β’
Vulnerability
|
#EchoLeak
#Zero-click attack
#LLM scope violation
A critical "EchoLeak" zero-click vulnerability in Microsoft 365 Copilot allowed attackers to remotely exfiltrate sensitive internal data by sending em...
Read Analysis β
Jun 05, 2025 β’
Vulnerability
|
#Prompt Injection
#LLM
#Azure Prompt Shields
Prompt injection attacks are identified as the top threat to generative AI, enabling adversaries to manipulate Large Language Models (LLMs) to bypass safety mea...
Read Analysis β
Jun 02, 2025 β’
Jailbreak
|
#LLM Jailbreak
#Prompt Engineering
#Fuzzy AI
CyberArk Labs' Fuzzy AI framework demonstrates a universal jailbreaking capability against major LLMs, leveraging techniques like "Operation Grandma&q...
Read Analysis β
May 28, 2025 β’
Vulnerability
|
#LLM
#Prompt Injection
#Code Execution
The article outlines key vulnerabilities in AI agents utilizing Large Language Models (LLMs), including the risk of unauthorized code execution, data exfiltrati...
Read Analysis β
May 28, 2025 β’
Vulnerability
|
#LLM Security
#Prompt Injection
#Sandboxing
This article analyzes critical vulnerabilities in AI agents, specifically Large Language Models (LLMs), focusing on risks like unauthorized code execution, data...
Read Analysis β
May 23, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#GitLab Duo
#Source Code Exfiltration
A critical indirect prompt injection vulnerability was discovered in GitLab Duo Chat, an AI-powered coding assistant, allowing attackers to embed hidden instruc...
Read Analysis β
May 14, 2025 β’
Vulnerability
|
#LLM Scanner
#Prompt Injection
#Jailbreak Attacks
Qualys has developed an LLM scanner, integrated into its Web Application Scanner, specifically designed to identify and assess vulnerabilities within AI/ML syst...
Read Analysis β
May 13, 2025 β’
Vulnerability
|
#Indirect Prompt Injection
#Multi-modal AI
#Data Exfiltration
This article details how indirect prompt injection exploits multi-modal AI agents by embedding malicious instructions within innocuous images or documents, lead...
Read Analysis β
May 13, 2025 β’
Data Leak
|
#Indirect Prompt Injection
#Multi-modal AI Agents
#Data Exfiltration
Multi-modal AI agents are susceptible to indirect prompt injection, where hidden instructions in external sources like images or documents can trigger sensitive...
Read Analysis β
May 01, 2025 β’
Vulnerability
|
#Prompt injection
#Remote Code Execution
#AI Agent
AI agentic applications face significant security threats, including prompt injection, tool misuse, and unsecured code interpreters, which can result in informa...
Read Analysis β
Apr 28, 2025 β’
Vulnerability
|
#Generative AI
#Prompt Attacks
#Data Exposure
The article highlights significant security vulnerabilities associated with Generative AI (GenAI) applications, including inadvertent sensitive data exposure an...
Read Analysis β
Apr 22, 2025 β’
Data Leak
|
#DeepSeek
#Data Breach
#Cloudflare WAF
The DeepSeek breach reportedly resulted in sensitive data being exposed to the dark web, indicating a significant impact on data confidentiality. The full artic...
Read Analysis β
Apr 22, 2025 β’
Vulnerability
|
#Cloudflare
#SQL injection
#Malformed data
The provided article text describes a Cloudflare security block, preventing access to darkreading.com due to suspected malicious activity. The block specificall...
Read Analysis β