Mercor Hit by Supply Chain Attack via LiteLLM Breach - The Tech Buzz
An extortion group executed a supply chain attack by compromising the open-source LiteLLM project, which serves as a widely-used AI model API proxy. This breach...
Read Analysis →An extortion group executed a supply chain attack by compromising the open-source LiteLLM project, which serves as a widely-used AI model API proxy. This breach...
Read Analysis →Hackers utilized AI jailbreaking techniques and sophisticated prompt engineering on Generative AI models like Claude and ChatGPT to exploit vulnerabilities with...
Read Analysis →An attacker reportedly jailbroke the Claude AI model to generate malicious exploit code. This illicit activity subsequently led to the theft and exfiltration of...
Read Analysis →An incident report details hackers successfully jailbreaking the Claude AI model, leveraging this compromise to generate exploit code. This exploit ultimately f...
Read Analysis →A reported incident describes a successful jailbreak of the Claude AI model, enabling it to bypass safety mechanisms. This compromise allowed the AI to generate...
Read Analysis →Attackers successfully exploited Anthropic's Claude AI through prompt manipulation, effectively "jailbreaking" its safety guardrails to generate ...
Read Analysis →A hacker successfully jailbroke Anthropic's Claude chatbot, bypassing its guardrails to generate vulnerability reports and exploitation scripts for attacks...
Read Analysis →Anthropic's Claude Opus 4.6 exhibits prompt injection success rates up to 78.6% in less constrained environments, quantitatively validating a previously th...
Read Analysis →Security researchers have identified a vulnerability where prompt injection attacks in LLM-powered applications can weaponize URL preview features to silently e...
Read Analysis →Radware introduced its LLM Firewall and Agentic AI Protection Solution to secure generative AI and AI agents against emerging threats. These solutions aim to mi...
Read Analysis →The article highlights advanced threats to AI agents, including "Shadow Escape," a zero-click exploit targeting Model Context Protocol (MCP) based sys...
Read Analysis →Personal AI agents like OpenClaw are critically vulnerable to malicious "skills" and prompt injection attacks, enabling unauthorized command execution...
Read Analysis →Personal AI agents like OpenClaw are severely vulnerable to malicious third-party "skills" that can leverage their high-level privileges for harmful a...
Read Analysis →AI agents created using Microsoft Copilot Studio are vulnerable to prompt injection, allowing attackers to bypass internal security mandates. This exploit facil...
Read Analysis →Anthropic's Threat Intelligence team disrupted the first known AI-orchestrated cyber espionage campaign, where a state-sponsored Chinese threat actor utili...
Read Analysis →Chinese state-sponsored actors exploited Anthropic's Claude AI by jailbreaking its safeguards, enabling the autonomous execution of cyberattacks with minim...
Read Analysis →Cybersecurity researchers have disclosed seven new vulnerabilities in OpenAI's GPT-4o and GPT-5 models, enabling indirect prompt injection attacks. These e...
Read Analysis →A novel indirect prompt injection attack allows threat actors to compromise Anthropic's Claude AI Code Interpreter, leveraging its network features to exfi...
Read Analysis →A vulnerability in Anthropic's Claude AI allows attackers to leverage indirect prompt injection against its code interpreter feature. This exploit enables ...
Read Analysis →A significant 77% of employees are reportedly leaking sensitive corporate data by pasting it into generative AI tools like ChatGPT, primarily through personal, ...
Read Analysis →Zenity Labs research details how widely deployed AI agents are highly susceptible to "hijacking attacks" via methods such as email-based prompt inject...
Read Analysis →Enterprise AI assistants have been identified as vulnerable to abuse, potentially enabling unauthorized data theft. This exploitation pathway also allows for th...
Read Analysis →This article details how indirect prompt injection exploits multi-modal AI agents by embedding malicious instructions within innocuous images or documents, lead...
Read Analysis →Multi-modal AI agents are susceptible to indirect prompt injection, where hidden instructions in external sources like images or documents can trigger sensitive...
Read Analysis →