Google Gemini AI Bug Allows Invisible, Malicious Prompts - Dark Reading
A prompt-injection vulnerability in Google Gemini allows attackers to embed invisible, malicious instructions within emails that the AI prioritizes and executes during summarization. This flaw leads to the AI generating fabricated security alerts, facilitating sophisticated phishing and vishing attacks against users.
Source: Original Report ↗