
EchoLeak: The First Zero-Click AI Attack That Weaponized Microsoft 365 Copilot
Understanding EchoLeak: A Technical Analysis of Microsoft Copilot’s Vulnerability – How Hackers Weaponized Microsoft’s AI Assistant for Silent Data Exfiltration
EchoLeak: The First Zero-Click AI Attack That Weaponized Microsoft 365 Copilot
In the ever-evolving cybersecurity arena, artificial intelligence has introduced new attack vectors that traditional security measures simply weren’t designed to handle. The recent discovery of “EchoLeak,” a critical zero-click vulnerability in Microsoft 365 Copilot, represents a watershed moment in AI security that every business leader needs to understand.
The EchoLeak Vulnerability Explained
EchoLeak, formally designated as CVE-2025-32711, is a critical information disclosure vulnerability that affects Microsoft 365 Copilot’s core architecture. What makes this vulnerability particularly dangerous is its zero-click nature – attackers can exfiltrate sensitive organizational data without requiring any user interaction whatsoever.
The vulnerability exploits what security researchers have termed an “LLM Scope Violation,” where external, untrusted input can manipulate an AI model to access and leak confidential data that should remain protected. This represents the first documented zero-click attack on an AI agent, marking a significant milestone in understanding how threat actors can exploit the internal workings of artificial intelligence systems.
Severity Assessment: Critical Risk Level
Microsoft assigned this vulnerability a CVSS score of 9.3, classifying it as critical. The severity stems from several factors that make EchoLeak particularly concerning for enterprise environments:
The attack requires zero user interaction, making it completely invisible to victims. Traditional security awareness training becomes ineffective when users don’t need to click, download, or interact with anything malicious. The vulnerability affects Copilot’s Retrieval-Augmented Generation system, which processes organizational data including emails, OneDrive documents, SharePoint content, and Teams conversations. Any data within Copilot’s access scope becomes vulnerable to exfiltration.
The automated nature of the attack means threat actors can scale their operations across multiple targets simultaneously, turning what was once a targeted operation into a potential mass data breach scenario.
How EchoLeak Attacks Work
The attack methodology is deceptively simple yet devastatingly effective, requiring no user interaction to steal sensitive organizational data.
- Attackers send what appears to be a legitimate business email containing a carefully crafted prompt injection designed to bypass Microsoft’s XPIA classifier protections
- The malicious prompt is disguised as normal human language, making it appear like typical business correspondence rather than AI instructions
- When a user later interacts with Copilot on a related topic, the AI’s Retrieval-Augmented Generation engine pulls the malicious email into its context due to apparent relevance
- Once the hidden instructions reach the language model, they trick it into accessing sensitive internal data and embedding that information into crafted markdown images or links
- The exfiltration occurs when these specially formatted images cause the user’s browser to automatically request the image from an external server controlled by the attacker
The sensitive data is transmitted in the URL parameters, completing the silent theft without any visible indication to the user.
(EchoLeak Attack Flow – Source: Aim Labs)
The Threat Actors Behind the Discovery
EchoLeak was discovered by security researchers at Aim Security, an AI security startup focused on helping enterprises safely adopt artificial intelligence technologies. The research team, led by co-founder and CTO Adir Gruss, spent months reverse engineering Microsoft 365 Copilot to uncover potential vulnerabilities.
Aim Security reported their findings to Microsoft in January 2025 through responsible disclosure channels. The discovery process involved extensive analysis of how AI agents process untrusted inputs and whether they harbor fundamental security flaws similar to those that plagued software systems in previous decades.
The researchers characterized their work as part of a broader mission to improve AI security across the industry, not just to find flaws in Microsoft’s products. Their findings have implications for any AI system that processes external inputs alongside sensitive internal data.
Organizations at Risk
The scope of potential victims for EchoLeak attacks was extensive, with any organization using Microsoft 365 Copilot with its default configuration potentially vulnerable until Microsoft implemented server-side fixes in May 2025.
- Small and medium-sized businesses that rely heavily on Microsoft’s productivity suite for daily operations, often lacking dedicated cybersecurity teams to implement advanced protective measures
- Large enterprises and Fortune 500 companies that have adopted Copilot for enhanced productivity across their workforce, where the volume and sensitivity of accessible information creates significant risk
- Government agencies and contractors that handle classified or sensitive information through Microsoft 365 environments, making them prime targets for espionage operations
- Healthcare organizations that process patient data through integrated AI tools, potentially violating HIPAA compliance requirements if sensitive medical information is exfiltrated
- Financial institutions where customer data, trading information, and regulatory documents could be accessed and stolen through compromised AI interactions
The zero-click nature of the attack made it particularly suitable for threat actors seeking to conduct silent, large-scale data theft operations.
Remediation and Microsoft’s Response
Microsoft addressed the EchoLeak vulnerability through multiple layers of protection implemented server-side, requiring no action from customers. The company strengthened its XPIA protections to better detect and neutralize hidden prompt injections and has implemented additional defense-in-depth measures to enhance overall security posture.
The vulnerability has been completely resolved at the platform level, and Microsoft confirmed that no customers were impacted during the vulnerability period. However, the extended timeline of approximately five months between discovery and full resolution highlights the complexity of securing AI systems against novel attack vectors.
Organizations seeking additional protection can implement Data Loss Prevention tags to block processing of external emails, though this may reduce Copilot’s functionality. Microsoft has also introduced features in the M365 Roadmap that restrict Copilot from processing emails labeled with sensitivity tags.
For broader protection against similar vulnerabilities, security experts recommend implementing comprehensive AI governance policies, regular security audits of AI integrations, user education about prompt-based threats, and continuous monitoring of AI interactions for anomalous behavior.
How CinchOps Can Help Secure Your Business
The EchoLeak vulnerability demonstrates that traditional cybersecurity approaches are insufficient for protecting AI-integrated business environments. Organizations need specialized expertise to navigate the complex security challenges posed by artificial intelligence adoption.
- Comprehensive AI security assessments to identify vulnerabilities in your organization’s AI implementations
- Customized AI governance policies that balance productivity with security requirements
- Advanced monitoring solutions to detect prompt injection attempts and anomalous AI behavior
- Ongoing managed IT support to ensure your AI tools remain secure as new threats emerge
- Regular penetration testing of AI-integrated systems to identify potential exploitation paths
CinchOps understands that securing AI systems requires a fundamentally different approach than traditional cybersecurity. CinchOps helps small and medium-sized businesses implement enterprise-grade protections without the complexity and cost typically associated with advanced AI security measures.
Discover More 
Discover more about our enterprise-grade and business protecting cybersecurity services: CinchOps Cybersecurity
Discover related topics:UNK_SneakyStrike: Massive Microsoft Entra ID Account Takeover Campaign
For Additional Information on this topic: Breaking down ‘EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot
FREE CYBERSECURITY ASSESSMENT