News

The vulnerability, dubbed EchoLeak and assigned the identifier CVE-2025-32711, could have allowed hackers to mount an attack without the target user having to do anything. EchoLeak represents the ...
But EchoLeak, as detailed by Fortune, shows that trusting an AI with context is not the same as controlling it. The line between helpful and harmful isn't always drawn in code; it's drawn in ...
The vulnerability, called EchoLeak, allowed attackers to silently steal sensitive data from a user's environment by simply sending them an email. No clicks, downloads, or user actions were needed.
A critical AI vulnerability, 'EchoLeak,' was discovered in Microsoft 365 Copilot by Aim Labs researchers in January 2025. This flaw allowed attackers to exfiltrate sensitive user data through ...
"EchoLeak," as the security flaw is known, is the first known AI security vulnerability that doesn't require users to click a link to be compromised.
"EchoLeak is a reminder that even robust, enterprise-grade AI tools can be leveraged for sophisticated and automated data theft," said Itay Ravia, Head of Aim Labs.
EchoLeak should be viewed as a wake-up call for a society that is embracing AI integration wholeheartedly. In the rush to implement agentic AI, we are not keeping up with the need to secure it.
What's most unsettling about EchoLeak isn't the technical jargon; it's the everyday familiarity of the scenario. A junior employee opens a shared document.