Microsoft finds security flaw in AI chatbots that could expose conversation topics

Tech Xplore — AI & MLMonday, November 10, 2025 at 6:01:09 PM
Microsoft has identified a significant security flaw affecting AI chatbots, including ChatGPT and Google Gemini, that could compromise the privacy of user conversations. The vulnerability, named 'Whisper Leak,' affected nearly all of the large language models tested. According to Microsoft's report, it is a side channel: an observer on the network can infer the topic of a conversation from the size and timing of encrypted response packets, without decrypting the traffic itself. The discovery raises serious concerns about the confidentiality of interactions with AI assistants, suggesting that users may not have the level of privacy they assume. As reliance on these technologies grows, addressing such flaws is crucial to maintaining user trust and ensuring secure communication.
— via World Pulse Now AI Editorial System
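To make the side-channel idea concrete, here is a minimal, hypothetical sketch (not Microsoft's actual method, and not real traffic data): if responses on different topics produce characteristically different packet-size traces, even a crude nearest-profile classifier can guess the topic from metadata alone. All names, profiles, and numbers below are invented for illustration.

```python
# Illustrative sketch of traffic-analysis topic inference.
# Assumption: an eavesdropper sees only encrypted packet sizes, not content.
from statistics import mean

def feature(sizes):
    # Summarize a response's packet-size trace as (mean size, packet count).
    return (mean(sizes), len(sizes))

def nearest_topic(trace, profiles):
    # Pick the topic whose stored profile is closest to the trace's features.
    m, n = feature(trace)
    return min(profiles,
               key=lambda t: (profiles[t][0] - m) ** 2 + (profiles[t][1] - n) ** 2)

# Hypothetical profiles, built beforehand by observing known conversations:
profiles = {
    "medical": (980.0, 42),   # long, many-packet streamed answers
    "smalltalk": (310.0, 9),  # short replies
}

# An intercepted trace of 42 packets averaging ~983 bytes matches "medical".
guess = nearest_topic([950, 1010, 990] * 14, profiles)
```

The point of the sketch is that nothing here requires breaking encryption: the classifier uses only metadata that TLS does not hide, which is why padding or batching streamed tokens is the usual class of mitigation.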

