Microsoft finds security flaw in AI chatbots that could expose conversation topics
Negative | Artificial Intelligence

Microsoft has identified a significant security flaw affecting AI chatbots, including ChatGPT and Google Gemini, that could compromise the privacy of user conversations. The vulnerability, named 'Whisper Leak,' affected nearly all of the large language models tested and could allow an observer to infer the topic of a conversation even though its content is encrypted. The discovery raises serious concerns about the confidentiality of interactions with AI assistants and suggests that users may not have the level of privacy they assume. As reliance on these technologies grows, addressing such flaws is crucial to maintaining user trust and ensuring secure communication.
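To illustrate the general idea behind this class of flaw: a side-channel observer never decrypts anything, but the sizes of encrypted packets in a streamed chatbot reply can still correlate with what is being discussed. The sketch below is a hypothetical toy model, not Microsoft's actual analysis; the packet sizes, topics, and the simple threshold classifier are all invented for illustration.

```python
import statistics

# Toy model of a traffic-analysis side channel: the "attacker" sees only
# the sizes (in bytes) of encrypted packets in streamed responses, never
# the plaintext. Traces and topics below are invented example data.
traces = {
    "topic_a": [[120, 130, 125, 140, 135], [118, 128, 132, 138, 130]],
    "topic_b": [[60, 70, 65, 75, 68], [62, 68, 72, 66, 70]],
}

def mean_size(trace):
    """Feature: average encrypted-packet size for one streamed response."""
    return statistics.mean(trace)

# Per-topic centroid of the feature, and a midpoint threshold between them.
centroids = {topic: statistics.mean(mean_size(t) for t in ts)
             for topic, ts in traces.items()}
threshold = (centroids["topic_a"] + centroids["topic_b"]) / 2

def guess_topic(trace):
    """Guess the topic from packet sizes alone -- no decryption involved."""
    return "topic_a" if mean_size(trace) >= threshold else "topic_b"

print(guess_topic([122, 131, 127, 139]))  # classified purely by size pattern
```

Real attacks of this kind use far richer features (packet timing, token-by-token size sequences) and trained classifiers, but the principle is the same: encryption hides content, not traffic shape.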
— via World Pulse Now AI Editorial System




