How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns

A recent study by Anthropic reveals that as few as 250 malicious documents inserted into a model's training data are enough to poison a large language model, raising significant concerns about the security of AI systems. The finding underscores how vulnerable models are to targeted data-poisoning attacks, with far-reaching implications for the reliability and safety of AI applications across sectors.
— Curated by the World Pulse Now AI Editorial System