Large Language Models Are Unreliable for Cyber Threat Intelligence
The study, published on November 13, 2025, critically evaluates the effectiveness of Large Language Models (LLMs) on Cyber Threat Intelligence (CTI) tasks. While recent work has suggested that LLMs could help analysts manage the overwhelming volume of cybersecurity data, this evaluation presents evidence that LLMs are inconsistent and overconfident, and that they cannot guarantee adequate performance on real-size reports. The authors tested several learning setups, including zero-shot and few-shot learning, and found that these approaches only partially improved results. This raises serious doubts about the reliability of LLMs in CTI scenarios, where accuracy and well-calibrated confidence are critical and the labeled datasets needed for conventional supervised approaches are often lacking.
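To make the evaluation idea concrete, the sketch below shows what a zero-shot versus few-shot probe of consistency and confidence could look like for a CTI labelling task such as mapping a report excerpt to a MITRE ATT&CK technique. This is a minimal illustration only, not the paper's actual harness: the ask_llm callable, the prompt wording, the answer format, and the example excerpts are all assumptions introduced here.

"""Minimal sketch (hypothetical, not the study's methodology) of probing an
LLM's consistency and self-reported confidence on a CTI labelling task."""

from collections import Counter
from typing import Callable

# Hypothetical prompt templates; the study's real prompts are not reproduced here.
ZERO_SHOT = (
    "You are a CTI analyst. Name the single MITRE ATT&CK technique ID that "
    "best matches this excerpt, then a confidence from 0 to 1.\n"
    "Excerpt: {excerpt}\nAnswer as: <technique_id>, <confidence>"
)

FEW_SHOT = (
    "Excerpt: The malware schedules a task to run at logon.\n"
    "Answer: T1053, 0.9\n"
    "Excerpt: Credentials were dumped from LSASS memory.\n"
    "Answer: T1003, 0.95\n"
    "Excerpt: {excerpt}\nAnswer:"
)

def probe(ask_llm: Callable[[str], str], excerpt: str, prompt: str, n: int = 5):
    """Query the model n times; compare answer agreement to stated confidence."""
    answers, confidences = [], []
    for _ in range(n):
        reply = ask_llm(prompt.format(excerpt=excerpt))
        technique, conf = (part.strip() for part in reply.split(",", 1))
        answers.append(technique)
        confidences.append(float(conf))
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / n             # empirical consistency across repeated queries
    mean_conf = sum(confidences) / n  # model's average self-reported confidence
    # A large gap (mean_conf >> agreement) is one signal of overconfidence.
    return top, agreement, mean_conf

if __name__ == "__main__":
    import random

    # Toy stand-in model: always highly confident, not always consistent.
    def fake_llm(prompt: str) -> str:
        return random.choice(["T1566, 0.97", "T1204, 0.95"])

    print(probe(fake_llm, "Users received a malicious attachment.", ZERO_SHOT))

Run against the toy model, the probe typically reports near-total stated confidence alongside partial agreement across repeated queries, which is exactly the kind of inconsistency-despite-confidence pattern the study flags as problematic for CTI.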
— via World Pulse Now AI Editorial System
