Computational frame analysis revisited: On LLMs for studying news coverage
Neutral | Artificial Intelligence
- A recent study has revisited the effectiveness of large language models (LLMs) such as GPT and Claude for analyzing media frames, focusing on news coverage of the 2022 US Mpox epidemic. The research systematically evaluated these generative models against traditional manual coding, finding that human coders consistently outperformed LLMs on frame analysis tasks (a simplified sketch of this kind of human-LLM agreement comparison appears after this summary).
- This development highlights the limitations of current LLMs in accurately identifying media frames, suggesting that while they show potential for certain applications, human validation remains crucial for reliable outcomes in content analysis.
- The findings contribute to ongoing discussions about the reliability of AI in content analysis, particularly given challenges such as training data that may not match the task or events under study (off-policy data) and the tendency of LLMs to generate plausible yet incorrect responses. These issues underscore the need for continued research into improving AI models for nuanced tasks like frame analysis.
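
As a rough illustration only, not drawn from the study itself, the evaluation described above can be thought of as measuring agreement between manual coders and an LLM on frame labels for the same articles. The sketch below uses invented frame categories and Cohen's kappa, one common chance-corrected agreement measure; the study's actual codebook, sample, and metrics may differ.

```python
# Hedged sketch: comparing hypothetical LLM-assigned frame labels against
# manual coder labels for the same articles. The labels are invented for
# illustration; real studies use larger samples and predefined codebooks.
from sklearn.metrics import cohen_kappa_score

manual_labels = ["health", "politics", "health", "economy", "health",
                 "politics", "health", "health", "economy", "politics"]
llm_labels    = ["health", "health",   "health", "economy", "politics",
                 "politics", "health", "economy", "economy", "politics"]

# Cohen's kappa corrects raw agreement for chance; values near 1.0 indicate
# strong agreement, values near 0 indicate roughly chance-level agreement.
kappa = cohen_kappa_score(manual_labels, llm_labels)
print(f"Cohen's kappa between manual coders and LLM: {kappa:.2f}")
```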
— via World Pulse Now AI Editorial System

