Can’t tech a joke: AI does not understand puns, study finds

The Guardian — Artificial IntelligenceMonday, November 24, 2025 at 7:59:34 AM
  • Researchers from universities in the UK and Italy have found that large language models (LLMs) struggle to understand puns, highlighting their limitations in grasping humor, empathy, and cultural nuance. The study suggests that AI remains poor at comprehending clever wordplay, offering some reassurance to comedians and writers who rely on such skills.
  • The findings underscore the ongoing challenges faced by AI technologies in replicating complex human cognitive functions, particularly in creative fields where nuanced understanding is crucial. This limitation may impact the development and acceptance of AI in areas traditionally dominated by human creativity.
  • The study reflects broader concerns regarding the reliability of AI systems, as other research indicates that LLMs often fail to generate outputs that align with desired probability distributions and can struggle with logical reasoning. These issues raise questions about the future role of AI in creative industries and the potential implications for professionals in those fields.
— via World Pulse Now AI Editorial System


Continue Reading
UK PM Keir Starmer says X indicated to government officials it was acting to comply with UK laws by restricting the generation of non-consensual sexual images (Financial Times)
Neutral · Artificial Intelligence
UK Prime Minister Keir Starmer announced that the social media platform X has indicated to government officials that it is taking steps to comply with UK laws by restricting the generation of non-consensual sexual images. This response follows significant public backlash regarding the platform's AI tool, Grok, which has faced criticism for its controversial outputs.
AI and high-throughput testing reveal stability limits in organic redox flow batteries
Positive · Artificial Intelligence
Recent advancements in artificial intelligence (AI) and high-throughput testing have revealed the stability limits of organic redox flow batteries, demonstrating how these methods can accelerate materials research.
X ‘acting to comply with UK law’ after outcry over sexualised images
Negative · Artificial Intelligence
Following significant public backlash, Elon Musk's social media platform X has communicated to the UK government that it is taking steps to comply with local laws regarding the use of its AI tool, Grok, which has been criticized for generating non-consensual sexualized images of women and children. Recent polling indicates that 58% of Britons believe X should be banned if it does not address these issues effectively.
AI’s Hacking Skills Are Approaching an ‘Inflection Point’
Neutral · Artificial Intelligence
AI models are increasingly proficient at identifying software vulnerabilities, prompting experts to suggest that the tech industry must reconsider its software development practices. This advancement indicates a significant shift in the capabilities of AI technologies, particularly in cybersecurity.
UK scraps digital ID requirement for workers
Neutral · Artificial Intelligence
The UK government has announced the removal of the digital ID requirement for workers, a decision that reflects ongoing debates about privacy and technology in the workplace. This change is part of a broader reassessment of digital identification systems in the country.
Use of AI to harm women has only just begun, experts warn
Negative · Artificial Intelligence
Experts warn that the use of AI to create harmful sexualized imagery, particularly targeting women and children, is just beginning, as evidenced by the controversial Grok AI chatbot developed by Elon Musk's xAI. Despite recent attempts to implement safeguards, users continue to exploit the tool for generating explicit content.
Explaining Generalization of AI-Generated Text Detectors Through Linguistic Analysis
Neutral · Artificial Intelligence
A recent study published on arXiv investigates the generalization capabilities of AI-generated text detectors, revealing that while these detectors perform well on in-domain benchmarks, they often fail to generalize across various generation conditions, such as unseen prompts and different model families. The research employs a comprehensive benchmark involving multiple prompting strategies and large language models to analyze performance variance through linguistic features.
Calibration Is Not Enough: Evaluating Confidence Estimation Under Language Variations
Neutral · Artificial Intelligence
A recent study titled 'Calibration Is Not Enough: Evaluating Confidence Estimation Under Language Variations' highlights the limitations of current confidence estimation methods for large language models (LLMs), emphasizing the need for evaluations that account for language variations and semantic differences. The research proposes a new framework that assesses confidence quality based on robustness, stability, and sensitivity to variations in prompts and answers.