Large Language Models Will Never Be Intelligent, Expert Says

Futurism — AI · Friday, November 28, 2025 at 1:15:00 PM
  • An expert has stated that Large Language Models (LLMs) will never achieve true intelligence, emphasizing that they function merely as tools that replicate language's communicative aspects. This assertion raises questions about the capabilities and limitations of LLMs in understanding and generating human-like knowledge.
  • The implications of this viewpoint are significant for the ongoing development and deployment of LLMs, as it challenges the perception that these models can possess human-like intelligence or understanding, potentially affecting their application in various sectors.
  • This discussion aligns with broader debates regarding the nature of artificial intelligence, particularly the distinction between human-like cognition and the probabilistic knowledge encoded in LLMs. The ongoing scrutiny of LLMs' decision-making processes and their vulnerabilities highlights the need for critical evaluation of their role in technology and society.
— via World Pulse Now AI Editorial System

Continue Reading
South Korea’s Experiment in AI Textbooks Ends in Disaster
Negative · Artificial Intelligence
South Korea's initiative to integrate AI-generated textbooks into its education system has ended in failure, with reports indicating that the materials were subpar and hastily assembled. The experiment aimed to enhance learning through technology but has instead raised concerns about the efficacy of AI in educational contexts.
Nvidia CEO Says Instead of Taking Your Job, AI Will Force You to Work Even Harder
Neutral · Artificial Intelligence
Nvidia CEO Jensen Huang stated that artificial intelligence (AI) will not take jobs but will instead require workers to adapt and work harder, emphasizing that everyone's roles will evolve. This perspective highlights a shift in the narrative surrounding AI's impact on employment.
Journalist Caught Publishing Fake Articles Generated by AI
Negative · Artificial Intelligence
A journalist has been caught publishing fake articles generated by artificial intelligence, raising serious ethical concerns about the integrity of journalism in the age of AI. The journalist's claims were disputed, with a source stating, 'I did not speak with this reporter and did not give this quote.' This incident highlights the potential for misinformation in media fueled by AI technologies.
Mixture of Attention Spans: Optimizing LLM Inference Efficiency with Heterogeneous Sliding-Window Lengths
Positive · Artificial Intelligence
A new approach called Mixture of Attention Spans (MoA) has been proposed to enhance the efficiency of Large Language Models (LLMs) by utilizing heterogeneous sliding-window lengths for attention mechanisms. This method addresses the limitations of traditional uniform window lengths, which fail to capture the diverse attention patterns across different heads and layers in LLMs.
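The core idea of heterogeneous windows can be illustrated with a minimal NumPy sketch: each attention head is assigned its own causal sliding-window length, so different heads see different amounts of context. This is a toy illustration of the general mechanism, not the MoA authors' implementation or their method for choosing the window lengths.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask: position i may attend only to positions [i-window+1, i]."""
    idx = np.arange(seq_len)
    rel = idx[:, None] - idx[None, :]      # distance from query to key
    return (rel >= 0) & (rel < window)     # True = attention allowed

def heterogeneous_attention(scores: np.ndarray, windows: list[int]) -> np.ndarray:
    """Apply a different sliding-window length to each head.

    scores:  (num_heads, seq_len, seq_len) raw attention logits.
    windows: one window length per head (the per-head heterogeneity).
    Returns softmax-normalized attention probabilities per head.
    """
    out = np.empty_like(scores)
    for h, w in enumerate(windows):
        mask = sliding_window_mask(scores.shape[1], w)
        masked = np.where(mask, scores[h], -np.inf)
        masked -= masked.max(axis=-1, keepdims=True)   # numerically stable softmax
        e = np.exp(masked)
        out[h] = e / e.sum(axis=-1, keepdims=True)
    return out
```

A head with a short window only models local patterns, while a head with a long window can track distant dependencies; inference cost then scales with each head's window rather than the full sequence length.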
Geometry of Decision Making in Language Models
Neutral · Artificial Intelligence
A recent study on the geometry of decision-making in Large Language Models (LLMs) reveals insights into their internal processes, particularly in multiple-choice question answering (MCQA) tasks. The research analyzed 28 transformer models, uncovering a consistent pattern in the intrinsic dimension of hidden representations across different layers, indicating how LLMs project linguistic inputs onto low-dimensional manifolds.
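Intrinsic dimension, the quantity the study tracks across layers, can be estimated from hidden representations alone. The sketch below uses the TwoNN estimator (a standard choice for this kind of analysis, offered here as one common option rather than the method this particular paper used): the ratio of each point's second- to first-nearest-neighbor distance follows a Pareto law whose shape parameter is the intrinsic dimension.

```python
import numpy as np

def twonn_intrinsic_dimension(X: np.ndarray) -> float:
    """TwoNN estimator: mu = r2/r1 is Pareto(d)-distributed,
    giving the maximum-likelihood estimate d = N / sum(log mu)."""
    sq = (X ** 2).sum(axis=1)
    # squared pairwise distances via the Gram-matrix identity, clipped at 0
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    np.fill_diagonal(d2, np.inf)            # exclude self-distances
    r = np.sqrt(np.sort(d2, axis=1)[:, :2]) # r1, r2 for every point
    mu = r[:, 1] / r[:, 0]
    return len(mu) / np.log(mu).sum()
```

Run on a layer's hidden states of shape `(num_tokens, hidden_size)`, this returns a single number that is typically far below `hidden_size`, which is what "projecting onto low-dimensional manifolds" means concretely.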
Multi-Reward GRPO for Stable and Prosodic Single-Codebook TTS LLMs at Scale
Positive · Artificial Intelligence
Recent advancements in Large Language Models (LLMs) have led to the development of a multi-reward Group Relative Policy Optimization (GRPO) framework aimed at enhancing the stability and prosody of single-codebook text-to-speech (TTS) systems. This framework integrates various rule-based rewards to optimize token generation policies, addressing issues such as unstable prosody and speaker drift that have plagued existing models.
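The "group relative" part of GRPO has a simple core: several candidate outputs are sampled for the same input, each is scored by a weighted sum of rule-based rewards, and advantages are computed relative to the group's own mean and spread rather than a learned value function. The sketch below shows only that reward-combination and advantage step, under the assumption of generic per-rule reward lists; the specific rules (e.g. prosody or speaker-consistency checks) are the paper's and are not reproduced here.

```python
import numpy as np

def grpo_advantages(rewards_per_rule, weights):
    """Combine rule-based rewards into one scalar per sampled candidate,
    then normalize within the group (mean 0, unit std) to get advantages.

    rewards_per_rule: list of per-rule reward lists, one value per candidate.
    weights:          one mixing weight per rule.
    """
    r = sum(w * np.asarray(rr, dtype=float)
            for w, rr in zip(weights, rewards_per_rule))
    return (r - r.mean()) / (r.std() + 1e-8)   # group-relative advantage
```

Candidates scoring above the group average get positive advantages and their token-generation probabilities are reinforced; below-average candidates are suppressed, without training a separate critic.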
Aligning LLMs with Biomedical Knowledge using Balanced Fine-Tuning
Positive · Artificial Intelligence
Recent advancements in aligning Large Language Models (LLMs) with specialized biomedical knowledge have led to the introduction of Balanced Fine-Tuning (BFT), a method designed to enhance the models' ability to learn complex reasoning from sparse data without relying on external reward signals. This approach addresses the limitations of traditional Supervised Fine-Tuning and Reinforcement Learning in the biomedical domain.
Minimizing Hyperbolic Embedding Distortion with LLM-Guided Hierarchy Restructuring
Positive · Artificial Intelligence
A recent study has explored the potential of Large Language Models (LLMs) to assist in restructuring hierarchical knowledge to optimize hyperbolic embeddings. This research highlights the importance of a high branching factor and single inheritance in creating effective hyperbolic representations, which are crucial for applications in machine learning that rely on hierarchical data structures.
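"Embedding distortion" here has a concrete meaning: how far distances in the hyperbolic embedding deviate from the distances in the original hierarchy. The sketch below gives the standard Poincaré-ball geodesic distance and a simple multiplicative-distortion measure; it illustrates the quantity being minimized, not the study's LLM-guided restructuring procedure, and the `scale` parameter is an assumed free scaling factor.

```python
import numpy as np

def poincare_dist(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points inside the unit Poincare ball."""
    delta = np.dot(u - v, u - v)
    denom = (1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))
    return float(np.arccosh(1.0 + 2.0 * delta / denom))

def avg_distortion(pairs, tree_dist, emb, scale=1.0):
    """Mean multiplicative distortion |d_emb / (scale * d_tree) - 1|
    over the given node pairs; 0 means a perfectly faithful embedding."""
    errs = [abs(poincare_dist(emb[i], emb[j]) / (scale * tree_dist[(i, j)]) - 1.0)
            for (i, j) in pairs]
    return float(np.mean(errs))
```

Hierarchies with high branching factor and single inheritance are exactly those a tree-like hyperbolic space can embed with distortion near zero, which is why restructuring the hierarchy first can pay off.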