A profile of Meta Chief AI Scientist Yann LeCun, reportedly leaving after being sidelined, who says LLMs are a dead end for AGI and backs world models instead (Meghan Bobrowsky/Wall Street Journal)

Techmeme, Saturday, November 15, 2025 at 6:05:00 AM
Yann LeCun's reported departure from Meta marks a significant shift in the AI landscape, given his role as a pivotal figure in the field. His criticism of large language models (LLMs) as a dead end for AGI feeds into ongoing debates about the path to more capable AI, and it fits a recent pattern of tech leaders striking out on their own, such as former Twitter CEO Parag Agrawal building tools for AI agents. LeCun's focus on world models suggests a potential pivot in research priorities and a broader reevaluation of methodologies across the industry.
— via World Pulse Now AI Editorial System


Recommended Readings
Silenced Biases: The Dark Side LLMs Learned to Refuse
Negative | Artificial Intelligence
Safety-aligned large language models (LLMs) are increasingly used in sensitive applications where fairness is crucial. Evaluating their fairness is complex, often relying on standard question-answer schemes that may misinterpret refusal responses as indicators of fairness. This paper introduces the concept of silenced biases: unfair preferences hidden within the models' latent space and masked by safety alignment. Because existing question-answer evaluations cannot surface these masked preferences, the authors argue for a new approach to assess them effectively.
Fair In-Context Learning via Latent Concept Variables
Positive | Artificial Intelligence
The paper titled 'Fair In-Context Learning via Latent Concept Variables' explores the in-context learning (ICL) capabilities of large language models (LLMs) and their potential biases when applied to tabular data. It emphasizes an optimal demonstration selection method that leverages latent concept variables to enhance task adaptation while promoting fairness. The study introduces data augmentation strategies aimed at minimizing correlations between sensitive variables and predictive outcomes, ultimately striving for equitable predictions.
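As a rough illustration of the correlation-breaking idea described above, the sketch below pairs each tabular demonstration with a label-preserving copy whose sensitive attribute is flipped, so the sensitive column carries no predictive signal in the demonstration set. The column name and pandas-based setup are assumptions; the paper's actual selection and augmentation procedures may differ.

    import pandas as pd

    def balance_sensitive_attribute(demos: pd.DataFrame, sensitive_col: str) -> pd.DataFrame:
        """Return the demos plus copies with the (binary) sensitive value flipped.

        Keeping the label while flipping the sensitive attribute removes any
        correlation between that column and the outcome in the demonstration set.
        """
        values = list(demos[sensitive_col].unique())
        assert len(values) == 2, "this sketch assumes a binary sensitive attribute"
        mapping = {values[0]: values[1], values[1]: values[0]}
        flipped = demos.copy()
        flipped[sensitive_col] = flipped[sensitive_col].map(mapping)
        return pd.concat([demos, flipped], ignore_index=True)

    # Example (hypothetical column name): demos = balance_sensitive_attribute(demos, "gender")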
Sector HQ Weekly Digest - November 17, 2025
Neutral | Artificial Intelligence
The Sector HQ Weekly Digest for November 17, 2025, highlights the latest developments in the AI industry, focusing on the performance of top companies. OpenAI leads with a score of 442385.7 and 343 events, followed by Anthropic and Amazon. The report also notes significant movements, with Sony jumping 277 positions in the rankings, reflecting the dynamic nature of the AI sector.
Multimodal Peer Review Simulation with Actionable To-Do Recommendations for Community-Aware Manuscript Revisions
Positive | Artificial Intelligence
A new interactive web-based system for multimodal peer review simulation has been introduced, aimed at enhancing manuscript revisions prior to submission. This system leverages large language models (LLMs) to integrate textual and visual information, improving the quality of reviews through retrieval-augmented generation (RAG) based on OpenReview data. It converts generated reviews into actionable to-do lists, providing structured guidance for authors and seamlessly integrating with existing academic writing platforms.
Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models
Positive | Artificial Intelligence
The paper titled 'Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models' introduces a method to enhance the efficiency of Mixture-of-Experts (MoE) Large Language Models (LLMs). The authors propose a pre-attention expert prediction technique that improves accuracy and reduces computational overhead by utilizing activations before the attention block. This approach aims to optimize expert prefetching, achieving about a 15% improvement in accuracy over existing methods.
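To make the idea concrete, here is a minimal sketch of predicting the router's likely top experts from the pre-attention hidden state and moving their weights onto the accelerator early. The class and function names are illustrative; the paper's actual predictor architecture and prefetching pipeline are not specified here.

    import torch
    import torch.nn as nn

    class PreAttentionExpertPredictor(nn.Module):
        """Guesses which MoE experts the router will select, using the activation
        available before the attention block, so expert weights can be fetched early."""

        def __init__(self, hidden_dim: int, num_experts: int):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, num_experts)

        def forward(self, pre_attention_hidden: torch.Tensor, top_k: int = 2) -> torch.Tensor:
            logits = self.proj(pre_attention_hidden)      # (batch, seq, num_experts)
            return logits.topk(top_k, dim=-1).indices     # predicted expert ids

    def prefetch_expert_weights(expert_ids: torch.Tensor, expert_weights: dict, device: str) -> None:
        """Start moving the predicted experts' weights to the target device ahead of the MoE layer."""
        for eid in expert_ids.unique().tolist():
            expert_weights[eid] = expert_weights[eid].to(device, non_blocking=True)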
A Multifaceted Analysis of Negative Bias in Large Language Models through the Lens of Parametric Knowledge
Neutral | Artificial Intelligence
A recent study published on arXiv examines the phenomenon of negative bias in large language models (LLMs), which refers to their tendency to generate negative responses in binary decision tasks. The research highlights that previous studies have primarily focused on identifying negative attention heads that contribute to this bias. The authors introduce a new evaluation pipeline that categorizes responses based on the model's parametric knowledge, revealing that the format of prompts significantly influences the responses more than the semantics of the content itself.
Modeling and Predicting Multi-Turn Answer Instability in Large Language Models
Neutral | Artificial Intelligence
The paper titled 'Modeling and Predicting Multi-Turn Answer Instability in Large Language Models' discusses the evaluation of large language models (LLMs) in terms of their robustness during user interactions. The study employs multi-turn follow-up prompts to assess changes in model answers and accuracy dynamics using Markov chains. Results indicate vulnerabilities in LLMs, with a 10% accuracy drop for Gemini 1.5 Flash after a 'Think again' prompt over nine turns, and a 7.5% drop for Claude 3.5 Haiku with a reworded question. The findings suggest that accuracy can be modeled over time.
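A toy version of the Markov-chain view: treat the model's answer as a two-state chain (correct / incorrect) and propagate an accuracy distribution through per-turn transition probabilities. The numbers below are placeholders for illustration, not the paper's measured values.

    import numpy as np

    # State 0 = correct, state 1 = incorrect.
    # transition[i, j] = P(next answer state is j | current state is i)
    # after one follow-up prompt such as "Think again".
    transition = np.array([
        [0.90, 0.10],   # assumed: a correct answer survives the follow-up 90% of the time
        [0.20, 0.80],   # assumed: an incorrect answer gets corrected 20% of the time
    ])

    state = np.array([0.75, 0.25])  # assumed initial accuracy of 75%

    for turn in range(1, 10):
        state = state @ transition                 # propagate one follow-up turn
        print(f"turn {turn}: projected accuracy = {state[0]:.3f}")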
Who Gets the Reward, Who Gets the Blame? Evaluation-Aligned Training Signals for Multi-LLM Agents
Positive | Artificial Intelligence
The article discusses a new theoretical framework for training multi-agent systems using large language models (LLMs). It aims to connect system-level evaluations with agent-level learning by integrating cooperative game-theoretic attribution and process reward modeling. This approach produces local, signed, and credit-conserving signals, enhancing cooperation among agents while penalizing harmful actions in failure scenarios.
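For intuition, a standard cooperative-game attribution such as the Shapley value already yields signed, credit-conserving per-agent scores; the sketch below computes it by averaging each agent's marginal contribution over join orders. The evaluate function and agent names are stand-ins, and the framework's actual attribution and process-reward details may differ.

    from itertools import permutations

    def shapley_credit(agents, evaluate):
        """Average each agent's marginal contribution over all join orders.

        The resulting credits are signed and sum to
        evaluate(all agents) - evaluate(no agents), i.e. credit is conserved.
        """
        credit = {a: 0.0 for a in agents}
        orders = list(permutations(agents))
        for order in orders:
            coalition = []
            prev_value = evaluate(coalition)
            for agent in order:
                coalition.append(agent)
                value = evaluate(coalition)
                credit[agent] += (value - prev_value) / len(orders)
                prev_value = value
        return credit

    # Toy usage: two hypothetical agents whose cooperation drives the system-level score.
    scores = {(): 0.0, ("planner",): 0.2, ("coder",): 0.1, ("coder", "planner"): 0.9}
    print(shapley_credit(["planner", "coder"], lambda c: scores[tuple(sorted(c))]))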