Google's Nested Learning aims to stop LLMs from catastrophic forgetting

THE DECODER · Saturday, November 22, 2025 at 7:42:29 PM
  • Google Research has unveiled a new approach called 'nested learning' that aims to prevent large language models (LLMs) from catastrophic forgetting, so they can keep learning continuously without losing previously acquired knowledge; a conceptual sketch of the general idea follows below.
  • The development matters for Google as it works to improve the reliability and performance of its AI models, particularly after recent benchmarks highlighted weaknesses in the factual accuracy of existing models, including its Gemini 3 Pro.
  • Nested learning also reflects a broader trend in AI development toward more resilient and reliable models as competition intensifies among major players such as OpenAI and Google, amid ongoing debate about future AI capabilities and ethical considerations.
— via World Pulse Now AI Editorial System
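
The summary above does not explain how nested learning works internally. As a loose illustration of the idea commonly associated with it (treating a model as nested optimization loops whose parameter groups update at different rates, so slowly updated weights can preserve older knowledge while fast weights adapt to new data), here is a minimal PyTorch sketch. The two-timescale split, the module names, and the update schedule are assumptions made for illustration, not Google's actual method.

```python
# Conceptual sketch only (not Google's nested learning implementation):
# two parameter groups updated on different timescales, so the slowly
# updated "outer" weights can retain older knowledge while the frequently
# updated "inner" weights adapt to new data.
import torch
from torch import nn

class TwoTimescaleModel(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.slow = nn.Linear(dim, dim)  # updated rarely (long-term knowledge)
        self.fast = nn.Linear(dim, dim)  # updated every step (fast adaptation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fast(torch.relu(self.slow(x)))

model = TwoTimescaleModel()
fast_opt = torch.optim.SGD(model.fast.parameters(), lr=1e-2)
slow_opt = torch.optim.SGD(model.slow.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(8, 32)
    loss = ((model(x) - x) ** 2).mean()  # toy reconstruction objective
    loss.backward()
    fast_opt.step()              # inner loop: update every step
    if step % 10 == 0:
        slow_opt.step()          # outer loop: update only every 10th step
    fast_opt.zero_grad()
    slow_opt.zero_grad()
```

In a realistic continual-learning setup the slow group would be much larger and the update schedule data-driven; this toy loop only shows the multi-timescale update pattern.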


Continue Reading
Google plans a 1000x jump in AI compute over the next five years
Positive | Artificial Intelligence
Google is planning a significant expansion of its AI infrastructure, aiming to increase its computing capacity by 1,000 times over the next five years. This ambitious goal reflects the company's response to the surging demand for artificial intelligence capabilities, as outlined in internal communications from its AI infrastructure chief.
The future of AI browsing may depend on developers rethinking how they build websites
Positive | Artificial Intelligence
Researchers at TU Darmstadt have introduced the VOIX framework, which adds two new HTML elements to websites, enabling AI agents to recognize available actions without needing to interpret complex user interfaces visually. This innovation aims to enhance the interaction between AI and web environments.
Meta's SAM 3 segmentation model blurs the boundary between language and vision
Positive | Artificial Intelligence
Meta has unveiled the third generation of its Segment Anything Model (SAM 3), which utilizes an open vocabulary to enhance its understanding of images and videos. This model distinguishes itself from traditional segmentation models by employing a novel training method that integrates both human and AI annotators.
As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'
Negative | Artificial Intelligence
OpenAI is under pressure as an internal memo reveals CEO Sam Altman's response to Google's advancements with its Gemini 3 model. The memo outlines OpenAI's strategy, codenamed 'Shallotpeat', to regain competitive ground in the AI landscape.
OpenAI report suggests GPT‑5 is starting to ease scientists’ daily workloads
Positive | Artificial Intelligence
OpenAI's GPT-5 Science Acceleration report highlights how researchers are utilizing the model to streamline their daily tasks. The report provides insights into the practical applications of AI in scientific research while emphasizing the continued need for human oversight in decision-making processes.
OpenAI launches group chats in ChatGPT worldwide
Positive | Artificial Intelligence
OpenAI has rolled out a group chat feature in ChatGPT worldwide, after initially testing it in Japan, South Korea, Taiwan, and New Zealand. The feature lets multiple users take part in a single conversation, enabling collaboration and more dynamic discussions.
Google's latest image model Nano Banana Pro makes image generation feel truly intentional
Positive | Artificial Intelligence
Google has introduced an updated image model named Nano Banana Pro, also referred to as Gemini 3 Pro Image. This model aims to enhance the intentionality of image generation, providing users with improved capabilities for creating realistic AI-generated images.
OLMo 3 debuts as the first fully open "thinking" model with step-by-step logic exposed to users
Positive | Artificial Intelligence
The Allen Institute for AI (Ai2) has introduced OLMo 3, the first fully open AI model designed to expose its reasoning process. This new 32B 'thinking' model operates 2.5 times more efficiently than comparable models, marking a significant advancement in AI transparency and functionality.