HiCL: Hippocampal-Inspired Continual Learning

arXiv — cs.LG · Monday, November 24, 2025 at 5:00:00 AM
arXiv:2508.16651v2 (replacement). Abstract: We propose HiCL, a novel hippocampal-inspired dual-memory continual learning architecture designed to mitigate catastrophic forgetting by using elements inspired by the hippocampal circuitry. Our system encodes inputs through a grid-cell-like layer, followed by sparse pattern separation in a dentate gyrus (DG)-inspired module with top-k sparsity. Episodic memory traces are maintained in a CA3-like autoassociative memory. Task-specific processing is dynamically managed via a DG-gated mixture-of-experts mechanism, in which inputs are routed to experts based on cosine similarity between their normalized sparse DG representations and learned task-specific DG prototypes computed through online exponential moving averages. This biologically grounded yet mathematically principled gating strategy enables differentiable, scalable task routing without a separate gating network, and improves the model's adaptability and efficiency when learning multiple sequential tasks. Cortical outputs are consolidated using Elastic Weight Consolidation weighted by inter-task similarity. Crucially, we incorporate prioritized replay of stored patterns to reinforce essential past experiences. Evaluations on standard continual learning benchmarks demonstrate the effectiveness of our architecture in reducing task interference, achieving near state-of-the-art results at lower computational cost. Our code is available at https://github.com/kushalk173-sc/HiCL.
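The gating mechanism described in the abstract can be sketched in a few lines: sparsify the DG code with top-k, normalize it, route to the expert with the most cosine-similar task prototype, and update that prototype with an exponential moving average. This is a minimal NumPy illustration of the idea, not the authors' implementation; the function names, the momentum value, and the dimensions are assumptions.

```python
import numpy as np

def dg_sparsify(x, k):
    """Top-k sparsification: keep the k largest activations, zero the rest
    (a stand-in for the DG-style pattern-separation step)."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]          # indices of the k largest entries
    out[idx] = x[idx]
    return out

def route(x, prototypes, k=4):
    """Route an input to the expert whose task prototype is most
    cosine-similar to the input's normalized sparse DG code.
    `prototypes` is assumed to be a row-normalized (n_tasks, dim) array."""
    code = dg_sparsify(x, k)
    code = code / (np.linalg.norm(code) + 1e-8)
    sims = prototypes @ code          # cosine similarities (rows unit-norm)
    return int(np.argmax(sims)), code

def update_prototype(prototypes, task_id, code, momentum=0.99):
    """Online exponential-moving-average update of a task prototype,
    re-normalized so cosine routing stays well defined."""
    p = momentum * prototypes[task_id] + (1 - momentum) * code
    prototypes[task_id] = p / (np.linalg.norm(p) + 1e-8)
```

Because routing reduces to an argmax over dot products with unit vectors, it adds no gating network and scales linearly in the number of tasks.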
— via World Pulse Now AI Editorial System


Continue Reading
Cornell Tech Secures $7 Million From NASA and Schmidt Sciences to Modernise arXiv
Positive · Artificial Intelligence
Cornell Tech has secured a $7 million investment from NASA and Schmidt Sciences aimed at modernizing arXiv, a preprint repository for scientific papers. This funding will facilitate the migration of arXiv to cloud infrastructure, upgrade its outdated codebase, and develop new tools to enhance the discovery of relevant preprints for researchers.
Generating Reading Comprehension Exercises with Large Language Models for Educational Applications
Positive · Artificial Intelligence
A new framework named Reading Comprehension Exercise Generation (RCEG) has been proposed to leverage large language models (LLMs) for automatically generating personalized English reading comprehension exercises. This framework utilizes fine-tuned LLMs to create content candidates, which are then evaluated by a discriminator to select the highest quality output, significantly enhancing the educational content generation process.
Analysis of Semi-Supervised Learning on Hypergraphs
Positive · Artificial Intelligence
A recent analysis has been conducted on semi-supervised learning within hypergraphs, revealing that variational learning on random geometric hypergraphs can achieve asymptotic consistency. This study introduces Higher-Order Hypergraph Learning (HOHL), which utilizes Laplacians from skeleton graphs to enhance multiscale smoothness and converges to a higher-order Sobolev seminorm, demonstrating strong empirical performance on standard benchmarks.
Learning to See and Act: Task-Aware Virtual View Exploration for Robotic Manipulation
Positive · Artificial Intelligence
A new framework called Task-aware Virtual View Exploration (TVVE) has been introduced to enhance robotic manipulation by integrating virtual view exploration with task-specific representation learning. This approach addresses limitations in existing vision-language-action models that rely on static viewpoints, improving 3D perception and reducing task interference.
On the limitation of evaluating machine unlearning using only a single training seed
Neutral · Artificial Intelligence
A recent study highlights the limitations of evaluating machine unlearning (MU) by relying solely on a single training seed, revealing that results can vary significantly based on the random number seed used during model training. This finding emphasizes the need for more robust empirical comparisons in MU algorithms, particularly those that are deterministic in nature.
PocketLLM: Ultimate Compression of Large Language Models via Meta Networks
Positive · Artificial Intelligence
A novel approach named PocketLLM has been introduced to address the challenges of compressing large language models (LLMs) for efficient storage and transmission on edge devices. This method utilizes meta-networks to project LLM weights into discrete latent vectors, achieving significant compression ratios, such as a 10x reduction for Llama 2-7B, while maintaining accuracy.
PRISM-Bench: A Benchmark of Puzzle-Based Visual Tasks with CoT Error Detection
Positive · Artificial Intelligence
PRISM-Bench has been introduced as a new benchmark for evaluating multimodal large language models (MLLMs) through puzzle-based visual tasks that assess both problem-solving capabilities and reasoning processes. This benchmark specifically requires models to identify errors in a step-by-step chain of thought, enhancing the evaluation of logical consistency and visual reasoning.
For Those Who May Find Themselves on the Red Team
Neutral · Artificial Intelligence
A recent position paper argues that literary scholars should engage with research on large language model (LLM) interpretability, suggesting that red-teaming could serve as a venue for that engagement. The paper contends that current interpretability standards are insufficient for evaluating LLMs.