MapFormer: Self-Supervised Learning of Cognitive Maps with Input-Dependent Positional Embeddings

arXiv — cs.LG · Monday, December 15, 2025 at 5:00:00 AM
  • MapFormer, a new Transformer-based architecture, marks a significant advance in self-supervised learning of cognitive maps. By deriving positional embeddings from the input itself rather than from fixed position indices, the model learns to encode abstract relationships among entities, enabling better adaptability and out-of-distribution generalization, which current AI systems struggle to achieve. A minimal sketch of the idea follows below.
  • This development is crucial as it enhances AI's ability to process and understand complex relationships in data, potentially leading to more sophisticated applications in various fields, including robotics and cognitive science.
  • The emergence of MapFormer highlights ongoing efforts to bridge the gap between human cognitive processes and artificial intelligence. It reflects a growing recognition of the need for AI systems to possess intrinsic cognitive abilities, paralleling advancements in neuroscience and the exploration of how AI can mimic human-like understanding and reasoning.
— via World Pulse Now AI Editorial System
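
The paper's exact formulation isn't given in the summary above, but the title suggests positional embeddings computed from the input rather than looked up by fixed position index. A minimal PyTorch sketch of that general idea; all module and variable names here are illustrative assumptions, not taken from the MapFormer paper:

```python
import torch
import torch.nn as nn

class InputDependentPositionalEmbedding(nn.Module):
    """Hypothetical sketch: positional codes are a function of the
    token contents, not of a fixed learned/sinusoidal table."""
    def __init__(self, d_model: int):
        super().__init__()
        # A small network maps each token's content to a positional code.
        self.pos_net = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token embeddings.
        # The cumulative sum makes each code depend on everything
        # observed so far in the sequence.
        pos = torch.cumsum(self.pos_net(x), dim=1)
        return x + pos
```

One plausible design choice here: the cumulative sum makes "where am I" a function of "what have I seen", which matches the intuition behind a cognitive map.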

Continue Reading
AI and high-throughput testing reveal stability limits in organic redox flow batteries
PositiveArtificial Intelligence
Recent advances in artificial intelligence (AI) and high-throughput testing have revealed the stability limits of organic redox flow batteries, demonstrating how automated experimentation can accelerate battery materials research.
AI’s Hacking Skills Are Approaching an ‘Inflection Point’
NeutralArtificial Intelligence
AI models are increasingly proficient at identifying software vulnerabilities, prompting experts to suggest that the tech industry must reconsider its software development practices. This advancement indicates a significant shift in the capabilities of AI technologies, particularly in cybersecurity.
Glitches in the Attention Matrix
NeutralArtificial Intelligence
Recent research has highlighted persistent glitches in the attention matrix of Transformer models, which are critical for various AI applications. These artifacts can hinder performance, prompting ongoing investigations into effective solutions. The article discusses the historical context of these issues and the latest findings aimed at rectifying them.
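
One concrete way to look for such artifacts is to inspect how attention mass is distributed over key positions; "sink" tokens that absorb a disproportionate share are a commonly reported glitch. A hedged sketch in PyTorch (the threshold and function name are illustrative, not from the article):

```python
import torch

def find_attention_sinks(attn: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """attn: (heads, seq, seq) row-stochastic attention weights.
    Returns (head, key_position) index pairs where a key position
    absorbs more than `threshold` of the attention mass on average."""
    received = attn.mean(dim=1)              # average mass each key receives
    return (received > threshold).nonzero()  # candidate "sink" positions
```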
RewriteNets: End-to-End Trainable String-Rewriting for Generative Sequence Modeling
PositiveArtificial Intelligence
The introduction of RewriteNets marks a significant advancement in generative sequence modeling, utilizing a novel architecture that employs explicit, parallel string rewriting instead of the traditional dense attention weights found in models like the Transformer. This method allows for more efficient processing by performing fuzzy matching, conflict resolution, and token propagation in a structured manner.
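
The summary names three stages: fuzzy matching, conflict resolution, and token propagation. A toy, heavily simplified rendering of one parallel rewrite step; the actual RewriteNets operators are defined in the paper, and every name below is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def soft_rewrite_step(tokens, lhs, rhs, temperature=0.1):
    """tokens: (seq, d) embeddings; lhs/rhs: (n_rules, d) rule patterns.
    1. Fuzzy matching: similarity of each token to each rule's LHS.
    2. Conflict resolution: a softmax commits each position to rules.
    3. Token propagation: matched positions blend toward the rule's RHS."""
    sims = tokens @ lhs.T                            # (seq, n_rules)
    weights = F.softmax(sims / temperature, dim=-1)  # resolve competing rules
    gate = torch.sigmoid(sims.max(dim=-1).values)    # match strength per token
    rewritten = weights @ rhs                        # (seq, d)
    return (1 - gate[:, None]) * tokens + gate[:, None] * rewritten
```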
Contrastive and Multi-Task Learning on Noisy Brain Signals with Nonlinear Dynamical Signatures
PositiveArtificial Intelligence
A new two-stage multitask learning framework has been introduced for analyzing Electroencephalography (EEG) signals, focusing on denoising, dynamical modeling, and representation learning. The first stage employs a denoising autoencoder to enhance signal quality, while the second stage utilizes a multitask architecture for motor imagery classification and chaotic regime discrimination. This approach aims to improve the robustness of EEG signal analysis.
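
As a rough illustration of the two-stage layout described above (layer types, sizes, and class counts are assumptions, not taken from the paper):

```python
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Stage 1: reconstruct clean EEG from a corrupted input."""
    def __init__(self, channels: int = 22, hidden: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(channels, hidden, 7, padding=3), nn.ReLU())
        self.dec = nn.Conv1d(hidden, channels, 7, padding=3)

    def forward(self, x):  # x: (batch, channels, time)
        return self.dec(self.enc(x))

class MultiTaskHeads(nn.Module):
    """Stage 2: shared trunk with motor-imagery and chaotic-regime heads."""
    def __init__(self, hidden: int = 64, n_classes: int = 4):
        super().__init__()
        self.trunk = nn.GRU(hidden, hidden, batch_first=True)
        self.motor = nn.Linear(hidden, n_classes)  # motor imagery classes
        self.chaos = nn.Linear(hidden, 2)          # chaotic vs. regular regime

    def forward(self, z):  # z: (batch, time, hidden) stage-1 features
        h, _ = self.trunk(z)
        last = h[:, -1]
        return self.motor(last), self.chaos(last)
```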
Theoretical Foundations of Prompt Engineering: From Heuristics to Expressivity
NeutralArtificial Intelligence
A recent study published on arXiv explores the theoretical foundations of prompt engineering, focusing on how prompts can alter the behavior of fixed Transformer models. The research presents a framework that treats prompts as externally injected programs, revealing a mechanism-level decomposition of how attention and feed-forward networks operate within these models.
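
The "prompt as externally injected program" view can be made concrete with a frozen model: no weights change, yet the attention and feed-forward computations over the suffix are redirected by the prepended context. A sketch assuming a Hugging Face-style causal LM interface (the function name is illustrative):

```python
import torch

@torch.no_grad()
def run_with_prompt(model, prompt_ids: torch.Tensor, input_ids: torch.Tensor):
    """Run a frozen model on input_ids with prompt_ids prepended.
    The parameters never change; only the injected context does."""
    ids = torch.cat([prompt_ids, input_ids], dim=-1)
    logits = model(ids).logits
    return logits[..., prompt_ids.shape[-1]:, :]  # logits over the suffix
```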
Rethinking Recurrent Neural Networks for Time Series Forecasting: A Reinforced Recurrent Encoder with Prediction-Oriented Proximal Policy Optimization
PositiveArtificial Intelligence
A novel approach to time series forecasting has been introduced through the Reinforced Recurrent Encoder with Prediction-oriented Proximal Policy Optimization (RRE-PPO4Pred), enhancing the predictive capabilities of Recurrent Neural Networks (RNNs) by addressing the limitations of traditional encoder-only strategies.
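
The summary doesn't spell out the architecture, but a reinforced recurrent encoder trained with PPO plausibly pairs an RNN encoder with policy and value heads, rewarding low forecast error. A heavily hedged sketch; every name and size below is an assumption:

```python
import torch
import torch.nn as nn

class RecurrentForecaster(nn.Module):
    """GRU encoder plus a policy head (forecast distribution) and a
    value head (PPO baseline). Reward would be negative forecast error."""
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, 2)  # mean and log-std of forecast
        self.value = nn.Linear(hidden, 1)   # state-value baseline

    def forward(self, history):  # history: (batch, time, features)
        _, h = self.encoder(history)
        h = h.squeeze(0)
        mean, log_std = self.policy(h).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp()), self.value(h)
```

The clipped PPO objective would then use advantage = reward - value, with the reward set to the negative prediction error on the next observation.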
Explaining Generalization of AI-Generated Text Detectors Through Linguistic Analysis
NeutralArtificial Intelligence
A recent study published on arXiv investigates the generalization capabilities of AI-generated text detectors, revealing that while these detectors perform well on in-domain benchmarks, they often fail to generalize across various generation conditions, such as unseen prompts and different model families. The research employs a comprehensive benchmark involving multiple prompting strategies and large language models to analyze performance variance through linguistic features.
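
The linguistic-analysis angle is easy to picture: compute surface features of texts and relate them to detector performance across generation conditions. A toy feature extractor (this exact feature set is an illustration, not the paper's):

```python
import math
from collections import Counter

def linguistic_features(text: str) -> dict:
    """A few common surface features used in this kind of analysis."""
    words = text.split()
    total = len(words) or 1
    counts = Counter(words)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "type_token_ratio": len(counts) / total,           # lexical diversity
        "mean_word_length": sum(map(len, words)) / total,  # in characters
        "unigram_entropy": entropy,                        # bits per token
    }
```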
