Genomic Next-Token Predictors are In-Context Learners

arXiv — cs.LG · Monday, November 24, 2025 at 5:00:00 AM
  • The Evo2 genomic model has been studied for its ability to perform in-context learning (ICL): it can infer abstract patterns from genomic sequences presented in its context and apply them, much as large language models (LLMs) trained on human text do. The work addresses whether ICL can emerge in non-linguistic domains purely through large-scale predictive training.
  • This is significant because genomic sequences possess rich statistical structure that predictive models can exploit, potentially enhancing applications in bioinformatics and genomics.
  • The findings inform ongoing discussions about where ICL can arise and how models recognize patterns in context, with implications for fields where LLMs are increasingly used, including drug discovery and reinforcement learning.
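The pattern-inference behavior described above can be illustrated with a toy sketch (not the paper's method; everything here is a hypothetical stand-in for a trained next-token predictor): an "induction rule" that, given a context of nucleotide tokens, copies the token that followed the most recent earlier occurrence of the current token. Completing a repeated motif this way is the kind of abstract, content-agnostic pattern an in-context learner must pick up from its prompt.

```python
def induction_predict(context):
    # Toy stand-in for a trained next-token predictor: return the
    # token that followed the most recent earlier occurrence of the
    # final context token (an "induction" pattern-copying rule).
    last = context[-1]
    for i in range(len(context) - 2, -1, -1):
        if context[i] == last:
            return context[i + 1]
    return last  # no earlier occurrence: fall back to repeating

# Few-shot-style prompt: the repeated motif ACGT acts as in-context
# "examples", and the predictor must continue it.
prompt = list("ACGTACGTAC")
prediction = induction_predict(prompt)  # predicts 'G', continuing the motif
```

A real genomic ICL probe would compare the model's next-token distribution on such structured prompts against matched controls; the rule above only mimics the target behavior.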
— via World Pulse Now AI Editorial System


Continue Reading
Can’t tech a joke: AI does not understand puns, study finds
Neutral · Artificial Intelligence
Researchers from universities in the UK and Italy have found that large language models (LLMs) struggle to understand puns, highlighting their limitations in grasping humor, empathy, and cultural nuances. This study suggests that AI's capabilities in comprehending clever wordplay are significantly lacking, providing some reassurance to comedians and writers who rely on such skills.
Estonian WinoGrande Dataset: Comparative Analysis of LLM Performance on Human and Machine Translation
Neutral · Artificial Intelligence
A new study presents a localized Estonian translation of the WinoGrande dataset, a benchmark for commonsense reasoning, describing the translation process carried out by specialists and evaluating LLM performance on both human and machine translations. The results indicate that LLM performance on the human translations is slightly lower than on the original English set, while performance on the machine translations is significantly worse.
Counterfactual World Models via Digital Twin-conditioned Video Diffusion
Positive · Artificial Intelligence
A new framework for counterfactual world models has been introduced, which allows for the prediction of temporal sequences under hypothetical modifications to observed scene properties. This advancement builds on traditional world models that focus solely on factual observations, enabling a more nuanced understanding of environments through forward simulation.
The Rise of Parameter Specialization for Knowledge Storage in Large Language Models
Positive · Artificial Intelligence
A recent study has analyzed twenty open-source large language models (LLMs) to explore how knowledge is stored in their MLP parameters, revealing that as models advance, their parameters become increasingly specialized in encoding similar types of knowledge. This research highlights a growing trend in parameter specialization for effective knowledge storage in LLMs.
Emergence of psychopathological computations in large language models
Neutral · Artificial Intelligence
Recent research has established a computational-theoretical framework to explore whether large language models (LLMs) can instantiate computations of psychopathology. Experiments conducted within this framework indicate that LLMs possess a computational structure reflective of psychopathological functions, suggesting a significant intersection between AI systems and mental health concepts.
Efficient Penalty-Based Bilevel Methods: Improved Analysis, Novel Updates, and Flatness Condition
Positive · Artificial Intelligence
Recent advancements in penalty-based methods for bilevel optimization (BLO) have been highlighted, focusing on a novel penalty reformulation that decouples upper- and lower-level variables. This approach improves the analysis of smoothness constants, allowing for larger step sizes and reduced iteration complexity in Penalty-Based Gradient Descent algorithms, particularly through the introduction of a single-loop algorithm called PBGD-Free.
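The penalty idea can be sketched on a toy problem (a hedged illustration, not the paper's PBGD-Free algorithm or its analysis): the bilevel problem min_x f(x, y*(x)) with y*(x) = argmin_y g(x, y) is replaced by joint minimization of f(x, y) + σ·g(x, y) for a large penalty σ, so plain gradient descent updates the upper- and lower-level variables together in a single loop. The quadratics below are chosen so that min_y g(x, y) = 0, which lets the penalty term be g itself.

```python
def penalty_bilevel_gd(sigma=50.0, lr=0.005, steps=5000):
    # Toy bilevel problem (illustrative, not from the paper):
    #   upper level: f(x, y) = (x - 1)^2 + (y - 1)^2
    #   lower level: g(x, y) = (y - x)^2, whose minimum over y is 0,
    # so the penalized single-level objective is f + sigma * g.
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx = 2 * (x - 1) - 2 * sigma * (y - x)  # d/dx [f + sigma*g]
        gy = 2 * (y - 1) + 2 * sigma * (y - x)  # d/dy [f + sigma*g]
        x -= lr * gx
        y -= lr * gy
    return x, y

x_hat, y_hat = penalty_bilevel_gd()
# both variables approach the bilevel solution (1, 1)
```

Note the step-size constraint the summary alludes to: the penalty inflates the smoothness constant (here the largest curvature grows with σ), so lr must shrink accordingly; sharper smoothness analysis is what permits larger steps.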
EventWeave: A Dynamic Framework for Capturing Core and Supporting Events in Dialogue Systems
Positive · Artificial Intelligence
EventWeave has been introduced as a dynamic framework designed to enhance dialogue systems by modeling the relationships between core and supporting events in conversations. This framework utilizes a multi-head attention mechanism to identify relevant events, aiming to produce more contextually appropriate dialogue responses.
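The relevance step can be sketched as plain scaled dot-product attention over stored event embeddings (a generic single-head illustration; EventWeave's actual multi-head architecture, embedding sizes, and how core vs. supporting events are distinguished are not specified by the summary, and all names below are assumptions):

```python
import numpy as np

def event_relevance(query, event_embs):
    """Softmax attention weights of stored events with respect to a
    query vector (e.g. an embedding of the current utterance)."""
    d = query.shape[-1]
    scores = event_embs @ query / np.sqrt(d)  # scaled dot products
    w = np.exp(scores - scores.max())         # numerically stable softmax
    return w / w.sum()

# Three stored event embeddings; the first is most similar to the query,
# so it should receive the largest attention weight.
events = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
query = np.array([0.9, 0.1])
weights = event_relevance(query, events)
```

A multi-head version would run several such scorings with different learned projections of the query and events, then combine the weighted event representations when conditioning the response generator.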