Technology
MIT’s new ‘recursive’ framework lets LLMs process 10 million tokens without context rot
Positive · Technology
Researchers at MIT CSAIL have developed Recursive Language Models (RLMs), a framework that lets large language models (LLMs) handle prompts of up to 10 million tokens without suffering context rot. Instead of ingesting the entire input at once, an RLM treats the prompt as data it can programmatically inspect and decompose, recursively delegating sub-queries to fresh model calls, so reasoning quality holds up over long inputs without any retraining.
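The recursion is easiest to see in code. The sketch below is a conceptual illustration of recursive decomposition under stated assumptions, not MIT's implementation: `call_llm` is a hypothetical helper wrapping any chat-completion API, and the token budget and halving strategy are placeholders.

```python
# Conceptual sketch of recursive prompt decomposition (not MIT's RLM code).
# `call_llm` and CONTEXT_LIMIT are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Placeholder for a single LLM call (e.g., an API client wrapper)."""
    raise NotImplementedError

CONTEXT_LIMIT = 8_000  # assumed per-call token budget

def rough_token_count(text: str) -> int:
    # Crude approximation: roughly 4 characters per token.
    return len(text) // 4

def recursive_query(question: str, context: str) -> str:
    """Answer `question` over `context`, recursing when the context is too big."""
    if rough_token_count(context) <= CONTEXT_LIMIT:
        # Base case: the context fits in one window, so answer directly.
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Recursive case: split the context, answer over each half,
    # then merge the partial answers with one final call.
    mid = len(context) // 2
    partials = [
        recursive_query(question, context[:mid]),
        recursive_query(question, context[mid:]),
    ]
    merged = "\n---\n".join(partials)
    return call_llm(
        "Combine these partial answers into one final answer.\n"
        f"Question: {question}\n\nPartial answers:\n{merged}"
    )
```

The key property is that no single call ever sees more than the per-call budget, so the 10-million-token figure is bounded by recursion depth rather than by any one context window.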
Why reinforcement learning plateaus without representation depth (and other key takeaways from NeurIPS 2025)
Neutral · Technology
Several papers at NeurIPS 2025 challenge long-held assumptions in AI, arguing that further progress now depends more on architecture and evaluation strategy than on sheer model scale. One headline result: reinforcement learning gains plateau when the underlying representations lack sufficient depth. Together, these findings signal a shift in how AI systems are built and assessed.
Black Forest Labs launches open source Flux.2 [klein] to generate AI images in less than a second
Positive · Technology
Black Forest Labs (BFL), a German AI startup founded by former Stability AI engineers, has launched FLUX.2 [klein], a suite of open-source AI image generators that can produce an image in under a second on Nvidia hardware. The release comprises two models, with 4 billion and 9 billion parameters, available on Hugging Face and GitHub.
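For readers who want to try the models, the snippet below is a minimal sketch using the generic `DiffusionPipeline` loader from Hugging Face's diffusers library; the repository id `black-forest-labs/FLUX.2-klein` and the step count are assumptions, so check the model cards on Hugging Face for the actual identifiers and recommended settings.

```python
# Hedged sketch: loading FLUX.2 [klein] via diffusers' generic pipeline.
# The repo id and generation settings are assumptions; consult the model
# card on Hugging Face for the real identifiers and defaults.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein",  # hypothetical repo id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # sub-second generation is reported on Nvidia hardware

image = pipe(
    prompt="a lighthouse at dusk, photorealistic",
    num_inference_steps=4,  # few-step sampling assumed for the speed claim
).images[0]
image.save("lighthouse.png")
```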
How Google’s 'internal RL' could unlock long-horizon AI agents
Positive · Technology
Researchers at Google have developed a technique called internal reinforcement learning (internal RL), which improves AI models' ability to learn complex reasoning tasks and reduces the hallucinations typical of large language models (LLMs). Rather than optimizing next-token prediction alone, the method guides the model's internal processes toward structured problem-solving.
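Google's code is not published here, so the snippet below is only a toy contrast under stated assumptions: it compares the standard next-token objective with a REINFORCE-style objective that assigns reward to intermediate reasoning steps, which is the general shape of "guiding internal processes" rather than Google's actual method. All names (`next_token_loss`, `internal_step_loss`) are hypothetical.

```python
# Toy contrast (not Google's internal RL): credit per intermediate step
# versus credit only for next-token prediction. All names are hypothetical.
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Standard LM objective: cross-entropy on next-token prediction."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))

def internal_step_loss(step_logprobs: torch.Tensor,
                       step_rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style objective over intermediate steps: each sampled
    reasoning step earns its own reward (e.g., for verifiably advancing
    the solution) instead of credit arriving only at the sequence's end."""
    baseline = step_rewards.mean()  # simple baseline for variance reduction
    # Maximize expected reward == minimize -(advantage * log-prob).
    return -((step_rewards - baseline) * step_logprobs).mean()
```

The difference in where credit lands is the point: per-step rewards give the optimizer a signal about the model's intermediate computation, which is what distinguishes this family of methods from plain next-token training.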