How Close Are Today’s AI Models to AGI—And to Self-Improving into Superintelligence?

Scientific American — Global · Saturday, December 6, 2025, 12:00 PM
  • Today's leading AI models can now write and refine their own software, raising the question of whether that kind of self-improvement could compound into true superintelligence. The development marks a significant milestone in the pursuit of artificial general intelligence (AGI); a toy sketch of the underlying propose-and-evaluate loop follows this summary.
  • AI models that can enhance their own capabilities could drive breakthroughs in fields such as medicine and materials science, which is why companies like Microsoft are pursuing superintelligence.
  • The ongoing evolution of AI systems prompts a reevaluation of what counts as humanlike intelligence, since the benchmarks keep shifting. The debate is further complicated by disagreement over whether larger models or better learning processes are the key to achieving AGI.
— via World Pulse Now AI Editorial System
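The "self-improvement" described in the first bullet is, at bottom, a propose-and-evaluate loop: a model suggests a revision to some piece of its own tooling, the revision is scored against an external benchmark, and it is kept only if the score improves. The Python sketch below is purely illustrative; `propose_patch` and `evaluate` are stand-ins for a code-generating model and a test suite, not a description of any lab's actual pipeline.

```python
# Toy sketch of a propose-and-evaluate self-improvement loop (illustrative only).
import random

def propose_patch(source: str) -> str:
    """Stand-in for a code-generating model that rewrites its own tooling.

    A real system would send `source` to a model and receive a revised
    version; this stub just appends a marker so the script runs end to end.
    """
    return source + f"\n# revision {random.randint(0, 9999)}"

def evaluate(source: str) -> float:
    """Stand-in benchmark: score how well the candidate program performs.

    A real pipeline would run a test suite or task benchmark; this stub
    returns a noisy score so the loop has something to compare.
    """
    return random.random()

def improvement_loop(source: str, steps: int = 5) -> str:
    """Accept a proposed revision only if it scores better than the current one."""
    best_score = evaluate(source)
    for _ in range(steps):
        candidate = propose_patch(source)
        score = evaluate(candidate)
        if score > best_score:  # keep only measurable improvements
            source, best_score = candidate, score
    return source

if __name__ == "__main__":
    print(improvement_loop("def tool():\n    return 42"))
```

The acceptance test is the load-bearing part: without an external measure of improvement, the loop cannot distinguish a genuine gain from a plausible-looking regression.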


Continue Reading
Why Humanoid Robots and Embodied AI Still Struggle in the Real World
Neutral · Artificial Intelligence
General-purpose humanoid robots and embodied AI still face significant challenges in real-world applications, largely because they cannot replicate the physical intuition that humans build through experience. Despite advances in hardware, that limitation has kept deployment slow.
Photos Reveal Moths Sipping Tears from a Moose
Neutral · Artificial Intelligence
New photographs have captured moths sipping tears from a moose, only the second time this behavior has been observed outside the tropics. The behavior, previously documented mainly in tropical regions, highlights an unusual interaction between species in different ecosystems.
Verifying LLM Inference to Detect Model Weight Exfiltration
Positive · Artificial Intelligence
A recent study has introduced a verification framework aimed at detecting model weight exfiltration from AI inference servers, particularly focusing on the risks posed by attackers who may hide sensitive information within model outputs. This framework formalizes model exfiltration as a security game and evaluates its effectiveness on various open-weight models, including the MOE-Qwen-30B.
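One generic way to perform such a check, not necessarily the protocol in the paper, is to replay a sample of requests on a trusted replica of the model and flag prompts whose served token probabilities drift beyond a tolerance, since a server covertly encoding extra bits into its outputs tends to perturb that distribution. The sketch below assumes hypothetical per-token log-probability records from both servers; `max_logprob_gap` and `audit` are illustrative names.

```python
# Generic sketch of output-consistency spot-checking for an inference server.
# This is an illustration of the idea, not the paper's verification framework.
import math
from typing import Dict, List, Tuple

def max_logprob_gap(untrusted: List[Dict[str, float]],
                    trusted: List[Dict[str, float]]) -> float:
    """Largest per-token log-probability discrepancy between the two servers."""
    gap = 0.0
    for u_step, t_step in zip(untrusted, trusted):
        for token, t_lp in t_step.items():
            gap = max(gap, abs(u_step.get(token, -math.inf) - t_lp))
    return gap

def audit(prompt_outputs: Dict[str, Tuple[list, list]],
          tolerance: float = 1e-3) -> List[str]:
    """Flag prompts whose served distributions drift beyond `tolerance`."""
    return [prompt for prompt, (untrusted, trusted) in prompt_outputs.items()
            if max_logprob_gap(untrusted, trusted) > tolerance]

if __name__ == "__main__":
    # Tiny fabricated example: one prompt matches the reference, one drifts.
    clean = ([{"a": -0.1, "b": -2.3}], [{"a": -0.1, "b": -2.3}])
    drift = ([{"a": -0.4, "b": -1.1}], [{"a": -0.1, "b": -2.3}])
    print(audit({"p1": clean, "p2": drift}))  # -> ['p2']
```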
Blood Pressure Prediction for Coronary Artery Disease Diagnosis using Coronary Computed Tomography Angiography
Positive · Artificial Intelligence
A new automated pipeline has been developed to enhance the diagnosis of coronary artery disease (CAD) by predicting blood pressure distributions using coronary computed tomography angiography (CCTA). This system utilizes computational fluid dynamics (CFD) simulations to generate consistent training data while reducing the manual workload associated with traditional methods.
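The general pattern here is surrogate learning: an expensive physics simulator labels synthetic cases, and a cheap regression model learns to predict the label directly from features of the anatomy. The sketch below illustrates that pattern with a fake stand-in for the CFD solver and made-up geometric features; it is not the published pipeline.

```python
# Minimal sketch of surrogate learning from simulator-generated data.
# `fake_cfd_pressure` and the feature names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def fake_cfd_pressure(features: np.ndarray) -> np.ndarray:
    """Stand-in for a CFD solver: maps geometry features to a pressure drop."""
    lumen_area, stenosis_pct, vessel_length = features.T
    return (5.0 * stenosis_pct - 2.0 * lumen_area + 0.1 * vessel_length
            + rng.normal(scale=0.5, size=len(features)))

# 1) Generate synthetic training data with the (stand-in) simulator.
X_train = rng.uniform(low=[1.0, 0.0, 10.0], high=[10.0, 0.9, 120.0], size=(500, 3))
y_train = fake_cfd_pressure(X_train)

# 2) Fit a linear surrogate by least squares (bias column adds an intercept).
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# 3) Predict the pressure drop for a new geometry without running the simulator.
x_new = np.array([[4.2, 0.55, 80.0, 1.0]])  # features plus bias term
print("predicted pressure drop:", float((x_new @ coef)[0]))
```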
