What even is the AI bubble?

MIT Technology Review · Monday, December 15, 2025, 10:00 AM
  • A recent MIT Technology Review article highlighted that 95% of organizations investing in generative AI reported no return on their investments, a finding that triggered a temporary plunge in tech stocks and raised concerns about the sustainability of the current AI hype.
  • These findings suggest a disconnect between the expectations set by the AI industry and the outcomes businesses actually experience, which could invite closer scrutiny of AI investments and prompt companies to reevaluate their strategies.
  • The ongoing discourse surrounding AI's impact on the economy is marked by contrasting views, with some experts advocating cautious optimism while others express skepticism about the technology's long-term economic impact.
— via World Pulse Now AI Editorial System

Continue Reading
The Download: introducing the AI Hype Correction package
Neutral · Artificial Intelligence
The latest edition of The Download introduces the AI Hype Correction package, which addresses the exaggerated claims that AI will replicate human intelligence, eliminate diseases, and prove to be the most significant invention in history. The package aims to provide a more grounded perspective on AI's capabilities and limitations.
The great AI hype correction of 2025
Neutral · Artificial Intelligence
The release of OpenAI's ChatGPT in late 2022 marked a turning point for the AI industry, fueling widespread enthusiasm and expectations of rapid advancement. By 2025, however, a notable correction in that hype has emerged, with users expressing disillusionment over the technology's limitations and perceived flaws, particularly following the launch of GPT-5.2.
The AI doomers feel undeterred
Neutral · Artificial Intelligence
A small but influential community of researchers and policy experts, known as AI doomers, continues to express concerns that advancements in artificial intelligence could pose significant risks to humanity. This group advocates for AI safety, emphasizing the potential dangers of unchecked AI development.
AI might not be coming for lawyers’ jobs anytime soon
Neutral · Artificial Intelligence
The rise of generative AI in 2022 sparked anxiety among law students like Rudi Miller, who feared its impact on future job prospects in the legal field. As a junior associate, Miller reflects on the discussions surrounding AI's potential to disrupt traditional legal roles, highlighting the uncertainty faced by new graduates entering the workforce.
