See what’s trending right now
Mobile Payments in Technology
5 hours ago

Ant Group promotes AI smart glasses for mobile payments, while Microsoft phases out passwords in favor of passkeys by August, signaling tech's shift toward seamless, secure innovations.

Technology
AI GPU accelerators with 6TB HBM memory could appear by 2035 as AI GPU die sizes set to shrink - but there's far worse coming up
Negative · Technology
AI hardware is advancing at a breakneck pace—new GPU accelerators could pack a staggering 6TB of high-bandwidth memory by 2035, and chip designs are shrinking to boost efficiency. But there’s a dark side: the energy demands of these power-hungry systems are spiraling out of control, sparking worries about sustainability and whether global infrastructure can even keep up.
Editor’s Note: While it’s exciting to see AI tech push boundaries, this story highlights a growing tension between innovation and real-world limits. If energy consumption keeps surging unchecked, the environmental and logistical fallout could overshadow the benefits of smarter, faster AI. It’s a wake-up call for the industry to balance progress with responsibility.
AMD debuts a 400GbE AI network card with an 800GbE PCIe Gen6 NIC coming in 2026, but will the industry be ready?
Positive · Technology
AMD is stepping up its game in high-speed networking with the launch of its Pollara 400 AI network card, which supports 400GbE (400 gigabit Ethernet) and Ultra Ethernet — a big deal for AI workloads. But it’s not stopping there: AMD has already teased an even faster 800GbE card, the Vulcano, slated for 2026 to match next-gen PCIe Gen6 hardware. The big question? Whether the rest of the industry can keep pace with these blistering speeds.
Editor’s Note: Faster networking is critical for AI clusters, where moving data between GPUs and servers can be a bottleneck. AMD’s push into ultra-high-speed NICs (network interface cards) signals a race to feed AI’s hunger for bandwidth. But hardware doesn’t exist in a vacuum—if data centers, software, and other components lag behind, these cutting-edge cards might hit a wall. For tech watchers, it’s a peek into how the infrastructure behind AI is evolving (or struggling to).
How Huawei’s AI chips outperform Nvidia’s in running DeepSeek’s R1 model
Positive · Technology
Huawei’s latest AI chips, deployed in its CloudMatrix 384 data center architecture, are reportedly outperforming Nvidia’s H800 GPUs when running DeepSeek’s R1 AI model. A joint research paper from Huawei and SiliconFlow describes how this specialized “AI supernode” is designed to handle large-scale AI workloads more efficiently, marking a potential shift in the competitive landscape of AI hardware.
Editor’s Note: Nvidia has long dominated the AI chip market, but Huawei’s breakthrough suggests serious competition is heating up—especially in China, where tech independence is a growing priority. If these performance claims hold up, it could reshape supply chains and force Nvidia to innovate faster, while giving AI developers more options. For now, though, real-world adoption will be the real test.

Why World Pulse Now?

Global Coverage

All major sources, one page

Emotional Lens

Feel the mood behind headlines

Trending Topics

Know what’s trending, globally

Read Less, Know More

Get summaries. Save time

Stay informed, save time

Live Stats

Articles Processed: 6,790
Trending Topics: 130
Sources Monitored: 211
Last Updated: 8 minutes ago

Live data processing

Mobile App

Get instant summaries, explore trending stories, and dive deeper into the headlines — all in one sleek, noise-free mobile experience.

Coming soon on iOS and Android.

1-Minute Daily Briefing

Stay sharp in 60 seconds. Get concise summaries of today’s biggest stories — markets, tech, sports, and more.

By subscribing, you agree to our Privacy Policy