Intelligence per Watt: Measuring Intelligence Efficiency of Local AI

arXiv — cs.LG · Monday, November 17, 2025 at 5:00:00 AM
arXiv:2511.07885v2 Announce Type: replace-cross Abstract: Large language model (LLM) queries are predominantly processed by frontier models in centralized cloud infrastructure. Rapidly growing demand strains this paradigm, and cloud providers struggle to scale infrastructure at pace. Two advances enable us to rethink this paradigm: small LMs (≤20B active parameters) now achieve performance competitive with frontier models on many tasks, and local accelerators (e.g., Apple M4 Max) run these models at interactive latencies. This raises the question: can local inference viably redistribute demand from centralized infrastructure? Answering this requires measuring whether local LMs can accurately answer real-world queries and whether they can do so efficiently enough to be practical on power-constrained devices (i.e., laptops). We propose intelligence per watt (IPW), task accuracy divided by unit of power, as a metric for assessing the capability and efficiency of local inference across model-accelerator pairs. We conduct a large-scale empirical study across 20+ state-of-the-art local LMs, 8 accelerators, and a representative subset of LLM traffic: 1M real-world single-turn chat and reasoning queries. For each query, we measure accuracy, energy, latency, and power. Our analysis reveals 3 findings. First, local LMs can accurately answer 88.7% of single-turn chat and reasoning queries, with accuracy varying by domain. Second, from 2023-2025, IPW improved 5.3x and local query coverage rose from 23.2% to 71.3%. Third, local accelerators achieve at least 1.4x lower IPW than cloud accelerators running identical models, revealing significant headroom for optimization. These findings demonstrate that local inference can meaningfully redistribute demand from centralized infrastructure, with IPW serving as the critical metric for tracking this transition. We release our IPW profiling harness for systematic intelligence-per-watt benchmarking.
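A minimal sketch of the IPW metric as the abstract defines it: task accuracy divided by power, where average power is energy over time. The function name and sample numbers below are illustrative assumptions, not taken from the released profiling harness.

```python
# Sketch of intelligence-per-watt (IPW), assuming the abstract's definition:
# accuracy divided by average power draw (watts = joules / seconds).
# All names and numbers here are illustrative, not from the paper's harness.

def intelligence_per_watt(correct: int, total: int,
                          energy_joules: float,
                          duration_seconds: float) -> float:
    """IPW = accuracy / average power."""
    accuracy = correct / total
    avg_power_watts = energy_joules / duration_seconds
    return accuracy / avg_power_watts

# Hypothetical run: 887 of 1000 queries correct, 36 kJ consumed
# over one hour, i.e. an average draw of 10 W.
ipw = intelligence_per_watt(correct=887, total=1000,
                            energy_joules=36_000, duration_seconds=3_600)
print(round(ipw, 4))  # 0.887 accuracy / 10 W = 0.0887
```

Framed this way, the paper's 5.3x IPW improvement can come from either side of the ratio: better accuracy per query, lower energy per query, or both.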
— via World Pulse Now AI Editorial System


Recommended Readings
ExPairT-LLM: Exact Learning for LLM Code Selection by Pairwise Queries
PositiveArtificial Intelligence
ExPairT-LLM is introduced as an exact learning algorithm aimed at improving code selection from multiple outputs generated by large language models (LLMs). Traditional code selection algorithms often struggle to identify the correct program due to misidentification of nonequivalent programs or reliance on LLMs that may not always provide accurate outputs. ExPairT-LLM addresses these issues by utilizing pairwise membership and pairwise equivalence queries, enhancing the accuracy of program selection. Evaluations show a significant improvement in success rates over existing algorithms.
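The pairwise-equivalence idea above can be sketched as a simple majority-behavior tournament. This is a loose illustration only: the actual ExPairT-LLM algorithm uses exact-learning membership and equivalence queries, and `agree_on_tests` below is a hypothetical stand-in oracle.

```python
# Simplified sketch of picking one program from several LLM-generated
# candidates via pairwise equivalence checks. Loosely inspired by the
# pairwise-query idea described above; not the actual ExPairT-LLM algorithm.

def agree_on_tests(prog_a, prog_b, inputs):
    """Pairwise equivalence query: do both programs agree on all inputs?"""
    return all(prog_a(x) == prog_b(x) for x in inputs)

def select_by_majority_behavior(candidates, inputs):
    """Return the candidate whose behavior the most candidates share."""
    def support(p):
        return sum(agree_on_tests(p, q, inputs) for q in candidates)
    return max(candidates, key=support)

# Toy example: three "square" candidates, one buggy (x + x).
cands = [lambda x: x * x, lambda x: x ** 2, lambda x: x + x]
best = select_by_majority_behavior(cands, inputs=[0, 1, 2, 3])
print(best(5))  # the majority-consistent candidate squares its input: 25
```

The intuition is that independently generated correct programs tend to agree with each other, while distinct bugs tend to disagree, so pairwise queries surface the consensus behavior.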
Go-UT-Bench: A Fine-Tuning Dataset for LLM-Based Unit Test Generation in Go
PositiveArtificial Intelligence
The Go-UT-Bench dataset, introduced in a recent study, addresses the training data imbalance faced by code LLMs, particularly in Golang. This dataset comprises 5,264 pairs of code and unit tests sourced from 10 permissively licensed Golang repositories. The study demonstrates that fine-tuning LLMs with this dataset significantly enhances their performance, with models outperforming their base versions on over 75% of benchmark tasks.
Experience-Guided Adaptation of Inference-Time Reasoning Strategies
PositiveArtificial Intelligence
The article discusses the Experience-Guided Reasoner (EGuR), a novel AI system designed to adapt its problem-solving strategies based on experiences accumulated during inference time. Unlike existing systems that only modify textual inputs, EGuR generates tailored strategies dynamically, allowing for a more flexible approach to AI reasoning. This advancement addresses the challenge of enabling agentic AI systems to adapt their methodologies post-training.
Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness
NeutralArtificial Intelligence
The paper titled 'Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness' discusses the capabilities of large language models (LLMs) in biomedical natural language processing (NLP) tasks. It highlights the sensitivity of LLMs to demonstration selection and addresses the hallucination issue through retrieval-augmented LLMs (RAL). However, there is a lack of rigorous evaluation of RAL's impact on various biomedical NLP tasks, which complicates understanding its capabilities in this domain.
Getting Reacquainted with BASE64
NeutralArtificial Intelligence
The article discusses BASE64, a data encoding method that has been around for 30 years. It highlights the inefficiency of BASE64, which increases data size by 33%, yet remains essential in modern applications like JSON and REST APIs. The author shares personal experiences of encountering BASE64 in various projects, emphasizing the need for a pragmatic approach to using this outdated technology.
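The 33% inflation mentioned above follows directly from how Base64 works: every 3 input bytes become 4 output characters, a 4/3 ratio. A quick check with Python's standard library:

```python
import base64

# Base64 maps each 3-byte group to 4 ASCII characters, so encoded
# output is 4/3 the size of the input (~33% larger), plus padding
# when the input length is not a multiple of 3.
payload = b"x" * 3000               # 3,000 raw bytes
encoded = base64.b64encode(payload)
print(len(payload), len(encoded))   # 3000 4000
print(len(encoded) / len(payload))  # exactly 4/3 here, since 3000 % 3 == 0
```

That overhead is the price of shipping arbitrary binary data through text-only channels such as JSON and REST APIs, which is why the format persists despite its inefficiency.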