Can LLMs subtract numbers?

arXiv — cs.CL · Wednesday, November 5, 2025, 5:00:00 AM
A recent study published on arXiv evaluates the arithmetic capabilities of eight pretrained large language models (LLMs), focusing on subtraction, an operation that has received far less attention than the widely tested addition. The findings show that these LLMs achieve significantly lower accuracy on subtraction than on comparable addition problems, exposing a notable gap in their numerical reasoning. By systematically comparing performance across multiple models, the study clarifies where LLMs struggle in basic arithmetic, contributes to ongoing discussions about their limitations on mathematical tasks, and suggests areas for improvement. These results align with related coverage emphasizing the challenges LLMs face in precise numerical computation.
— via World Pulse Now AI Editorial System
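
As a rough illustration of how such an addition-versus-subtraction comparison can be set up, here is a minimal evaluation sketch in Python. The `query_model` helper is a hypothetical placeholder for whatever LLM API is under test; the prompt wording, operand range, and sample size are assumptions chosen for illustration, not the paper's actual protocol.

```python
import random

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; wire this to a real
    # LLM API. The paper's prompting setup is not reproduced here.
    raise NotImplementedError("connect to an LLM before running")

def make_problems(op: str, n: int, lo: int = 0, hi: int = 9999):
    """Generate n random two-operand problems for '+' or '-'."""
    problems = []
    for _ in range(n):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        answer = a + b if op == "+" else a - b  # '-' may yield negatives
        prompt = f"What is {a} {op} {b}? Answer with a number only."
        problems.append((prompt, answer))
    return problems

def accuracy(op: str, n: int = 200) -> float:
    """Fraction of problems where the reply parses to the correct integer."""
    correct = 0
    for prompt, answer in make_problems(op, n):
        reply = query_model(prompt).strip()
        try:
            correct += int(reply) == answer
        except ValueError:
            pass  # an unparseable reply counts as wrong
    return correct / n

# Example usage once query_model is implemented:
# print(f"addition:    {accuracy('+'):.2%}")
# print(f"subtraction: {accuracy('-'):.2%}")
```

Drawing the operands for both operations from the same distribution keeps difficulty comparable, so any accuracy gap can be attributed to the operator itself rather than to the size of the numbers involved.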
