Lethe: Layer- and Time-Adaptive KV Cache Pruning for Reasoning-Intensive LLM Serving

arXiv — cs.LG · Thursday, November 13, 2025, 5:00 AM
Lethe targets a central bottleneck in serving large language models (LLMs) for reasoning-intensive workloads: the key-value (KV) cache, whose memory and latency costs grow with every decoded token and come to dominate long decoding sequences. Its approach combines layerwise sparsity-aware budget allocation with a Recency-Aware Selective Retention mechanism, dynamically pruning cached tokens based on their attention patterns and recency. This dual adaptability, across layers and across time, reduces memory usage while preserving the context needed for coherent, contextually relevant output, and the reported results show throughput gains of up to 2.56x. Such improvements make LLMs more practical for complex reasoning tasks, an increasingly important requirement in the evolving landscape of artificial intelligence.
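The summary does not spell out Lethe's scoring rule, so the sketch below only illustrates how the two mechanisms could compose: an entropy-based proxy for layerwise sparsity sets each layer's token budget, and a blend of attention mass and recency decides which tokens survive within a layer. The function name, the `recency_weight` blend, and the entropy heuristic are illustrative assumptions, not Lethe's actual algorithm.

```python
import numpy as np

def prune_kv_cache(attn_scores, recency_weight=0.3, total_budget=512):
    """Sketch of layer- and time-adaptive KV cache pruning.

    attn_scores: list of 1-D arrays; attn_scores[l][t] is the cumulative
    attention mass layer l has placed on cached token t.
    Returns a list of retained token indices per layer.
    """
    # Layerwise sparsity-aware allocation: layers whose attention is
    # concentrated on few tokens (low entropy) get a smaller budget.
    entropies = []
    for scores in attn_scores:
        p = scores / scores.sum()
        entropies.append(-(p * np.log(p + 1e-9)).sum())
    weights = np.array(entropies) / np.sum(entropies)
    budgets = np.maximum(1, (weights * total_budget).astype(int))

    retained = []
    for scores, budget in zip(attn_scores, budgets):
        n = len(scores)
        # Recency-aware retention: blend attention mass with a recency
        # prior so recent tokens survive even if rarely attended yet.
        recency = np.arange(n) / max(n - 1, 1)
        combined = ((1 - recency_weight) * (scores / (scores.max() + 1e-9))
                    + recency_weight * recency)
        keep = np.sort(np.argsort(combined)[-min(budget, n):])
        retained.append(keep)
    return retained

# Example: 32 layers, 1000 cached tokens each.
kept = prune_kv_cache([np.random.rand(1000) for _ in range(32)])
```

The design point worth noting is that the two decisions are decoupled: the budget is set per layer, while retention within a layer protects recently generated tokens, which matters for long reasoning chains whose latest steps have not yet accumulated attention.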
— via World Pulse Now AI Editorial System


Recommended Readings
AccKV: Towards Efficient Audio-Video LLMs Inference via Adaptive-Focusing and Cross-Calibration KV Cache Optimization
Positive · Artificial Intelligence
Recent advances in Audio-Video Large Language Models (AV-LLMs) have improved performance on tasks such as audio-visual question answering and multimodal dialog. The study observes that the key-value (KV) cache in AV-LLMs grows much larger than in text-only models because video and audio introduce an extended temporal dimension. It also finds that AV-LLM attention shifts toward the video modality in higher layers, so merging audio and video KV caches indiscriminately can degrade performance.
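The cross-calibration rule itself is not described in this summary; as a rough, assumption-laden sketch of the adaptive-focusing idea, each layer's cache budget could be split across modalities in proportion to the attention each modality actually receives, so higher layers, which skew toward video, retain more video tokens:

```python
def split_modality_budget(audio_attn, video_attn, layer_budgets):
    """Per-layer cache budget split between audio and video tokens.

    audio_attn[l], video_attn[l]: total attention mass layer l places on
    audio vs. video tokens (assumed nonzero). layer_budgets[l]: number of
    tokens layer l may keep. Returns (audio_keep, video_keep) per layer,
    so the cache is calibrated per modality rather than merged blindly.
    """
    splits = []
    for a, v, b in zip(audio_attn, video_attn, layer_budgets):
        video_share = v / (a + v)  # higher layers tend to favor video
        v_keep = int(round(b * video_share))
        splits.append((b - v_keep, v_keep))
    return splits
```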
Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness
Neutral · Artificial Intelligence
The paper 'Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness' examines the capabilities of large language models (LLMs) on biomedical natural language processing (NLP) tasks. It notes that LLMs are sensitive to demonstration selection and that retrieval-augmented LLMs (RALs) help mitigate hallucination, yet RALs have not been rigorously evaluated across diverse biomedical NLP tasks, leaving their capabilities in this domain poorly understood.
ExPairT-LLM: Exact Learning for LLM Code Selection by Pairwise Queries
Positive · Artificial Intelligence
ExPairT-LLM is an exact-learning algorithm for selecting the correct program from multiple candidates generated by large language models (LLMs). Existing selection methods often fail because they misidentify nonequivalent programs as equivalent or rely on LLM judgments that are not always accurate. ExPairT-LLM instead poses pairwise membership and pairwise equivalence queries, improving the accuracy of program selection; evaluations show a significant improvement in success rate over prior algorithms.
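The exact query protocol is the paper's contribution and is not reproduced here; the sketch below only shows the general shape of pairwise selection, with `equivalent` and `prefer` as hypothetical stand-ins for the paper's pairwise equivalence and membership queries:

```python
from itertools import combinations

def select_program(candidates, equivalent, prefer):
    """Pick one program from a pool of LLM-generated candidates.

    equivalent(p, q) -> bool: pairwise equivalence query.
    prefer(p, q) -> p or q: pairwise query for the likelier-correct one.
    """
    # Cluster candidates into equivalence classes so votes for
    # behaviorally identical programs are pooled, not split.
    classes = []
    for c in candidates:
        for cls in classes:
            if equivalent(cls[0], c):
                cls.append(c)
                break
        else:
            classes.append([c])

    # Round-robin tournament between class representatives; each
    # pairwise preference is one query, and class size breaks ties.
    reps = [cls[0] for cls in classes]
    wins = {id(r): 0 for r in reps}
    for p, q in combinations(reps, 2):
        wins[id(prefer(p, q))] += 1
    best = max(classes, key=lambda cls: (wins[id(cls[0])], len(cls)))
    return best[0]
```

Pooling equivalent candidates first means each pairwise query is spent on genuinely distinct behaviors, which is exactly where naive majority voting over raw outputs goes wrong.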
Go-UT-Bench: A Fine-Tuning Dataset for LLM-Based Unit Test Generation in Go
Positive · Artificial Intelligence
The Go-UT-Bench dataset, introduced in a recent study, addresses the training data imbalance faced by code LLMs, particularly in Golang. This dataset comprises 5,264 pairs of code and unit tests sourced from 10 permissively licensed Golang repositories. The study demonstrates that fine-tuning LLMs with this dataset significantly enhances their performance, with models outperforming their base versions on over 75% of benchmark tasks.
Experience-Guided Adaptation of Inference-Time Reasoning Strategies
Positive · Artificial Intelligence
The article discusses the Experience-Guided Reasoner (EGuR), a novel AI system designed to adapt its problem-solving strategies based on experiences accumulated during inference time. Unlike existing systems that only modify textual inputs, EGuR generates tailored strategies dynamically, allowing for a more flexible approach to AI reasoning. This advancement addresses the challenge of enabling agentic AI systems to adapt their methodologies post-training.
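EGuR's actual architecture is not detailed in this summary, so the following is a deliberately generic sketch of the underlying pattern: an experience store that retrieves the best-scoring strategy for a problem type and updates scores from observed outcomes. Every name here is hypothetical.

```python
from collections import defaultdict

class ExperienceStore:
    """Hypothetical sketch of inference-time strategy adaptation."""

    def __init__(self, default_strategy):
        self.default = default_strategy
        # problem_type -> strategy -> accumulated reward
        self.scores = defaultdict(lambda: defaultdict(float))

    def pick(self, problem_type):
        # Reuse the strategy that has worked best on similar problems,
        # falling back to a default when there is no experience yet.
        candidates = self.scores[problem_type]
        return max(candidates, key=candidates.get) if candidates else self.default

    def record(self, problem_type, strategy, reward):
        # Accumulate reward so better strategies win future picks.
        self.scores[problem_type][strategy] += reward

store = ExperienceStore("chain_of_thought")
store.record("math", "tool_use", 1.0)
print(store.pick("math"))  # -> "tool_use"
```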
Increase my familiarity with BASE64.
Neutral · Artificial Intelligence
The article discusses BASE64, a data-encoding scheme that has been in use for roughly 30 years. Although BASE64 is inefficient, inflating data size by about 33%, it remains essential in modern applications such as JSON payloads and REST APIs. The author recounts encountering BASE64 across various projects and argues for a pragmatic approach to this aging but entrenched technology.
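The 33% overhead follows directly from the encoding itself: every 3 input bytes map to 4 ASCII characters (plus padding at the end). A quick check with Python's standard base64 module:

```python
import base64

payload = bytes(range(256))           # arbitrary binary data
encoded = base64.b64encode(payload)   # every 3 bytes -> 4 ASCII chars

print(len(payload), len(encoded))     # 256 -> 344, about a third larger
assert base64.b64decode(encoded) == payload  # lossless round trip
```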