CLEV: LLM-Based Evaluation Through Lightweight Efficient Voting for Free-Form Question-Answering

arXiv — cs.CL · Wednesday, November 12, 2025 at 5:00:00 AM
CLEV, introduced in a recent arXiv submission, addresses the persistent challenge of evaluating free-form question answering (QA), where diverse, open-ended responses cause traditional automatic metrics to miss semantic nuance and produce inconsistent judgments. The proposed method, Consensus via Lightweight Efficient Voting (CLEV), has two primary LLM judges assess each answer and invokes a third only when they disagree. Because most answers are settled by the first two votes, the approach improves evaluation reliability while avoiding unnecessary model calls, making it scalable and resource-efficient. Experiments, including human evaluations, demonstrate CLEV's effectiveness and establish it as a robust framework for evaluating LLMs in free-form QA scenarios.
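For readers curious how the voting economizes on calls, here is a minimal sketch of the disagreement-triggered scheme; the judge prompt, the binary YES/NO labels, and the function names are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of CLEV-style lightweight voting. The prompt wording and
# verdict format below are assumptions, not the paper's exact protocol.

def judge(model, question, reference, answer):
    """Ask one LLM judge for a binary verdict on `answer`.

    `model` is any callable mapping a prompt string to a text response.
    """
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {answer}\n"
        "Is the candidate answer correct? Reply YES or NO."
    )
    return model(prompt).strip().upper().startswith("YES")

def clev_verdict(judge_a, judge_b, tiebreaker, question, reference, answer):
    """Two primary judges vote; the third model runs only on disagreement."""
    v1 = judge(judge_a, question, reference, answer)
    v2 = judge(judge_b, question, reference, answer)
    if v1 == v2:
        return v1  # consensus: no third call needed
    # disagreement: a single extra call breaks the tie
    return judge(tiebreaker, question, reference, answer)
```

The savings come from the consensus branch: whenever the two primary judges agree, the tiebreaker model is never queried.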
— via World Pulse Now AI Editorial System


Recommended Readings
Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness
Neutral · Artificial Intelligence
The paper 'Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness' examines the capabilities of large language models (LLMs) on biomedical natural language processing (NLP) tasks. It highlights the sensitivity of LLMs to demonstration selection and notes that retrieval-augmented LLMs (RAL) are used to address hallucination. However, RAL's impact across biomedical NLP tasks has not been rigorously evaluated, leaving its capabilities in this domain poorly understood.
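As background, retrieval augmentation in its generic form looks roughly like the following; this is a schematic of the RAL setup the paper benchmarks, not its actual pipeline, and `embed`, `llm`, and the prompt format are assumed stand-ins:

```python
# Generic retrieval-augmented generation loop (a schematic, not the paper's
# pipeline). `llm` and `embed` are assumed callables.

import numpy as np

def retrieve(query_vec, corpus_vecs, passages, k=3):
    """Return the k passages whose embeddings best match the query."""
    scores = corpus_vecs @ query_vec  # cosine similarity if vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

def answer_with_retrieval(llm, embed, passages, corpus_vecs, question):
    # Prepend retrieved evidence so the model grounds its answer in it.
    context = "\n".join(retrieve(embed(question), corpus_vecs, passages))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)
```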
Bridging Hidden States in Vision-Language Models
Positive · Artificial Intelligence
Vision-Language Models (VLMs) integrate visual content with natural language. Current methods typically fuse the two modalities either early in the encoding process or late, through pooled embeddings. This paper introduces a lightweight fusion module that uses cross-only, bidirectional attention layers to align hidden states from both modalities, improving cross-modal understanding while keeping the encoders non-causal. The method aims to improve VLM performance by exploiting the inherent structure of visual and textual data.
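A rough sketch of what a cross-only, bidirectional fusion layer between two non-causal encoders might look like; the dimensions, residual connections, and normalization choices here are assumptions, not the paper's specification:

```python
# Sketch of a cross-only, bidirectional attention fusion layer between a
# vision encoder and a text encoder (structure assumed for illustration).

import torch
import torch.nn as nn

class CrossOnlyFusion(nn.Module):
    def __init__(self, dim=768, heads=8):
        super().__init__()
        # "Cross-only": each modality attends to the other, never to itself.
        self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, text_h, image_h):
        # Bidirectional: both directions run over the same non-causal states.
        t, _ = self.txt_to_img(text_h, image_h, image_h)  # text queries image
        v, _ = self.img_to_txt(image_h, text_h, text_h)   # image queries text
        return self.norm_t(text_h + t), self.norm_v(image_h + v)
```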
Bias-Restrained Prefix Representation Finetuning for Mathematical Reasoning
Positive · Artificial Intelligence
The paper 'Bias-Restrained Prefix Representation Finetuning for Mathematical Reasoning' introduces Bias-REstrained Prefix Representation FineTuning (BREP ReFT), a method designed to strengthen models' mathematical reasoning by addressing the limitations of existing representation finetuning (ReFT) approaches, which struggle on mathematical tasks. Extensive experiments show that BREP ReFT outperforms both standard ReFT and weight-based parameter-efficient finetuning (PEFT) methods.
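For context, representation finetuning intervenes on hidden states rather than on model weights. The sketch below illustrates a generic low-rank intervention at prefix token positions in that family; BREP's bias-restraint mechanism is not reproduced here, and all dimensions are assumptions:

```python
# Generic prefix-position representation intervention (illustrates the ReFT
# family this paper builds on; not BREP's actual formulation).

import torch
import torch.nn as nn

class PrefixReFTIntervention(nn.Module):
    """Adds a learned low-rank edit to hidden states at the first
    `prefix_len` positions, leaving base model weights frozen."""

    def __init__(self, dim=768, rank=8, prefix_len=4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        self.prefix_len = prefix_len

    def forward(self, hidden):  # hidden: (batch, seq, dim)
        edited = hidden.clone()
        p = min(self.prefix_len, hidden.size(1))
        edited[:, :p] = hidden[:, :p] + self.up(self.down(hidden[:, :p]))
        return edited
```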
Transformers know more than they can tell -- Learning the Collatz sequence
Neutral · Artificial Intelligence
The study investigates how well transformer models predict long steps of the Collatz sequence, an arithmetic function that maps each odd integer to the next odd integer in its trajectory. Accuracy varies sharply with the base used to encode numbers, reaching up to 99.7% for bases 24 and 32 but dropping to 37% and 25% for bases 11 and 3. Despite these variations, all models exhibit a common learning pattern, succeeding on inputs that share similar residuals modulo 2^p.
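The arithmetic target itself is easy to state in code: the function below computes the odd-to-odd Collatz step the models are trained to predict (in the study, inputs and outputs are encoded in various bases):

```python
# The Collatz map restricted to odd integers: each odd n is sent to the
# next odd number in its trajectory.

def odd_collatz_successor(n: int) -> int:
    """For odd n, compute 3n + 1, then strip all factors of 2."""
    assert n % 2 == 1, "defined on odd integers only"
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

# Example: 7 -> 22 -> 11, so the odd successor of 7 is 11.
assert odd_collatz_successor(7) == 11
```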
ExPairT-LLM: Exact Learning for LLM Code Selection by Pairwise Queries
Positive · Artificial Intelligence
ExPairT-LLM is an exact-learning algorithm for selecting code from the multiple programs generated by large language models (LLMs). Existing code-selection algorithms often fail to identify the correct program, either because they misidentify nonequivalent programs or because they rely on LLM responses that are not always accurate. ExPairT-LLM addresses both issues with pairwise membership and pairwise equivalence queries, improving the accuracy of program selection. Evaluations show a significant improvement in success rate over existing algorithms.
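The flavor of the approach can be sketched as follows; the oracle callables stand in for the paper's pairwise membership and equivalence queries, and the single-elimination structure is an assumed simplification rather than ExPairT-LLM's exact algorithm:

```python
# Schematic of code selection via pairwise queries (assumed simplification).

def select_program(candidates, equivalent, prefer):
    """Pick one program from LLM-generated `candidates`.

    `equivalent(p, q)` -> bool: a pairwise equivalence query.
    `prefer(p, q)`     -> p or q: a pairwise query naming the likelier-correct one.
    """
    # 1. Collapse candidates into equivalence classes so that duplicate
    #    behaviors are compared only once.
    classes = []
    for prog in candidates:
        for cls in classes:
            if equivalent(prog, cls[0]):
                cls.append(prog)
                break
        else:
            classes.append([prog])
    # 2. Tournament of pairwise preference queries over class representatives.
    winner = classes[0][0]
    for cls in classes[1:]:
        winner = prefer(winner, cls[0])
    return winner
```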
Go-UT-Bench: A Fine-Tuning Dataset for LLM-Based Unit Test Generation in Go
Positive · Artificial Intelligence
The Go-UT-Bench dataset, introduced in a recent study, addresses the training data imbalance faced by code LLMs, particularly in Golang. This dataset comprises 5,264 pairs of code and unit tests sourced from 10 permissively licensed Golang repositories. The study demonstrates that fine-tuning LLMs with this dataset significantly enhances their performance, with models outperforming their base versions on over 75% of benchmark tasks.
Experience-Guided Adaptation of Inference-Time Reasoning Strategies
Positive · Artificial Intelligence
The article discusses the Experience-Guided Reasoner (EGuR), an AI system that adapts its problem-solving strategies based on experience accumulated at inference time. Unlike existing systems that only modify textual inputs, EGuR generates tailored strategies dynamically, enabling a more flexible approach to reasoning. This addresses the challenge of letting agentic AI systems adapt their methods after training.
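At a high level, such a system closes a loop between strategy generation and accumulated experience. The sketch below is an illustrative reading of that loop; the memory format and the strategy objects are assumptions, not EGuR's actual design:

```python
# Schematic experience-guided inference loop (illustrative assumptions only).

def solve_with_experience(task, propose_strategy, run, score, memory):
    """Generate a strategy conditioned on past experience, execute it,
    then store the outcome so later tasks can adapt."""
    strategy = propose_strategy(task, memory)  # tailored per task, not fixed
    result = run(strategy, task)
    memory.append({"task": task, "strategy": strategy, "score": score(result)})
    return result
```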