Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study

arXiv — cs.CL · Wednesday, November 12, 2025, 5:00:00 AM
In the evolving landscape of Explainable Natural Language Processing (NLP), a recent study introduces an automated framework that uses large language models (LLMs) to generate textual explanations for model predictions. Traditional approaches depend on human annotations, which are expensive and labor-intensive to collect and therefore hard to scale. The study assesses the quality of LLM-generated explanations with standard Natural Language Generation (NLG) metrics and measures their impact on the performance of pre-trained language models (PLMs) across several natural language inference (NLI) tasks. The results show that the automated explanations match, and often surpass, human-annotated explanations in how much they improve model performance, pointing to a practical path toward scalable, automated generation of textual explanations for interpreting and improving NLP models.
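The summary does not spell out how explanations are fed to the classifier, but the core idea of pairing an NLI input with an LLM-generated explanation before classification can be sketched roughly as follows. Everything in this sketch is an assumption for illustration: the `generate_explanation` helper is a hypothetical stand-in for the LLM call, and `roberta-large-mnli` is simply a convenient off-the-shelf NLI model, not the PLM used in the paper.

```python
# Illustrative sketch only: augment an NLI example with an LLM-generated
# explanation, then classify with a pre-trained language model (PLM).
# The explanation generator, prompt wording, and model choice are
# assumptions, not the authors' actual setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

PLM_NAME = "roberta-large-mnli"  # assumed off-the-shelf NLI classifier

tokenizer = AutoTokenizer.from_pretrained(PLM_NAME)
model = AutoModelForSequenceClassification.from_pretrained(PLM_NAME)


def generate_explanation(premise: str, hypothesis: str) -> str:
    """Hypothetical stand-in for an LLM call that produces a free-text
    explanation of why the hypothesis does or does not follow."""
    # In practice this would prompt an LLM, e.g.:
    #   "Premise: ... Hypothesis: ... Explain their relationship in one sentence."
    return "A person who is sleeping cannot simultaneously be playing a guitar."


def classify_with_explanation(premise: str, hypothesis: str) -> str:
    explanation = generate_explanation(premise, hypothesis)
    # One simple way to inject the explanation: concatenate it with the
    # hypothesis so the PLM sees premise / (hypothesis + explanation) pairs.
    inputs = tokenizer(
        premise,
        f"{hypothesis} Explanation: {explanation}",
        return_tensors="pt",
        truncation=True,
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    label_id = int(logits.argmax(dim=-1))
    return model.config.id2label[label_id]


if __name__ == "__main__":
    print(classify_with_explanation(
        "A man is sleeping on the couch.",
        "The man is playing guitar.",
    ))
```

In a training setting, the same concatenation could be applied to every example before fine-tuning the PLM, which is one plausible way explanation quality could translate into the classification gains the study reports.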
— via World Pulse Now AI Editorial System
