MTQ-Eval: Multilingual Text Quality Evaluation for Language Models
MTQ-Eval is a framework that uses large language models (LLMs) to evaluate text quality across 115 languages. It is trained on both high- and low-quality text examples, which refines the model's internal representations of quality and improves evaluation accuracy. The study shows that automatically generated text quality preference data can be used to train open-source base LLMs whose judgments align with established quality ratings. A comprehensive evaluation finds that MTQ-Eval not only improves text quality assessment itself but also yields measurable gains on downstream tasks, suggesting that LLM-based evaluators can generalize beyond task-specific settings.
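The summary does not spell out how the automatically generated preference data is structured, but one plausible reading is a DPO-style (prompt, chosen, rejected) record built from paired high- and low-quality texts. The Python sketch below illustrates that idea under those assumptions; the pairwise A/B framing, the quality criteria listed in the prompt, and all names are hypothetical rather than taken from the paper.

```python
import random
from dataclasses import dataclass


@dataclass
class PreferenceExample:
    """One preference-tuning record in the common (prompt, chosen, rejected)
    layout used by DPO-style trainers. The schema MTQ-Eval actually uses is
    not given in this summary; this structure is an assumption."""
    prompt: str    # evaluation instruction plus the two candidate texts
    chosen: str    # response naming the genuinely higher-quality text
    rejected: str  # response naming the lower-quality text


def build_example(language: str, high_quality: str,
                  low_quality: str) -> PreferenceExample:
    """Turn one high/low-quality text pair into a preference record."""
    # Randomize which candidate appears first so the evaluator cannot
    # learn a positional shortcut instead of an actual quality signal.
    if random.random() < 0.5:
        text_a, text_b, better = high_quality, low_quality, "A"
    else:
        text_a, text_b, better = low_quality, high_quality, "B"
    worse = "B" if better == "A" else "A"
    prompt = (
        f"Compare the two {language} texts below on fluency, coherence, "
        f"and naturalness, then answer 'A' or 'B'.\n\n"
        f"Text A: {text_a}\n\nText B: {text_b}\n\nHigher-quality text:"
    )
    return PreferenceExample(prompt=prompt,
                             chosen=f" {better}",
                             rejected=f" {worse}")


# Usage: the low-quality counterpart would come from whatever automatic
# degradation the real pipeline applies; a shuffled copy stands in here.
example = build_example(
    language="English",
    high_quality="The committee approved the proposal after a brief discussion.",
    low_quality="approved The committee the after proposal discussion brief a.",
)
print(example.prompt, example.chosen)
```

Records in this shape can be fed to any standard preference-optimization trainer, which would reward the base LLM for preferring the higher-quality candidate, matching the summary's description of learning from both high- and low-quality examples.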
— via World Pulse Now AI Editorial System
