LangMark: A Multilingual Dataset for Automatic Post-Editing

arXiv — cs.CL · Monday, November 24, 2025, 5:00 AM
  • LangMark has been introduced as a new multilingual dataset aimed at enhancing automatic post-editing (APE) for machine-translated texts. It contains 206,983 (source, machine translation, human post-edit) triplets across seven languages, including Brazilian Portuguese, French, and Japanese. The post-edits were produced by expert linguists, making the dataset well suited to improving translation quality and reducing reliance on human intervention.
  • The release of LangMark is significant as it addresses the critical gap in large-scale multilingual datasets necessary for developing effective APE systems, which can lead to improved translation accuracy and efficiency in various applications.
  • This development highlights the growing role of large language models (LLMs) in natural language processing: as LLMs are increasingly applied to tasks like APE, questions about their capabilities and biases sharpen the need for robust training datasets that ensure high-quality outputs across diverse languages.
— via World Pulse Now AI Editorial System
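The triplet structure described above can be sketched as a simple record type; the field names and language codes below are illustrative assumptions, not LangMark's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ApeTriplet:
    """One APE training example: source text, raw MT output,
    and the linguist's post-edited correction.
    Field names are hypothetical, not LangMark's real schema."""
    source: str     # original source-language segment
    mt: str         # raw machine translation
    post_edit: str  # expert linguist's corrected translation
    lang: str       # target language code, e.g. "pt-BR", "fr", "ja"

# Example instance (invented text, for illustration only)
triplet = ApeTriplet(
    source="The cat sat on the mat.",
    mt="Le chat s'est assis sur le tapis.",
    post_edit="Le chat était assis sur le tapis.",
    lang="fr",
)
```

An APE model is trained to map `(source, mt)` to `post_edit`, so each triplet supplies both the input pair and the gold correction.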


Continue Reading
Sequoia-Backed Pennylane Eyes Funding at $4.3 Billion Valuation
Positive · Artificial Intelligence
Pennylane, a French startup specializing in accounting software, is reportedly in discussions for a new funding round that could value the company at $4.25 billion, nearly double its previous valuation from just seven months ago.
How Well Do LLMs Understand Tunisian Arabic?
Negative · Artificial Intelligence
A recent study highlights the limitations of Large Language Models (LLMs) in understanding Tunisian Arabic, also known as Tunizi. This research introduces a new dataset that includes parallel translations in Tunizi, standard Tunisian Arabic, and English, aiming to benchmark LLMs on their comprehension of this low-resource language. The findings indicate that the neglect of such dialects may hinder millions of Tunisians from engaging with AI in their native language.
Fairness Evaluation of Large Language Models in Academic Library Reference Services
Positive · Artificial Intelligence
A recent evaluation of large language models (LLMs) in academic library reference services examined their ability to provide equitable support across diverse user demographics, including sex, race, and institutional roles. The study found no significant differentiation in responses based on race or ethnicity, with only minor evidence of bias against women in one model. LLMs showed nuanced responses tailored to users' institutional roles, reflecting professional norms.
Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning
Positive · Artificial Intelligence
A novel learning framework utilizing Large Language Models (LLMs) has been introduced to enhance the generalization capabilities of Neural Combinatorial Optimization (NCO) for Vehicle Routing Problems (VRPs). This approach addresses the significant performance drop observed when NCO models trained on small-scale instances are applied to larger scenarios, primarily due to distributional shifts between training and testing data.
Aligning Vision to Language: Annotation-Free Multimodal Knowledge Graph Construction for Enhanced LLMs Reasoning
Positive · Artificial Intelligence
A novel approach called Vision-align-to-Language integrated Knowledge Graph (VaLiK) has been proposed to enhance reasoning in Large Language Models (LLMs) by constructing Multimodal Knowledge Graphs (MMKGs) without the need for manual annotations. This method aims to address challenges such as incomplete knowledge and hallucination artifacts that LLMs face due to the limitations of traditional Knowledge Graphs (KGs).
MUCH: A Multilingual Claim Hallucination Benchmark
Positive · Artificial Intelligence
A new benchmark named MUCH has been introduced to assess Claim-level Uncertainty Quantification (UQ) in Large Language Models (LLMs). This benchmark includes 4,873 samples in English, French, Spanish, and German, and provides 24 generation logits per token, enhancing the evaluation of UQ methods under realistic conditions.
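Since MUCH exposes 24 generation logits per token, a simple claim-level uncertainty signal can be computed directly from them. The sketch below is a generic baseline (mean log-probability of the generated tokens), not MUCH's own evaluation method; the assumption that the chosen token's logit comes first is also illustrative:

```python
import math

def claim_confidence(token_logits, chosen_idx=0):
    """Mean log-probability of the generated tokens, a common
    claim-level UQ baseline. `token_logits` is a list of per-token
    logit lists (e.g. the 24 logits MUCH stores per token); we assume,
    for illustration, that the chosen token's logit sits at `chosen_idx`."""
    logps = []
    for logits in token_logits:
        # log-sum-exp with max-subtraction for numerical stability
        m = max(logits)
        log_z = m + math.log(sum(math.exp(x - m) for x in logits))
        logps.append(logits[chosen_idx] - log_z)
    return sum(logps) / len(logps)

# Two tokens with toy logit lists; higher (less negative) means more confident
score = claim_confidence([[2.0, 0.0], [1.0, 1.0]])
```

Lower mean log-probability suggests the model was less certain while generating the claim, which such benchmarks correlate against hallucination labels.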
Hallucinate Less by Thinking More: Aspect-Based Causal Abstention for Large Language Models
Positive · Artificial Intelligence
A new framework called Aspect-Based Causal Abstention (ABCA) has been introduced to enhance the reliability of Large Language Models (LLMs) by enabling early abstention from generating potentially incorrect responses. This approach analyzes the internal diversity of LLM knowledge through causal inference, allowing models to assess the reliability of their knowledge before generating answers.
AutoLink: Autonomous Schema Exploration and Expansion for Scalable Schema Linking in Text-to-SQL at Scale
Positive · Artificial Intelligence
The introduction of AutoLink marks a significant advancement in the field of text-to-SQL, addressing the challenges of supplying entire database schemas to Large Language Models (LLMs) by reformulating schema linking into an iterative, agent-driven process. This innovative framework allows for dynamic exploration and expansion of relevant schema components, achieving high recall rates in schema linking tasks.