What Drives Cross-lingual Ranking? Retrieval Approaches with Multilingual Language Models

arXiv — cs.CL · Tuesday, November 25, 2025 at 5:00:00 AM


Continue Reading
Cornell Tech Secures $7 Million From NASA and Schmidt Sciences to Modernise arXiv
Positive · Artificial Intelligence
Cornell Tech has secured a $7 million investment from NASA and Schmidt Sciences aimed at modernizing arXiv, a preprint repository for scientific papers. This funding will facilitate the migration of arXiv to cloud infrastructure, upgrade its outdated codebase, and develop new tools to enhance the discovery of relevant preprints for researchers.
LexInstructEval: Lexical Instruction Following Evaluation for Large Language Models
Positive · Artificial Intelligence
LexInstructEval has been introduced as a new benchmark and evaluation framework aimed at enhancing the ability of Large Language Models (LLMs) to follow complex lexical instructions. This framework utilizes a formal, rule-based grammar to break down intricate instructions into manageable components, facilitating a more systematic evaluation process.
For Those Who May Find Themselves on the Red Team
Neutral · Artificial Intelligence
A recent position paper argues that literary scholars should engage with research on large language model (LLM) interpretability, suggesting that red-teaming could serve as a venue for that engagement. The paper contends that current interpretability standards are insufficient for evaluating LLMs.
Representational Stability of Truth in Large Language Models
Neutral · Artificial Intelligence
Recent research has introduced the concept of representational stability in large language models (LLMs), focusing on how these models encode distinctions between true, false, and neither-true-nor-false content. The study assesses this stability by training a linear probe on LLM activations to differentiate true from not-true statements and measuring shifts in decision boundaries under label changes.
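The core mechanism described above, training a linear probe on activations to separate true from not-true statements, can be sketched as follows. The activation vectors here are synthetic stand-ins; the study's actual model, layer choice, and training setup are not specified in the summary.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32

# Hypothetical "activations": true statements cluster around +mu,
# not-true statements around -mu (synthetic data, for illustration only).
mu = rng.normal(size=dim)
X_true = rng.normal(size=(200, dim)) + mu
X_not = rng.normal(size=(200, dim)) - mu
X = np.vstack([X_true, X_not])
y = np.concatenate([np.ones(200), np.zeros(200)])

# A linear probe is just logistic regression on the activations,
# trained here with plain gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {acc:.2f}")
```

Representational stability would then be measured by retraining or re-evaluating such a probe after label changes and tracking how far the decision boundary (w, b) shifts.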
SGM: A Framework for Building Specification-Guided Moderation Filters
Positive · Artificial Intelligence
A new framework named Specification-Guided Moderation (SGM) has been introduced to enhance content moderation filters for large language models (LLMs). This framework allows for the automation of training data generation based on user-defined specifications, addressing the limitations of traditional safety-focused filters. SGM aims to provide scalable and application-specific alignment goals for LLMs.
Personalized LLM Decoding via Contrasting Personal Preference
Positive · Artificial Intelligence
A novel decoding-time approach named CoPe (Contrasting Personal Preference) has been proposed to enhance personalization in large language models (LLMs) after parameter-efficient fine-tuning on user-specific data. This method aims to maximize each user's implicit reward signal during text generation, demonstrating an average improvement of 10.57% in personalization metrics across five tasks.
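A decoding-time contrast between a personalized model and its base model can be sketched as below. CoPe's exact formulation is not given in the summary, so this follows the generic contrastive-decoding recipe, with a hypothetical strength parameter `alpha` and toy logits.

```python
import numpy as np

def contrastive_logits(personal_logits, base_logits, alpha=1.0):
    """Boost tokens the personalized model prefers relative to the base model."""
    return personal_logits + alpha * (personal_logits - base_logits)

vocab = ["the", "a", "my", "favorite"]
base = np.array([2.0, 1.5, 0.2, 0.1])      # generic model scores (toy values)
personal = np.array([1.8, 1.2, 1.5, 1.4])  # after user-specific fine-tuning (toy values)

adjusted = contrastive_logits(personal, base, alpha=2.0)
print(vocab[int(np.argmax(base))], "->", vocab[int(np.argmax(adjusted))])
# the generic top token loses to a token the personalized model uniquely prefers
```

The contrast term amplifies exactly the preferences that fine-tuning introduced, which is one way to realize an "implicit reward signal" at generation time without further training.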
Generating Reading Comprehension Exercises with Large Language Models for Educational Applications
Positive · Artificial Intelligence
A new framework named Reading Comprehension Exercise Generation (RCEG) has been proposed to leverage large language models (LLMs) for automatically generating personalized English reading comprehension exercises. This framework utilizes fine-tuned LLMs to create content candidates, which are then evaluated by a discriminator to select the highest quality output, significantly enhancing the educational content generation process.
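The generate-then-select pattern described above can be sketched as follows. The scoring function here is a toy stand-in for the discriminator, and the candidate strings are invented for illustration.

```python
from typing import Callable, List

def select_best(candidates: List[str], score: Callable[[str], float]) -> str:
    """Return the candidate the discriminator rates highest."""
    return max(candidates, key=score)

# Hypothetical generator outputs and a toy quality proxy (word count);
# the actual framework would use fine-tuned LLM candidates and a learned scorer.
candidates = [
    "Read the passage and answer: who is the main character?",
    "Answer.",
    "After reading the passage, explain in two sentences why the main character changes.",
]
best = select_best(candidates, score=lambda c: len(c.split()))
print(best)
```

Separating generation from selection lets the discriminator enforce quality criteria (difficulty, coverage, clarity) that are hard to guarantee in a single generation pass.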
Speech Recognition Model Improves Text-to-Speech Synthesis using Fine-Grained Reward
Positive · Artificial Intelligence
Recent advancements in text-to-speech (TTS) technology have led to the development of a new model called Word-level TTS Alignment by ASR-driven Attentive Reward (W3AR), which utilizes fine-grained reward signals from automatic speech recognition (ASR) systems to enhance TTS synthesis. This model addresses the limitations of traditional evaluation methods that often overlook specific problematic words in utterances.