It's All About the Confidence: An Unsupervised Approach for Multilingual Historical Entity Linking using Large Language Models

arXiv — cs.CL · Wednesday, January 14, 2026 at 5:00:00 AM
  • A new approach called MHEL-LLaMo has been introduced for multilingual historical entity linking, combining a Small Language Model (SLM) with a Large Language Model (LLM). This unsupervised ensemble method addresses challenges in processing historical texts, such as linguistic variation and noisy inputs, by using a multilingual bi-encoder for candidate retrieval and an instruction-tuned LLM for final predictions.
  • MHEL-LLaMo is significant because it reduces reliance on extensive training data and domain-specific rules, improving the scalability and efficiency of historical entity linking. By using the SLM's confidence scores, the system differentiates straightforward cases from complex ones, so that expensive LLM calls can be reserved for where they are needed, optimizing computational resources.
  • This advancement reflects a broader trend in natural language processing where large language models are increasingly employed to tackle historical and low-resource language challenges. The integration of multilingual capabilities and unsupervised methods highlights the ongoing evolution in AI, aiming to improve accuracy and reduce biases in language processing across diverse linguistic contexts.
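The retrieve-then-route pipeline the summary describes can be sketched in a few lines. Everything below is a hypothetical toy, not the paper's implementation: the embeddings, the 0.85 confidence threshold, and the `llm_fallback` hook are illustrative stand-ins, since the summary does not specify the actual bi-encoder, confidence measure, or LLM prompting.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors, as a bi-encoder
    # retrieval score typically is.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def link_entity(mention_vec, candidates, threshold=0.85, llm_fallback=None):
    """Score all candidates with the SLM; accept its top prediction when
    confidence (top similarity) clears the threshold, otherwise hand a
    shortlist to the LLM."""
    scored = sorted(
        ((cosine(mention_vec, vec), name) for name, vec in candidates.items()),
        reverse=True,
    )
    top_score, top_name = scored[0]
    if top_score >= threshold:
        return top_name, top_score, "slm"  # easy case: SLM prediction accepted
    # hard case: defer the top-k candidates to the instruction-tuned LLM
    shortlist = [name for _, name in scored[:3]]
    return llm_fallback(shortlist), top_score, "llm"

# Toy vectors standing in for bi-encoder embeddings of a historical mention
# and two knowledge-base entries (Wikidata-style labels for illustration).
candidates = {
    "Q90 (Paris)": [0.9, 0.1, 0.0],
    "Q830149 (Paris, Texas)": [0.6, 0.5, 0.3],
}
mention = [0.88, 0.12, 0.05]
result = link_entity(mention, candidates, llm_fallback=lambda names: names[0])
print(result[0], result[2])
```

The design point illustrated is the one the summary credits for efficiency: the cheap SLM handles every mention, and the LLM is invoked only when the SLM's own score signals a hard case.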
— via World Pulse Now AI Editorial System


Continue Reading
STAGE: A Benchmark for Knowledge Graph Construction, Question Answering, and In-Script Role-Playing over Movie Screenplays
Neutral · Artificial Intelligence
The introduction of STAGE (Screenplay Text, Agents, Graphs and Evaluation) marks a significant advancement in the field of narrative understanding, providing a comprehensive benchmark for evaluating knowledge graph construction, scene-level event summarization, long-context screenplay question answering, and in-script character role-playing across 150 films in English and Chinese.
Get away with less: Need of source side data curation to build parallel corpus for low resource Machine Translation
Positive · Artificial Intelligence
A recent study emphasizes the importance of data curation in machine translation, particularly for low-resource languages. The research introduces LALITA, a framework designed to optimize the selection of source sentences for creating parallel corpora, focusing on English-Hindi bi-text to enhance machine translation performance.
Analyzing Bias in False Refusal Behavior of Large Language Models for Hate Speech Detoxification
Neutral · Artificial Intelligence
A recent study analyzed the false refusal behavior of large language models (LLMs) in the context of hate speech detoxification, revealing that these models disproportionately refuse tasks involving higher semantic toxicity and specific target groups, particularly in English datasets.
VocalBench: Benchmarking the Vocal Conversational Abilities for Speech Interaction Models
Neutral · Artificial Intelligence
VocalBench has been introduced as a benchmarking tool to evaluate the conversational abilities of speech interaction models, utilizing approximately 24,000 curated instances in English and Mandarin across four dimensions: semantic quality, acoustic performance, conversational abilities, and robustness. This initiative aims to address the shortcomings of existing evaluations that fail to replicate real-world scenarios and provide comprehensive comparisons of model capabilities.
