Improving Direct Persian-English Speech-to-Speech Translation with Discrete Units and Synthetic Parallel Data

arXiv — cs.LG | Tuesday, November 18, 2025 at 5:00:00 AM
  • A new direct Persian-English speech-to-speech translation approach has been presented, built on discrete speech units and synthetic parallel data.
  • The advance matters because it could enable more effective communication between Persian and English speakers, with potential benefits for sectors such as education, business, and technology.
— via World Pulse Now AI Editorial System
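As background on the "discrete units" in the headline: direct speech-to-speech systems commonly quantize self-supervised speech features into a discrete unit sequence via nearest-centroid assignment (e.g. k-means codebooks). The sketch below illustrates that standard step with toy data; the function name, shapes, and deduplication choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

def extract_units(features, centroids):
    # features: (frames, dim) continuous speech representations (toy data here,
    # not the paper's actual encoder output); centroids: (k, dim) codebook.
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    units = dists.argmin(axis=1)  # nearest-centroid index per frame
    # Collapse consecutive duplicates, as is common for unit sequences.
    dedup = [int(units[0])]
    for u in units[1:]:
        if u != dedup[-1]:
            dedup.append(int(u))
    return dedup

feats = np.array([[0.1], [0.9], [0.95], [0.05]])
centroids = np.array([[0.0], [1.0]])
print(extract_units(feats, centroids))  # -> [0, 1, 0]
```

The resulting unit sequence can then be treated like text tokens by a translation model, which is what makes the "direct" discrete-unit pipeline tractable.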


Continue Reading
Do Language Models Associate Sound with Meaning? A Multimodal Study of Sound Symbolism
Neutral · Artificial Intelligence
A recent study explores sound symbolism, revealing how Multimodal Large Language Models (MLLMs) interpret auditory information in human languages. The research introduces LEX-ICON, a dataset comprising 8,052 words and 2,930 pseudo-words across four languages, examining MLLMs' phonetic iconicity through phoneme-level attention scores.
LongCat-Image Technical Report
Positive · Artificial Intelligence
LongCat-Image has been introduced as an innovative open-source bilingual foundation model for image generation, specifically designed to enhance multilingual text rendering and photorealism. This model employs advanced data curation strategies throughout its training phases, achieving state-of-the-art performance in text-rendering and aesthetic quality, particularly for complex Chinese characters.
SwissGov-RSD: A Human-annotated, Cross-lingual Benchmark for Token-level Recognition of Semantic Differences Between Related Documents
Neutral · Artificial Intelligence
SwissGov-RSD has been introduced as the first naturalistic, document-level, cross-lingual dataset designed for recognizing semantic differences across documents in multiple languages, including English, German, French, and Italian. This dataset includes 224 multi-parallel documents annotated at the token level by human annotators, addressing a previously underexplored area in text generation evaluation and multilingual content alignment.
Efficient ASR for Low-Resource Languages: Leveraging Cross-Lingual Unlabeled Data
Positive · Artificial Intelligence
A systematic investigation into automatic speech recognition (ASR) for low-resource languages has been conducted, focusing on Perso-Arabic languages such as Persian, Arabic, and Urdu. The study demonstrates that leveraging cross-lingual unlabeled data can effectively enhance ASR performance without the need for extensive labeled datasets. A 300M parameter model was developed, achieving results comparable to larger systems while utilizing a 3,000-hour multilingual corpus.
GUMBridge: a Corpus for Varieties of Bridging Anaphora
Neutral · Artificial Intelligence
GUMBridge has been introduced as a new resource for bridging anaphora, encompassing 16 diverse genres of English. This corpus aims to provide comprehensive coverage of the phenomenon, which involves understanding references in discourse that depend on previous entities, such as identifying 'the door' as belonging to 'a house.'
TeluguST-46: A Benchmark Corpus and Comprehensive Evaluation for Telugu-English Speech Translation
Neutral · Artificial Intelligence
A new benchmark corpus for Telugu-English speech translation, named TeluguST-46, has been developed, comprising 46 hours of manually verified data. This initiative addresses the underexplored area of speech translation for Telugu, a language spoken by over 80 million people, and includes a systematic evaluation of various translation architectures, highlighting the performance of IndicWhisper + IndicMT and finetuned SeamlessM4T models.
Understanding Syntactic Generalization in Structure-inducing Language Models
Neutral · Artificial Intelligence
Structure-inducing Language Models (SiLMs) have been trained from scratch using three different architectures: Structformer, UDGN, and GPST, focusing on their syntactic generalization capabilities and performance across various NLP tasks. The study evaluates the models on their induced syntactic representations, grammaticality judgment tasks, and training dynamics, revealing that no single architecture excels across all metrics.
TRepLiNa: Layer-wise CKA+REPINA Alignment Improves Low-Resource Machine Translation in Aya-23 8B
Positive · Artificial Intelligence
The TRepLiNa method, which combines Centered Kernel Alignment (CKA) and REPINA, has been introduced to enhance low-resource machine translation, particularly for Indian languages like Mundari, Santali, and Bhili, using the Aya-23 8B model. This approach aims to improve translation quality from low-resource languages to high-resource languages such as Hindi and English.
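For readers unfamiliar with Centered Kernel Alignment: linear CKA is a standard similarity measure between two layers' representation matrices. A minimal sketch of that textbook formula follows; the toy shapes and data are illustrative and say nothing about how TRepLiNa applies it inside Aya-23 8B.

```python
import numpy as np

def linear_cka(X, Y):
    # X: (n_samples, d1), Y: (n_samples, d2) representations of the same inputs.
    # Center each feature dimension, as CKA requires.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F), bounded in [0, 1].
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
print(round(linear_cka(X, X), 6))  # identical representations give 1.0
```

A higher CKA between chosen layers indicates more aligned representations, which is the quantity an alignment-based training objective like the one described can push upward.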