Is Lying Only Sinful in Islam? Exploring Religious Bias in Multilingual Large Language Models Across Major Religions

arXiv — cs.CL · Thursday, December 4, 2025 at 5:00:00 AM
  • Recent research highlights persistent religious bias in multilingual large language models (LLMs), particularly in how they represent Islam: the models often misrepresent religious contexts, especially when responding in Bengali rather than English. The study introduces the BRAND dataset, which covers major South Asian religions and aims to improve bias detection in AI systems (a minimal sketch of such a cross-lingual probe appears below).
  • This development is significant as it underscores the challenges faced by AI developers in ensuring accurate and fair representation of diverse religious beliefs, which is crucial for fostering understanding and reducing misinformation in multilingual contexts.
  • The findings reflect broader concerns about how language proficiency affects AI performance, suggesting that biases may be compounded for user groups who interact with these systems in languages other than English, and raising questions about the inclusivity and reliability of AI technologies.
— via World Pulse Now AI Editorial System
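
To make the setup concrete, here is a minimal sketch of the kind of cross-lingual probe the study describes; it is not the BRAND protocol itself. The model choice (bigscience/bloom-560m), the prompts, and the Bengali wording are illustrative assumptions.

```python
# A minimal sketch (illustrative assumptions, not the BRAND protocol) of a
# cross-lingual bias probe: ask a multilingual model the same religiously
# framed question in English and in Bengali and compare whether the answers
# single out one religion.
from transformers import pipeline

# Assumed multilingual open-weight model; any model covering Bengali would do.
generator = pipeline("text-generation", model="bigscience/bloom-560m")

prompts = {
    "en": "Is lying considered a sin in Islam, Hinduism, Buddhism, and Christianity? Answer briefly.",
    # Bengali rendering of the same question (illustrative wording).
    "bn": "ইসলাম, হিন্দু, বৌদ্ধ ও খ্রিস্টান ধর্মে কি মিথ্যা বলা পাপ? সংক্ষেপে উত্তর দিন।",
}

for lang, prompt in prompts.items():
    out = generator(prompt, max_new_tokens=60, do_sample=False)[0]["generated_text"]
    print(f"[{lang}] {out}\n")

# Bias analysis would then check whether the two answers attribute the same
# moral judgement to all religions or treat one religion differently.
```
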


Continue Reading
Different types of syntactic agreement recruit the same units within large language models
Neutral · Artificial Intelligence
Recent research has shown that large language models (LLMs) can effectively differentiate between grammatical and ungrammatical sentences, revealing that various types of syntactic agreement, such as subject-verb and determiner-noun, utilize overlapping units within these models. This study involved a functional localization approach to identify the responsive units across 67 English syntactic phenomena in seven open-weight models.
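
As a concrete illustration of the general idea behind functional localization (not the paper's exact protocol), the sketch below selects hidden units that respond differently to grammatical versus ungrammatical sentences and measures how much those units overlap across two agreement types. The model (gpt2), the toy sentence pairs, and the top-k selection criterion are assumptions.

```python
# A minimal functional-localization sketch: find hidden units sensitive to the
# grammaticality contrast, then compare the localized units across phenomena.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "gpt2"  # illustrative open-weight model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()

def unit_responses(sentences):
    """Mean activation per hidden unit (last layer, averaged over tokens)."""
    acts = []
    for s in sentences:
        ids = tok(s, return_tensors="pt")
        with torch.no_grad():
            h = model(**ids).hidden_states[-1]       # (1, seq_len, hidden)
        acts.append(h.mean(dim=1).squeeze(0))        # (hidden,)
    return torch.stack(acts)                         # (n_sentences, hidden)

def localize(grammatical, ungrammatical, top_k=50):
    """Indices of the units most sensitive to the grammaticality contrast."""
    diff = unit_responses(grammatical).mean(0) - unit_responses(ungrammatical).mean(0)
    return set(diff.abs().topk(top_k).indices.tolist())

# Toy subject-verb vs. determiner-noun agreement items (hypothetical examples).
sv_units = localize(["The keys are on the table."], ["The keys is on the table."])
dn_units = localize(["These books are new."], ["This books are new."])

overlap = len(sv_units & dn_units) / len(sv_units | dn_units)  # Jaccard overlap
print(f"Unit overlap across agreement types: {overlap:.2f}")
```
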
Reveal-Bangla: A Dataset for Cross-Lingual Multi-Step Reasoning Evaluation
Neutral · Artificial Intelligence
A new dataset named Reveal-Bangla has been introduced, focusing on cross-lingual multi-step reasoning evaluation in Bangla, derived from the English Reveal dataset. This dataset includes both binary and non-binary question types and aims to assess the reasoning capabilities of multilingual small language models in Bangla compared to English.
RFOP: Rethinking Fusion and Orthogonal Projection for Face-Voice Association
Positive · Artificial Intelligence
The RFOP project introduces a novel approach to face-voice association in a multilingual setting, focusing on English-German pairs. By revisiting fusion and orthogonal projection techniques, the method achieves an equal error rate (EER) of 33.1% and ranks 3rd in the FAME 2026 challenge, whose challenge set is designed to sharpen the evaluation of face-voice association.
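
For readers unfamiliar with the metric, the equal error rate is the operating point where the false accept and false reject rates coincide; the sketch below shows a standard way to compute it from verification scores. The scores and labels are made-up stand-ins, not FAME data.

```python
# A minimal sketch of how an EER like the 33.1% reported for RFOP is typically
# computed for a verification task. Scores and labels below are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the threshold where false accept rate equals false reject rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # operating point closest to FAR == FRR
    return (fpr[idx] + fnr[idx]) / 2

labels = np.array([1, 1, 0, 0, 1, 0])       # 1 = same identity, 0 = different
scores = np.array([0.9, 0.4, 0.35, 0.2, 0.7, 0.6])
print(f"EER = {equal_error_rate(labels, scores) * 100:.1f}%")
```
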
Multilingual Pretraining for Pixel Language Models
Positive · Artificial Intelligence
The introduction of PIXEL-M4 marks a significant advancement in multilingual pretraining for pixel language models, which operate directly on images of rendered text. This model has been pretrained on four diverse languages: English, Hindi, Ukrainian, and Simplified Chinese, showcasing its ability to outperform English-only models in tasks involving non-Latin scripts.
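
The core idea, rendering text as pixels rather than subword tokens, can be illustrated with a small sketch; this is an assumption-laden toy, not the PIXEL-M4 pipeline. Canvas size, patch width, and the use of PIL's default font (which only covers Latin text; non-Latin scripts would need a suitable Unicode TrueType font) are illustrative choices.

```python
# A toy sketch of the pixel-language-model input pipeline: render a line of
# text to a grayscale image and split it into fixed-size patch "tokens".
import numpy as np
from PIL import Image, ImageDraw

def render_text(text, height=16, width=256):
    """Render a string onto a grayscale canvas (default PIL bitmap font)."""
    img = Image.new("L", (width, height), color=255)
    ImageDraw.Draw(img).text((2, 2), text, fill=0)
    return np.asarray(img, dtype=np.float32) / 255.0   # (height, width)

def to_patches(pixels, patch_width=16):
    """Split the rendered line into a sequence of patch inputs."""
    h, w = pixels.shape
    n = w // patch_width
    return pixels[:, : n * patch_width].reshape(h, n, patch_width).transpose(1, 0, 2)

patches = to_patches(render_text("Hello, world"))
print(patches.shape)   # (num_patches, height, patch_width)
```
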