Breaking Language Barriers or Reinforcing Bias? A Study of Gender and Racial Disparities in Multilingual Contrastive Vision Language Models
Neutral · Artificial Intelligence
A recent study examines the social biases present in multilingual vision-language models (VLMs) designed for universal image-text retrieval. The audit measures gender- and race-related disparities in retrieval behavior across four publicly available VLM variants, including M-CLIP and SigLIP-2, and spans ten languages. The findings are significant because they show how these models can perpetuate bias, raising questions about their fairness and reliability in diverse linguistic contexts.
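To make this kind of measurement concrete, the sketch below shows one way a retrieval-disparity check for a contrastive VLM could be set up. It is a minimal illustration, not the study's actual protocol: the embeddings, group labels, the `retrieval_rate` and `disparity` helpers, and the language codes are all placeholders, and the gap metric used here (difference in top-k retrieval rates between demographic groups) is only one of several possible disparity measures.

```python
# Illustrative retrieval-disparity sketch, not the paper's exact protocol.
# Assumes precomputed image/text embeddings from a multilingual VLM
# (e.g. M-CLIP or SigLIP-2) and per-image demographic group labels.
import numpy as np

def retrieval_rate(img_emb, txt_emb, groups, target_group, k=1):
    """Fraction of text queries whose top-k retrieved images (by cosine
    similarity) belong to `target_group`."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sims = txt @ img.T                       # (num_texts, num_images)
    topk = np.argsort(-sims, axis=1)[:, :k]  # top-k image indices per query
    hits = np.isin(topk, np.where(groups == target_group)[0])
    return hits.any(axis=1).mean()

def disparity(img_emb, txt_emb, groups):
    """Gap between the most- and least-retrieved group for one language's
    queries; 0 indicates parity under this simple metric."""
    rates = [retrieval_rate(img_emb, txt_emb, groups, g)
             for g in np.unique(groups)]
    return max(rates) - min(rates)

# Random embeddings stand in for real model outputs in this sketch.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(200, 512))
groups = rng.choice(["group_a", "group_b"], size=200)
per_language = {lang: disparity(img_emb, rng.normal(size=(50, 512)), groups)
                for lang in ["en", "de", "zh"]}  # placeholder language codes
print(per_language)
```

In a real audit, the text queries would be translations of the same caption set into each language and the image embeddings would come from the VLM under test, so that differences in the per-language gaps reflect the model rather than the data.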
— via World Pulse Now AI Editorial System