An Under-Explored Application for Explainable Multimodal Misogyny Detection in code-mixed Hindi-English

arXiv — cs.CL · Wednesday, January 14, 2026 at 5:00:00 AM
  • A new study introduces a multimodal, explainable web application for detecting misogyny in code-mixed Hindi-English, built on transformer models such as XLM-RoBERTa (a minimal text-classification sketch follows below). The application aims to make hate speech detection more interpretable, a pressing concern as online misogyny grows.
  • The work is significant because it addresses the need for effective tools to combat hate speech on digital platforms, particularly in low-resource languages where such technologies are scarce.
  • The initiative reflects a broader trend in artificial intelligence toward explainability and fairness, echoing recent studies that stress reliable metrics and governance frameworks for sensitive applications such as hate speech detection.
— via World Pulse Now AI Editorial System
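
To make the text-classification step concrete, here is a minimal sketch of how a detector built on XLM-RoBERTa might score a code-mixed input using the Hugging Face transformers library. The checkpoint name, binary label set, and example sentence are assumptions for illustration; the study's actual fine-tuned weights, multimodal fusion, and explainability components are not reproduced here.

```python
# Minimal sketch of the text branch only: an XLM-RoBERTa classifier for
# code-mixed Hindi-English text. Checkpoint, labels, and the example input
# are illustrative assumptions, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"  # assumed base checkpoint; the paper's weights are unknown

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # binary: misogynous vs. not (assumed label set)
)

text = "yeh ladki kuch nahi kar sakti"  # hypothetical code-mixed example
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
# The classification head is freshly initialized here, so these probabilities
# are meaningless until the model is fine-tuned on labeled code-mixed data.
print(f"P(misogynous) = {probs[0, 1]:.3f}")
```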


Continue Reading
What’s coming up at #AAAI2026?
Neutral · Artificial Intelligence
The Annual AAAI Conference on Artificial Intelligence is set to take place in Singapore from January 20 to January 27, marking the first time the event is held outside North America. This 40th edition will include invited talks, tutorials, workshops, and a comprehensive technical program, highlighting the global significance of AI advancements.
A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making
Positive · Artificial Intelligence
A novel approach to explainable artificial intelligence (AI) has been proposed, leveraging Quantum Boltzmann Machines (QBMs) and Classical Boltzmann Machines (CBMs) to enhance decision-making transparency. This framework utilizes gradient-based saliency maps and SHAP for feature attribution, addressing the critical challenge of explainability in high-stakes domains like healthcare and finance.
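
As an illustration of the post-hoc SHAP step mentioned in this summary, the sketch below computes per-feature attributions for an ordinary gradient-boosted tree model. The dataset and model are stand-ins; the paper's Quantum and Classical Boltzmann Machines and its gradient-based saliency maps are not reproduced.

```python
# Generic post-hoc SHAP attribution on a tree ensemble (requires the
# shap and xgboost packages); a stand-in for the paper's own models.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)  # per-feature attribution per sample
shap.summary_plot(shap_values, X)       # global view of feature importance
```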
Bridging the Trust Gap: Clinician-Validated Hybrid Explainable AI for Maternal Health Risk Assessment in Bangladesh
Positive · Artificial Intelligence
A study has introduced a hybrid explainable AI (XAI) framework for maternal health risk assessment in Bangladesh, combining ante-hoc fuzzy logic with post-hoc SHAP explanations, validated through clinician feedback. The fuzzy-XGBoost model achieved 88.67% accuracy on 1,014 maternal health records, with a validation study indicating a strong preference for hybrid explanations among healthcare professionals.
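
The ante-hoc fuzzy component can be pictured as mapping a raw clinical reading onto interpretable linguistic categories before a model such as XGBoost consumes them. The sketch below is a generic illustration of that idea; the thresholds and feature names are invented and are not taken from the Bangladesh study.

```python
# A minimal sketch of the ante-hoc fuzzy idea: convert a raw value into
# interpretable fuzzy memberships. Thresholds are invented for illustration.
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peak of 1 at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def bp_memberships(systolic_bp):
    """Degrees to which a systolic BP reading is 'low', 'normal', or 'high'."""
    return {
        "low":    triangular(systolic_bp, 70, 90, 110),
        "normal": triangular(systolic_bp, 100, 120, 140),
        "high":   triangular(systolic_bp, 130, 160, 200),
    }

# e.g. 135 mmHg is partly 'normal' (0.25) and faintly 'high' (~0.17);
# such memberships could be appended as features for the XGBoost model.
print(bp_memberships(135.0))
```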
From brain scans to alloys: Teaching AI to make sense of complex research data
Neutral · Artificial Intelligence
Artificial intelligence (AI) is increasingly used to analyze complex data across fields from medical imaging to materials science. Many AI systems struggle, however, when real-world data diverges from ideal conditions, and varying measurement quality undermines their accuracy and reliability.
