Bridging the Trust Gap: Clinician-Validated Hybrid Explainable AI for Maternal Health Risk Assessment in Bangladesh

arXiv — cs.LG · Wednesday, January 14, 2026 at 5:00:00 AM
  • A study has introduced a hybrid explainable AI (XAI) framework for maternal health risk assessment in Bangladesh, combining ante-hoc fuzzy logic with post-hoc SHAP explanations and validating the design through clinician feedback. The fuzzy-XGBoost model achieved 88.67% accuracy on 1,014 maternal health records, and the validation study found a strong preference for hybrid explanations among healthcare professionals. A minimal code sketch of this kind of pipeline appears after these notes.
  • This development is significant as it addresses the critical barrier of explainability and trust in AI applications within resource-constrained healthcare settings, potentially improving maternal health outcomes in Bangladesh.
  • The integration of explainable AI in healthcare reflects a growing recognition of the need for transparent AI systems, particularly in high-stakes environments. This aligns with ongoing discussions about the importance of reliable metrics for AI explainability and the role of human expertise in interpreting AI-generated insights.
— via World Pulse Now AI Editorial System
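
The summary names the key ingredients of the hybrid pattern (fuzzy memberships as the ante-hoc, human-readable structure; XGBoost as the classifier; SHAP for post-hoc attributions) without the paper's implementation details. The sketch below is only a minimal illustration of that pattern, assuming the commonly used 1,014-record maternal health risk dataset schema (Age, SystolicBP, BS, RiskLevel); the file name, fuzzy cut-points, and hyperparameters are assumptions, not the authors' configuration.

```python
# Minimal sketch of a fuzzy-XGBoost + SHAP pipeline. The fuzzy rule base,
# membership cut-points, and hyperparameters below are illustrative only;
# the paper's actual choices are not reproduced here.
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def tri(x, a, b, c):
    """Triangular membership degree: rises over [a, b], falls over [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

df = pd.read_csv("maternal_health_risk.csv")  # hypothetical file name
X = df.drop(columns=["RiskLevel"])
y = df["RiskLevel"].astype("category").cat.codes  # risk labels -> 0/1/2

# Ante-hoc step: append interpretable fuzzy membership degrees as features.
X = X.assign(
    bp_high=tri(df["SystolicBP"], 120, 140, 180),  # illustrative cut-points
    bs_high=tri(df["BS"], 7, 11, 19),
    age_risk=tri(df["Age"], 30, 40, 60),
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# Post-hoc step: SHAP attributions over both raw and fuzzy features, so each
# prediction can be explained in the same terms the fuzzy rules use.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```

Appending the membership degrees as features keeps the SHAP attributions aligned with human-readable fuzzy concepts ("blood pressure is high"), which is the kind of hybrid explanation the validation study reportedly found clinicians preferred.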

Continue Reading
An Under-Explored Application for Explainable Multimodal Misogyny Detection in code-mixed Hindi-English
Positive · Artificial Intelligence
A new study has introduced a multimodal, explainable web application for detecting misogyny in code-mixed Hindi-English text, built on models such as XLM-RoBERTa. The application aims to make hate speech detection more interpretable, which matters as online misogyny continues to grow. A brief illustrative snippet of the underlying text classifier follows.
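
As a rough illustration of the base text model mentioned above, the snippet below loads XLM-RoBERTa for sequence classification with Hugging Face Transformers. The checkpoint name, the binary label set, and the omission of the multimodal and explainability components are all simplifying assumptions, since the teaser gives no implementation details.

```python
# Illustrative sketch only: XLM-RoBERTa for code-mixed text classification.
# The study's actual checkpoint, fine-tuning data, label set, and multimodal
# fusion are not specified here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # assumed binary: misogynous / not
)

text = "..."  # a code-mixed Hindi-English post
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # 0 or 1 under the assumed label set
```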
