A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems

arXiv — cs.LG · Wednesday, November 12, 2025, 5:00 AM
This systematic, PRISMA-guided survey of 31 human-centered evaluations of Explainable AI (XAI) in Clinical Decision Support Systems (CDSS) offers a clear picture of the current evaluation landscape. Over 80% of the reviewed studies use post-hoc, model-agnostic methods such as SHAP and Grad-CAM, typically with clinician samples of fewer than 25 participants. While these explanations generally increase clinician trust and diagnostic confidence, they also raise cognitive load and often fail to match the reasoning processes used in clinical practice. This gap in the effectiveness of existing XAI methods motivates the survey's call for a stakeholder-centric evaluation framework that integrates socio-technical principles and human-computer interaction research to better align AI explanations with clinical workflows.
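To make the "post-hoc, model-agnostic" idea concrete, here is a minimal sketch of one such technique: occlusion-style feature importance, which treats the model as a black box and measures how much each input feature influences the prediction. The `risk_model` function and its feature weights are entirely hypothetical stand-ins, not anything from the surveyed studies; libraries like SHAP refine this basic idea with game-theoretic attributions.

```python
import numpy as np

# Hypothetical stand-in for a black-box clinical risk model over
# three features (e.g., age, lab value, vital sign). Illustrative only.
def risk_model(X):
    return 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]

def occlusion_importance(model, X):
    """Model-agnostic importance: replace one feature at a time with
    its dataset mean and record the mean absolute change in output."""
    baseline = model(X)
    importances = []
    for j in range(X.shape[1]):
        X_occ = X.copy()
        X_occ[:, j] = X[:, j].mean()  # "occlude" feature j
        importances.append(np.abs(model(X_occ) - baseline).mean())
    return np.array(importances)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
imp = occlusion_importance(risk_model, X)
print(imp)  # feature 0 should dominate, mirroring its larger weight
```

Because the procedure only queries the model's outputs, it applies to any predictor, which is exactly why such methods dominate the surveyed literature despite their noted misalignment with clinical reasoning.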
— via World Pulse Now AI Editorial System


Recommended Readings
Deep Learning for Short-Term Precipitation Prediction in Four Major Indian Cities: A ConvLSTM Approach with Explainable AI
Positive · Artificial Intelligence
A new study presents a deep learning framework for short-term precipitation prediction in Bengaluru, Mumbai, Delhi, and Kolkata, using a hybrid CNN-ConvLSTM architecture. Trained on multi-decadal ERA5 reanalysis data, the framework incorporates explainable AI techniques to improve the transparency of its forecasts and the understanding of precipitation patterns. Per-city root mean square error (RMSE) values were 0.21 mm/day for Bengaluru, 0.52 mm/day for Mumbai, 0.48 mm/day for Delhi, and 1.80 mm/day for Kolkata.
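For readers unfamiliar with the reported metric, RMSE is simply the square root of the mean squared difference between observed and predicted values. The sketch below uses made-up toy data in mm/day, not the study's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

observed  = [2.0, 0.0, 5.5, 1.2]   # mm/day, hypothetical daily totals
predicted = [1.8, 0.1, 5.0, 1.5]
print(round(rmse(observed, predicted), 3))
```

Lower values indicate predictions closer to observations, so the 0.21 mm/day reported for Bengaluru reflects a tighter fit than the 1.80 mm/day for Kolkata.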