A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems

arXiv — cs.LG · Wednesday, November 12, 2025
This systematic, PRISMA-guided survey of 31 human-centered evaluations of Explainable AI (XAI) in Clinical Decision Support Systems (CDSS) maps the current evaluation landscape. Over 80% of the reviewed studies rely on post-hoc explanation methods such as SHAP and Grad-CAM, typically assessed with clinician samples of fewer than 25 participants. While these explanations generally strengthen clinician trust and diagnostic confidence, they also increase cognitive load and often misalign with the reasoning processes of clinical practice. This gap in the effectiveness of existing XAI methods motivates the survey's call for a stakeholder-centric evaluation framework that integrates socio-technical principles and human-computer interaction to better align AI explanations with clinical workflows.
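To make the idea of a "post-hoc" explanation concrete, the sketch below implements a toy occlusion-style attribution: it treats a model as a black box and scores each input feature by how much the prediction changes when that feature is replaced with a baseline value. This is a simplified stand-in for the family of methods the survey discusses (SHAP, Grad-CAM); the model, weights, and patient vector here are hypothetical placeholders, not taken from the survey.

```python
def model(features):
    # Hypothetical black-box "risk score"; the weights are illustrative only.
    w = [0.7, 0.2, 0.1]
    return sum(wi * xi for wi, xi in zip(w, features))

def occlusion_attribution(predict, x, baseline=0.0):
    """Post-hoc, model-agnostic attribution: replace each feature with a
    baseline value and record how much the prediction drops."""
    full = predict(x)
    attributions = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        attributions.append(full - predict(occluded))
    return attributions

# Hypothetical patient feature vector.
patient = [0.9, 0.4, 0.5]
attr = occlusion_attribution(model, patient)
# The first feature dominates, since it carries the largest weight.
```

Methods like SHAP refine this idea with principled averaging over feature coalitions, but the survey's finding applies to the whole family: such scores can build trust while still diverging from how clinicians actually reason about a case.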
— via World Pulse Now AI Editorial System
