Explainable AI Models for Clinical Decision Support Systems

Authors

  • Chitiz Tayal, Senior Director, Data and AI

DOI:

https://doi.org/10.63282/3050-9246.IJETCSIT-V5I1P112

Keywords:

Clinical Decision Support Systems (CDSS), Artificial Intelligence (AI), Explainable Artificial Intelligence (XAI), Machine Learning (ML)

Abstract

The data-driven era has brought artificial intelligence (AI) and clinical decision support systems (CDSS) into healthcare decision-making. However supportive these advanced systems are, their "black-box" nature, that is, the unexplainability of their working process, hinders their adoption in the health sector. Explainable Artificial Intelligence (XAI) can mitigate this challenge by making automated models transparent to healthcare professionals about how they reach their outputs, which promotes clinical trust in healthcare systems and supports efficient decision-making. Statistical analysis of the evaluated data indicates that explainable AI is central to clinical data accuracy. In addition, graphical evaluation of the data reveals a gender-based pattern: male patients most often presented with respiratory and heart conditions, while most female patients presented with diabetes. Explainable AI has the capability to enhance the accuracy of patient data and to inform treatments developed from it.
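To illustrate the kind of transparency the abstract describes, the sketch below shows a deliberately simple, inherently interpretable risk model whose prediction decomposes into per-feature contributions, in the spirit of feature-attribution explanations in XAI. This is a hypothetical illustration only: the feature names, weights, baselines, and patient values are all invented, and real CDSS models would be fitted to clinical data.

```python
import math

# Hypothetical additive risk model (all weights, baselines, and feature
# names are invented for illustration; not derived from the article's data).
WEIGHTS = {"age": 0.03, "systolic_bp": 0.02, "glucose": 0.015}
BASELINE = {"age": 50, "systolic_bp": 120, "glucose": 100}
BIAS = -2.0

def risk_score(patient):
    """Logistic risk computed from each feature's deviation from baseline."""
    logit = BIAS + sum(w * (patient[f] - BASELINE[f]) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-logit))

def explain(patient):
    """Per-feature contribution to the logit. Because the model is additive,
    the contributions sum exactly to (logit - bias), giving a clinician a
    faithful, human-readable account of why the score is high or low."""
    return {f: w * (patient[f] - BASELINE[f]) for f, w in WEIGHTS.items()}

patient = {"age": 64, "systolic_bp": 150, "glucose": 180}
print(round(risk_score(patient), 3))   # overall predicted risk
print(explain(patient))                # which features drove it, and by how much
```

The design choice here mirrors a common XAI trade-off: an inherently interpretable model yields exact explanations for free, whereas post-hoc methods (such as perturbation-based feature attribution) approximate explanations for more complex black-box models.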


References

[1] J. Amann et al., “To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems,” PLOS Digital Health, vol. 1, no. 2, p. e0000016, Feb. 2022, doi: https://doi.org/10.1371/journal.pdig.0000016.

[2] N. Rane, S. Choudhary, and J. Rane, “Explainable Artificial Intelligence (XAI) in healthcare: Interpretable Models for Clinical Decision Support,” Social Science Research Network, Nov. 15, 2023. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4637897

[3] P. P. Angelov, E. A. Soares, R. Jiang, N. I. Arnold, and P. M. Atkinson, “Explainable artificial intelligence: an analytical review,” WIREs Data Mining and Knowledge Discovery, vol. 11, no. 5, Jul. 2021, doi: https://doi.org/10.1002/widm.1424.

[4] V. Hassija et al., “Interpreting Black-Box Models: a Review on Explainable Artificial Intelligence,” Cognitive Computation, vol. 16, no. 1, pp. 45–74, Aug. 2023, doi: https://doi.org/10.1007/s12559-023-10179-8.

[5] Y. Du, A. M. Antoniadi, C. McNestry, F. M. McAuliffe, and C. Mooney, “The Role of XAI in Advice-Taking from a Clinical Decision Support System: A Comparative User Study of Feature Contribution-Based and Example-Based Explanations,” Applied Sciences, vol. 12, no. 20, p. 10323, Oct. 2022, doi: https://doi.org/10.3390/app122010323.

[6] M. Sujan et al., “Validation framework for the use of AI in healthcare: overview of the new British standard BS30440,” BMJ Health & Care Informatics, vol. 30, no. 1, p. e100749, Jun. 2023, doi: https://doi.org/10.1136/bmjhci-2023-100749.

[7] Y. Y. M. Aung, D. C. S. Wong, and D. S. W. Ting, “The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare,” British Medical Bulletin, vol. 139, no. 1, pp. 4–15, Aug. 2021, doi: https://doi.org/10.1093/bmb/ldab016.

[8] M. S. Reed et al., “Evaluating Impact from research: a Methodological Framework,” Research Policy, vol. 50, no. 4, 2021, Accessed: Oct. 25, 2025. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0048733320302225

[9] M. N. Laryeafio and O. C. Ogbewe, “Ethical Consideration dilemma: Systematic Review of Ethics in Qualitative Data Collection through Interviews,” Journal of Ethics in Entrepreneurship and Technology, vol. 3, no. 2, pp. 94–110, Aug. 2023, doi: https://doi.org/10.1108/JEET-09-2022-0014.

[10] J. Bajwa, U. Munir, A. Nori, and B. Williams, “Artificial intelligence in healthcare: Transforming the practice of medicine,” Future Healthcare Journal, vol. 8, no. 2, pp. 188–194, Jul. 2021, doi: https://doi.org/10.7861/fhj.2021-0095.

[11] R. L. Pierce, W. Van Biesen, D. Van Cauwenberge, J. Decruyenaere, and S. Sterckx, “Explainability in medicine in an era of AI-based clinical decision support systems,” Frontiers in Genetics, vol. 13, Sep. 2022, doi: https://doi.org/10.3389/fgene.2022.903600.

Published

2024-03-30

Issue

Section

Articles

How to Cite

Tayal C. Explainable AI Models for Clinical Decision Support Systems. IJETCSIT [Internet]. 2024 Mar. 30 [cited 2025 Nov. 13];5(1):112-8. Available from: https://www.ijetcsit.org/index.php/ijetcsit/article/view/467
