Artificial intelligence (AI) systems are now integral to sectors such as healthcare, finance, and criminal justice, where they support high-stakes decision-making. However, the opacity of their decision-making processes can undermine user trust and accountability. This paper explores methods for enhancing transparency and understanding in AI, including model-agnostic approaches such as LIME and SHAP, and intrinsically interpretable models such as decision trees and rule-based systems. It also proposes strategies such as hybrid models, user-centric design, and regulatory frameworks for enforcing transparency. Case studies in healthcare and finance demonstrate the practical application of these strategies, with the aim of balancing AI's technical performance against transparency and ethical deployment.
Keywords: AI transparency, explainable AI, interpretability, decision-making processes
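As a concrete illustration of the model-agnostic approaches named in the abstract, the sketch below applies SHAP to a tree-ensemble classifier. It is a minimal example, not the paper's experimental setup: the scikit-learn dataset and the random-forest model are illustrative assumptions.

```python
# Minimal sketch: explaining individual predictions of a tree ensemble
# with SHAP. Assumes the `shap` and `scikit-learn` packages are installed;
# the dataset and model are placeholders chosen for illustration only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature
# Shapley-value contributions, turning an otherwise opaque ensemble
# output into an inspectable explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # explain five predictions
```

Shapley values provide an additive attribution of each prediction to individual features, which is what makes the approach model-agnostic in principle (via kernel-based estimation) and efficient in practice for tree models.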
IRE Journals:
Vinayak Pillai, "Enhancing Transparency and Understanding in AI Decision-Making Processes," Iconic Research And Engineering Journals, Volume 8, Issue 1, 2024, Pages 168-172.
IEEE:
V. Pillai, "Enhancing Transparency and Understanding in AI Decision-Making Processes," Iconic Research And Engineering Journals, vol. 8, no. 1, pp. 168-172, 2024.