Enhancing Transparency and Understanding in AI Decision-Making Processes
  • Author(s): Vinayak Pillai
  • Paper ID: 1706039
  • Pages: 168-172
  • Published Date: 13-07-2024
  • Published In: Iconic Research And Engineering Journals
  • Publisher: IRE Journals
  • e-ISSN: 2456-8880
  • Volume/Issue: Volume 8 Issue 1 July-2024
Abstract

Artificial Intelligence (AI) systems are integral to sectors like healthcare, finance, and criminal justice, offering superior decision-making capabilities. However, the opacity of these processes can undermine user trust and accountability. This paper explores methods to enhance transparency and understanding in AI, including model-agnostic approaches like LIME and SHAP, and intrinsically interpretable models such as decision trees and rule-based systems. It also proposes strategies like hybrid models, user-centric design, and regulatory frameworks to enforce transparency. Case studies in healthcare and finance demonstrate these strategies' practical applications, aiming to balance AI's technical performance with transparency and ethical deployment.
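As a concrete illustration of the intrinsically interpretable models the abstract mentions, the sketch below trains a shallow decision tree (using scikit-learn and its bundled Iris dataset, chosen here for illustration only; the paper itself does not prescribe a library or dataset) and prints its learned rules verbatim, so a human can audit every decision path:

```python
# A minimal sketch of an intrinsically interpretable model: a shallow
# decision tree whose learned decision rules can be printed and audited.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# Limiting depth keeps the rule set small enough for human review --
# a deliberate trade of raw accuracy for transparency.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the tree as human-readable if/else rules, making the
# model's full decision process visible, unlike a black-box network.
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```

Model-agnostic tools such as LIME and SHAP serve the complementary case where the underlying model cannot be simplified this way and must instead be explained after the fact.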

Keywords

AI transparency, explainable AI, interpretability, decision-making processes

Citations

IRE Journals:
Vinayak Pillai, "Enhancing Transparency and Understanding in AI Decision-Making Processes," Iconic Research And Engineering Journals, Volume 8, Issue 1, 2024, pp. 168-172.

IEEE:
V. Pillai, "Enhancing Transparency and Understanding in AI Decision-Making Processes," Iconic Research And Engineering Journals, vol. 8, no. 1, pp. 168-172, Jul. 2024.