The Role of Explainable AI in Cybersecurity: Improving Analyst Trust in Automated Threat Assessment Systems
  • Author(s): Muhammad Ashraf Faheem; Sridevi Kakolu; Muhammad Aslam
  • Paper ID: 1703839
  • Page: 173-182
  • Published Date: 18-11-2024
  • Published In: Iconic Research And Engineering Journals
  • Publisher: IRE Journals
  • e-ISSN: 2456-8880
  • Volume/Issue: Volume 6 Issue 4 October-2022
Abstract

Explainable AI (XAI) is making a difference in cybersecurity today by addressing the opacity of the deep learning approaches used in threat assessment systems. Many conventional AI models are "black boxes": after a model has analyzed data and made a prediction, analysts cannot tell why a given threat decision was made. This lack of transparency creates a trust gap, because analysts using such a model often cannot confirm or interact with its results in a way that is explained to them. XAI, by contrast, brings interpretability into these systems, helping analysts understand which factors contribute to AI-generated predictions. By making the AI's decision-making process transparent in terms of the parameters it employs, XAI enables analysts to support or refute its conclusions quickly and with confidence in the accuracy of the decisions made. This is especially important in highly sensitive cybersecurity scenarios, where blind reliance on an AI's outputs can have drastic consequences. This research therefore examines the role of XAI in strengthening trust in AI systems and in threat detection and mitigation. Several real-world applications and cases are used to elaborate the advantages of XAI, with particular focus on how it improves system accuracy and reduces risk. The findings of the study suggest that XAI enhances analysts' confidence and fortifies and optimizes distinct cybersecurity frameworks.

Keywords

Explainable AI, XAI, Black-box models, Threat detection, AI compliance, Security frameworks

Citations

IRE Journals:
Muhammad Ashraf Faheem, Sridevi Kakolu, Muhammad Aslam, "The Role of Explainable AI in Cybersecurity: Improving Analyst Trust in Automated Threat Assessment Systems," Iconic Research And Engineering Journals, Volume 6, Issue 4, 2022, Page 173-182.

IEEE:
M. A. Faheem, S. Kakolu, and M. Aslam, "The Role of Explainable AI in Cybersecurity: Improving Analyst Trust in Automated Threat Assessment Systems," Iconic Research And Engineering Journals, 6(4).