Rising dependence on autonomous systems in sensitive sectors such as self-driving vehicles, smart surveillance, and industrial automation has exposed their vulnerability to adversarial attacks. Although highly accurate, traditional deep learning models remain susceptible to adversarial perturbations that can derail AI-driven decision making. To improve real-time detection of adversarial attacks, this paper proposes a hybrid neuro-symbolic framework that combines the logical reasoning strengths of symbolic AI with the pattern recognition abilities of neural networks. To address threats in autonomous cyber-physical environments, the proposed approach integrates neural feature extraction, symbolic logic-based validation, and dynamic adversarial mitigation for threat identification and response. Evaluated on standard adversarial attack benchmarks, the framework shows higher detection sensitivity, better interpretability, and stronger robustness against adversarial techniques than conventional deep learning models. This study highlights the potential of neuro-symbolic AI to protect autonomous systems and reduce adversarial risks in mission-critical applications.
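The abstract names three stages: neural feature extraction, symbolic logic-based validation, and dynamic mitigation. The sketch below shows, under stated assumptions, how the first two stages might be wired together in Python/PyTorch: a small CNN produces class probabilities, and simple hand-written consistency rules flag inputs whose predictions look suspicious. All class names, rules, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a neural-plus-symbolic detection pipeline (illustrative only,
# not the paper's actual architecture); assumes PyTorch is installed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralFeatureExtractor(nn.Module):
    """Small CNN that yields an embedding and class logits (hypothetical stand-in)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor):
        feats = self.conv(x).flatten(1)
        return feats, self.fc(feats)

def symbolic_validation(probs: torch.Tensor,
                        min_confidence: float = 0.6,
                        max_entropy: float = 1.5) -> torch.Tensor:
    """Rule-based check: flag inputs whose predictions violate simple consistency
    rules (low top-1 confidence or high predictive entropy), a common symptom of
    adversarially perturbed inputs. Thresholds here are assumed, not tuned."""
    top1 = probs.max(dim=1).values
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=1)
    return (top1 < min_confidence) | (entropy > max_entropy)

def detect(model: NeuralFeatureExtractor, x: torch.Tensor) -> torch.Tensor:
    """Return a boolean mask marking inputs suspected to be adversarial."""
    model.eval()
    with torch.no_grad():
        _, logits = model(x)
        probs = F.softmax(logits, dim=1)
    return symbolic_validation(probs)

if __name__ == "__main__":
    model = NeuralFeatureExtractor()
    batch = torch.randn(8, 3, 32, 32)   # stand-in for camera frames
    flagged = detect(model, batch)
    print(f"{int(flagged.sum())} of {len(batch)} inputs flagged for mitigation")
```

Inputs flagged by the rule layer would then be routed to a mitigation step (e.g., rejection or input purification), which is outside the scope of this sketch.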
Neuro-symbolic AI, adversarial attacks, autonomous systems, symbolic reasoning, neural networks, adversarial defense, real-time detection, deep learning security.