This paper critically reviews the application of AI to strengthen security on social media platforms, protecting users against unethical data collection and the use of psychological influence for political and commercial ends. AI and machine learning have become pervasive, revolutionizing user experiences while also creating sophisticated means for malicious actors to harvest personal information and manipulate consumer behavior. We examine existing AI-based security mechanisms, including encryption, anomaly detection, and user authentication, to assess their effectiveness in protecting user data privacy. As AI capabilities such as facial recognition and natural language processing become more widely deployed, these risks are likely to intensify. We discuss real-world examples that illustrate the repercussions of AI manipulation: the Cambridge Analytica case and Meta’s Ray-Ban augmented reality glasses. The paper then explores the ethical challenges of AI misuse and the importance of balancing security benefits against users’ freedom and privacy. We propose a four-part solution to these threats, including strengthened data protection, an ethical framework for AI usage, and broad user education. Policymakers, social media companies, and designers of AI systems can draw on our conclusions to ensure that such platforms protect their users from abuse.
RPA systems, artificial intelligence, social networks, NLP, facial recognition
IRE Journals:
Oladoyin Akinsuli, "AI Security in Social Engineering: Mitigating Risks of Data Harvesting and Targeted Manipulation," Iconic Research And Engineering Journals, vol. 8, no. 3, 2024, pp. 665-684.