From Perception to Prediction: Leveraging Explainable AI in Self-Driving Cars for Enhanced Passenger Trust

  • Jamuna Purushotham, Student, School of CS and IT, Department of MCA, JAIN (Deemed-to-be University), Jayanagar, Bengaluru, India
  • Srikanth V, Associate Professor, School of CS and IT, Department of MCA, JAIN (Deemed-to-be University), Jayanagar, Bengaluru, India
Keywords: Self-driving cars, Passenger trust, Explainable AI (XAI), Perception, Prediction, Decision-making

Abstract

Self-driving cars hold immense potential for revolutionizing transportation. However, public acceptance hinges on trust in the car's ability to navigate safely and make critical decisions. This trust deficit stems from the "black box" nature of traditional machine learning models used in self-driving cars: passengers are left in the dark about the car's perception of the environment and the reasoning behind its actions. This research proposes leveraging Explainable Artificial Intelligence (XAI) techniques to enhance passenger trust in self-driving cars. By incorporating explainability into the perception and prediction modules of the car's decision-making system, we aim to provide passengers with real-time insights into how the car perceives its surroundings and translates those perceptions into driving decisions. This paper explores various XAI methods suitable for self-driving car applications. We discuss the integration of these techniques into the perception and prediction pipelines, enabling the car to explain its reasoning behind lane changes, obstacle avoidance maneuvers, and other critical actions. We evaluate the effectiveness of the proposed approach through user studies, assessing how explainability can improve passenger trust and comfort in self-driving vehicles. The ultimate goal of this research is to foster greater transparency and trust in self-driving car technology, paving the way for wider public adoption and a future of safe and reliable autonomous transportation.
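To illustrate the kind of explanation the abstract describes, the sketch below shows a feature-attribution approach applied to a lane-change decision. Everything here is a hypothetical toy: the feature names, the linear scoring model, and the weights are illustrative placeholders, not the paper's actual perception or prediction system. Real attribution methods (e.g., SHAP or LIME over a learned model) follow the same pattern of ranking input factors by their contribution to the decision.

```python
# Toy sketch: attribute a lane-change decision to its input features,
# then surface the dominant factor as a passenger-facing explanation.
# The model, feature names, and weights are hypothetical placeholders.

def explain_decision(features, weights):
    """Score a linear decision model and rank feature attributions."""
    # Contribution of each feature = weight * value (exact for linear models)
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "change_lane" if score > 0 else "stay_in_lane"
    # Rank features by the magnitude of their influence on the score
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

# Hypothetical normalized sensor-derived features for one moment in time
features = {"obstacle_ahead": 0.9, "left_lane_clear": 0.8, "relative_speed": -0.2}
weights = {"obstacle_ahead": 1.0, "left_lane_clear": 0.7, "relative_speed": 0.5}

decision, ranked = explain_decision(features, weights)
top_factor, top_value = ranked[0]
print(f"Decision: {decision}; main factor: {top_factor} ({top_value:+.2f})")
```

In a deployed system, the ranked attributions would be rendered as a short natural-language message (e.g., "changing lanes because of a slow vehicle ahead") rather than raw numbers, which is the transparency the user studies described above would evaluate.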
Published
2024-05-17