Physiologically Responsive AI for Detecting Driver Impairment in Semi-Autonomous Vehicles

Affiliations: College of Sciences
Team Leader:
Nicholas Battle
ni014632@ucf.edu
Applied/Experimental & Human Factors Psychology
Faculty Mentor:
Mustapha Mouloua, PhD
Team Size:
8
Open Spots: 0
Team Member Qualifications:
Required Qualifications:
- CITI Training Certification (or willingness to complete it immediately upon acceptance). Human Subjects Research training must be completed before beginning work with participants.
Preferred Qualifications:
- Availability for regular, scheduled in-person data collection sessions. Reliable attendance is critical, as sessions involve human participants and specialized equipment.
- Respectful and culturally sensitive when working with diverse participants.
- Ability to maintain confidentiality and follow ethical research protocols.
- Openness to training in EEG/neurophysiological data collection.
- Eagerness to learn statistical analysis techniques and research software.
- Receptiveness to feedback and continuous improvement.
This research project welcomes students from any major or field of study. We will provide comprehensive training in all technical skills needed, including equipment setup and operation.
Description:
Emotionally elevated states contribute to impaired driving behavior, increasing reaction time variability and reducing decision-making accuracy. Because investigating high-risk states in real-world driving poses ethical and safety constraints, driving simulators are essential for controlled assessment. Existing research suggests that dual-modal physiological monitoring, integrating electrocardiogram (ECG) and facial data, significantly improves the detection of driver stress. However, gaps remain regarding real-time validation in high-fidelity simulations and the cognitive impact of AI transparency on user trust.

Grounded in the Human Identity and Autonomy Gap (HIAG) framework, this study addresses these gaps by examining how physiologically responsive AI informs interventions within a simulated driving environment, as well as how people respond to AI transparency. The experiment uses a 2×2×4 mixed-factorial design, manipulating AI transparency to evaluate trust through intervention acceptance and perceived autonomy. Participants will navigate varying levels of traffic density across four time blocks, creating opportunities for AI recommendations. We expect that higher transparency and greater participant openness will correlate with increased trust and intervention acceptance. While the dual-modal system is used for stress detection, the principal analysis focuses on behavioral responses to multiple feedback levels and on perceptions of AI assistance. This research aims to advance the design of trustworthy AI systems that optimize human–autonomous collaboration.