Personalized Driver Alertness with Multimodal Holistic Models of Drivers

Project Abstract/Statement of Work:

According to a 2005 survey by the National Sleep Foundation, 60% of US adult drivers reported driving while fatigued, and as many as 37% admitted to having fallen asleep at the wheel. Moreover, distracted driving, caused by engaging in a secondary activity, can lead to fatal accidents. The U.S. Department of Transportation estimated a total of 421,000 injuries in vehicle crashes attributed to distracted driving.
The goal of this project is to construct holistic models of drivers, covering a multitude of channels, including vision, physiology, and language, as well as background information (demographic and psychological) and affective information of the drivers. We will use these holistic representations to build effective personalized multimodal models of driver alertness, aiming to identify patterns associated with two main driving states: alertness (alert vs. drowsy) and attention (attentive vs. distracted).
 
Previous research (including ongoing TRI projects) has focused on detecting either drowsiness or distraction, but not their joint occurrence. Yet we expect that distractors will affect a drowsy (tired) driver differently than an alert one, and our goal is therefore to build joint models that account for this interdependency.
 
Our team spans three areas of expertise, namely sensors, vision, and language, in a unique collaboration across all three UofM campuses.
 

PIs:

Mihai Burzo, Mechanical Engineering, University of Michigan – Flint
Mohamed Abouelenien, Computer and Information Science, University of Michigan – Dearborn
Rada Mihalcea, Computer Science and Engineering, University of Michigan – Ann Arbor