Driver Distraction Prediction Using Machine Learning - given images of drivers, each taken inside a car while the driver is doing something (texting, eating, talking on the phone, doing makeup, reaching behind, etc.), the goal is to predict the likelihood of what the driver is doing in each picture.
Driving a car is a complex task that requires complete attention. Distracted driving is any activity that takes the driver's attention away from the road. Several studies have identified three main types of distraction: visual distractions (driver's eyes off the road), manual distractions (driver's hands off the wheel) and cognitive distractions (driver's mind off the driving task).
Dataset details -
- Image Size - 480 x 640 pixels
- Training Images count - 22,424 images
- Test Images count - 79,726 images
- Image type - RGB
- Image field of view - Dashboard images with a view of the driver and passenger
- The 10 classes to predict are:
- c0: safe driving
- c1: texting - right
- c2: talking on the phone - right
- c3: texting - left
- c4: talking on the phone - left
- c5: operating the radio
- c6: drinking
- c7: reaching behind
- c8: hair and makeup
- c9: talking to passenger
- Loss - multi-class logarithmic loss
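The multi-class logarithmic loss used for evaluation can be sketched in plain NumPy (a minimal sketch; the function name is illustrative, and the clipping epsilon follows the common Kaggle convention of avoiding `log(0)`):

```python
import numpy as np

def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Multi-class logarithmic loss.

    y_true: integer class labels, shape (n_samples,)
    y_pred: predicted class probabilities, shape (n_samples, n_classes)
    """
    p = np.clip(y_pred, eps, 1 - eps)
    p = p / p.sum(axis=1, keepdims=True)  # renormalize rows after clipping
    n = y_true.shape[0]
    # average negative log-probability assigned to the true class
    return -np.log(p[np.arange(n), y_true]).mean()

y_true = np.array([0, 1])
y_pred = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8,  0.1]])
print(multiclass_log_loss(y_true, y_pred))
```

Confident, correct predictions drive the loss toward 0, while confident wrong predictions are penalized heavily, which is why submissions are usually clipped away from exact 0/1 probabilities.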
Kaggle hosted this challenge a few years ago, focused on identifying distracted drivers using computer vision.
Details of the challenge can be found here - https://www.kaggle.com/c/state-farm-distracted-driver-detection
- DL Model - CNN built from scratch (6 Conv layers, 5 Dropout layers, 3 Dense layers)
- Framework - Keras (a PyTorch version is in progress)
- CNN Model Visualization/Model Interpretability - GradCAM
- Final Accuracy - Train acc - 99.06%, Val acc - 99.46%
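A Keras architecture matching the layer counts above (6 Conv, 5 Dropout, 3 Dense) could be sketched as follows; the filter counts, dropout rates, and resized input shape are illustrative assumptions, not the exact settings used:

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(240, 320, 3), num_classes=10):
    """Sketch of a from-scratch CNN: 6 Conv, 5 Dropout, 3 Dense layers."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    # 3 blocks of (Conv, Conv, Pool, Dropout) -> 6 Conv and 3 Dropout layers
    for filters in (32, 64, 128):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    # 2 hidden Dense layers, each followed by Dropout, then the softmax head
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training against one-hot labels for the 10 classes with `categorical_crossentropy` directly optimizes the competition's multi-class log loss.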
Grad-CAM implementation for a test image with the label "drinking"
Grad-CAM is a technique for visualizing how a model classifies new instances: it produces a heat map that highlights the regions of the image that contributed most to the prediction.
As seen in the image below, the model classifies the driver as distracted by drinking, highlighting the hand and the glass.
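The core Grad-CAM computation can be sketched with `tf.GradientTape` (a minimal sketch; the function and layer names are illustrative, and in practice the heat map is resized to the input image and overlaid on it to produce figures like the one above):

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a Grad-CAM heatmap in [0, 1] for a single image (H, W, C)."""
    # Sub-model mapping the input to (last conv feature maps, predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # top predicted class
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Channel weights: global-average-pool the gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam)  # keep only positive influence on the class
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

For the drinking example, the weighted sum of the last conv layer's feature maps peaks over the hand and the glass, which is exactly what the overlay highlights.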