Driver Gaze Zone Estimation using Convolutional Neural Networks: A General Framework and Ablative Analysis
Driver gaze has been shown to be an excellent surrogate for driver attention in intelligent vehicles. With the recent surge of highly autonomous vehicles, driver gaze can be useful for determining the handoff time to a human driver. While there has been significant improvement in personalized driver gaze zone estimation systems, a generalized system which is invariant to different subjects, perspectives, and scales is still lacking. We take a step towards this generalized system using Convolutional Neural Networks (CNNs). We finetune four popular CNN architectures for this task and provide extensive comparisons of their outputs. We additionally experiment with different input image patches and examine how image size affects performance. For training and testing the networks, we collect a large naturalistic driving dataset comprising 11 long drives, driven by 10 subjects in two different cars. Our best performing model achieves an accuracy of 95.18% during cross-subject testing, outperforming current state-of-the-art techniques for this task. Finally, we evaluate our best performing model on the publicly available Columbia Gaze Dataset, comprising images from 56 subjects with varying head poses and gaze directions. Without any training, our model successfully encodes the different gaze directions on this diverse dataset, demonstrating good generalization capabilities.