Paper at ICCV 2015
We will present the following paper at the IEEE International Conference on Computer Vision (ICCV 2015):
Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling. Rendering of Eyes for Eye-Shape Registration and Gaze Estimation. In Proc. of the IEEE International Conference on Computer Vision (ICCV 2015), pp. 3756-3764, 2015. DOI: 10.1109/ICCV.2015.428

[PDF](https://perceptual.mpi-inf.mpg.de/wp-content/blogs.dir/12/files/2016/06/wood2015_iccv.pdf) | [MIT Technology Review coverage](http://www.technologyreview.com/view/537891/virtual-eyes-train-deep-learning-algorithm-to-recognize-gaze-direction/) | [Project page](http://www.cl.cam.ac.uk/research/rainbow/projects/syntheseyes/)

Abstract: Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.

BibTeX:

    @inproceedings{wood2015_iccv,
      title     = {Rendering of Eyes for Eye-Shape Registration and Gaze Estimation},
      author    = {Erroll Wood and Tadas Baltrusaitis and Xucong Zhang and Yusuke Sugano and Peter Robinson and Andreas Bulling},
      booktitle = {Proc. of the IEEE International Conference on Computer Vision (ICCV 2015)},
      pages     = {3756-3764},
      year      = {2015},
      doi       = {10.1109/ICCV.2015.428},
      url       = {http://www.cl.cam.ac.uk/research/rainbow/projects/syntheseyes/}
    }
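The appeal of the approach is that every synthetic image comes with exact labels, because the scene parameters (head pose, gaze direction, illumination) are chosen before rendering rather than annotated afterwards. The sketch below is purely illustrative and is not the authors' pipeline: the parameter ranges, model identifiers, and environment-map filenames are hypothetical placeholders, and it shows only the random sampling step that would drive a renderer in such a setup.

```python
import random

# Hypothetical parameter ranges (degrees) -- illustrative only,
# not the values used in the SynthesEyes paper.
HEAD_PITCH_RANGE = (-20.0, 20.0)
HEAD_YAW_RANGE   = (-40.0, 40.0)
GAZE_PITCH_RANGE = (-25.0, 25.0)
GAZE_YAW_RANGE   = (-35.0, 35.0)

def sample_scene(eye_models, environment_maps):
    """Draw one random synthetic-scene configuration.

    Each configuration pairs a randomly chosen eye-region model with a
    random head pose, gaze direction, and illumination environment; the
    gaze label is known exactly because we chose the gaze direction ourselves.
    """
    return {
        "model": random.choice(eye_models),              # which eye-region model to pose
        "head_pose": (random.uniform(*HEAD_PITCH_RANGE),
                      random.uniform(*HEAD_YAW_RANGE)),
        "gaze": (random.uniform(*GAZE_PITCH_RANGE),
                 random.uniform(*GAZE_YAW_RANGE)),
        "environment": random.choice(environment_maps),  # lighting environment
    }

if __name__ == "__main__":
    eye_models = ["eye_region_01", "eye_region_02"]      # placeholder model ids
    environments = ["indoor_01.hdr", "outdoor_01.hdr"]   # placeholder HDR maps
    # In a real pipeline, each sampled configuration would be handed to a
    # renderer to produce a close-up eye image plus its ground-truth labels.
    for _ in range(5):
        print(sample_scene(eye_models, environments))
```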