Emerging Investigator Award at ETRA 2016
Our joint paper "Learning an appearance-based gaze estimator from one million synthesised images" with the University of Cambridge won an Emerging Investigator Award at ACM ETRA 2016. Congrats, Erroll!
Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, Andreas Bulling. **Learning an appearance-based gaze estimator from one million synthesised images.** In Proc. of the 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016), pp. 131–138, 2016. Emerging Investigator Award. DOI: [10.1145/2857491.2857492](https://doi.org/10.1145/2857491.2857492)

[PDF](https://perceptual.mpi-inf.mpg.de/files/2016/01/wood16_etra.pdf) | [UnityEyes project page](http://www.cl.cam.ac.uk/research/rainbow/projects/unityeyes/) | [Award announcement](http://www.cai.cam.ac.uk/news/eye-tracking-research-wins-award)

```bibtex
@inproceedings{wood16_etra,
  title     = {Learning an appearance-based gaze estimator from one million synthesised images},
  author    = {Erroll Wood and Tadas Baltrusaitis and Louis-Philippe Morency and Peter Robinson and Andreas Bulling},
  booktitle = {Proc. of the 9th ACM International Symposium on Eye Tracking Research \& Applications (ETRA 2016)},
  pages     = {131--138},
  year      = {2016},
  doi       = {10.1145/2857491.2857492},
  note      = {Emerging Investigator Award}
}
```

**Abstract.** Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learning-by-synthesis was proposed as a promising solution to this problem, but current methods are limited with respect to speed, appearance variability, and the head pose and gaze angle distributions they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses real-time approximations for complex eyeball materials and structures, as well as novel anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a lightweight nearest-neighbour algorithm. We are making our UnityEyes synthesis framework freely available online for the benefit of the research community.
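To make the "lightweight nearest-neighbour algorithm" mentioned in the abstract concrete, here is a minimal sketch of nearest-neighbour gaze estimation: synthesized eye patches with known gaze angles serve as training data, and a new image's gaze is predicted from its closest neighbours in appearance space. The patch size, the raw-pixel feature, the choice of k, and the random placeholder data are all illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of nearest-neighbour gaze estimation, assuming a set of
# synthesized eye patches with known gaze angles (e.g. as produced by a
# tool like UnityEyes). Feature choice (raw grey-level pixels), patch
# size, and k are illustrative assumptions, not the paper's pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def to_feature(image: np.ndarray) -> np.ndarray:
    """Flatten a normalised grey-level eye patch into a feature vector."""
    return (image.astype(np.float32) / 255.0).ravel()

# Hypothetical training data: N synthesized 36x60 eye patches and their
# gaze labels as (pitch, yaw) angles in radians. Real data would come
# from the synthesis framework rather than a random generator.
rng = np.random.default_rng(0)
train_images = rng.integers(0, 256, size=(1000, 36, 60), dtype=np.uint8)
train_gaze = rng.uniform(-0.5, 0.5, size=(1000, 2))

# Fit a k-NN regressor in appearance space; prediction averages the
# gaze labels of the k nearest training patches, weighted by distance.
X_train = np.stack([to_feature(img) for img in train_images])
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_train, train_gaze)

# Estimate gaze for a new (here random) eye patch.
test_image = rng.integers(0, 256, size=(36, 60), dtype=np.uint8)
pitch, yaw = knn.predict(to_feature(test_image)[None, :])[0]
print(f"estimated gaze: pitch={pitch:.3f} rad, yaw={yaw:.3f} rad")
```

The appeal of such a simple estimator, as the abstract notes, is that the heavy lifting is done not by the learning algorithm but by the volume and variability of the synthesized training images.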