Best paper and best presentation awards at ETRA 2018
Our joint paper “Error-Aware Gaze-Based Interfaces for Robust Mobile Gaze Interaction” with DFKI Saarbrücken won the Best Paper Award at ACM ETRA 2018.
Michael Barz, Florian Daiber, Daniel Sonntag, Andreas Bulling. "Error-Aware Gaze-Based Interfaces for Robust Mobile Gaze Interaction." In Proc. International Symposium on Eye Tracking Research and Applications (ETRA), pp. 24:1–24:10, 2018. Best paper award. DOI: 10.1145/3204493.3204536. PDF: https://perceptual.mpi-inf.mpg.de/files/2018/04/barz18_etra.pdf

Abstract: Gaze estimation error is unavoidable in head-mounted eye trackers and can severely hamper usability and performance of mobile gaze-based interfaces, given that the error varies constantly for different interaction positions. In this work, we explore error-aware gaze-based interfaces that estimate and adapt to gaze estimation error on the fly. We implement a sample error-aware user interface for gaze-based selection and different error compensation methods: a naïve approach that increases component size directly proportional to the absolute error, a recent model by Feit et al. (CHI '17) that is based on the 2-dimensional error distribution, and a novel predictive model that shifts gaze by a directional error estimate. We evaluate these models in a 12-participant user study and show that our predictive model outperforms the others significantly in terms of selection rate, particularly for small gaze targets. These results underline both the feasibility and potential of next-generation error-aware gaze-based user interfaces.
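The contrast between the compensation strategies mentioned in the abstract is easy to illustrate. The following is a minimal sketch, not the authors' implementation: all names (`Target`, `naive_hit`, `predictive_hit`) are hypothetical, and the naïve enlargement is a simplified stand-in for the proportional scaling described in the paper. It contrasts growing a selection target by the absolute error with shifting the gaze point by a directional error estimate before hit-testing.

```python
# Sketch of two gaze-error compensation ideas (illustrative only).
from dataclasses import dataclass
import math

@dataclass
class Target:
    x: float
    y: float
    radius: float  # nominal radius of the selectable component, in pixels

def naive_hit(gaze, target, abs_error):
    """Naive compensation: enlarge the target by the absolute error magnitude
    (a simplified stand-in for the proportional enlargement in the abstract)."""
    gx, gy = gaze
    effective_radius = target.radius + abs_error
    return math.hypot(gx - target.x, gy - target.y) <= effective_radius

def predictive_hit(gaze, target, error_estimate):
    """Predictive compensation: shift the gaze point by a directional error
    estimate before hit-testing, so the nominal target size can be kept."""
    gx, gy = gaze
    ex, ey = error_estimate            # predicted (dx, dy) offset at this position
    corrected = (gx - ex, gy - ey)     # undo the estimated systematic error
    return math.hypot(corrected[0] - target.x, corrected[1] - target.y) <= target.radius

# Example: with a 30 px rightward error estimate, a gaze sample landing to the
# right of a small button still counts as a hit after correction.
button = Target(x=500, y=300, radius=20)
print(predictive_hit((528, 303), button, error_estimate=(30, 0)))  # True
print(naive_hit((528, 303), button, abs_error=30))                 # True, but only via a 50 px radius
```

The sketch also hints at why the predictive model helps for small targets: it keeps components at their nominal size instead of inflating them.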
Our joint paper “Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings” with ETH Zurich won the Best Presentation Award.
Seonwook Park, Xucong Zhang, Andreas Bulling, Otmar Hilliges. "Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings." In Proc. International Symposium on Eye Tracking Research and Applications (ETRA), pp. 21:1–21:10, 2018. Best presentation award. DOI: 10.1145/3204493.3204545. PDF: https://perceptual.mpi-inf.mpg.de/files/2018/04/park18_etra.pdf. Video: https://youtu.be/I8WlEHgDBV4

Abstract: Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
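To give a feel for the model-based side of such a pipeline, here is a simplified sketch of how detected eye region landmarks can drive a gaze estimate. It assumes a basic spherical eyeball model and hypothetical inputs (an iris centre and an estimated eyeball centre and radius in image coordinates); it is an illustration of the general idea, not the paper's method.

```python
# Toy landmark-to-gaze conversion under a simple spherical eyeball model.
import numpy as np

def gaze_from_landmarks(iris_centre, eyeball_centre, eyeball_radius):
    """Return (pitch, yaw) in radians from the offset between the detected iris
    centre and the estimated eyeball centre, both in image coordinates."""
    dx = iris_centre[0] - eyeball_centre[0]
    dy = iris_centre[1] - eyeball_centre[1]
    # Clamp to the valid arcsin range in case of noisy landmark detections.
    yaw = np.arcsin(np.clip(dx / eyeball_radius, -1.0, 1.0))
    pitch = -np.arcsin(np.clip(dy / eyeball_radius, -1.0, 1.0))
    return pitch, yaw

# Example: iris detected 12 px right and 5 px above the eyeball centre.
pitch, yaw = gaze_from_landmarks(iris_centre=(112, 55),
                                 eyeball_centre=(100, 60),
                                 eyeball_radius=24.0)
print(np.degrees([pitch, yaw]))  # roughly [12, 30] degrees
```

In the paper, the landmark detector itself is learned (trained on synthetic data), and the landmarks feed both iterative model-fitting and lightweight learned gaze estimators; the geometry above only illustrates why accurate landmarks make such conventional approaches viable again.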