Two papers at UIST 2017
We will present the following two papers at the 30th International ACM Symposium on User Interface Software and Technology (UIST 2017):
Xucong Zhang, Yusuke Sugano, Andreas Bulling
Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery
In Proc. of the ACM Symposium on User Interface Software and Technology (UIST), pp. 193-203, 2017. DOI: 10.1145/3126594.3126614. Best paper honourable mention award.
Paper: https://perceptual.mpi-inf.mpg.de/files/2017/05/zhang17_uist.pdf
Videos: https://www.youtube.com/watch?v=ccrS5XuhQpk, https://www.youtube.com/watch?v=AxDHU40Xda8
Press coverage: http://www.techbriefs.com/component/content/article/1198-tb/news/news/27400-new-software-spots-eye-contact

Abstract: Eye contact is an important non-verbal cue in social signal processing and promising as a measure of overt attention in human-object interactions and attentive user interfaces. However, robust detection of eye contact across different users, gaze targets, camera positions, and illumination conditions is notoriously challenging. We present a novel method for eye contact detection that combines a state-of-the-art appearance-based gaze estimator with a novel approach for unsupervised gaze target discovery, i.e. without the need for tedious and time-consuming manual data annotation. We evaluate our method in two real-world scenarios: detecting eye contact at the workplace, including on the main work display, from cameras mounted to target objects, as well as during everyday social interactions with the wearer of a head-mounted egocentric camera. We empirically evaluate the performance of our method in both scenarios and demonstrate its effectiveness for detecting eye contact independent of target object type and size, camera position, and user and recording environment.
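The core idea of unsupervised gaze target discovery is that, over time, frames in which a person looks at the camera-mounted target form a dense cluster in gaze-direction space that can be found without manual labels. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes a hypothetical pretrained appearance-based estimator `estimate_gaze(frame)` returning (yaw, pitch) angles, clusters the estimates with DBSCAN, and treats the cluster closest to the camera axis as "eye contact" frames.

```python
# Illustrative sketch of unsupervised gaze target discovery (not the paper's code).
# `estimate_gaze(frame)` is a hypothetical pretrained appearance-based gaze
# estimator returning (yaw, pitch) in degrees for the detected face.
import numpy as np
from sklearn.cluster import DBSCAN

def discover_eye_contact(frames, estimate_gaze, eps=3.0, min_samples=20):
    """Cluster per-frame gaze estimates; the cluster whose centroid lies
    closest to the camera axis (0, 0) is taken as looking at the target."""
    gaze = np.array([estimate_gaze(f) for f in frames])          # shape (N, 2)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(gaze)

    best_label, best_dist = None, np.inf
    for lab in set(labels) - {-1}:                               # skip noise points
        centroid = gaze[labels == lab].mean(axis=0)
        dist = np.linalg.norm(centroid)                          # angular offset from camera axis
        if dist < best_dist:
            best_label, best_dist = lab, dist

    # Boolean mask of frames treated as eye contact; in practice such pseudo
    # labels could be used to train a dedicated eye contact classifier.
    return labels == best_label
```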
Mohamed Khamis, Axel Hoesl, Alexander Klimczak, Martin Reiss, Florian Alt, Andreas Bulling
EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays
In Proc. of the ACM Symposium on User Interface Software and Technology (UIST), pp. 155-166, 2017. DOI: 10.1145/3126594.3126630.
Paper: https://perceptual.mpi-inf.mpg.de/files/2017/05/khamis17_uist.pdf
Videos: https://www.youtube.com/watch?v=D1IprYwqToM, https://www.youtube.com/watch?v=J7_OiRqsmdM

Abstract: While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: In "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study that shows that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze interaction kick-off time to 3.5 seconds - a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position and movement-independent gaze interaction with large public displays.
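Keeping a remote eye tracker aligned with a walking user essentially amounts to a control loop that continuously measures the user's lateral position and moves the tracker carriage along the rail to cancel the offset. The sketch below illustrates this under stated assumptions and is not the EyeScout implementation: `get_user_x`, `get_carriage_x`, and `set_carriage_velocity` are hypothetical stand-ins for a body-tracking sensor and the rail motor controller, and the proportional gain and limits are placeholder values.

```python
# Illustrative "follow the user" control loop for a rail-mounted eye tracker
# (not the EyeScout code). All hardware callbacks are hypothetical.
import time

KP = 1.5            # proportional gain (1/s), placeholder value
DEAD_BAND = 0.02    # metres of misalignment tolerated before moving
MAX_SPEED = 0.6     # carriage speed limit in metres per second

def follow_user(get_user_x, get_carriage_x, set_carriage_velocity, hz=30):
    """Keep the eye tracker carriage laterally aligned with the user so the
    user stays inside the tracker's narrow tracking box while walking."""
    period = 1.0 / hz
    while True:
        error = get_user_x() - get_carriage_x()      # lateral offset in metres
        if abs(error) < DEAD_BAND:
            set_carriage_velocity(0.0)               # close enough: hold position
        else:
            speed = max(-MAX_SPEED, min(MAX_SPEED, KP * error))
            set_carriage_velocity(speed)             # move towards the user
        time.sleep(period)
```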