2 papers at MUM 2017
We will present the following two papers at the 16th International Conference on Mobile and Ubiquitous Multimedia (MUM 2017):
Mohamed Khamis, Linda Bandelow, Stina Schick, Dario Casadevall, Andreas Bulling, Florian Alt. "They are all after you: Investigating the Viability of a Threat Model that involves Multiple Shoulder Surfers." In Proc. of the International Conference on Mobile and Ubiquitous Multimedia (MUM), pp. 31-35, 2017. Best paper honourable mention award. DOI: 10.1145/3152832.3152851. PDF: https://perceptual.mpi-inf.mpg.de/files/2017/10/khamis17_mum.pdf

Many recently proposed authentication schemes for mobile devices complicate shoulder surfing by splitting the attacker's attention across two or more entities. For example, multimodal authentication schemes such as GazeTouchPIN and GazeTouchPass require attackers to observe both the user's gaze input and the touch input performed on the phone's screen. These schemes have so far been evaluated only against single observers, yet multiple observers could attack them with greater ease, since each observer can focus exclusively on one part of the password. In this work, we study the effectiveness of a novel threat model against authentication schemes that split the attacker's attention. As a case study, we report on a security evaluation of two state-of-the-art authentication schemes against a team of two observers. Our results show that although multiple observers perform better against these schemes than single observers, multimodal schemes remain significantly more secure against multiple observers than schemes that employ a single modality. We discuss how this threat model impacts the design of authentication schemes.
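To illustrate the idea of splitting an observer's attention, here is a minimal, hypothetical sketch of a multimodal password check in the spirit of GazeTouchPass. It is not the authors' implementation: the `Symbol` class, the `verify` function, and the password format are illustrative assumptions. The point is only that each symbol is bound to the input channel it was entered on, so a single observer watching one channel recovers at most part of the secret.

```python
# Hypothetical sketch of a gaze+touch multimodal password check.
# Not the GazeTouchPass implementation; all names are illustrative.
from dataclasses import dataclass
from typing import Literal

Modality = Literal["touch", "gaze"]

@dataclass(frozen=True)
class Symbol:
    modality: Modality  # which input channel produced the symbol
    value: str          # e.g. "3" for a touch digit, "left"/"right" for a gaze gesture

def verify(entered: list[Symbol], secret: list[Symbol]) -> bool:
    """A symbol only matches if both its value AND its modality match,
    so observing a single channel reveals at most part of the secret."""
    return len(entered) == len(secret) and all(e == s for e, s in zip(entered, secret))

# Example secret mixing both modalities, as multimodal schemes do.
secret = [Symbol("touch", "3"), Symbol("gaze", "left"),
          Symbol("touch", "7"), Symbol("gaze", "right")]
attempt = list(secret)
print(verify(attempt, secret))  # True
```

An attacker team of two can divide the channels between them, which is exactly the threat model the paper evaluates.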
Christian Lander, Sven Gehring, Markus Löchtefeld, Andreas Bulling, Antonio Krüger. "EyeMirror: Mobile Calibration-Free Gaze Approximation using Corneal Imaging." In Proc. of the International Conference on Mobile and Ubiquitous Multimedia (MUM), pp. 279-291, 2017. DOI: 10.1145/3152832.3152839. PDF: https://perceptual.mpi-inf.mpg.de/files/2017/11/lander17_mum.pdf

Gaze is a powerful indicator of a person's attention and reveals where we are looking within our current field of view. Hence, gaze-based interfaces are gaining in importance. However, gaze estimation usually requires extensive hardware and depends on a calibration that has to be renewed regularly. We present EyeMirror, a mobile device for calibration-free gaze approximation on surfaces (e.g., displays). It consists of a head-mounted camera, connected to a wearable mini-computer, that captures the environment reflected on the human cornea. The corneal images are analyzed using natural feature tracking to estimate gaze on surfaces. In two lab studies, we compared variations of EyeMirror against established gaze estimation methods in a display scenario and investigated the effect of display content (i.e., the number of features). EyeMirror achieved a 4.03° gaze estimation error, and we found no significant effect of display content.
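The core idea of mapping a corneal reflection to display coordinates via natural feature tracking can be sketched roughly as below. This is an assumption-laden illustration, not the EyeMirror pipeline: it uses OpenCV ORB matching and a RANSAC homography, the `approximate_gaze` function is hypothetical, and it ignores the mirroring and spherical distortion of the corneal reflection that a real system would correct for.

```python
# Rough sketch (not the EyeMirror code): match natural features between the
# display content and its reflection on the cornea, then map the corneal
# image centre (used here as a crude gaze proxy) into display coordinates.
import cv2
import numpy as np

def approximate_gaze(display_img, cornea_img):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_d, des_d = orb.detectAndCompute(display_img, None)
    kp_c, des_c = orb.detectAndCompute(cornea_img, None)
    if des_d is None or des_c is None:
        return None

    # Brute-force Hamming matching for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_c, des_d), key=lambda m: m.distance)
    if len(matches) < 4:
        return None  # too few correspondences for a homography

    src = np.float32([kp_c[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Project the corneal image centre onto the display plane.
    h, w = cornea_img.shape[:2]
    centre = np.float32([[[w / 2, h / 2]]])
    gaze = cv2.perspectiveTransform(centre, H)
    return tuple(gaze[0, 0])  # (x, y) in display pixel coordinates
```

Because the mapping is recovered from features already present in the scene, no explicit per-user calibration step is needed, which is what "calibration-free" refers to in the paper.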