I have been a PhD student in the Perceptual User Interfaces group since December 2014.
contact
mail:
phone: +49 (0)681 9325 2132
…or just drop by at E1.4, room 622.
education
I received my Bachelor’s degree in Computer Science from Saarland University in February 2013. Instead of continuing with the Master’s program in Computer Science, I specialized in image acquisition, analysis, and synthesis, a field that draws on computer science, mathematics, physics, engineering, and cognitive science. In December 2014 I received an interdisciplinary Master’s degree in Visual Computing from Saarland University.
research interests
- Mobile Eye Tracking
- Image Processing and Computer Vision
- Machine Learning and Pattern Recognition
- Human-Computer Interaction
publications
Julian Steil; Marion Koelle; Wilko Heuten; Susanne Boll; Andreas Bulling
PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features
In Proc. International Symposium on Eye Tracking Research and Applications (ETRA), 2019 (best video award). doi: 10.1145/3314111.3319913
PDF: //perceptual.mpi-inf.mpg.de/files/2019/04/steil19_etra.pdf
Supplementary material: //perceptual.mpi-inf.mpg.de/files/2019/04/steil19_etra_supplementary_material.pdf
Abstract: Eyewear devices, such as augmented reality displays, increasingly integrate eye tracking, but the first-person camera required to map a user’s gaze to the visual scene can pose a significant threat to user and bystander privacy. We present PrivacEye, a method to detect privacy-sensitive everyday situations and automatically enable and disable the eye tracker’s first-person camera using a mechanical shutter. To close the shutter in privacy-sensitive situations, the method uses a deep representation of the first-person video combined with rich features that encode users’ eye movements. To open the shutter without visual input, PrivacEye detects changes in users’ eye movements alone to gauge changes in the “privacy level” of the current situation. We evaluate our method on a first-person video dataset recorded in daily life situations of 17 participants, annotated by themselves for privacy sensitivity, and show that our method is effective in preserving privacy in this challenging setting.
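To make the shutter-control idea concrete, here is a minimal sketch of a binary privacy-sensitivity classifier that fuses a scene-image representation with eye movement features. The feature dimensions, the synthetic data, and the SVM are illustrative assumptions, not the model described in the paper.

```python
# Sketch: early fusion of scene-image descriptors and eye movement
# features for binary privacy-sensitivity classification. Feature
# shapes and the SVM classifier are illustrative assumptions, not the
# pipeline described in the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: one row per video segment.
n_segments = 200
scene_features = rng.normal(size=(n_segments, 128))  # e.g. CNN embedding of the scene image
eye_features = rng.normal(size=(n_segments, 52))     # e.g. fixation/saccade/blink statistics
labels = rng.integers(0, 2, size=n_segments)         # 1 = privacy-sensitive, 0 = non-sensitive

# Early fusion: concatenate both modalities and train a standard classifier.
X = np.hstack([scene_features, eye_features])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)

# Closing the shutter would then be triggered by the predicted label
# of the most recent segment.
print(clf.predict(X[:5]))
```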
Julian Steil; Inken Hagestedt; Michael Xuelin Huang; Andreas Bulling
Privacy-Aware Eye Tracking Using Differential Privacy
In Proc. International Symposium on Eye Tracking Research and Applications (ETRA), 2019 (best paper award). doi: 10.1145/3314111.3319915
PDF: //perceptual.mpi-inf.mpg.de/files/2019/04/steil19_etra2.pdf
Supplementary material: //perceptual.mpi-inf.mpg.de/files/2019/04/steil19_etra2_supplementary_material.pdf
Abstract: With eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users’ privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services, and to what extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy-sensitive tasks: We show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces.
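The core building block, adding calibrated noise to aggregated gaze features, can be sketched with the standard Laplace mechanism. The feature, the bound on its range, and the epsilon value below are illustrative assumptions; the paper applies differential privacy to gaze data, but not necessarily in this exact form.

```python
# Sketch: Laplace mechanism applied to an aggregated gaze feature
# (here a hypothetical mean fixation duration). The sensitivity bound
# and epsilon are illustrative assumptions.
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Return an epsilon-differentially-private estimate of `value`."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical per-user mean fixation durations in milliseconds.
fixation_durations = np.array([312.0, 287.5, 401.2, 356.8])
true_mean = fixation_durations.mean()

# If each user's value is bounded by 500 ms, the L1 sensitivity of the
# mean over n users is 500 / n.
sensitivity = 500.0 / len(fixation_durations)
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0)
print(true_mean, private_mean)
```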
Julian Steil; Philipp Müller; Yusuke Sugano; Andreas Bulling
Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors
In Proc. International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), pp. 1:1-1:13, 2018 (best paper award). doi: 10.1145/3229434.3229439
PDF: https://wp.mpi-inf.mpg.de/perceptual/files/2018/07/steil18_mobilehci.pdf
Abstract: Visual attention is highly fragmented during mobile interactions but the erratic nature of attention shifts currently limits attentive user interfaces to adapt after the fact, i.e. after shifts have already happened. We instead study attention forecasting – the challenging task of predicting users' gaze behavior (overt visual attention) in the near future. We present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). We propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users' visual scene. We demonstrate that our method can forecast bidirectional attention shifts and whether the primary attentional focus is on the handheld mobile device. We study the impact of different feature sets on performance and discuss the significant potential but also remaining challenges of forecasting user attention during mobile interactions.
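Attention forecasting can be framed as sliding-window classification: features from the recent past are used to predict whether an attention shift happens in the near future. The window length, feature dimensionality, and random forest below are illustrative assumptions rather than the method evaluated in the paper.

```python
# Sketch: sliding-window attention forecasting. Given sensor features
# from the last few seconds, predict whether an attention shift occurs
# in the next second. All dimensions and the classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-second feature vectors (IMU, touch events, scene statistics, ...).
n_seconds, n_features = 1000, 40
features = rng.normal(size=(n_seconds, n_features))
shift_occurred = rng.integers(0, 2, size=n_seconds)  # 1 = attention shift in that second

window = 5  # seconds of history used as input
X, y = [], []
for t in range(window, n_seconds - 1):
    X.append(features[t - window:t].ravel())  # flatten the history window
    y.append(shift_occurred[t + 1])           # label: shift in the *next* second
X, y = np.asarray(X), np.asarray(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```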
Julian Steil; Michael Xuelin Huang; Andreas Bulling
Fixation Detection for Head-Mounted Eye Tracking Based on Visual Similarity of Gaze Targets
In Proc. International Symposium on Eye Tracking Research and Applications (ETRA), pp. 23:1-23:9, 2018. doi: 10.1145/3204493.3204538
PDF: https://perceptual.mpi-inf.mpg.de/files/2018/04/steil18_etra.pdf
Dataset: https://perceptual.mpi-inf.mpg.de/research/datasets/#steil18_etra
Abstract: Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements feign a shift in gaze estimates in the frame of reference defined by the eye tracker's scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits that, independent of user or gaze target motion, target appearance remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential to analyse scene image information for eye movement detection.
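A minimal sketch of the underlying idea, cropping a patch around each gaze point and grouping consecutive frames whose patches look alike, is shown below. The patch size, the histogram-correlation similarity, and the threshold are illustrative assumptions, not the exact features used in the paper.

```python
# Sketch: appearance-based fixation detection from grayscale scene frames
# and per-frame gaze positions. Patch size, similarity measure, and
# threshold are illustrative assumptions.
import numpy as np

def gaze_patch(frame, gaze_xy, size=40):
    """Crop a (size x size) patch centred on the gaze point (grayscale frame)."""
    h, w = frame.shape
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    x0 = min(max(0, x - size // 2), max(0, w - size))
    y0 = min(max(0, y - size // 2), max(0, h - size))
    return frame[y0:y0 + size, x0:x0 + size]

def patch_similarity(a, b, bins=32):
    """Correlation of intensity histograms of two patches (1.0 = identical)."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 255), density=True)
    if ha.std() == 0 or hb.std() == 0:
        return 0.0
    return float(np.corrcoef(ha, hb)[0, 1])

def detect_fixations(frames, gaze_points, threshold=0.8, min_len=3):
    """Return (start, end) frame indices of detected fixations."""
    fixations, start = [], 0
    for i in range(1, len(frames)):
        prev = gaze_patch(frames[i - 1], gaze_points[i - 1])
        curr = gaze_patch(frames[i], gaze_points[i])
        if patch_similarity(prev, curr) < threshold:  # appearance changed -> fixation ended
            if i - start >= min_len:
                fixations.append((start, i - 1))
            start = i
    if len(frames) - start >= min_len:
        fixations.append((start, len(frames) - 1))
    return fixations
```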
Julian Steil; Marion Koelle; Wilko Heuten; Susanne Boll; Andreas Bulling
PrivacEye: Privacy-Preserving First-Person Vision Using Image Features and Eye Movement Analysis
Technical Report arXiv:1801.04457, 2018.
arXiv: https://arxiv.org/abs/1801.04457
PDF: https://wp.mpi-inf.mpg.de/perceptual/files/2018/12/steil2018_arxiv_v2.pdf
Abstract: As first-person cameras in head-mounted displays become increasingly prevalent, so does the problem of infringing user and bystander privacy. To address this challenge, we present PrivacEye, a proof-of-concept system that detects privacy-sensitive everyday situations and automatically enables and disables the first-person camera using a mechanical shutter. To close the shutter, PrivacEye detects sensitive situations from first-person camera videos using an end-to-end deep-learning model. To open the shutter without visual input, PrivacEye uses a separate, smaller eye camera to detect changes in users' eye movements to gauge changes in the "privacy level" of the current situation. We evaluate PrivacEye on a dataset of first-person videos recorded in the daily life of 17 participants that they annotated with privacy sensitivity levels. We discuss the strengths and weaknesses of our proof-of-concept system based on a quantitative technical evaluation as well as qualitative insights from semi-structured interviews.
Marc Tonsen; Julian Steil; Yusuke Sugano; Andreas Bulling
InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 1(3), pp. 106:1-106:21, 2017 (distinguished paper award). doi: 10.1145/3130971
PDF: https://perceptual.mpi-inf.mpg.de/files/2017/08/tonsen17_imwut.pdf
Dataset: https://perceptual.mpi-inf.mpg.de/research/datasets/#tonsen17_imwut
Abstract: Analysis of everyday human gaze behaviour has significant potential for ubiquitous computing, as evidenced by a large body of work in gaze-based human-computer interaction, attentive user interfaces, and eye-based user modelling. However, current mobile eye trackers are still obtrusive, which not only makes them uncomfortable to wear and socially unacceptable in daily life, but also prevents them from being widely adopted in the social and behavioural sciences. To address these challenges we present InvisibleEye, a novel approach for mobile eye tracking that uses millimetre-size RGB cameras that can be fully embedded into normal glasses frames. To compensate for the cameras’ low image resolution of only a few pixels, our approach uses multiple cameras to capture different views of the eye, as well as learning-based gaze estimation to directly regress from eye images to gaze directions. We prototypically implement our system and characterise its performance on three large-scale, increasingly realistic, and thus challenging datasets: 1) eye images synthesised using a recent computer graphics eye region model, 2) real eye images recorded of 17 participants under controlled lighting, and 3) eye images recorded of four participants over the course of four recording sessions in a mobile setting. We show that InvisibleEye achieves a top person-specific gaze estimation accuracy of 1.79° using four cameras with a resolution of only 5×5 pixels. Our evaluations not only demonstrate the feasibility of this novel approach but, more importantly, underline its significant potential for finally realising the vision of invisible mobile eye tracking and pervasive attentive user interfaces.
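A minimal sketch of the estimation principle, flattening and concatenating several very low-resolution eye images and regressing the gaze direction, could look as follows. The MLP architecture and the synthetic data are illustrative assumptions, not the network used in the paper.

```python
# Sketch: learning-based gaze estimation from multiple low-resolution
# eye cameras. Image resolution matches the 5x5 pixel setting mentioned
# in the abstract; the regressor and data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_samples, n_cameras, res = 2000, 4, 5
# Hypothetical training data: four 5x5 grayscale eye images per sample.
eye_images = rng.random(size=(n_samples, n_cameras, res, res))
gaze_angles = rng.uniform(-30, 30, size=(n_samples, 2))  # (yaw, pitch) in degrees

# Concatenate all camera views into one 100-dimensional feature vector.
X = eye_images.reshape(n_samples, -1)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, gaze_angles)
print(model.predict(X[:3]))  # predicted (yaw, pitch) for the first samples
```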
Mohsen Mansouryar; Julian Steil; Yusuke Sugano; Andreas Bulling
3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers
Technical Report arXiv:1601.02644, 2016.
arXiv: http://arxiv.org/abs/1601.02644
PDF: https://perceptual.mpi-inf.mpg.de/files/2016/01/Mansouryar16_arxiv.pdf
Abstract: 3D gaze information is important for scene-centric attention analysis but accurate estimation and analysis of 3D gaze in real-world environments remains challenging. We present a novel 3D gaze estimation method for monocular head-mounted eye trackers. In contrast to previous work, our method does not aim to infer 3D eyeball poses but directly maps 2D pupil positions to 3D gaze directions in scene camera coordinate space. We first provide a detailed discussion of the 3D gaze estimation task and summarize different methods, including our own. We then evaluate the performance of different 3D gaze estimation approaches using both simulated and real data. Through experimental validation, we demonstrate the effectiveness of our method in reducing parallax error, and we identify research challenges for the design of 3D calibration procedures.
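The direct-mapping idea can be sketched as a polynomial regression from 2D pupil positions to 3D gaze directions in the scene camera coordinate system. The polynomial degree and the synthetic calibration data are illustrative assumptions, not the exact mapping variants compared in the report.

```python
# Sketch: direct regression from 2D pupil positions to 3D gaze
# directions via a polynomial feature expansion. Degree and calibration
# data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Hypothetical calibration data: pupil positions (pixels in the eye image)
# and corresponding 3D gaze directions (unit vectors, scene camera frame).
pupil_xy = rng.uniform(0, 640, size=(50, 2))
gaze_dirs = rng.normal(size=(50, 3))
gaze_dirs /= np.linalg.norm(gaze_dirs, axis=1, keepdims=True)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(pupil_xy, gaze_dirs)

# Predicted directions are re-normalised to unit length.
pred = model.predict(pupil_xy[:3])
pred /= np.linalg.norm(pred, axis=1, keepdims=True)
print(pred)
```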
Julian Steil; Andreas Bulling
Discovery of Everyday Human Activities From Long-Term Visual Behaviour Using Topic Models
In Proc. of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2015), pp. 75-85, 2015. doi: 10.1145/2750858.2807520
PDF: https://perceptual.mpi-inf.mpg.de/files/2015/08/Steil_Ubicomp15.pdf
Dataset: https://perceptual.mpi-inf.mpg.de/research/datasets/#steil15_ubicomp
Abstract: Human visual behaviour has significant potential for activity recognition and computational behaviour analysis, but previous works focused on supervised methods and recognition of predefined activity classes based on short-term eye movement recordings. We propose a fully unsupervised method to discover users' everyday activities from their long-term visual behaviour. Our method combines a bag-of-words representation of visual behaviour that encodes saccades, fixations, and blinks with a latent Dirichlet allocation (LDA) topic model. We further propose different methods to encode saccades for their use in the topic model. We evaluate our method on a novel long-term gaze dataset that contains full-day recordings of natural visual behaviour of 10 participants (more than 80 hours in total). We also provide annotations for eight sample activity classes (outdoor, social interaction, focused work, travel, reading, computer work, watching media, eating) and periods with no specific activity. We show the ability of our method to discover these activities with performance competitive with that of previously published supervised methods.
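A minimal sketch of the bag-of-words plus LDA pipeline: each time window of visual behaviour is encoded as counts over a small eye movement vocabulary, and a topic model is fitted to the resulting count matrix. The vocabulary size, window counts, and number of topics below are illustrative assumptions.

```python
# Sketch: bag-of-words of eye movement "words" plus an LDA topic model.
# Vocabulary, counts, and topic number are illustrative assumptions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Hypothetical vocabulary of discretised eye movement events, e.g.
# saccade direction bins, fixation duration bins, and blinks.
vocab_size, n_windows = 30, 400

# Each row: word counts for one time window (e.g. 30 s of recording).
word_counts = rng.poisson(lam=2.0, size=(n_windows, vocab_size))

lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_mixtures = lda.fit_transform(word_counts)  # per-window topic proportions

# Windows dominated by the same topic can then be grouped and inspected
# as candidate everyday activities.
print(topic_mixtures[:3].round(2))
```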