Home

research mission

Welcome to the Perceptual User Interfaces group! We work at the intersection of Ubiquitous Computing, Human-Computer Interaction, Computer Vision, and Machine Learning. We develop novel computational methods as well as ambient and on-body systems to sense, model, and analyse everyday non-verbal human behaviour. We focus specifically on gaze and physical behaviour, as these modalities are among the most promising for developing next-generation human-computer interfaces with natural interactive capabilities.

Please see research and publications as well as our YouTube channel for more information.

Open PhD positions

We are currently looking for two PhD students with a background in either i) machine learning, particularly deep learning, generative models, reinforcement learning, or related areas, or ii) computer vision or graphics, particularly gaze estimation, egocentric vision, scene understanding, or object detection/recognition. A strong interest in applying these methods to problems in HCI, e.g. intelligent user interfaces, is expected.

Excellent programming skills in C++ or other high-level programming languages are expected, and experience with Python, MATLAB, or CUDA is beneficial. Fluent written English and strong presentation skills are essential.

Positions are fully funded and open to applicants of any nationality.
Please see the Jobs page for details on how to apply.

spotlight

ETRA’18: Fixation Detection for Head-Mounted Eye Tracking Based on Visual Similarity of Gaze Targets
ETRA’18: Error-Aware Gaze-Based Interfaces for Robust Mobile Gaze Interaction
ETRA’18: Revisiting Data Normalization for Appearance-Based Gaze Estimation
ETRA’18: Robust Eye Contact Detection in Natural Multi-Person Interactions Using Gaze and Speaking Behaviour
ETRA’18: A novel approach to single camera, glint-free 3D eye model fitting including corneal refraction
ETRA’18: Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings
ETRA’18: Hidden Pursuits: Evaluating Gaze-selection via Pursuits when the Stimulus Trajectory is Partially Hidden
CHI’18: Training Person-Specific Gaze Estimators from Interactions with Multiple Devices
CHI’18 (best paper honourable mention award): Which one is me? Identifying Oneself on Public Displays
CHI’18: Understanding Face and Eye Visibility in Front-Facing Cameras of Smartphones used in the Wild
EG’18: GazeDirector: Fully Articulated Eye Gaze Redirection in Video
IUI’18: Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behavior
IEEE TPAMI’18: MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation
MUM’17 (best paper honourable mention award): They are all after you: Investigating the Viability of a Threat Model that involves Multiple Shoulder Surfers
PACM IMWUT’17: InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation
PACM IMWUT’17: EyePACT: Eye-Based Parallax Correction on Touch-Enabled Interactive Displays
UIST’17 (best paper honourable mention award): Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery
UIST’17: EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays
CVPRW’17: It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
CVPR’17 (spotlight presentation): Gaze Embeddings for Zero-Shot Image Classification
ECCV’16: A 3D Morphable Eye Region Model for Gaze Estimation
UIST’16 (best paper honourable mention award): AggreGaze: Collective Estimation of Audience Attention on Public Displays
UbiComp’16: TextPursuits: Using Text for Pursuits-Based Interaction and Calibration on Public Displays
ETRA’16 (emerging investigator award): Learning an appearance-based gaze estimator from one million synthesised images
CHI’16 (best paper honourable mention award): Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces
UIST’15 (best paper award): Orbits: Enabling Gaze Interaction in Smart Watches using Moving Targets
UbiComp’15: Discovery of Everyday Human Activities From Long-Term Visual Behaviour Using Topic Models
ICCV’15: Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
CVPR’15: Appearance-Based Gaze Estimation in the Wild