Datasets

Privacy-Aware Eye Tracking Using Differential Privacy (MPIIDPEye)

With eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users’ privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services, and to which extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy sensitive tasks: We show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces.

More information can be found here.

The dataset consists of a .zip file with two folders (Eye_Tracking_Data and Eye_Movement_Features), a .csv file with the ground truth annotation (Ground_Truth.csv), and a Readme.txt file. In each folder there are two files per participant (P) for each recording (R = document class). These files contain the recorded eye tracking data and the corresponding eye movement features, saved in .npy and .csv format. The data scheme of the eye tracking data and the eye movement features is given in this Readme.txt file.
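
As a minimal loading sketch in Python (the file names below are placeholders; the actual naming and column scheme are documented in the Readme.txt):

import numpy as np
import pandas as pd

# Load the data of one participant (P) and one recording (R); file names are
# placeholders, the actual naming and column scheme are given in Readme.txt.
eye_data = np.load("Eye_Tracking_Data/P1_R1.npy")          # same content as the .csv file
features = pd.read_csv("Eye_Movement_Features/P1_R1.csv")  # eye movement features
ground_truth = pd.read_csv("Ground_Truth.csv")             # document class annotation

print(eye_data.shape, features.shape)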

Download: Please download the full dataset here (64 MB).
Contact: Julian Steil Campus E1.4, room 622, E-mail: jsteil@mpi-inf.mpg.de

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Julian Steil; Inken Hagestedt; Michael Xuelin Huang; Andreas Bulling

Privacy-Aware Eye Tracking Using Differential Privacy Inproceedings

Proc. International Symposium on Eye Tracking Research and Applications (ETRA), 2019, (best paper award).

PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features

Eyewear devices, such as augmented reality displays, increasingly integrate eye tracking, but the first-person camera required to map a user’s gaze to the visual scene can pose a significant threat to user and bystander privacy. We present PrivacEye, a method to detect privacy-sensitive everyday situations and automatically enable and disable the eye tracker’s first-person camera using a mechanical shutter. To close the shutter in privacy-sensitive situations, the method uses a deep representation of the first-person video combined with rich features that encode users’ eye movements. To open the shutter without visual input, PrivacEye detects changes in users’ eye movements alone to gauge changes in the “privacy level” of the current situation. We evaluate our method on a first-person video dataset recorded in daily life situations of 17 participants, annotated by themselves for privacy sensitivity, and show that our method is effective in preserving privacy in this challenging setting.

More information can be found here.

The full dataset can be downloaded as a .zip file (but also separately for each participant here). Each .zip file contains four folders. In each folder there is a Readme.txt with a separate annotation scheme for the contained files.

Data_Annotation
For each participant and each recording, the continuously recorded eye, scene, and IMU data as well as the corresponding ground truth annotation are saved as .csv, .npy, and .pkl files (all three files include the same data).

Features_and_Ground_Truth
For each participant and each recording, eye movement features (52) from a sliding window of 30 seconds and CNN features (68) extracted with a step size of 1 second are saved as .csv and .npy files (both files include the same data). These features are provided unstandardised; in standardised form they were used to train our SVM models.

Video_Frames_and_Ground_Truth
For each participant and each recording, the scene frame number and the corresponding ground truth annotation are saved as .csv and .npy files (both files include the same data).

Private_Segments_Statistics
For each participant and each recording, statistics on the number of private and non-private segments as well as the average, minimum, maximum, and total segment time in minutes are saved as .csv and .npy files (both files include the same data).
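
All three file formats of a recording hold the same content, so any one of them can be loaded. A minimal sketch in Python (paths are placeholders; the Readme.txt in each folder gives the actual file names and annotation scheme), with scikit-learn standardisation shown only as one example of how the features can be standardised before SVM training:

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Placeholder paths; see the Readme.txt in each folder for the actual names.
annotation = pd.read_pickle("Data_Annotation/P01_recording.pkl")
features = np.load("Features_and_Ground_Truth/P01_recording.npy")    # 52 eye movement + 68 CNN features
frames_gt = pd.read_csv("Video_Frames_and_Ground_Truth/P01_recording.csv")

# The features are provided unstandardised; standardise them before SVM training.
features_std = StandardScaler().fit_transform(features)
print(features_std.shape, frames_gt.shape)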

Download: Please download the full dataset here (2.6 GB).
Contact: Julian Steil Campus E1.4, room 622, E-mail: jsteil@mpi-inf.mpg.de

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Julian Steil; Marion Koelle; Wilko Heuten; Susanne Boll; Andreas Bulling

PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features Inproceedings

Proc. International Symposium on Eye Tracking Research and Applications (ETRA), 2019, (best video award).

Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors (MPIIMobileAttention)

Visual attention is highly fragmented during mobile interactions, but the erratic nature of attention shifts currently limits attentive user interfaces to adapting after the fact, i.e. after shifts have already happened. We instead study attention forecasting – the challenging task of predicting users’ gaze behaviour (overt visual attention) in the near future. We present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). We propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users’ visual scene. We demonstrate that our method can forecast bidirectional attention shifts and predict whether the primary attentional focus is on the handheld mobile device. We study the impact of different feature sets on performance and discuss the significant potential but also remaining challenges of forecasting user attention during mobile interactions.

More information can be found here.

The dataset consists of a .zip file with three files per participant, one for each of the three recording blocks (RB). Each recording block file is saved as a .pkl file which can be read in Python using pandas. The data scheme of the 213 columns is given in this README.txt file.
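
A minimal sketch for reading one recording block with pandas (the file name is a placeholder; the meaning of the 213 columns is documented in the README.txt):

import pandas as pd

# Read one recording block of one participant (placeholder file name).
df = pd.read_pickle("P01_RB1.pkl")

print(df.shape)               # (number of samples, 213)
print(list(df.columns[:10]))  # column scheme is documented in README.txt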

Download: Please download the full dataset here (2.4 GB).
Contact: Julian Steil Campus E1.4, room 622, E-mail: jsteil@mpi-inf.mpg.de

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Julian Steil; Philipp Müller; Yusuke Sugano; Andreas Bulling

Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors Inproceedings

Proc. International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), pp. 1:1–1:13, 2018, (best paper award).

Fixation Detection for Head-Mounted Eye Tracking Based on Visual Similarity of Gaze Targets (MPIIEgoFixation)

Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements feign a shift in gaze estimates in the frame of reference defined by the eye tracker’s scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits that, independent of user or gaze target motion, target appearance remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential to analyse scene image information for eye movement detection.

We have evaluated our method on a recent mobile eye tracking dataset [Sugano and Bulling 2015]. This dataset is particularly suitable because participants walked around throughout the recording period. Walking leads to a large amount of head motion and scene dynamics, which is both challenging and interesting for our detection task. Since the dataset was not yet publicly available, we requested it directly from the authors. The eye tracking headset (Pupil [Kassner et al. 2014]) featured a 720p world camera as well as an infra-red eye camera mounted on an adjustable camera arm. Both cameras recorded at 30 Hz. Egocentric videos were recorded using the world camera and synchronised via hardware timestamps. Gaze estimates are provided with the dataset.

The dataset consists of 5 folders (Indoor-Recordings: P1 (1st recording), P2 (1st recording), P3 (2nd recording), P4 (1st recording), P5 (2nd recording)). Each folder contains a data file as well as a ground truth file with fixation IDs and the start and end frames in the corresponding scene video. Both files are available in .npy and .csv format.
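
A minimal loading sketch in Python (file and column names are placeholders; the actual ones are given in the dataset):

import pandas as pd

data = pd.read_csv("P1/data.csv")                  # gaze data (placeholder file name)
ground_truth = pd.read_csv("P1/ground_truth.csv")  # fixation annotations (placeholder file name)

# Iterate over the annotated fixations and their scene video frame ranges
# (column names are hypothetical).
for _, row in ground_truth.iterrows():
    print(row["fixation_id"], row["start_frame"], row["end_frame"])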

More information can be found here.

Download: Please download the full dataset here (3.2 MB).
Contact: Julian Steil Campus E1.4, room 622, E-mail:
Videos: Can be requested here

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Julian Steil; Michael Xuelin Huang; Andreas Bulling

Fixation Detection for Head-Mounted Eye Tracking Based on Visual Similarity of Gaze Targets Inproceedings

Proc. International Symposium on Eye Tracking Research and Applications (ETRA), pp. 23:1-23:9, 2018.


InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation

Analysis of everyday human gaze behaviour has significant potential for ubiquitous computing, as evidenced by a large body of work in gaze-based human-computer interaction, attentive user interfaces, and eye-based user modelling. However, current mobile eye trackers are still obtrusive, which not only makes them uncomfortable to wear and socially unacceptable in daily life, but also prevents them from being widely adopted in the social and behavioural sciences. To address these challenges we present InvisibleEye, a novel approach for mobile eye tracking that uses millimetre-size RGB cameras that can be fully embedded into normal glasses frames. To compensate for the cameras’ low image resolution of only a few pixels, our approach uses multiple cameras to capture different views of the eye, as well as learning-based gaze estimation to directly regress from eye images to gaze directions. We prototypically implement our system and characterise its performance on three large-scale, increasingly realistic, and thus challenging datasets: 1) eye images synthesised using a recent computer graphics eye region model, 2) real eye images recorded of 17 participants under controlled lighting, and 3) eye images recorded of four participants over the course of four recording sessions in a mobile setting. We show that InvisibleEye achieves a top person-specific gaze estimation accuracy of 1.79° using four cameras with a resolution of only 5 × 5 pixels. Our evaluations not only demonstrate the feasibility of this novel approach but, more importantly, underline its significant potential for finally realising the vision of invisible mobile eye tracking and pervasive attentive user interfaces.

We used this first hardware prototype to record a dataset of more than 280,000 close-up eye images with ground truth annotation of the gaze location. A total of 17 participants were recorded, covering a wide range of appearances:
• Gender: Five (29%) female and 12 (71%) male
• Nationality: Seven (41%) German, seven (41%) Indian, one (6%) Bangladeshi, one (6%) Iranian, and one (6%) Greek
• Eye Color: 12 (70%) brown, four (23%) blue, and one (5%) green
• Glasses: Four participants (23%) wore regular glasses and one (6%) wore contact lenses
For each participant, two sets of data were recorded: one set of training data and a separate set of test data. For each set, a series of gaze targets was shown on a display that participants were instructed to look at. For both training and test data the gaze targets covered a uniform grid in a random order, where the grid corresponding to the test data was positioned to lie in between the training points. Since the NanEye cameras record at about 44 FPS, we gathered approximately 22 frames per camera and gaze target. The training data was recorded using a uniform 24 × 17 grid of points, with an angular distance in gaze angle of 1.45° horizontally and 1.30° vertically between the points. In total the training set contained about 8,800 images per camera and participant. The test set’s points belonged to a 23 × 16 grid of points and it contains about 8,000 images per camera and participant. This way, the gaze targets covered a field of view of 35° horizontally and 22° vertically.

The dataset consists of 17 folders. Each folder contains two subfolders for the training and test sets, with the video frames from each of the four NanEye cameras as well as a .npy file with the gaze targets' pixel coordinates on the display.
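
A minimal sketch for assembling the training data of one participant and one camera (folder and file names are placeholders):

import glob
import numpy as np
from PIL import Image

# Load the low-resolution eye images of one NanEye camera and the gaze targets,
# which are given as pixel coordinates on the display (placeholder paths).
frames = sorted(glob.glob("P01/train/cam1/*.png"))
X = np.stack([np.asarray(Image.open(f)) for f in frames]).reshape(len(frames), -1)
y = np.load("P01/train/gaze_targets.npy")

print(X.shape, y.shape)  # flattened eye images and 2D gaze target positions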

More information can be found here.

Download: Please download the full dataset here (49.2 GB).
Contact: Julian Steil Campus E1.4, room 622, E-mail:

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Marc Tonsen; Julian Steil; Yusuke Sugano; Andreas Bulling

InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation Journal Article

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 1 (3), pp. 106:1-106:21, 2017, (distinguished paper award).


It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation

We present the MPIIFaceGaze dataset, which is based on the MPIIGaze dataset and additionally provides human facial landmark annotations and the face regions. We added facial landmark and pupil center annotations for 37,667 face images. The facial landmarks were annotated semi-automatically by first running a facial landmark detection method and then having two human annotators check the results. The pupil centers were annotated from scratch by two human annotators. For the sake of privacy, we only release the face region and blocked out the background in the images.

More information can be found here.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Download: Please download the full dataset here (940 MB).

You can also download the normalized data from here, which includes the normalized face images at a size of 448×448 pixels and the corresponding 2D gaze angle vectors. Please note that the data normalization procedure changes the gaze direction labels, so results obtained on the normalized data need to be converted back to the original camera space.
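
As a rough sketch of what such a conversion can look like (it assumes the pitch/yaw convention commonly used with MPIIGaze-style normalized data and a per-sample normalization rotation matrix R; verify both against the dataset documentation before relying on it):

import numpy as np

def angles_to_vector(pitch, yaw):
    # Convert a 2D gaze angle (pitch, yaw) into a 3D unit gaze vector,
    # assuming the convention commonly used with MPIIGaze-style normalized data.
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def to_camera_space(pitch, yaw, R):
    # R is the rotation matrix applied during data normalization (assumed to be
    # available per sample); its inverse maps the gaze direction back to the
    # original camera space.
    return np.linalg.inv(R) @ angles_to_vector(pitch, yaw)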

Contact: Xucong Zhang, Campus E1.4, room 609, E-mail:

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following papers:

Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling

MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation Journal Article

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 41 (1), pp. 162-175, 2019.

Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling

It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation Inproceedings

Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2299-2308, 2017.


Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments

We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.

More information can be found here.

Download: Please download the full dataset here (2.4 GB).
Contact: Andreas Bulling Campus E1.4, room 628, E-mail:

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Marc Tonsen; Xucong Zhang; Yusuke Sugano; Andreas Bulling

Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments Inproceedings

Proc. of the 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016), pp. 139-142, 2016.


3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers

We collected eye tracking data from 14 participants aged between 22 and 29 years. 10 recordings were collected from each participant, 2 for each depth (calibration and test), at 5 different depths from a public display (1m, 1.25m, 1.5m, 1.75m, and 2m). The display dimensions were 121.5cm × 68.7cm. We used a 5×5 grid pattern to display 25 calibration points and an inner 4×4 grid to display 16 test points. This was done by randomly moving a target marker over these grid positions and capturing images from the eye/scene camera at 30 Hz. We further performed marker detection using the ArUco library on the target points to compute their 3D coordinates w.r.t. the scene camera. In addition, the 2D position of the pupil center in each frame of the eye camera is given by a state-of-the-art dark-pupil head-mounted eye tracker (PUPIL). The eye tracker consists of a 1280×720 resolution scene camera and a 640×360 resolution eye camera. The PUPIL software used was v0.5.4.

The data was collected in an indoor setting and adds up to over 7 hours of eye tracking. The dataset includes per-frame marker tracking results from ArUco for every recording, along with per-frame pupil tracking results from the PUPIL eye tracker for every frame of the eye video. We have also included the camera intrinsic parameters for both the eye camera and the scene camera, along with some post-processed results such as the frames corresponding to the gaze intervals for every grid point. For more information on the data format and how to use it, please refer to the README file inside the dataset. In case you want to access the raw videos from both the scene and eye cameras, please contact the authors.
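
As an illustration of the marker detection step only (the released dataset already contains the per-frame marker tracking results; this sketch assumes the pre-4.7 ArUco API from opencv-contrib-python, a placeholder marker dictionary and marker size, and intrinsics loaded from a placeholder file):

import cv2
import numpy as np

camera_matrix = np.load("scene_camera_matrix.npy")  # placeholder; intrinsics are included in the dataset
dist_coeffs = np.zeros(5)                           # distortion ignored for this sketch

frame = cv2.imread("scene_frame.png")               # placeholder scene camera frame
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict)

# Estimate the marker pose, i.e. its 3D position w.r.t. the scene camera
# (the marker side length in metres is a placeholder).
rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(corners, 0.05, camera_matrix, dist_coeffs)[:2]
print(ids, tvecs)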

Our evaluations on this data show the effectiveness of our new 2D-to-3D mapping approach, together with calibration data from multiple depths, in reducing gaze estimation error. More information on this data and our analysis can be found here.

Download: Please download the full dataset from here (81.4 MB).
Contact: Andreas Bulling Campus E1.4, room 628, E-mail:

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Mohsen Mansouryar; Julian Steil; Yusuke Sugano; Andreas Bulling

3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers Inproceedings

Proc. of the 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016), pp. 197-200, 2016.


Discovery of Everyday Human Activities From Long-term Visual Behaviour Using Topic Models

We recruited 10 participants (three female) aged between 17 and 25 years through university mailing lists and adverts in university buildings. Most participants were bachelor’s and master’s students in computer science and chemistry. None of them had previous experience with eye tracking. After arriving in the lab, participants were first introduced to the purpose and goals of the study and could familiarise themselves with the recording system. In particular, we showed them how to start and stop the recording software, how to run the calibration procedure, and how to restart the recording. We then asked them to take the system home and wear it continuously for a full day from morning to evening. We asked participants to plug in and recharge the laptop during prolonged stationary activities, such as at their work desk. We did not impose any other restrictions on these recordings, such as which day of the week to record or which activities to perform, etc.

The recording system consisted of a Lenovo Thinkpad X220 laptop, an additional 1TB hard drive and battery pack, as well as an external USB hub. Gaze data was collected using a PUPIL head-mounted eye tracker connected to the laptop via USB. The eye tracker features two cameras: one eye camera with a resolution of 640×360 pixels recording a video of the right eye from close proximity, as well as an egocentric (scene) camera with a resolution of 1280×720 pixels. Both cameras record at 30 Hz. The battery lifetime of the system was four hours. We implemented custom recording software with a particular focus on ease of use as well as the ability to easily restart a recording if needed.

We recorded a dataset of more than 80 hours of eye tracking data. The dataset comprises 7.8 hours of outdoor activities, 14.3 hours of social interaction, 31.3 hours of focused work, 8.3 hours of travel, 39.5 hours of reading, 28.7 hours of computer work, 18.3 hours of watching media, 7 hours of eating, and 11.4 hours of other (special) activities. Note that annotations are not mutually exclusive, i.e. these durations should be seen independently and sum up to more than the actual dataset size.

The dataset consists of 20 files. Ten files contain the long-term eye movement data of the ten recorded participants of this study. The other ten files describe the corresponding ground truth annotations.

More information can be found here.

Download: Please download the full dataset here (457.8 MB).
Contact: Julian Steil Campus E1.4, room 622, E-mail:

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Julian Steil; Andreas Bulling

Discovery of Everyday Human Activities From Long-Term Visual Behaviour Using Topic Models Inproceedings

Proc. of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2015), pp. 75-85, 2015.


Appearance-Based Gaze Estimation in the Wild

We present the MPIIGaze dataset, which contains 213,659 images that we collected from 15 participants during natural everyday laptop use over more than three months. The number of images collected per participant varied from 1,498 to 34,745. Our dataset is significantly more variable than existing ones with respect to appearance and illumination.

The dataset contains three parts: “Data”, “Evaluation Subset”, and “Annotation Subset”.

  • The “Data” part includes “Original”, “Normalized”, and “Calibration” data for all 15 participants.
  • The “Evaluation Subset” contains the image list that indicates the samples selected for the evaluation subset in our paper.
  • The “Annotation Subset” contains the image list that indicates the 10,848 samples that we manually annotated, together with the (x, y) positions of six facial landmarks (four eye corners, two mouth corners) and of the two pupil centers for each of these images.
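
A minimal sketch for reading the annotation list (the file name is a placeholder, and it assumes each line starts with the image name followed by the 16 annotation values; check the dataset documentation for the exact order):

import numpy as np

# Skip the image-name column and read the 16 annotation values per image.
values = np.loadtxt("Annotation Subset/p00.txt", usecols=range(1, 17))
landmarks = values[:, :12].reshape(-1, 6, 2)  # (x, y) of six facial landmarks
pupils = values[:, 12:].reshape(-1, 2, 2)     # (x, y) of the two pupil centers

print(landmarks.shape, pupils.shape)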

More information can be found here.

Download: Please download the full dataset here (2.1 GB).
Contact: Xucong Zhang, Campus E1.4, room 609, E-mail:

If you use this dataset in your work, please cite:

Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling

MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation Journal Article

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 41 (1), pp. 162-175, 2019.

Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling

Appearance-Based Gaze Estimation in the Wild Inproceedings

Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 4511-4520, 2015.


Prediction of Search Targets From Fixations in Open-World Settings

We recorded fixation data of 18 participants (nine male) with different nationalities and aged between 18 and 30 years. The eyesight of nine participants was impaired but corrected with contact lenses or glasses.

To record gaze data we used a stationary Tobii TX300 eye tracker that provides binocular gaze data at a sampling frequency of 300Hz. Parameters for fixation detection were left at their defaults: the fixation duration was set to 60ms, while the maximum time between fixations was set to 75ms. The stimuli were shown on a 30-inch screen with a resolution of 2560×1600 pixels. Participants were randomly assigned to search for targets from one of the three stimulus types.

The dataset contains three categories: “Amazon”, “O’Reilly”, and “Mugshots”. For each category, there is a folder that contains four subfolders: search targets, collages, gaze data, and the binary masks that we used to obtain the position of each individual image in the collages.

  • In the subfolder “search targets” you can find the five single images that the participants were looking for in the collages.
  • In the folder “Collages” there are five subfolders; a subfolder with the same name as a search target contains the collages shown for that search target. There are 20 collages per search target.
  • In the folder “gaze data” you can find the media name, fixation order, fixation position on the screen, and pupil size for the left and right eye.
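
A sketch of how the binary masks can be used to look up which image of a collage a fixation landed on (paths and column names are placeholders):

import numpy as np
import pandas as pd
from PIL import Image

# The mask assigns each collage pixel to one of the individual images it contains.
mask = np.asarray(Image.open("Amazon/masks/collage_01.png"))  # placeholder path
gaze = pd.read_csv("Amazon/gaze_data/participant_01.csv")     # placeholder path

for _, fixation in gaze.iterrows():
    x, y = int(fixation["FixationX"]), int(fixation["FixationY"])  # hypothetical column names
    if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]:
        print("fixated image id:", mask[y, x])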

More information can be found here.

Download: Please download the full dataset here (374.9 MB).
Contact: Hosnieh Sattar Campus E1.4, room 608, E-mail:

The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:

Hosnieh Sattar; Sabine Müller; Mario Fritz; Andreas Bulling

Prediction of Search Targets From Fixations in Open-World Settings Inproceedings

Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 981-990, 2015.


Recognition of Visual Memory Recall Processes Using Eye Movement Analysis

This dataset was recorded to investigate the feasibility of recognising visual memory recall from eye movements. Eye movement data was recorded of participants looking at familiar and unfamiliar pictures from four picture categories: abstract, landscapes, faces, and buildings. The study was designed with two objectives in mind: (1) to elicit distinct eye movements by using a large screen and well-defined visual stimuli, and (2) to record natural visual behaviour without any active visual search or memory task by not asking participants for real-time feedback.

The dataset has the following characteristics:

  • ~7 hours of eye movement data recorded using a wearable Electrooculography (EOG) system
  • 7 participants (3 female, 4 male), aged between 25 and 29 years
  • one experimental run for each participant, in which they looked at four continuous, random sequences of pictures (exposure time for each picture: 10s). Within each sequence, 12 pictures were presented only once; five others were presented four times at regular intervals. In between each exposure, a picture with Gaussian noise was shown for five seconds as a baseline measurement.
  • separate horizontal and vertical EOG channels, joint sampling frequency of 128Hz
  • fully ground truth annotated for picture type (repeated, non-repeated) and picture category
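
A minimal sketch for cutting the two EOG channels into the 10-second picture exposures (the file layout is an assumption and the 5-second noise baselines are ignored for brevity; both channels are sampled jointly at 128Hz):

import numpy as np

FS = 128            # joint sampling frequency in Hz
EXPOSURE = 10 * FS  # samples per 10-second picture exposure

# Horizontal and vertical EOG channels of one participant
# (placeholder file name, assuming one channel per column).
eog = np.loadtxt("participant1_eog.csv", delimiter=",")
eog_h, eog_v = eog[:, 0], eog[:, 1]

segments = [eog_h[i:i + EXPOSURE] for i in range(0, len(eog_h) - EXPOSURE + 1, EXPOSURE)]
print(len(segments), "segments of", EXPOSURE, "samples each")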

Download: Please download the full dataset here (25.3 MB).
Contact: Andreas Bulling, Campus E1.4, room 628, E-mail:

If you use this dataset in your work, please cite:

Andreas Bulling; Daniel Roggen

Recognition of Visual Memory Recall Processes Using Eye Movement Analysis Inproceedings

Proc. of the 13th International Conference on Ubiquitous Computing (UbiComp 2011), pp. 455-464, 2011.


Eye Movement Analysis for Activity Recognition Using Electrooculography

This dataset was recorded to investigate the problem of recognising common office activities from eye movements. The experimental scenario involved five office-based activities – copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web – and periods during which participants took a rest (the NULL class).

The dataset has the following characteristics:

  • ~8 hours of eye movement data recorded using a wearable Electrooculography (EOG) system
  • 8 participants (2 female, 6 male), aged between 23 and 31 years
  • 2 experimental runs for each participant, each run involving them in a sequence of five different, randomly ordered office activities and a period of rest
  • separate horizontal and vertical EOG channels, joint sampling frequency of 128Hz
  • fully ground truth annotated (5 activity classes plus NULL)
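
As an illustration of how such data is commonly processed for activity recognition (a generic sliding-window sketch with simple per-window statistics, not the feature set used in the paper; window length, step size, and channel layout are assumptions):

import numpy as np

FS = 128          # joint sampling frequency in Hz
WINDOW = 30 * FS  # 30-second analysis window (assumption)
STEP = 5 * FS     # 5-second step size (assumption)

def sliding_window_features(eog_h, eog_v):
    # Compute simple per-window statistics over both EOG channels.
    feats = []
    for start in range(0, len(eog_h) - WINDOW + 1, STEP):
        h = eog_h[start:start + WINDOW]
        v = eog_v[start:start + WINDOW]
        feats.append([h.mean(), h.std(), v.mean(), v.std(),
                      np.abs(np.diff(h)).mean(), np.abs(np.diff(v)).mean()])
    return np.array(feats)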

Download: Please download the full dataset here (20.9 MB).
Contact: Andreas Bulling, Campus E1.4, room 628, E-mail:

If you use this dataset in your work, please cite:

Andreas Bulling; Jamie A. Ward; Hans Gellersen; Gerhard Tröster

Eye Movement Analysis for Activity Recognition Using Electrooculography Journal Article

IEEE Transactions on Pattern Analysis and Machine Intelligence, 33 (4), pp. 741-753, 2011.

Andreas Bulling; Jamie A. Ward; Hans Gellersen; Gerhard Tröster

Eye Movement Analysis for Activity Recognition Inproceedings

Proc. of the 11th International Conference on Ubiquitous Computing (UbiComp 2009), pp. 41–50, 2009.


Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography

This dataset was recorded to investigate the problem of recognising reading activity from eye movements. The experimental setup was designed with two main objectives in mind: (1) to record eye movements in an unobtrusive manner in a mobile real-world setting, and (2) to evaluate how well reading can be recognised for persons in transit. We defined a scenario of travelling to and from work containing a semi-naturalistic set of reading activities. It involved subjects reading freely chosen text without pictures while engaged in a sequence of activities such as sitting at a desk, walking along a corridor, walking along a street, waiting at a tram stop and riding a tram.

The dataset has the following characteristics:

  • ~6 hours of eye movement data recorded using a wearable Electrooculography (EOG) system
  • 8 participants (4 female, 4 male), aged between 23 and 35 years
  • 4 experimental runs for each participant: calibration (walking around a circular corridor for approximately 2 minutes while reading continuously), baseline (walk and tram ride to and from work without any reading), two runs of reading in the same scenario
  • separate horizontal and vertical EOG channels, joint sampling frequency of 128Hz
  • fully ground truth annotated (reading vs. not reading) using a wireless Wii Remote controller

Download: Please download the full dataset here (20.2 MB).
Contact: Andreas Bulling, Campus E1.4, room 628, E-mail:

If you use this dataset in your work, please cite:

Andreas Bulling; Jamie A. Ward; Hans Gellersen

Multimodal Recognition of Reading Activity in Transit Using Body-Worn Sensors Journal Article

ACM Transactions on Applied Perception, 9 (1), pp. 2:1–2:21, 2012.

Andreas Bulling; Jamie A. Ward; Hans Gellersen; Gerhard Tröster

Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography Inproceedings

Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19–37, 2008.
