Pingmei Xu; Yusuke Sugano; Andreas Bulling: Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces. In: Proc. of the 34th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 3299-3310, 2016, ISBN: 978-1-4503-3362-7 (best paper honourable mention award).

@inproceedings{xu16_chi,
title = {Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces},
author = {Pingmei Xu and Yusuke Sugano and Andreas Bulling},
url = {https://perceptual.mpi-inf.mpg.de/files/2016/02/xu16_chi.pdf},
doi = {10.1145/2858036.2858479},
isbn = {978-1-4503-3362-7},
year = {2016},
date = {2016-05-07},
booktitle = {Proc. of the 34th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)},
pages = {3299-3310},
abstract = {We present a computational model to predict users' spatio-temporal visual attention for WIMP-style (windows, icons, mouse, pointer) graphical user interfaces. Like existing models of bottom-up visual attention in computer vision, our model does not require any eye tracking equipment. Instead, it predicts attention solely using information available to the interface, specifically users' mouse and keyboard input as well as the UI components they interact with. To study our model in a principled way we further introduce a method to synthesize user interface layouts that are functionally equivalent to real-world interfaces, such as from Gmail, Facebook, or GitHub. We first quantitatively analyze attention allocation and its correlation with user input and UI components using ground-truth gaze, mouse, and keyboard data of 18 participants performing a text editing task. We then show that our model predicts attention maps more accurately than state-of-the-art methods. Our results underline the significant potential of spatio-temporal attention modeling for user interface evaluation, optimization, or even simulation.},
note = {best paper honourable mention award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}