Given a video of an activity, can we predict what will happen next? In this paper we explore two simple tasks related to temporal prediction in egocentric videos of everyday activities. We provide both human experiments, to understand how well people can perform on these tasks, and computational models for prediction. Experiments indicate that humans and computers can do well on temporal prediction and that personalization to a particular individual or environment provides significantly increased performance. Developing methods for temporal prediction could have far-reaching benefits, enabling robots or intelligent agents to anticipate what a person will do before they do it.
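For intuition, here is a minimal, hypothetical sketch of one flavor of temporal prediction: training a classifier to decide which of two frames comes first. This is not the model from the paper; the synthetic fake_frame_features function, the pairwise setup, and the logistic-regression choice are all illustrative assumptions.

# Hypothetical illustration (not the paper's method): a before/after
# classifier over frame features. Given features for two frames from the
# same video, predict whether the first frame precedes the second.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for real frame descriptors (e.g., CNN features): random
# vectors plus a weak time-dependent drift so the toy task is learnable.
def fake_frame_features(t, dim=64):
    return rng.normal(size=dim) + 0.1 * t

# Build correctly ordered and reversed frame pairs from one synthetic "video".
times = np.arange(100)
feats = np.stack([fake_frame_features(t) for t in times])
pairs, labels = [], []
for i in range(len(times) - 1):
    a, b = feats[i], feats[i + 1]
    pairs.append(np.concatenate([a, b])); labels.append(1)  # correct order
    pairs.append(np.concatenate([b, a])); labels.append(0)  # reversed

X, y = np.stack(pairs), np.array(labels)
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))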
Paper
Temporal Perception and Prediction in Ego-Centric Video
Yipin Zhou and Tamara L. Berg
Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV 2015)
[PDF 10MB] [Poster 10MB]
@inproceedings{pred,
  author    = {Yipin Zhou and Tamara L. Berg},
  title     = {Temporal Perception and Prediction in Ego-Centric Video},
  booktitle = {ICCV},
  year      = {2015},
}
Dataset
FPPA ego-centric dataset (6.8 GB) [README]
Please email yipin@cs.unc.edu to obtain the download link.