Abstract
We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowdsourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform a descriptive analysis and a quantitative modeling exercise that provide further insights into the results of the original study.
To facilitate the use of the presented methodology by other researchers, we share the data, the source of the human intelligence tasks for crowdsourcing, a new taxonomy that integrates previous work, and the source code of the visualization tool.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 9th ACM International Conference on Interactive Tabletops and Surfaces (ITS 2014) |
| Place of publication | New York, NY |
| Publisher | ACM |
| Pages | 25-34 |
| Number of pages | 10 |
| ISBN (Electronic) | 9781450325875 |
| DOIs | |
| Publication status | Published - 16 Nov 2014 |
Keywords
- Gesture design
- User-defined gestures
- Gesture elicitation
- Gesture analysis methodology
- Gesture annotation
- Gesture memorability
- Gestures
- Gesture datasets
- Crowdsourcing