Abstract
Capturing visual social cues in conversations can be difficult for visually impaired people. Being unable to see the facial expressions and body postures of their conversation partners can lead them to misunderstand or misjudge social situations. This paper presents a system that infers social cues from streaming video recorded by a pair of imaging glasses and feeds the inferred cues back to the user. We have implemented a prototype and evaluated the effectiveness and usefulness of the system in real-world conversation situations.
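As a rough illustration of the kind of pipeline the abstract describes (camera stream → social-cue inference → feedback to the user), the sketch below detects faces in frames from a head-worn camera and forwards a coarse expression label to a feedback channel. It is a minimal sketch, not the authors' implementation: `classify_expression` and `send_feedback` are hypothetical placeholders, and OpenCV's bundled Haar cascade merely stands in for whatever detector the glasses use.

```python
# Illustrative sketch only: a minimal social-cue pipeline of the kind the
# abstract describes. Classifier and feedback channel are hypothetical stubs.
import cv2


def classify_expression(face_img):
    """Placeholder for an expression classifier (e.g. a trained model).
    Always returns 'neutral' so the sketch stays self-contained."""
    return "neutral"


def send_feedback(cue):
    """Placeholder feedback channel; a real system might use audio or haptics."""
    print(f"social cue: {cue}")


def main():
    # OpenCV's bundled frontal-face Haar cascade stands in for the detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # 0 = default camera; the glasses' stream here
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Classify each detected face and feed the cue back to the user.
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            send_feedback(classify_expression(gray[y:y + h, x:x + w]))
    cap.release()


if __name__ == "__main__":
    main()
```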
Original language | English |
---|---|
Title of host publication | Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct |
Place of Publication | New York, NY |
Publisher | ACM |
Pages | 968-972 |
Number of pages | 5 |
ISBN (Print) | 9781450344623 |
DOIs | |
Publication status | Published - 12 Sept 2016 |
Event | 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '16), Kongresshaus Stadthalle Heidelberg, Heidelberg, Germany, 12 Sept 2016 → 16 Sept 2016 |
Conference
Conference | 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '16) |
---|---|
Abbreviated title | UbiComp |
Country/Territory | Germany |
City | Heidelberg |
Period | 12/09/16 → 16/09/16 |
Internet address | http://ubicomp.org/ubicomp2016/ |
Keywords
- Affective Computing
- Imaging glasses
- Emotion recognition