This article presents a systematic review of empirical research in mid-air interaction, based on a corpus of 104 publications from 2011 to 2018. Although the idea of exploiting body movements and gestures in human-computer interaction (HCI) is not new, mid-air interaction has become practical due to the availability of reasonably priced depth cameras and sensors. In mid-air interaction, users employ their whole body, with a strong focus on the hands, and apply gestures, postures, and movements to interact with digital content on distant displays or remote devices. Mid-air interaction is a distinct style of natural HCI.

Publication venues of the reviewed papers include (counts and percentages refer to the corpus of 104 publications; some venue labels could not be recovered from the source):

- Euro ITV (European Conf. on interactive TV and video): 4 (3.8%) (Bobeth et al., 2012; Dezfuli et al., 2012; Ren & O'Neill, 2013b; Vatavu, 2013)
- IFIP Interact: 4 (3.8%) (Bailly, Walter, Müller, Ning, & Lecolinet, 2011; Heimonen et al., 2013; Markussen et al., 2013; Piumsomboon et al., 2013)
- (venue not recoverable): (Gillespie et al., 2017; Opromolla et al., 2015)
- IEEE 3D User Interfaces: 2 (1.9%) (Guy, Punpongsanon, Iwai, Sato, & Boubekeur, 2015; Ortega et al., 2017)
- Other conferences: 19 (18.3%)
- (venue not recoverable): (…; Chen et al., 2017)
- Virtual Reality: 2 (1.9%) (Koutsabasis & Vosinakis, 2017; Vosinakis & Koutsabasis, 2018)
- Other journals: 18 (17.3%) (Albertini et al., 2017; Bossavit, Marzo, & Ardaiz, 2014; Caro, Tentori, Martinez-Garcia, & Zavala-Ibarra, 2015; Cho, Baek, Baek, Lee, & Bang, 2014; Dong, Danesh, Figueroa, & Saddik, 2015; Ebert, Hatch, Ampanozi, Thali, & Ross, 2012; Fernandez-Cervantes, Neubauer, Hunter, Stroulia, & Liu, 2018; Kamel Boulos et al., 2011; Kosmas, Ioannou, & Retalis, 2018; Löcken, Hesselmann, Pielot, Henze, & Boll, 2011; Morrison et al., 2016; Nancel, Pietriga, Chapuis, & Beaudouin-Lafon, 2015; O'Hara et al., 2014; Shen, Luo, Wu, Tian, & Deng, 2016; Stellmach, Jüttner, Nywelt, & Schneider, 2012; Tan, Chao, Zawaideh, & Roberts, 2013; …)
- (venue not recoverable): (Arefin Shimon et al., 2016; Carter et al., 2016; Chan, Seyed, Stuerzlinger, Yang, & Maurer, 2016; Freeman, Brewster, & Lantz, 2016; Gerling, Livingston, Nacke, & Mandryk, 2012; Hayashi, Maas, & Hong, 2014; Kulshreshth & LaViola, 2014; Mäkelä et al., 2018; Markussen, Jakobsen, & Hornbaek, 2014; Nancel, Wagner, Pietriga, Chapuis, & Mackay, 2011; Paay et al., 2017; Rovelo Ruiz, Vanacken, Luyten, Abad, & Camahort, 2014; Ruiz, Li, & Lank, 2011; Song, Goh, Hutama, Fu, & Liu, 2012; Strohmeier, Boring, & Hornbaek, 2018; Wacharamanotham, Todi, Pye, & Borchers, 2014; Walter, Bailly, & Müller, 2013)
- … of Human-Computer Interaction: 2 (1.9%) (Bernardos, Gómez, & Casar, 2016; …)

Regarding criteria for mode-switching actions: conceptually, a fast technique is one that can be recognized in a single sensor time frame (in implementations, it may actually be a few frames to compensate for noise).
Independent: a mode-switch action should be fast to recognize and independent of previous tracking states, meaning it should not rely on time-based actions such as a specific sequence of movements.

The hand gestures reported in the reviewed studies break down into the following categories (with occurrence counts):

- Pinch Finger(s) (29): thumb touches index; thumb touches all fingers; thumb touches side of hand; thumb touches middle; thumb touches index and middle; thumb touches ring; thumb touches pinky; thumb touches index, middle, and ring; thumb touches ring and pinky; thumb touches three fingers
- Extend Finger(s) (31): extend index; extend thumb; extend thumb-index-middle; clench index; two-hand point; point with dwell
- Close Hand (18): make fist; make partial fist
- Open Hand (62): open hand with oriented palm (in/out, up/down, right/left); open hand; open hand with finger(s) bent
- Raise Hand (ND) (6): hand raised into field of view; raised above shoulder
- Touch Body (ND) (6): finger(s) touch head, behind ear, mouth; hand touches waist

Methods: We build a mental model and a behavior model of the user to study two key parts of the interaction process. A typical process of gesture-based target acquisition is as follows: when a user intends to acquire a target, she performs a gesture with her hands, head, or other parts of the body; the computer senses and recognizes the gesture and infers the most probable target. The mental model describes how the user thinks up a gesture for acquiring a target, and can be seen as the intuitive mapping between gestures and targets. The behavior model describes how the user moves body parts to perform the gesture, and the relationship between the gesture the user intends to perform and the signals the computer senses. Results: We present and discuss three pieces of research that focus on the mental model and the behavior model of gesture-based target acquisition in VR and AR. Conclusions: We show that by leveraging these two models, interaction experience and performance can be improved in VR and AR environments. Keywords: gesture-based interaction, mental model, behavior model, virtual reality, augmented reality.
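The sense-recognize-infer step described above can be sketched as a simple probabilistic chooser: given a noisy pointing direction, pick the target that maximizes likelihood under a behavior model of the user's motor noise. The Gaussian angular-error model, the thresholds, and all names below are illustrative assumptions, not details taken from the reviewed papers.

```python
import math

def infer_target(pointed_dir, targets, sigma_deg=8.0):
    """Pick the most probable target given a noisy 2-D pointing direction.

    Behavior model (an assumption for this sketch): the user's angular
    error around the intended target is Gaussian with std dev sigma_deg.
    `pointed_dir` and the target directions are angles in degrees.
    """
    def likelihood(target_dir):
        # Wrap the angular error into [-180, 180) before scoring it.
        err = (pointed_dir - target_dir + 180.0) % 360.0 - 180.0
        return math.exp(-0.5 * (err / sigma_deg) ** 2)

    # Maximum-likelihood choice over the candidate targets.
    return max(targets, key=lambda name: likelihood(targets[name]))

targets = {"menu": -30.0, "door": 0.0, "lamp": 25.0}
print(infer_target(3.5, targets))  # 3.5° off the door, far from the rest → "door"
```

A prior over targets (e.g. from recent selections) could multiply into the likelihood to make this a full Bayesian inference, which is how richer behavior models are typically combined with context.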
In this paper, we explore gesture-based approaches for target acquisition in virtual and augmented reality. Background: Gesture is a basic interaction channel that humans frequently use to communicate in daily life.
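As a concrete illustration of how such gestures can be sensed, the sketch below classifies a single frame of 3-D fingertip positions into a few of the hand-pose categories discussed earlier (pinch, fist, open hand). Because it looks only at the current frame, it is stateless in the sense of the single-sensor-time-frame recognizability criterion. The landmark format and the distance thresholds are assumptions made for this sketch; real hand trackers expose richer hand models.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_frame(landmarks, pinch_mm=25.0, curl_mm=60.0):
    """Classify one frame of hand landmarks (in mm) into a coarse pose.

    `landmarks` maps finger names to 3-D tip positions, plus 'palm' for
    the palm centre. Thresholds are illustrative guesses. Only the
    current frame is used, so no previous tracking state is needed.
    """
    thumb = landmarks["thumb"]
    tips = [landmarks[f] for f in ("index", "middle", "ring", "pinky")]
    palm = landmarks["palm"]

    if dist(thumb, landmarks["index"]) < pinch_mm:
        return "pinch (thumb touches index)"
    if all(dist(tip, palm) < curl_mm for tip in tips):   # all fingers curled
        return "close hand (fist)"
    if all(dist(tip, palm) >= curl_mm for tip in tips):  # all fingers extended
        return "open hand"
    return "other"

frame = {"palm": (0, 0, 0), "thumb": (40, 10, 0), "index": (45, 15, 0),
         "middle": (0, 90, 0), "ring": (-15, 85, 0), "pinky": (-30, 75, 0)}
print(classify_frame(frame))  # thumb-index distance ≈ 7 mm → pinch
```

Distinguishing the finer variants in the taxonomy (e.g. which finger the thumb touches) would follow the same pattern with per-finger distance checks.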