BikeVR
(2020)
As awareness of ongoing climate change grows, eco-friendly means of transport for all citizens are moving into focus. In order to implement specific measures, it is necessary to better understand and emphasize sustainable transportation like walking and cycling through focused research. When developing novel traffic concepts and urban spaces for non-motorized traffic participants such as cyclists and pedestrians, traffic and urban planning must focus on their needs. To capture qualitative factors that are rarely assessed in this context (such as stress, the perception of time, and the attractiveness of the environment), we present an audiovisual VR bicycle simulator which allows the user to cycle through a virtual urban environment by physically pedaling and steering. Virtual Reality (VR) is a suitable tool in this context, as it presents study participants with identical, almost freely definable (virtual) urban spaces with adjustable traffic scenarios. Our preliminary prototype proved promising and will be further optimized and evaluated.
This paper examines different approaches for identifying users by their personal behavior and discusses techniques that could be used in the context of websites. Such web tracking approaches have the potential to identify users even if they use multiple or shared devices. On web pages, mouse and touch input are widely used. We therefore propose a survey to evaluate the feasibility of identifying users by their interaction behavior.
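To make the idea of interaction-behavior features more concrete, the following sketch derives simple mouse-dynamics features (speed and direction changes) from a stream of cursor positions. The feature set and function names are illustrative assumptions, not the paper's method:

```python
import math

def mouse_features(events):
    """Derive simple mouse-dynamics features from (x, y, t) samples.

    `events` is a chronologically ordered list of (x, y, timestamp)
    tuples, e.g. collected via JavaScript mousemove handlers.
    """
    speeds, angles = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(events, events[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate timestamps
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        angles.append(math.atan2(y1 - y0, x1 - x0))
    # Frequency of direction changes hints at an individual movement style.
    turns = [abs(a1 - a0) for a0, a1 in zip(angles, angles[1:])]
    return {
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "max_speed": max(speeds, default=0.0),
        "mean_turn": sum(turns) / len(turns) if turns else 0.0,
    }
```

Feature vectors like this could then be compared across sessions to re-identify a user independently of cookies or device identifiers.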
Haptic feedback may support immersion and presence in virtual reality (VR) environments. The emerging consumer market offers the first devices that are expected to increase the feeling of actually being present in a virtual environment. In this paper we introduce a novel evaluation that examines the influence of different types of haptic feedback on presence and performance in manual tasks in VR. To this end, we conducted a comprehensive user study involving 14 subjects, who performed throwing, stacking, and object identification tasks in VR with visual (i.e., sensory substitution), vibrotactile, or force feedback. We measured the degree of presence and task-related performance metrics. Our results indicate that, regarding presence, vibrotactile feedback outperforms force feedback, which in turn performs better than visual feedback alone. In addition, force feedback significantly lowered the execution time for the throwing and stacking tasks. In object identification tasks, visual feedback increased the detection rates compared to vibrotactile and force feedback, but also increased the time required for identification. Despite the inadequacies of the still young consumer technology, there were strong indications of connections between presence, task fulfillment, and the type of haptic feedback.
Interest in virtual and augmented reality has increased rapidly in recent years. Recently, haptic interaction and its applications have come into focus. In this paper, we suggest the exploration of virtual objects using off-the-shelf VR game controllers. These are held like a pen in both hands and used to palpate and identify virtual objects. Our study largely coincides with comparable previous work and shows that a ready-to-use VR system can basically be used for haptic exploration. The results indicate that virtual objects are recognized more effectively with closed eyes than with open eyes. In both cases, objects with larger morphological differences were identified most frequently. The limitations in the quality and quantity of tactile feedback should be tackled in future studies that utilize currently developed wearable haptic devices and haptic rendering involving all fingers or even both hands. Thus, objects could become identifiable more intuitively, and haptic feedback devices for interacting with virtual objects could be further disseminated.
This paper presents the first results of a user study in which people with visual impairments (PVI) explored a virtual environment (VE) by walking on a virtual reality (VR) treadmill. As recently suggested, we have now acquired first results from our feasibility study investigating this walk-in-place interaction. It represents a new, more intuitive way of, for example, virtually exploring unknown spaces in advance. Our prototype consists of off-the-shelf VR components (i.e., treadmill, headphones, glasses, and controller) providing a simplified white cane simulation, and was tested by six visually impaired subjects. Our results indicate that this interaction is still difficult but promising, and an important step toward making VR more and better usable for PVI. As an impact on the CHI community, we would like to make this research field known to a wider audience by sharing our intermediate results and suggestions for improvements, some of which we are already working on.
With Tangible User Interfaces, the computer user is able to interact in a fundamentally different and more intuitive way than with usual 2D displays. By grasping real physical objects, information can also be conveyed haptically, i.e., the user not only sees information on a 2D display but can also grasp physical representations. To recognize such objects (“tangibles”), it is expedient to use capacitive sensing, as employed in most touch screens. Thus, real objects can be located and identified by the touchscreen automatically. Recent work already addressed such capacitive markers, but focused on their coding scheme and automated fabrication by 3D printing. This paper goes beyond fabrication by 3D printers and, for the first time, applies the concept of capacitive codes to laser cutting and to another immediate prototyping approach using modeling clay. Besides evaluating additional properties, we adapt recent research results regarding the optimized detection of tangible objects on capacitive screens. Our comprehensive study shows that detection performance is affected by the type of capacitive signal processing (i.e., the device) and by the geometry of the marker. 3D printing proved to be the most reliable technique, though laser cutting and immediate prototyping of markers showed promising results. Based on our findings, we discuss the individual strengths of each capacitive marker type.
Small mobile devices such as smartwatches are a rapidly growing market. However, they share the issue of limited input and output space, which could impede the future success of these devices. Hence, suitable alternatives to the concepts and metaphors known from smartphones have to be found. In this paper we present InclineType, a tilt-based keyboard for smartwatches that uses a 3-axis accelerometer. The user directly selects letters by moving his/her wrist and enters them by tapping on the touchscreen. Thanks to the distribution of the letters along the edges of the screen, the keyboard occupies only a small portion of the smartwatch display. To optimize text entry, our concept proposes multiple techniques to stabilize the user interaction. Finally, a user study shows that users become familiar with this technique with almost no prior training, reaching speeds of about 6 wpm on average.
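As a rough sketch of how such tilt-based selection could work, the following code maps smoothed accelerometer readings to one of the letters arranged along the screen edge. The low-pass filter stands in for the stabilization techniques mentioned above; the layout, filter constant, and function names are assumptions for illustration:

```python
import math

# Letters arranged along the screen edge, indexed by tilt direction.
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def select_letter(ax, ay, prev=None, alpha=0.2):
    """Map 2-axis tilt (from a 3-axis accelerometer) to a letter.

    `ax`/`ay` are the acceleration components in the watch-face plane;
    exponential smoothing against `prev` stabilizes the noisy signal.
    Returns the selected letter and the smoothed state for the next call.
    """
    if prev is not None:
        ax = alpha * ax + (1 - alpha) * prev[0]
        ay = alpha * ay + (1 - alpha) * prev[1]
    angle = math.atan2(ay, ax) % (2 * math.pi)         # tilt direction
    index = int(angle / (2 * math.pi) * len(LETTERS)) % len(LETTERS)
    return LETTERS[index], (ax, ay)
```

A tap on the touchscreen would then commit the currently highlighted letter.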
CapCodes: Capacitive 3D Printable Identification and On-screen Tracking for Tangible Interaction
(2019)
Electronic markers can be used to link physical representations and virtual content for tangible interaction, such as the visual markers commonly used for tabletops. Another possibility is to leverage the capacitive touch input of smartphones, tablets, and notebooks. However, existing approaches either do not couple physical and virtual representations or require significant post-processing. This paper presents and evaluates a novel approach using a coding scheme for the automatic identification of tangibles by touch input when they are touched and shifted. The codes can be generated automatically and integrated into a great variety of existing 3D models from the internet. The resulting models can then be printed completely in one cycle by off-the-shelf 3D printers; post-processing is not needed. Besides identification, the object's position and orientation can be tracked by touch devices. Our evaluation examined multiple variables and showed that CapCodes can be integrated into existing 3D models and that the approach could also be applied to untouched use (i.e., recognition without the user touching the object) for larger tangibles.
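The abstract does not spell out the decoding, but one conceivable way to identify such a marker from its touch points is a translation- and rotation-invariant signature over the contact geometry. The scheme below (sorted, quantized pairwise distances) is an illustrative assumption, not the actual CapCodes encoding:

```python
import itertools
import math

def marker_signature(points, tolerance=1.0):
    """Derive a translation- and rotation-invariant signature from the
    touch points produced by a tangible's conductive feet.

    Sorted pairwise distances, quantized to `tolerance` millimeters,
    stay identical when the tangible is shifted or rotated on screen.
    """
    dists = sorted(
        math.hypot(x1 - x0, y1 - y0)
        for (x0, y0), (x1, y1) in itertools.combinations(points, 2)
    )
    return tuple(round(d / tolerance) for d in dists)

# Example: look up a registered tangible by its signature.
registry = {marker_signature([(0, 0), (20, 0), (0, 30)]): "token_A"}
touched = [(5, 5), (25, 5), (5, 35)]  # same layout, shifted on screen
print(registry.get(marker_signature(touched)))  # -> "token_A"
```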
This paper introduces an approach for the (semi-)automatic generation of worldwide available, detailed tactile maps, including buildings and blind-specific features, based on recognized illustrators’ guidelines and standards. These guidelines for tactile maps are investigated in order to define a formal rule set and to automatically filter map data accordingly. Using the rule set, our approach automatically abstracts map data in order to generate a 2.1D tactile model with multiple height levels (layers) that can be printed by common consumer 3D printers. Based on the popular OpenStreetMap data, our automated approach allows generating detailed maps of arbitrary areas that blind persons are individually interested in, without any manual adaptation of the tactile map. Thus, this approach contributes to the goal of increasing the autonomy of blind persons.
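As a minimal sketch of such a rule set, the following code maps OpenStreetMap tags to extrusion heights for the 2.1D layers. The tags, heights, and rule priorities are illustrative assumptions rather than the rules derived in the paper:

```python
# Illustrative rule set mapping OSM tags to tactile layers (heights in mm).
# First matching rule wins; earlier rules take priority.
RULES = [
    ("building", None, 2.4),      # any building footprint -> highest layer
    ("highway", "primary", 1.6),
    ("highway", "footway", 0.8),
    ("natural", "water", 0.4),
]

def layer_height(tags):
    """Return the extrusion height for an OSM element, or None to omit it."""
    for key, value, height in RULES:
        if key in tags and (value is None or tags[key] == value):
            return height
    return None  # feature not covered by the rule set -> filtered out

print(layer_height({"building": "yes"}))     # 2.4
print(layer_height({"highway": "footway"}))  # 0.8
print(layer_height({"amenity": "bench"}))    # None (filtered out)
```

Elements whose tags match no rule are dropped, mirroring the guideline-based filtering step described above.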
Tactile maps may contribute to the orientation of blind people or, alternatively, be used for navigation. In the past, the generation of these maps was a manual task, which considerably limited their availability. Nowadays, similar to visual maps, tactile maps can also be generated semi-automatically by tools and web services. Existing approaches enable users to generate maps by entering a specific address or point of interest. This can in principle be done by a blind user. However, these approaches then show an image of the map on the user’s display, which cannot be read by screen readers. Consequently, the blind user does not know what is on the map before it is printed. Ideally, the map selection process should give the user more information and freedom to select the desired excerpt. This paper introduces a novel web service for blind people to interactively select and automatically generate tactile maps. It adapts the interaction concept for map selection to the requirements of blind users while supporting multiple printing technologies. An integrated audio review of the map’s contents enables early feedback on whether the currently selected map extract matches the desired information need. Changes can be initiated before the map is printed, which, especially for 3D printing, saves considerable time. The user is able to select the map features to be included in the tactile map. Furthermore, the map rendering can be adapted to different zoom levels and supports multiple printing technologies. Finally, an evaluation with blind users was used to refine our approach.