TY - CHAP
A1 - Yamazaki, Yasuhiro
A1 - Herder, Jens
T1 - Exploring Spatial Audio Conferencing Functionality in Multiuser Virtual Environments
T2 - The Third International Conference on Collaborative Virtual Environments
N2 - A chatspace was developed that allows conversation with 3D sound using networked streaming in a shared virtual environment. The system provides an interface to advanced audio features, such as a "whisper function" for conveying a confidential audio stream. This study explores the use of spatial audio to enhance a user's experience in multiuser virtual environments.
KW - chatspaces
KW - groupware
KW - narrowcasting functions
KW - networked audio
KW - spatial audio
KW - VSVR
Y1 - 2000
SP - 207
EP - 208
PB - ACM
CY - San Francisco
ER -
TY - GEN
A1 - Jarosch, Monika
A1 - Herder, Jens
A1 - Langmann, Mathias
T1 - Entwicklung einer AR-Applikation zur kosteneffektiven volumetrischen Erfassung von Baugruben
T2 - gis.Science
N2 - The volumetric measurement of excavations on construction sites is a cost-relevant factor and is still often carried out through detailed manual work in daily site operations. Low-cost depth sensors enable the semi-automatic capture of excavation pits, and augmented reality (AR) can provide the feedback needed for this process. A prototype is presented, consisting of a tablet with an integrated camera and a lidar scanner. The volume measurement is tested and evaluated with respect to usability and accuracy when AR is employed. To determine the volume, an algorithm is developed that uses rays with the support of a graphics engine; the algorithm is robust against volumes that are not completely closed. Operation, verification, and visualization take place through the practical use of AR.
KW - Volumenbestimmung
KW - Augmented Reality
KW - Baustelle
KW - LIDAR
KW - VSVR
Y1 - 2022
UR - https://gispoint.de/artikelarchiv/gis/2022/gisscience-ausgabe-22022/7427-entwicklung-einer-ar-applikation-zur-kosteneffektiven-volumetrischen-erfassung-von-baugruben.html
SN - 2698-4571
N1 - The article can be downloaded as a PDF from the publisher. It was made freely accessible after three months; before that, a subscription was required.
VL - 2022
IS - 2
SP - 75
EP - 83
ER -
TY - CHAP
A1 - Herder, Jens
A1 - Cohen, Michael
ED - Gorayska, Barbara
ED - Nehaniv, Chrystopher L.
ED - Marsh, Jonathon P.
T1 - Enhancing Perspicuity of Objects in Virtual Reality Environments
T2 - Proceedings, Second International Conference on Cognitive Technology
N2 - In an information-rich Virtual Reality (VR) environment, the user is immersed in a world containing many objects providing that information. Given the finite computational resources of any computer system, optimization is required to ensure that the most important information is presented to the user as clearly as possible and in a timely fashion. In particular, what is desired are means whereby the perspicuity of an object may be enhanced when appropriate. An object becomes more perspicuous when the information it provides to the user becomes more readily apparent. Additionally, if a particular object provides high-priority information, it would be advantageous to make that object obtrusive as well as highly perspicuous. An object becomes more obtrusive if it draws attention to itself (or equivalently, if it is hard to ignore).
This paper describes a technique whereby objects may dynamically adapt their representation in a user's environment according to a dynamic priority evaluation of the information each object provides. The three components of our approach are: an information manager that evaluates object information priority; an enhancement manager that tabulates rendering features associated with increasing object perspicuity and obtrusion as a function of priority; and a resource manager that assigns available object rendering resources according to features indicated by the enhancement manager for the priority set for each object by the information manager. We consider resources like visual space (pixels), sound spatialization channels (mixels), MIDI/audio channels, and processing power, and discuss our approach applied to different applications. Assigned object rendering features are implemented locally at the object level (e.g., an object facing the user using the billboard node in VRML 2.0) or globally, using helper applications (e.g., active spotlights, semi-automatic cameras).
KW - autonomous actors
KW - obtrusion
KW - perspicuity
KW - spatial media
KW - spatialization
KW - user interface design
KW - man-machine interfaces
KW - Virtual Reality
Y1 - 1997
SN - 0-8186-8084-9
SP - 228
EP - 237
PB - IEEE
CY - Los Alamitos
ER -
TY - CHAP
A1 - Ryskeldiev, Bektur
A1 - Ochiai, Yoichi
A1 - Cohen, Michael
A1 - Herder, Jens
T1 - Distributed Metaverse: Creating Decentralized Blockchain-based Model for Peer-to-peer Sharing of Virtual Spaces for Mixed Reality Applications
T2 - Proceedings of the 9th Augmented Human International Conference
N2 - Mixed reality telepresence is becoming an increasingly popular form of interaction in social and collaborative applications. We are interested in how created virtual spaces can be archived, mapped, shared, and reused among different applications. Therefore, we propose a decentralized blockchain-based peer-to-peer model of distribution, with virtual spaces represented as blocks. We demonstrate the integration of our system in a collaborative mixed reality application and discuss the benefits and limitations of our approach.
KW - Blockchain
KW - Groupware
KW - Mixed Reality
KW - Mobile Computing
KW - Photospherical Imagery
Y1 - 2018
UR - http://vsvr.medien.hs-duesseldorf.de/publications/ah2018-blockchain-streamspace-abstract.html
SN - 978-1-4503-5415-8
U6 - https://doi.org/10.1145/3174910.3174952
SP - 7
EP - 9
PB - ACM
ER -
TY - JOUR
A1 - Honno, Kuniaki
A1 - Suzuki, Kenji
A1 - Herder, Jens
T1 - Distance and Room Effects Control for the PSFC, an Auditory Display using a Loudspeaker Array
JF - Journal of the 3D-Forum Society
N2 - The Pioneer Sound Field Controller (PSFC), a loudspeaker array system, features realtime configuration of an entire sound field, including sound source direction, virtual distance, and context of the simulated environment (room characteristics: room size and liveness) for each of two sound sources. In the PSFC system, there is no native parameter to specify the distance between the sound source and the sound sink (listener) and also no function to control it directly. This paper suggests a method to control virtual distance using basic parameters: volume, room size, and liveness. The implementation of distance cues is an important aspect of 3D sound. Virtual environments supporting room effects like reverberation not only gain realism but also provide additional information to users about the surrounding space.
The context switch between different aural attributes is done using an API of the Sound Spatialization Framework. Therefore, when the sound sink moves through two rooms, like a small bathroom and a large living room, the context of the sink switches and a different sound is obtained.
KW - VSVR
Y1 - 2000
VL - 14
IS - 4
SP - 146
EP - 151
ER -
TY - CHAP
A1 - Honno, Kuniaki
A1 - Suzuki, Kenji
A1 - Herder, Jens
T1 - Distance and Room Effects Control for the PSFC, an Auditory Display using a Loudspeaker Array
T2 - Third International Conference on Human and Computer
N2 - The Pioneer Sound Field Controller (PSFC), a loudspeaker array system, features realtime configuration of an entire sound field, including sound source direction, virtual distance, and context of the simulated environment (room characteristics: room size and liveness) for each of two sound sources. In the PSFC system, there is no native parameter to specify the distance between the sound source and the sound sink (listener) and also no function to control it directly. This paper suggests a method to control virtual distance using basic parameters: volume, room size, and liveness. The implementation of distance cues is an important aspect of 3D sound. Virtual environments supporting room effects like reverberation not only gain realism but also provide additional information to users about the surrounding space. The context switch between different aural attributes is done using an API of the Sound Spatialization Framework. Therefore, when the sound sink moves through two rooms, like a small bathroom and a large living room, the context of the sink switches and a different sound is obtained.
KW - VSVR
Y1 - 2000
SP - 71
EP - 76
PB - University of Aizu
CY - Aizu-Wakamatsu
ER -
TY - CHAP
A1 - Herder, Jens
A1 - Geiger, Christian
A1 - Lehmann, Anke
A1 - Vierjahn, Tom
A1 - Wöldecke, Björn
ED - Gausemeier, Jürgen
ED - Grafe, Michael
T1 - Designstrategien für den Einsatz von vibrotaktilem Feedback in Mixed Reality Anwendungen
T2 - Augmented & Virtual Reality in der Produktentstehung
KW - Mixed Reality
KW - Human Computer Interaktion
KW - Vibrotaktiles Feedback
KW - Haptik
Y1 - 2009
SN - 978-3-939350-71-2
VL - 232
SP - 225
EP - 240
PB - Heinz Nixdorf Institut, Universität Paderborn
CY - Paderborn
ER -
TY - CHAP
A1 - Geiger, Christian
A1 - Herder, Jens
A1 - Göbel, Sebastian
A1 - Heinze, Christin
A1 - Marinos, Dionysios
T1 - Design and Virtual Studio Presentation of a Traditional Archery Simulator
T2 - Proceedings of the Entertainment Interfaces Track 2010 at Interaktive Kulturen, Duisburg, Germany, September 12-15, 2010
N2 - In this paper we describe the design of a virtual reality simulator for traditional intuitive archery. Traditional archers aim without a target figure. Good shooting results require excellent body-eye coordination that allows the user to perform identical movements when drawing the bow. Our simulator provides a virtual archery experience and helps the user learn and practice the motion sequence of traditional archery in a virtual environment. We use an infrared tracking system to capture the user's movements in order to correct them. To provide realistic haptic feedback, a real bow is used as the interaction device. Our system provides a believable user experience and helps the user learn how to shoot in the traditional way. Following a user-centered iterative design approach, we developed a number of prototypes and evaluated them for refinement in subsequent iteration cycles.
For illustration purposes we created a short video clip about this project in our virtual studio that presents the main ideas in an informative yet entertaining way.
KW - vr archery
KW - 3D interaction
KW - interactive sport simulation
KW - user experience
KW - user-centered design
KW - Lehre
KW - VSVR
KW - Virtual Reality
Y1 - 2010
UR - https://dl.gi.de/handle/20.500.12116/7385
UR - http://ceur-ws.org/Vol-634/Entertainment-Interfaces-Proceedings03.pdf
SP - 37
EP - 44
CY - Duisburg
ER -
TY - JOUR
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - L. Martens, William
T1 - Cyberspatial Audio Technology
JF - The Journal of the Acoustical Society of Japan (E)
N2 - Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization.
Y1 - 1999
U6 - https://doi.org/10.1250/ast.20.389
N1 - available in Japanese as well - Acoustical Society of Japan, Vol. 55, No. 10, pp. 730-731
VL - 20
IS - 6
SP - 389
EP - 395
ER -
TY - CHAP
A1 - Becker, Thomas
A1 - Herder, Jens
T1 - Cost effective tangibles using fiducials for infrared multi-touch frames
T2 - 15th International Conference on Human and Computer
N2 - The recent spread of multi-touch sensitive displays enables the use of tangibles on multi-touch screens. There are several widespread and/or sophisticated solutions to fulfill this need, but they have some flaws. One popular system at the time of writing is an overlay frame that can be placed on a normal display of corresponding size. The frame creates a grid with infrared light-emitting diodes. Disruptions of this grid can be detected, and messages with the positions are sent via USB to a connected computer. This system is quite robust with respect to ambient light and is also fast to calibrate. Unfortunately, it was not created with the recognition of tangibles in mind, and printed patterns cannot be resolved. This article summarizes an attempt to create fiducials that are recognized by an infrared multi-touch frame as fingers. Those false fingers are checked by software for known patterns. Once a known pattern (= fiducial) has been recognized, its position and orientation are sent along with the finger positions to the interactive software. The usability is tested with an example application where tangibles and finger touches are used in combination.
KW - low cost multi-touch infrared overlay frame
KW - fiducial tangible recognition
KW - FHD
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-16011
UR - http://vsvr.medien.hs-duesseldorf.de/publications/hc2012-fiducials-abstract.html
N1 - Copyright 2012 University of Aizu Press
CY - Hamamatsu/Aizu-Wakamatsu/Duesseldorf
ER -