TY - CHAP
A1 - Ryskeldiev, Bektur
A1 - Cohen, Michael
A1 - Herder, Jens
T1 - Applying rotational tracking and photospherical imagery to immersive mobile telepresence and live video streaming groupware
T2 - Proceedings of SA '17: SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, Article No. 5
N2 - Mobile live video streaming is becoming an increasingly popular form of interaction in both social media and remote collaboration scenarios. However, in most cases the streamed video does not take the mobile device's spatial data into account (e.g., viewers do not know the streamer's spatial orientation), or uses such data only in specific scenarios (e.g., to navigate around a spherical video stream).
KW - spatial media
KW - rotational tracking
KW - mixed reality
KW - live streaming
KW - social media
KW - telepresence
KW - mobile computing
KW - groupware
KW - photospherical imagery
Y1 - 2017
UR - https://dl.acm.org/citation.cfm?doid=3132787.3132813
SN - 978-1-4503-5410-3
U6 - https://doi.org/10.1145/3132787.3132813
PB - ACM
CY - New York
ER -

TY - CHAP
A1 - Ryskeldiev, Bektur
A1 - Igarashi, Toshiharu
A1 - Zhang, Junjian
A1 - Ochiai, Yoichi
A1 - Cohen, Michael
A1 - Herder, Jens
T1 - Spotility: Crowdsourced Telepresence for Social and Collaborative Experiences in Mobile Mixed Reality
T2 - ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '18)
N2 - Live video streaming is becoming increasingly popular as a form of interaction in social applications. One of its main advantages is the ability to immediately create and connect a community of remote users on the spot. In this paper we discuss how this feature can be used for crowdsourced completion of simple visual search tasks (such as finding specific objects in libraries and stores, or navigating around live events) and for social interactions through mobile mixed reality telepresence interfaces. We present a prototype application that allows users to create a mixed reality space with photospherical imagery as a background and to interact with other connected users through viewpoint, audio, and video sharing, as well as realtime annotations in the mixed reality space. Believing in the novelty of our system, we conducted a short series of interviews with industry professionals on its possible applications. We discuss proposed use cases for user evaluation and outline future extensions of our system.
KW - Groupware
KW - Mixed reality
KW - Mobile computing
KW - Remote collaboration
KW - Spatial media
KW - M
KW - FHD
Y1 - 2018
UR - http://vsvr.medien.hs-duesseldorf.de/publications/cscw2018-spotility-abstract.html
SN - 978-1-4503-6018-0
U6 - https://doi.org/10.1145/3272973.3274100
N1 - Full text accessible via the ACM database
SP - 373
EP - 376
PB - ACM
CY - New York
ER -

TY - CHAP
A1 - Ryskeldiev, Bektur
A1 - Ochiai, Yoichi
A1 - Cohen, Michael
A1 - Herder, Jens
T1 - Distributed Metaverse: Creating Decentralized Blockchain-based Model for Peer-to-peer Sharing of Virtual Spaces for Mixed Reality Applications
T2 - Proceedings of the 9th Augmented Human International Conference
N2 - Mixed reality telepresence is becoming an increasingly popular form of interaction in social and collaborative applications. We are interested in how created virtual spaces can be archived, mapped, shared, and reused among different applications. We therefore propose a decentralized, blockchain-based, peer-to-peer model of distribution, with virtual spaces represented as blocks. We demonstrate the integration of our system in a collaborative mixed reality application and discuss the benefits and limitations of our approach.
KW - Blockchain
KW - Groupware
KW - Mixed Reality
KW - Mobile Computing
KW - Photospherical Imagery
Y1 - 2018
UR - http://vsvr.medien.hs-duesseldorf.de/publications/ah2018-blockchain-streamspace-abstract.html
SN - 978-1-4503-5415-8
U6 - https://doi.org/10.1145/3174910.3174952
SP - 7
EP - 9
PB - ACM
ER -

TY - CHAP
A1 - Herder, Jens
A1 - Cohen, Michael
T1 - Design of a Helical Keyboard
T2 - ICAD '96 - International Conference on Auditory Display
N2 - Inspired by the cyclical nature of octaves and the helical structure of a scale (Shepard, '82 and '83), we prepared a model of a piano-style keyboard (prototyped in Mathematica), which was then geometrically warped into a left-handed helical configuration, one octave/revolution, pitch mapped to height. The natural orientation of upper-frequency keys higher on the helix suggests a parsimonious left-handed chirality, so that ascending notes cross in front of a typical listener left to right. Our model is being imported (via the dxf file format) into (Open Inventor/)VRML, where it can be driven by MIDI events, realtime or sequenced; the stream is both synthesized (by a Roland sound module) and spatialized by a heterogeneous spatial sound backend (including the Crystal River Engineering Acoustetron II and the Pioneer Sound Field Control speaker-array system), so that the sound of the respective notes is directionalized with respect to sinks, avatars of the human user, by default in the tube of the helix. This is a work in progress, which we hope to have fully functional within the next few months.
Y1 - 1996
CY - Palo Alto
ER -

TY - JOUR
A1 - Herder, Jens
A1 - Cohen, Michael
T1 - The Helical Keyboard: Perspectives for Spatial Auditory Displays and Visual Music
JF - Journal of New Music Research
N2 - Auditory displays with the ability to dynamically spatialize virtual sound sources under real-time conditions enable advanced applications for art and music. A listener can be deeply immersed while interacting and participating in the experience. We review some of those applications while focusing on the Helical Keyboard project and discussing the required technology. Inspired by the cyclical nature of octaves and the helical structure of a scale, a model of a piano-style keyboard was prepared, which was then geometrically warped into a helicoidal configuration, one octave/revolution, pitch mapped to height and chroma. It can be driven by MIDI events, real-time or sequenced; the stream is both synthesized and spatialized by a spatial sound display. The sound of the respective notes is spatialized with respect to sinks, avatars of the human user, by default in the tube of the helix. Alternative coloring schemes can be applied, including a color map compatible with chromastereoptic eyewear. The graphical display animates polygons, interpolating between the notes of a chord across the tube of the helix. Recognition of simple chords allows directionalization of all the notes of a major triad from the position of its musical root. The system is designed to allow, for instance, separate audition of harmony and melody, commonly played by the left and right hands, respectively, on a normal keyboard. Perhaps the most exotic feature of the interface is the ability to fork one's presence, replicating subject instead of object, by installing multiple sinks at arbitrary places around a virtual scene so that, for example, harmony and melody can be separately spatialized, using two heads to normalize the octave; such a technique effectively doubles the helix from the perspective of a single listener. Rather than a symmetric arrangement of the individual helices, they are perceptually superimposed in-phase, co-extensively, so that corresponding notes in different registers are at the same azimuth.
KW - 3D audio
KW - virtual reality
KW - computer music
KW - spatialization
KW - spatial media
KW - visual music
KW - VSVR
Y1 - 2002
VL - 31
IS - 3
SP - 269
EP - 281
ER -

TY - CHAP
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Martens, William
T1 - Panel: Eartop computing and cyberspatial audio technology
T2 - IEEE-VR2001: IEEE Virtual Reality
KW - VSVR
Y1 - 2001
SN - 0-7695-0948-7
SP - 322
EP - 323
PB - IEEE
CY - Yokohama
ER -

TY - CHAP
A1 - Herder, Jens
A1 - Cohen, Michael
ED - Gorayska, Barbara
ED - Nehaniv, Chrystopher L.
ED - Marsh, Jonathon P.
T1 - Enhancing Perspicuity of Objects in Virtual Reality Environments
T2 - Proceedings, Second International Conference on Cognitive Technology
N2 - In an information-rich Virtual Reality (VR) environment, the user is immersed in a world containing many objects providing that information. Given the finite computational resources of any computer system, optimization is required to ensure that the most important information is presented to the user as clearly as possible and in a timely fashion. In particular, what is desired are means whereby the perspicuity of an object may be enhanced when appropriate. An object becomes more perspicuous when the information it provides to the user becomes more readily apparent. Additionally, if a particular object provides high-priority information, it would be advantageous to make that object obtrusive as well as highly perspicuous. An object becomes more obtrusive if it draws attention to itself (or, equivalently, if it is hard to ignore). This paper describes a technique whereby objects may dynamically adapt their representation in a user's environment according to a dynamic priority evaluation of the information each object provides. The three components of our approach are: an information manager that evaluates object information priority; an enhancement manager that tabulates rendering features associated with increasing object perspicuity and obtrusion as a function of priority; and a resource manager that assigns available object rendering resources according to the features indicated by the enhancement manager for the priority set for each object by the information manager. We consider resources like visual space (pixels), sound spatialization channels (mixels), MIDI/audio channels, and processing power, and discuss our approach applied to different applications. Assigned object rendering features are implemented locally at the object level (e.g., an object facing the user via the billboard node in VRML 2.0) or globally, using helper applications (e.g., active spotlights, semi-automatic cameras).
KW - autonomous actors
KW - obtrusion
KW - perspicuity
KW - spatial media
KW - spatialization
KW - user interface design
KW - man-machine interfaces
KW - Virtual Reality
Y1 - 1997
SN - 0-8186-8084-9
SP - 228
EP - 237
PB - IEEE
CY - Los Alamitos
ER -

TY - JOUR
A1 - Amano, Katsumi
A1 - Matsushita, Fumio
A1 - Yanagawa, Hirofumi
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Martens, William
A1 - Koba, Yoshiharu
A1 - Tohyama, Mikio
T1 - A Virtual Reality Sound System Using Room-Related Transfer Functions Delivered Through a Multispeaker Array: the PSFC at the University of Aizu Multimedia Center
JF - Transactions of the Virtual Reality Society of Japan (TVRSJ)
N2 - The PSFC, or Pioneer Sound Field Controller, is a DSP-driven hemispherical loudspeaker array, installed at the University of Aizu Multimedia Center. The PSFC features realtime manipulation of the primary components of sound spatialization for each of two audio sources located in a virtual environment, including the content (apparent direction and distance) and context (room characteristics: reverberation level, room size, and liveness). In an alternate mode, it can also direct the destination of the two separate input signals across 14 loudspeakers, manipulating the direction of the virtual sound sources with no control over apparent distance other than that afforded by source loudness (including no simulated environmental reflections or reverberation). The PSFC speaker dome is about 10 m in diameter, accommodating about fifty simultaneous users, including about twenty users comfortably standing or sitting near its "sweet spot," the area in which the illusions of sound spatialization are most vivid. Collocated with a large-screen rear-projection stereographic display, the PSFC is intended for advanced multimedia and virtual reality applications.
KW - audio signal processing
KW - audio telecommunications
KW - auralization
KW - calm technology
KW - directional mixing console
KW - multichannel sound reproduction
KW - room-related transfer functions
KW - roomware
KW - sound localization
KW - virtual conferencing environment
Y1 - 1998
U6 - https://doi.org/10.18974/tvrsj.3.1_1
VL - 3
IS - 1
SP - 1
EP - 12
PB - J-STAGE
ER -

TY - JOUR
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Martens, William L.
T1 - Cyberspatial Audio Technology
JF - The Journal of the Acoustical Society of Japan (E)
N2 - Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization.
Y1 - 1999
U6 - https://doi.org/10.1250/ast.20.389
N1 - Also available in Japanese: Journal of the Acoustical Society of Japan, Vol. 55, No. 10, pp. 730-731
VL - 20
IS - 6
SP - 389
EP - 395
ER -

TY - CHAP
A1 - Cohen, Michael
A1 - Herder, Jens
ED - Göbel, Martin
ED - Landauer, Jürgen
ED - Lang, Ulrich
ED - Wapler, Matthias
T1 - Symbolic representations of exclude and include for audio sources and sinks: Figurative suggestions of mute/solo & cue and deafen/confide & harken
T2 - Virtual Environments ’98, Proceedings of the Eurographics Workshop
Y1 - 1998
SN - 3-211-83233-5
U6 - https://doi.org/10.1007/978-3-7091-7519-4_23
SP - 235
EP - 242
PB - Springer-Verlag
CY - Stuttgart
ER -

TY - CHAP
A1 - Herder, Jens
A1 - Cohen, Michael
T1 - Sound Spatialization Resource Management in Virtual Reality Environments
T2 - ASVA '97 - International Symposium on Simulation, Visualization and Auralization for Acoustic Research and Education
N2 - In a virtual reality environment, users are immersed in a scene containing objects which might produce sound. The responsibility of a VR environment is to present these objects, but a system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. The sound spatialization resource manager controls sound resources and optimizes fidelity (presence) under given conditions. For that, a priority scheme based on human psychophysical hearing is needed. Parameters for spatialization priorities include intensity calculated from volume and distance, orientation in the case of non-uniform radiation patterns, occluding objects, frequency spectrum (low frequencies are harder to localize), expected activity, and others. Objects which are spatially close together (depending on distance and direction) can be mixed. Sources that cannot be spatialized can be treated as a single ambient sound source. Important for resource management is the resource assignment, i.e., minimizing swap operations, which makes it desirable to look ahead and predict upcoming events in a scene. Prediction is achieved by monitoring objects' speed and past evaluation values. Fidelity is contrasted for different kinds of resource restrictions and optimal resource assignment based upon unlimited dynamic scene look-ahead. To give standard and comparable results, the VRML 2.0 specification is used as an application programmer interface. Applicability is demonstrated with a helical keyboard, a polyphonic MIDI-stream-driven animation including user interaction (the user moves around, playing together with programmed notes). The developed sound spatialization resource manager gives improved spatialization fidelity under runtime constraints. Application programmers and virtual reality scene designers are freed from the burden of assigning and predicting the sound sources.
Y1 - 1997
SP - 407
EP - 414
CY - Tokyo
ER -

TY - CHAP
A1 - Amano, Katsumi
A1 - Matsushita, Fumio
A1 - Yanagawa, Hirofumi
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Koba, Yoshiharu
A1 - Tohyama, Mikio
T1 - The Pioneer sound field control system at the University of Aizu Multimedia Center
T2 - RO-MAN '96 Tsukuba
N2 - The PSFC, or Pioneer sound field control system, is a DSP-driven hemispherical 14-loudspeaker array, installed at the University of Aizu Multimedia Center. Collocated with a large-screen rear-projection stereographic display, the PSFC features realtime control of virtual room characteristics and the direction of two separate sound channels, smoothly steering them around a configurable soundscape. The PSFC controls an entire sound field, including sound direction, virtual distance, and simulated environment (reverb level, room size, and liveness) for each source. It can also configure a dry (DSP-less) switching matrix for direct directionalization. The PSFC speaker dome is about 14 m in diameter, allowing about twenty users at once to comfortably stand or sit near its sweet spot.
KW - Acoustic reflection
KW - Auditory displays
KW - Control systems
KW - Delay effects
KW - Electronic mail
KW - Large screen displays
KW - Loudspeakers
KW - Multimedia systems
KW - Reverberation
KW - Size control
Y1 - 1996
SN - 0-7803-3253-9
U6 - https://doi.org/10.1109/ROMAN.1996.568887
SP - 495
EP - 499
PB - IEEE
CY - Piscataway
ER -

TY - JOUR
A1 - Ryskeldiev, Bektur
A1 - Cohen, Michael
A1 - Herder, Jens
T1 - StreamSpace: Pervasive Mixed Reality Telepresence for Remote Collaboration on Mobile Devices
JF - Journal of Information Processing
N2 - We present a system that exploits mobile rotational tracking and photospherical imagery to allow users to share their environment with remotely connected peers “on the go.” We surveyed related interfaces and developed a unique groupware application that shares a mixed reality space with spatially-oriented live video feeds. Users can collaborate through realtime audio, video, and drawings in a virtual space. The developed system was tested in a preliminary user study, which confirmed an increase in spatial and situational awareness among viewers, as well as a reduction in cognitive workload. Believing that our system provides a novel style of collaboration in mixed reality environments, we discuss future applications and extensions of our prototype.
KW - Ubiquitous Computing
KW - Telepresence
KW - Remote Collaboration
KW - Mixed Reality
KW - Live Video Streaming
Y1 - 2018
U6 - https://doi.org/10.2197/ipsjjip.26.177
VL - 26
SP - 177
EP - 185
PB - J-STAGE
ER -