TY - CHAP
A1 - Herder, Jens
A1 - Cohen, Michael
T1 - Sound Spatialization Resource Management in Virtual Reality Environments
T2 - ASVA’97 - Int. Symp. on Simulation, Visualization and Auralization for Acoustic Research and Education
N2 - In a virtual reality environment, users are immersed in a scene with objects which might produce sound. The responsibility of a VR environment is to present these objects, but a system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. The sound spatialization resource manager controls sound resources and optimizes fidelity (presence) under given conditions. For that, a priority scheme based on the psychophysics of human hearing is needed. Parameters for spatialization priorities include intensity calculated from volume and distance, orientation in the case of non-uniform radiation patterns, occluding objects, frequency spectrum (low frequencies are harder to localize), expected activity, and others. Objects which are spatially close together (depending on distance and direction) can be mixed. Sources that cannot be spatialized can be treated as a single ambient sound source. Important for resource management is the resource assignment, i.e., minimizing swap operations, which makes it desirable to look ahead and predict upcoming events in a scene. Prediction is achieved by monitoring objects’ speed and past evaluation values. Fidelity is contrasted for different kinds of resource restrictions and with optimal resource assignment based upon unlimited dynamic scene look-ahead. To give standard and comparable results, the VRML 2.0 specification is used as an application programmer interface. Applicability is demonstrated with a helical keyboard, an animation driven by a polyphonic MIDI stream and including user interaction (the user moves around, playing together with programmed notes). The developed sound spatialization resource manager gives improved spatialization fidelity under runtime constraints. Application programmers and virtual reality scene designers are freed from the burden of assigning and predicting the sound sources.
Y1 - 1997
SP - 407
EP - 414
CY - Tokyo
ER -
TY - CHAP
A1 - Myszkowski, Karol
A1 - Okuneva, Galina
A1 - Herder, Jens
A1 - Kunii, Tosiyasu L.
A1 - Ibusuki, Masumi
ED - Earnshaw, Rae A.
ED - Jones, Huw
ED - Vince, John
T1 - Visual Simulation of the Chewing Process for Dentistry
T2 - Visualization & Modeling
N2 - CAD/CAM techniques are increasingly used in dentistry for the design and fabrication of teeth restorations. Important concerns are the correction of articulation problems that existed before treatment and the prevention of treatment-generated problems. These require interactive evaluation of the occlusal surfaces of teeth during mastication. Traditional techniques based on the use of casts with mechanical articulators require manual adjustment of occlusal surfaces, which becomes impractical when hard restoration materials like porcelain are used; they are also time- and labor-consuming and provide little visual information. We present new visual tools and a related user interface for global articulation simulation, developed for the Intelligent Dental Care System project. The aim of the simulation is visual representation of characteristics relevant to the chewing process. The simulation is based on the construction of distance maps, which are visual representations of the distributions of the distances of points in a tooth to the opposite jaw.
We use rasterizing graphics hardware for fast calculation of the distance maps. Distance maps are used for collision detection and for the derivation of various characteristics showing the distribution of load on the teeth and the chewing capability of the teeth. Such characteristics can be calculated for particular positions of the jaws; cumulative characteristics are used to describe the properties of jaw movement. This information may be used for interactive design of the occlusal surfaces of restorations and for jaw articulation diagnosis. We also demonstrate elements of a user interface that exploit metaphors familiar to dentists from everyday practice.
Y1 - 1997
SN - 0-12-227738-4
SP - 419
EP - 438
PB - Academic Press
CY - London
ER -
TY - CHAP
A1 - Herder, Jens
A1 - Myszkowski, Karol
A1 - Kunii, Tosiyasu L.
A1 - Ibusuki, Masumi
ED - Weghorst, Suzanne J.
ED - Sieburg, Hans B.
ED - Morgan, Karen S.
T1 - A Virtual Reality Interface to an Intelligent Dental Care System
T2 - Medicine Meets Virtual Reality 4
Y1 - 1996
SP - 17
EP - 20
PB - IOS Press
CY - Amsterdam
ER -
TY - JOUR
A1 - Herder, Jens
T1 - Visualization of a Clustering Algorithm of Sound Sources based on Localization Errors
JF - Journal of the 3D-Forum Society
N2 - A module for soundscape monitoring and visualizing resource management processes was extended for presenting clusters generated by a novel sound source clustering algorithm. This algorithm groups multiple sound sources together into a single representative source, considering localization errors that depend on listener orientation. Localization errors are visualized for each cluster using resolution cones. Visualization is done at runtime and allows understanding and evaluation of the clustering algorithm.
KW - audio rendering
KW - clustering
KW - human perception
KW - Resource Management
KW - Sound Spatialization
KW - Visualization
Y1 - 1999
VL - 13
IS - 3
SP - 66
EP - 70
ER -
TY - JOUR
A1 - Herder, Jens
T1 - Optimization of Sound Spatialization Resource Management through Clustering
JF - Journal of the 3D-Forum Society
N2 - Level-of-detail is a concept well known in computer graphics to reduce the number of rendered polygons. Depending on the distance to the subject (viewer), the objects’ representation is changed. A similar concept is the clustering of sound sources for sound spatialization. Clusters can be used to hierarchically organize mixels and to optimize the use of resources, by grouping multiple sources together into a single representative source. Such a clustering process should minimize the error in the perceived position of elements (angle and distance) and also differences in velocity relative to the sink (i.e., Doppler shift). Objects with a similar direction of motion and speed (relative to the sink), in the same acoustic resolution cone, and with a similar distance to the sink can be grouped together.
KW - audio rendering
KW - clustering
KW - human perception
KW - Resource Management
KW - Sound Spatialization
KW - VSVR
Y1 - 1999
VL - 13
IS - 3
SP - 59
EP - 65
ER -
TY - GEN
A1 - Herder, Jens
T1 - Interactive Sound Spatialization - a Primer
T2 - MM News, University of Aizu Multimedia Center
N2 - Sound spatialization is a technology which puts sound into three-dimensional space, so that it has a perceivable direction and distance. Interactive means mutually or reciprocally active. Interaction is when one action (e.g., the user moves the mouse) has a direct or immediate influence on other actions (e.g., processing by a computer: graphics change in size).
Based on this definition, an introduction to sound reproduction using DVD and virtual environments is given and illustrated by applications (e.g., virtual concerts).
KW - DVD
KW - Interactivity
KW - Resource Management
KW - Sound Spatialization
KW - Virtual Concerts
KW - VSVR
Y1 - 2000
UR - http://vsvr.medien.hs-duesseldorf.de/publications/mmnews2000-8.pdf
VL - 8
SP - 8
EP - 12
ER -
TY - CHAP
A1 - Herder, Jens
A1 - Yamazaki, Yasuhiro
T1 - A Chatspace Deploying Spatial Audio for Enhanced Conferencing
T2 - Third International Conference on Human and Computer
KW - VSVR
Y1 - 2000
SP - 197
EP - 202
PB - University of Aizu
CY - Aizu-Wakamatsu
ER -
TY - CHAP
A1 - Honno, Kuniaki
A1 - Suzuki, Kenji
A1 - Herder, Jens
T1 - Distance and Room Effects Control for the PSFC, an Auditory Display using a Loudspeaker Array
T2 - Third International Conference on Human and Computer
N2 - The Pioneer Sound Field Controller (PSFC), a loudspeaker array system, features realtime configuration of an entire sound field, including sound source direction, virtual distance, and context of the simulated environment (room characteristics: room size and liveness) for each of two sound sources. In the PSFC system, there is no native parameter to specify the distance between the sound source and the sound sink (listener) and also no function to control it directly. This paper suggests a method to control virtual distance using basic parameters: volume, room size, and liveness. The implementation of distance cues is an important aspect of 3D sound. Virtual environments supporting room effects like reverberation not only gain realism but also provide additional information to users about the surrounding space. The context switch between different aural attributes is done by using an API of the Sound Spatialization Framework. Therefore, when the sound sink moves through two rooms, like a small bathroom and a large living room, the context of the sink switches and a different sound is obtained.
KW - VSVR
Y1 - 2000
SP - 71
EP - 76
PB - University of Aizu
CY - Aizu-Wakamatsu
ER -
TY - THES
A1 - Herder, Jens
T1 - A Sound Spatialization Resource Management Framework
N2 - In a virtual reality environment, users are immersed in a scene with objects which might produce sound. The responsibility of a VR environment is to present these objects, but a practical system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. A sound spatialization resource manager, introduced in this thesis, controls sound resources and optimizes fidelity (presence) under given conditions, using a priority scheme based on psychoacoustics. Objects which are spatially close together can be coalesced by a novel clustering algorithm, which considers listener localization errors. Application programmers and VR scene designers are freed from the burden of assigning mixels and predicting sound source locations. The framework includes an abstract interface for sound spatialization backends, an API for the VR environments, and multimedia authoring tools.
KW - audio rendering
KW - human perception
KW - Resource Management
KW - Sound Spatialization
Y1 - 1999
UR - http://vsvr.medien.hs-duesseldorf.de/publications/phd99-abstract.html
PB - University of Tsukuba
CY - Tsukuba
ER -
TY - JOUR
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Martens, William L.
T1 - Cyberspatial Audio Technology
JF - The Journal of the Acoustical Society of Japan (E)
N2 - Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization.
Y1 - 1999
U6 - https://doi.org/10.1250/ast.20.389
N1 - available in Japanese as well - Acoustical Society of Japan, Vol. 55, No. 10, pp. 730-731
VL - 20
IS - 6
SP - 389
EP - 395
ER -