TY - CHAP A1 - von Berg, Markus A1 - Schwörer, Paul A1 - Prinz, Lukas A1 - Steffens, Jochen T1 - Analysis of physical and perceptual properties of room impulse responses: development of an online tool T2 - Forum Acusticum 2023: 10th Convention of the European Acoustics Association, Turin, Italy, 11th-15th September 2023 N2 - Over the past decades, research in room acoustics has established several derivative measures of an impulse response, some of which are incorporated in the ISO 3382 standards. These parameters are intended to represent perceptual qualities but were developed without consistent modeling of room acoustical perception. More recent research has proposed comprehensive inventories of room acoustic perception that are purely based on evaluations by human subjects, such as the Room Acoustical Quality Index (RAQI). In this work, RAQI scores acquired for 70 room impulse responses were predicted from room acoustical parameters. Except for Reverberance, the prediction of RAQI factors performed rather poorly. In most cases, the sound source had a greater impact on RAQI scores. All analyses are published in an online tool, where users can upload omnidirectional and binaural impulse responses and instantly obtain and visualize several physical descriptors, as well as predicted RAQI scores for three different sound sources. So far, acceptable prediction accuracy is achieved for Reverberance, Strength, Irregular Decay, Clarity, and Intimacy. Larger data sets of evaluated impulse responses are required to improve the model performance and enable reliable predictions of room acoustical quality. Therefore, the administration of RAQI evaluations within the website is currently being developed. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-41955 PB - EAA ER - TY - GEN A1 - Versümer, Siegbert A1 - Steffens, Jochen A1 - Rosenthal, Fabian T1 - Extensive crowdsourced dataset of in-situ evaluated binaural soundscapes of private dwellings containing subjective sound-related and situational ratings along with person factors to study time-varying influences on sound perception - research data N2 - The soundscape approach highlights the role of situational factors in sound evaluations; however, only a few studies have applied a multi-domain approach including sound-related, person-related, and time-varying situational variables. Therefore, we conducted a study based on the Experience Sampling Method to measure the relative contribution of a broad range of potentially relevant acoustic and non-auditory variables in predicting indoor soundscape evaluations. Here we present the comprehensive dataset for which 105 participants reported temporally (rather) stable trait variables such as noise sensitivity, trait affect, and quality of life. They rated 6,594 situations regarding the soundscape standard dimensions, perceived loudness, and the saliency of their sound components and evaluated situational variables such as state affect, perceived control, activity, and location. To complement these subject-centered data, we additionally crowdsourced object-centered data by having participants make binaural measurements of each indoor soundscape at their homes using a low-(self-)noise recorder. These recordings were used to compute (psycho-)acoustical indices such as the energetically averaged loudness level, the A-weighted energetically averaged equivalent continuous sound pressure level, and the A-weighted five-percent exceedance level.
These complex hierarchical data can be used to investigate time-varying non-auditory influences on sound perception and to develop soundscape indicators based on the binaural recordings to predict soundscape evaluations. KW - Dataset KW - soundscape KW - person factors KW - indoor soundscape KW - acoustic environment KW - non-auditory factors KW - situational factors KW - binaural recording KW - Experience Sampling Method KW - acoustic descriptors Y1 - 2023 U6 - https://doi.org/10.5281/zenodo.7193937 N1 - This study was sponsored by the German Federal Ministry of Education and Research. “FHprofUnt” funding code: 13FH729IX6. Versions: V.01.1 (10.5281/zenodo.7858848, Apr 25, 2023); V.01.0 (10.5281/zenodo.7193938, Mar 7, 2023) CY - Zenodo ET - V.01.1 ER - TY - JOUR A1 - Anglada-Tort, Manuel A1 - Masters, Nikhil A1 - Steffens, Jochen A1 - North, Adrian A1 - Müllensiefen, Daniel T1 - The Behavioural Economics of Music: Systematic review and future directions JF - Quarterly Journal of Experimental Psychology N2 - Music-related decision-making encompasses a wide range of behaviours, including those associated with listening choices, composition and performance, and decisions involving music education and therapy. Although research programmes in psychology and economics have contributed to an improved understanding of music-related behaviour, historically, these disciplines have been unconnected. Recently, however, researchers have begun to bridge this gap by employing tools from behavioural economics. This article contributes to the literature by providing a discussion of the benefits of using behavioural economics in music-decision research. We achieve this in two ways. First, through a systematic review, we identify the current state of the literature within four key areas of behavioural economics: heuristics and biases, social decision-making, behavioural time preferences, and dual-process theory. Second, taking findings of the literature as a starting point, we demonstrate how behavioural economics can inform future research. Based on this, we propose the Behavioural Economics of Music (BEM), an integrated research programme that aims to break new ground by stimulating interdisciplinary research at the intersection between music, psychology, and economics. Y1 - 2022 SN - 1747-0218 U6 - https://doi.org/10.1177/17470218221113761 SN - 1747-0226 VL - 76 IS - 5 PB - SAGE ER - TY - CHAP A1 - Krause, Amanda A1 - Baker, David J. A1 - Groarke, Jenny A1 - Pereira, Ana I.
A1 - Liew, Kongmeng A1 - Anglada-Tort, Manuel A1 - Steffens, Jochen T1 - A global investigation of music listening practices: The influence of country latitude and seasons on music preferences T2 - ICMPC-ESCOM 2021: 16th International Conference on Music Perception and Cognition/11th Triennial Conference of the European Society for the Cognitive Sciences of Music, 28-31 July 2021, Sheffield Y1 - 2021 UR - https://researchonline.jcu.edu.au/68858/ CY - Sheffield ER - TY - CHAP A1 - Steffens, Jochen A1 - Himmelein, Hendrik T1 - Induced cognitive load influences unpleasantness judgments of modulated noise T2 - Proceedings of the 24th International Congress on Acoustics Y1 - 2022 CY - Gyeongju, South Korea ER - TY - CHAP A1 - von Berg, Markus A1 - Prinz, Lukas A1 - Steffens, Jochen T1 - Comparing individual perception of timbre and reverberance T2 - Proceedings of the 24th International Congress on Acoustics N2 - Room reverberation alters the spatial impression and timbre of a sound by modulating its spectral and temporal characteristics. Thus, we argue that, on a perceptual level, reverberation basically breaks down into interaural differences and spectro-temporal cues and that the separation of a perceived timbre into a sound source and a surrounding room is a purely cognitive process. To investigate the connection between the perception of reverberation cues and timbre analysis, the sensitivity for changes in reverberation was compared to timbre perception abilities. The Timbre Perception Test was used to measure the perception of the temporal envelope, spectral centroid, and spectral flux of artificial sounds. Sensitivity for changes in reverberation time was tested with a discrimination task using speech and noise with speech-like spectral and temporal envelopes as source signals. Musical and acoustical expertise was assessed through the Goldsmiths Musical Sophistication Index and self-reports on experience with and knowledge of acoustics. There was a considerable correlation between timbre and reverberance perception ability, but timbre perception and academic experience predicted only 41% of the variance in reverberance perception. Still, perception abilities related to similar acoustical phenomena seem to be better indicators of listening skills than self-reports on acoustical or musical expertise. KW - Timbre KW - Klangfarbe KW - Reverberation KW - Nachhall Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-38706 UR - https://ica2022korea.org/data/Proceedings_A11.pdf CY - Gyeongju, South Korea ER - TY - JOUR A1 - von Berg, Markus A1 - Steffens, Jochen A1 - Weinzierl, Stefan A1 - Müllensiefen, Daniel T1 - Assessing room acoustic listening expertise JF - Journal of the Acoustical Society of America N2 - Musicians and music professionals are often considered to be expert listeners for listening tests on room acoustics. However, these tests often target acoustic parameters other than those typically relevant in music, such as pitch, rhythm, amplitude, or timbre. To assess the expertise in perceiving and understanding room acoustical phenomena, a listening test battery was constructed to measure the perceptual sensitivity and cognitive abilities in the identification of rooms with different reverberation times and different spectral envelopes. Performance in these tests was related to data from the Goldsmiths Musical Sophistication Index, self-reported previous experience in music recording and acoustics, and academic knowledge of acoustics.
The data from 102 participants show that sensory and cognitive abilities are both correlated significantly with musical training, analytic listening skills, recording experience, and academic knowledge of acoustics, whereas general interest in and engagement with music do not show any significant correlations. The regression models, using only significantly correlated criteria of musicality and professional expertise, explain only small to moderate amounts (11%–28%) of the variance in the “room acoustic listening expertise” across the different tasks of the battery. Thus, the results suggest that the traditional criteria for selecting expert listeners in room acoustics are only weak predictors of their actual performance. Y1 - 2021 U6 - https://doi.org/10.1121/10.0006574 VL - 150 IS - 4 PB - Acoustical Society of America ER - TY - JOUR A1 - Steffens, Jochen A1 - Wilczek, Tobias A1 - Weinzierl, Stefan T1 - Junk Food or Haute Cuisine to the Ear? – Investigating the Relationship Between Room Acoustics, Soundscape, Non-Acoustical Factors, and the Perceived Quality of Restaurants JF - Frontiers in Built Environment N2 - Sound and music are well-studied aspects of the quality of experience in restaurants; the role of the room acoustical conditions, their influence on the visitors’ soundscape evaluation, and their impact on the overall customer satisfaction in restaurants, however, has received less scientific attention. The present field study therefore investigated whether sound pressure level, reverberation time, and soundscape pleasantness can predict factors associated with overall restaurant quality. In total, 142 persons visiting 12 restaurants in Berlin rated relevant acoustical and non-acoustical factors associated with restaurant quality. Simultaneously, the A-weighted sound pressure level (LA,eq,15) was measured, and the reverberation time in the occupied state (T20,occ) was obtained by measurements performed in the unoccupied room and a subsequent calculation of the occupied condition according to DIN 18041. Results from linear mixed-effects models revealed that both the LA,eq,15 and T20,occ had a significant influence on soundscape pleasantness and eventfulness, whereby the effect of T20,occ was mediated by the LA,eq,15. Also, the LA,eq,15 as well as soundscape pleasantness were significant predictors of overall restaurant quality. A comprehensive structural equation model including both acoustical and non-acoustical factors, however, indicates that the effect of soundscape pleasantness on overall restaurant quality is mediated by the restaurant’s atmosphere. Our results support and extend previous findings which suggest that the acoustical design of restaurants involves a trade-off between comfort and liveliness, depending on the desired character of the place.
KW - DOAJ Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-32115 SN - 2297-3362 N1 - The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fbuil.2021.676009/full#supplementary-material; ut:000655121500001 VL - 676009 PB - Frontiers ER - TY - JOUR A1 - Behbehani, Sami A1 - Steffens, Jochen T1 - Musical DIAMONDS: The influence of situational classes and characteristics on music listening behavior JF - Psychology of Music Y1 - 2021 U6 - https://doi.org/10.1177/0305735620968910 SN - 1741-3087 VL - 49 IS - 6 SP - 1532 EP - 1545 PB - SAGE ER - TY - CHAP A1 - Brettschneider, Nico A1 - Herder, Jens A1 - de Mooij, Jeroen A1 - Ryskeldiev, Bektur ED - Herder, Jens T1 - Audio vs. Visual Avatars as Guides in Virtual Environments T2 - 21st International Conference on Human and Computer, HC-2018, March 27–28, 2019, Shizuoka University, Hamamatsu, Japan N2 - Through constant technical progress, multi-user virtual reality is transforming into a social activity that is no longer used only by remote users but also in large-scale location-based experiences. We evaluate the use of realtime-tracked avatars in co-located business-oriented applications in a "guide-user scenario" in comparison to audio-only instructions. The present study examined the effect of an avatar guide on the user-related factors of Spatial Presence, Social Presence, User Experience, and Task Load in order to propose design guidelines for co-located collaborative immersive virtual environments. Therefore, an application was developed and a user study with 40 participants was conducted to compare the two guiding techniques, a realtime-tracked avatar guide and a non-visualised guide, under otherwise constant conditions. Results reveal that the avatar guide enhanced and stimulated communicative processes while facilitating interaction possibilities and creating a higher sense of mental immersion for users. Furthermore, the avatar guide appeared to make the storyline more engaging and exciting while helping users adapt to the medium of virtual reality. Even though no assertion could be made concerning the Task Load factor, the avatar guide achieved a higher subjective value on User Experience. Based on these results, avatars can be considered valuable social elements in the design of future co-located collaborative virtual environments.
KW - Virtual Reality KW - Co-located Collaborations KW - Networked Immersive Virtual Environments KW - Head-mounted Display KW - Avatars KW - Lehre Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-23859 UR - https://vsvr.medien.hs-duesseldorf.de/publications/hc2018-avatar/ PB - Hochschule Düsseldorf CY - Düsseldorf ER - TY - JOUR A1 - Anglada-Tort, Manuel A1 - Keller, Steve A1 - Steffens, Jochen A1 - Müllensiefen, Daniel T1 - The Impact of Source Effects on the Evaluation of Music for Advertising JF - Journal of Advertising Research Y1 - 2020 U6 - https://doi.org/10.2501/JAR-2020-016 VL - 60 IS - 3 PB - ARF ER - TY - JOUR A1 - Irrgang, Melanie A1 - Steffens, Jochen A1 - Egermann, Hauke T1 - From acceleration to rhythmicity: Smartphone-assessed movement predicts properties of music JF - Journal of New Music Research KW - Accelerometer KW - Embodied Music Cognition KW - Mobile Devices KW - Movement and Computing KW - Music Information Retrieval Y1 - 2020 U6 - https://doi.org/10.1080/09298215.2020.1715447 VL - 4 IS - 7 SP - 1 EP - 14 PB - Taylor & Francis ER - TY - JOUR A1 - Herzog, Martin A1 - Lepa, Steffen A1 - Egermann, Hauke A1 - Schoenrock, Andreas A1 - Steffens, Jochen T1 - Towards a common terminology for music branding campaigns JF - Journal of Marketing Management KW - brand identity KW - brand value KW - Music branding KW - Music Branding Expert Terminology (MBET) KW - musical congruity KW - musical meaning Y1 - 2020 U6 - https://doi.org/10.1080/0267257X.2020.1713856 N1 - The preprint (accepted version) is available online: https://eprints.whiterose.ac.uk/155624/ (green open access) VL - 36 IS - 1-2 SP - 176 EP - 209 PB - Taylor & Francis ER - TY - JOUR A1 - Luizard, Paul A1 - Steffens, Jochen A1 - Weinzierl, Stefan T1 - Singing in different rooms: Common or individual adaptation patterns to the acoustic conditions? JF - The Journal of the Acoustical Society of America Y1 - 2020 U6 - https://doi.org/10.1121/10.0000715 N1 - Open access publication VL - 147 IS - 2 PB - ASA ER - TY - JOUR A1 - Versümer, Siegbert A1 - Steffens, Jochen A1 - Blättermann, Patrick A1 - Becker-Schweitzer, Jörg T1 - Modelling evaluations of low-level sounds in everyday situations using linear machine learning for variable selection JF - Frontiers in Psychology KW - DOAJ Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-23779 SN - 1664-1078 N1 - The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.570761/full#supplementary-material VL - 11 PB - Frontiers ER - TY - JOUR A1 - Steffens, Jochen A1 - Müller, Franz A1 - Schulz, Melanie A1 - Gibson, Samuel T1 - The effect of inattention and cognitive load on unpleasantness judgments of environmental sounds JF - Applied Acoustics Y1 - 2020 U6 - https://doi.org/10.1016/j.apacoust.2020.107278 SN - 0003-682X VL - 164 PB - Elsevier ER - TY - CHAP A1 - Amano, Katsumi A1 - Matsushita, Fumio A1 - Yanagawa, Hirofumi A1 - Cohen, Michael A1 - Herder, Jens A1 - Koba, Yoshiharu A1 - Tohyama, Mikio T1 - The Pioneer sound field control system at the University of Aizu Multimedia Center T2 - RO-MAN '96 Tsukuba N2 - The PSFC, or Pioneer sound field control system, is a DSP-driven hemispherical 14-loudspeaker array installed at the University of Aizu Multimedia Center.
Collocated with a large-screen rear-projection stereographic display, the PSFC features realtime control of virtual room characteristics and direction of two separate sound channels, smoothly steering them around a configurable soundscape. The PSFC controls an entire sound field, including sound direction, virtual distance, and simulated environment (reverb level, room size, and liveness) for each source. It can also configure a dry (DSP-less) switching matrix for direct directionalization. The PSFC speaker dome is about 14 m in diameter, allowing about twenty users at once to comfortably stand or sit near its sweet spot. KW - Acoustic reflection KW - Auditory displays KW - Control systems KW - Delay effects KW - Electronic mail KW - Large screen displays KW - Loudspeakers KW - Multimedia systems KW - Reverberation KW - Size control Y1 - 1996 SN - 0-7803-3253-9 U6 - https://doi.org/10.1109/ROMAN.1996.568887 SP - 495 EP - 499 PB - IEEE CY - Piscataway ER - TY - CHAP A1 - Herder, Jens A1 - Cohen, Michael T1 - Sound Spatialization Resource Management in Virtual Reality Environments T2 - ASVA '97: International Symposium on Simulation, Visualization and Auralization for Acoustic Research and Education N2 - In a virtual reality environment, users are immersed in a scene with objects which might produce sound. The responsibility of a VR environment is to present these objects, but a system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. The sound spatialization resource manager controls sound resources and optimizes fidelity (presence) under given conditions. For that, a priority scheme based on human psychophysical hearing is needed. Parameters for spatialization priorities include intensity calculated from volume and distance, orientation in the case of non-uniform radiation patterns, occluding objects, frequency spectrum (low frequencies are harder to localize), expected activity, and others. Objects which are spatially close together (depending on distance and direction) can be mixed. Sources that cannot be spatialized can be treated as a single ambient sound source. Important for resource management is the resource assignment, i.e., minimizing swap operations, which makes it desirable to look ahead and predict upcoming events in a scene. Prediction is achieved by monitoring objects’ speed and past evaluation values. Fidelity is contrasted for different kinds of resource restrictions and optimal resource assignment based upon unlimited dynamic scene look-ahead. To give standard and comparable results, the VRML 2.0 specification is used as an application programmer interface. Applicability is demonstrated with a helical keyboard, a polyphonic MIDI-stream-driven animation including user interaction (the user moves around, playing together with programmed notes). The developed sound spatialization resource manager gives improved spatialization fidelity under runtime constraints. Application programmers and virtual reality scene designers are freed from the burden of assigning and predicting the sound sources.
Y1 - 1997 SP - 407 EP - 414 CY - Tokyo ER - TY - JOUR A1 - Amano, Katsumi A1 - Matsushita, Fumio A1 - Yanagawa, Hirofumi A1 - Cohen, Michael A1 - Herder, Jens A1 - Martens, William A1 - Koba, Yoshiharu A1 - Tohyama, Mikio T1 - A Virtual Reality Sound System Using Room-Related Transfer Functions Delivered Through a Multispeaker Array: the PSFC at the University of Aizu Multimedia Center JF - TVRSJ N2 - The PSFC, or Pioneer Sound Field Controller, is a DSP-driven hemispherical loudspeaker array, installed at the University of Aizu Multimedia Center. The PSFC features realtime manipulation of the primary components of sound spatialization for each of two audio sources located in a virtual environment, including the content (apparent direction and distance) and context (room characteristics: reverberation level, room size, and liveness). In an alternate mode, it can also direct the destination of the two separate input signals across 14 loudspeakers, manipulating the direction of the virtual sound sources with no control over apparent distance other than that afforded by source loudness (including no simulated environmental reflections or reverberation). The PSFC speaker dome is about 10 m in diameter, accommodating about fifty simultaneous users, including about twenty users comfortably standing or sitting near its “sweet spot,” the area in which the illusions of sound spatialization are most vivid. Collocated with a large-screen rear-projection stereographic display, the PSFC is intended for advanced multimedia and virtual reality applications. KW - audio signal processing KW - audio telecommunications KW - auralization KW - calm technology KW - directional mixing console KW - multichannel sound reproduction KW - room-related transfer functions KW - roomware KW - sound localization KW - virtual conferencing environment Y1 - 1998 U6 - https://doi.org/10.18974/tvrsj.3.1_1 VL - 3 IS - 1 SP - 1 EP - 12 PB - J-STAGE ER - TY - CHAP A1 - Cohen, Michael A1 - Herder, Jens ED - Göbel, Martin ED - Landauer, Jürgen ED - Lang, Ulrich ED - Wapler, Matthias T1 - Symbolic representations of exclude and include for audio sources and sinks: Figurative suggestions of mute/solo & cue and deafen/confide & harken T2 - Virtual Environments ’98, Proceedings of the Eurographics Workshop Y1 - 1998 SN - 3-211-83233-5 U6 - https://doi.org/10.1007/978-3-7091-7519-4_23 SP - 235 EP - 242 PB - Springer-Verlag CY - Stuttgart ER - TY - JOUR A1 - Herder, Jens T1 - Tools and Widgets for Spatial Sound Authoring JF - Computer Networks & ISDN Systems Y1 - 1998 VL - 30 IS - 20-21 SP - 1933 EP - 1940 PB - Elsevier ER - TY - CHAP A1 - Martens, William L. A1 - Herder, Jens T1 - Perceptual criteria for eliminating reflectors and occluders from the rendering of environmental sound T2 - 137th Regular Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association N2 - Given limited computational resources available for the rendering of spatial sound imagery, we seek to determine effective means for choosing what components of the rendering will provide the most audible differences in the results. Rather than begin with an analytic approach that attempts to predict audible differences on the basis of objective parameters, we chose to begin with subjective tests of how audibly different the rendering result may be heard to be when that result includes two types of sound obstruction: reflectors and occluders.
Single-channel recordings of 90 short speech sounds were made in an anechoic chamber in the presence and absence of these two types of obstructions, and as the angle of those obstructions varied over a 90-degree range. These recordings were reproduced over a single loudspeaker in that anechoic chamber, and listeners were asked to rate how confident they were that the recording of each of these 90 stimuli included an obstruction. These confidence ratings can be used as an integral component in the evaluation function used to determine which reflectors and occluders are most important for rendering. KW - audio rendering KW - first-order reflection KW - human perception KW - level of detail KW - occluder KW - sound spatialization resource management Y1 - 1999 PB - Acoustical Society of America, European Acoustics Association CY - Berlin ER - TY - CHAP A1 - Martens, William L. A1 - Herder, Jens A1 - Shiba, Yoshiki T1 - A filtering model for efficient rendering of the spatial image of an occluded virtual sound source T2 - 137th Regular Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association N2 - Rendering realistic spatial sound imagery for complex virtual environments must take into account the effects of obstructions such as reflectors and occluders. It is relatively well understood how to calculate the acoustical consequence that would be observed at a given observation point when an acoustically opaque object occludes a sound source. But the interference patterns generated by occluders of various geometries and orientations relative to the virtual source and receiver are computationally intense if accurate results are required. In many applications, however, it is sufficient to create a spatial image that is recognizable by the human listener as the sound of an occluded source. In the interest of improving audio rendering efficiency, a simplified filtering model was developed and its audio output submitted to psychophysical evaluation. Two perceptually salient components of occluder acoustics were identified that could be directly related to the geometry and orientation of a simple occluder. Actual occluder impulse responses measured in an anechoic chamber resembled the responses of a model incorporating only a variable-duration delay line and a low-pass filter with variable cutoff frequency. KW - audio rendering KW - first-order reflection KW - human perception KW - occluder Y1 - 1999 PB - Acoustical Society of America, European Acoustics Association CY - Berlin ER - TY - CHAP A1 - Ishikawa, Kimitaka A1 - Hirose, Minefumi A1 - Herder, Jens T1 - A Sound Spatialization Server for a Speaker Array as an Integrated Part of a Virtual Environment T2 - IEEE YUFORIC Germany 98 N2 - Spatial sound plays an important role in virtual reality environments, allowing orientation in space, giving a feeling of space, focusing the user on events in the scene, and substituting missing feedback cues (e.g., force feedback). The sound spatialization framework of the University of Aizu, which supports a number of spatialization backends, has been extended to include a sound spatialization server for a multichannel loudspeaker array (Pioneer Sound Field Control System). Our goal is that the spatialization server allows easy integration into virtual environments. Modeling of distance cues, which are essential for full immersion, is discussed.
Furthermore, the integration of this prototype into different applications allowed us to reveal the advantages and problems of spatial sound for virtual reality environments. KW - distance cue KW - loudspeaker array KW - psychoacoustic KW - sound spatialization server Y1 - 1998 UR - http://vsvr.medien.hs-duesseldorf.de/publications/ve98-spatial-server/welcome.html PB - IEEE CY - Stuttgart ER - TY - CHAP A1 - Herder, Jens A1 - Yamazaki, Yasuhiro T1 - A Chatspace Deploying Spatial Audio for Enhanced Conferencing T2 - Third International Conference on Human and Computer KW - VSVR Y1 - 2000 SP - 197 EP - 202 PB - University of Aizu CY - Aizu-Wakamatsu ER - TY - JOUR A1 - Herder, Jens T1 - Visualization of a Clustering Algorithm of Sound Sources based on Localization Errors JF - Journal of the 3D-Forum Society N2 - A module for soundscape monitoring and visualizing resource management processes was extended for presenting clusters generated by a novel sound source clustering algorithm. This algorithm groups multiple sound sources together into a single representative source, considering localization errors depending on listener orientation. Localization errors are visualized for each cluster using resolution cones. Visualization is done at runtime and allows understanding and evaluation of the clustering algorithm. KW - audio rendering KW - clustering KW - human perception KW - Resource Management KW - Sound Spatialization KW - Visualization Y1 - 1999 VL - 13 IS - 3 SP - 66 EP - 70 ER - TY - CHAP A1 - Herder, Jens T1 - Optimization of Sound Spatialization Resource Management through Clustering T2 - Second International Conference on Human and Computer N2 - Level-of-detail is a concept well known in computer graphics to reduce the number of rendered polygons. Depending on the distance to the subject (viewer), the objects’ representation is changed. A similar concept is the clustering of sound sources for sound spatialization. Clusters can be used to hierarchically organize mixels and to optimize the use of resources by grouping multiple sources together into a single representative source. Such a clustering process should minimize the error of position allocation of elements, perceived as angle and distance, and also differences in velocity relative to the sink (i.e., Doppler shift). Objects with similar direction of motion and speed (relative to the sink) in the same acoustic resolution cone and with similar distance to a sink can be grouped together. KW - audio rendering KW - clustering KW - human perception KW - Resource Management KW - Sound Spatialization Y1 - 1999 SP - 1 EP - 7 CY - Aizu-Wakamatsu ER - TY - CHAP A1 - Herder, Jens T1 - Visualization of a Clustering Algorithm of Sound Sources based on Localization Errors T2 - Second International Conference on Human and Computer N2 - A module for soundscape monitoring and visualizing resource management processes was extended for presenting clusters generated by a novel sound source clustering algorithm. This algorithm groups multiple sound sources together into a single representative source, considering localization errors depending on listener orientation. Localization errors are visualized for each cluster using resolution cones. Visualization is done at runtime and allows understanding and evaluation of the clustering algorithm.
KW - audio rendering KW - clustering KW - human perception KW - Resource Management KW - Sound Spatialization KW - Visualization Y1 - 1999 SP - 1 EP - 5 CY - Aizu-Wakamatsu ER - TY - JOUR A1 - Cohen, Michael A1 - Herder, Jens A1 - Martens, William L. T1 - Cyberspatial Audio Technology JF - The Journal of the Acoustical Society of Japan (E) N2 - Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization. Y1 - 1999 U6 - https://doi.org/10.1250/ast.20.389 N1 - available in Japanese as well - Acoustical Society of Japan, Vol. 55, No. 10, pp. 730-731 VL - 20 IS - 6 SP - 389 EP - 395 ER - TY - CHAP A1 - Honno, Kuniaki A1 - Suzuki, Kenji A1 - Herder, Jens T1 - Distance and Room Effects Control for the PSFC, an Auditory Display using a Loudspeaker Array T2 - Third International Conference on Human and Computer N2 - The Pioneer Sound Field Controller (PSFC), a loudspeaker array system, features realtime configuration of an entire sound field, including sound source direction, virtual distance, and context of simulated environment (room characteristics: room size and liveness) for each of two sound sources. In the PSFC system, there is no native parameter to specify the distance between the sound source and sound sink (listener) and also no function to control it directly. This paper suggests a method to control virtual distance using basic parameters: volume, room size, and liveness. The implementation of distance cues is an important aspect of 3D sound. Virtual environments supporting room effects like reverberation not only gain realism but also provide additional information to users about the surrounding space. The context switch of different aural attributes is done by using an API of the Sound Spatialization Framework. Therefore, when the sound sink moves through two rooms, like a small bathroom and a large living room, the context of the sink switches and a different sound is obtained. KW - VSVR Y1 - 2000 SP - 71 EP - 76 PB - University of Aizu CY - Aizu-Wakamatsu ER - TY - CHAP A1 - Yamazaki, Yasuhiro A1 - Herder, Jens T1 - Exploring Spatial Audio Conferencing Functionality in Multiuser Virtual Environments T2 - The Third International Conference on Collaborative Virtual Environments N2 - A chatspace was developed that allows conversation with 3D sound using networked streaming in a shared virtual environment. The system provides an interface to advanced audio features, such as a "whisper function" for conveying a confided audio stream. This study explores the use of spatial audio to enhance a user's experience in multiuser virtual environments.
KW - chatspaces KW - groupware KW - narrowcasting functions KW - networked audio KW - spatial audio KW - VSVR Y1 - 2000 SP - 207 EP - 208 PB - ACM CY - San Francisco ER - TY - JOUR A1 - Honno, Kuniaki A1 - Suzuki, Kenji A1 - Herder, Jens T1 - Distance and Room Effects Control for the PSFC, an Auditory Display using a Loudspeaker Array JF - Journal of the 3D-Forum Society N2 - The Pioneer Sound Field Controller (PSFC), a loudspeaker array system, features realtime configuration of an entire sound field, including sound source direction, virtual distance, and context of simulated environment (room characteristics: room size and liveness) for each of two sound sources. In the PSFC system, there is no native parameter to specify the distance between the sound source and sound sink (listener) and also no function to control it directly. This paper suggests a method to control virtual distance using basic parameters: volume, room size, and liveness. The implementation of distance cues is an important aspect of 3D sound. Virtual environments supporting room effects like reverberation not only gain realism but also provide additional information to users about the surrounding space. The context switch of different aural attributes is done by using an API of the Sound Spatialization Framework. Therefore, when the sound sink moves through two rooms, like a small bathroom and a large living room, the context of the sink switches and a different sound is obtained. KW - VSVR Y1 - 2000 VL - 14 IS - 4 SP - 146 EP - 151 ER - TY - JOUR A1 - Herder, Jens A1 - Yamazaki, Yasuhiro T1 - A Chatspace Deploying Spatial Audio for Enhanced Conferencing JF - Journal of the 3D-Forum Society KW - VSVR Y1 - 2000 VL - 15 IS - 1 ER - TY - CHAP A1 - Cohen, Michael A1 - Herder, Jens A1 - Martens, William T1 - Panel: Eartop computing and cyberspatial audio technology T2 - IEEE-VR2001: IEEE Virtual Reality KW - VSVR Y1 - 2001 SN - 0-7695-0948-7 SP - 322 EP - 323 PB - IEEE CY - Yokohama ER - TY - JOUR A1 - Herder, Jens A1 - Cohen, Michael T1 - The Helical Keyboard: Perspectives for Spatial Auditory Displays and Visual Music JF - Journal of New Music Research N2 - Auditory displays with the ability to dynamically spatialize virtual sound sources under real-time conditions enable advanced applications for art and music. A listener can be deeply immersed while interacting and participating in the experience. We review some of those applications while focusing on the Helical Keyboard project and discussing the required technology. Inspired by the cyclical nature of octaves and the helical structure of a scale, a model of a piano-style keyboard was prepared, which was then geometrically warped into a helicoidal configuration, one octave/revolution, pitch mapped to height and chroma. It can be driven by MIDI events, real-time or sequenced, whose stream is both synthesized and spatialized by a spatial sound display. The sound of the respective notes is spatialized with respect to sinks, avatars of the human user, by default in the tube of the helix. Alternative coloring schemes can be applied, including a color map compatible with chromastereoptic eyewear. The graphical display animates polygons, interpolating between the notes of a chord across the tube of the helix. Recognition of simple chords allows directionalization of all the notes of a major triad from the position of its musical root.
The system is designed to allow, for instance, separate audition of harmony and melody, commonly played by the left and right hands, respectively, on a normal keyboard. Perhaps the most exotic feature of the interface is the ability to fork one's presence, replicating subject instead of object by installing multiple sinks at arbitrary places around a virtual scene so that, for example, harmony and melody can be separately spatialized, using two heads to normalize the octave; such a technique effectively doubles the helix from the perspective of a single listener. Rather than a symmetric arrangement of the individual helices, they are perceptually superimposed in-phase, co-extensively, so that corresponding notes in different registers are at the same azimuth. KW - 3D audio KW - virtual reality KW - computer music KW - spatialization KW - spatial media KW - visual music KW - VSVR Y1 - 2002 VL - 31 IS - 3 SP - 269 EP - 281 ER - TY - JOUR A1 - Struchholz, Holger A1 - Herder, Jens A1 - Leckschat, Dieter T1 - Sound radiation simulation of musical instruments based on interpolation and filtering of multi-channel recordings JF - Journal of the 3D-Forum Society N2 - With the virtual environment developed here, the characteristic sound radiation patterns of musical instruments can be experienced in real-time. The user may freely move around a musical instrument, thereby receiving acoustic and visual feedback in real-time. The perception of auditory and visual effects is intensified by the combination of acoustic and visual elements, as well as the option of user interaction. The simulation of characteristic sound radiation patterns is based on interpolating the intensities of a multichannel recording and offers a near-natural mapping of the sound radiation patterns. Additionally, a simple filter has been developed, enabling the qualitative simulation of an instrument’s characteristic sound radiation patterns to be easily implemented within real-time 3D applications. Both methods of simulating sound radiation patterns have been evaluated for a saxophone with respect to their functionality and validity by means of spectral analysis and an auditory experiment. KW - sound radiation pattern KW - audio interpolation KW - audio filtering KW - FHD KW - VSVR Y1 - 2006 VL - 20 IS - 1 SP - 41 EP - 47 ER - TY - CHAP A1 - Garbe, Katharina A1 - Herbst, Iris A1 - Herder, Jens T1 - Spatial Audio for Augmented Reality T2 - 10th International Conference on Human and Computer N2 - Using spatial audio successfully for augmented reality (AR) applications is a challenge, but it is rewarded with an improved user experience. Thus, we have extended the AR/VR framework Morgan with spatial audio to improve users’ orientation in an AR application. In this paper, we investigate the users’ capability to localize and memorize spatial sounds (registered with virtual or real objects). We discuss two scenarios. In the first scenario, the user localizes only sound sources, and in the second scenario, the user memorizes the location of audio-visual objects. Our results reflect spatial audio performance within the application domain and show which technology pitfalls still exist. Finally, we provide design recommendations for spatial audio AR environments.
KW - spatial audio KW - augmented reality KW - sound interaction KW - FHD KW - VSVR Y1 - 2007 SP - 53 EP - 58 CY - Düsseldorf, Aizu-Wakamatsu ER - TY - CHAP A1 - Herder, Jens A1 - Wilke, Michael A1 - Heimbach, Julia A1 - Göbel, Sebastian A1 - Marinos, Dionysios T1 - Simple Actor Tracking for Virtual TV Studios Using a Photonic Mixing Device T2 - 12th International Conference on Human and Computer N2 - Virtual TV studios use actor tracking systems for resolving the occlusion between computer graphics and the studio camera image. The actor tracking delivers the distance between actor and studio camera. We deploy a photonic mixing device, which captures a depth map and a luminance image at low resolution. The render engines get one depth value per actor via the OSC protocol. We describe the actor recognition algorithm based on the luminance image and the depth value calculation. We discuss technical issues like noise and calibration. KW - actor tracking KW - photonic mixing device KW - virtual studio KW - video processing KW - FHD KW - VSVR Y1 - 2009 CY - Hamamatsu / Aizu-Wakamatsu / Düsseldorf ER - TY - JOUR A1 - Herder, Jens T1 - Sound Spatialization Framework: An Audio Toolkit for Virtual Environments JF - Journal of the 3D-Forum Society N2 - The Sound Spatialization Framework is a C++ toolkit and development environment for providing advanced sound spatialization for virtual reality and multimedia applications. The Sound Spatialization Framework provides many powerful display and user-interface features not found in other sound spatialization software packages. It provides facilities that go beyond simple sound source spatialization: visualization and editing of the soundscape, multiple sinks, clustering of sound sources, monitoring and controlling resource management, support for various spatialization backends, and classes for MIDI animation and handling. KW - sound spatialization KW - resource management KW - virtual environments KW - spatial sound authoring KW - user interface design KW - human-machine interfaces Y1 - 1998 VL - 12 IS - 3 SP - 17 EP - 22 ER - TY - CHAP A1 - Ludwig, Philipp A1 - Büchel, Joachim A1 - Herder, Jens A1 - Vonolfen, Wolfgang T1 - InEarGuide - A Navigation and Interaction Feedback System using In Ear Headphones for Virtual TV Studio Productions T2 - 9. Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR N2 - This paper presents an approach to integrating non-visual user feedback in today's virtual TV studio productions. Since recent studies showed that systems providing vibro-tactile feedback are not sufficient for replacing the common visual feedback, we developed an audio-based solution using an in-ear headphone system, enabling a talent to move around, avoid, and point to virtual objects in a blue or green box. The system consists of an optical head tracking system, a wireless in-ear monitor system, and a workstation, which performs all application and audio processing. Using head-related transfer functions, the talent gets directional and distance cues. Since past research showed that generating reflections of the sounds and simulating the acoustics of the virtual room helps the listener to perceive the acoustical feedback, we included this technique as well. The performance of the system was evaluated in a user study with 15 participants.
KW - Navigation KW - virtual TV KW - erweiterte Realität KW - VSVR KW - Virtual (TV) Studio Y1 - 2012 CY - Düsseldorf ER - TY - CHAP A1 - Herder, Jens T1 - Tools and widgets for spatial sound authoring T2 - CompuGraphics '97, Sixth International Conference on Computational Graphics and Visualization Techniques: Graphics in the Internet Age, Vilamoura, Portugal N2 - Broader use of virtual reality environments and sophisticated animations spawn a need for spatial sound. Until now, spatial sound design has been based very much on experience and trial and error. Most effects are hand-crafted, because good design tools for spatial sound do not exist. This paper discusses spatial sound authoring and its applications, including shared virtual reality environments based on VRML. New utilities introduced by this research are an inspector for sound sources, an interactive resource manager, and a visual soundscape manipulator. The tools are part of a sound spatialization framework and allow a designer/author of multimedia content to monitor and debug sound events. Resource constraints like limited sound spatialization channels can also be simulated. KW - spatial sound KW - virtual reality environments KW - multimedia KW - spatialization KW - spatial media KW - visualization KW - user interface design KW - man-machine interfaces Y1 - 1997 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-896 UR - http://vsvr.medien.hs-duesseldorf.de/publications/cg97-tawfssa/welcome.html SN - 972-8342-02-0 N1 - Conference paper available via the URL given above SP - 87 EP - 95 CY - Portugal ER - TY - CHAP A1 - Herder, Jens T1 - Sound Spatialization Framework: An Audio Toolkit for Virtual Environments T2 - First International Conference on Human and Computer, Aizu-Wakamatsu, September 1998 N2 - The Sound Spatialization Framework is a C++ toolkit and development environment for providing advanced sound spatialization for virtual reality and multimedia applications. The Sound Spatialization Framework provides many powerful display and user-interface features not found in other sound spatialization software packages. It provides facilities that go beyond simple sound source spatialization: visualization and editing of the soundscape, multiple sinks, clustering of sound sources, monitoring and controlling resource management, support for various spatialization backends, and classes for MIDI animation and handling. KW - sound spatialization KW - resource management KW - virtual environments KW - spatial sound authoring KW - user interface design KW - human-machine interfaces Y1 - 1998 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:hbz:due62-opus-788 CY - Aizu ER -