The PSFC, or Pioneer Sound Field Control system, is a DSP-driven hemispherical 14-loudspeaker array installed at the University of Aizu Multimedia Center. Collocated with a large-screen rear-projection stereographic display, the PSFC features realtime control of virtual room characteristics and the direction of two separate sound channels, smoothly steering them around a configurable soundscape. The PSFC controls an entire sound field, including sound direction, virtual distance, and simulated environment (reverb level, room size, and liveness) for each source. It can also configure a dry (DSP-less) switching matrix for direct directionalization. The PSFC speaker dome is about 14 m in diameter, allowing about twenty users at once to comfortably stand or sit near its sweet spot.
The PSFC, or Pioneer Sound Field Controller, is a DSP-driven hemispherical loudspeaker array, installed at the University of Aizu Multimedia Center. The PSFC features realtime manipulation of the primary components of sound spatialization for each of two audio sources located in a virtual environment, including the content (apparent direction and distance) and context (room characteristics: reverberation level, room size, and liveness). In an alternate mode, it can also direct the destination of the two separate input signals across 14 loudspeakers, manipulating the direction of the virtual sound sources with no control over apparent distance other than that afforded by source loudness (including no simulated environmental reflections or reverberation). The PSFC speaker dome is about 10 m in diameter, accommodating about fifty simultaneous users, including about twenty users comfortably standing or sitting near its "sweet spot," the area in which the illusions of sound spatialization are most vivid. Collocated with a large-screen rear-projection stereographic display, the PSFC is intended for advanced multimedia and virtual reality applications.
Shadows in computer graphics are an important rendering aspect for spatial objects. For realtime computer applications such as games, it is essential to represent shadows as accurately as possible. Also, various TV stations work with virtual studio systems instead of real studio sets. Especially for those systems, a realistic impression of the rendered and mixed scene is important. One challenge, hence, is the creation of a natural shadow impression. This paper presents the results of an empirical study to compare the performance and quality of different shadow mapping methods. For this test, a prototype studio renderer was developed. A percentage-closer filter (PCF) with a number of specific resolutions is used to minimize the aliasing issue. More advanced algorithms which generate smooth shadows, like the percentage-closer soft shadows (PCSS) method as well as the variance shadow maps (VSM) method, are analysed. Different open-source APIs are used to develop the virtual studio renderer, giving the benefit of permanent enhancement. The Ogre 3D graphics engine is used to implement the rendering system, benefiting from various functions and plugins. The transmission of the tracking data is accomplished with the VRPN server/client and the InterSense API. The different shadow algorithms are compared in a virtual studio environment which also casts real shadows and thus gives a chance for a direct comparison throughout the empirical user study. The performance is measured in frames per second.
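As a point of reference for the PCF technique compared in the study, here is a minimal CPU-side sketch of percentage-closer filtering in C++. The study's renderer implements this on the GPU via Ogre 3D; the shadow-map structure, kernel radius, and bias value below are illustrative assumptions, not taken from the paper.

```cpp
#include <algorithm>
#include <vector>

// Shadow map: depths of the scene as seen from the light source.
struct ShadowMap {
    int w, h;
    std::vector<float> depth;   // light-space depth per texel, row-major
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return depth[static_cast<std::size_t>(y) * w + x];
    }
};

// Percentage-closer filtering: instead of one binary depth comparison,
// average the comparison results over a small neighbourhood, yielding a
// shadow factor in [0,1] (0 = fully shadowed, 1 = fully lit) and thereby
// softening the aliased shadow edge.
float pcfShadow(const ShadowMap& sm, float u, float v, float fragDepth,
                int radius = 1, float bias = 0.002f) {
    const int cx = static_cast<int>(u * sm.w);
    const int cy = static_cast<int>(v * sm.h);
    const int taps = (2 * radius + 1) * (2 * radius + 1);
    int lit = 0;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx)
            if (fragDepth - bias <= sm.at(cx + dx, cy + dy))
                ++lit;
    return static_cast<float>(lit) / taps;
}
```

PCSS and VSM build on the same shadow-map input: PCSS varies the kernel radius with an estimated penumbra size, while VSM stores depth and squared depth so the visibility test becomes a variance bound.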
In order to improve the interactivity between users and computers, recent technologies focus on incorporating gesture recognition into interactive systems. The aim of this article is to evaluate the effectiveness of using a Myo control armband and the Kinect 2 for recognition of gestures in order to interact with virtual objects in a weather report scenario. The Myo armband has an inertial measurement unit and is able to read the electrical activity produced by skeletal muscles, which can be recognized as gestures by a machine-learning algorithm trained for this purpose. A Kinect sensor was used to build a dataset containing motion recordings of 8 different gestures, which likewise served to train a gesture-recognition machine-learning algorithm. Both input methods, the Kinect 2 and the Myo armband, were evaluated with the same interaction patterns in a user study, which allows a direct comparison and reveals benefits and limits of each technique.
In this paper we propose an integrated immersive augmented reality solution for a software tool supporting spacecraft design and verification. The spacecraft design process relies on expertise in many domains, such as thermal and structural engineering. The various subsystems of a spacecraft are highly interdependent and have differing requirements and constraints. In this context, interactive visualizations play an important role in making expert knowledge accessible. Recent immersive display technologies offer new ways of presenting and interacting with computer-generated content. Possibilities and challenges for spacecraft configuration employing these technologies are explored and discussed. A user interface design for an application using the Microsoft HoloLens is proposed. To this end, techniques for selecting a spacecraft component and manipulating its position and orientation in 3D space are developed and evaluated. Thus, advantages and limitations of this approach to spacecraft configuration are revealed and discussed.
The recent emergence of multi-touch-sensitive displays enables the use of tangibles on multi-touch screens. There are several widespread and/or sophisticated solutions to fulfill this need, but they seem to have some flaws. One popular system at the time of writing is an overlay frame that can be placed on a normal display of the corresponding size. The frame creates a grid with infrared light-emitting diodes. The disruption of this grid can be detected, and messages with the positions are sent via USB to a connected computer. This system is quite robust in matters of ambient light insensitivity and is also fast to calibrate. Unfortunately, it was not created with the recognition of tangibles in mind, and printed patterns cannot be resolved. This article summarizes an attempt to create fiducials that are recognized by an infrared multi-touch frame as fingers. Those false fingers are checked by software for known patterns. Once a known pattern (= fiducial) has been recognized, its position and orientation are sent together with the finger positions to the interactive software. The usability is tested with an example application where tangibles and finger touches are used in combination.
Through constant technical progress, multi-user virtual reality is transforming towards a social activity that is no longer only used by remote users, but also in large-scale location-based experiences. We evaluate the usage of realtime-tracked avatars in co-located business-oriented applications in a "guide-user scenario" in comparison to audio-only instructions. The present study examined the effect of an avatar guide on the user-related factors of Spatial Presence, Social Presence, User Experience, and Task Load in order to propose design guidelines for co-located collaborative immersive virtual environments. To this end, an application was developed and a user study with 40 participants was conducted, comparing the two guiding techniques, a realtime-tracked avatar guide and a non-visualised guide, under otherwise constant conditions. Results reveal that the avatar guide enhanced and stimulated communicative processes while facilitating interaction possibilities and creating a higher sense of mental immersion for users. Furthermore, the avatar guide appeared to make the storyline more engaging and exciting while helping users adapt to the medium of virtual reality. Even though no assertion could be made concerning the Task Load factor, the avatar guide achieved a higher subjective value on User Experience. Based on these results, avatars can be considered valuable social elements in the design of future co-located collaborative virtual environments.
This paper presents a mobile approach to integrating tangible user feedback in today's virtual TV studio productions. We describe a tangible multitouch planning system, enabling a single user to prepare and customize scene flow and settings. Users can view and interact with virtual objects by using a tangible user interface on a capacitive multitouch surface. TV scenes created in a 2D setting are simultaneously rendered as a separate view using a production/target renderer in 3D. Thereby the user experiences a closer reproduction of the final production, and set assets can be reused. Subsequently, a user can arrange scenes on a timeline while maintaining different versions/sequences. The system consists of a tablet and a workstation, which does all application processing and rendering. The tablet is just an interface connected via wireless LAN.
Four Metamorphosis States in a Distributed Virtual (TV) Studio: Human, Cyborg, Avatar, and Bot
(2013)
The major challenge in virtual studio technology is the interaction between the actor and virtual objects. Within a distributed live production, two locally separated markerless tracking systems were used simultaneously alongside a virtual studio. The production featured a fully tracked actor, a cyborg (half actor, half graphics), an avatar, and a bot. All participants could interact and throw a virtual disc. This setup is compared and mapped to Milgram's continuum, and technical challenges are described.
The task of the Center for Language Research is to provide content-based English language instruction for students of computer science and engineering. As such, we find ourselves at the confluence of many of the streams currently running through the English Language Teaching profession, including English for Science and Technology (EST), English for Academic Purposes (EAP), English for Specific Purposes (ESP), Computer-assisted language learning (CALL), content-based instruction, and multimedia applications in foreign language pedagogy. This paper describes our initial attempts to construct a number of World Wide Web pages where students will be able to study EST, EAP, and computer science topics on their own in a multimedia environment.
Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization.
Virtual (TV) studios gain much more acceptance through improvements in computer graphics and camera tracking. Still, commercial studios cannot achieve full interaction between actors and the virtual scene, because the actors' data are not completely available in digital form and the feedback for the actors is still not sufficient. Markerless full-body tracking might revolutionize virtual studio technology, as it allows better interaction between the real and virtual world. This article reports on using markerless actor tracking in a virtual studio with a tracking volume of nearly 40 cubic meters, enabling up to three actors within the green box. The tracking is used for resolving the occlusion between virtual objects and actors, so that the renderer can automatically output a mask for virtual objects in the foreground in case the actor is behind them. It is also used for triggering functions scripted within the renderer engine, which are attached to virtual objects, starting any kind of action (e.g., animation). Last but not least, the system is used for controlling avatars within the virtual set. All tracking and rendering is done within a studio frame rate of 50 Hz with about 3 frames of delay. The markerless actor tracking within virtual studios is evaluated by experts using an interview approach. The statistical evaluation is based on a questionnaire.
Live video broadcasting requires a multitude of professional expertise to enable multi-camera productions. Robotic systems allow the automation of common and repeated tracking shots. However, predefined camera shots do not allow quick adjustments when required due to unpredictable events. We introduce a modular automated robotic camera control and video switch system, based on fundamental cinematographic rules. The actors' positions are provided by a markerless tracking system. In addition, sound levels of actors' lavalier microphones are used to analyse the current scene. An expert system determines appropriate camera angles and decides when to switch from one camera to another. A test production was conducted to observe the developed prototype in a live broadcast scenario and served as a video demonstration for an evaluation.
Augmented reality for supporting manual non-destructive ultrasonic testing of metal pipes and plates
(2018)
We describe an application of augmented reality technology for non-destructive testing of products in the metal industry. The prototype is created with hardware and software that are usually employed in the gaming industry, and it delivers positions for creating ultrasonic material scans (C-scans). Using a stereo camera in combination with an HMD enables realtime visualisation of the probe's path, as well as the setting of virtual markers on the specimen. As part of the implementation, the downhill simplex optimization algorithm is used to fit the specimen to a cloud of recorded surface points. The accuracy is statistically tested and evaluated, with the result that the tracking system is accurate to about 1-2 millimeters under well-set-up conditions. This paper is of interest not only for research institutes of the metal industry, but also for any areas of work in which enhancement with augmented reality is possible and precise tracking is necessary.
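For readers unfamiliar with the optimizer named above, the following is a minimal, self-contained C++ sketch of the downhill simplex (Nelder-Mead) method with its standard coefficients. The cost function mentioned in the usage comment is a hypothetical stand-in for the paper's point-cloud-to-specimen fitting, not the actual implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

using Vec = std::vector<double>;

// Downhill simplex (Nelder-Mead): derivative-free minimization of f,
// starting from x0, with standard reflection/expansion/contraction/shrink
// coefficients.
Vec nelderMead(const std::function<double(const Vec&)>& f, Vec x0,
               double step = 0.1, int iters = 200) {
    const double alpha = 1.0, gamma = 2.0, rho = 0.5, sigma = 0.5;
    const std::size_t n = x0.size();
    std::vector<std::pair<double, Vec>> s;        // (cost, vertex) simplex
    s.emplace_back(f(x0), x0);
    for (std::size_t i = 0; i < n; ++i) {         // n+1 vertices total
        Vec v = x0;
        v[i] += step;
        s.emplace_back(f(v), v);
    }
    auto byCost = [](const std::pair<double, Vec>& a,
                     const std::pair<double, Vec>& b) { return a.first < b.first; };
    auto mix = [n](const Vec& a, const Vec& b, double t) {
        Vec r(n);                                 // r = a + t * (b - a)
        for (std::size_t j = 0; j < n; ++j) r[j] = a[j] + t * (b[j] - a[j]);
        return r;
    };
    for (int it = 0; it < iters; ++it) {
        std::sort(s.begin(), s.end(), byCost);    // s[0] best, s[n] worst
        Vec c(n, 0.0);                            // centroid of all but worst
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) c[j] += s[i].second[j] / n;
        Vec xr = mix(c, s[n].second, -alpha);     // reflect worst vertex
        double fr = f(xr);
        if (fr < s[0].first) {                    // best so far: try expansion
            Vec xe = mix(c, s[n].second, -gamma);
            double fe = f(xe);
            if (fe < fr) s[n] = {fe, xe}; else s[n] = {fr, xr};
        } else if (fr < s[n - 1].first) {
            s[n] = {fr, xr};
        } else {
            Vec xc = mix(c, s[n].second, rho);    // contract toward centroid
            double fc = f(xc);
            if (fc < s[n].first) {
                s[n] = {fc, xc};
            } else {                              // shrink toward best vertex
                for (std::size_t i = 1; i <= n; ++i) {
                    s[i].second = mix(s[0].second, s[i].second, sigma);
                    s[i].first = f(s[i].second);
                }
            }
        }
    }
    std::sort(s.begin(), s.end(), byCost);
    return s[0].second;
}

// Hypothetical use for the fitting task described above:
//   double cost(const Vec& pose);  // squared distance of recorded surface
//                                  // points to the specimen under `pose`
//   Vec bestPose = nelderMead(cost, Vec{0, 0, 0});
```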
The interpretation process of complex data sets makes the integration of effective interaction techniques crucial. Recent work in the field of human-computer interaction has shown that there is strong evidence that multimodal user interaction, i.e. the integration of various input modalities and interaction techniques into one comprehensive user interface, can improve human performance when interacting with complex data sets. However, it is still unclear which factors make these user interfaces superior to unimodal user interfaces. The contribution of this work is an analytical comparison of a multimodal and a unimodal user interface for a scientific visualization application. We show that multimodal user interaction with simultaneously integrated speech and gesture input improves user performance regarding efficiency and ease of use.
This paper presents tracking of parts of a human body in a virtual TV studio environment. The tracking is based on a depth camera and an HD studio camera and aims at a realistic interaction between the actor and the computer-generated environment. Stereo calibration methods are used to match corresponding pixels of both cameras (HD color and depth image). Hence, the images are rectified and column-aligned. The disparity is used to correct the depth image pixel by pixel. This image registration results in row- and column-aligned images, where ghost regions appear in the depth image as a result of occlusion. Both images are used to generate foreground masks with chroma and depth keying. The color image is used for skin color segmentation to determine and distinguish the actor's hands and face. In the depth image, the flesh-colored regions are used to determine their spatial position. The extracted positions are augmented with virtual objects. The scene is rendered correctly with virtual camera parameters calculated from the camera calibration parameters. Generated computer graphics with alpha values are combined with the HD color images. This compositing shows interaction with augmented objects for verification. The additional depth information results in changing the size of objects next to the hands when the actor moves around.
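The per-pixel correction rests on the standard rectified-stereo relation between disparity and depth, Z = f·B/d. A minimal C++ sketch under that assumption follows; the function name, the row-major layout, and the convention of marking occluded pixels with zero disparity are illustrative choices, not details from the paper.

```cpp
#include <cstddef>
#include <vector>

// After rectification, a pixel's depth follows from its disparity d via
// Z = f * B / d, with focal length f in pixels and stereo baseline B in
// metres. Pixels without a valid disparity (occluded "ghost" regions) are
// marked here with d = 0 and left untouched.
void correctDepthImage(std::vector<float>& depthM,           // depth in metres
                       const std::vector<float>& disparityPx,
                       float focalPx, float baselineM) {
    for (std::size_t i = 0; i < depthM.size(); ++i)
        if (disparityPx[i] > 0.0f)
            depthM[i] = focalPx * baselineM / disparityPx[i];
}
```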
Using spatial audio successfully for augmented reality (AR) applications is a challenge, but it is rewarded with an improved user experience. Thus, we have extended the AR/VR framework Morgan with spatial audio to improve users' orientation in an AR application. In this paper, we investigate the users' capability to localize and memorize spatial sounds (registered with virtual or real objects). We discuss two scenarios. In the first scenario, the user localizes only sound sources, and in the second scenario the user memorizes the location of audio-visual objects. Our results reflect spatial audio performance within the application domain and show which technology pitfalls still exist. Finally, we provide design recommendations for spatial audio AR environments.
In this paper we describe the design of a virtual reality simulator for traditional intuitive archery. Traditional archers aim without a target figure. Good shooting results require excellent body-eye coordination that allows the user to perform identical movements when drawing the bow. Our simulator provides a virtual archery experience and supports the user in learning and practicing the motion sequence of traditional archery in a virtual environment. We use an infrared tracking system to capture the user's movements in order to correct them. To provide realistic haptic feedback, a real bow is used as the interaction device. Our system provides a believable user experience and supports the user in learning how to shoot in the traditional way. Following a user-centered iterative design approach, we developed a number of prototypes and evaluated them for refinement in subsequent iteration cycles. For illustration purposes we created a short video clip about this project in our virtual studio that presents the main ideas in an informative yet entertaining way.
Virtual set environments for broadcasting are becoming more sophisticated, and their visual quality is improving. Realtime interaction and production-specific visualization, implemented through a plugin mechanism, enhance existing systems such as the virtual studio software 3DK. This work presents an algorithm which dynamically manages high-resolution textures by prefetching them into memory depending on when they are required, and maps them onto a procedural mesh in realtime. The main target application of this work is the virtual representation of a flight over a landscape as part of weather reports in virtual studios, with interaction by the moderator.
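One plausible way to realize such prefetching is a fixed-budget tile cache with least-recently-used eviction. The C++ sketch below is an illustration under that assumption; the `Texture` type, the `loadTile` stub, and the synchronous loading are placeholders for the actual streaming machinery of the 3DK plugin, which the abstract does not detail.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

struct Texture { /* GPU handle, dimensions, ... */ };

// Stub standing in for the real streaming loader (disk or network fetch).
Texture loadTile(const std::string& tileId) { (void)tileId; return Texture{}; }

class TileCache {
    std::size_t budget_;                          // max resident tiles
    std::list<std::string> lru_;                  // front = most recently used
    std::unordered_map<std::string,
        std::pair<Texture, std::list<std::string>::iterator>> map_;
public:
    explicit TileCache(std::size_t budget) : budget_(budget) {}

    // Fetch a tile, loading it on a miss and evicting the least recently
    // used tile once the memory budget is exceeded.
    Texture& get(const std::string& tileId) {
        auto it = map_.find(tileId);
        if (it != map_.end()) {                   // hit: move to LRU front
            lru_.splice(lru_.begin(), lru_, it->second.second);
            return it->second.first;
        }
        if (map_.size() >= budget_) {             // miss at capacity: evict
            map_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(tileId);
        auto res = map_.emplace(tileId,
            std::make_pair(loadTile(tileId), lru_.begin()));
        return res.first->second.first;
    }
};

// A landscape flyover would call get() each frame for the tiles around the
// camera's predicted position, so tiles are resident before becoming visible.
```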
Education at the University of Aizu is focussed upon computer science. Besides being the subject matter of many courses, however, the computer also plays a vital role in the educational process itself, both in the distribution of instructional media, and in providing students with valuable practical experience. All students have unlimited access (24-hours-a-day) to individual networked workstations, most of which are multimedia-capable (even video capture is possible in two exercise rooms). Without software and content tailored for computer-aided instruction, the hardware becomes an expensive decoration. In any case, there is a need to better educate the instructors and students in the use of the equipment. In the interest of facilitating effective, collaborative use of network-based computers in teaching, this article explores the impact that a network environment can have on such activities. First, as a general overview, and to examine the motivation for the use of a network environment in teaching, this article reviews a range of different styles of collaboration. Then the article shows what kind of tools are available for use, within the context of what has come to be called Computer-Supported Cooperative Work (CSCW).
Broader use of virtual reality environments and sophisticated animations spawn a need for spatial sound. Until now, spatial sound design has been based very much on experience and trial and error. Most effects are hand-crafted, because good design tools for spatial sound do not exist. This paper discusses spatial sound authoring and its applications, including shared virtual reality environments based on VRML. New utilities introduced by this research are an inspector for sound sources, an interactive resource manager, and a visual soundscape manipulator. The tools are part of a sound spatialization framework and allow a designer/author of multimedia content to monitor and debug sound events. Resource constraints like limited sound spatialization channels can also be simulated.
The Sound Spatialization Framework is a C++ toolkit and development environment for providing advanced sound spatialization for virtual reality and multimedia applications. The Sound Spatialization Framework provides many powerful display and user-interface features not found in other sound spatialization software packages. It provides facilities that go beyond simple sound source spatialization: visualization and editing of the soundscape, multiple sinks, clustering of sound sources, monitoring and controlling resource management, support for various spatialization backends, and classes for MIDI animation and handling.
Keywords:
sound spatialization, resource management, virtual environments, spatial sound authoring, user interface design, human-machine interfaces
Digital broadcasting enables interactive TV, which presents new challenges for interactive content creation. Besides the technology for streaming and viewing, tools and systems are under development that extend traditional TV studios with virtual set environments. This presentation reviews current technology and describes the requirements for such systems. Interoperability over the production, streaming, and viewer levels requires open interfaces. As the technology allows more interaction, it becomes inherently difficult to control the quality of the viewer's experience.
Virtual sets have evolved from computer-generated, prerendered 2D backgrounds to realtime, responsive 3D computer graphics and are nowadays standard repertoire of broadcasting divisions. The graphics, which are combined with the real video feed, are becoming more sophisticated, more realistic-looking, and more responsive. We look at recent developments and suggest further ones, such as the integration of spatial audio into studio production and the generation of interactive media streams. Educational institutes recognize the demands of the rising media industry and have established new courses on media technology, for example at the Duesseldorf University of Applied Sciences.
Digital broadcasting enables interactive TV studios with virtual set environments. This presentation reviews current technology and describes the requirements for such systems. Interoperability over the production, streaming, and viewer levels requires open interfaces. As the technology allows more interaction, it becomes inherently difficult to control the quality of the viewer's experience.
In a virtual reality environment, users are immersed in a scene with objects which might produce sound. The responsibility of a VR environment is to present these objects, but a practical system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. A sound spatialization resource manager, introduced in this thesis, controls sound resources and optimizes fidelity (presence) under given conditions, using a priority scheme based on psychoacoustics. Objects which are spatially close together can be coalesced by a novel clustering algorithm, which considers listener localization errors. Application programmers and VR scene designers are freed from the burden of assigning mixels and predicting sound source locations. The framework includes an abstract interface for sound spatialization backends, an API for the VR environments, and multimedia authoring tools.
Level-of-detail is a concept well known in computer graphics to reduce the number of rendered polygons: depending on the distance to the subject (viewer), the objects' representation is changed. A similar concept is the clustering of sound sources for sound spatialization. Clusters can be used to hierarchically organize mixels and to optimize the use of resources by grouping multiple sources together into a single representative source. Such a clustering process should minimize the error of position allocation of elements, perceived as angle and distance, and also differences in velocity relative to the sink (i.e., Doppler shift). Objects with similar direction of motion and speed (relative to the sink) in the same acoustic resolution cone and with similar distance to a sink can be grouped together.
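A minimal C++ sketch of that grouping criterion, assuming the sink sits at the origin; the threshold values (cone angle, distance ratio, radial-velocity difference) are illustrative assumptions, not parameters from the original algorithm.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double len(Vec3 a) { return std::sqrt(dot(a, a)); }

// Two sources may share a cluster (one representative source, saving a
// mixel) if they lie in the same acoustic resolution cone as seen from the
// sink, at similar distance, and with similar radial velocity (so that a
// shared Doppler shift stays plausible).
bool canCluster(Vec3 posA, Vec3 velA, Vec3 posB, Vec3 velB,
                double coneAngleRad = 0.26,     // ~15 deg resolution cone
                double maxDistRatio = 1.5,
                double maxVelDiff   = 2.0) {    // m/s
    double dA = len(posA), dB = len(posB);
    if (dA == 0.0 || dB == 0.0) return false;
    // Angular separation as seen from the sink at the origin.
    double cosAngle = dot(posA, posB) / (dA * dB);
    if (cosAngle < std::cos(coneAngleRad)) return false;
    // Similar distance (ratio test rather than absolute difference).
    if (std::max(dA, dB) / std::min(dA, dB) > maxDistRatio) return false;
    // Similar velocity along the line of sight, i.e., similar Doppler shift.
    double radialA = dot(velA, posA) / dA;
    double radialB = dot(velB, posB) / dB;
    return std::abs(radialA - radialB) <= maxVelDiff;
}
```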
Sound spatialization is a technology which puts sound into three-dimensional space, so that it has a perceivable direction and distance. Interactive means mutually or reciprocally active. Interaction is when one action (e.g., the user moves the mouse) has a direct or immediate influence on other actions (e.g., processing by a computer: graphics change in size). Based on this definition, an introduction to sound reproduction using DVD and virtual environments is given and illustrated by applications (e.g., virtual concerts).
A module for soundscape monitoring and visualizing resource management processes was extended for presenting clusters, generated by a novel sound source clustering algorithm. This algorithm groups multiple sound sources together into a single representative source, considering localization errors depending on listener orientation. Localization errors are visualized for each cluster using resolution cones. Visualization is done at runtime and allows understanding and evaluating the clustering algorithm.
The Programming System Generator (PSG) of the Practical Computer Science group in Darmstadt generates a language-specific programming environment from a language definition. This environment includes, among other things, an editor that can detect syntactic and semantic errors in program fragments, which need not be complete. Error corrections are offered to the user via a menu. In addition to free text entry, it is possible to refine the text using menus only. Part of this editor is the identifier analysis. It serves as an aid to the user: for every position in a program fragment, the valid identifiers can be displayed. The context analysis builds its computation on the data produced by the identifier analysis in order to detect semantic errors. The identifier analysis used in PSG up to this work supports only simple language concepts (e.g., Fortran and Pascal). The identifier scoping concepts of more advanced languages (e.g., Modula-2, CHILL, Ada, or Pascal-XT) cannot be fully modelled. We present a new concept for the definition and computation of identifier analysis that supports all languages with static type binding known to us. For this purpose, we defined the language BIS (Bezeichneridentifikationssprache, an identifier identification language). The method is related to the intermediate code for ordered attribute grammars. For each node of the abstract syntax tree, code for an abstract machine that performs the identifier analysis is written with the help of BIS. In contrast to conventional methods (chained symbol tables), the set of valid identifiers is computed for every point within a program before it is queried by the user or by the context analysis. The cost of a query is thus minimal. This abstract machine is divided into two independent machines: the S-machine, which performs the specific operations of the identifier analysis, and the G-machine, which controls the data flow and the evaluation. This division makes it possible to replace the S-machine with a different one, opening up new areas of application, e.g., that of a preprocessor. The G-machine works incrementally; only those code templates whose inherited attributes have changed are re-evaluated. For this, the data flowing into and out of a code template must be stored. For large program fragments, this yields an immense saving in computation time at the cost of memory. The method is demonstrated on a small example language similar to Pascal, which has constructs for importing and exporting data and data types between program fragments. In the prototype, the incremental mode can be switched off, allowing a good comparison of the two approaches.
Textures can be understood as the surface structure of real objects and are variations in colour, geometry, transparency, etc. In contrast to algorithms for the artificial generation of textures, there are only a few approaches to texture synthesis languages or to tools for texture description, and the known tools each cover only parts of the generation process. With HiLDTe (Hierarchical Language for the Description of Textures), a language has now been developed with which as many known texture types as possible can be described. HiLDTe is based on a texture model developed at the GRIS group, in which textures represent generic, possibly complex composite objects. The task of this work was to develop concepts for the HiLDTe language, to construct a corresponding grammar, and to implement a compiler with the UNIX tools LEX and YACC that produces executable intermediate code for textures specified in HiLDTe.
High dynamic range environment maps based on still images or video streams are used for computer animation or interactive systems. Captured environment maps can ease the task of realistic light setup for scenes and at the same time improve visual quality. In this article, we discuss the light-setting problem for virtual (TV) studios, whose layout and systems become more complex in order to handle this new feature of studio light capturing. The analysis of system requirements identifies the technical challenges.
Multi-user virtual reality is transforming towards a social activity that is no longer only used by remote users, but also in large-scale location-based experiences. The usage of realtime-tracked avatars in co-located business-oriented applications with a "guide-user scenario" is examined for the user-related factors of Spatial Presence, Social Presence, User Experience, and Task Load. A user study was conducted in order to compare the two techniques of a realtime-tracked avatar and a non-visualised guide. Results reveal that the avatar guide enhanced and stimulated communicative processes while facilitating interaction possibilities and creating a higher sense of mental immersion and engagement for users.
This article presents a new approach to integrating tangible user feedback in today's virtual TV studio productions. We describe a tangible multitouch planning system, enabling multiple users to prepare and customize scene flow and settings. Users can collaboratively view and interact with virtual objects by using a tangible user interface on a shared multitouch surface. TV scenes created in a 2D setting are simultaneously rendered on an external monitor, using a production/target renderer in 3D. Thereby the user experiences a closer reproduction of the final production. Subsequently, users are able to join the scenes together into one complex plot. Within the development process, a video prototype of the system shows the user interaction and enables early reviews and evaluations. The requirement analysis is based on expert interviews.
Markerless talent tracking is widely used for interactions and animations within virtual environments. In a virtual (TV) studio, talents can be overburdened by interaction tasks, because camera and text already require extensive attention. We look into animations and interactions within a studio which do not require any special attention or learning. We show the generation of an artificial shadow from a talent, which eases the keying process, where separating real shadows from the background is a difficult task. We also demonstrate animations of footsteps and dust. Furthermore, capturing a talent's height can be used to adjust the parameters of elements in the virtual environment, like the position and scaling of a virtual display. In addition to the talents, a rigid body was tracked as a placeholder for graphics, easing the interaction tasks for a talent. Two test productions show the possibilities which subtle animations offer. In the second production, the rendering was improved (shadows, filtering, normal maps, ...) and, instead of using the rigid body to move an object (a flag), the animation was controlled by the hand's position alone.
Design of a Helical Keyboard
(1996)
Inspired by the cyclical nature of octaves and the helical structure of a scale (Shepard, '82 and '83), we prepared a model of a piano-style keyboard (prototyped in Mathematica), which was then geometrically warped into a left-handed helical configuration, one octave/revolution, pitch mapped to height. The natural orientation of upper-frequency keys higher on the helix suggests a parsimonious left-handed chirality, so that ascending notes cross in front of a typical listener left to right. Our model is being imported (via the DXF file format) into (Open Inventor/)VRML, where it can be driven by MIDI events, realtime or sequenced, a stream that is both synthesized (by a Roland sound module) and spatialized by a heterogeneous spatial sound backend (including the Crystal River Engineering Acoustetron II and the Pioneer Sound Field Control speaker-array system), so that the sound of the respective notes is directionalized with respect to sinks, avatars of the human user, by default in the tube of the helix. This is a work in progress which we hope to be fully functional within the next few months.
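The geometric mapping is compact enough to state directly. A minimal C++ sketch under the stated convention (one octave per revolution, pitch mapped to height, left-handed winding); the radius and per-octave height are free parameters not specified in the abstract.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Map a MIDI note number to a point on the helical keyboard. Notes twelve
// semitones apart differ by exactly one revolution, so all octaves of a
// pitch class share the same azimuth; height grows linearly with pitch.
Vec3 noteToHelix(int midiNote, double radius = 1.0, double octaveHeight = 0.3) {
    const double kPi = 3.14159265358979323846;
    const double octaves = midiNote / 12.0;       // octaves above MIDI note 0
    const double angle = -2.0 * kPi * octaves;    // minus sign: left-handed helix
    return { radius * std::cos(angle),
             radius * std::sin(angle),
             octaveHeight * octaves };            // pitch mapped to height
}
```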
Auditory displays with the ability to dynamically spatialize virtual sound sources under real-time conditions enable advanced applications for art and music. A listener can be deeply immersed while interacting and participating in the experience. We review some of those applications while focusing on the Helical Keyboard project and discussing the required technology. Inspired by the cyclical nature of octaves and the helical structure of a scale, a model of a piano-style keyboard was prepared, which was then geometrically warped into a helicoidal configuration, one octave/revolution, pitch mapped to height and chroma. It can be driven by MIDI events, real-time or sequenced, a stream that is both synthesized and spatialized by a spatial sound display. The sound of the respective notes is spatialized with respect to sinks, avatars of the human user, by default in the tube of the helix. Alternative coloring schemes can be applied, including a color map compatible with chromastereoptic eyewear. The graphical display animates polygons, interpolating between the notes of a chord across the tube of the helix. Recognition of simple chords allows directionalization of all the notes of a major triad from the position of its musical root. The system is designed to allow, for instance, separate audition of harmony and melody, commonly played by the left and right hands, respectively, on a normal keyboard. Perhaps the most exotic feature of the interface is the ability to fork one's presence, replicating subject instead of object by installing multiple sinks at arbitrary places around a virtual scene so that, for example, harmony and melody can be separately spatialized, using two heads to normalize the octave; such a technique effectively doubles the helix from the perspective of a single listener. Rather than a symmetric arrangement of the individual helices, they are perceptually superimposed in-phase, co-extensively, so that corresponding notes in different registers are at the same azimuth.
In an information-rich Virtual Reality (VR) environment, the user is immersed in a world containing many objects providing that information. Given the finite computational resources of any computer system, optimization is required to ensure that the most important information is presented to the user as clearly as possible and in a timely fashion. In particular, what is desired are means whereby the perspicuity of an object may be enhanced when appropriate. An object becomes more perspicuous when the information it provides to the user becomes more readily apparent. Additionally, if a particular object provides high-priority information, it would be advantageous to make that object obtrusive as well as highly perspicuous. An object becomes more obtrusive if it draws attention to itself (or equivalently, if it is hard to ignore). This paper describes a technique whereby objects may dynamically adapt their representation in a user's environment according to a dynamic priority evaluation of the information each object provides. The three components of our approach are:
- an information manager that evaluates object information priority,
- an enhancement manager that tabulates rendering features associated with increasing object perspicuity and obtrusion as a function of priority, and
- a resource manager that assigns available object rendering resources according to features indicated by the enhancement manager for the priority set for each object by the information manager.
We consider resources like visual space (pixels), sound spatialization channels (mixels), MIDI/audio channels, and processing power, and discuss our approach applied to different applications. Assigned object rendering features are implemented locally at the object level (e.g., object facing the user using the billboard node in VRML 2.0) or globally, using helper applications (e.g., active spotlights, semi-automatic cameras).
In a virtual reality environment, users are immersed in a scene with objects which might produce sound. The responsibility of a VR environment is to present these objects, but a system has only limited resources, including spatialization channels (mixels), MIDI/audio channels, and processing power. The sound spatialization resource manager controls sound resources and optimizes fidelity (presence) under given conditions. For that, a priority scheme based on human psychophysical hearing is needed. Parameters for spatialization priorities include intensity calculated from volume and distance, orientation in the case of non-uniform radiation patterns, occluding objects, frequency spectrum (low frequencies are harder to localize), expected activity, and others. Objects which are spatially close together (depending on distance and direction) can be mixed. Sources that cannot be spatialized can be treated as a single ambient sound source. Important for resource management is the resource assignment, i.e., minimizing swap operations, which makes it desirable to look ahead and predict upcoming events in a scene. Prediction is achieved by monitoring objects' speed and past evaluation values. Fidelity is contrasted for different kinds of resource restrictions and optimal resource assignment based upon unlimited dynamic scene look-ahead. To give standard and comparable results, the VRML 2.0 specification is used as an application programmer interface. Applicability is demonstrated with a helical keyboard, a polyphonic MIDI-stream-driven animation including user interaction (the user moves around, playing together with programmed notes). The developed sound spatialization resource manager gives improved spatialization fidelity under runtime constraints. Application programmers and virtual reality scene designers are freed from the burden of assigning and predicting the sound sources.
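A minimal C++ sketch of such a priority score, combining several of the parameters named above (intensity from volume and distance, occlusion, low-frequency localizability). The weights and thresholds are illustrative assumptions, not values from the thesis.

```cpp
#include <algorithm>
#include <vector>

// One candidate sound source, relative to the sink (listener).
struct Source {
    double volume;      // linear gain, 0..1
    double distance;    // metres from the sink
    double centroidHz;  // spectral centroid; low frequencies localize poorly
    bool   occluded;    // blocked by intervening geometry
};

// Psychoacoustically motivated priority: louder, closer, unoccluded,
// higher-frequency sources benefit most from a dedicated mixel.
double priority(const Source& s) {
    // Inverse-square distance attenuation gives perceived intensity.
    double intensity = s.volume / std::max(1.0, s.distance * s.distance);
    // Low-frequency sources are harder to localize, so spatializing them
    // precisely adds less fidelity (weights are illustrative only).
    double freqWeight = std::clamp(s.centroidHz / 1000.0, 0.2, 1.0);
    double occlusion  = s.occluded ? 0.5 : 1.0;
    return intensity * freqWeight * occlusion;
}

// The resource manager would sort sources by priority, give the top N a
// dedicated spatialization channel (mixel), and mix the remainder into a
// single ambient source, as described in the abstract.
```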
The increased use of multimedia technologies gives market research the means to conduct flexible and cost-effective studies. In very early phases of the innovation process, as part of market research, market launch concepts for new products can be tested using virtual environments. By means of virtual reality applications, new products, including their marketing concept, can also be tested haptically without the product having to exist physically. In virtual environments, information is conveyed to the user mainly visually and, in addition, auditorily. Common user interfaces are interaction devices such as stylus and wand. Through haptic perception, information is perceived in a more human-centred, effective, and intuitive way. With haptic interaction devices, objects in a virtual environment can be touched and felt, enabling the user to judge and assess these objects in a more differentiated way. The focus of the present project is therefore on interactive haptic product presentation in a virtual shopping environment, embedded in online surveys with additional advertising films. As a by-product, the Open Inventor toolkit was extended with nodes for modelling haptic scene properties.