CAD/CAM techniques are increasingly used in dentistry for the design and fabrication of teeth restorations. Important concerns are the correction of articulation problems that existed before treatment and the prevention of treatment-generated problems. These require interactive evaluation of the occlusal surfaces of teeth during mastication. Traditional techniques based on the use of casts with mechanical articulators require manual adjustment of occlusal surfaces, which becomes impractical when hard restoration materials like porcelain are used; they are also time- and labor-consuming and provide little visual information. We present new visual tools and a related user interface for global articulation simulation, developed for the Intelligent Dental Care System project. The aim of the simulation is the visual representation of characteristics relevant to the chewing process. The simulation is based on the construction of distance maps, which are visual representations of the distribution of distances from points on a tooth to the opposite jaw. We use rasterizing graphics hardware for fast calculation of the distance maps. Distance maps are used for collision detection and for the derivation of various characteristics showing the distribution of load on the teeth and the chewing capability of the teeth. Such characteristics can be calculated for particular positions of the jaws; cumulative characteristics are used to describe the properties of jaw movement. This information may be used for interactive design of the occlusal surfaces of restorations and for jaw articulation diagnosis. We also demonstrate elements of a user interface that exploit metaphors familiar to dentists from everyday practice.
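A minimal CPU sketch may make the distance-map idea concrete. It assumes both occlusal surfaces have already been rasterized into depth buffers over the same (x, y) grid (the step the abstract performs on graphics hardware); the function names and the contact threshold are illustrative, not taken from the paper.

```python
import numpy as np

def distance_map(lower_depth: np.ndarray, upper_depth: np.ndarray) -> np.ndarray:
    """Per-pixel gap between the lower and the opposing upper occlusal surface.

    Positive values are open gaps, values near zero are occlusal contacts,
    and negative values indicate interpenetration (collision).
    """
    return upper_depth - lower_depth

def contacts_and_collisions(dmap: np.ndarray, eps: float = 0.05):
    """Masks that could drive load-distribution and collision displays."""
    contacts = np.abs(dmap) <= eps    # near-touching points
    collisions = dmap < -eps          # colliding points
    return contacts, collisions
```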
Dynamic characteristics of occlusion during lower jaw motion are useful in the diagnosis of jaw articulation problems and in computer-aided design/manufacture of teeth restorations. The Functionally Generated Path (FGP), produced as a surface which envelops the actual occlusal surface of the moving opponent jaw, can be used for compact representation of dynamic occlusal relations. In traditional dentistry FGP is recorded as a bite impression in a patient’s mouth. We propose an efficient computerized technique for FGP reconstruction and validate it through implementation and testing. The distance maps between occlusal surfaces of jaws, calculated for multiple projection directions and accumulated for mandibular motion, provide information for FGP computation. Rasterizing graphics hardware is used for fast calculation of the distance maps. Real-world data are used: the scanned shape of teeth and the measured motion of the lower jaw. We show applications of FGP to analysis of the occlusion relations and occlusal surface design for restorations.
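Under the same depth-buffer assumption as in the sketch above, accumulating distance maps over the recorded motion reduces the FGP to a per-pixel closest approach. The following is a hedged reading for a single projection direction, with illustrative names:

```python
import numpy as np

def functionally_generated_path(opponent_depth_frames):
    """Envelope of the moving opposing surface for one projection direction.

    opponent_depth_frames: iterable of 2D depth maps of the opposing occlusal
    surface, one per recorded position of the mandibular motion.
    """
    fgp = None
    for depth in opponent_depth_frames:
        # keep, per pixel, the closest approach seen so far
        fgp = depth.copy() if fgp is None else np.minimum(fgp, depth)
    return fgp
```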
A filtering model for efficient rendering of the spatial image of an occluded virtual sound source
(1999)
Rendering realistic spatial sound imagery for complex virtual environments must take into account the effects of obstructions such as reflectors and occluders. It is relatively well understood how to calculate the acoustical consequence that would be observed at a given observation point when an acoustically opaque object occludes a sound source. But the interference patterns generated by occluders of various geometries and orientations relative to the virtual source and receiver are computationally intense if accurate results are required. In many applications, however, it is sufficient to create a spatial image that is recognizable by the human listener as the sound of an occluded source. In the interest of improving audio rendering efficiency, a simplified filtering model was developed and its audio output submitted to psychophysical evaluation. Two perceptually salient components of occluder acoustics were identified that could be directly related to the geometry and orientation of a simple occluder. Actual occluder impulse responses measured in an anechoic chamber resembled the responses of a model incorporating only a variable duration delay line and a low-pass filter with variable cutoff frequency.
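The two salient components named above suggest a very small rendering model. The sketch below is one plausible reading, not the paper's calibrated filter: a pure delay followed by a first-order low-pass, with both parameters intended to be tied to occluder geometry and orientation.

```python
import numpy as np
from scipy.signal import lfilter

def occlude(signal: np.ndarray, sr: int, delay_ms: float, cutoff_hz: float) -> np.ndarray:
    """Apply a delay-plus-low-pass occluder model to a mono signal."""
    # variable-duration delay line: prepend silence
    delayed = np.concatenate([np.zeros(int(sr * delay_ms / 1000.0)), signal])
    # first-order (one-pole) low-pass with variable cutoff:
    # y[n] = a * x[n] + (1 - a) * y[n-1]
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    return lfilter([a], [1.0, a - 1.0], delayed)
```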
Given the limited computational resources available for the rendering of spatial sound imagery, we seek to determine effective means for choosing which components of the rendering will provide the most audible differences in the results. Rather than begin with an analytic approach that attempts to predict audible differences on the basis of objective parameters, we chose to begin with subjective tests of how audibly different the rendering result is perceived to be when that result includes two types of sound obstruction: reflectors and occluders. Single-channel recordings of 90 short speech sounds were made in an anechoic chamber in the presence and absence of these two types of obstructions, and as the angle of those obstructions varied over a 90-degree range. These recordings were reproduced over a single loudspeaker in that anechoic chamber, and listeners were asked to rate how confident they were that the recording of each of these 90 stimuli included an obstruction. These confidence ratings can be used as an integral component in the evaluation function used to determine which reflectors and occluders are most important for rendering.
In this paper we introduce a system for tracking persons walking or standing on a large planar surface and for using the acquired data to easily configure position-based interactions for virtual studio productions. The tracking component of the system, radarTRACK, is based on a laser scanner device capable of delivering interaction points on a large configurable plane. By using the device on the floor it is possible to use the delivered data to detect feet positions and derive the position and orientation of one or more users in real time. The second component of the system, named OscCalibrator, allows for the easy creation of multidimensional linear mappings between input and output parameters and the routing of OSC messages within a single modular design environment. We demonstrate the use of our system to flexibly create position-based interactions in a virtual studio environment.
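OSC transport aside, the core of such a calibrator can be viewed as an affine map fitted from example point pairs. The sketch below rests on that assumption; the actual OscCalibrator internals are not specified here, and OSC I/O (e.g., via a library such as python-osc) is omitted.

```python
import numpy as np

class LinearMapping:
    """y = A @ x + b, fitted from calibration point pairs."""

    def __init__(self, A, b):
        self.A, self.b = A, b

    @classmethod
    def from_point_pairs(cls, inputs, outputs):
        # least-squares affine fit, e.g. raw floor-tracker coordinates
        # -> virtual-set coordinates
        X = np.hstack([np.asarray(inputs, float),
                       np.ones((len(inputs), 1))])
        M, *_ = np.linalg.lstsq(X, np.asarray(outputs, float), rcond=None)
        return cls(M[:-1].T, M[-1])

    def __call__(self, x):
        return self.A @ np.asarray(x, float) + self.b
```

A fitted mapping of this kind could then be applied to each incoming tracker value before it is routed onward as an outgoing OSC message.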
This paper presents an approach to integrating non-visual user feedback into today's virtual TV studio productions. Since recent studies showed that systems providing vibro-tactile feedback are not sufficient to replace the common visual feedback, we developed an audio-based solution using an in-ear headphone system, enabling a talent to move around, avoid, and point to virtual objects in a blue or green box. The system consists of an optical head tracking system, a wireless in-ear monitor system, and a workstation that performs all application and audio processing. Using head-related transfer functions, the talent receives directional and distance cues. Since past research showed that generating reflections of the sounds and simulating the acoustics of the virtual room helps the listener to interpret the acoustical feedback, we included this technique as well. The performance of the system was evaluated in a user study with 15 participants.
More than three decades of ongoing research in immersive modelling has revealed many advantages of creating objects in virtual environments. Even though there are many benefits, the potential of immersive modelling has only been partly exploited due to unresolved issues such as ergonomic problems, numerous challenges with user interaction, and the inability to perform exact, fast and progressive refinements. This paper explores past research, shows alternative approaches and proposes novel interaction tools for pending problems. An immersive modelling application for polygon meshes is created from scratch and tested by professional users of desktop modelling tools, such as Autodesk Maya, in order to assess the efficiency, comfort and speed of the proposed application in direct comparison to professional desktop modelling tools.
CAD/CAM techniques are used increasingly in dentistry for the design and fabrication of teeth restorations. An important issue is preserving the occlusal contacts of teeth after restoration. Traditional techniques based on the use of casts with mechanical articulators require manual adjustment of the occlusal surface, which becomes impractical when hard restoration materials like porcelain are used; they are also time- and labor-consuming. Most existing computer systems either ignore such an articulation check completely or perform the check only at the level of a tooth and its immediate neighbors. We present a new mathematical model and a related user interface for global articulation simulation, developed for the Intelligent Dental Care System project. The aim of the simulation is to eliminate the use of mechanical articulators and manual adjustment in the process of designing dental restorations and in articulation diagnosis. The mathematical model is based upon differential topological modeling of the jaws considered as a mechanical system. The user interface exploits metaphors that are familiar to dentists from everyday practice. A new input device designed specifically for use with articulation simulation is proposed.
For this study, an experimental vibrotactile feedback system was developed to help actors move their arm to a certain place in a virtual TV studio under live conditions. Our intention is to improve interaction with virtual objects in a virtual set, which are usually not directly visible to the actor but only on distant displays. Vibrotactile feedback might improve the appearance on TV because an actor is able to look in any desired direction (camera or virtual object) or to read text on a teleprompter while interacting with a virtual object. Visual feedback in a virtual studio lacks spatial relation to the actor, which impedes the adjustment of the desired interaction. The five tactors of the implemented system, which are mounted on the tracked arm, convey additional cues such as collision, navigation and activation. The user study for the developed system shows that the time needed to reach a certain target is much longer when no visual feedback is given, but the accuracy is similar. In this study, subjects reported that an activation signal indicating arrival at the target of a drag & drop task was helpful. In this paper, we discuss the problems we encountered while developing such a vibrotactile display. Keeping these pitfalls in mind could lead to better feedback systems for actors in virtual studio environments.
The Soundtrack Of Your Life
(2012)
The presentation of virtual environments in real time has always been a demanding task. Specially designed graphics hardware is necessary to deal with the large amounts of data these applications typically produce. For several years the chipsets that were used allowed only simple lighting models and fixed algorithms. But recent development has produced new graphics processing units (GPUs) that are much faster and more programmable than their predecessors. This paper presents an approach to take advantage of these new features. It uses a video texture as part of the lighting calculations for the passenger compartment of a virtual train and was run on the GPU of a recent PC graphics card. The task was to map the varying illumination of a filmed landscape onto the virtual objects and also onto another video texture (showing two passengers), thereby enhancing the realism of the scene.
Level-of-detail is a concept well known in computer graphics for reducing the number of rendered polygons. Depending on the distance to the subject (viewer), the objects' representation is changed. A similar concept is the clustering of sound sources for sound spatialization. Clusters can be used to hierarchically organize mixels and to optimize the use of resources by grouping multiple sources together into a single representative source. Such a clustering process should minimize the error of position allocation of elements, perceived as angle and distance, and also differences in velocity relative to the sink (i.e., Doppler shift). Objects with a similar direction of motion and speed (relative to the sink) in the same acoustic resolution cone and with similar distance to a sink can be grouped together.
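As an illustration of these grouping criteria, a greedy clustering pass might look as follows; the thresholds, the data layout, and the choice of cluster representative are assumptions of the sketch, not the paper's algorithm.

```python
import numpy as np

def cluster_sources(positions, velocities, sink,
                    max_angle=0.2, max_dist_ratio=0.25, max_radial_dv=2.0):
    """Greedily group sources; each cluster keeps the attributes of its seed."""
    sink = np.asarray(sink, float)
    clusters = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        r = np.asarray(p, float) - sink
        dist = np.linalg.norm(r)
        direction = r / dist                                      # unit vector sink -> source
        v_rad = float(np.dot(np.asarray(v, float), direction))    # Doppler-relevant radial speed
        for c in clusters:
            angle = np.arccos(np.clip(np.dot(direction, c["dir"]), -1.0, 1.0))
            if (angle < max_angle                                 # same acoustic resolution cone
                    and abs(dist - c["dist"]) / c["dist"] < max_dist_ratio
                    and abs(v_rad - c["v_rad"]) < max_radial_dv):
                c["members"].append(i)
                break
        else:
            clusters.append({"dir": direction, "dist": dist,
                             "v_rad": v_rad, "members": [i]})
    return clusters
```

Each cluster's members would then be mixed into a single representative source placed at, for example, the loudness-weighted centroid of the group.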
The volumetric measurement of excavations on construction sites is a cost-relevant factor and, even today, is often still carried out in manual detail work during daily site operations. Low-cost depth sensors enable the semi-automatic capture of excavation pits, and Augmented Reality (AR) can provide the feedback this process needs. We present a prototype consisting of a tablet with an integrated camera and a LiDAR scanner. Volume capture is tested and evaluated with respect to usability and accuracy when AR is used. To determine the volume, an algorithm based on ray casting is developed with the support of a graphics engine. The algorithm is robust against volumes that are not completely closed. Operation, verification and visualization take place through the practical use of AR.
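The core of such a ray-based estimate can be stated compactly. The sketch below assumes the graphics-engine raycast has already produced a grid of hit heights (NaN where a ray found no surface); skipping the misses is what makes the estimate tolerant of volumes that are not completely closed. Names and the grid layout are illustrative.

```python
import numpy as np

def pit_volume(hit_heights: np.ndarray, reference_height: float, cell_area: float) -> float:
    """Approximate excavation volume from vertical raycasts on a regular grid.

    hit_heights: 2D grid of surface heights where rays hit; NaN for misses.
    reference_height: height of the original grade (the pit's rim plane).
    cell_area: ground area covered by one grid cell.
    """
    depths = reference_height - hit_heights
    # skip rays that missed (holes in the scan) and points above grade
    depths = np.where(np.isnan(depths) | (depths < 0.0), 0.0, depths)
    return float(depths.sum() * cell_area)
```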
A Sound Spatialization Server for a Speaker Array as an Integrated Part of a Virtual Environment
(1998)
Spatial sound plays an important role in virtual reality environments, allowing orientation in space, giving a feeling of space, focusing the user on events in the scene, and substituting for missing feedback cues (e.g., force feedback). The sound spatialization framework of the University of Aizu, which supports a number of spatialization backends, has been extended with a sound spatialization server for a multichannel loudspeaker array (Pioneer Sound Field Control System). The spatialization server is designed to allow easy integration into virtual environments. Modeling of distance cues, which are essential for full immersion, is discussed. Furthermore, the integration of this prototype into different applications allowed us to reveal the advantages and problems of spatial sound for virtual reality environments.
Probing the Potential of Multimedia Artefacts to Support Communication of People with Dementia
(2015)
Distance and Room Effects Control for the PSFC, an Auditory Display using a Loudspeaker Array
(2000)
The Pioneer Sound Field Controller (PSFC), a loudspeaker array system, features realtime configuration of an entire sound field, including sound source direction, virtual distance, and context of the simulated environment (room characteristics: room size and liveness) for each of two sound sources. In the PSFC system there is no native parameter to specify the distance between the sound source and the sound sink (listener), and also no function to control it directly. This paper suggests a method for controlling virtual distance using basic parameters: volume, room size, and liveness. The implementation of distance cues is an important aspect of 3D sound. Virtual environments supporting room effects like reverberation not only gain realism but also provide additional information to users about the surrounding space. The context switch between different aural attributes is done using an API of the Sound Spatialization Framework. Thus, when the sound sink moves between two rooms, such as a small bathroom and a large living room, the context of the sink switches and a different sound is obtained.
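One plausible reading of this indirect distance control is sketched below: an inverse-distance gain plus a distance-dependent liveness term standing in for room response. The specific mapping is an illustrative assumption, not the paper's calibrated parameter curves.

```python
def distance_to_psfc_params(distance: float, ref_distance: float = 1.0) -> dict:
    """Map a desired virtual distance to the parameters the PSFC does expose."""
    d = max(distance, ref_distance)
    gain = ref_distance / d                          # ~ -6 dB per doubling of distance
    liveness = min(1.0, 0.2 + 0.8 * (1.0 - gain))    # farther -> more reverberant
    return {"volume": gain, "liveness": liveness}
```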
The Common Lisp Interface Manager (CLIM) is used to develop graphical user interfaces for Lisp-based applications. With the prototype of the CLIM Interface Builder (CLIB), the programmer can generate code for CLIM interactively. The development process becomes faster and less prone to errors. With this new tool, interactive rapid prototyping reduces the costs of a specification phase. Here we present the concept and first results of the CLIB prototype.
Virtual environments can create a realistic impression of an architectural space during the architectural design process, providing a powerful tool for evaluation and promotion during a project's early stages. In comparison to pre-rendered animations, such as walkthroughs based on CAD models, virtual environments can offer intuitive interaction and a more lifelike experience. Advanced virtual environments allow users to change realtime rendering features with a few manipulations, switching between different versions while still maintaining sensory immersion. This paper reports on an experimental project in which architectural models are being integrated into interactive virtual environments, and includes demonstrations of both the possibilities and limitations of such applications in evaluating, presenting and promoting architectural designs.
Visualizing products in real time is a helpful step in many fields, giving potential customers an idea of the area of application and an overview of the final product. In recent years, new technologies in the graphics card industry have made capabilities that were formerly available only on expensive graphics workstations achievable with relatively inexpensive cards designed for use in standard PCs.
Using a design model of the interior of the People Cargo Mover, we show how the lighting within a real-time visualization can be realized with shaders. A landscape recording, integrated as one of several video textures, serves as the light source. In addition, people filmed in a virtual studio are likewise displayed in the interior via video textures and are also lit by the landscape.
Virtual TV studios use actor tracking systems to resolve the occlusion between computer graphics and the studio camera image. The actor tracking delivers the distance between actor and studio camera. We deploy a photonic mixing device, which captures a depth map and a luminance image at low resolution. The render engine gets one depth value per actor via the OSC protocol. We describe the actor recognition algorithm based on the luminance image and the calculation of the depth value, and we discuss technical issues like noise and calibration.
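A sketch of this per-actor depth extraction, under stated assumptions: foreground blobs are segmented in the low-resolution luminance image, and each blob's median depth becomes the single value forwarded to the renderer (via OSC in the production system). The threshold values are hypothetical; a median is chosen here because PMD depth data is noisy.

```python
import numpy as np
from scipy import ndimage

def actor_depths(luminance: np.ndarray, depth: np.ndarray,
                 fg_threshold: float, min_pixels: int = 20) -> list[float]:
    """One robust depth value per detected actor blob."""
    labels, n = ndimage.label(luminance > fg_threshold)  # connected foreground blobs
    depths = []
    for k in range(1, n + 1):
        mask = labels == k
        if mask.sum() >= min_pixels:                     # reject small noise blobs
            depths.append(float(np.median(depth[mask])))
    return depths
```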
Virtual set environments for broadcasting are becoming more sophisticated, and their visual quality is improving. Realtime interaction and production-specific visualization, implemented through a plugin mechanism, enhance existing systems like the 3DK. This work presents the integration of the Intersense IS-900 SCT camera tracking and 3D interaction into the 3DK virtual studio environment. The main goal of this work is the design of a virtual studio environment for post production, which includes video output as well as media streaming formats such as MPEG-4. The system allows high-quality offline rendering during post production and 3D interaction by the moderator during the recording.
We report on several experiments applying mixed reality technology in the context of accessing collective memories of the atomic bombs, the Holocaust, and the Second World War. We discuss the impact of Virtual Reality, Augmented Virtuality and Augmented Reality for specific memorial locations. We show how to use a virtual studio to demonstrate an augmented reality application for a specific location in a remote session within a video conference. Augmented Virtuality is used to recreate the local environment, thus providing context and helping the participants recollect emotions related to a certain place. This technique demonstrates the advantages of using virtual reality (VR) and augmented reality (AR) environments for rapid prototyping and pitching project ideas in a live remote setting.
Virtual environment walkthrough applications are generally enhanced by a user's interactions within a simulated architectural space, but the enhancement that stems from changes in spatial sound coupled with a user's behavior is particularly important, especially with regard to creating a sense of place. When accompanied by stereoscopic image synthesis, spatial sound can immerse the user in a high-realism virtual copy of the real world. An advanced virtual environment that allows users to change realtime rendering features with a few manipulations has been shown to enable switching between different versions of a modeled space while maintaining sensory immersion. This paper reports on an experimental project in which an architectural model is being integrated into such an interactive virtual environment. The focus is on spatial sound design for supporting interaction, including demonstrations of both the possibilities and limitations of such applications in presenting and promoting architectural designs, as well as in three-dimensional sketching.
Two high dynamic range (HDR) environment maps based on video streams from fish-eye lens cameras are used for generating virtual lights in a virtual set renderer. Using captured environment maps can ease the task of setting up realistic virtual lighting for a scene while improving visual quality. We discuss the light setting problem for virtual studio TV productions, whose scenes mix real objects, actors, virtual objects and virtual backgrounds. The benefits of interactive HDR light control are that the real light in the studio does not have to be remodeled and that the artistic impression created by the studio lighting is also captured. An analysis of system requirements identifies technical challenges. We discuss the properties of a prototype system, including a test production.
The interactive real-time 3D visualization "Mobilization and Homing of Blood Stem Cells" was designed to convey highly complex medical knowledge using the means of real-time 3D visualization, the Internet, and the interactivity that results from them. This had to be done at a level that afterwards allows even non-physicians to follow the underlying biological and medical facts. The result is an informative and didactic application composed of a mixture of interactive 3D stations and explanatory 3D animations. The methodology of the concept phase and the interaction techniques are discussed.
Multimedia technologies are increasingly used in market research to conduct flexible and cost-effective studies. The innovation process can draw on many years of experience gained through the use of computer simulation in technical product development. In very early phases of the innovation process, the new technologies can be used to test market launch concepts for new products. Virtual reality applications offer a unique potential to test new products, including their marketing concept, before the product physically exists. Using one element of the marketing concept, pricing policy, as an example, this study shows the potential offered by virtual product purchase situations. The focus of the project is on interactive product presentation in a virtual environment, embedded in an online survey with additional advertising films. Visually high-quality 3D product presentations place the test subject in a virtual shopping environment that corresponds to a real scenario. The virtual products are offered at different prices over several rounds of purchase decisions. The price study is preceded by a presentation of selected commercials and a product-related survey. Following the virtual price decisions, impressions and several control variables are queried. In further studies of this kind, the effects of several marketing instruments can be examined at a time when the products are still in the development process. In this way, competitive advantages of existing products can also be identified and exploited more efficiently. Highly developed computer and visualization technologies have produced a powerful tool that is already being used for commercial presentations and product studies. In the future, combined with Internet applications and classical market research methods, it can deliver comprehensive insights about a product at a very early stage.
The increased use of multimedia technologies gives market research the means to conduct flexible and cost-effective studies. In very early phases of the innovation process, as part of market research, virtual environments can be used to test market launch concepts for new products. With virtual reality applications, new products, including their marketing concept, can also be tested haptically before the product physically exists. In virtual environments, information is conveyed to the user mainly visually, supplemented by audio. Common user interfaces are interaction devices such as stylus and wand. Haptic perception lets users take in information in a more natural, effective and intuitive way. With haptic interaction devices, objects in a virtual environment can be touched and felt, enabling a more differentiated assessment and evaluation of those objects by the user. The focus of this project is therefore on interactive haptic product presentation in a virtual shopping environment, embedded in online surveys with additional advertising films. As a by-product, the Open Inventor toolkit was extended with nodes for modeling haptic scene properties.
Auditory displays with the ability to dynamically spatialize virtual sound sources under real-time conditions enable advanced applications for art and music. A listener can be deeply immersed while interacting and participating in the experience. We review some of those applications while focusing on the Helical Keyboard project and discussing the required technology. Inspired by the cyclical nature of octaves and the helical structure of a scale, a model of a piano-style keyboard was prepared and then geometrically warped into a helicoidal configuration, one octave per revolution, pitch mapped to height and chroma. It can be driven by MIDI events, real-time or sequenced, whose stream is both synthesized and spatialized by a spatial sound display. The sound of the respective notes is spatialized with respect to sinks, avatars of the human user, by default inside the tube of the helix. Alternative coloring schemes can be applied, including a color map compatible with chromastereoptic eyewear. The graphical display animates polygons, interpolating between the notes of a chord across the tube of the helix. Recognition of simple chords allows directionalization of all the notes of a major triad from the position of its musical root. The system is designed to allow, for instance, separate audition of harmony and melody, commonly played by the left and right hands, respectively, on a normal keyboard. Perhaps the most exotic feature of the interface is the ability to fork one's presence, replicating subject instead of object, by installing multiple sinks at arbitrary places around a virtual scene so that, for example, harmony and melody can be separately spatialized, using two heads to normalize the octave; such a technique effectively doubles the helix from the perspective of a single listener. Rather than a symmetric arrangement of the individual helices, they are perceptually superimposed in phase, co-extensively, so that corresponding notes in different registers are at the same azimuth.
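The helicoidal geometry described here, one octave per revolution with pitch mapped to height and chroma to azimuth, is compact enough to state directly. The radius and the rise per octave below are illustrative parameters, not the project's actual values.

```python
import math

def note_position(midi_note: int, radius: float = 1.0,
                  height_per_octave: float = 0.3) -> tuple[float, float, float]:
    """3D position of a key on the helix: chroma -> azimuth, pitch -> height."""
    chroma = midi_note % 12
    theta = 2.0 * math.pi * chroma / 12.0      # same pitch class -> same azimuth
    z = height_per_octave * midi_note / 12.0   # continuous rise: one turn per octave
    return (radius * math.cos(theta), radius * math.sin(theta), z)
```

Because z rises continuously while the azimuth repeats every twelve semitones, corresponding notes in different registers stack vertically at the same azimuth, as the abstract notes.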
In an information-rich Virtual Reality (VR) environment, the user is immersed in a world containing many objects providing that information. Given the finite computational resources of any computer system, optimization is required to ensure that the most important information is presented to the user as clearly as possible and in a timely fashion. In particular, what is desired are means whereby the perspicuity of an object may be enhanced when appropriate. An object becomes more perspicuous when the information it provides to the user becomes more readily apparent. Additionally, if a particular object provides high-priority information, it would be advantageous to make that object obtrusive as well as highly perspicuous. An object becomes more obtrusive if it draws attention to itself (or equivalently, if it is hard to ignore). This paper describes a technique whereby objects may dynamically adapt their representation in a user's environment according to a dynamic priority evaluation of the information each object provides. The three components of our approach are:
- an information manager that evaluates object information priority,
- an enhancement manager that tabulates rendering features associated with increasing object perspicuity and obtrusion as a function of priority, and
- a resource manager that assigns available object rendering resources according to features indicated by the enhancement manager for the priority set for each object by the information manager.
We consider resources like visual space (pixels), sound spatialization channels (mixels), MIDI/audio channels, and processing power, and discuss our approach applied to different applications. Assigned object rendering features are implemented locally at the object level (e.g., object facing the user using the billboard node in VRML 2.0) or globally, using helper applications (e.g., active spotlights, semi-automatic cameras).
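A condensed sketch of how the three managers could interact is given below, with an illustrative table format and resource accounting; it is one possible reading of the pipeline, not the original implementation.

```python
def allocate(objects, feature_table, capacity):
    """Assign enhancement features to objects in priority order.

    objects:       list of (object_id, priority) from the information manager.
    feature_table: list of (min_priority, features, cost) entries from the
                   enhancement manager, sorted by descending min_priority;
                   cost is a dict resource -> units (e.g., {"mixels": 1}).
    capacity:      dict resource -> units available to the resource manager.
    """
    plan = {}
    for obj_id, prio in sorted(objects, key=lambda o: -o[1]):
        for min_prio, features, cost in feature_table:
            if prio >= min_prio and all(capacity.get(r, 0) >= c for r, c in cost.items()):
                for r, c in cost.items():
                    capacity[r] -= c          # reserve the granted resources
                plan[obj_id] = features
                break
        else:
            plan[obj_id] = ()                 # fall back: render without enhancement
    return plan
```

High-priority objects are served first, so when resources such as mixels or MIDI channels run out, only low-priority objects lose their perspicuity- and obtrusion-enhancing features.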