A chatspace was developed that allows conversation with 3D sound using networked streaming in a shared virtual environment. The system provides an interface to advanced audio features, such as a "whisper function" for conveying a confidential audio stream. This study explores the use of spatial audio to enhance a user's experience in multiuser virtual environments.
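The whisper function amounts to selective routing of an audio stream. A minimal sketch of such routing, with a hypothetical ChatspaceRouter class and delivery callbacks standing in for the networked streaming layer (names and structure are assumptions, not the system's actual API):

```python
# Minimal routing sketch (hypothetical API): normal speech is broadcast to all
# connected peers, while a "whisper" is delivered only to its intended listener.

class ChatspaceRouter:
    def __init__(self):
        self.peers = {}          # peer_id -> callable accepting an audio frame

    def register(self, peer_id, deliver):
        self.peers[peer_id] = deliver

    def speak(self, sender_id, frame):
        # broadcast: every peer except the sender receives the frame
        for peer_id, deliver in self.peers.items():
            if peer_id != sender_id:
                deliver(frame)

    def whisper(self, sender_id, target_id, frame):
        # confidential stream: only the addressed peer receives the frame
        if target_id in self.peers and target_id != sender_id:
            self.peers[target_id](frame)

router = ChatspaceRouter()
router.register("alice", lambda f: print("alice hears", f))
router.register("bob",   lambda f: print("bob hears", f))
router.speak("alice", "hello everyone")
router.whisper("alice", "bob", "just for you")
```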
Actors in virtual studio productions are faced with the challenge that they have to interact with invisible virtual objects, because these elements are rendered separately and combined with the real image later in the production process. Virtual sets typically use static virtual elements or animated objects with predefined behavior, so that actors can practice their performance and errors can be corrected in post-production. With the demand for inexpensive live recording and interactive TV productions, virtual objects will be rendered dynamically at arbitrary positions that cannot be predicted by the actor. Perceptual aids have to be employed to support natural interaction with these objects. In our work we study the effect of haptic feedback for a simple form of interaction. Actors are equipped with a custom-built haptic belt and receive vibrotactile feedback during a small navigational task (path following). We present a prototype of a wireless vibrotactile feedback device and a small framework for evaluating haptic feedback in a virtual set environment. Results from an initial pilot study indicate that vibrotactile feedback is a suitable non-visual aid for interaction that is at least comparable to the audio-visual alternatives used in virtual set productions.
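To illustrate the kind of cue such a belt can give during path following, here is a minimal sketch (not the study's actual mapping) that converts an actor's lateral deviation from a path segment into left/right tactor intensities; the corridor width and the side convention are illustrative assumptions:

```python
import math

def path_cue(pos, a, b, corridor=0.5):
    """Map an actor's signed lateral deviation from path segment a->b to
    left/right tactor intensities in [0, 1]; corridor is the deviation in
    metres at which the cue saturates (an illustrative assumption)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy) or 1.0
    # 2D cross product: > 0 means the actor is left of the path direction
    dev = (dx * (pos[1] - a[1]) - dy * (pos[0] - a[0])) / length
    strength = min(abs(dev) / corridor, 1.0)
    # vibrate on the side the actor should move towards (back onto the path)
    left, right = (0.0, strength) if dev > 0 else (strength, 0.0)
    return left, right

print(path_cue((1.0, 0.4), (0.0, 0.0), (4.0, 0.0)))  # drifted left -> right tactor
```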
In this paper we describe a prototypical system for live musical performance in a virtual studio environment. The performer stands in front of the studio camera and interacts with an infrared-laser-based multi-touch device. The final TV image shows the performer interacting with a virtual screen which is augmented in front of herself. To overcome the problem of the performer not seeing this virtual screen in reality, we use a special hexagonal grid to facilitate the performer's awareness of this novel Theremin-like virtual musical instrument.
Vibrotactile feedback via body-worn vibrating belts is a common means of direction signalization - e.g. for navigational tasks. Consequently such feedback devices are used to guide blind or visually impaired people but can also be used to support other wayfinding tasks - for instance, guiding actors in virtual studio productions. Recent effort has been made to simplify this task by integrating vibrotactile feedback into virtual studio applications. In this work we evaluate the accuracy of an improved direction signalization technique, utilizing a body-worn vibrotactile belt with a limited number of tactors, and compare it to other work. The results from our user study indicate that it is possible to signalize different directions accurately, even with a small number of tactors spaced by 90°.
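One common way to signalize arbitrary directions with only four tactors is to cross-fade between the two tactors adjacent to the target direction; the sketch below shows that interpolation idea, not necessarily the exact mapping used in the study:

```python
def tactor_intensities(direction_deg, tactor_angles=(0, 90, 180, 270)):
    """Distribute a target direction onto a belt with four tactors spaced 90
    degrees apart by linearly interpolating between the two nearest tactors."""
    n = len(tactor_angles)
    spacing = 360 / n
    intensities = [0.0] * n
    for i, angle in enumerate(tactor_angles):
        # shortest angular distance between the target direction and this tactor
        diff = abs((direction_deg - angle + 180) % 360 - 180)
        if diff < spacing:
            intensities[i] = 1.0 - diff / spacing
    return intensities

print(tactor_intensities(45))   # front and right tactor share the cue equally
print(tactor_intensities(90))   # right tactor only
```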
This article describes the possibilities and problems that arise when using the SteamVR tracking 2.0 system as a camera tracking system in a virtual studio and explains an approach for implementation and calibration within a professional studio environment. The tracking system allows for cost-effective deployment. Other relevant application fields are mixed reality recording and the streaming of AR and VR experiences.
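Conceptually, such a setup composes the tracked pose of a tracker rigidly mounted on the studio camera with a fixed, calibrated tracker-to-camera offset. A small sketch of that composition with homogeneous transforms (all numeric values are illustrative, not calibration results):

```python
import numpy as np

def to_homogeneous(rotation, translation):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# The tracker is rigidly mounted on the camera, so the camera pose in studio
# coordinates is studio_T_camera = studio_T_tracker @ tracker_T_camera, where
# tracker_T_camera is the fixed offset determined once during calibration.
tracker_T_camera = to_homogeneous(np.eye(3), [0.0, -0.12, 0.05])  # illustrative

def camera_pose(studio_T_tracker):
    return studio_T_tracker @ tracker_T_camera

pose = camera_pose(to_homogeneous(np.eye(3), [1.0, 1.5, 2.0]))
print(pose[:3, 3])  # camera position in studio coordinates
```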
Cliffhanger-VR
(2018)
The audio design for virtual environments includes the simulation of acoustical room properties in addition to specifying sound sources and sinks and their behavior. Virtual environments supporting room reverberation not only gain realism but also provide additional information to the user about the surrounding space. Capturing the different sound properties of different spaces requires partitioning the environment according to the properties of its aural spaces. We define soundscapes and aural attributes as an interface between the application and the multimedia content. Data calculated on an abstract level is sent to spatialization backends. Part of this research was the implementation of a device driver for the Roland Sound Space Processor. This device not only directionalizes sound sources, but also controls room effects like reverberation.
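A rough sketch of what such an abstraction layer could look like; the class and field names (SoundSource, AuralSpace, SpatializationBackend) are assumptions for illustration, not the system's actual interface:

```python
from dataclasses import dataclass

@dataclass
class SoundSource:
    name: str
    position: tuple          # x, y, z in the virtual environment
    gain: float = 1.0

@dataclass
class AuralSpace:
    reverb_time: float       # seconds, coarse room property
    room_size: float         # metres, coarse room property

class SpatializationBackend:
    """Backends translate the abstract description into device-specific
    commands (e.g. for a processor that also controls reverberation)."""
    def render(self, source: SoundSource, space: AuralSpace):
        raise NotImplementedError

class ConsoleBackend(SpatializationBackend):
    def render(self, source, space):
        print(f"{source.name}: pos={source.position}, "
              f"reverb={space.reverb_time:.1f}s")

ConsoleBackend().render(SoundSource("footsteps", (2.0, 0.0, -1.0)),
                        AuralSpace(reverb_time=1.2, room_size=8.0))
```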
With the virtual environment developed here, the characteristic sound radiation patterns of musical instruments can be experienced in real-time. The user may freely move around a musical instrument, thereby receiving acoustic and visual feedback in real-time. The perception of auditory and visual effects is intensified by the combination of acoustic and visual elements, as well as the option of user interaction. The simulation of characteristic sound radiation patterns is based on interpolating the intensities of a multichannel recording and offers a near-natural mapping of the sound radiation patterns. Additionally, a simple filter has been developed, enabling the qualitative simulation of an instrument’s characteristic sound radiation patterns to be easily implemented within real-time 3D applications. Both methods of simulating sound radiation patterns have been evaluated for a saxophone with respect to their functionality and validity by means of spectral analysis and an auditory experiment.
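The interpolation of multichannel intensities can be sketched as a cross-fade between the two recording channels closest to the listener's direction around the instrument; the evenly spaced microphone layout below is an assumption for illustration:

```python
def channel_gains(listener_azimuth_deg, n_channels=8):
    """Playback gains for a multichannel recording whose microphones are
    assumed to be spaced evenly around the instrument; the two channels
    nearest the listener's direction are cross-faded linearly."""
    spacing = 360 / n_channels
    pos = (listener_azimuth_deg % 360) / spacing
    lower = int(pos) % n_channels
    upper = (lower + 1) % n_channels
    frac = pos - int(pos)
    gains = [0.0] * n_channels
    gains[lower] = 1.0 - frac
    gains[upper] = frac
    return gains

print(channel_gains(100))  # mostly channel 2 (90 deg), partly channel 3 (135 deg)
```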
A visual and spatial feedback system for orientation in virtual sets of virtual TV studios was developed and evaluated. It is based on a green proxy object, which moves around the acting space by way of four transparent wires. A separate unit controls four winches and is connected to an engine, which renders the virtual set. A newly developed plugin registers a virtual object’s position with the proxy object, which imitates the virtual object’s movement on stage. This allows actors to establish important eye contact with a virtual object and to feel more comfortable in a virtual set. Furthermore, interaction with the virtual object and its proxy can be realised through a markerless actor tracking system. Several possible scenarios for user application were recorded and presented to experts in the broadcast industry, who evaluated the potential of SpiderFeedback in interviews and by questionnaires.
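Positioning the proxy comes down to spooling each of the four wires to the straight-line distance between its winch and the target point. A small geometric sketch (winch anchor positions are illustrative assumptions):

```python
import math

# Winches assumed at the upper corners of a 4 m x 4 m acting space, 3 m high.
WINCHES = [(0, 0, 3), (4, 0, 3), (4, 4, 3), (0, 4, 3)]   # x, y, z in metres

def wire_lengths(target):
    """Required length of each wire so the proxy hangs at the target point."""
    return [math.dist(w, target) for w in WINCHES]

print(["%.2f" % l for l in wire_lengths((2.0, 2.0, 1.5))])
```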
Mixed reality telepresence is becoming an increasingly popular form of interaction in social and collaborative applications. We are interested in how created virtual spaces can be archived, mapped, shared, and reused among different applications. Therefore, we propose a decentralized blockchain-based peer-to-peer model of distribution, with virtual spaces represented as blocks. We demonstrate the integration of our system in a collaborative mixed reality application and discuss the benefits and limitations of our approach.
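A minimal sketch of how virtual spaces might be represented as hash-chained blocks; the block fields and serialization are assumptions for illustration, not the paper's exact format:

```python
import hashlib, json, time

def make_block(space_data, prev_hash):
    """Wrap a serialized virtual space in a block that records the hash of
    its predecessor, so peers can verify the shared history."""
    block = {
        "timestamp": time.time(),
        "space": space_data,          # e.g. anchors, meshes, annotations
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block({"name": "lobby", "anchors": []}, prev_hash="0" * 64)
child = make_block({"name": "meeting room", "anchors": [[0, 1, 2]]},
                   prev_hash=genesis["hash"])
print(child["prev_hash"] == genesis["hash"])   # True: the chain is linked
```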
Live video streaming is becoming increasingly popular as a form of interaction in social applications. One of its main advantages is the ability to immediately create and connect a community of remote users on the spot. In this paper we discuss how this feature can be used for crowdsourced completion of simple visual search tasks (such as finding specific objects in libraries and stores, or navigating around live events) and for social interactions through mobile mixed reality telepresence interfaces. We present a prototype application that allows users to create a mixed reality space with photospherical imagery as a background and to interact with other connected users through viewpoint, audio, and video sharing, as well as real-time annotations in the mixed reality space. Convinced of the novelty of our system, we conducted a short series of interviews with industry professionals on its possible applications. We discuss proposed use cases for user evaluation, as well as outline future extensions of our system.
Mobile live video streaming is becoming an increasingly popular form of interaction both in social media and remote collaboration scenarios. However, in most cases the streamed video does not take mobile devices' spatial data into account (e.g., the viewers do not know the spatial orientation of a streamer), or use such data only in specific scenarios (e.g., to navigate around a spherical video stream).
Easy-to-use VR applications require interaction techniques other than those that conventional desktop applications provide with mouse, keyboard, and the desktop metaphor. Since such approaches are considerably more complex to design and implement, they must be selected with care. If one follows the argument that VR enables natural interaction with virtual objects, this leads almost inevitably to two-handed interaction techniques for virtual environments, since users in real environments are accustomed to acting almost exclusively with both hands. In this contribution we give an overview of the state of the art in two-handed interaction, derive requirements for the development of two-handed interaction techniques in VR, and describe our own approach. It concerns two-handed interaction in the simulation of flexible, pliable components (e.g. hose connections).
Acquiring human motion data from video images plays an important role in the field of computer vision. Ground-truth tracking systems require markers to create high-quality motion data, but many applications need to work without markers. In recent years, affordable hardware for markerless tracking systems has become available at the consumer level. Efficient depth camera systems based on Time-of-Flight sensors and structured-light systems have made it possible to record motion data in real time. However, the quality gap between marker-based and markerless systems remains large. The error sources of a markerless motion tracking pipeline are discussed and a model-based filter is proposed, which adapts depending on spatial location. The proposed method is shown to be more robust and accurate than the unfiltered data stream and can be used to visually enhance the presence of an actor within a virtual environment in live broadcast productions.
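One way to sketch a spatially adaptive filter is exponential smoothing whose factor depends on a joint's distance from the depth sensor, smoothing noisy far-away measurements more strongly; the adaptation rule and constants below are illustrative assumptions, not the proposed model-based filter itself:

```python
def adaptive_filter(prev, measured, distance, near=1.0, far=4.0):
    """Exponential smoothing whose factor depends on the joint's distance
    from the camera; prev/measured are (x, y, z) tuples, distance in metres."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    alpha = 0.9 - 0.6 * t          # trust measurements near the sensor more
    return tuple(alpha * m + (1 - alpha) * p for p, m in zip(prev, measured))

print(adaptive_filter((0.0, 1.0, 2.0), (0.05, 1.02, 2.0), distance=3.5))
```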
Mobile virtual archery
(2013)
In today’s market research, multimodal technologies are significant tools for performing flexible and cost-efficient studies of consumer products and goods. Current appraisal mechanisms, in combination with applied computer graphics, can improve the assessment of a product’s launch in the very early design phase or during an innovation process. The combination of online questionnaires, Virtual Reality (VR) applications, and a database management system offers a powerful tool for letting consumers judge existing products as well as innovated goods even before a single article has been produced. In this paper we present an approach to consumer goods studies consisting of conventional as well as interactive VR product presentations and online questionnaires, based on a bidirectional database management solution that configures and manages numerous studies, virtual sets, goods, and participants in an effective way and supports the evaluation of the received data. Non-programmers can create their test environment, including a VR scenario, quickly and with little effort. With this extensive knowledge of consumer goods, marketing instruments can be defined to shorten and improve the rollout process in the early product stages.
In this paper we introduce a system for tracking persons walking or standing on a large planar surface and for using the acquired data to easily configure position-based interactions for virtual studio productions. The tracking component of the system, radarTRACK, is based on a laser scanner device capable of delivering interaction points on a large configurable plane. By using the device on the floor it is possible to use the delivered data to detect feet positions and derive the position and orientation of one or more users in real time. The second component of the system, named OscCalibrator, allows for the easy creation of multidimensional linear mappings between input and output parameters and the routing of OSC messages within a single modular design environment. We demonstrate the use of our system to flexibly create position-based interactions in a virtual studio environment.
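Deriving a user's position and orientation from two detected feet can be sketched as taking the foot midpoint and a direction perpendicular to the foot-to-foot axis; this is a geometric illustration only, and the helper name user_pose is hypothetical:

```python
import math

def user_pose(left_foot, right_foot):
    """Position and facing direction from two tracked foot points on the
    floor plane. Assumes the user faces perpendicular to the left->right
    foot axis; the real system also has to associate points with users."""
    cx = (left_foot[0] + right_foot[0]) / 2
    cy = (left_foot[1] + right_foot[1]) / 2
    dx, dy = right_foot[0] - left_foot[0], right_foot[1] - left_foot[1]
    # rotate the left->right axis 90 degrees counter-clockwise -> forward
    heading = math.degrees(math.atan2(dx, -dy))
    return (cx, cy), heading

print(user_pose((0.0, 0.0), (0.3, 0.0)))   # centred between the feet, facing 90 deg
```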
This paper presents an approach to integrating non-visual user feedback into today's virtual TV studio productions. Since recent studies showed that systems providing vibro-tactile feedback are not sufficient to replace the common visual feedback, we developed an audio-based solution using an in-ear headphone system, enabling a talent to move around, avoid, and point to virtual objects in a blue or green box. The system consists of an optical head tracking system, a wireless in-ear monitor system, and a workstation that performs all application and audio processing. Using head-related transfer functions, the talent receives directional and distance cues. Since past research showed that generating reflections of the sounds and simulating the acoustics of the virtual room helps the listener to interpret the acoustic feedback, we included this technique as well. The performance of the system was evaluated in a user study with 15 participants.
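The directional cue requires expressing each virtual object's bearing relative to the tracked head pose before an HRTF can be selected. A sketch of that geometry on the floor plane (simplified to azimuth and distance; not the full binaural rendering):

```python
import math

def relative_cue(head_pos, head_yaw_deg, obj_pos):
    """Azimuth (degrees, 0 = straight ahead, positive = to the right) and
    distance of a virtual object relative to the tracked head pose; these
    two values would drive HRTF selection and distance attenuation.
    Positions are (x, z) on the floor plane, forward = +z (assumption)."""
    dx = obj_pos[0] - head_pos[0]
    dz = obj_pos[1] - head_pos[1]
    world_az = math.degrees(math.atan2(dx, dz))      # bearing in world space
    azimuth = (world_az - head_yaw_deg + 180) % 360 - 180
    distance = math.hypot(dx, dz)
    return azimuth, distance

print(relative_cue((0.0, 0.0), 0.0, (1.0, 1.0)))   # 45 deg to the right, ~1.41 m
```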