TY - CHAP
A1 - Dewitz, Bastian
A1 - Wiche, Roman
A1 - Geiger, Christian
A1 - Steinicke, Frank
A1 - Feitsch, Jochen
ED - Chisik, Yoram
ED - Holopainen, Jussi
ED - Khaled, Rilla
ED - Luis Silva, José
ED - Alexandra Silva, Paula
T1 - AR Sound Sandbox: A Playful Interface for Musical and Artistic Expression
T2 - Intelligent Technologies for Interactive Entertainment. 9th International Conference, INTETAIN 2017, Funchal, Portugal, June 20-22, 2017, Proceedings
Y1 - 2018
UR - https://doi.org/10.1007/978-3-319-73062-2_5
SN - 978-3-319-73061-5
U6 - https://doi.org/10.1007/978-3-319-73062-2_5
SP - 59
EP - 76
PB - Springer International Publishing
CY - Cham
ER -
TY - CHAP
A1 - Dewitz, Bastian
A1 - Ladwig, Philipp
A1 - Steinicke, Frank
A1 - Geiger, Christian
T1 - Classification of Beyond-Reality Interaction Techniques in Spatial Human-Computer Interaction
T2 - Proceedings of the Symposium on Spatial User Interaction, SUI '18: Symposium on Spatial User Interaction, Berlin, 13.10.2018-14.10.2018
Y1 - 2018
UR - https://dl.acm.org/doi/proceedings/10.1145/3267782
SN - 9781450357081
U6 - https://doi.org/10.1145/3267782.3274680
PB - ACM
CY - New York
ER -
TY - CHAP
A1 - Deppe, Robert
A1 - Nemitz, Oliver
A1 - Herder, Jens
ED - Herder, Jens
ED - Geiger, Christian
ED - Dörner, Ralf
ED - Grimm, Paul
T1 - Augmented reality for supporting manual non-destructive ultrasonic testing of metal pipes and plates
T2 - Workshop Proceedings / Tagungsband: Virtuelle und Erweiterte Realität – 15. Workshop der GI-Fachgruppe VR/AR
N2 - We describe an application of augmented reality technology for non-destructive testing of products in the metal industry. The prototype is built with hardware and software usually employed in the gaming industry and delivers positions for creating ultrasonic material scans (C-scans). Using a stereo camera in combination with an HMD enables real-time visualisation of the probe's path as well as the setting of virtual markers on the specimen. As part of the implementation, the downhill simplex optimization algorithm is used to fit the specimen to a cloud of recorded surface points. The accuracy is statistically tested and evaluated, with the result that the tracking system is accurate to about 1-2 millimeters under well set-up conditions. This paper is of interest not only for research institutes in the metal industry, but also for any field of work in which enhancement with augmented reality is possible and precise tracking is necessary.
KW - Nondestructive Testing
KW - Ultrasonic
KW - Augmented Reality
KW - Tracking
KW - Stereo camera
KW - M
Y1 - 2018
UR - http://vsvr.medien.hs-duesseldorf.de/publications/gi-vrar2018-ar-in-ndt/
SN - 978-3-8440-6215-1
U6 - https://doi.org/10.2370/9783844062151
SP - 45
EP - 52
PB - Shaker Verlag
CY - Herzogenrath
ER -
TY - CHAP
A1 - Davin, Till
A1 - Herder, Jens
ED - Weier, Martin
ED - Bues, Matthias
ED - Wechner, Reto
T1 - Real-Time Relighting of Video Streams for Augmented Virtuality Scenes
T2 - GI VR / AR Workshop. Gesellschaft für Informatik e.V.
KW - Virtual (TV) Studio
Y1 - 2021
U6 - https://doi.org/10.18420/vrar2021_6
PB - Gesellschaft für Informatik e.V. (GI)
CY - Bonn
ER -
TY - CHAP
A1 - Daemen, Jeff
A1 - Herder, Jens
A1 - Koch, Cornelius
A1 - Ladwig, Philipp
A1 - Wiche, Roman
A1 - Wilgen, Kai
T1 - Semi-Automatic Camera and Switcher Control for Live Broadcast
T2 - TVX '16 Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video, Chicago, Illinois, USA — June 22 - 24, 2016
N2 - Live video broadcasting requires a multitude of professional expertise to enable multi-camera productions. Robotic systems allow the automation of common and repeated tracking shots. However, predefined camera shots do not allow quick adjustments when required due to unpredictable events. We introduce a modular automated robotic camera control and video switch system, based on fundamental cinematographic rules. The actors' positions are provided by a markerless tracking system. In addition, sound levels of actors' lavalier microphones are used to analyse the current scene. An expert system determines appropriate camera angles and decides when to switch from one camera to another. A test production was conducted to observe the developed prototype in a live broadcast scenario and served as a video demonstration for an evaluation.
KW - automated robotic camera system
KW - actor tracking
KW - switcher control
KW - scene analysis
KW - film rules
KW - automated shot control
KW - Virtual (TV) Studio
Y1 - 2016
UR - http://vsvr.medien.hs-duesseldorf.de/publications/tvx2016-rob-abstract.html
SN - 978-1-4503-4067-0
U6 - https://doi.org/10.1145/2932206.2933559
SP - 129
EP - 134
PB - ACM
CY - New York
ER -
TY - JOUR
A1 - Daemen, Jeff
A1 - Herder, Jens
A1 - Koch, Cornelius
A1 - Ladwig, Philipp
A1 - Wiche, Roman
A1 - Wilgen, Kai
T1 - Halbautomatische Steuerung von Kamera und Bildmischer bei Live-Übertragungen
JF - Fachzeitschrift für Fernsehen, Film und Elektronische Medien
N2 - Live-Video-Broadcasting mit mehreren Kameras erfordert eine Vielzahl von Fachkenntnissen. Robotersysteme ermöglichen zwar die Automatisierung von gängigen und wiederholten Tracking-Aufnahmen, diese erlauben jedoch keine kurzfristigen Anpassungen aufgrund von unvorhersehbaren Ereignissen. In diesem Beitrag wird ein modulares, automatisiertes Kamerasteuerungs- und Bildschnitt-System eingeführt, das auf grundlegenden kinematografischen Regeln basiert. Die Positionen der Akteure werden durch ein markerloses Tracking-System bereitgestellt. Darüber hinaus werden Tonpegel der Lavaliermikrofone der Akteure zur Analyse der aktuellen Szene verwendet. Ein Expertensystem ermittelt geeignete Kamerawinkel und entscheidet, wann von einer Kamera auf eine andere umgeschaltet werden soll. Eine Testproduktion wurde durchgeführt, um den entwickelten Prototyp in einem Live-Broadcast-Szenario zu beobachten und diente als Videodemonstration für eine Evaluierung.
KW - Halbautomatische Roboterkamerasteuerung
KW - Verfolgen von Darstellern
KW - Bildmischer
KW - Szenenanalyse
KW - Filmregeln
KW - Automatische Bildauswahl
KW - Virtual (TV) Studio
Y1 - 2017
N1 - Der Artikel ist eine Übersetzung von dem Konferenzbeitrag "Semi-Automatic Camera and Switcher Control for Live Broadcast", International Conference on Interactive Experiences for Television and Online Video, TVX'2016, Chicago, IL, USA, ACM, DOI=10.1145/2932206.2933559, June 22-24, 2016.
IS - 11
SP - 501
EP - 505
PB - Schiele & Schön
ER -
TY - CHAP
A1 - Daemen, Jeff
A1 - Haufs-Brusberg, Peter
A1 - Herder, Jens
T1 - Markerless Actor Tracking for Virtual (TV) Studio Applications
T2 - 2013 International Joint Conference on Awareness Science and Technology & Ubi-Media Computing (iCAST 2013 & UMEDIA 2013)
N2 - Virtual (TV) studios gain much more acceptance through improvements in computer graphics and camera tracking. Still, commercial studios cannot have full interaction between actors and the virtual scene, because the actors' data are not completely available in digital form and the feedback for the actors is still not sufficient. Markerless full-body tracking might revolutionize virtual studio technology, as it allows better interaction between the real and the virtual world. This article reports on using markerless actor tracking in a virtual studio with a tracking volume of nearly 40 cubic meters, enabling up to three actors within the green box. The tracking is used for resolving the occlusion between virtual objects and actors, so that the renderer can automatically output a mask for virtual objects in the foreground in case the actor is behind them. It is also used for triggering functions scripted within the renderer engine, which are attached to virtual objects, starting any kind of action (e.g., animation). Last but not least, the system is used for controlling avatars within the virtual set. All tracking and rendering is done within a studio frame rate of 50 Hz with about 3 frames delay. The markerless actor tracking within virtual studios is evaluated by experts using an interview approach. The statistical evaluation is based on a questionnaire.
KW - Cameras
KW - Tracking
KW - TV
KW - Skeleton
KW - Engines
KW - Delays
KW - Production
KW - VSVR
KW - Virtual (TV) Studio
Y1 - 2013
UR - https://ieeexplore.ieee.org/document/6765544
SN - 978-1-4799-2364-9
U6 - https://doi.org/10.1109/ICAwST.2013.6765544
SP - 790
EP - 795
PB - IEEE
CY - Aizu-Wakamatsu
ER -
TY - CHAP
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Martens, William
T1 - Panel: Eartop computing and cyberspatial audio technology
T2 - IEEE-VR2001: IEEE Virtual Reality
KW - VSVR
Y1 - 2001
SN - 0-7695-0948-7
SP - 322
EP - 323
PB - IEEE
CY - Yokohama
ER -
TY - JOUR
A1 - Cohen, Michael
A1 - Herder, Jens
A1 - Martens, William L.
T1 - Cyberspatial Audio Technology
JF - The Journal of the Acoustical Society of Japan (E)
N2 - Cyberspatial audio applications are distinguished from the broad range of spatial audio applications in a number of important ways that help to focus this review. Most significant is that cyberspatial audio is most often designed to be responsive to user inputs. In contrast to non-interactive auditory displays, cyberspatial auditory displays typically allow active exploration of the virtual environment in which users find themselves. Thus, at least some portion of the audio presented in a cyberspatial environment must be selected, processed, or otherwise rendered with minimum delay relative to user input. Besides the technological demands associated with realtime delivery of spatialized sound, the type and quality of auditory experiences supported are also very different from those associated with displays that support stationary sound localization.
Y1 - 1999
U6 - https://doi.org/10.1250/ast.20.389
N1 - available in Japanese as well - Acoustical Society of Japan, Vol. 55, No. 10, pp. 730-731
VL - 20
IS - 6
SP - 389
EP - 395
ER -
TY - CHAP
A1 - Cohen, Michael
A1 - Herder, Jens
ED - Göbel, Martin
ED - Landauer, Jürgen
ED - Lang, Ulrich
ED - Wapler, Matthias
T1 - Symbolic representations of exclude and include for audio sources and sinks: Figurative suggestions of mute/solo & cue and deafen/confide & harken
T2 - Virtual Environments ’98, Proceedings of the Eurographics Workshop
Y1 - 1998
SN - 3-211-83233-5
U6 - https://doi.org/10.1007/978-3-7091-7519-4_23
SP - 235
EP - 242
PB - Springer-Verlag
CY - Stuttgart
ER -