TY - CHAP
A1 - Myszkowski, Karol
A1 - Okuneva, Galina
A1 - Herder, Jens
A1 - Kunii, Tosiyasu L.
A1 - Ibusuki, Masumi
T1 - Visual Simulation of the Chewing Process for Dentistry
T2 - Visualization & Modelling, International Conf., 5-7 December, 1995
N2 - CAD/CAM techniques are increasingly used in dentistry for the design and fabrication of teeth restorations. Important concerns are the correction of articulation problems that existed before treatment and the prevention of treatment-generated problems. These require interactive evaluation of the occlusal surfaces of teeth during mastication. Traditional techniques based on the use of casts with mechanical articulators require manual adjustment of occlusal surfaces, which becomes impractical when hard restoration materials like porcelain are used; they are also time- and labor-consuming and provide little visual information. We present new visual tools and a related user interface for global articulation simulation, developed for the Intelligent Dental Care System project. The aim of the simulation is visual representation of characteristics relevant to the chewing process. The simulation is based on the construction of distance maps, which are visual representations of the distributions of the distances of points in a tooth to the opposite jaw. We use rasterizing graphics hardware for fast calculation of the distance maps. Distance maps are used for collision detection and for the derivation of various characteristics showing the distribution of load on the teeth and the chewing capability of the teeth. Such characteristics can be calculated for particular positions of the jaws; cumulative characteristics are used to describe the properties of jaw movement. This information may be used for interactive design of the occlusal surfaces of restorations and for jaw articulation diagnosis. We also demonstrate elements of a user interface that exploit metaphors familiar to dentists from everyday practice.
Y1 - 1995
CY - Leeds
ER -
TY - CHAP
A1 - Myszkowski, Karol
A1 - Herder, Jens
A1 - Kunii, Tosiyasu L.
A1 - Ibusuki, Masumi
T1 - Visualization and analysis of occlusion for human jaws using a "functionally generated path"
T2 - IS&T/SPIE Symp. on Electronic Imaging, Visual Data Exploration and Analysis III
N2 - Dynamic characteristics of occlusion during lower jaw motion are useful in the diagnosis of jaw articulation problems and in computer-aided design/manufacture of teeth restorations. The Functionally Generated Path (FGP), produced as a surface which envelops the actual occlusal surface of the moving opponent jaw, can be used for compact representation of dynamic occlusal relations. In traditional dentistry FGP is recorded as a bite impression in a patient’s mouth. We propose an efficient computerized technique for FGP reconstruction and validate it through implementation and testing. The distance maps between occlusal surfaces of jaws, calculated for multiple projection directions and accumulated for mandibular motion, provide information for FGP computation. Rasterizing graphics hardware is used for fast calculation of the distance maps. Real-world data are used: the scanned shape of teeth and the measured motion of the lower jaw. We show applications of FGP to analysis of the occlusion relations and occlusal surface design for restorations.
Y1 - 1996
U6 - https://doi.org/10.1117/12.234684
SP - 360
EP - 367
PB - The International Society for Optical Engineering
CY - San Jose
ER -
TY - CHAP
A1 - Mayer, Christian
A1 - Pogscheba, Patrick
A1 - Marinos, Dionysios
A1 - Wöldecke, Björn
A1 - Geiger, Christian
ED - Chisik, Yoram
ED - Geiger, Christian
ED - Hasegawa, Shoichi
T1 - An audio-visual music installation with dichotomous user interactions
T2 - Proceedings of the 11th Conference on Advances in Computer Entertainment Technology, Funchal, 11.11.2014-14.11.2014
Y1 - 2014
UR - https://dl.acm.org/doi/proceedings/10.1145/2663806
SN - 9781450329453
U6 - https://doi.org/10.1145/2663806.2663842
SP - 1
EP - 6
PB - ACM
CY - New York
ER -
TY - CHAP
A1 - Martens, William L.
A1 - Herder, Jens
A1 - Shiba, Yoshiki
T1 - A filtering model for efficient rendering of the spatial image of an occluded virtual sound source
T2 - 137th Regular Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association
N2 - Rendering realistic spatial sound imagery for complex virtual environments must take into account the effects of obstructions such as reflectors and occluders. It is relatively well understood how to calculate the acoustical consequence that would be observed at a given observation point when an acoustically opaque object occludes a sound source. But the interference patterns generated by occluders of various geometries and orientations relative to the virtual source and receiver are computationally intense if accurate results are required. In many applications, however, it is sufficient to create a spatial image that is recognizable by the human listener as the sound of an occluded source. In the interest of improving audio rendering efficiency, a simplified filtering model was developed and its audio output submitted to psychophysical evaluation. Two perceptually salient components of occluder acoustics were identified that could be directly related to the geometry and orientation of a simple occluder. Actual occluder impulse responses measured in an anechoic chamber resembled the responses of a model incorporating only a variable duration delay line and a low-pass filter with variable cutoff frequency.
KW - audio rendering
KW - first-order reflection
KW - human perception
KW - occluder
Y1 - 1999
PB - Acoustical Society of America, European Acoustics Association
CY - Berlin
ER -
TY - CHAP
A1 - Martens, William L.
A1 - Herder, Jens
T1 - Perceptual criteria for eliminating reflectors and occluders from the rendering of environmental sound
T2 - 137th Regular Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association
N2 - Given limited computational resources available for the rendering of spatial sound imagery, we seek to determine effective means for choosing what components of the rendering will provide the most audible differences in the results. Rather than begin with an analytic approach that attempts to predict audible differences on the basis of objective parameters, we chose to begin with subjective tests of how audibly different the rendering result may be heard to be when that result includes two types of sound obstruction: reflectors and occluders. Single-channel recordings of 90 short speech sounds were made in an anechoic chamber in the presence and absence of these two types of obstructions, and as the angle of those obstructions varied over a 90 degree range.
These recordings were reproduced over a single loudspeaker in that anechoic chamber, and listeners were asked to rate how confident they were that the recording of each of these 90 stimuli included an obstruction. These confidence ratings can be used as an integral component in the evaluation function used to determine which reflectors and occluders are most important for rendering.
KW - audio rendering
KW - first-order reflection
KW - human perception
KW - level of detail
KW - occluder
KW - sound spatialization resource management
Y1 - 1999
PB - Acoustical Society of America, European Acoustics Association
CY - Berlin
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Wöldecke, Björn
A1 - Geiger, Christian
A1 - Schwirten, Tobias
ED - Romão, Teresa
ED - Correia, Nuno
ED - Inami, Masahiko
ED - Kato, Hirokazu
ED - Prada, Rui
ED - Terada, Tsutomu
ED - Dias, Eduardo
ED - Chambel, Teresa
T1 - Design of a touchless multipoint musical interface in a virtual studio environment
T2 - Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology - ACE '11, 08.11.2011 - 11.11.2011, Lisbon
Y1 - 2011
UR - http://dl.acm.org/citation.cfm?doid=2071423
SN - 9781450308274
U6 - https://doi.org/10.1145/2071423.2071464
PB - ACM Press
CY - New York
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Wöldecke, Björn
A1 - Geiger, Christian
ED - Richir, Simon
T1 - Prototyping natural interactions in virtual studio environments by demonstration
T2 - Proceedings of the Virtual Reality International Conference: Laval Virtual, 20.03.2013-22.03.2013, Laval
Y1 - 2013
UR - https://dl.acm.org/doi/proceedings/10.1145/2466816
SN - 9781450318754
U6 - https://doi.org/10.1145/2466816.2466819
SP - 1
EP - 8
PB - ACM
CY - New York
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Woldecke, Bjorn
A1 - Geiger, Christian
T1 - Poster: Prototyping natural interactions in virtual studio environments by demonstration - Combining spatial mapping with gesture following
T2 - 2013 IEEE Symposium on 3D User Interfaces (3DUI), 16.03.2013 - 17.03.2013, Orlando
KW - Poster
Y1 - 2013
UR - http://ieeexplore.ieee.org/document/6550224/
SN - 978-1-4673-6098-2
U6 - https://doi.org/10.1109/3DUI.2013.6550224
SP - 153
EP - 154
PB - IEEE
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Geiger, Christian
A1 - Schwirten, Tobias
A1 - Göbel, Sebastian
ED - Krüger, Antonio
ED - Schöning, Johannes
ED - Wigdor, Daniel
ED - Haller, Michael
T1 - Multitouch navigation in zoomable user interfaces for large diagrams
T2 - ACM International Conference on Interactive Tabletops and Surfaces - ITS '10, 07.11.2010 - 10.11.2010, Saarbrücken
Y1 - 2010
UR - http://portal.acm.org/citation.cfm?doid=1936652
SN - 9781450303996
U6 - https://doi.org/10.1145/1936652.1936713
PB - ACM Press
CY - New York
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Geiger, Christian
A1 - Herder, Jens
T1 - Large-Area Moderator Tracking and Demonstrational Configuration of Position Based Interactions for Virtual Studio
T2 - EuroITV '12 Proceedings of the 10th European Conference on Interactive TV and Video
N2 - In this paper we introduce a system for tracking persons walking or standing on a large planar surface and for using the acquired data to easily configure position-based interactions for virtual studio productions. The tracking component of the system, radarTRACK, is based on a laser scanner device capable of delivering interaction points on a large configurable plane.
By using the device on the floor it is possible to use the delivered data to detect feet positions and derive the position and orientation of one or more users in real time. The second component of the system, named OscCalibrator, allows for the easy creation of multidimensional linear mappings between input and output parameters and the routing of OSC messages within a single modular design environment. We demonstrate the use of our system to flexibly create position-based interactions in a virtual studio environment.
KW - body tracking
KW - OSC mapping
KW - virtual studio interaction
KW - measurement
KW - design
KW - reliability
KW - experimentation
KW - human factors
KW - VSVR
KW - Virtual (TV) Studio
Y1 - 2012
UR - https://dl.acm.org/citation.cfm?id=2325639
SN - 978-1-4503-1107-6
U6 - https://doi.org/10.1145/2325616.2325639
SP - 105
EP - 114
PB - ACM
CY - New York
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Geiger, Christian
ED - Grimshaw, Mark
ED - Walther-Hansen, Mads
T1 - Facilitating the creation of natural interactions for live audiovisual performances
T2 - Proceedings of the 9th Audio Mostly on A Conference on Interaction With Sound - AM '14, Aalborg, 01.10.2014 - 03.10.2014
Y1 - 2014
UR - http://dl.acm.org/citation.cfm?doid=2636879
SN - 9781450330329
U6 - https://doi.org/10.1145/2636879.2636893
SP - 1
EP - 6
PB - ACM Press
CY - New York
ER -
TY - CHAP
A1 - Marinos, Dionysios
A1 - Geiger, Christian
T1 - An immersive multiuser music generation interface
T2 - Proceedings of the International Conference on Advances in Computer Enterntainment Technology - ACE '09, 29.10.2009 - 31.10.2009, Athens
Y1 - 2009
UR - http://portal.acm.org/citation.cfm?doid=1690388
SN - 9781605588643
U6 - https://doi.org/10.1145/1690388.1690486
PB - ACM Press
CY - New York
ER -
TY - CHAP
A1 - Ludwig, Philipp
A1 - Büchel, Joachim
A1 - Herder, Jens
A1 - Vonolfen, Wolfgang
T1 - InEarGuide - A Navigation and Interaction Feedback System using In Ear Headphones for Virtual TV Studio Productions
T2 - 9. Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR
N2 - This paper presents an approach to integrate non-visual user feedback in today's virtual TV studio productions. Since recent studies showed that systems providing vibro-tactile feedback are not sufficient for replacing the common visual feedback, we developed an audio-based solution using an in-ear headphone system, enabling a talent to move, avoid and point to virtual objects in a blue or green box. The system consists of an optical head tracking system, a wireless in-ear monitor system and a workstation, which performs all application and audio processing. Using head-related transfer functions, the talent gets directional and distance cues. Since past research showed that generating reflections of the sounds and simulating the acoustics of the virtual room helps the listener to perceive the acoustic feedback, we included this technique as well. In a user study with 15 participants the performance of the system was evaluated.
KW - Navigation
KW - virtual TV
KW - erweiterte Realität
KW - VSVR
KW - Virtual (TV) Studio
Y1 - 2012
CY - Düsseldorf
ER -
TY - CHAP
A1 - Lehmann, Anke
A1 - Geiger, Christian
A1 - Woldecke, Bjorn
A1 - Stocklein, Jorg
T1 - Poster: Design and evaluation of 3D content with wind output
T2 - 2009 IEEE Symposium on 3D User Interfaces, 14.03.2009 - 15.03.2009, Lafayette
KW - Poster
Y1 - 2009
UR - http://ieeexplore.ieee.org/document/4811231/
SN - 978-1-4244-3965-2
U6 - https://doi.org/10.1109/3DUI.2009.4811231
SP - 151
EP - 152
PB - IEEE
ER -
TY - CHAP
A1 - Ladwig, Philipp
A1 - Pech, Alexander
A1 - Dorner, Ralf
A1 - Geiger, Christian
T1 - Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence
T2 - 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Utrecht, 14.12.2020 - 18.12.2020
KW - Mirevi
Y1 - 2020
UR - https://doi.org/10.1109/AIVR50618.2020.00025
SN - 978-1-7281-7463-1
U6 - https://doi.org/10.1109/AIVR50618.2020.00025
SP - 82
EP - 90
PB - IEEE
ER -
TY - CHAP
A1 - Ladwig, Philipp
A1 - Herder, Jens
A1 - Geiger, Christian
T1 - Towards Precise, Fast and Comfortable Immersive Polygon Mesh Modelling: Capitalising the Results of Past Research and Analysing the Needs of Professionals
T2 - ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
N2 - More than three decades of ongoing research in immersive modelling has revealed many advantages of creating objects in virtual environments. Even though there are many benefits, the potential of immersive modelling has only been partly exploited due to unresolved problems such as ergonomic problems, numerous challenges with user interaction and the inability to perform exact, fast and progressive refinements. This paper explores past research, shows alternative approaches and proposes novel interaction tools for pending problems. An immersive modelling application for polygon meshes is created from scratch and tested by professional users of desktop modelling tools, such as Autodesk Maya, in order to assess the efficiency, comfort and speed of the proposed application with direct comparison to professional desktop modelling tools.
KW - Human-centered computing
KW - Virtual reality
KW - Interaction techniques
KW - Interaction design process and methods
KW - Information systems
Y1 - 2017
UR - http://vsvr.medien.hs-duesseldorf.de/publications/egve2017-mod-abstract.html
U6 - https://doi.org/10.2312/egve.20171360
SP - 22
EP - 24
PB - The Eurographics Association
ER -
TY - CHAP
A1 - Ladwig, Philipp
A1 - Geiger, Christian
ED - Auer, Michael E.
ED - Langmann, Reinhard
T1 - A Literature Review on Collaboration in Mixed Reality
T2 - Smart Industry & Smart Education. Proceedings of the 15th International Conference on Remote Engineering and Virtual Instrumentation
Y1 - 2019
UR - https://doi.org/10.1007/978-3-319-95678-7_65
SN - 978-3-319-95677-0
U6 - https://doi.org/10.1007/978-3-319-95678-7_65
VL - Lecture Notes in Networks and Systems
SP - 591
EP - 600
PB - Springer International Publishing
CY - Cham
ER -
TY - CHAP
A1 - Ladwig, Philipp
A1 - Evers, Kester
A1 - Jansen, Eric J.
A1 - Fischer, Ben
A1 - Nowottnik, David
A1 - Geiger, Christian
T1 - MotionHub: Middleware for Unification of Multiple Body Tracking Systems
T2 - Proceedings of the 7th International Conference on Movement and Computing: MOCO '20: 7th International Conference on Movement and Computing, Jersey City, 15.07.2020-17.07.2020
Y1 - 2020
UR - https://dl.acm.org/doi/proceedings/10.1145/3401956
SN - 9781450375054
U6 - https://doi.org/10.1145/3401956.3404185
SP - 1
EP - 8
PB - ACM
CY - New York
ER -
TY - JOUR
A1 - Kunii, Tosiyasu L.
A1 - Herder, Jens
A1 - Myszkowski, Karol
A1 - Okunev, Oleg
A1 - Okuneva, Galina
A1 - Ibusuki, Masumi
T1 - Articulation Simulation for an Intelligent Dental Care System
JF - Displays
N2 - CAD/CAM techniques are used increasingly in dentistry for design and fabrication of teeth restorations. An important issue is preserving occlusal contacts of teeth after restoration. Traditional techniques based on the use of casts with mechanical articulators require manual adjustment of occlusal surfaces, which becomes impractical when hard restoration materials like porcelain are used; they are also time- and labor-consuming. Most existing computer systems ignore such an articulation check completely, or perform the check only at the level of a tooth and its immediate neighbors. We present a new mathematical model and a related user interface for global articulation simulation, developed for the Intelligent Dental Care System project. The aim of the simulation is to eliminate the use of mechanical articulators and manual adjustment in the process of designing dental restorations and in articulation diagnosis. The mathematical model is based upon differential topological modeling of the jaws considered as a mechanical system. The user interface exploits metaphors that are familiar to dentists from everyday practice. A new input device designed specifically for use with articulation simulation is proposed.
Y1 - 1994
VL - 15
IS - 3
SP - 181
EP - 188
ER -
TY - CHAP
A1 - Klapdohr, Monika
A1 - Wöldecke, Björn
A1 - Marinos, Dionysios
A1 - Herder, Jens
A1 - Geiger, Christian
A1 - Vonolfen, Wolfgang
T1 - Vibrotactile Pitfalls: Arm Guidance for Moderators in Virtual TV Studios
T2 - HC '10 Proceedings of the 13th International Conference on Humans and Computers
N2 - For this study, an experimental vibrotactile feedback system was developed to help actors with the task of moving their arm to a certain place in a virtual TV studio under live conditions. Our intention is to improve interaction with virtual objects in a virtual set, which are usually not directly visible to the actor, but only on distant displays. Vibrotactile feedback might improve the appearance on TV because an actor is able to look in any desired direction (camera or virtual object) or to read text on a teleprompter while interacting with a virtual object. Visual feedback in a virtual studio lacks spatial relation to the actor, which impedes the adjustment of the desired interaction. The five tactors of the implemented system, which are mounted on the tracked arm, give additional information like collision, navigation and activation. The user study for the developed system shows that the duration for reaching a certain target is much longer when no visual feedback is given, but the accuracy is similar. In this study, subjects reported that an activation signal indicating the arrival at the target of a drag & drop task was helpful. In this paper, we discuss the problems we encountered while developing such a vibrotactile display.
Keeping these pitfalls in mind could lead to better feedback systems for actors in virtual studio environments.
KW - vibrotactile feedback
KW - Virtual (TV) Studio
KW - augmented reality
KW - FHD
KW - VSVR
Y1 - 2010
UR - https://dl.acm.org/citation.cfm?id=1994506
SP - 72
EP - 80
PB - University of Aizu Press
CY - Aizu-Wakamatsu
ER -