This article presents a concept for an online lecture offered within the Virtuelle Hochschule Bayern (Virtual University of Bavaria).
First, the subject matter to be taught is briefly outlined.
After this introduction, we discuss how the content is transferred into a form appropriate to the Internet as a new medium.
This concerns above all the selected techniques employed to convey the material.
Particular emphasis is placed on the possibility of deepening the acquired knowledge experimentally through interactivity, thereby establishing a connection between what is learned in theory and what is experienced in practice.
Furthermore, a variety of communication facilities is presented, since communication is an important social component of the learning process and is often decisive for its success.
We present an approach for nonlinear optimization of the parameters of an endoscopic camera mounted on a surgical robot. The goal is to generate a depth map for each image in order to enhance the quality of medical light fields.
The pose information provided by the robot is used as an initialization, although especially the orientation is inaccurate. Refinement of intrinsic and extrinsic camera parameters is performed by minimizing the back-projection error of 3-D points that are reconstructed by triangulation from image features tracked over an image sequence.
Optimization of the camera parameters enhances rendering quality in two ways: more accurate parameters lead to better interpolation as well as to better depth maps for approximating the scene geometry.
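To make the refinement step concrete, here is a minimal sketch (not the paper's implementation) of minimizing the back-projection error with Levenberg-Marquardt via scipy.optimize.least_squares. The toy scene, the noise levels, and the choice of a Rodrigues vector for the rotation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle (Rodrigues) vector."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(pts, rvec, tvec, fx, fy, cx, cy):
    """Pinhole projection of 3-D points for a given pose and intrinsics."""
    cam = pts @ rodrigues(rvec).T + tvec
    return np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))

def residuals(params, pts, obs):
    """Back-projection error; pose (6) and intrinsics (4) in one vector."""
    return (project(pts, params[:3], params[3:6], *params[6:]) - obs).ravel()

# Synthetic scene: ground-truth parameters and a noisy initialization,
# with the orientation perturbed most (as with the robot-supplied pose).
rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 4], [1, 1, 6], (50, 3))   # triangulated 3-D points
true = np.array([0.1, -0.2, 0.05, 0.02, 0.01, 0.1, 800.0, 800.0, 320.0, 240.0])
obs = project(pts, true[:3], true[3:6], *true[6:])   # tracked feature positions
init = true + rng.normal(0.0, [0.05] * 3 + [0.005] * 3 + [20.0] * 4)

fit = least_squares(residuals, init, args=(pts, obs), method='lm')
print('RMS back-projection error:', np.sqrt(np.mean(fit.fun ** 2)))
```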
This work presents a technique for computing dense disparity maps from a binocular stereo camera system. The methods are applied in an Augmented Reality setting for combining real and virtual worlds with proper occlusions. The proposed stereo correspondence technique is based on area matching and facilitates an efficient strategy by using the concept of a three-dimensional similarity accumulator, whereby occlusions are detected and object boundaries are extracted correctly. The main contribution of this paper is the way we fill the accumulator: we take absolute differences of the images and compute a mean filter on these difference images. This is where the main advantages of the accumulator approach can be exploited, since all entries can be computed in parallel and thus extremely efficiently. Additionally, we perform an asymmetric correction step and a post-processing of the disparity maps that preserves object edges.
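As an illustration of the accumulator idea, the following sketch (a simplification, not the authors' implementation) computes one absolute-difference image per disparity hypothesis, mean-filters it, and fills one slice of a 3-D cost accumulator; the window size and disparity range are arbitrary choices, and the asymmetric correction and edge-preserving post-processing are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_map(left, right, max_disp=32, win=7):
    """Winner-take-all disparity from a 3-D accumulator of mean-filtered
    absolute differences (one difference image per disparity hypothesis)."""
    h, w = left.shape
    acc = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # absolute difference between left image and right image shifted by d
        diff = np.abs(left[:, d:] - right[:, :w - d or None])
        # the mean filter aggregates the matching cost over a window; every
        # accumulator slice is independent, so this loop could run in parallel
        acc[d, :, d:] = uniform_filter(diff.astype(np.float64), size=win)
    return np.argmin(acc, axis=0)   # disparity with the lowest aggregated cost

left = np.random.default_rng(1).random((120, 160))
right = np.roll(left, -5, axis=1)   # synthetic 5-pixel horizontal shift
disp = disparity_map(left, right)
print(disp[60, 80])                 # expected to be close to 5
```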
In this paper we address the problem of using quaternions in unconstrained nonlinear optimization of 3-D rotations. Quaternions representing rotations have four elements but only three degrees of freedom, since they must be of norm one.
This constraint has to be taken into account when applying e.g. the Levenberg-Marquardt algorithm, a method for unconstrained nonlinear optimization widely used in computer vision. We propose an easy to use method for achieving this.
Experiments using our parametrization in photogrammetric bundle adjustment are presented at the end of the paper.
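One widely used way to realize such a constraint-free parametrization is a multiplicative update with a three-element increment; the sketch below illustrates this general idea (it is not necessarily the exact parametrization proposed in the paper). The optimizer operates on the unconstrained 3-vector, while the quaternion itself stays on the unit sphere.

```python
import numpy as np

def qmult(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def update(q, delta):
    """Apply an unconstrained 3-vector increment to a unit quaternion."""
    dq = np.concatenate(([1.0], 0.5 * delta))   # small-angle quaternion
    q = qmult(q, dq)
    return q / np.linalg.norm(q)                # renormalize to unit length

q = np.array([1.0, 0.0, 0.0, 0.0])              # identity rotation
q = update(q, np.array([0.01, -0.02, 0.005]))   # one optimizer step
print(q, np.linalg.norm(q))                     # norm stays exactly 1
```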
We describe an Augmented Reality system using the corners of a color cube for camera calibration. In the augmented image the cube is replaced by a computer generated virtual object.
The cube is localized in an image by the CSC color segmentation algorithm. The camera projection matrix is estimated with a linear method that is followed by a nonlinear refinement step.
Because of possible misclassifications of the segmented color regions and the minimum number of point correspondences used for calibration, the estimated pose of the cube may be very erroneous for some frames; therefore we perform outlier detection and treatment in order to render the virtual object in an acceptable manner.
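For illustration, here is a minimal sketch of the linear calibration step (standard DLT from 3-D/2-D correspondences such as cube corners, assuming at least six non-coplanar points); the nonlinear refinement and the outlier treatment described above are omitted.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Linear (DLT) estimate of a 3x4 projection matrix from n >= 6
    3-D/2-D point correspondences."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P = [Xw, Yw, Zw, 1.0]
        A.append([*P, 0, 0, 0, 0, *(-u * np.array(P))])
        A.append([0, 0, 0, 0, *P, *(-v * np.array(P))])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)   # right singular vector of smallest value

# Synthetic check: project known cube corners with a known matrix.
P_true = np.hstack([np.diag([800.0, 800.0, 1.0]), [[320.0], [240.0], [5.0]]])
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
h = corners @ P_true[:, :3].T + P_true[:, 3]
uv = h[:, :2] / h[:, 2:]
P_est = dlt_projection_matrix(corners, uv)
print(P_est / P_est[2, 3] * P_true[2, 3])   # matches P_true up to scale
```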
We present an Augmented Reality (AR) system that uses a colored cube for camera calibration; the cube additionally serves as a placeholder for a virtual object.
To detect the cube in a scene, color image processing methods are used, namely the Color Structure Code (CSC) and a classification of the resulting regions by color. A hierarchical procedure is employed to speed up the segmentation.
The typical target environments for service robots, e.g. hospitals or retirement homes, place very high demands on the human-machine interface.
These requirements generally exceed the capabilities of standard sensors, such as ultrasonic or infrared sensors, so complementary techniques have to be employed.
From a pattern recognition point of view, the use of computer vision and natural language dialogue is of particular interest. This contribution introduces the mobile system MOBSY, a fully integrated autonomous mobile service robot.
It serves as an automatic dialogue-based reception service for visitors to our institute.
MOBSY combines a wide variety of methods from very different research areas in one self-contained system. The computer vision methods employed range from object classification over visual self-localization and recalibration to multi-ocular object tracking.
The dialogue component comprises methods for speech recognition, speech understanding, and answer generation. This contribution describes the task to be accomplished and the individual techniques.
MOBSY is a fully integrated autonomous mobile service robot system.
It acts as an automatic dialogue-based receptionist for visitors to our institute. MOBSY incorporates many techniques from different research areas into one working stand-alone system. From a pattern recognition point of view, the computer vision and dialogue aspects are of main interest.
Briefly summarized, the involved techniques range from object classification over visual self-localization and recalibration to object tracking with multiple cameras. A dialogue component has to deal with speech recognition, understanding, and answer generation. Further techniques needed are navigation, obstacle avoidance, and mechanisms to provide fault-tolerant behavior.
This contribution introduces our mobile system MOBSY. Besides the main aspects of vision and speech, we also focus on the integration aspect, both on the methodological and on the technical level. We describe the task and the involved techniques.
Finally, we discuss the experiences we gained with MOBSY during a live performance at the 25th anniversary of our institute.
Number systems and binary arithmetic
Message and information
Coding and data compression
Encryption
Switching algebra, combinational circuits, and elements of computer hardware
Computer architectures
Computer networks
Operating systems
Databases
Automata theory and formal languages
Computability and complexity
Searching and sorting
Trees and graphs
Procedural and object-oriented programming (C and Java)
Application programming for the Internet (HTML, CSS, JavaScript, and PHP)
Software engineering
We propose a reinforcement learning approach to heating control in home automation that can acquire a set of rules enabling an agent to heat a room to the desired temperature at a defined time while conserving as much energy as possible. Experimental results are presented that show the feasibility of our method.
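A minimal tabular Q-learning sketch of such a heating controller is shown below; the toy room model, the state discretization, and the reward trading comfort against energy use are illustrative assumptions, not the setup used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 30, 2           # temperature bins x {heater off, on}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration
target = 20.0                         # desired room temperature

def step(temp, action):
    """Toy room model: heating raises, ambient losses lower the temperature."""
    temp = temp + (0.5 if action else 0.0) - 0.1 * (temp - 15.0)
    reward = -abs(temp - target) - (0.2 if action else 0.0)  # comfort - energy
    return temp, reward

def bin_of(temp):
    """Discretize the temperature into one of n_states bins."""
    return int(np.clip((temp - 10.0) * 1.5, 0, n_states - 1))

temp = 15.0
for _ in range(20000):
    s = bin_of(temp)
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    temp, r = step(temp, a)
    s2 = bin_of(temp)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # TD update

print('heat on/off per temperature bin:', np.argmax(Q, axis=1))
```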
Data fusion plays a central role in more and more automotive applications, especially for driver assistance systems. On the one hand the process of data fusion combines data and information to estimate or predict states of observed objects.
On the other hand, data fusion introduces abstraction layers for data description and allows building more flexible and modular systems. The data fusion process can be divided into low-level processing (tracking and object discrimination) and high-level processing (situation assessment).
High level processing becomes more and more the focus of current research as different assistance applications will be combined into one comprehensive assistance system.
Different levels/strategies for data fusion can be distinguished: Fusion on raw data level, fusion on feature level and fusion on decision level.
All fusion strategies can be found in current driver assistance implementations.
The paper gives an overview of the different fusion strategies and shows their application in current driver assistance systems. For low-level processing, a raw data fusion approach in a stereo video system is described; as an example of feature-level fusion, the fusion of radar and camera data for tracking is explained.
As an example for a high level fusion algorithm an approach for a situation assessment based on multiple sensors is given. The paper describes practical realizations of these examples and points out their potential to further increase traffic safety with reasonably low cost for the overall system.
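The following toy sketch illustrates feature-level fusion in the spirit of the radar/camera example: two Gaussian measurements of the same object position are combined by inverse-covariance weighting. The sensor noise models are assumptions; a real system would embed this step in a tracking filter.

```python
import numpy as np

def fuse(z_radar, R_radar, z_camera, R_camera):
    """Fuse two Gaussian measurements of the same 2-D state (range, lateral)."""
    I1, I2 = np.linalg.inv(R_radar), np.linalg.inv(R_camera)
    P = np.linalg.inv(I1 + I2)                     # fused covariance
    return P @ (I1 @ z_radar + I2 @ z_camera), P   # fused estimate

z_radar = np.array([50.2, 1.8])    # radar: good range, poor lateral estimate
R_radar = np.diag([0.1, 4.0])
z_camera = np.array([48.0, 1.1])   # camera: poor range, good lateral estimate
R_camera = np.diag([9.0, 0.05])

z, P = fuse(z_radar, R_radar, z_camera, R_camera)
print('fused position:', z)        # close to radar in range, camera laterally
print('fused variances:', np.diag(P))
```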
In this paper, we present our experience in designing and teaching our first robotics course for students at primary school level.
The course was carried out over a comparatively short period of time, namely 6 weeks with 2 hours per week. In contrast to many other projects, we use robots that researchers actually used to conduct their research, and we discuss the problems these researchers faced. Thus, this is not a behavioural study but a hands-on learning experience for the students.
The aim is to highlight the development of autonomous robots and artificial intelligence as well as to promote science and robotics in schools.
This paper presents new vector quantization based methods for selecting well-suited data for hand-eye calibration from a given sequence of hand and eye movements.
Data selection can improve the accuracy of classic hand-eye calibration, and make it possible in the first place in situations where the standard approach of manually selecting positions is inconvenient or even impossible, especially when using continuously recorded data.
A variety of methods is proposed, which differ from each other in the dimensionality of the vector quantization compared to the degrees of freedom of the rotation representation, and how the rotation angle is incorporated.
The performance of the proposed vector quantization based data selection methods is evaluated using data obtained from a manually moved optical tracking system (hand) and an endoscopic camera (eye).
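To illustrate the idea, the following sketch applies k-means vector quantization to scaled rotation-axis features of relative motions and keeps the motion closest to each codebook vector. The feature choice and codebook size are assumptions; the proposed methods differ from each other in exactly these design decisions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

def axis_angle(R):
    """Rotation axis scaled by the rotation angle of a rotation matrix."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = np.linalg.norm(v)
    return np.zeros(3) if n < 1e-12 else (angle / n) * v

def random_rotation():
    """Uniform random rotation matrix from a normalized random quaternion."""
    q = rng.normal(size=4); q /= np.linalg.norm(q)
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z), 2*(x*z+w*y)],
                     [2*(x*y+w*z), 1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y), 2*(y*z+w*x), 1-2*(x*x+y*y)]])

motions = [random_rotation() for _ in range(200)]      # relative hand motions
features = np.array([axis_angle(R) for R in motions])  # 3-D feature vectors

# 10 codebook vectors; the motion closest to each centroid is selected,
# giving a well-spread subset for hand-eye calibration.
centroids, _ = kmeans2(features, 10, seed=1, minit='++')
selected = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
            for c in centroids]
print('selected motion indices:', sorted(set(selected)))
```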
This paper describes using a mobile robot, equipped with some sonar sensors and an odometer, to test navigation through the use of a cognitive map. The robot explores an office environment, computes a cognitive map, which is a network of ASRs [36, 35], and attempts to find its way home.
Ten trials were conducted and the robot found its way home each time. From four random positions in two trials, the robot estimated the home position relative to its current position reasonably accurately.
Our robot does not solve the simultaneous localization and mapping problem, and the map computed is fuzzy and inaccurate, with many of the details missing.
In each homeward journey, it computes a new cognitive map of the same part of the environment, as seen from the perspective of the homeward journey. We show how the robot uses distance information from both maps to find its way home.
When animals (including humans) first explore a new environment, what they remember is fragmentary knowledge about the places visited. Yet, they have to use such fragmentary knowledge to find their way home.
Humans naturally use more powerful heuristics, while lower animals have been shown to develop a variety of methods that tend to utilize two key pieces of information, namely distance and orientation information.
Their methods differ depending on how they sense their environment. Could a mobile robot be used to investigate the nature of such a process, commonly referred to in the psychological literature as cognitive mapping? What might be computed in the initial explorations, and how is the resulting “cognitive map” used for localization?
In this paper, we present an approach using a mobile robot to generate a “cognitive map”, the main focus being on experiments conducted in large spaces that the robot cannot apprehend at once due to the very limited range of its sensors. The robot computes a “cognitive map” and uses distance and orientation information for localization.
We present an approach for indoor mapping and localisation using sparse range data, acquired by a mobile robot equipped with sonar sensors.
The chapter consists of two main parts. First, a split and merge based method for dividing a given metric map into distinct regions is presented, thus creating a topological map in a metric framework. Spatial information extracted from this map is then used for self-localisation on the return home journey.
The robot computes local confidence maps for two simple localisation strategies based on distance and relative orientation of regions. These local maps are then fused to produce overall confidence maps.
When animals (including humans) first explore a new environment, what they remember is fragmentary knowledge about the places visited. Yet, they have to use such fragmentary knowledge to find their way home. Humans naturally use more powerful heuristics, while lower animals have been shown to develop a variety of methods that tend to utilize two key pieces of information, namely distance and orientation information.
Their methods differ depending on how they sense their environment.
Could a mobile robot be used to investigate the nature of such a process, commonly referred to in the psychological literature as cognitive mapping? What might be computed in the initial explorations, and how is the resulting “cognitive map” used to return home?
In this paper, we present a novel approach using a mobile robot to do cognitive mapping. Our robot computes a “cognitive map” and uses distance and orientation information to find its way home.
The process developed provides interesting insights into the nature of cognitive mapping and encourages us to use a mobile robot to do cognitive mapping in the future, as opposed to its popular use in robot mapping.
The main focus of this work is the development of new methods for the self-calibration of a rigid stereo camera system. However, many of the algorithms introduced here have a wider impact, particularly in robot hand-eye calibration with all its different areas of application. Stereo self-calibration refers to the computation of the intrinsic and extrinsic parameters of a stereo rig using neither a priori knowledge on the movement of the rig nor on the geometry of the observed scene.
The stereo parameters obtained by self-calibration, namely rotation and translation from left to right camera, are used for computing depth maps for both images, which are applied for rendering correctly occluded virtual objects into a real scene (Augmented Reality).
The proposed methods were evaluated on real and synthetic data and compared to algorithms from the literature. In addition to a stereo rig, an optical tracking system with a camera mounted on an endoscope was calibrated without a calibration pattern using the proposed extended hand-eye calibration algorithm.
The self-calibration methods developed in this work have a number of features, which make them easily applicable in practice: They rely on temporal feature tracking only, as this monocular tracking in a continuous image sequence is much easier than left-to-right tracking when the camera parameters are still unknown.
Intrinsic and extrinsic camera parameters are computed during the self-calibration process, i.e., no calibration pattern is required. The proposed stereo self-calibration approach can also be used for extended hand-eye calibration, where the eye poses are obtained by structure-from-motion rather than from a calibration pattern.
An inherent problem of hand-eye calibration is that it requires at least two general movements of the cameras in order to compute the rigid transformation.
If the motion is not general enough, only a part of the parameters can be obtained, which would not be sufficient for computing depth maps. Therefore, a main part of this work discusses methods for data selection that increase the robustness of hand-eye calibration. Different new approaches are shown, the most successful ones being based on vector quantization.
The data selection algorithms developed in this work can not only be used for stereo self-calibration, but also for classic robot hand-eye calibration, and they are independent of the actually used hand-eye calibration algorithm.
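For context, the sketch below shows the rotational part of classic hand-eye calibration from relative motion pairs satisfying A_i X = X B_i: the rotation axes obey a_i = R_X b_i, so R_X follows from aligning the two axis sets via SVD. This is a textbook formulation, not the extended algorithm developed in this work, and it illustrates why at least two motions with non-parallel rotation axes are needed.

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis of a rotation matrix (assumes a nonzero angle)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def solve_rotation(hand_rots, eye_rots):
    """Least-squares alignment (Kabsch/SVD) of eye axes onto hand axes."""
    a = np.array([rotation_axis(R) for R in hand_rots])
    b = np.array([rotation_axis(R) for R in eye_rots])
    U, _, Vt = np.linalg.svd(a.T @ b)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])   # enforce det = +1
    return U @ D @ Vt

# Synthetic check: build hand motions from eye motions and a known R_X.
rng = np.random.default_rng(2)
def rand_rot():
    q = rng.normal(size=4); q /= np.linalg.norm(q)
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z), 2*(x*z+w*y)],
                     [2*(x*y+w*z), 1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y), 2*(y*z+w*x), 1-2*(x*x+y*y)]])

R_X = rand_rot()
B = [rand_rot() for _ in range(5)]          # relative eye rotations
A = [R_X @ Bi @ R_X.T for Bi in B]          # corresponding hand rotations
print(np.allclose(solve_rotation(A, B), R_X, atol=1e-8))
```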
We present an approach for indoor mapping and localization with a mobile robot using sparse range data, without the need for solving the SLAM problem.
The paper consists of two main parts. First, a split and merge based method for dividing a given metric map into distinct regions is presented, thus creating a topological map in a metric framework.
Spatial information extracted from this map is then used for self-localization. The robot computes local confidence maps for two simple localization strategies based on distance and relative orientation of regions.
The local confidence maps are then fused using an approach adapted from computer vision to produce overall confidence maps. Experiments on data acquired by mobile robots equipped with sonar sensors are presented.
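A toy sketch of the fusion step follows; the product rule used here is an assumption (the paper adapts a specific approach from computer vision). Each localization strategy produces a confidence map over candidate poses, and the overall map combines them elementwise.

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize(m):
    """Scale a confidence map so its entries sum to one."""
    return m / m.sum()

# Two local confidence maps over a 20x20 grid of candidate positions:
# one from distances to regions, one from relative region orientations.
conf_distance = normalize(rng.random((20, 20)))
conf_orientation = normalize(rng.random((20, 20)))

overall = normalize(conf_distance * conf_orientation)  # fused confidence map
best = np.unravel_index(np.argmax(overall), overall.shape)
print('most confident grid cell:', best)
```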
We present a novel split and merge based method for dividing a given metric map into distinct regions, thus effectively creating a topological map on top of a metric one. The initial metric map is obtained from range data that are converted to a geometric map consisting of linear approximations of the indoor environment.
The splitting is done using an objective function that computes the quality of a region, based on criteria such as the average region width (to distinguish big rooms from corridors) and overall direction (which accounts for sharp bends).
A regularization term is used in order to avoid the formation of very small regions, which may originate from missing or unreliable sensor data. Experiments based on data acquired by a mobile robot equipped with sonar sensors are presented, which demonstrate the capabilities of the proposed method.
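The following simplified sketch mimics the split step on a point-based map. The quality function (elongation as a proxy for average region width and overall direction) and the minimum-size regularization are modeled on the description above but are illustrative assumptions, not the paper's objective function.

```python
import numpy as np

def quality(segment):
    """Score a region given as an (n, 2) array of map points."""
    if len(segment) < 4:
        return -np.inf                    # regularization: region too small
    centered = segment - segment.mean(axis=0)
    # principal axes: large first eigenvalue = dominant overall direction,
    # small second eigenvalue = narrow region (corridor-like)
    w = np.linalg.eigvalsh(centered.T @ centered / len(segment))
    width, length = np.sqrt(w[0]), np.sqrt(w[1])
    return length / (width + 1e-9)        # straight, elongated regions win

def split(points, min_gain=1.5):
    """Recursively split at the point maximizing the worse region's quality."""
    base = quality(points)
    best, best_i = base, None
    for i in range(4, len(points) - 4):
        q = min(quality(points[:i]), quality(points[i:]))
        if q > best:
            best, best_i = q, i
    if best_i is None or best < min_gain * max(base, 1e-9):
        return [points]
    return split(points[:best_i]) + split(points[best_i:])

# An L-shaped corridor: two straight runs joined by a sharp bend.
run1 = np.column_stack([np.linspace(0, 10, 50), np.zeros(50)])
run2 = np.column_stack([10 * np.ones(50), np.linspace(0, 8, 50)])
regions = split(np.vstack([run1, run2]))
print('number of regions found:', len(regions))   # splits at the bend: 2
```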