TY - CHAP
A1 - Meißner, Pascal
A1 - Reckling, Reno
A1 - Wittenbeck, Valerij
A1 - Schmidt-Rohr, Sven R.
A1 - Dillmann, Rüdiger
T1 - Active Scene Recognition for Programming by Demonstration using Next-Best-View Estimates from Hierarchical Implicit Shape Models
T2 - Proceedings of IEEE International Conference on Robotics and Automation (ICRA)
N2 - We present an approach that combines passive scene understanding with object search in order to recognize scenes in indoor environments that cannot be perceived from a single point of view. Passive scene recognition is performed using Implicit Shape Models (ISMs) based on spatial relations between objects. ISMs, a variant of the Generalized Hough Transform, are extended to describe scenes as sets of objects with relations between them. Relations are expressed as six-degree-of-freedom (DoF) relative object poses and are extracted from sensor recordings of human demonstrations of actions that usually take place in the corresponding scene. A single ISM only represents the relations between the n objects of a scene and a common reference; violations of any other relations are not detectable. To overcome this limitation, we extend our scene model, using hierarchical agglomerative clustering, to a binary tree of ISMs. Active scene recognition aims to simultaneously detect the scenes that are present and search for the objects these scenes consist of. For a pivoting stereo camera rig, we achieve this by performing recognition with ISMs within an object search loop driven by next-best-view (NBV) estimates. The views the rig adopts next are chosen greedily based on the confidence of detecting objects in them. At each step of the search, confidences for the potential positions of objects not yet found are calculated from the best available scene hypothesis. This is done by reversing the principle of ISMs and using spatial relations to predict potential object positions starting from the objects already detected.
Y1 - 2014
U6 - https://doi.org/10.1109/ICRA.2014.6907680
PB - IEEE
ER -

TY - JOUR
A1 - Meißner, Pascal
A1 - Schmidt-Rohr, Sven R.
A1 - Lösch, Martin
A1 - Jäkel, Rainer
A1 - Dillmann, Rüdiger
T1 - Localization of furniture parts by integrating range and intensity data robust against depths with low signal-to-noise ratio
JF - Robotics and Autonomous Systems
N2 - In this article, we present an approach for localizing planar parts of furniture in depth data from range cameras. It estimates both their six-degree-of-freedom poses and their dimensions. The system has been designed to enable robots to autonomously manipulate furniture. Range cameras are a promising sensor category for this application, but many of them provide data with considerable noise and distortions, which complicates object detection with canonical methods for range data segmentation or feature extraction. Our approach overcomes these issues by combining concepts from 2D and 3D computer vision and by integrating intensity and range information in multiple steps of its processing chain. It can therefore be employed on range sensors with both low and high signal-to-noise ratios, in particular on time-of-flight cameras. The concept can be adapted to various object shapes; as a proof of concept, it has been implemented for object parts whose shapes resemble ellipses. To this end, a state-of-the-art ellipse detection method has been extended for our application.
Y1 - 2014
U6 - https://doi.org/10.1016/j.robot.2012.07.018
VL - 62
IS - 1
SP - 25
EP - 37
ER -