TY - JOUR
A1 - Berscheid, Lars
A1 - Meißner, Pascal
A1 - Kröger, Torsten
T1 - Self-supervised Learning for Precise Pick-and-place without Object Model
JF - IEEE Robotics and Automation Letters (RA-L)
VL - 5
IS - 3
N2 - Flexible pick-and-place is a fundamental yet challenging task within robotics, in particular because an object model is needed for a simple target pose definition. In this work, the robot instead learns to pick-and-place objects using planar manipulation according to a single, demonstrated goal state. Our primary contribution lies in combining robot learning of primitives, commonly estimated by fully-convolutional neural networks, with one-shot imitation learning. Therefore, we define the place reward as a contrastive loss between real-world measurements and a task-specific noise distribution. Furthermore, we design our system to learn in a self-supervised manner, enabling real-world experiments with up to 25000 pick-and-place actions. Our robot is then able to place trained objects with an average placement error of 2.7 (0.2) mm and 2.6 (0.8)°. As our approach does not require an object model, the robot is able to generalize to unknown objects while keeping a precision of 5.9 (1.1) mm and 4.1 (1.2)°. We further show a range of emerging behaviors: the robot naturally learns to select the correct object in the presence of multiple object types, precisely inserts objects within a peg game, picks screws out of dense clutter, and infers multiple pick-and-place actions from a single goal state.
Y1 - 2020
U6 - https://doi.org/10.1109/LRA.2020.3003865
ER -
TY - BOOK
A1 - Meißner, Pascal
T1 - Indoor Scene Recognition by 3-D Object Search for Robot Programming by Demonstration
T3 - Springer Tracts in Advanced Robotics (STAR)
N2 - This book focuses on enabling mobile robots to recognize scenes in indoor environments, in order to allow them to determine which actions are appropriate at which points in time. In concrete terms, future robots will have to solve the classification problem represented by scene recognition sufficiently well for them to act independently in human-centered environments. To achieve accurate yet versatile indoor scene recognition, the book presents a hierarchical data structure for scenes – the Implicit Shape Model trees. It also provides training and recognition algorithms for these trees. In general, entire indoor scenes cannot be perceived from a single point of view. To address this problem, the authors introduce Active Scene Recognition (ASR), a concept that embeds canonical scene recognition in a decision-making system that selects camera views for a mobile robot to drive to so that it can find objects not yet localized. The authors formalize the automatic selection of camera views as a Next-Best-View (NBV) problem to which they contribute an algorithmic solution, which focuses on realistic problem modeling while maintaining computational efficiency. Lastly, the book introduces a method for predicting the poses of objects to be searched, establishing the otherwise missing link between scene recognition and NBV estimation.
Y1 - 2020
U6 - https://doi.org/10.1007/978-3-030-31852-9
VL - 135
PB - Springer Cham
ET - 1.
ER -
TY - CHAP
A1 - Kiemel, Jonas C.
A1 - Weitemeyer, Robin
A1 - Meißner, Pascal
A1 - Kröger, Torsten
T1 - TrueAdapt: Learning Smooth Online Trajectory Adaptation with Bounded Jerk, Acceleration and Velocity in Joint Space
T2 - Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
N2 - We present TrueAdapt, a model-free method to learn online adaptations of robot trajectories based on their effects on the environment. Given sensory feedback and future waypoints of the original trajectory, a neural network is trained to predict joint accelerations at regular intervals. The adapted trajectory is generated by linear interpolation of the predicted accelerations, leading to continuously differentiable joint velocities and positions. Bounded jerks, accelerations and velocities are guaranteed by calculating the valid acceleration range at each decision step and clipping the network's output accordingly. A deviation penalty during the training process causes the adapted trajectory to follow the original one. Smooth movements are encouraged by penalizing high accelerations and jerks. We evaluate our approach by training a simulated KUKA iiwa robot to balance a ball on a plate while moving and demonstrate that the balancing policy can be directly transferred to a real robot with little impact on performance.
Y1 - 2020
U6 - https://doi.org/10.1109/IROS45743.2020.9341001
ER -
TY - CHAP
A1 - Berscheid, Lars
A1 - Meißner, Pascal
A1 - Kröger, Torsten
T1 - Robot Learning of Shifting Objects for Grasping in Cluttered Environments
T2 - Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
N2 - Robotic grasping in cluttered environments is often infeasible due to obstacles preventing possible grasps. Then, pre-grasping manipulation like shifting or pushing an object becomes necessary. We developed an algorithm that can learn, in addition to grasping, to shift objects in such a way that their grasp probability increases. Our research contribution is threefold: First, we present an algorithm for learning the optimal pose of manipulation primitives like clamping or shifting. Second, we learn non-prehensile actions that explicitly increase the grasping probability. Making one skill (shifting) directly dependent on another (grasping) removes the need for sparse rewards, leading to more data-efficient learning. Third, we apply a real-world solution to the industrial task of bin picking, resulting in the ability to empty bins completely. The system is trained in a self-supervised manner with around 25 000 grasp and 2500 shift actions. Our robot is able to grasp and file objects with 274±3 picks per hour. Furthermore, we demonstrate the system's ability to generalize to novel objects.
Y1 - 2019
U6 - https://doi.org/10.1109/IROS40897.2019.8968042
PB - IEEE
ER -
TY - CHAP
A1 - Kunz, Christian
A1 - Genten, Vera
A1 - Meißner, Pascal
A1 - Hein, Björn
T1 - Metric-based evaluation of fiducial markers for medical procedures
T2 - Proceedings of Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling
N2 - The accurate tracking of patients during surgery is an essential requirement of computer-assisted surgery. Many tracking systems are based on permanently installed infrared camera systems that detect reflective spheres. These tracking concepts require a certain amount of installation effort and are associated with high investment costs. An alternative is planar fiducial markers, which can be tracked using RGB data alone and can therefore be used with different camera systems. The objective of this work is to introduce a set of similarity metrics to compare fiducial markers for pose estimation. We propose eight different similarity metrics to unify the process of evaluating and comparing marker systems. These are the size and outer margin of the marker, the distance to the camera, the pose estimation accuracy, the runtime of the algorithm, the robustness against external influences, the sensitivity to the sensor system and the number of markers used. We also describe the methodology for evaluating these metrics. We then apply these metrics to compare the ArUco and AprilTag open-source marker systems. Our tests show that optical tracking of open-source fiducial markers is possible with submillimeter accuracy at distances of up to one meter. In addition, the tracking result can be greatly improved by using multiple markers: accuracy is increased and fluctuations are minimized. The similarity metrics presented here are suitable for evaluating and comparing marker systems in detail and can serve as a basis for selecting a suitable system for a specific medical procedure.
Y1 - 2019
U6 - https://doi.org/10.1117/12.2511720
VL - 10951
SP - 690
EP - 703
ER -
TY - CHAP
A1 - Meißner, Pascal
A1 - Schleicher, R.
A1 - Hutmacher, R.
A1 - Schmidt-Rohr, Sven R.
A1 - Dillmann, Rüdiger
T1 - Scene Recognition for Mobile Robots by Relational Object Search using Next-Best-View Estimates from Hierarchical Implicit Shape Models
T2 - Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
N2 - We present an approach for recognizing indoor scenes in object constellations that require object search by a mobile robot, as they cannot be captured from a single viewpoint. In our approach, which we call Active Scene Recognition (ASR), robots predict object poses from learnt spatial relations and combine these predictions with their estimates about present scenes. Our models for estimating scenes and predicting poses are Implicit Shape Model (ISM) trees from prior work [1]. ISMs model scenes as sets of objects with spatial relations in-between and are learnt from observations. In prior work [2], we presented a realization of ASR that was limited to choosing orientations for a fixed robot head and used an object search approach that considers positions but ignores object types. In this paper, we introduce an integrated system that extends ASR to selecting positions and orientations of camera views for a mobile robot with a pivoting head. We contribute an approach for Next-Best-View estimation in object search based on predicted object poses. It is defined on 6 DoF viewing frustums and optimizes the searched view, together with the objects to be searched in it, based on 6 DoF pose predictions. To prevent combinatorial explosion when searching the camera pose space, we introduce a hierarchical approach that samples robot positions with increasing resolution.
Y1 - 2016
U6 - https://doi.org/10.1109/IROS.2016.7759046
PB - IEEE
ER -
TY - CHAP
A1 - Meißner, Pascal
A1 - Hanselmann, Fabian
A1 - Jäkel, Rainer
A1 - Schmidt-Rohr, Sven R.
A1 - Dillmann, Rüdiger
T1 - Automated Selection of Spatial Object Relations for Modeling and Recognizing Indoor Scenes with Hierarchical Implicit Shape Models
T2 - Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
N2 - We present an approach that uses combinatorial optimization to decide which spatial relations between objects are relevant to accurately describe an indoor scene made up of objects. We extract scene models from object configurations that are acquired during demonstrations of actions characteristic for a certain scene. We model scenes as graphs with Implicit Shape Models (ISMs), a Generalized Hough Transform variant. ISMs are limited to representing scenes as star-shaped topologies of object relations, leading to false positives in recognizing scenes. To describe other relation topologies, we introduced a representation of trees of ISMs in prior work, together with a method to learn such ISM trees from demonstrations. Limited to creating topologies corresponding to spanning trees, that method omits certain relations, so that false positives still occur. In this paper, we introduce a method to convert any relation topology corresponding to a connected graph into an ISM tree using a heuristic depth-first search. It allows using complete graphs as scene models. Despite causing no false positives, complete graphs are intractable for scene recognition. To achieve efficiency, we contribute a method that searches for an optimal relation topology by traversing the space of connected scene graphs for a given set of objects, using an optimization similar to hill climbing. Optimality is defined as minimizing computational costs during scene recognition while producing a minimum of false positives. Experiments with up to 15 objects show that both are achievable by the presented method. Costs, which grow exponentially with the number of objects, are transferred from online recognition to offline optimization.
Y1 - 2015
U6 - https://doi.org/10.1109/IROS.2015.7353980
ER -
TY - CHAP
A1 - Meißner, Pascal
A1 - Reckling, Reno
A1 - Wittenbeck, Valerij
A1 - Schmidt-Rohr, Sven R.
A1 - Dillmann, Rüdiger
T1 - Active Scene Recognition for Programming by Demonstration using Next-Best-View Estimates from Hierarchical Implicit Shape Models
T2 - Proceedings of IEEE International Conference on Robotics and Automation (ICRA)
N2 - We present an approach that combines passive scene understanding with object search in order to recognize scenes in indoor environments that cannot be perceived from a single point of view. Passive scene recognition is performed using Implicit Shape Models based on spatial relations between objects. ISMs, a variant of the Generalized Hough Transform, are extended to describe scenes as sets of objects with relations lying between them. Relations are expressed as six-degree-of-freedom (DoF) relative object poses. They are extracted from sensor recordings of human demonstrations of actions usually taking place in the corresponding scene. In a scene, ISMs solely represent relations of n objects towards a common reference. Violations of other relations are not detectable. To overcome this limitation, we extend our scene model, using hierarchical agglomerative clustering, to a binary tree consisting of ISMs. Active scene recognition aims to simultaneously detect present scenes and look for objects these scenes consist of. For a pivoting stereo camera rig, we achieve this by performing recognition with ISMs in an object search loop using next-best-view (NBV) estimates. The criterion by which we greedily choose the views the rig shall adopt next is the confidence of detecting objects in them. In each step of the search, confidences for potential positions of objects not yet found are calculated based on the best available scene hypothesis. This is done by reversing the principle of ISMs and using spatial relations to predict potential object positions starting from the objects already detected.
Y1 - 2014
U6 - https://doi.org/10.1109/ICRA.2014.6907680
PB - IEEE
ER -
TY - CHAP
A1 - Meißner, Pascal
A1 - Reckling, Reno
A1 - Jäkel, Rainer
A1 - Schmidt-Rohr, Sven R.
A1 - Dillmann, Rüdiger
T1 - Recognizing Scenes with Hierarchical Implicit Shape Models based on Spatial Object Relations for Programming by Demonstration
T2 - Proceedings of International Conference on Advanced Robotics (ICAR)
N2 - We present an approach for recognizing scenes, consisting of spatial relations between objects, in unstructured indoor environments that change over time. Object relations are represented by full six-degree-of-freedom (DoF) coordinate transformations between objects. They are acquired from object poses that are visually perceived while people demonstrate actions that are typically performed in a given scene. We recognize scenes using an Implicit Shape Model (ISM) that is similar to the Generalized Hough Transform. We extend it to take orientations between objects into account. This includes a verification step that allows us to infer not only the existence of scenes, but also the objects they are composed of. ISMs are restricted to representing scenes as star topologies of relations, which insufficiently approximate object relations in complex dynamic settings. False positive detections may occur. Our solution consists of exchangeable heuristics for recognizing object relations that have to be represented explicitly in separate ISMs. Object relations are modeled by the ISMs themselves. We use hierarchical agglomerative clustering, employing the heuristics, to construct a tree of ISMs. Learning and recognition of scenes with a single ISM are naturally extended to multiple ISMs.
Y1 - 2013
U6 - https://doi.org/10.1109/ICAR.2013.6766470
PB - IEEE
ER -
TY - CHAP
A1 - Schmidt-Rohr, Sven R.
A1 - Romahn, Fabian
A1 - Meißner, Pascal
A1 - Jäkel, Rainer
A1 - Dillmann, Rüdiger
T1 - Learning probabilistic decision making by a service robot with generalization of user demonstrations and interactive refinement
T2 - Intelligent Autonomous Systems 12 ; Advances in Intelligent Systems and Computing book series
N2 - When learning abstract probabilistic decision making models for multi-modal service robots from human demonstrations, alternative courses of events may be missed by human teachers during demonstrations. We present an active model space exploration approach with generalization of observed action effect knowledge, leading to interactive requests for new demonstrations to verify generalizations. At first, the robot observes several user demonstrations of interacting humans, including dialog, object poses and human body movement. Discretization and analysis then lead to a symbolic-causal model of a demonstrated task in the form of a preliminary partially observable Markov decision process. Based on the transition model generated from demonstrations, new hypotheses of unobserved action effects, so-called generalized transitions, can be derived along with a generalization confidence estimate. To validate generalized transitions which have a strong impact on a decision policy, a request generator proposes further demonstrations to human teachers, which are used in turn to implicitly verify the hypotheses. The system has been evaluated on a multi-modal service robot with realistic tasks, including furniture manipulation and humans interacting at execution time.
Y1 - 2013
U6 - https://doi.org/10.1007/978-3-642-33932-5_35
VL - 194
SP - 369
EP - 382
PB - Springer
CY - Berlin, Heidelberg
ER -
TY - CHAP
A1 - Jäkel, Rainer
A1 - Meißner, Pascal
A1 - Schmidt-Rohr, Sven R.
A1 - Dillmann, Rüdiger
T1 - Distributed Generalization of Learned Planning Models in Robot Programming by Demonstration
T2 - Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
N2 - In Programming by Demonstration (PbD), one of the key problems for autonomous learning is to automatically extract the relevant features of a manipulation task, which has a significant impact on the generalization capabilities. In this paper, task features are encoded as constraints of a learned planning model. In order to extract the relevant constraints, the human teacher demonstrates a set of tests, e.g. a scene with different objects, and the robot tries to execute the planning model on each test using constrained motion planning. Based on statistics about which constraints failed during the planning process, multiple hypotheses about a maximal subset of constraints that allows a solution to be found in all tests are refined in parallel using an evolutionary algorithm. The algorithm was tested in 7 experiments and on two robot systems.
Y1 - 2011
U6 - https://doi.org/10.1109/IROS.2011.6094717
PB - IEEE
ER -
TY - CHAP
A1 - Jäkel, Rainer
A1 - Xie, Yi
A1 - Meißner, Pascal
A1 - Dillmann, Rüdiger
T1 - Online-generation of task-dependent search heuristics to execute learned planning models in Programming by Demonstration
T2 - Proceedings of IEEE-RAS International Conference on Humanoid Robots (Humanoids)
N2 - A service robot has to be flexible and fast in order to solve a manipulation task in the human environment with differing start configurations, objects, obstacles and a restricted workspace. Based on a sophisticated task model, which captures the relevant constraints and goals of a task, constrained motion planning can be used to generate robot trajectories autonomously with high generalization capabilities. The major drawbacks are high planning times and non-repeatability of the results. In this work, search heuristics, which restrict the search space during motion planning, are learned incrementally whenever the robot uses the task model to plan a solution. The number of learned search heuristics is kept small by using a combination of constrained motion planning and a fast local control algorithm to increase the number of situations in which a search heuristic can be applied. The approach combines two major directions in Programming by Demonstration (PbD), i.e. learning and goal-directed planning with a general task description, and learning efficient encodings of low-level trajectories, in a consistent way.
Y1 - 2012
U6 - https://doi.org/10.1109/HUMANOIDS.2012.6651525
PB - IEEE
ER -
TY - JOUR
A1 - Meißner, Pascal
A1 - Schmidt-Rohr, Sven R.
A1 - Lösch, Martin
A1 - Jäkel, Rainer
A1 - Dillmann, Rüdiger
T1 - Localization of furniture parts by integrating range and intensity data robust against depths with low signal-to-noise ratio
JF - Robotics and Autonomous Systems
N2 - In this article we present an approach for localizing planar parts of furniture in depth data from range cameras. It estimates both their six-degree-of-freedom poses and their dimensions. The system has been designed to enable robots to autonomously manipulate furniture. Range cameras are a promising sensor category for this application. As many of them provide data with considerable noise and distortions, detecting objects, for example using canonical methods for range data segmentation or feature extraction, is complicated. In contrast, our approach is able to overcome these issues. This is done by combining concepts of 2D and 3D computer vision as well as integrating intensity and range information in multiple steps of our processing chain. Therefore it can be employed on range sensors with both low and high signal-to-noise ratios, in particular on time-of-flight cameras. This concept can be adapted to various object shapes. It has been implemented for object parts with shapes similar to ellipses as a proof of concept. For this, a state-of-the-art ellipse detection method has been enhanced for our application.
Y1 - 2014
U6 - https://doi.org/10.1016/j.robot.2012.07.018
VL - 62
IS - 1
SP - 25
EP - 37
ER -
TY - CHAP
A1 - Meißner, Pascal
A1 - Schmidt-Rohr, Sven R.
A1 - Lösch, Martin
A1 - Jäkel, Rainer
A1 - Dillmann, Rüdiger
T1 - Robust Localization of Furniture Parts by Integrating Depth and Intensity Data Suitable for Range Sensors with Varying Image Quality
T2 - Proceedings of International Conference on Advanced Robotics (ICAR)
N2 - In this paper we present an approach to localize planar furniture parts in 3D range camera data for autonomous robot manipulation, which estimates both their six-degree-of-freedom (DoF) poses and their dimensions. Range cameras are a promising sensor category for mobile robotics. Unfortunately, many of them come with considerable measurement noise, which leads to difficulties when trying to detect objects or their parts, e.g. using canonical methods for range image segmentation. In contrast, our approach is able to overcome these issues by combining concepts of 2D and 3D computer vision as well as integrating intensity and depth data on several levels of abstraction. It is therefore not restricted to range sensors with high image quality and scales to cameras with lower image quality, too. This concept is generic and has been implemented for elliptical object parts as a proof of concept.
Y1 - 2011
U6 - https://doi.org/10.1109/ICAR.2011.6088595
ER -
TY - CHAP
A1 - Schmidt-Rohr, Sven R.
A1 - Dirschl, Gerhard
A1 - Meißner, Pascal
A1 - Dillmann, Rüdiger
T1 - A Knowledge Base for Learning Probabilistic Decision Making from Human Demonstrations by a Multimodal Service Robot
T2 - Proceedings of International Conference on Advanced Robotics (ICAR)
N2 - This paper presents a description-logic-based system to store and retrieve knowledge used in models for autonomous probabilistic decision making by multimodal service robots. These models are mainly generated by observation and analysis of humans performing tasks, following the programming by demonstration methodology. As the formal model representation, partially observable Markov decision processes (POMDPs) are utilized, as they are a well understood formal framework for decision making that considers real-world uncertainty in both perception and execution. The approach presented here deals with aspects of organizing knowledge which cannot be retrieved from user demonstrations or which is valid beyond a single task. It is shown how to use it in the process of model generation on a real service robot.
Y1 - 2011
U6 - https://doi.org/10.1109/ICAR.2011.6088640
ER -
TY - CHAP
A1 - Holomjova, Valerija
A1 - Meißner, Pascal
T1 - Exploring Rotated Object Detection Models for Antipodal Robotic Grasping
T2 - Proceedings of The 5th UK Robotics and Autonomous Systems Conference (UKRAS22)
N2 - Current deep learning approaches used by robotic grasping systems for predicting multiple valid grasps across various objects from images have achieved great results, but often stem from object detectors that were originally designed for predicting horizontal bounding boxes. Since 2D grasp poses are more naturally represented by oriented bounding boxes, in this paper we explore the suitability of three top-performing rotated object detectors, as they are composed of modules tailored for encoding rotated object features more precisely. The performance of the oriented detectors is compared against an effective grasp detection model architecture from the literature on two publicly available grasping datasets. Results show that the oriented detectors obtained comparable grasp accuracy scores on both datasets, whilst being more capable of producing confident and diverse sets of grasps. Code is available at https://github.com/valerijah/exploring rotated object detection models.
Y1 - 2022
U6 - https://doi.org/10.31256/Sp7Gn7W
ER -
TY - CHAP
A1 - Berscheid, Lars
A1 - Meißner, Pascal
A1 - Kröger, Torsten
T1 - Learning a Generative Transition Model for Uncertainty-Aware Robotic Manipulation
T2 - Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
N2 - Robot learning of real-world manipulation tasks remains challenging and time consuming, even though actions are often simplified by single-step manipulation primitives. In order to compensate for the removed time dependency, we additionally learn an image-to-image transition model that is able to predict a next state including its uncertainty. We apply this approach to bin picking, the task of emptying a bin using grasping as well as pre-grasping manipulation as fast as possible. The transition model is trained with up to 42000 pairs of real-world images before and after a manipulation action. Our approach enables two important skills: First, for applications with flange-mounted cameras, picks per hour (PPH) can be increased by around 15% by skipping image measurements. Second, we use the model to plan action sequences ahead of time and optimize time-dependent rewards, e.g. to minimize the number of actions required to empty the bin. We evaluate both improvements with real-robot experiments and achieve over 700 PPH in the YCB Box and Blocks Test.
Y1 - 2021
U6 - https://doi.org/10.48550/arXiv.2107.02464
ER -
TY - CHAP
A1 - Kiemel, Jonas C.
A1 - Meißner, Pascal
A1 - Kröger, Torsten
T1 - TrueRMA: Learning fast and smooth Robot Trajectories with Recursive Midpoint Adaptations in Cartesian Space
T2 - Proceedings of IEEE International Conference on Robotics and Automation (ICRA)
N2 - We present TrueRMA, a data-efficient, model-free method to learn cost-optimized robot trajectories over a wide range of starting points and endpoints. The key idea is to calculate trajectory waypoints in Cartesian space by recursively predicting orthogonal adaptations relative to the midpoints of straight lines. We generate a differentiable path by adding circular blends around the waypoints, calculate the corresponding joint positions with an inverse kinematics solver and calculate a time-optimal parameterization considering velocity and acceleration limits. During training, the trajectory is executed in a physics simulator and costs are assigned according to a user-specified cost function which is not required to be differentiable. Given a starting point and an endpoint as input, a neural network is trained to predict midpoint adaptations that minimize the cost of the resulting trajectory via reinforcement learning. We successfully train a KUKA iiwa robot to keep a ball on a plate while moving between specified points and compare the performance of TrueRMA against two baselines. The results show that our method requires less training data to learn the task while generating shorter and faster trajectories.
Y1 - 2020
U6 - https://doi.org/10.1109/ICRA40945.2020.9196711
PB - IEEE
ER -
TY - CHAP
A1 - Werner, Max
A1 - Bullmann, Markus
A1 - Fetzer, Toni
A1 - Meißner, Pascal
A1 - Deinzer, Frank
T1 - Interpolation of Position Estimates for Radio Fingerprinting using Gaussian Process Regression
T2 - 2025 International Conference on Indoor Positioning and Indoor Navigation (IPIN)
Y1 - 2025
U6 - https://doi.org/10.1109/IPIN66788.2025.11213294
SP - 1
EP - 6
PB - IEEE
ER -