In smart manufacturing environments, robots collaborate with human operators as peers, even sharing the same working space and time. Intuitive interaction via different input modalities is decisive for reducing workload and training periods in collaboration. We introduce our interaction system, which recognizes gestures, actions, and objects in a typical smart working scenario. As its key aspect, this article presents an empirical investigation of the effects of input modalities (touch, gesture), individual differences (performance, recognition rate, previous knowledge), and boundary conditions (level of automation) on user experience. To this end, responses from 31 participants were collected in two experiments. We show that the arrangement of the human-robot collaboration (input modalities, boundary conditions) has a significant effect on user experience in real-world environments. This effect, and the individual differences between participants, can be measured using recognition rates and standardized usability questionnaires.
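To make the measurement concrete, here is a minimal sketch of how per-modality recognition rates and questionnaire scores could be aggregated; all numbers, scale, and modality names are made up for illustration and are not the study's data:

```python
from statistics import mean

# Hypothetical per-condition results: one recognition rate and one
# standardized usability score (e.g. on a 0-100 scale) per participant.
results = {
    "touch":   {"recognition": [0.95, 0.88, 0.91], "usability": [82, 75, 79]},
    "gesture": {"recognition": [0.81, 0.77, 0.85], "usability": [68, 71, 64]},
}

for modality, scores in results.items():
    print(f"{modality:8s}"
          f" mean recognition rate: {mean(scores['recognition']):.2f}"
          f" mean usability score: {mean(scores['usability']):.1f}")
```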
Rising quality requirements for products that are partly manufactured by hand are leading to manual workplaces being equipped with assistance systems that support the workers at those workplaces. This contribution describes a new approach that uses machine learning methods to learn both object recognition and the transitions of a finite state machine representing the work process of such a system. After preprocessing, data from a depth camera are classified in three stages by support vector machines (SVMs), and the result is linked to the state machine. The concept is evaluated on an industrial assembly process of manageable complexity and shows good results with respect to robustness against errors in object classification.
Tightening quality requirements for industrial products involving manual assembly are leading to the development of assistive workbenches with integrated functions that support workers performing these manual tasks. This contribution discusses a new approach to learning the transitions of a finite state automaton representing the sequence of work tasks, based on the video stream of a 3D depth camera. Preprocessed video data is fed into a three-stage classification scheme based on support vector machines. The classification results are then related to the state automaton to trigger state transitions that indicate the completion of one work task and the start of the next. The proposed approach has been evaluated on an industrial assembly process of moderate complexity and shows very robust results with respect to disturbances caused by inaccurate object classification.
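The pipeline described in the two abstracts above (depth-camera features, a three-stage SVM cascade, and a finite state machine over work tasks) can be sketched as follows. Stage semantics, feature dimensions, labels, and the transition table are assumptions for illustration, not the authors' actual design:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative three-stage SVM cascade over preprocessed depth features.
rng = np.random.default_rng(0)
X = rng.random((200, 16))              # preprocessed depth-camera features
y_region = rng.integers(0, 2, 200)     # stage 1: active region yes/no
y_object = rng.integers(0, 3, 200)     # stage 2: which object is handled
y_action = rng.integers(0, 2, 200)     # stage 3: pick vs. place

svm_region = SVC().fit(X, y_region)
svm_object = SVC().fit(X, y_object)
svm_action = SVC().fit(X, y_action)

def classify(frame_features):
    """Run the cascade; later stages fire only if stage 1 is positive."""
    if svm_region.predict([frame_features])[0] == 0:
        return None
    obj = svm_object.predict([frame_features])[0]
    act = "pick" if svm_action.predict([frame_features])[0] == 0 else "place"
    return f"{act}_object_{obj}"

# Finite state machine representing the work sequence; events without a
# defined transition are ignored, absorbing occasional misclassifications.
TRANSITIONS = {
    ("idle", "pick_object_0"): "fitting",
    ("fitting", "place_object_0"): "inspection",
    ("inspection", "pick_object_1"): "idle",
}

state = "idle"
for features in X[:10]:
    event = classify(features)
    if event is not None:
        state = TRANSITIONS.get((state, event), state)
print("final state:", state)
```

Ignoring events that match no defined transition is one simple way to obtain the robustness against object misclassification noted in the evaluation.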
Seamless human-robot collaboration depends on high non-verbal behaviour recognition rates. Realizing this in real-world manufacturing scenarios with an ecologically valid setup requires considerable effort. In this paper, we evaluate the impact of spontaneous inputs on the robustness of human-robot collaboration during gesture-based interaction. A high share of spontaneous inputs reduces the ability to predict behaviour and consequently degrades robustness. We observe body and hand behaviour during the interactive manufacturing of a collaborative task in two experiments. First, we analyse the frequency, reasons, and manner of human inputs in specific situations during a human-human experiment. We show the high impact of spontaneous inputs, especially in situations that deviate from the typical working procedure. Second, we focus on implicit inputs during a real-world Wizard of Oz experiment using our human-robot working cell. We show that hand positions can be used to anticipate user needs in a semi-structured environment by exploiting knowledge of semi-structured human behaviour, which is distributed over working space and time in a typical manner.
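As an illustration of that final point, a hand-position histogram per work step is one simple way such spatially and temporally structured behaviour could be exploited; the zone discretization, step names, and coordinates below are assumptions, not the paper's method:

```python
from collections import Counter, defaultdict

# Discretize the workbench surface into zones (cell size is hypothetical).
def zone(x, y, cell=0.2):
    return (int(x // cell), int(y // cell))

# Accumulate where hands typically are in each work step ("distributed
# over working space and time in a typical manner").
zone_counts = defaultdict(Counter)

def observe(step, hand_xy):
    zone_counts[step][zone(*hand_xy)] += 1

def anticipate(step):
    """Most frequent hand zone for this step: a crude proxy for the part
    or tool the worker is likely to reach for next."""
    counts = zone_counts[step]
    return counts.most_common(1)[0][0] if counts else None

observe("fitting", (0.35, 0.10))
observe("fitting", (0.37, 0.12))
observe("fitting", (0.80, 0.55))
print(anticipate("fitting"))   # -> (1, 0), the densest zone
```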
Manual tasks in industrial production are often monotonous, leading to decreased concentration and motivation in workers and thus to product deficiencies. With quality and performance requirements becoming ever more stringent, workers need additional support from their work environment. We developed a novel approach providing worker assistance and inline quality assurance for manual workplaces. The prototype system Smart Workbench (SWoB) uses a multimodal sensor interface, consisting of a 3D depth sensor combined with a 2D camera, to check quality aspects of the product and track work progress. The bidirectional flow of information is handled via an image-processing-driven gestural interface and by displaying advice directly on the work surface. In this paper, the developed system is introduced and the current state of its evaluation for industrial usage, with a manual quality control and packaging task, is reported.
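A minimal sketch of the control flow this implies, with all sensor and display functions stubbed out (none of these names are the SWoB API, and the work steps are invented):

```python
# Bidirectional flow: sensor input drives progress and quality checks,
# and advice is displayed back on the work surface.
WORK_STEPS = ["insert_part", "fasten_screws", "pack_product"]

def depth_step_done(step):          # stub: 3D depth sensor tracks progress
    return True

def quality_ok(step):               # stub: 2D camera inspects the result
    return step != "fasten_screws"  # pretend one step fails inspection

def show_advice(message):           # stub: projected onto the work surface
    print("[worksurface]", message)

for step in WORK_STEPS:
    show_advice(f"next: {step}")
    while not depth_step_done(step):
        pass                        # wait for the worker to finish the step
    if not quality_ok(step):
        show_advice(f"rework needed: {step}")
```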
The ability to synchronize expectations within human-robot teams and to understand discrepancies between expectations and reality is essential in human-robot collaboration scenarios. To ensure this, human activities and intentions must be interpreted quickly and reliably by the robot across various modalities. In this paper we propose a multimodal recognition system designed to detect physical interactions as well as non-verbal gestures. Existing approaches achieve high post-transfer recognition rates, but only on well-prepared and large datasets. Unfortunately, acquiring and preparing domain-specific samples, especially in an industrial context, is time-consuming and expensive. To reduce this effort we introduce a weakly supervised classification approach: we learn a latent representation of the human activities with a variational autoencoder network, and incorporate additional modalities and unlabeled samples via a scalable product-of-experts sampling approach. The applicability in an industrial context is evaluated on two domain-specific collaborative-robot datasets. Our results demonstrate that we can keep the number of labeled samples constant while increasing the network performance by providing additional unprocessed information.
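The product-of-experts step can be made concrete: for Gaussian experts, the joint posterior is again Gaussian, with precisions adding up and means weighted by precision. Below is a minimal sketch of this generic fusion rule, as commonly used in multimodal VAEs; the latent dimension, modality names, and values are assumptions, not the authors' exact formulation:

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Combine per-modality Gaussian posteriors q_m(z|x_m) into one joint
    Gaussian by multiplying the experts (precision-weighted fusion).

    mus, logvars: arrays of shape (num_modalities, latent_dim). A missing
    modality is simply left out, which is what lets the scheme scale to
    incomplete or unlabeled samples.
    """
    precisions = np.exp(-np.asarray(logvars))        # 1 / sigma^2 per expert
    joint_var = 1.0 / precisions.sum(axis=0)
    joint_mu = joint_var * (np.asarray(mus) * precisions).sum(axis=0)
    return joint_mu, np.log(joint_var)

# Two modalities (e.g. body pose and physical-contact signals) proposing
# a 4-dimensional latent; values are made up for illustration.
mus = [[0.5, -1.0, 0.0, 2.0], [0.7, -0.8, 0.1, 1.5]]
logvars = [[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]
mu, logvar = product_of_experts(mus, logvars)
print(mu)   # closer to the first expert, whose variance is smaller
```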
Collaboration between robots and humans requires communicative skills on both sides: the robot has to understand the conscious and unconscious activities of human workers. Many state-of-the-art activity recognition algorithms achieve high performance on existing benchmark datasets. This paper re-evaluates suitable architectures in light of human work activity recognition for working cells in industrial production contexts. The specific constraints of this domain are elaborated and used as prior knowledge. We utilize state-of-the-art algorithms as spatiotemporal feature encoders and search for appropriate classification and fusion strategies. Furthermore, we combine keypoint-based and appearance-based approaches into a multi-stream recognition system. Due to data protection rules and the high cost of data annotation in industrial domains, only small datasets reflecting production aspects are available. We therefore use transfer learning to reduce the dependency on data volume and variance in the target domain. The resulting recognition system achieves high performance for both single-person actions and human-object interactions.
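Score-level (late) fusion of the keypoint-based and appearance-based streams is one of the fusion strategies such a search would cover; here is a minimal sketch with made-up logits and an assumed five-class problem, not the paper's reported configuration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_fuse(keypoint_logits, appearance_logits, w=0.5):
    """Weighted score-level fusion of the two streams; the weight and the
    fusion level are among the design choices such a search explores."""
    return w * softmax(keypoint_logits) + (1 - w) * softmax(appearance_logits)

# Hypothetical logits for 5 activity classes from each stream.
kp = np.array([2.0, 0.1, -1.0, 0.3, 0.0])
ap = np.array([1.2, 0.4, 0.2, 2.5, -0.5])
print(late_fuse(kp, ap).argmax())   # fused class decision
```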
To improve the quality of life and employability of people with disabilities and of older people, we investigated to what extent gamification applications are suitable for teaching a gesture-based control. The experiment is based on an intelligent workplace (Smart Workbench, SWoB) that supports people in manual handling tasks and carries out certain production processes in a partially automated way. To operate the system, users must receive an introduction beforehand, which can be given either by a person or by a learning tutorial with gamification elements to increase motivation. The study examined which form of instruction is more likely to be accepted or rejected by different people, and for what reasons.
Smart Workbench (2016)