
Advances in Automated Generation of Convolutional Neural Networks from Synthetic Data in Industrial Environment

  • The use of convolutional neural networks has revolutionized data processing and its industrial applications over the last few years. Object detection in images in particular, a historically hard task to automate, is now available on every smartphone. Nonetheless, this technology has not yet spread into car production, where many visual tests and quality checks are still performed manually. Even though the vision capabilities convolutional neural networks can give machines are already respectable, they still need well-prepared training data that is costly and time-consuming to produce. This paper describes our effort to test and improve a system that automatically synthesizes training images. The existing system renders computer-aided design models into scenes and from these produces realistic images and corresponding labels. Two new models, Single Shot Detector and RetinaNet, are retrained with the use of distractors and then tested against each other. The better-performing RetinaNet is then tested for performance when trained with a variety of datasets from different domains in order to observe the model's strengths and weaknesses under domain shifts. These domains are real photographs, rendered models, and images of objects cut and pasted into different backgrounds. The results show that the model trained with a mixture of all domains performs best.
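  The abstract does not name a training framework, so the following is only an illustrative sketch of the mixed-domain training setup it describes: fine-tuning a RetinaNet detector with torchvision on a concatenation of real, rendered, and cut-and-paste datasets. The loader functions (load_real, load_rendered, load_cut_paste) and the class count num_classes are hypothetical placeholders, not part of the paper.

    # Illustrative sketch only (not the authors' code), assuming torchvision >= 0.13 and
    # three hypothetical dataset objects that yield (image_tensor, target_dict) pairs
    # with "boxes" (float, xyxy) and "labels" (int64) as torchvision detectors expect.
    import torch
    from torch.utils.data import ConcatDataset, DataLoader
    from torchvision.models.detection import retinanet_resnet50_fpn

    def collate(batch):
        # Detection models take lists of variably sized images and per-image targets.
        return tuple(zip(*batch))

    # Hypothetical loaders for the three domains named in the abstract.
    real_ds, rendered_ds, cut_paste_ds = load_real(), load_rendered(), load_cut_paste()
    mixed = ConcatDataset([real_ds, rendered_ds, cut_paste_ds])
    loader = DataLoader(mixed, batch_size=4, shuffle=True, collate_fn=collate)

    # num_classes is a placeholder for the number of object classes in the dataset.
    model = retinanet_resnet50_fpn(weights=None,
                                   weights_backbone="IMAGENET1K_V1",
                                   num_classes=num_classes)
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    for images, targets in loader:
        loss_dict = model(list(images), list(targets))  # dict of losses in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

  Concatenating the three domain datasets before training mirrors the mixed-domain setting that, according to the abstract, yielded the best-performing model.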

Metadata
Author: Jan Hodapp, Markus Schiemann, Vadym Bilous, Claudio Salvatore Arcidiacono, Matthias Reichenbach
URL: https://hdl.handle.net/10125/64307
DOI: https://doi.org/10.24251/HICSS.2020.565
ISBN: 978-0-9981331-3-3
Title of the source (English): 53rd Hawaii International Conference on System Sciences: Maui, Hawaii, United States of America, Jan 07 - 10, 2020
Document Type: Conference publication, peer-reviewed
Language: English
Year of publication: 2020
Number of pages: 7
Faculty/Chair: Fakultät 3 Maschinenbau, Elektro- und Energiesysteme / FG Automatisierungstechnik