
Camera Vignetting Model and its Effects on Deep Neural Networks for Object Detection

  • In this paper, we describe a new approach to synthetic image augmentation and its advantages for training Deep Neural Networks (DNNs) for object classification and localization. To address the need for large amounts of training data for image-based ADAS functions, our method relies on virtually generated scenarios augmented via a physics-based camera model. The camera model applies various optical effects to ideal synthetic images. Within the scope of this paper, we illustrate the performance differences associated with the vignetting effect when training DNNs with and without image augmentation. We show that training on images altered by our camera vignetting model yields better performance than using ideal synthetic images, and we illustrate the relationship between the network's performance and the implemented effect (vignetting in this case). As a starting point, our results open the possibility of using camera models for training neural networks on synthetic data and pave the way toward further investigations of significant optical and image sensor effects to be modeled and implemented for performance enhancement during the training process. The approach is conducted and evaluated by training a DNN for car detection using the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) and Virtual KITTI (VKITTI) datasets.
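  • The paper itself specifies the physics-based camera model; as a rough illustration of the kind of vignetting augmentation the abstract describes, the sketch below darkens a synthetic image with a simple radial cos⁴-law falloff. The function name, the strength parameter, and the cos⁴ approximation are assumptions made here for illustration and are not taken from the paper.

    import numpy as np

    def apply_vignetting(image: np.ndarray, strength: float = 1.0) -> np.ndarray:
        """Apply a simple radial (cos^4-law) vignetting falloff to an image.

        `image` is an HxWxC float array in [0, 1]; larger `strength` values
        darken the corners more. Illustrative approximation only, not the
        authors' camera model.
        """
        h, w = image.shape[:2]
        # Pixel coordinates and optical centre.
        y, x = np.mgrid[0:h, 0:w]
        cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
        # Radial distance from the centre, normalised to [0, 1] at the corners.
        r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) / np.sqrt(cx ** 2 + cy ** 2)
        # cos^4 law: off-axis illumination falls with the fourth power of cos(theta).
        theta = np.arctan(strength * r)
        falloff = np.cos(theta) ** 4
        return image * falloff[..., None]

    # Example use on a hypothetical synthetic frame:
    # augmented = apply_vignetting(frame, strength=1.5)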

Metadata
Author:Kmeid Saad, Stefan-Alexander Schneider
DOI:https://doi.org/10.1109/ICCVE45908.2019.8965233
Identifier:9781728101422
Parent Title (English):2019 IEEE ICCVE : the 8th IEEE International Conference on Connected Vehicles and Expo (ICCVE)
Subtitle (English):2019 conference proceedings
Publisher:IEEE
Place of publication:Piscataway
Document Type:conference proceeding (article)
Conference:2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE), November 4-8, 2019, Graz, Austria
Language:English
Date of Publication (online):2020/01/23
Year of first Publication:2020
Volume:2020
Number of pages:5
Institutes:Fakultät Elektrotechnik
Dewey Decimal Classification:6 Technology, medicine, applied sciences
Research focus:FSP2: Mobilität
Publication Lists:Schneider, Stefan-Alexander
Publication reviewed:peer-reviewed
Release Date:2021/02/15