
Modeling realistic optical aberrations to reuse existing drive scene recordings for autonomous driving validation

  • Training autonomous vehicles requires a large number of driving sequences covering all situations. Collecting and labeling these drive scenes is a very time-consuming and expensive process. Currently, it is not possible to reuse these drive scenes with different optical properties, because there exists no numerically efficient model for the transfer function of the optical system. We present a numerical model for the point spread function (PSF) of an optical system that can efficiently model both experimental measurements and lens design simulations of the PSF. The numerical basis for this model is a nonlinear regression of the PSF with an artificial neural network. The novelty lies in the portability and the parameterization of this model. We present a lens measurement series, yielding a numerical function for the PSF that depends only on the parameters defocus, field, and azimuth. By convolving existing images and videos with this PSF, we generate images as if seen through the measured lens. The methodology applies to any optical scenario, but we focus on the context of autonomous driving, where the quality of the detection algorithms depends directly on the optical quality of the camera system used. With this model, it is possible to reuse existing recordings, with the potential to avoid millions of test drive miles. The parameterization of the optical model allows for a method to validate the functional and safety limits of camera-based advanced driver assistance systems based on the real, measured lens actually used in the product.
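The core idea can be sketched in a few lines: given a PSF parameterized by defocus, field, and azimuth, an existing drive-scene frame is convolved with that PSF to emulate viewing the scene through the measured lens. The snippet below is a minimal illustration under assumptions of my own, not the authors' code: the psf function is a hypothetical elliptical-Gaussian stand-in for the paper's neural-network regression of the measured PSF, and the convolution is applied globally, whereas a faithful re-simulation would vary the PSF across the image with field position and azimuth.

    # Minimal sketch: simulate an existing recording as seen through a lens
    # described by a parameterized PSF. The PSF here is a placeholder
    # (elliptical Gaussian shaped by defocus, field, and azimuth); the paper
    # instead obtains the PSF from a neural-network regression of measurements.
    import numpy as np
    from scipy.signal import convolve2d

    def psf(defocus, field, azimuth, size=21):
        """Hypothetical PSF kernel: widened by defocus and field, rotated by azimuth."""
        y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
        c, s = np.cos(azimuth), np.sin(azimuth)
        xr, yr = c * x + s * y, -s * x + c * y          # rotate into the azimuth frame
        sigma_t = 1.0 + 2.0 * abs(defocus)              # tangential width grows with defocus
        sigma_s = sigma_t * (1.0 + 0.5 * field)         # sagittal width grows towards the field edge
        kernel = np.exp(-0.5 * ((xr / sigma_s) ** 2 + (yr / sigma_t) ** 2))
        return kernel / kernel.sum()                    # normalize to preserve image brightness

    def apply_lens(image, defocus, field, azimuth):
        """Convolve a grayscale frame with the PSF (spatially invariant in this sketch)."""
        return convolve2d(image, psf(defocus, field, azimuth), mode="same", boundary="symm")

    if __name__ == "__main__":
        scene = np.random.rand(256, 256)                # stand-in for a recorded drive-scene frame
        blurred = apply_lens(scene, defocus=0.3, field=0.8, azimuth=np.pi / 6)
        print(blurred.shape)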

Metadata
Author:Matthias Lehmann, Christian Wittpahl, Hatem Ben Zakour, Alexander Braun
Quality assurance:peer reviewed
Department/Institution:Hochschule Düsseldorf / Fachbereich Elektro- & Informationstechnik
Document Type:Article
Year of Completion:2019
Language of Publication:English
Publisher:SPIE
Parent Title (English):Journal of Electronic Imaging
Volume:28
Issue:1
First Page:013005
DOI:https://doi.org/10.1117/1.JEI.28.1.013005
ISSN:1560-229X
Dewey Decimal Classification:6 Technology, medicine, applied sciences / 62 Engineering / 620 Engineering and allied operations
Licence:no licence - metadata only
Release Date:2019/07/16