Zeh, Thomas
Document Type
- Article (7)
- Conference proceeding (article) (6)
- Conference proceeding (presentation) (4)
- Book (1)
- Conference proceeding (summary) (1)
- Report (1)
Publication reviewed
- Peer-reviewed (14)
- Not peer-reviewed (6)
Keywords
- LiDAR sensor (2)
- highly automated driving (2)
- open simulation interface (2)
- ASTM E3125-17 standard (1)
- electronics simulation (1)
- driver assistance system (1)
- LiDAR sensor; rain; fog; sunlight; advanced driver-assistance system; backscattering; Mie theory; open simulation interface; functional mock-up interface; functional mock-up unit (1)
- Organic Solar Cells, Annealing, Efficiency (1)
- POF (1)
- PSPICE (1)
This contribution presents the modeling, simulation, and testing of the influence of rain and fog on measurements with LiDAR and radar sensors. First, relevant modeling criteria were derived and explained. Applying the theory of Mie scattering and Rayleigh scattering led to the development of LiDAR and radar sensor simulation models. These models were validated through tests in rain test facilities. Here, key performance indicators (KPIs) deliver quantitative results such as signal attenuation, signal-to-noise ratio (SNR), detection rate, error rate, and distance error.
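As a rough illustration of how such KPIs can be derived, the following sketch applies two-way Beer-Lambert attenuation with a rain-rate-dependent extinction coefficient. All constants are illustrative placeholders, not values from the study, where the extinction follows from Mie theory and the measured drop-size distribution.

```python
import numpy as np

# Minimal sketch, not the published model: two-way Beer-Lambert attenuation
# of a LiDAR pulse in rain, plus a simple SNR figure.
A_EXT, B_EXT = 0.002, 0.6      # hypothetical rain extinction power law
APERTURE_M = 0.025             # assumed receiver aperture diameter [m]

def received_power(p_tx_w: float, range_m: float, rain_mm_h: float,
                   reflectivity: float = 0.1) -> float:
    """Received power [W] from a Lambertian target, simple LiDAR range equation."""
    alpha = A_EXT * rain_mm_h ** B_EXT                     # extinction [1/m]
    geometry = reflectivity * APERTURE_M**2 / (4.0 * range_m**2)
    return p_tx_w * geometry * np.exp(-2.0 * alpha * range_m)

def snr_db(p_rx_w: float, noise_w: float = 1e-9) -> float:
    """SNR in dB against an assumed constant noise floor."""
    return 10.0 * np.log10(p_rx_w / noise_w)

clear = received_power(80.0, 50.0, 0.0)
rain = received_power(80.0, 50.0, 10.0)
print(f"attenuation: {10*np.log10(rain/clear):.1f} dB, SNR: {snr_db(rain):.1f} dB")
```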
Light detection and ranging (LiDAR) sensors are increasingly applied in automated driving vehicles. Microelectromechanical systems are an established technology for making LiDAR sensors cost-effective and mechanically robust for automotive applications. These sensors scan their environment using a pulsed laser to record a point cloud. The scanning process distorts objects that have a relative velocity to the sensor in the resulting point cloud. The consecutive generation and processing of points offers the opportunity to enrich the measured object data from the LiDAR sensors with velocity information, extracted with the help of machine learning and without the need for object tracking, turning the sensor into a so-called 4D-LiDAR. This allows object detection, object tracking, and sensor data fusion based on LiDAR sensor data to be optimized. Moreover, this affects all overlying levels of autonomous driving functions and advanced driver assistance systems. However, since such sensor-specific effects are rarely available in public datasets and the velocities of target objects are not included as ground truth in these datasets, it makes sense to enrich the limited real-world data with synthetic data. Therefore, this paper discusses how such datasets can be created and combined to efficiently predict velocities on real-world data using the authors' novel method, dubbed VeloPoints.
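To illustrate the motion distortion effect that VeloPoints exploits, here is a minimal synthetic sketch (function and variable names are ours, not from the paper): because a scanning LiDAR acquires points sequentially, a moving object is displaced by v·t at each point's acquisition time, shearing its shape in the point cloud.

```python
import numpy as np

def distort(points_xyz: np.ndarray, times_s: np.ndarray,
            v_xyz: np.ndarray) -> np.ndarray:
    """Apply the per-point displacement v * t to an undistorted object cloud."""
    return points_xyz + times_s[:, None] * v_xyz[None, :]

rng = np.random.default_rng(0)
obj = rng.uniform([-1, 10, 0], [1, 10.5, 1.5], size=(500, 3))  # box-shaped target
t = np.linspace(0.0, 0.1, len(obj))                            # 100 ms scan sweep
sheared = distort(obj, t, np.array([5.0, 0.0, 0.0]))           # 5 m/s lateral motion
print("x-extent grew from", np.ptp(obj[:, 0]).round(2),
      "to", np.ptp(sheared[:, 0]).round(2))                    # shear encodes velocity
```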
Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, where a collimated laser beam illuminates objects sequentially point-by-point to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, object tracking or sensor data fusion is needed. Scanning LiDAR sensors exhibit the motion distortion effect, which occurs when objects have a relative velocity to the sensor. Often, this effect is filtered out using sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method using an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) based on the motion distortion effect, without any sensor data fusion. This network was trained and evaluated with a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, one can estimate the velocity and direction of an OoI that moves independently of the sensor from a single point cloud using only one sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s⁻¹ and a two-sigma confidence interval of [−0.0008 m s⁻¹, 0.0017 m s⁻¹] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m s⁻¹ and a two-sigma confidence interval of [0.0138 m s⁻¹, 0.0170 m s⁻¹] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and can lead to more reliable velocity data due to the added redundancy for sensor data fusion.
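The evaluation metrics quoted above can be reproduced in a few lines; the sketch below is our own minimal version, reading the two-sigma confidence interval as mean error ± two standard errors of the mean, which is one plausible interpretation rather than the paper's exact procedure.

```python
import numpy as np

def rmse(pred: np.ndarray, truth: np.ndarray) -> float:
    """Root mean squared error of the velocity estimates."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def two_sigma_ci(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Two-sigma interval for the mean error, assuming roughly normal errors."""
    err = pred - truth
    mean, sem = err.mean(), err.std(ddof=1) / np.sqrt(len(err))
    return (float(mean - 2 * sem), float(mean + 2 * sem))

rng = np.random.default_rng(1)
truth = rng.uniform(-10, 10, 10_000)               # synthetic ground-truth velocities
pred = truth + rng.normal(0.0005, 0.12, truth.shape)  # synthetic residuals
print(rmse(pred, truth), two_sigma_ci(pred, truth))
```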
Automated vehicles use light detection and ranging (LiDAR) sensors for environmental scanning. However, the relative motion between the scanning LiDAR sensor and objects leads to a distortion of the point cloud. This phenomenon is known as the motion distortion effect; it significantly degrades the sensor's object detection capabilities and generates false negative or false positive errors. In this work, we have introduced ray tracing-based deterministic and analytical approaches to model the motion distortion effect on the scanning LiDAR sensor's performance for simulation-based testing. In addition, we have performed dynamic test drives at a proving ground to compare real LiDAR data with the motion distortion effect simulation data. The real-world scenarios, the environmental conditions, the digital twin of the scenery, and the object of interest (OOI) are replicated in the virtual environment of commercial software to obtain the synthetic LiDAR data. The real and the virtual test drives are compared frame by frame to validate the motion distortion effect modeling. The mean absolute percentage error (MAPE), the occupied cell ratio (OCR), and Baron's cross-correlation coefficient (BCC) are used to quantify the correlation between the virtual and the real LiDAR point cloud data. The results show that the deterministic approach matches the real measurements better than the analytical approach in scenarios in which the yaw rate of the ego vehicle changes rapidly.
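As a hedged sketch of this kind of grid-based comparison, the snippet below rasterizes two point clouds into occupancy grids, computes a simple occupied-cell ratio, and uses plain normalized cross-correlation as a stand-in for Baron's coefficient; the exact OCR and BCC definitions follow the cited literature.

```python
import numpy as np

def occupancy(points_xy: np.ndarray, res: float = 0.2,
              extent: float = 20.0) -> np.ndarray:
    """Binary occupancy grid of a point cloud's ground-plane projection."""
    bins = int(2 * extent / res)
    grid, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1], bins=bins,
                                range=[[-extent, extent], [-extent, extent]])
    return (grid > 0).astype(float)

def ocr(real: np.ndarray, sim: np.ndarray) -> float:
    """Share of reference-occupied cells also occupied in the simulation."""
    return float((real * sim).sum() / max(real.sum(), 1.0))

def ncc(real: np.ndarray, sim: np.ndarray) -> float:
    """Normalized cross-correlation of the two grids (stand-in for BCC)."""
    a, b = real - real.mean(), sim - sim.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```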
In this article, the optimization of the control circuit and path planning of a delta kinematic with the help of machine learning is presented. The described delta kinematic is primarily used for pick-and-place applications in the field of packaging machines. The optimization of the path planning procedure aims to make the workflow more efficient and flexible for commissioning the delta kinematic. By optimizing the control circuit using machine learning, mechanical oscillations and the deviation from the specified path are to be minimized. The possible use of a simulator for training, the prediction quality, and the implementation on the robot controller are discussed. Furthermore, the path planning procedure was optimized. For this purpose, an environment was implemented in which a reinforcement learning agent plans the path of the robot between a starting point and a target point in a time-optimized manner, considering interference contours, e.g., from the machine. The obtained results show the optimization of the robot by machine learning, with a root mean squared error of the predicted torques of 0.06025 Nm at a prediction time of around 0.125 ms, and the possibility of path planning with different criteria.
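A minimal, self-contained sketch of such a reinforcement learning setup (not the authors' implementation) might look as follows: each time step costs reward and entering an interference contour is penalized, so a time-optimal, collision-free policy is favored.

```python
import numpy as np

class PathEnv:
    """Toy point-to-point planning environment with one interference contour."""

    def __init__(self, start, goal, obstacle_center, obstacle_radius=0.1):
        self.pos = np.array(start, float)
        self.goal = np.array(goal, float)
        self.obs_c = np.array(obstacle_center, float)
        self.obs_r = obstacle_radius

    def step(self, action: np.ndarray):
        """action: bounded XY displacement; returns (state, reward, done)."""
        self.pos = self.pos + np.clip(action, -0.05, 0.05)
        dist = np.linalg.norm(self.goal - self.pos)
        reward = -1.0 - dist                     # per-step time and distance cost
        if np.linalg.norm(self.pos - self.obs_c) < self.obs_r:
            reward -= 100.0                      # interference contour penalty
        return self.pos.copy(), reward, dist < 0.02
```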
A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors
(2023)
In this work, we introduce a novel approach to model the rain and fog effect on light detection and ranging (LiDAR) sensor performance for the simulation-based testing of LiDAR systems. The proposed methodology allows for the simulation of the rain and fog effect using rigorous applications of the Mie scattering theory, in the time domain for transient analyses and at the point cloud level for spatial analyses. The time domain analysis permits us to benchmark the virtual LiDAR signal attenuation and signal-to-noise ratio (SNR) caused by rain and fog droplets. In addition, the detection rate (DR), false detection rate (FDR), and distance error d_error of the virtual LiDAR sensor due to rain and fog droplets are evaluated at the point cloud level. The mean absolute percentage error (MAPE) is used to quantify the agreement between simulation and real measurement results in the time domain and at the point cloud level for rain and fog droplets. The results of the simulation and the real measurements match well in the time domain and at the point cloud level if the simulated and real rain distributions are the same. The real and virtual LiDAR sensor performance degrades more under the influence of fog droplets than in rain.
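The point-cloud-level metrics named above could be computed along the following lines; the gating rule and thresholds are our simplifications, not the paper's exact evaluation.

```python
import numpy as np

def point_cloud_metrics(ranges_m: np.ndarray, ref_range_m: float,
                        n_expected: int, gate_m: float = 0.5):
    """DR, FDR, and d_error against a known reference target range.

    A detection counts as true if it falls within gate_m of the reference.
    """
    hits = ranges_m[np.abs(ranges_m - ref_range_m) <= gate_m]
    dr = len(hits) / n_expected                       # detection rate
    fdr = 1.0 - len(hits) / max(len(ranges_m), 1)     # false detection rate
    d_error = float(np.mean(hits - ref_range_m)) if len(hits) else float("nan")
    return dr, fdr, d_error

def mape(sim: np.ndarray, real: np.ndarray) -> float:
    """Mean absolute percentage error between simulated and real values."""
    return float(100.0 * np.mean(np.abs((sim - real) / real)))
```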
Developing, testing, and validating advanced driver assistance systems and automated driving functions can only be realized to a limited extent in real-world test drives due to a lack of scalability. IPG Automotive and Kempten University of Applied Sciences describe an efficient simulation toolchain that enables the seamless integration and exchange of different sensor models and system components.
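For orientation, a toolchain of this kind typically exchanges detections as ASAM Open Simulation Interface (OSI) messages between functional mock-up units. The snippet below uses the osi3 Python bindings to fill a SensorData message; the field names follow OSI 3.x, but should be checked against the OSI version used in the actual toolchain.

```python
# Hedged sketch: emitting one detected object via the OSI Python bindings.
from osi3.osi_sensordata_pb2 import SensorData

sensor_data = SensorData()
sensor_data.timestamp.seconds, sensor_data.timestamp.nanos = 12, 500_000_000

obj = sensor_data.moving_object.add()      # one DetectedMovingObject
obj.header.tracking_id.value = 1
obj.base.position.x = 42.0                 # detection in sensor coordinates [m]
obj.base.position.y = -1.5
obj.base.velocity.x = 5.0                  # estimated relative velocity [m/s]

payload = sensor_data.SerializeToString()  # bytes exchanged between FMUs
```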
Measurement performance evaluation of real and virtual automotive light detection and ranging (LiDAR) sensors is an active area of research. However, no commonly accepted automotive standards, metrics, or criteria exist to evaluate their measurement performance. ASTM International released the ASTM E3125-17 standard for the operational performance evaluation of 3D imaging systems commonly referred to as terrestrial laser scanners (TLS). This standard defines the specifications and static test procedures to evaluate the 3D imaging and point-to-point distance measurement performance of TLS. In this work, we have assessed the 3D imaging and point-to-point distance estimation performance of a commercial micro-electro-mechanical system (MEMS)-based automotive LiDAR sensor and its simulation model according to the test procedures defined in this standard. The static tests were performed in a laboratory environment. A subset of the static tests was also performed at the proving ground in natural environmental conditions to determine the 3D imaging and point-to-point distance measurement performance of the real LiDAR sensor. Furthermore, real scenarios and environmental conditions were replicated in the virtual environment of commercial software to verify the working performance of the LiDAR model. The evaluation results show that the LiDAR sensor and its simulation model under analysis pass all the tests specified in the ASTM E3125-17 standard. The standard helps to determine whether sensor measurement errors are due to internal or external influences. We have also shown that the 3D imaging and point-to-point distance estimation performance of LiDAR sensors significantly impacts the working performance of object recognition algorithms. This is why the standard can be beneficial in validating real and virtual automotive LiDAR sensors, at least in the early stage of development. Finally, the simulation and the real measurements show good agreement at the point cloud and object recognition levels.
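The central quantity of the standard's distance tests can be sketched as follows; this is a simplified stand-in in which the centroid replaces a proper target-geometry fit, and the reference distance is assumed to come from a higher-accuracy instrument.

```python
import numpy as np

def center(points_xyz: np.ndarray) -> np.ndarray:
    """Centroid as a simple stand-in for a fitted target center."""
    return points_xyz.mean(axis=0)

def p2p_distance_error(target_a: np.ndarray, target_b: np.ndarray,
                       ref_dist_m: float) -> float:
    """Point-to-point distance error: measured center distance minus reference."""
    measured = float(np.linalg.norm(center(target_a) - center(target_b)))
    return measured - ref_dist_m
```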