Keywords
- ASTM E3125-17 standard
- Automated Driving
- Driver assistance system
- Fieldbus
- FlexRay
- Intelligent vehicles
- LiDAR sensor
- Protocols
- Real-time systems
- Robot Operating System
As the development of advanced driver assistance systems (ADAS) continues, more and more software functions and sensors are being introduced to the market. This is accompanied by an increase in the amount of data that has to be transmitted to multiple receivers in the vehicle under hard real-time requirements. Deterministic and non-deterministic fieldbus protocols enable communication between sensors, actuators, and ECUs. For verifying and validating the developed software modules, but also for type approval, an objective and thus data-driven toolchain is mandatory. By using suitable middleware such as the Robot Operating System (ROS), the complexity of integrating multiple (reference) sensors as well as prototypical software functions can be broken down into subtasks and distributed across the hardware in a computationally efficient manner. Recording and manipulating sensor-ECU communication while driving is also possible under certain circumstances. However, to the best of our knowledge, no public ROS driver is available that integrates automotive-specific fieldbus protocols other than CAN. In this paper, we introduce a generic, open-source framework for integrating on-board communication over various fieldbus protocols and demonstrate its integration into ROS as a real-world use case. To validate the presented methodology, we perform a timing analysis of the presented ROS node and compare it to a ROS-independent reference measurement system while performing a standardized vehicle dynamics driving test. In addition, we objectively compare two different on-board sensors from a series-production vehicle with two distinct reference sensors in a real-world scenario.
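The abstract above does not include code, but the general pattern it describes (reading raw fieldbus frames on board and republishing them as ROS messages) can be sketched as follows. This is a minimal, hedged example assuming a SocketCAN backend via the python-can library and rospy; the topic name, payload encoding, and node name are illustrative assumptions, and a FlexRay interface would require a vendor-specific driver instead of SocketCAN.

```python
# Minimal sketch of a fieldbus-to-ROS bridge node (assumption: SocketCAN as the
# fieldbus backend; FlexRay would need vendor-specific hardware access).
# Topic name and payload encoding are illustrative, not taken from the paper.
import can                      # python-can
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("fieldbus_bridge")
    pub = rospy.Publisher("fieldbus/raw_frames", String, queue_size=100)
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    while not rospy.is_shutdown():
        frame = bus.recv(timeout=0.1)       # blocking read with timeout
        if frame is None:
            continue
        # Encode the frame as "timestamp id payload-hex"; downstream nodes can
        # decode it using a DBC/FIBEX description of the bus.
        pub.publish("%.6f %03X %s" % (frame.timestamp,
                                      frame.arbitration_id,
                                      frame.data.hex()))

if __name__ == "__main__":
    main()
```

Keeping the bridge node thin in this way leaves protocol-specific decoding to dedicated downstream nodes, which matches the modular task distribution the abstract attributes to ROS-based middleware.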
Measurement performance evaluation of real and virtual automotive light detection and ranging (LiDAR) sensors is an active area of research. However, no commonly accepted automotive standards, metrics, or criteria exist to evaluate their measurement performance. ASTM International released the ASTM E3125-17 standard for the operational performance evaluation of 3D imaging systems commonly referred to as terrestrial laser scanners (TLS). This standard defines the specifications and static test procedures to evaluate the 3D imaging and point-to-point distance measurement performance of TLS. In this work, we assessed the 3D imaging and point-to-point distance estimation performance of a commercial micro-electro-mechanical system (MEMS)-based automotive LiDAR sensor and its simulation model according to the test procedures defined in this standard. The static tests were performed in a laboratory environment. A subset of the static tests was also performed on the proving ground under natural environmental conditions to determine the 3D imaging and point-to-point distance measurement performance of the real LiDAR sensor. In addition, the real scenarios and environmental conditions were replicated in the virtual environment of a commercial software tool to verify the working performance of the LiDAR model. The evaluation results show that the LiDAR sensor and its simulation model under analysis pass all the tests specified in the ASTM E3125-17 standard. The standard also helps to determine whether sensor measurement errors are due to internal or external influences. We further show that the 3D imaging and point-to-point distance estimation performance of LiDAR sensors significantly impacts the working performance of object recognition algorithms. The standard can therefore be beneficial for validating real and virtual automotive LiDAR sensors, at least in the early stages of development. Furthermore, the simulation and the real measurements show good agreement at the point cloud and object recognition levels.
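To make the point-to-point distance evaluation concrete, the sketch below shows the core computation such a test rests on: estimating the centers of two reference targets from their LiDAR returns, measuring the center-to-center distance, and comparing it against a calibrated reference length. The target geometry, the centroid-based center estimate, and the synthetic data are illustrative assumptions; they are not the targets, procedures, or tolerances prescribed by ASTM E3125-17.

```python
# Hedged sketch in the spirit of a point-to-point distance test: compare the
# LiDAR-derived distance between two target centers with a calibrated reference
# length. Target geometry and segmentation are assumptions, not the standard's.
import numpy as np

def target_center(points: np.ndarray) -> np.ndarray:
    """Estimate a target's center as the centroid of its segmented returns."""
    return points.mean(axis=0)

def point_to_point_error(target_a: np.ndarray,
                         target_b: np.ndarray,
                         reference_length_m: float) -> float:
    """Signed error between measured center-to-center distance and reference."""
    measured = np.linalg.norm(target_center(target_a) - target_center(target_b))
    return measured - reference_length_m

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic returns scattered around two targets 10.000 m apart (x, y, z in m).
    a = rng.normal(loc=[0.0, 0.0, 1.0], scale=0.005, size=(200, 3))
    b = rng.normal(loc=[10.0, 0.0, 1.0], scale=0.005, size=(200, 3))
    err = point_to_point_error(a, b, reference_length_m=10.0)
    print(f"point-to-point error: {err * 1000:.2f} mm")
```

Running the same computation on the real sensor's point cloud and on the output of its simulation model is one way to compare the two at the point cloud level, as the abstract describes.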
Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, in which a collimated laser beam illuminates objects sequentially point by point to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, object tracking or sensor data fusion is needed. Scanning LiDAR sensors exhibit the motion distortion effect, which occurs when objects move relative to the sensor. This effect is often filtered out by means of sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method that uses an artificial neural network to estimate an object’s velocity and direction of motion in the sensor’s field of view (FoV) from the motion distortion effect, without any sensor data fusion. The network was trained and evaluated on a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, the velocity and direction of an OoI that moves independently of the sensor can be estimated from a single point cloud of a single sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s⁻¹ and a two-sigma confidence interval of [−0.0008 m s⁻¹, 0.0017 m s⁻¹] for the axis-wise estimation of an object’s relative velocity, and an RMSE of 0.0815 m s⁻¹ and a two-sigma confidence interval of [0.0138 m s⁻¹, 0.0170 m s⁻¹] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and can lead to more reliable velocity data due to the added redundancy for sensor data fusion.
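The error statistics quoted above can be reproduced from predicted and ground-truth velocities with a few lines of NumPy. The sketch below computes the RMSE and a two-sigma confidence interval of the mean error using the usual normal approximation (mean error ± 2 standard errors); whether the paper uses exactly this estimator, and the synthetic data shown, are assumptions for illustration only.

```python
# Sketch of the reported error statistics: RMSE of the velocity estimate and a
# two-sigma confidence interval of the mean error (normal approximation assumed).
import numpy as np

def rmse(predicted: np.ndarray, truth: np.ndarray) -> float:
    return float(np.sqrt(np.mean((predicted - truth) ** 2)))

def two_sigma_ci(predicted: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    errors = predicted - truth
    mean = errors.mean()
    sem = errors.std(ddof=1) / np.sqrt(errors.size)   # standard error of the mean
    return (float(mean - 2.0 * sem), float(mean + 2.0 * sem))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    v_true = rng.uniform(0.0, 20.0, size=5000)          # ground-truth speeds in m/s
    v_pred = v_true + rng.normal(0.0, 0.12, size=5000)  # simulated estimator noise
    print(f"RMSE: {rmse(v_pred, v_true):.4f} m/s")
    lo, hi = two_sigma_ci(v_pred, v_true)
    print(f"two-sigma CI of mean error: [{lo:.4f} m/s, {hi:.4f} m/s]")
```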