TY - INPR
A1 - Song, Rui
A1 - Liang, Chenwei
A1 - Xia, Yan
A1 - Zimmer, Walter
A1 - Cao, Hu
A1 - Caesar, Holger
A1 - Festag, Andreas
A1 - Knoll, Alois
T1 - CoDa-4DGS: Dynamic Gaussian Splatting with Context and Deformation Awareness for Autonomous Driving
N2 - Dynamic scene rendering opens new avenues in autonomous driving by enabling closed-loop simulations with photorealistic data, which is crucial for validating end-to-end algorithms. However, the complex and highly dynamic nature of traffic environments presents significant challenges in accurately rendering these scenes. In this paper, we introduce a novel 4D Gaussian Splatting (4DGS) approach, which incorporates context and temporal deformation awareness to improve dynamic scene rendering. Specifically, we employ a 2D semantic segmentation foundation model to self-supervise the 4D semantic features of Gaussians, ensuring meaningful contextual embedding. Simultaneously, we track the temporal deformation of each Gaussian across adjacent frames. By aggregating and encoding both semantic and temporal deformation features, each Gaussian is equipped with cues for potential deformation compensation within 3D space, facilitating a more precise representation of dynamic scenes. Experimental results show that our method improves 4DGS's ability to capture fine details in dynamic scene rendering for autonomous driving and outperforms other self-supervised methods in 4D reconstruction and novel view synthesis. Furthermore, CoDa-4DGS deforms semantic features with each Gaussian, enabling broader applications.
UR - https://doi.org/10.48550/arXiv.2503.06744
Y1 - 2025
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-58313
PB - arXiv
CY - Ithaca
ER -
TY - INPR
A1 - Han, Longfei
A1 - Kefferpütz, Klaus
A1 - Beyerer, Jürgen
T1 - 3D Extended Object Tracking based on Extruded B-Spline Side View Profiles
N2 - Object tracking is an essential task for autonomous systems. With the advancement of 3D sensors, these systems can better perceive their surroundings using effective 3D Extended Object Tracking (EOT) methods. Based on the observation that common road users are symmetrical on the right and left sides in the traveling direction, we focus on the side view profile of the object. In order to leverage the developments in 2D EOT and to balance the number of shape parameters in the tracking algorithm, we propose a 3D EOT method that describes the side view profile of the object with B-spline curves and extrudes it to obtain a 3D extent. The use of B-spline curves exploits their flexible representation power by allowing the control points to move freely. The algorithm is developed within an Extended Kalman Filter (EKF). For a thorough evaluation of this method, we use simulated traffic scenarios with different vehicle models and a real-world open dataset containing both radar and lidar data.
UR - https://doi.org/10.48550/arXiv.2503.10730
Y1 - 2025
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-58331
PB - arXiv
CY - Ithaca
ER -
TY - JOUR
A1 - Agrawal, Shiva
A1 - Bhanderi, Savankumar
A1 - Elger, Gordon
T1 - Infra-3DRC-FusionNet: Deep Fusion of Roadside Mounted RGB Mono Camera and Three-Dimensional Automotive Radar for Traffic User Detection
JF - Sensors
N2 - Mono RGB cameras and automotive radar sensors provide a complementary information set that makes them excellent candidates for sensor data fusion to obtain robust traffic user detection. This has been widely used in the vehicle domain and recently introduced in roadside-mounted smart infrastructure-based road user detection. However, the performance of the most commonly used late fusion methods often degrades when the camera fails to detect road users in adverse environmental conditions. The solution is to fuse the data using deep neural networks at an early stage of the fusion pipeline to exploit the complete data provided by both sensors. Research has been carried out in this area but is limited to vehicle-based sensor setups. Hence, this work proposes a novel deep neural network to jointly fuse RGB mono-camera images and 3D automotive radar point cloud data to obtain enhanced traffic user detection for the roadside-mounted smart infrastructure setup. Projected radar points are first used to generate anchors in image regions with a high likelihood of road users, including areas not visible to the camera. These anchors guide the prediction of 2D bounding boxes, object categories, and confidence scores. Valid detections are then used to segment radar points by instance, and the results are post-processed to produce final road user detections in the ground plane. The trained model is evaluated for different light and weather conditions using ground truth data from a lidar sensor. It provides a precision of 92%, a recall of 78%, and an F1-score of 85%. The proposed deep fusion methodology achieves absolute improvements of 33%, 6%, and 21% in precision, recall, and F1-score, respectively, compared to object-level spatial fusion output.
UR - https://doi.org/10.3390/s25113422
Y1 - 2025
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-59421
SN - 1424-8220
VL - 25
IS - 11
PB - MDPI
CY - Basel
ER -
TY - CHAP
A1 - Jagtap, Abhishek Dinkar
A1 - Song, Rui
A1 - Sadashivaiah, Sanath Tiptur
A1 - Festag, Andreas
T1 - V2X-Gaussians: Gaussian Splatting for Multi-Agent Cooperative Dynamic Scene Reconstruction
T2 - 2025 IEEE Intelligent Vehicles Symposium (IV)
UR - https://doi.org/10.1109/IV64158.2025.11097436
Y1 - 2025
SN - 979-8-3315-3803-3
SP - 1033
EP - 1039
PB - IEEE
CY - Piscataway
ER -
TY - JOUR
A1 - Bhanderi, Savankumar
A1 - Agrawal, Shiva
A1 - Elger, Gordon
T1 - Deep segmentation of 3+1D radar point cloud for real-time roadside traffic user detection
JF - Scientific Reports
N2 - Smart cities rely on intelligent infrastructure to enhance road safety, optimize traffic flow, and enable vehicle-to-infrastructure (V2I) communication. A key component of such infrastructure is an efficient and real-time perception system that accurately detects diverse traffic participants. Among various sensing modalities, automotive radar is one of the best choices due to its robust performance in adverse weather and low-light conditions.
However, due to low spatial resolution, traditional clustering-based approaches for radar object detection often struggle with vulnerable road user detection and nearby object separation. Hence, this paper proposes a deep learning-based 3+1D radar point cloud clustering methodology tailored for smart infrastructure-based perception applications. This approach first performs semantic segmentation of the radar point cloud, followed by instance segmentation to generate well-formed clusters with class labels using a deep neural network. It also detects single-point objects that conventional methods often miss. The described approach is developed and evaluated using a smart infrastructure-based sensor setup, and it performs segmentation of the point cloud in real time. Experimental results demonstrate an F1-macro score of 95.35% for semantic segmentation and a mean average precision (mAP) of 91.03% at an intersection over union (IoU) threshold of 0.5 for instance segmentation. Further, the complete pipeline operates at 43.61 frames per second with a memory requirement of less than 0.7 MB on the edge device (Nvidia Jetson AGX Orin).
UR - https://doi.org/10.1038/s41598-025-23019-6
Y1 - 2025
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-63458
SN - 2045-2322
VL - 15
PB - Springer Nature
CY - London
ER -
TY - INPR
A1 - Wan, Lei
A1 - Zhao, Jianxin
A1 - Wiedholz, Andreas
A1 - Bied, Manuel
A1 - Martinez de Lucena, Mateus
A1 - Jagtap, Abhishek Dinkar
A1 - Festag, Andreas
A1 - Fröhlich, Antônio
A1 - Keen, Hannan Ejaz
A1 - Vinel, Alexey
T1 - A Systematic Literature Review on Vehicular Collaborative Perception – A Computer Vision Perspective
UR - https://doi.org/10.48550/arXiv.2504.04631
Y1 - 2025
PB - arXiv
CY - Ithaca
ER -
TY - JOUR
A1 - Pandey, Amit
A1 - Kühn, Stephan
A1 - Weis, Alexander
A1 - Wunderle, Bernhard
A1 - Elger, Gordon
T1 - Evaluating optical performance degradation of automotive cameras under accelerated aging
JF - Optics and Lasers in Engineering
N2 - Automotive cameras are subject to environmental stress, which degrades performance by reducing image sharpness. To qualify for automotive use and to ensure that the cameras maintain sharpness according to the strict requirements of end-of-line testing, cameras have to undergo standardized accelerated aging tests. These tests are performed to demonstrate reliability and functional safety over the lifetime. Few studies have been published that demonstrate how aging contributes to the degradation of optical performance. This study addresses this gap by combining accelerated thermal aging with sharpness tracking to investigate degradation over time. To quantify sharpness degradation, six series-production cameras were subjected to accelerated thermal aging between −40 °C and +85 °C. Each camera underwent 2000 aging cycles, equivalent to 80% of their lifetime based on the Coffin-Manson model of the LV124 standard. Sharpness was measured by calculating the Spatial Frequency Response (SFR) from captured images of a double-cross reticle projected by a virtual object generator with three illumination wavelengths (625 nm, 520 nm, and 470 nm). The change in sharpness was evaluated with SFR50 and the SFR at 60 line pairs per millimeter (SFR@60). During the first 250 cycles, a wear-in effect was observed, where sharpness increased before leveling off, as reported in previous studies. The results also indicated only a slow decline in sharpness thereafter, demonstrating long-term stability.
Analysis indicated that before aging, the best focal plane was located closer to the focal position of the red wavelength, which lies furthest from the objective. By the end of the aging process, the best focal plane had shifted toward the focal position of the blue wavelength, which is located closer to the objective. This suggests a forward movement of the image sensor due to aging. Even after 2000 cycles, all cameras maintained an SFR@60 above 0.5. A Random Forest regression model was trained to predict the age based on the SFR curves, achieving a mean absolute error of 126 cycles and an R² score of 0.96.
UR - https://doi.org/10.1016/j.optlaseng.2025.109396
Y1 - 2025
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-64821
SN - 1873-0302
VL - 196
PB - Elsevier
CY - Amsterdam
ER -