TY - JOUR
A1 - Agrawal, Shiva
A1 - Bhanderi, Savankumar
A1 - Elger, Gordon
T1 - Infra-3DRC-FusionNet: Deep Fusion of Roadside Mounted RGB Mono Camera and Three-Dimensional Automotive Radar for Traffic User Detection
JF - Sensors
N2 - Mono RGB cameras and automotive radar sensors provide a complementary information set that makes them excellent candidates for sensor data fusion to obtain robust traffic user detection. This has been widely used in the vehicle domain and recently introduced in roadside-mounted smart infrastructure-based road user detection. However, the performance of the most commonly used late fusion methods often degrades when the camera fails to detect road users in adverse environmental conditions. The solution is to fuse the data using deep neural networks at an early stage of the fusion pipeline to use the complete data provided by both sensors. Research has been carried out in this area but is limited to vehicle-based sensor setups. Hence, this work proposes a novel deep neural network to jointly fuse RGB mono-camera images and 3D automotive radar point cloud data to obtain enhanced traffic user detection for the roadside-mounted smart infrastructure setup. Projected radar points are first used to generate anchors in image regions with a high likelihood of road users, including areas not visible to the camera. These anchors guide the prediction of 2D bounding boxes, object categories, and confidence scores. Valid detections are then used to segment radar points by instance, and the results are post-processed to produce final road user detections in the ground plane. The trained model is evaluated for different light and weather conditions using ground truth data from a lidar sensor. It provides a precision of 92%, a recall of 78%, and an F1-score of 85%. The proposed deep fusion methodology achieves absolute improvements of 33%, 6%, and 21% in precision, recall, and F1-score, respectively, compared to object-level spatial fusion output.
UR - https://doi.org/10.3390/s25113422
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-59421
Y1 - 2025
SN - 1424-8220
VL - 25
IS - 11
PB - MDPI
CY - Basel
ER -
TY - JOUR
A1 - Agrawal, Shiva
A1 - Bhanderi, Savankumar
A1 - Elger, Gordon
T1 - Semi-Automatic Annotation of 3D Radar and Camera for Smart Infrastructure-Based Perception
JF - IEEE Access
N2 - Environment perception using camera, radar, and/or lidar sensors has significantly improved in the last few years because of deep learning-based methods. However, a large group of these methods fall into the category of supervised learning, which requires a considerable amount of annotated data. Due to uncertainties in multi-sensor data, automating the data labeling process is extremely challenging; hence, it is performed manually to a large extent. Even though full automation of such a process is difficult, semi-automation can be a significant step to ease this process. However, the available work in this regard is still very limited; hence, in this paper, a novel semi-automatic annotation methodology is developed for labeling RGB camera images and 3D automotive radar point cloud data using a smart infrastructure-based sensor setup. This paper also describes a new method for 3D radar background subtraction to remove clutter and a new object category, GROUP, for radar-based object detection of closely located vulnerable road users. To validate the work, a dataset named INFRA-3DRC is created using this methodology, where 75% of the labels are automatically generated. In addition, a radar cluster classifier and an image classifier are developed, trained, and tested on this dataset, achieving accuracies of 98.26% and 94.86%, respectively. The dataset and Python scripts are available at https://fraunhoferivi.github.io/INFRA-3DRC-Dataset/.
UR - https://doi.org/10.1109/ACCESS.2024.3373310
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-46180
Y1 - 2024
SN - 2169-3536
VL - 12
SP - 34325
EP - 34341
PB - IEEE
CY - New York
ER -
TY - JOUR
A1 - Agrawal, Shiva
A1 - Bhanderi, Savankumar
A1 - Doycheva, Kristina
A1 - Elger, Gordon
T1 - Static multi-target-based auto-calibration of RGB cameras, 3D Radar, and 3D Lidar sensors
JF - IEEE Sensors Journal
KW - Autonomous vehicles
KW - camera
KW - feature extraction
KW - intelligent roadside infrastructure
KW - lidar
KW - radar
KW - sensor calibration
UR - https://doi.org/10.1109/JSEN.2023.3300957
Y1 - 2023
SN - 1530-437X
VL - 23
IS - 18
SP - 21493
EP - 21505
PB - IEEE
CY - Piscataway
ER -
TY - JOUR
A1 - Bhanderi, Savankumar
A1 - Agrawal, Shiva
A1 - Elger, Gordon
T1 - Deep segmentation of 3+1D radar point cloud for real-time roadside traffic user detection
JF - Scientific Reports
N2 - Smart cities rely on intelligent infrastructure to enhance road safety, optimize traffic flow, and enable vehicle-to-infrastructure (V2I) communication. A key component of such infrastructure is an efficient and real-time perception system that accurately detects diverse traffic participants. Among various sensing modalities, automotive radar is one of the best choices due to its robust performance in adverse weather and low-light conditions. However, due to low spatial resolution, traditional clustering-based approaches for radar object detection often struggle with vulnerable road user detection and nearby object separation. Hence, this paper proposes a deep learning-based 3+1D radar point cloud clustering methodology tailored for smart infrastructure-based perception applications. This approach first performs semantic segmentation of the radar point cloud, followed by instance segmentation to generate well-formed clusters with class labels using a deep neural network. It also detects single-point objects that conventional methods often miss. The described approach is developed and evaluated using a smart infrastructure-based sensor setup and performs segmentation of the point cloud in real time. Experimental results demonstrate a 95.35% F1-macro score for semantic segmentation and 91.03% mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 for instance segmentation. Further, the complete pipeline operates at 43.61 frames per second with a memory requirement of less than 0.7 MB on the edge device (Nvidia Jetson AGX Orin).
UR - https://doi.org/10.1038/s41598-025-23019-6
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-63458
Y1 - 2025
SN - 2045-2322
VL - 15
PB - Springer Nature
CY - London
ER -