Fraunhofer-Anwendungszentrum "Vernetzte Mobilität und Infrastruktur"
Enhancing traffic signal optimisation has the potential to improve urban traffic flow without expensive infrastructure modifications. While reinforcement learning (RL) techniques have demonstrated their effectiveness in simulation, real-world deployment remains a challenge: a deployable system must provide an action definition that real traffic controllers can execute while prioritising safety constraints and robust policies. This paper addresses the challenge with a novel action definition that optimises, at the parameter level, control programmes designed by traffic engineers. The complete proposed framework consists of a traffic situation estimation, a feature extractor, and a system that enables training on estimates of real-world traffic situations. It further allows multimodal optimisation, scalability, and continuous training after deployment. First simulative tests using this action definition show an average improvement of more than 20% in traffic flow over the baseline, the corresponding pre-optimised real-world control.
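To make the parameter-level action definition concrete, the following is a minimal sketch, assuming a normalised continuous RL action that is rescaled and clipped into engineer-approved parameter bounds; all parameter names and bound values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hedged sketch: the agent never switches signal phases directly; it only
# adjusts tunable parameters of an engineer-designed control programme,
# clipped to approved bounds. Names and bounds below are assumptions.
PARAM_BOUNDS = {
    "cycle_time_s":   (60.0, 120.0),  # total cycle length
    "offset_s":       (0.0, 120.0),   # coordination offset to neighbours
    "green_split_p1": (0.2, 0.6),     # share of cycle for phase group 1
    "green_split_p2": (0.2, 0.6),     # share of cycle for phase group 2
}

def action_to_parameters(action: np.ndarray) -> dict:
    """Map a normalised RL action in [-1, 1]^n to safe controller parameters."""
    params = {}
    for a, (name, (lo, hi)) in zip(action, PARAM_BOUNDS.items()):
        # Clipping keeps any out-of-range policy output inside the
        # engineer-approved safety envelope before rescaling to [lo, hi].
        a = float(np.clip(a, -1.0, 1.0))
        params[name] = lo + (a + 1.0) / 2.0 * (hi - lo)
    return params

params = action_to_parameters(np.array([0.3, -1.2, 0.0, 0.5]))
print(params)  # a deployable parameter set for the existing control programme
```

Because every action maps into a pre-approved parameter range of an existing control programme, any policy output remains directly executable on the real controller.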
Federated learning allows for cooperative training among distributed clients by sharing their locally learned model parameters, such as weights or gradients. However, as model size increases, the communication bandwidth required for deployment in wireless networks becomes a bottleneck. To address this, we propose a residual-based federated learning framework (ResFed) that transmits residuals instead of gradients or weights. By predicting model updates at both clients and the server, residuals are calculated as the difference between the updated and the predicted model and carry denser information than weights or gradients. We find that residuals are less sensitive to an increasing compression ratio than other parameters, and therefore apply lossy compression to the residuals to improve communication efficiency in federated training. At the same compression ratio, ResFed outperforms current weight- or gradient-based federated learning methods by over 1.4× in client-to-server communication on federated datasets including MNIST, FashionMNIST, SVHN, CIFAR-10, CIFAR-100, and FEMNIST, and can also be applied to reduce server-to-client communication costs.
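As a minimal sketch of the residual idea, assume client and server both run the same deterministic update predictor, so only the residual has to be transmitted; the linear-extrapolation predictor and top-k sparsification below are illustrative stand-ins for the prediction and lossy compression schemes, not necessarily the exact choices in ResFed.

```python
import numpy as np

# Hedged sketch: because both sides run predict_next deterministically,
# the server can rebuild the updated model from its own prediction plus
# the compressed residual sent by the client.

def predict_next(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Shared predictor: linear extrapolation from the last two model states."""
    return curr + (curr - prev)

def top_k_compress(x: np.ndarray, ratio: float = 0.1) -> np.ndarray:
    """Lossy compression: keep only the largest-magnitude entries."""
    k = max(1, int(x.size * ratio))
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
w_prev, w_curr = rng.normal(size=1000), rng.normal(size=1000)
w_updated = w_curr + 0.01 * rng.normal(size=1000)  # after local training

prediction = predict_next(w_prev, w_curr)  # computed on client AND server
residual = w_updated - prediction          # what the client actually sends
payload = top_k_compress(residual, ratio=0.1)

w_reconstructed = prediction + payload     # server-side reconstruction
```

The intuition is that a good predictor leaves a residual concentrated in few significant entries, which tolerates aggressive lossy compression better than raw weights or gradients.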
Environment perception using camera, radar, and/or lidar sensors has improved significantly in recent years thanks to deep learning-based methods. However, a large share of these methods fall into the category of supervised learning, which requires a considerable amount of annotated data. Due to uncertainties in multi-sensor data, automating the data labeling process is extremely challenging; hence, it is still performed manually to a large extent. Even though full automation of such a process is difficult, semi-automation can be a significant step towards easing it. Since the available work in this regard is still very limited, this paper develops a novel semi-automatic annotation methodology for labeling RGB camera images and 3D automotive radar point cloud data using a smart infrastructure-based sensor setup. The paper also describes a new method for 3D radar background subtraction to remove clutter, and a new object category, GROUP, used in radar-based object detection for closely located vulnerable road users. To validate the work, a dataset named INFRA-3DRC is created using this methodology, in which 75% of the labels are generated automatically. In addition, a radar cluster classifier and an image classifier are developed, trained, and tested on this dataset, achieving accuracies of 98.26% and 94.86%, respectively. The dataset and Python scripts are available at https://fraunhoferivi.github.io/INFRA-3DRC-Dataset/.
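A minimal sketch of how voxel-based 3D radar background subtraction could work is given below, assuming a background occupancy set accumulated from clutter-only frames (no road users present); the voxel size and the set-based grid are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

# Hedged sketch: radar returns that fall into voxels already occupied in
# clutter-only recordings are treated as static background and removed.
VOXEL = 0.5  # voxel edge length in metres (assumed)

def voxelize(points: np.ndarray) -> set:
    """Map Nx3 radar points (x, y, z) to a set of integer voxel keys."""
    return set(map(tuple, np.floor(points / VOXEL).astype(int)))

def build_background(clutter_frames: list) -> set:
    """Accumulate background voxels over frames recorded without road users."""
    bg = set()
    for frame in clutter_frames:
        bg |= voxelize(frame)
    return bg

def subtract_background(frame: np.ndarray, bg: set) -> np.ndarray:
    """Keep only points whose voxel is not part of the static background."""
    keys = np.floor(frame / VOXEL).astype(int)
    keep = np.array([tuple(k) not in bg for k in keys], dtype=bool)
    return frame[keep]

rng = np.random.default_rng(0)
bg = build_background([rng.uniform(-20, 20, (200, 3)) for _ in range(5)])
moving = subtract_background(rng.uniform(-20, 20, (50, 3)), bg)
```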
Collaborative perception in automated vehicles leverages the exchange of information between agents to improve perception results. Previous camera-based collaborative 3D perception methods typically employ 3D bounding boxes or bird's eye views as representations of the environment. However, these approaches fall short of offering a comprehensive 3D environmental prediction. To bridge this gap, we introduce the first method for collaborative 3D semantic occupancy prediction. In particular, it improves local 3D semantic occupancy predictions by hybrid fusion of (i) semantic and occupancy task features and (ii) compressed orthogonal attention features shared between vehicles. Additionally, because no collaborative perception dataset designed for semantic occupancy prediction exists, we augment a current collaborative perception dataset with 3D collaborative semantic occupancy labels for a more robust evaluation. The experimental findings highlight that (i) our collaborative semantic occupancy predictions surpass single-vehicle results by over 30%, and (ii) models anchored on semantic occupancy outperform state-of-the-art collaborative 3D detection techniques in subsequent perception applications, showing enhanced accuracy and richer semantic awareness of road environments.
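The following is a minimal sketch of the hybrid-fusion idea on 3D feature volumes, assuming concat-then-conv fusion and a 1×1×1 convolutional channel compressor standing in for the transmitted attention features; all module names and layer shapes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch: fuse (i) the ego vehicle's semantic and occupancy task
# features with (ii) channel-compressed features received from a
# cooperating vehicle, producing a refined 3D feature volume.
class HybridFusion(nn.Module):
    def __init__(self, c_task: int = 32, c_shared: int = 32, c_comp: int = 8):
        super().__init__()
        # 1x1x1 convs emulate bandwidth-saving compression before transmission
        self.compress = nn.Conv3d(c_shared, c_comp, kernel_size=1)
        self.decompress = nn.Conv3d(c_comp, c_shared, kernel_size=1)
        self.fuse = nn.Conv3d(2 * c_task + c_shared, c_task,
                              kernel_size=3, padding=1)

    def forward(self, sem_feat, occ_feat, other_feat):
        # "Transmit" the other vehicle's features at reduced channel depth
        received = self.decompress(self.compress(other_feat))
        return self.fuse(torch.cat([sem_feat, occ_feat, received], dim=1))

# Toy volumes: batch 1, 32 channels, 16^3 voxel grid
f = lambda: torch.randn(1, 32, 16, 16, 16)
out = HybridFusion()(f(), f(), f())
print(out.shape)  # torch.Size([1, 32, 16, 16, 16])
```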
Bird's eye view (BEV) perception is becoming increasingly important in the field of autonomous driving. It uses multi-view camera data to learn a transformer model that directly projects the perception of the road environment onto the BEV perspective. Training such a transformer, however, requires a large amount of data, and since camera data from road traffic are often private, they are typically not shared. Federated learning offers a solution that enables clients to collaboratively train models by exchanging model parameters rather than raw data. In this paper, we introduce FedBEVT, a federated transformer learning approach for BEV perception. To address two common data heterogeneity issues in FedBEVT, (i) diverse sensor poses and (ii) varying numbers of sensors in perception systems, we propose two approaches: Federated Learning with Camera-Attentive Personalization (FedCaP) and Adaptive Multi-Camera Masking (AMCM), respectively. To evaluate our method in real-world settings, we create a dataset covering four typical federated use cases. Our findings suggest that FedBEVT outperforms the baseline approaches in all four use cases, demonstrating the potential of our approach for improving BEV perception in autonomous driving.
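A minimal sketch of camera-attentive personalization in a FedAvg-style loop is shown below, assuming camera-specific layers can be identified by a name prefix; the prefix and the averaging of full weights (rather than updates) are illustrative assumptions, not FedBEVT's actual implementation.

```python
import torch

# Hedged sketch: parameters of camera-specific modules (identified here by
# an assumed name prefix) stay local to each client, while the remaining
# transformer weights are averaged across clients as in FedAvg.
CAMERA_PREFIX = "cam_embed."  # assumed naming of camera-specific layers

def fedavg_shared(client_states: list) -> dict:
    """Average only the shared (non-camera) parameters across clients."""
    shared_keys = [k for k in client_states[0]
                   if not k.startswith(CAMERA_PREFIX)]
    return {
        k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
        for k in shared_keys
    }

def apply_global(client_state: dict, global_state: dict) -> dict:
    """Overwrite shared weights with the global average; keep camera weights."""
    merged = dict(client_state)
    merged.update(global_state)
    return merged
```

Keeping the camera-related parameters local lets each client retain a model adapted to its own sensor poses and camera count, while still benefiting from globally shared representation layers.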
Cameras mounted on intelligent roadside infrastructure units and vehicles can detect humans on the road using state-of-the-art perception algorithms, but these algorithms are presently not trained to distinguish between a child and an adult. This distinction is crucial from a safety perspective, because a child may not follow all traffic rules, particularly while crossing the road, and may stop or start playing on the road. In such situations, separating children from adults is necessary. This paper addresses the problem by applying a transfer learning-based neural network approach to classify children and adults separately in camera images. The described work comprises image data collection, data annotation, transfer learning-based model development, and evaluation. Mask R-CNN (region-based convolutional neural network) with different backbone architectures and two different baselines is investigated, and the perception precision of the architectures after transfer learning is compared. The results reveal that the best-performing trained model detects and classifies children and adults separately in different road scenarios with a segmentation mask AP (average precision) of 85% and a bounding box AP of 92%.
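As a hedged illustration of the transfer-learning setup, the snippet below adapts a COCO-pretrained Mask R-CNN from torchvision to three classes (background, adult, child) by replacing its box and mask predictors; this is the standard torchvision fine-tuning recipe, not necessarily the paper's exact backbone or configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, adult, child

# Start from a COCO-pretrained Mask R-CNN with a ResNet-50 FPN backbone
# (the backbone choice here is an assumption for illustration).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the new 3-class problem.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Replace the mask prediction head accordingly.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(
    in_features_mask, 256, NUM_CLASSES
)

# The model is now ready for fine-tuning on the annotated child/adult data.
```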