
Uncertainty aware observation of surrounding traffic agents for interaction aware motion prediction models

The behaviour prediction module sits at the heart of the autonomous driving architecture. It is responsible for observing and predicting the behaviour of surrounding traffic agents. State-of-the-art behaviour prediction models are data-driven. Thanks to growing computational capabilities and access to large amounts of data, data-driven models such as deep neural networks have proven highly effective in the field of autonomous driving. However, deep learning methods raise a crucial question for the autonomous driving community: the question of reliability. Data-driven methods depend heavily on the data they are trained on, and it is hard to predict their performance in real autonomous driving scenarios. One solution is to gather more data in the hope of capturing all relevant real-life scenarios. However, gathering data is expensive, and it is hard to ascertain which scenarios need to be captured during data acquisition. Another option is to teach deep learning models to detect scenarios in which they have high uncertainty; in other words, to teach these models to say "I don't know". Such an uncertainty measure can be vital in preventing accidents, injuries and fatalities.

In the present thesis, an attempt is made to quantify the model uncertainty in motion prediction of surrounding traffic agents in challenging heterogeneous driving scenarios. Interaction- and uncertainty-aware models are built for 2-second and 3-second prediction and observation horizons. These models take a sequence of colour-coded bird's eye view (BEV) images of the surrounding scene as input and predict a sequence of future occupancy grid maps (OGMs) as output. Representing the scene as a BEV image addresses several critical problems in motion prediction: choosing the number of traffic agents in a scene, modelling spatial interactions among traffic agents, and encoding information such as position, class, size, heading direction and the underlying map. Additionally, by colour-coding the BEV images and exploiting the Hue-Saturation-Value (HSV) space, confidence measures from the object tracker module are encoded along the Saturation dimension. This aims to capture the uncertainty associated with the low-confidence detections of the object tracker, as sketched below.
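A minimal sketch of how such a Saturation-channel confidence encoding could look, assuming an OpenCV-based rasterisation; the class-to-hue mapping, the helper name rasterise_agent and the pixel conventions are illustrative assumptions, not taken from the thesis:

```python
# Hypothetical sketch: encode a tracker confidence score into the Saturation
# channel of an HSV colour-coded BEV image. Hue carries the agent class,
# Value marks occupancy; the exact coding used in the thesis may differ.
import numpy as np
import cv2

CLASS_HUE = {"car": 0, "pedestrian": 60, "cyclist": 120}  # assumed hue coding (OpenCV hue range 0-179)

def rasterise_agent(bev_hsv: np.ndarray, box_px: np.ndarray,
                    agent_class: str, confidence: float) -> None:
    """Paint one tracked agent into an HSV BEV image.

    bev_hsv     : (H, W, 3) uint8 image in OpenCV HSV convention
    box_px      : (4, 2) polygon of the agent footprint in pixel coordinates
    agent_class : semantic class used to pick the hue
    confidence  : tracker confidence in [0, 1], mapped onto the Saturation axis
    """
    hue = CLASS_HUE.get(agent_class, 90)
    sat = int(round(np.clip(confidence, 0.0, 1.0) * 255))  # low confidence -> desaturated colour
    val = 255                                               # full brightness marks occupied cells
    cv2.fillPoly(bev_hsv, [box_px.astype(np.int32)], (hue, sat, val))
```

With this convention, a confidently tracked car appears as a fully saturated patch, while a low-confidence detection fades towards grey, so the prediction network can, in principle, learn to treat it differently.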
In the present thesis, the epistemic uncertainty of the motion prediction model is modelled using the well-known Monte Carlo dropout (MC dropout) method. The output of the uncertainty-aware model consists of the mean of the predicted OGMs and an uncertainty-aware OGM that captures the Gaussian variance associated with each grid cell. One major problem in predicting OGMs is the choice of a loss function. In the present thesis, a novel loss function called Combi-loss is introduced. Combi-loss combines the Binary Cross-Entropy loss, which measures the loss cell by cell, with the Tversky loss, which measures the loss holistically. The performance of these models is tested on challenging nuScenes urban scenarios. Testing is also performed on object-tracker data, which enables a realistic estimate of actual performance and provides leeway to investigate whether the uncertainty-aware model captures the low-confidence outputs of the object tracker as uncertainty estimates. Finally, zoom data augmentation is performed to improve the resolution of the OGMs without modifying the network architecture. The results show promising prospects for capturing the uncertainty associated with motion prediction in a simple yet effective manner and contribute to a safer mobility of the future.
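The two ingredients named above can be illustrated with a short sketch, assuming a PyTorch implementation; the number of stochastic passes, the Tversky alpha/beta parameters and the loss weights are illustrative assumptions rather than the thesis settings:

```python
# Hypothetical sketch of (1) MC-dropout inference returning per-cell mean and
# variance of the predicted OGMs, and (2) a Combi-loss that adds a cell-wise
# Binary Cross-Entropy term to a holistic Tversky term.
import torch
import torch.nn.functional as F

def mc_dropout_predict(model: torch.nn.Module, bev_sequence: torch.Tensor, t: int = 20):
    """Mean OGM and per-cell variance from t stochastic forward passes."""
    model.train()  # keep dropout active at test time (a full implementation would
                   # switch only the dropout layers, leaving e.g. batch norm in eval mode)
    with torch.no_grad():
        samples = torch.stack([model(bev_sequence) for _ in range(t)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Holistic overlap term over the whole grid; alpha/beta weight false positives/negatives."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def combi_loss(pred_ogm, target_ogm, w_bce=1.0, w_tversky=1.0):
    """Cell-wise BCE plus set-level Tversky loss; pred_ogm is assumed to hold sigmoid outputs in [0, 1]."""
    return w_bce * F.binary_cross_entropy(pred_ogm, target_ogm) + w_tversky * tversky_loss(pred_ogm, target_ogm)
```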

Metadata
Author: Chidvilas Karpenahalli Ramakrishna
Language: English
Document Type: Master's Thesis
Reviewer(s) at THI: Christian Birkner, Ondřej Vaculin
Publishing Institution: Technische Hochschule Ingolstadt
Date of first Publication: 2022/02/10
Year of first Publication: 2022
Place of publication: Ingolstadt
Pages: XV, 112, XXIII
URN: urn:nbn:de:bvb:573-33856
Faculties / Institutes / Organizations: Fakultät Elektro- und Informationstechnik
Degree Programmes: International Automotive Engineering (M. Eng.)
Licence (German): Urheberrechtsschutz (copyright protection)
Release Date: 2023/04/04