TY - INPR
A1 - Schieber, Hannah
A1 - Duerr, Fabian
A1 - Schön, Torsten
A1 - Beyerer, Jürgen
T1 - Deep Sensor Fusion with Pyramid Fusion Networks for 3D Semantic Segmentation
UR - https://doi.org/10.48550/arXiv.2205.13629
Y1 - 2022
N1 - The published version of this preprint is also listed in this repository, see https://opus4.kobv.de/opus4-haw/frontdoor/index/index/docId/2543
PB - arXiv
CY - Ithaca
ER -

TY - THES
A1 - Schieber, Hannah
T1 - Camera and LiDAR based Deep Feature Fusion for 3D Semantic Segmentation
N2 - Camera and LiDAR sensors are widely used for the environmental perception of autonomous vehicles. Applying computer vision algorithms to the data captured by these sensors is key to understanding and interpreting the 3D environment, and 3D semantic segmentation is a common approach. This master's thesis investigates the deep fusion of camera and LiDAR features in neural networks to improve the 3D semantic segmentation of traffic scenes. The presented approach uses the so-called range view to project LiDAR scans into 2D images, which can be processed by CNNs that have already been applied successfully in the camera domain. Building upon a state-of-the-art approach from the camera domain, a CNN is adapted and optimized for range view semantic segmentation. A deep fusion architecture is then built from CNNs of the camera domain and the adapted range view CNN, fusing camera and LiDAR features multiple times; different fusion strategies and fusion locations within the architecture are considered. Two fusion locations are investigated, resulting in two deep fusion approaches. Both approaches extract feature maps of the individual modalities at different stages of the backbones. In the first approach, the feature maps are fused and then processed by several modules to leverage semantically meaningful information. In the second approach, the fused feature maps influence each other in a top-down manner, from high to low resolutions, and in a bottom-up manner, from low to high resolutions, before being processed by the same modules as in the first approach. Optionally, a feature refinement in the manner of a late fusion step is presented, which refines the features at the last scale before the final semantic segmentation. The deep fusion architecture is evaluated on two challenging outdoor datasets. It outperforms the early and late fusion baselines built upon the same architectures as the presented deep fusion, as well as all range image based LiDAR approaches and deep fusion approaches on the used datasets.
Y1 - 2021
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-33038
CY - Ingolstadt
ER -

TY - CHAP
A1 - Schieber, Hannah
A1 - Duerr, Fabian
A1 - Schön, Torsten
A1 - Beyerer, Jürgen
T1 - Deep Sensor Fusion with Pyramid Fusion Networks for 3D Semantic Segmentation
T2 - 2022 IEEE Intelligent Vehicles Symposium (IV)
UR - https://doi.org/10.1109/IV51971.2022.9827113
KW - point cloud compression
KW - image segmentation
KW - laser radar
KW - three-dimensional displays
KW - head
KW - fuses
KW - semantics
Y1 - 2022
SN - 978-1-6654-8821-1
N1 - A preprint version of this publication is also listed in this repository, see https://opus4.kobv.de/opus4-haw/frontdoor/index/index/docId/4800
SP - 375
EP - 381
PB - IEEE
CY - Piscataway
ER -