
000 Computer science, information & general works

Refine

Author

  • Maier, Andreas (20)
  • Riehle, Dirk (9)
  • Eskofier, Bjoern M. (7)
  • Barcomb, Ann (6)
  • Hornegger, Joachim (5)
  • Wellein, Gerhard (5)
  • Emmert, Martin (4)
  • Hager, Georg (4)
  • Kaufmann, Andreas (4)
  • Aubreville, Marc (3)

Year of publication

  • 2021 (3)
  • 2020 (30)
  • 2019 (25)
  • 2018 (20)
  • 2017 (17)
  • 2016 (13)
  • 2015 (14)
  • 2014 (18)
  • 2013 (17)
  • 2012 (4)

Document Type

  • Doctoral Thesis (72)
  • Article (64)
  • Report (13)
  • Conference Proceeding (7)
  • PeriodicalPart (7)
  • Book (6)
  • Other (3)
  • Working Paper (3)
  • Habilitation (1)

Language

  • English (131)
  • German (44)
  • Multiple Languages (1)

Has Fulltext

  • yes (176)

Keywords

  • - (25)
  • machine learning (6)
  • Betriebssystem (4)
  • Optimierung (4)
  • Simulation (4)
  • deep learning (4)
  • 1743-1960 (3)
  • Biography (3)
  • Computerforensik (3)
  • Dictionary (3)

Institute

  • Technische Fakultät (95)
  • Department Informatik (35)
  • Fakultätsübergreifend / Sonstige Einrichtung -ohne weitere Spezifikation- (13)
  • Medizinische Fakultät (5)
  • Rechts- und Wirtschaftswissenschaftliche Fakultät (5)
  • Fachbereich Wirtschaftswissenschaften (3)
  • Medizinische Fakultät -ohne weitere Spezifikation- (3)
  • Department Elektrotechnik-Elektronik-Informationstechnik (2)
  • Department Medienwissenschaften und Kunstgeschichte (2)
  • Naturwissenschaftliche Fakultät -ohne weitere Spezifikation- (2)

176 search hits (results 1 to 20 shown)

Automatic dementia screening and scoring by applying deep learning on clock-drawing tests (2020)
Chen, Shuqing ; Stromer, Daniel ; Alnasser Alabdalrahim, Harb ; Schwab, Stefan ; Weih, Markus ; Maier, Andreas
Dementia is one of the most common neurological syndromes in the world. Usually, diagnoses are made based on paper-and-pencil tests and scored depending on personal judgments of experts. This technique can introduce errors and has high inter-rater variability. To overcome these issues, we present an automatic assessment of the widely used paper-based clock-drawing test by means of deep neural networks. Our study includes a comparison of three modern architectures: VGG16, ResNet-152, and DenseNet-121. The dataset consisted of 1315 individuals. To deal with the limited amount of data, which also included several dementia types, we used optimization strategies for training the neural network. The outcome of our work is a standardized and digital estimation of the dementia screening result and severity level for an individual. We achieved accuracies of 96.65% for screening and up to 98.54% for scoring, surpassing the reported state-of-the-art as well as human accuracies. Thanks to the digital format, the paper-based test can simply be scanned with a mobile device and then evaluated even in areas with staff shortages or without available clinical experts.
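The screening step described above is, at its core, fine-tuning a pretrained image classifier on scanned test sheets. A minimal sketch of such a setup with PyTorch and torchvision follows; the DenseNet-121 backbone matches one of the compared architectures, but the folder layout, two-class labeling, and hyperparameters are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

# Hypothetical fine-tuning setup for binary dementia screening on scanned
# clock-drawing tests; folder layout and hyperparameters are assumptions.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("clock_drawings/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.densenet121(pretrained=True)          # one of the compared backbones
model.classifier = nn.Linear(model.classifier.in_features, 2)  # screening: healthy vs. impaired

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                         # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```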
A completely annotated whole slide image dataset of canine breast cancer to aid human breast cancer research (2020)
Aubreville, Marc ; Bertram, Christof A. ; Donovan, Taryn A. ; Marzahl, Christian ; Maier, Andreas ; Klopfleisch, Robert
Canine mammary carcinoma (CMC) has been used as a model to investigate the pathogenesis of human breast cancer and the same grading scheme is commonly used to assess tumor malignancy in both. One key component of this grading scheme is the density of mitotic figures (MF). Current publicly available datasets on human breast cancer only provide annotations for small subsets of whole slide images (WSIs). We present a novel dataset of 21 WSIs of CMC completely annotated for MF. For this, a pathologist screened all WSIs for potential MF and structures with a similar appearance. A second expert blindly assigned labels, and for non-matching labels, a third expert assigned the final labels. Additionally, we used machine learning to identify previously undetected MF. Finally, we performed representation learning and two-dimensional projection to further increase the consistency of the annotations. Our dataset consists of 13,907 MF and 36,379 hard negatives. We achieved a mean F1-score of 0.791 on the test set and of up to 0.696 on a human breast cancer dataset. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.13182857
Efficient trajectory optimization for curved running using a 3D musculoskeletal model with implicit dynamics (2020)
Nitschke, Marlies ; Dorschky, Eva ; Heinrich, Dieter ; Schlarb, Heiko ; Eskofier, Bjoern M. ; Koelewijn, Anne D. ; van den Bogert, Antonie J.
Trajectory optimization with musculoskeletal models can be used to reconstruct measured movements and to predict changes in movements in response to environmental changes. It enables an exhaustive analysis of joint angles, joint moments, ground reaction forces, and muscle forces, among others. However, its application is still limited to simplified problems in two dimensional space or straight motions. The simulation of movements with directional changes, e.g. curved running, requires detailed three dimensional models which lead to a high-dimensional solution space. We extended a full-body three dimensional musculoskeletal model to be specialized for running with directional changes. Model dynamics were implemented implicitly and trajectory optimization problems were solved with direct collocation to enable efficient computation. Standing, straight running, and curved running were simulated starting from a random initial guess to confirm the capabilities of our model and approach: efficacy, tracking and predictive power. Altogether the simulations required 1 h 17 min and corresponded well to the reference data. The prediction of curved running using straight running as tracking data revealed the necessity of avoiding interpenetration of body segments. In summary, the proposed formulation is able to efficiently predict a new motion task while preserving dynamic consistency. Hence, labor-intensive and thus costly experimental studies could be replaced by simulations for movement analysis and virtual product design.
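The core technique, trajectory optimization with implicit dynamics solved by direct collocation, can be illustrated on a much simpler system than a 3D musculoskeletal model. The sketch below tracks a reference trajectory with a point mass whose dynamics enter as implicit equality constraints on a time grid (trapezoidal collocation); the system, grid size, and solver settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy direct collocation: a point mass (double integrator) tracks a reference
# position trajectory; the dynamics are enforced as implicit defect constraints.
N, T = 20, 1.0
h = T / (N - 1)
t = np.linspace(0.0, T, N)
x_ref = np.sin(np.pi * t)                  # reference positions to track

def unpack(z):
    # decision vector: positions, velocities, accelerations (controls)
    return z[:N], z[N:2 * N], z[2 * N:]

def objective(z):
    x, v, a = unpack(z)
    return np.sum((x - x_ref) ** 2) + 1e-3 * np.sum(a ** 2)

def defects(z):
    # implicit dynamics residuals x' = v, v' = a (trapezoidal rule)
    x, v, a = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * h * (a[1:] + a[:-1])
    return np.concatenate([dx, dv])

res = minimize(objective, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 500})
x_opt, _, _ = unpack(res.x)
print("tracking RMSE:", np.sqrt(np.mean((x_opt - x_ref) ** 2)))
```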
Deep learning algorithms out-perform veterinary pathologists in detecting the mitotically most active tumor region (2020)
Aubreville, Marc ; Bertram, Christof A. ; Marzahl, Christian ; Gurtner, Corinne ; Dettwiler, Martina ; Schmidt, Anja ; Bartenschlager, Florian ; Merz, Sophie ; Fragoso, Marco ; Kershaw, Olivia ; Klopfleisch, Robert ; Maier, Andreas
Manual count of mitotic figures, which is determined in the tumor region with the highest mitotic activity, is a key parameter of most tumor grading schemes. It can be, however, strongly dependent on the area selection due to uneven mitotic figure distribution in the tumor section. We aimed to assess how strongly the area selection can impact the mitotic count, which is known to have high inter-rater disagreement. On a data set of 32 whole slide images of H&E-stained canine cutaneous mast cell tumor, fully annotated for mitotic figures, we asked eight veterinary pathologists (five board-certified, three in training) to select a field of interest for the mitotic count. To assess the potential difference in the mitotic count, we compared the mitotic count of the selected regions to the overall distribution on the slide. Additionally, we evaluated three deep learning-based methods for the assessment of highest mitotic density: In one approach, the model directly predicts the mitotic count for the presented image patches as a regression task. The second method derives a segmentation mask for mitotic figures, which is then used to obtain a mitotic density. Finally, we evaluated a two-stage object-detection pipeline based on state-of-the-art architectures to identify individual mitotic figures. We found that the predictions by all models were, on average, better than those of the experts. The two-stage object detector performed best and outperformed most of the human pathologists on the majority of tumor cases. The correlation between the predicted and the ground truth mitotic count was also best for this approach (0.963–0.979). Further, we found considerable differences in position selection between pathologists, which could partially explain the high variance that has been reported for the manual mitotic count. To achieve better inter-rater agreement, we propose to use a computer-based area selection to support the pathologist in the manual mitotic count.
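The underlying selection task, finding the field with the highest mitotic density given point annotations, can be sketched as a simple sliding-window count. The coordinates, slide dimensions, and field size below are invented for illustration and are not data from the study.

```python
import numpy as np

# Simplified sketch: pick the region of a whole slide image with the most
# annotated mitotic figures; all numbers are made up for illustration.
rng = np.random.default_rng(0)
mf_xy = rng.uniform(0, 50_000, size=(1500, 2))   # mitotic figure centers (px)
field_w = field_h = 5_000                        # size of one candidate region
stride = 1_000

best_count, best_origin = -1, None
for x0 in range(0, 50_000 - field_w + 1, stride):
    for y0 in range(0, 50_000 - field_h + 1, stride):
        inside = ((mf_xy[:, 0] >= x0) & (mf_xy[:, 0] < x0 + field_w) &
                  (mf_xy[:, 1] >= y0) & (mf_xy[:, 1] < y0 + field_h))
        count = int(inside.sum())
        if count > best_count:
            best_count, best_origin = count, (x0, y0)

print(f"densest field at {best_origin} with {best_count} mitotic figures")
```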
Machine Learning and Deformation Modeling for Workflow-Compliant Image Fusion during Endovascular Aortic Repair (2021)
Breininger, Katharina
Fluoroscopy-guided endovascular aortic repair (EVAR) has become the predominant treatment strategy for elective repair of abdominal aortic aneurysms in many western countries. During the procedure, stent grafts are implanted into the vasculature to reduce the pressure on the vessel wall and prevent a potentially fatal aneurysm rupture. The fusion of preoperative information with intraoperative fluoroscopy has garnered considerable interest as a means to reduce the use of nephrotoxic contrast agent and to decrease radiation exposure and procedure time, thus limiting the negative side effects of the procedure. A rigid overlay of pre- and intraoperative images, however, disregards the substantial deformations caused by the endovascular instruments. This thesis proposes and analyses different approaches to maintaining the usefulness of image fusion during EVAR by identifying and modeling the instrument-induced deformation. Particular attention is given to compliance with the interventional workflow, specifically in terms of underlying assumptions, requirements and computational costs. An algorithmic pipeline is developed that allows for the segmentation of relevant instruments in fluoroscopic images, the reconstruction of the instrument shape from single X-ray projections and the intraoperative deformation modeling based on this information. For instrument segmentation, a deep learning approach is proposed that is able to reliably identify and distinguish stent grafts, guidewires and catheters in a multi-task setting. In contrast to prior methods applied to these tasks, the approach requires neither an explicit model of the stent graft nor a handcrafted segmentation pipeline for each instrument. To allow for deformation modeling in 3-D, a method is designed that recovers the 3-D instrument shape from a single projection image. This avoids cumbersome repositioning of the fluoroscopic C-arm system. The approach estimates a second, virtual view of the wire based on the preoperative information that takes the intraoperative vessel deformation into account. To model the deformation solely on the instrument shape, an as-rigid-as-possible modeling is devised that allows to account for the interaction between the instrument and a surface model of the vessel in a flexible manner. This is extended by a semi-automatic approach that adapts the deformation in a "one-click" scenario and further increases the accuracy of the deformation modeling. In contrast to previous methods, a bone-based initial alignment of pre- and intraoperative data suffices for accurate deformation modeling. Other approaches that assess the deformation are either based on computationally expensive finite element analysis, require a contrast-enhanced acquisition of the aortic vessel tree or demand complex user interaction. The pipeline is able to adapt the preoperative information to match the intraoperative deformation without the need for contrast injections. Still, available information can be integrated by using the semi-automatic method, resulting in a high in-plane accuracy of 0.5 mm at relevant anatomical landmarks. While each step of the proposed pipeline constitutes a value of its own, the proposed methods can be applied successively and allow for an adaptation from X-ray segmentation to 3-D deformation modeling in less than 10 s, integrating smoothly with the interventional workflow. The results on clinical data show the potential to further improve navigation, reduce the use of nephrotoxic contrast agents and decrease radiation exposure, ultimately increasing the safety of both patients and medical personnel.
Fully Automated 3D Cardiac MRI Localisation and Segmentation Using Deep Neural Networks (2020)
Vesal, Sulaiman ; Maier, Andreas ; Ravikumar, Nishant
Cardiac magnetic resonance (CMR) imaging is used widely for morphological assessment and diagnosis of various cardiovascular diseases. Deep learning approaches based on 3D fully convolutional networks (FCNs) have improved state-of-the-art segmentation performance in CMR images. However, previous methods have employed several pre-processing steps and have focused primarily on segmenting low-resolution images. A crucial step in any automatic segmentation approach is to first localize the cardiac structure of interest within the MRI volume, to reduce false positives and computational complexity. In this paper, we propose two strategies for localizing and segmenting the heart ventricles and myocardium, termed multi-stage and end-to-end, using a 3D convolutional neural network. Our method consists of an encoder-decoder network that is first trained to predict a coarse localized density map of the target structure at a low resolution. Subsequently, a second similar network employs this coarse density map to crop the image at a higher resolution, and consequently, segment the target structure. For the latter, the same two-stage architecture is trained end-to-end. The 3D U-Net with some architectural changes (referred to as 3D DR-UNet) was used as the base architecture in this framework for both the multi-stage and end-to-end strategies. Moreover, we investigate whether the incorporation of coarse features improves the segmentation. We evaluate the two proposed segmentation strategies on two cardiac MRI datasets, namely, the Automatic Cardiac Segmentation Challenge (ACDC) STACOM 2017, and Left Atrium Segmentation Challenge (LASC) STACOM 2018. Extensive experiments and comparisons with other state-of-the-art methods indicate that the proposed multi-stage framework consistently outperforms the rest in terms of several segmentation metrics. The experimental results highlight the robustness of the proposed approach, and its ability to generate accurate high-resolution segmentations, despite the presence of varying degrees of pathology-induced changes to cardiac morphology and image appearance, low contrast, and noise in the CMR volumes.
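Segmentation quality in such studies is typically reported as the Dice coefficient between predicted and reference masks. A generic implementation of that metric (not the authors' code) looks like this:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of any shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D volumes standing in for a predicted and a reference ventricle mask.
rng = np.random.default_rng(1)
reference = rng.random((32, 64, 64)) > 0.7
prediction = reference.copy()
prediction[:, :2, :] = ~prediction[:, :2, :]     # perturb a few voxels
print(f"Dice: {dice(prediction, reference):.3f}")
```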
High Precision Particle Swarm Optimization Algorithm (HiPPSO) (2020)
Raß, Alexander
Particle Swarm Optimization (PSO) is a nature-inspired meta-heuristic adaptable to continuous optimization problems. To avoid numerical instabilities or artifacts it is necessary to evaluate floating point calculations with high precision. Our High Precision Particle Swarm Optimization (HiPPSO) software realizes this demand. Additionally our software provides an automatic procedure to adjust precision if it is necessary for accurate evaluations. This enables a fast execution time because the software always evaluates the calculations with suitable precision and does not use too much precision if it is not necessary. HiPPSO is implemented in C++ and has a very flexible class hierarchy to replace subroutines on purpose or extend functionality by simply implementing abstract classes. The software is available on a GitHub repository at https://github.com/alexander-rass/HiPPSO.
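To make the idea of swarm optimization under high-precision arithmetic concrete, here is a minimal particle swarm loop on the sphere function using Python's arbitrary-precision Decimal type at a fixed working precision. This is only an illustration; HiPPSO itself is the C++ implementation linked above and adjusts precision automatically.

```python
from decimal import Decimal, getcontext
import random

# Minimal PSO on the sphere function, evaluated with arbitrary-precision
# Decimal arithmetic (fixed precision here, unlike HiPPSO's automatic scheme).
getcontext().prec = 60
DIM, SWARM, STEPS = 2, 10, 200
W, C1, C2 = Decimal("0.7"), Decimal("1.5"), Decimal("1.5")

def sphere(x):
    return sum(xi * xi for xi in x)

def rand_dec():
    return Decimal(str(random.random()))

pos = [[Decimal(str(random.uniform(-5, 5))) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[Decimal(0) for _ in range(DIM)] for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)

for _ in range(STEPS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * rand_dec() * (pbest[i][d] - pos[i][d])
                         + C2 * rand_dec() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)

print("best value:", sphere(gbest))
```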
Will We Ever Have Conscious Machines? (2020)
Krauss, Patrick ; Maier, Andreas
The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical question for centuries. The main problem is that self-awareness cannot be observed from an outside perspective and the distinction of being really self-aware or merely a clever imitation cannot be answered without access to knowledge about the mechanism's inner workings. We investigate common machine learning approaches with respect to their potential ability to become self-aware. We realize that many important algorithmic steps toward machines with a core consciousness have already been taken.
Performance engineering for real and complex tall & skinny matrix multiplication kernels on GPUs (2021)
Ernst, Dominik ; Hager, Georg ; Thies, Jonas ; Wellein, Gerhard
General matrix-matrix multiplications with double-precision real and complex entries (DGEMM and ZGEMM) in vendor-supplied BLAS libraries are best optimized for square matrices but often show bad performance for tall & skinny matrices, which are much taller than wide. NVIDIA's current CUBLAS implementation delivers only a fraction of the potential performance as indicated by the roofline model in this case. We describe the challenges and key characteristics of an implementation that can achieve close to optimal performance. We further evaluate different strategies of parallelization and thread distribution and devise a flexible, configurable mapping scheme. To ensure flexibility and allow for highly tailored implementations we use code generation combined with autotuning. For a large range of matrix sizes in the domain of interest we achieve at least 2/3 of the roofline performance and often substantially outperform state-of-the-art CUBLAS results on an NVIDIA Volta GPGPU.
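The roofline bound referred to above follows from the arithmetic intensity of the operation. A small sketch of that estimate for a tall & skinny DGEMM is given below; the peak compute and bandwidth numbers are placeholder values for a Volta-class GPU, not measured figures.

```python
# Rough roofline estimate for a tall & skinny real-valued DGEMM C = A @ B with
# A of size M x K and B of size K x N (M >> K, N). Peak numbers are placeholders.
def roofline_gemm(M, K, N, peak_flops=7.0e12, mem_bw=8.5e11, bytes_per_word=8):
    flops = 2.0 * M * K * N
    # Minimum traffic: read A and B once, write C once.
    traffic = bytes_per_word * (M * K + K * N + M * N)
    intensity = flops / traffic                     # flop per byte
    bound = min(peak_flops, mem_bw * intensity)     # roofline performance bound
    return intensity, bound

I, P = roofline_gemm(M=10_000_000, K=4, N=4)
print(f"arithmetic intensity: {I:.2f} flop/byte, bound: {P / 1e9:.0f} GFLOP/s")
```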
Hybrid Application Mapping for Composable Many-Core Systems: Overview and Future Perspective (2020)
Pourmohseni, Behnaz ; Glaß, Michael ; Henkel, Jörg ; Khdr, Heba ; Rapp, Martin ; Richthammer, Valentina ; Schwarzer, Tobias ; Smirnov, Fedor ; Spieck, Jan ; Teich, Jürgen ; Weichslgartner, Andreas ; Wildermann, Stefan
Many-core platforms are rapidly expanding in various embedded areas as they provide the scalable computational power required to meet the ever-growing performance demands of embedded applications and systems. However, the huge design space of possible task mappings, the unpredictable workload dynamism, and the numerous non-functional requirements of applications in terms of timing, reliability, safety, and so forth impose significant challenges when designing many-core systems. Hybrid Application Mapping (HAM) is an emerging class of design methodologies for many-core systems which addresses these challenges via an incremental (per-application) mapping scheme: The mapping process is divided into (i) a design-time Design Space Exploration (DSE) step per application to obtain a set of high-quality mapping options and (ii) a run-time system management step in which applications are launched dynamically (on demand) using the precomputed mappings. This paper provides an overview of HAM and the design methodologies developed in line with it. We introduce the basics of HAM and elaborate on the way it addresses the major challenges of application mapping in many-core systems. We provide an overview of the main challenges encountered when employing HAM and survey a collection of state-of-the-art techniques and methodologies proposed to address these challenges. We finally present an overview of open topics and challenges in HAM, provide a summary of emerging trends for addressing them particularly using machine learning, and outline possible future directions. While there exists a large body of HAM methodologies, the techniques studied in this paper are developed, to a large extent, within the scope of invasive computing. Invasive computing introduces resource awareness into applications and employs explicit resource reservation to enable incremental application mapping and dynamic system management.
S𝔸𝕄PLE: A Software Suite to Predict Consolidation and Microstructure for Powder Bed Fusion Additive Manufacturing (2020)
Markl, Matthias ; Rausch, Alexander M. ; Küng, Vera E. ; Körner, Carolin
Powder bed fusion comprises all layer‐by‐layer additive manufacturing technologies of parts built from a powder bed. To exploit the advantages of near‐net shape manufacturing of complex geometries, in contrast to conventional manufacturing techniques, it is essential to understand the underlying physical phenomena occurring during processing for a broad range of different process scenarios. Experimental approaches are costly in time and material and provide only limited access inside the process. However, to understand the process behavior and predict final properties of parts, numerical approaches are powerful tools. This work presents the software suite S𝔸𝕄PLE (Simulation of Additive Manufacturing on the Powder scale using a Laser or Electron beam) which simulates the consolidation and microstructure evolution during beam‐based powder bed fusion processes. It is based on a mesoscopic approach, in which statistical powder beds, melt pool dynamics, evaporation effects, and microstructure evolution are considered and can simulate the build‐up of more than 100 layers. The underlying models and algorithms of the software including a newly applied thermal model are described. Finally, the unique potential of the software is demonstrated by reviewing the influence of various powder bed properties, the effects of evaporation, and the grain structure evolution in the process.
Beiträge zur szenarienbegleiteten Entwicklung von automatisierten Fahrfunktionen (2021)
Bock, Florian
The complexity of developing modern vehicles is growing continuously, driven in particular by automated driving functions. In contrast to earlier driver assistance systems, which only support the driver, future systems must be able to take full control of the vehicle. This places high demands on the quality and robustness of these systems and, consequently, on their development processes. Nevertheless, information is still exchanged between the individual development phases manually and in text-based form (e.g. as a requirements specification). Previous attempts to improve this with model-based approaches have mostly failed because developers were unwilling to adapt, given the initial learning and migration effort. Since simulation is relied on more and more to test modern driving functions, the scenario descriptions required for it are also growing in importance. So far, these are created manually, either as text or directly as a simulation model. In addition, a reliable early estimate of the required test effort, especially for multi-sensor systems, is indispensable for solid project planning. Such project planning also includes selecting the software development processes, methods, and tools to be used. The approaches available for this, however, neither provide a quick graphical overview nor allow very different approaches to be compared. The taxonomy presented in this work provides this capability based on the V-model and additional annotations. Since neither customer expectations of automated driving systems nor developers' assessments of development methods remain constant, the results of three surveys on these topics are presented as an up-to-date stocktaking. To improve the transitions between development phases while keeping the adaptation effort for users low, two iterative text-based concepts for creating requirements and scenario descriptions are further presented, each with a suitable domain-specific language in the JetBrains Meta Programming System (MPS) framework. Existing approaches are also text-based, but they are neither multilingual, nor do they use data at different levels of abstraction, nor can they automatically generate the hand-over artifacts (e.g. models) needed for development. Although solutions for test effort estimation for multi-sensor systems already exist, they require either physical sensor models or implementation details. This work therefore presents an approach that can determine the test effort required for a defined confidence level as early as the specification phase, based solely on the sensor characteristics. These five concepts contribute to the scenario-accompanied development of automated driving functions.
Automatic multi‐organ segmentation in dual‐energy CT (DECT) with dedicated 3D fully convolutional DECT networks (2020)
Chen, Shuqing ; Zhong, Xia ; Hu, Shiyang ; Dorn, Sabrina ; Kachelrieß, Marc ; Lell, Michael ; Maier, Andreas
Purpose Dual-energy computed tomography (DECT) has shown great potential in many clinical applications. By incorporating the information from two different energy spectra, DECT provides higher contrast and reveals more material differences of tissues compared to conventional single-energy CT (SECT). Recent research shows that automatic multi-organ segmentation of DECT data can improve DECT clinical applications. However, most segmentation methods are designed for SECT, while DECT has received significantly less attention in research. Therefore, a novel approach is required that is able to take full advantage of the extra information provided by DECT. Methods In the scope of this work, we proposed four three-dimensional (3D) fully convolutional neural network algorithms for the automatic segmentation of DECT data. We incorporated the extra energy information differently and embedded the fusion of information in each of the network architectures. Results Quantitative evaluation using 45 thorax/abdomen DECT datasets acquired with a clinical dual-source CT system was investigated. The segmentation of six thoracic and abdominal organs (left and right lungs, liver, spleen, and left and right kidneys) was evaluated using a fivefold cross-validation strategy. Across all tests, the best average Dice coefficients achieved were 98% for the right lung, 98% for the left lung, 96% for the liver, 92% for the spleen, 95% for the right kidney, and 93% for the left kidney. The network architectures exploit dual-energy spectra and outperform deep learning for SECT. Conclusions The results of the cross-validation show that our methods are feasible and promising. Successful tests on special clinical cases reveal that our methods have high adaptability in the practical application.
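The fivefold cross-validation used for evaluation can be sketched as follows; `cases` and `train_and_evaluate` are placeholders standing in for the 45 DECT volumes and the actual 3D network, which are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import KFold

# Sketch of a fivefold cross-validation loop over DECT cases; the training
# function is a stub returning fake per-organ Dice scores.
cases = np.arange(45)                      # 45 thorax/abdomen datasets, as in the paper
organs = ["right lung", "left lung", "liver", "spleen", "right kidney", "left kidney"]

def train_and_evaluate(train_ids, test_ids):
    # Placeholder: train the 3D FCN on train_ids, return per-organ Dice on test_ids.
    return {organ: np.random.uniform(0.9, 0.99) for organ in organs}

scores = {organ: [] for organ in organs}
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(cases):
    fold_dice = train_and_evaluate(cases[train_idx], cases[test_idx])
    for organ, value in fold_dice.items():
        scores[organ].append(value)

for organ in organs:
    print(f"{organ}: mean Dice {np.mean(scores[organ]):.3f}")
```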
Limited angle tomography for transmission X‐ray microscopy using deep learning (2020)
Huang, Yixing ; Wang, Shengxiang ; Guan, Yong ; Maier, Andreas
In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited angle data suffers from artifacts because of missing data. In this work, deep learning is applied to limited angle reconstruction in TXMs for the first time. Given the challenge of obtaining sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in 100° limited angle tomography. For synthetic test data, U-Net significantly reduces the root-mean-square error (RMSE) from 2.55 × 10⁻³ µm⁻¹ in the FBP reconstruction to 1.21 × 10⁻³ µm⁻¹ in the U-Net reconstruction and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-square denoising of measured projections, the RMSE and SSIM are further improved to 1.16 × 10⁻³ µm⁻¹ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nanoscale imaging in biology, nanoscience and materials science.
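The two reported image-quality metrics, RMSE and SSIM, can be computed as in the following sketch; the arrays are synthetic stand-ins for an FBP and a U-Net reconstruction rather than data from the study.

```python
import numpy as np
from skimage.metrics import structural_similarity

# RMSE and SSIM between a reference image and a reconstruction; both arrays
# here are synthetic placeholders.
rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)
reconstruction = reference + 0.01 * rng.standard_normal((256, 256)).astype(np.float32)

rmse = np.sqrt(np.mean((reconstruction - reference) ** 2))
ssim = structural_similarity(reference, reconstruction,
                             data_range=float(reference.max() - reference.min()))
print(f"RMSE: {rmse:.4f}, SSIM: {ssim:.3f}")
```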
Does the Position of Foot-Mounted IMU Sensors Influence the Accuracy of Spatio-Temporal Parameters in Endurance Running? (2020)
Zrenner, Markus ; Küderle, Arne ; Roth, Nils ; Jensen, Ulf ; Dümler, Burkhard ; Eskofier, Bjoern M.
Wearable sensor technology already has a great impact on the endurance running community. Smartwatches and heart rate monitors are heavily used to evaluate runners' performance and monitor their training progress. Additionally, foot-mounted inertial measurement units (IMUs) have drawn the attention of sport scientists due to the possibility to monitor biomechanically relevant spatio-temporal parameters outside the lab in real-world environments. Researchers developed and investigated algorithms to extract various features using IMU data of different sensor positions on the foot. In this work, we evaluate whether the sensor position of IMUs mounted to running shoes has an impact on the accuracy of different spatio-temporal parameters. We compare both the raw data of the IMUs at different sensor positions as well as the accuracy of six endurance running-related parameters. We contribute a study with 29 subjects wearing running shoes equipped with four IMUs on both the left and the right shoes and a motion capture system as ground truth. The results show that the IMUs measure different raw data depending on their position on the foot and that the accuracy of the spatio-temporal parameters depends on the sensor position. We recommend integrating IMU sensors in a cavity in the sole of a running shoe under the foot's arch, because the raw data of this sensor position is best suited for the reconstruction of the foot trajectory during a stride.
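Reconstructing the foot trajectory from a foot-mounted IMU essentially means integrating acceleration twice between zero-velocity phases. The heavily simplified sketch below uses a synthetic one-stride signal and a linear zero-velocity update; real pipelines additionally need gyroscope-based orientation estimation, which is omitted here.

```python
import numpy as np

# Very simplified stride reconstruction from one axis of a foot-mounted
# accelerometer: integrate twice between two foot-flat phases and remove the
# velocity drift linearly (zero-velocity update at both stride boundaries).
fs = 200.0                                   # sampling rate in Hz
t = np.arange(0, 0.7, 1.0 / fs)              # one swing phase of ~0.7 s
acc = 20.0 * np.sin(np.pi * t / t[-1])       # synthetic forward acceleration (m/s^2)

dt = 1.0 / fs
vel = np.cumsum(acc) * dt                    # first integration
drift = np.linspace(0.0, vel[-1], vel.size)  # ZUPT: velocity must be zero at both ends
vel -= drift
pos = np.cumsum(vel) * dt                    # second integration -> stride trajectory

print(f"estimated stride length: {pos[-1]:.2f} m")
```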
DynDSE: Automated Multi-Objective Design Space Exploration for Context-Adaptive Wearable IoT Edge Devices (2020)
Schiboni, Giovanni ; Suarez, Juan Carlos ; Zhang, Rui ; Amft, Oliver
We describe a simulation-based Design Space Exploration procedure (DynDSE) for wearable IoT edge devices that retrieve events from streaming sensor data using context-adaptive pattern recognition algorithms. We provide a formal characterisation of the design space, given a set of system functionalities, components and their parameters. An iterative search evaluates configurations according to a set of requirements in simulations with actual sensor data. The inherent trade-offs embedded in conflicting metrics are explored to find an optimal configuration given the application-specific conditions. Our metrics include retrieval performance, execution time, energy consumption, memory demand, and communication latency. We report a case study for the design of electromyographic-monitoring eyeglasses with applications in automatic dietary monitoring. The design space included two spotting algorithms, and two sampling algorithms, intended for real-time execution on three microcontrollers. DynDSE yielded configurations that balance retrieval performance and resource consumption with an F1 score above 80% at an energy consumption that was 70% below the default, non-optimised configuration. We expect that the DynDSE approach can be applied to find suitable wearable IoT system designs in a variety of sensor-based applications.
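The trade-off exploration described above boils down to keeping only Pareto-optimal configurations across conflicting metrics. A minimal dominance filter over two of the mentioned objectives (retrieval F1 and energy) is sketched below with invented values.

```python
# Pareto filtering over candidate configurations from a design space
# exploration; each candidate has an F1 score (higher is better) and an
# energy consumption (lower is better). Values are invented for illustration.
candidates = [
    {"name": "cfg-A", "f1": 0.82, "energy_mJ": 120.0},
    {"name": "cfg-B", "f1": 0.85, "energy_mJ": 210.0},
    {"name": "cfg-C", "f1": 0.78, "energy_mJ": 90.0},
    {"name": "cfg-D", "f1": 0.80, "energy_mJ": 150.0},   # dominated by cfg-A
]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    no_worse = a["f1"] >= b["f1"] and a["energy_mJ"] <= b["energy_mJ"]
    better = a["f1"] > b["f1"] or a["energy_mJ"] < b["energy_mJ"]
    return no_worse and better

pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates)]
print([c["name"] for c in pareto])   # cfg-A, cfg-B and cfg-C remain
```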
Technical Note: PYRO‐NN: Python reconstruction operators in neural networks (2019)
Syben, Christopher ; Michen, Markus ; Stimpel, Bernhard ; Seitz, Stephan ; Ploner, Stefan ; Maier, Andreas K.
Purpose Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the computed tomography (CT) reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches use workarounds for mathematically unambiguously solvable problems. Methods PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan-, and cone-beam projectors, and back-projectors accelerated with CUDA provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. Results The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows a simple use of the layers as known from Tensorflow. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep learning reconstruction frameworks. To demonstrate the capabilities of the layers, the framework comes with baseline experiments, which are described in the supplementary material. The framework is available as open-source software under the Apache 2.0 licence at https://github.com/csyben/PYRO-NN. Conclusions PYRO-NN integrates with the prevalent deep learning framework Tensorflow and allows setting up end-to-end trainable neural networks in the medical image reconstruction context. We believe that the framework will be a step toward reproducible research and give the medical physics community a toolkit to elevate medical image reconstruction with new deep learning techniques.
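For orientation, the classical filtered back-projection operator that PYRO-NN wraps as differentiable Tensorflow layers can be run stand-alone with scikit-image; the sketch below is that generic baseline, not the PYRO-NN API itself.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Plain filtered back-projection baseline on a test phantom, shown only to
# illustrate the classical reconstruction operator; not the PYRO-NN API.
phantom = shepp_logan_phantom()                       # 400 x 400 test image
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)               # forward projection
reconstruction = iradon(sinogram, theta=angles)       # FBP with the default ramp filter

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"FBP RMSE on the phantom: {error:.4f}")
```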
Applications of Data Consistency Conditions in Cone-Beam Computed Tomography (2020)
Würfl, Tobias
The mathematical model of computed tomography data acquisition shows that every such measurement contains a degree of redundancy. This redundancy can be used to formulate data consistency conditions which enable quantifying how accurately a measurement fulfills the mathematical model. This enables compensating for inaccuracies in the measurement process by formulating artifact compensation as optimization problems of an objective function based on such data consistency conditions. In the course of this thesis, various methods to deal with different types of such inaccuracies in the measurement have been proposed. One type of inaccuracy arises due to the simplified physical model of computed tomography reconstruction which assumes a mono-chromatic X-ray source. This leads to an effect commonly named beam hardening. To compensate artifacts from this beam hardening effect, a projection-based algorithm named the Empirical Cupping Correction using the Epipolar Consistency Condition (ECC²) algorithm is introduced. It uses a recently proposed data consistency condition called the Grangeat data consistency condition to formulate an efficient procedure to estimate parameters of an appropriate compensation model. This compensation model is derived to be physically plausible by mathematical analysis of the beam hardening effect. This method is extended in a second algorithm named the Multi-material ECC² (MECC²) algorithm to incorporate the possibility to compensate artifacts caused by the different energy dependencies of the linear attenuation coefficient of different materials. Efficient parameter estimation techniques and methods for physically plausible constraints on the compensation function are also presented. Another category of inaccuracies in the measurement is caused by inaccurate knowledge of the acquisition geometry or rigid movement of the imaged object. A general strategy to solve both problems is to estimate the acquisition geometry from the projections themselves. However, this is difficult for X-ray imaging modalities because standard techniques from computer vision cannot be translated straightforwardly to this setting. This is caused by the fact that computer vision algorithms rely on identifying corresponding points in multiple pictures of the same scene as a fundamental step, which is very hard in X-ray imaging due to the different image formation model. To solve this issue, a method to estimate the relative geometry of two views is introduced, together with methods to estimate a projective reconstruction based on multiple pairwise geometry estimates.
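To see why a low-order polynomial is a plausible beam-hardening compensation model, the following sketch simulates a polychromatic measurement and fits a cubic that maps it back to ideal monochromatic line integrals. This is a generic linearization, not the ECC² algorithm; the spectrum and attenuation values are invented.

```python
import numpy as np

# Generic beam-hardening linearization (water correction): fit a polynomial
# that maps polychromatic projection values to monochromatic line integrals.
energies = np.array([40.0, 60.0, 80.0, 100.0])        # keV bins of a toy spectrum
weights = np.array([0.2, 0.4, 0.3, 0.1])              # normalized spectral weights
mu_water = np.array([0.27, 0.21, 0.18, 0.17])         # attenuation of water (1/cm), invented
mu_ref = mu_water[1]                                  # reference (monochromatic) energy

lengths = np.linspace(0.0, 40.0, 200)                 # intersection lengths in cm
p_poly = -np.log(np.sum(weights[None, :] * np.exp(-np.outer(lengths, mu_water)), axis=1))
p_mono = mu_ref * lengths                             # what an ideal monochromatic scan sees

coeffs = np.polyfit(p_poly, p_mono, deg=3)            # cubic compensation model
corrected = np.polyval(coeffs, p_poly)
print(f"max residual after correction: {np.max(np.abs(corrected - p_mono)):.4f}")
```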
A Process Mining Software Comparison (2020)
Viner, Daniel ; Stierle, Matthias ; Matzner, Martin
www.processmining-software.com is a dedicated website for process mining software comparison and was developed to give practitioners and researchers an overview of commercial software available on the market. Based on literature review and experimental software testing, a set of criteria was developed in order to assess the tools' functional capabilities in an objective manner. With our publicly accessible website, we intend to increase the transparency of software functionality. Being an academic endeavour, the non-commercial nature of the study ensures a less biased assessment as compared with reports from analyst firms.
Identifikation relevanter Verkehrssituationen für die szenarienbasierte Entwicklung automatisierter Fahrfunktionen (2020)
Sippl, Christoph Sebastian
Automated driving will fundamentally change future mobility. Advancing digitalization forms the basis for technical innovation and for automating the driving task. Numerous high-profile demonstrations by various companies and research groups show the technical feasibility of automated driving functions. Above all, the increasing complexity of future driving functions poses major challenges for development, testing, and release. Such systems are intended to take over the driving task completely within defined traffic domains. The driving function must safely handle every situation that occurs. Identifying all traffic situations relevant for this cannot be accomplished with known methods of situation analysis. Existing methods do not focus on typical, normal, and uncritical traffic situations, even though these are necessary for the requirements analysis and specification of automated driving functions. To cope with the increasing complexity, research and the technical literature propose scenario-based methods for the development of automated driving functions. This thesis presents a method for identifying typical traffic situations. The methodology is based on a human decision-making model and comprises a systematic procedure. It takes into account expert knowledge as well as situation features relevant to the specific function and development context. The systematic procedure uses simulation methods for data collection as well as constraint programming. The constraint satisfaction problem for finding relevant situations is thus described in a declarative way. The validation shows that relevant and typical situations can be identified that might otherwise not be considered in an unstructured procedure during the requirements analysis and specification of the target system. Together with the end-to-end scenario-based development approach SBSE, the situation identification shows great potential for the development of automated driving functions. The presented approach also forms the basis for further research in the field of scenario-based development.
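The declarative search over situation features can be illustrated with a tiny constraint satisfaction sketch: each situation is a combination of discrete feature values, and constraints select the relevant ones. The features and constraints below are invented for illustration and are not taken from the thesis.

```python
from itertools import product

# Tiny declarative search for "relevant" traffic situations: enumerate all
# combinations of discrete feature values and keep those satisfying all
# constraints of a hypothetical urban left-turn function.
domains = {
    "ego_speed_kmh":    [10, 30, 50],
    "oncoming_traffic": ["none", "car", "truck"],
    "pedestrian":       ["absent", "waiting", "crossing"],
    "visibility":       ["clear", "occluded"],
}

constraints = [
    # the function under development only handles speeds up to 30 km/h
    lambda s: s["ego_speed_kmh"] <= 30,
    # a crossing pedestrian is only considered relevant when the view may be blocked
    lambda s: not (s["pedestrian"] == "crossing" and s["visibility"] == "clear"),
]

names = list(domains)
solutions = [
    dict(zip(names, values))
    for values in product(*(domains[n] for n in names))
    if all(c(dict(zip(names, values))) for c in constraints)
]
print(f"{len(solutions)} relevant situations, e.g. {solutions[0]}")
```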