004 Datenverarbeitung; Informatik
In a medical sense, radiography is an imaging technique to produce a plain image of the human body. Radiography of the chest is the most commonly used medical image acquisition method. In contrast to other imaging techniques, radiography benefits from fast examination procedures and low radiation doses. Moreover, a chest radiograph delivers detailed findings of diseases, pathologies, and abnormal structures in the thoracic and abdominal region. For example, fluid retention in the lung, an enlarged heart, or lung nodules represent only a fraction of the large variety of abnormalities visible in chest radiographs. With the outbreak of the COVID-19 disease, an additional major use case was integrated into the clinical workflow for chest radiography assessment.
The high number of radiographic images produced requires a time-consuming reading process by clinical professionals. The daily workload and the associated liability often hinder the assurance of a consistent image reading accuracy. In other words, human factors such as fatigue, lack of concentration, and time pressure reduce performance.
To maintain a constant reading output, so-called computer-aided detection (CAD) systems are incorporated into the clinical workflow to compensate for these human shortcomings. Moreover, applying these systems not only simplifies the reading process for the experts but also reduces the reading time. In the beginning, the integrated systems were based on conventional, rule-based methods, e.g., image gradient filters. However, with the start of the deep learning (DL) era, including the availability of huge amounts of training data, the performance of CAD systems rapidly increased. Building on this, advanced systems are developed to increase the generalizability of such networks.
Lung nodules in particular are challenging to read, as they appear only as a tiny fraction of the image. Due to the resulting low system performance, augmentation methods are designed that specifically focus on improving nodule recognition. For example, nodules from computed tomography (CT) images are leveraged to expand the nodule collection for training in the radiography domain.
The proposed DL-based approaches require a huge amount of training data. The demand for big chest radiography collections results in an ever-growing number of publicly available datasets. In practice, strict anonymization procedures are applied prior to the release of the data. For example, sensitive data such as patient names are removed. However, despite these anonymization methods the patient may still be re-identifiable from the medical image content. For example, sensitive content can be linked between images by a system that identifies images from the same patient. This scenario highlights gaps in the current anonymization procedures as well as possible risks to data privacy and security.
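The linkage scenario described above can be sketched in a few lines: if an encoder maps radiographs of the same patient to nearby embedding vectors, two "anonymized" releases can be linked by nearest-neighbour search. The encoder, the threshold, and the data below are illustrative assumptions, not components of the thesis.

```python
# Minimal sketch of the linkage risk: given a (hypothetical) patient-verification
# encoder whose embeddings are similar for images of the same patient, two
# anonymized releases can be linked by cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def link_patients(embeddings_release_a, embeddings_release_b, threshold=0.9):
    """Return (i, j, similarity) triples suggesting that image i in release A and
    image j in release B show the same patient. The threshold is an assumption."""
    sim = cosine_similarity(embeddings_release_a, embeddings_release_b)
    links = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= threshold:
            links.append((i, j, float(sim[i, j])))
    return links

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
release_a = rng.normal(size=(5, 128))
release_b = release_a + 0.05 * rng.normal(size=(5, 128))  # same patients, new scans
print(link_patients(release_a, release_b))
```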
Valvular heart disease (VHD) is a major public health problem with a prevalence of 2.5% in the US. Tens of thousands of minimally invasive procedures are performed every year and new therapeutical approaches are continuously developed at a rapid pace. Major clinical decisions and therapies are nowadays guided by peri- and intraoperative imaging with an increasing emphasis on 3D. Given the complexity of pathology and therapies in VHD, there is a strong need for fast, precise and reproducible quantification.
To this end, several contributions are proposed in this thesis across various aspects of VHD management. These are structured into two parts: i) Mitral valve (MV) modeling and ii) advanced computational decision support.
The first part starts with describing a comprehensive MV model obtained from 3D+t transesophageal echocardiography (TEE). An original methodology combines robust machine learning (ML) techniques and biomechanical constraints to obtain a temporally consistent estimate of model parameters. Extensive experiments on a large data set of 195 3D+t TEE volumes -- comprising 3026 3D frames -- demonstrate that this automatic method is highly competitive in terms of speed and accuracy and outperforms purely data-driven methods in terms of robustness. Clinical evaluation in numerous centers around the world shows the applicability of its versatile quantification capabilities to various aspects of disease characterization and therapy planning.
Subsequently, a more lightweight alternative enables robust 3D tracking of the MV annulus at high frame rates. Two parallel components based on ML and image-based tracking complement each other in robustness and speed. Experiments with emulated probe motion suggest suitability for an interventional setting for guiding transcatheter mitral valve repair (TMVR) procedures.
The second part proposes computational approaches for advanced decision support throughout the clinical workflow. First, the previously described valve modeling is combined with chamber models into a holistic and detailed model of the left heart. This enables the estimation of patient-specific computational hemodynamics by serving as a boundary condition for a level-set-based computational fluid dynamics (CFD) solver. A validation concept using clinically acquired Doppler measurements is proposed and shows high agreement.
Finally, a framework is presented for post-operative modeling of self-expandable stent devices for monitoring in transcatheter aortic valve implantation (TAVI) procedures. The technique is based on deformable simplex meshes, geometrical constraints and ML. Evaluation on postoperative computed tomography (CT) data shows promising accuracy.
The German healthcare system is in a state of constant change. In addition to demographic and social developments, modern information and communication technologies, the pharmaceutical industry, and medical technology play a major role. The interface problems along business processes in the course of digitalizing the German healthcare system increasingly show that seamless integration of care processes and digital data exchange improve the quality and cost-effectiveness of care. In this thesis, the various expected developments are described using scenario analysis. The presented scenarios serve as a foundation for determining the requirements for the changed business processes and their interfaces. Subsequently, a context-sensitive IT architecture is developed that addresses problems of intelligent business process control and integration within today's digitalization efforts and incorporates new healthcare actors and their services. This architecture allows the services of these actors to be connected and orchestrated across sector boundaries. Design criteria such as interface orientation, interoperability, context sensitivity, modularity, and self-learning control play a decisive role.
Composers of music can express emotions and communicate with their audience in a multitude of ways. They decide on which voices or instruments to use, arrange notes into melodies, and develop recurring musical patterns. When a composition is performed and turned into sound, their decisions are realized acoustically as sound events. Despite being easily understood by human listeners, teaching a machine to perceive and process such musical sound events can be a challenging task. This thesis studies computational techniques for detecting the activity of sound events in a music recording, i. e., identifying the exact moments in time when a certain event occurs. We focus on orchestral and opera music, which are rarely considered in music processing research and particularly complex due to their high degree of polyphony. In this context, we cover four different types of musical sound events, namely singing, instrumental sounds, different pitches, and leitmotifs (special kinds of musical patterns used for storytelling in opera). To detect the activity of these events within a recording, we design, implement, and evaluate deep learning systems. In addition, we explore a range of techniques including hierarchical classification, differentiable sequence alignments, and representation learning. Beyond evaluating the accuracy of our detection systems, we aim at a deeper understanding of our models with regard to their robustness and sensitivity to confounding effects.
The main contributions of this thesis can be summarized as follows: First, we investigate signal processing and deep learning methods for detecting singing activity in opera recordings. Second, we extend this scenario towards simultaneously detecting singer gender and voice type. We compare several techniques for utilizing the hierarchical relationships between these classes and propose a novel loss formulation for ensuring consistency of detection results across different hierarchy levels. Third, we apply such a hierarchical technique to instrument activity detection. For this task, research progress is often limited by the cost of obtaining manually annotated audio examples for training. To address this issue, we demonstrate that hierarchical information reduces the need for fine-grained instrument annotations during training of our detection models. Fourth, we show how the structure of certain orchestral music datasets can be exploited to learn representations related to instrumentation, without requiring any instrument annotations at all. Fifth, we consider the problem of detecting pitch activity and show how differentiable sequence alignments can be used for learning from weak annotations. Finally, we perform classification and detection of leitmotifs. We present deep learning systems that successfully detect leitmotif activity and provide a detailed analysis of their generalization ability.
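One plausible way to encode the hierarchy consistency mentioned above is a penalty that discourages a child class (e.g., a voice type) from being predicted more active than its parent class (e.g., "singing"). The formulation below is an illustrative assumption, not necessarily the loss proposed in the thesis.

```python
# Illustrative hierarchy-consistency penalty for frame-wise detection outputs:
# a parent class should be at least as active as each of its children.
import numpy as np

def hierarchy_consistency_penalty(parent_probs, child_probs):
    """parent_probs: (frames,) sigmoid outputs for the parent class.
    child_probs: (frames, n_children) sigmoid outputs for its child classes.
    Penalizes frames where any child is predicted more active than the parent."""
    violation = np.maximum(child_probs - parent_probs[:, None], 0.0)
    return float(np.mean(violation))

parent = np.array([0.9, 0.2, 0.8, 0.1])          # e.g. "singing" activity
children = np.array([[0.8, 0.1],                 # e.g. two voice types
                     [0.4, 0.0],
                     [0.7, 0.9],
                     [0.0, 0.05]])
print(hierarchy_consistency_penalty(parent, children))  # > 0: inconsistent frames exist
```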
In the last decade, many academic and industrial research groups around the globe have been focusing on XR, leading to rapid progress in the field.
Next to incremental hardware advances, new software techniques also play an important role in the recent success.
The software improvements range from new tracking algorithms, which allow a more accurate and robust localization of the XR headset, to new rendering engines, which are able to render photorealistic environments in real time.
In the first half of this thesis, I present my work on camera tracking of low-power mobile devices such as XR headsets.
The proposed tracking pipeline uses several algorithmic tricks to reduce the computational complexity.
For example, I present a novel decoupled formulation for visual-inertial bundle adjustment, which makes the optimization more efficient and allows it to be run in parallel.
Furthermore, I show how recursive matrix algebra can be used to speed up the nonlinear optimization problems of a typical tracking pipeline.
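As a generic illustration of why exploiting matrix structure speeds up such solvers, the sketch below eliminates the landmark block of a bundle-adjustment normal-equation system via the Schur complement. This is the standard block-elimination step, not the recursive matrix algebra developed in the thesis, which goes further.

```python
# Block elimination (Schur complement) on the camera/landmark structure of
# bundle-adjustment normal equations; a generic sketch only.
import numpy as np

def solve_with_schur(H_cc, H_cl, H_ll, b_c, b_l):
    """Solve [[H_cc, H_cl], [H_cl^T, H_ll]] [dx_c; dx_l] = [b_c; b_l]
    by first eliminating the landmark block H_ll."""
    H_ll_inv = np.linalg.inv(H_ll)                      # cheap if block-diagonal
    S = H_cc - H_cl @ H_ll_inv @ H_cl.T                 # Schur complement
    dx_c = np.linalg.solve(S, b_c - H_cl @ H_ll_inv @ b_l)
    dx_l = H_ll_inv @ (b_l - H_cl.T @ dx_c)
    return dx_c, dx_l

# Toy SPD system: 2 camera parameters, 3 landmark parameters.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))
H = A.T @ A + np.eye(5)                                 # like a damped Gauss-Newton step
b = rng.normal(size=5)
dx_c, dx_l = solve_with_schur(H[:2, :2], H[:2, 2:], H[2:, 2:], b[:2], b[2:])
assert np.allclose(np.concatenate([dx_c, dx_l]), np.linalg.solve(H, b))
```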
Overall, the proposed pipeline achieves a similar or better accuracy than the state-of-the-art while being substantially faster.
On integrated or low-power computers, my method can process over 60 frames per second, which exceeds the frame rate of commodity cameras by a factor of two.
Later in this thesis, I show how the output of my tracking system can be used to generate high-quality RGB and depth images from arbitrary locations in the scene.
The proposed method renders triangulated depth images of keyframes to a target view and fuses them in the fragment shader.
This approach is very efficient and allows large scene updates of the tracking system since no global volumetric model is built.
AR applications can use the resulting images to visualize the scene or display virtual objects with correct occlusion.
Finally, I present Approximate Differentiable One-Pixel Point Rendering (ADOP), a novel point-based neural rendering approach for real-time novel view synthesis.
The input is an initial reconstruction of a scene using standard photogrammetry software.
During a short training stage, neural point descriptors are learned as well as the parameters of a rendering network and a tone-mapper.
After that we are able to synthesize photo-realistic views of these scenes at arbitrary camera locations.
Due to a novel differentiable point rasterizer, we are also able to optimize the initial camera parameters and point cloud provided by the photogrammetry software.
In several experiments, I show that this input optimization can significantly improve the image quality and makes ADOP one of the best-performing neural rendering approaches.
The topic of this work is the analysis of control loops under the influence of input/output timing deviations. Such deviations from the ideal periodic timing can lead to increased control error or even to instability. To address this issue, methods are presented to determine a safe tolerance band for the timing deviation. Two alternatives are addressed: stability analysis at design time and timing adaptation at run time. In both cases, the methods support multiple inputs and outputs with individual timing uncertainties.
For the analysis at design time, the work presents methods to determine stability and maximal control error at design time for a given input/output time window. The linear case is analyzed using linear impulsive systems. Progress towards solving the nonlinear case is made by reachability analysis of hybrid automata in connection with the Continuization method.
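A minimal numerical sketch of the design-time question is given below: for a zero-order-hold loop whose sampling period may deviate within a band, the spectral radius of the closed-loop transition matrix is checked over that band. The plant, gain, and band are assumptions, and this per-period check is only a first indicator; the thesis's linear impulsive systems analysis also covers arbitrary sequences of deviating periods.

```python
# Checking the closed-loop transition matrix of a sampled-data loop over a
# band of possible sampling periods (toy double integrator, assumed gain).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (toy plant)
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])               # assumed state-feedback gain

def transition_matrix(h):
    """Closed-loop transition over one period of length h (ZOH actuation)."""
    M = np.zeros((3, 3))
    M[:2, :2] = A
    M[:2, 2:] = B
    E = expm(M * h)
    Ad, Bd = E[:2, :2], E[:2, 2:]
    return Ad - Bd @ K

h_band = np.linspace(0.05, 0.8, 200)      # tolerated timing band (assumption)
rho = [max(abs(np.linalg.eigvals(transition_matrix(h)))) for h in h_band]
print("worst-case spectral radius in band:", max(rho))
```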
For a flexible adaptation at run time, the framework of convergence rate abstractions is developed. By this, the decision about the permissible input/output timing can be made according to the current situation and with little computational overhead. Hence, larger temporal deviations can be permitted for a short time as long as the timing remains good enough in the long-term average to preserve stability.
Healthy gait is one of the key factors for a mentally and physically fulfilling life. However, patients suffering from neuro-degenerative disorders, such as Parkinson’s disease (PD), are affected by symptoms with respect to gait and balance that may lead to an increased fall risk. Falls can have severe consequences such as bone fractures and may ultimately reduce the quality of life. Objective analysis of gait and mobility is therefore one important tool supporting clinicians and patients with monitoring disease status and symptom progression. Based on technical advancements over the last decade, gait data of standardized gait tests in laboratory settings can be recorded with little effort using small and light-weight inertial measurement units (IMUs). In combination with signal processing and machine learning, these data allow an objective and quantitative gait analysis. The latest developments in sensor technology even enable capturing mobility data in the real world over multiple days or weeks. With this, physicians can get a holistic and continuous impression of their patients' mobility to assess specific clinical indications such as fall risk. However, novel, robust, and reliable algorithmic solutions are required for the processing of such long-term mobility data. Therefore, in this thesis, new methods for real-world gait analysis and the clinical interpretation of the outcome parameters regarding fall risk prediction will be presented.
To enable fall risk prediction based on long-term real-world gait data, technical prerequisites must be fulfilled to ensure a reliable automated analysis of inertial sensor data recorded in unsupervised conditions. Since a large portion of daily life does not consist of gait but of other cyclic activities or rest, the first contribution of this thesis addresses the development and validation of a novel gait detection algorithm. A sequential pipeline was proposed that analyzes sliding windows of IMU data by applying an activity threshold and then identifying the existence of harmonic frequency patterns. The algorithm was evaluated in a study with 150 foot-worn sensor recordings of PD patients collected in a clinical laboratory environment, including manual annotations of gait periods and cyclic non-gait movements. The validation revealed a sensitivity of 98% and a specificity of 96%. For an additional independent data set with 203 unsupervised standardized gait tests recorded in the home environment of PD patients, the algorithm confirmed its high accuracy. These findings underline that a reliable detection of gait from inertial sensor signals is possible by analyzing harmonic frequency patterns. The proposed algorithm formed the foundation for the subsequent contributions of this thesis.
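A simplified version of this two-stage idea is sketched below: an activity threshold on the signal energy, followed by a check for a dominant frequency in the locomotion band and energy at its first harmonic. Window length, frequency band, and thresholds are illustrative assumptions, not the validated parameters of the proposed algorithm.

```python
# Simplified sketch of activity thresholding plus harmonic frequency analysis
# for gait detection in a sliding window of accelerometer data.
import numpy as np

def is_gait_window(accel_norm, fs, energy_thresh=0.5, band=(0.5, 3.0), ratio_thresh=3.0):
    """accel_norm: 1-D acceleration magnitude of one window (gravity removed).
    Returns True if the window is active and shows a dominant frequency in the
    gait band together with energy at its first harmonic."""
    x = accel_norm - np.mean(accel_norm)
    if np.std(x) < energy_thresh:                     # stage 1: activity threshold
        return False
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not np.any(in_band):
        return False
    f0 = freqs[in_band][np.argmax(spec[in_band])]     # dominant step frequency
    peak = spec[in_band].max()
    harm = spec[np.argmin(np.abs(freqs - 2 * f0))]    # energy at first harmonic
    noise_floor = np.median(spec) + 1e-12
    return (peak / noise_floor > ratio_thresh) and (harm / noise_floor > 1.5)

# Toy usage: 4 s of simulated walking at 1.8 Hz, sampled at 100 Hz.
fs = 100
t = np.arange(0, 4, 1 / fs)
walking = np.sin(2 * np.pi * 1.8 * t) + 0.4 * np.sin(2 * np.pi * 3.6 * t) + 0.1 * np.random.randn(t.size)
print(is_gait_window(walking, fs))   # expected: True
```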
In long-term real-world studies, standardized anchor points of gait data collected in repeated, predefined conditions can be valuable for the analysis. To allow a direct comparison of gait parameters recorded inside and outside of laboratory conditions, researchers have started to add standardized gait tests that were established in laboratory settings to real-world study protocols. Ideally, the integration of such tests should not cause any additional effort, neither for study participants regarding the operation of the sensor system nor for researchers regarding manual annotation of the tests in the data. To solve this challenge, an algorithmic pipeline for the detection of unsupervised standardized gait tests is proposed in this thesis. This pipeline aimed to identify and separate sequences of three consecutive 4x10-Meter-Walking-Tests (4x10MWTs) performed at different walking speeds from continuous recordings. A combination of peak enhancement, template matching via subsequence Dynamic Time Warping, and walking speed analysis was employed and evaluated on 419 unsupervised gait tests performed by 12 PD patients. The detection part of the algorithm achieved an F1-score of 88.9% and the separation of the test sequences into the three speed levels an F1-score of 94.0%. Hence, the proposed method enables a seamless integration of gait tests into real-world studies and makes it possible to capture a representative picture of mobility capacity. The integration of such anchor points can help to draw clinical conclusions beyond the information acquired from unconstrained gait.
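The core matching step, subsequence Dynamic Time Warping, can be sketched compactly: a template is allowed to start and end anywhere within the long recording. Peak enhancement, the three-speed separation, and all thresholds of the actual pipeline are omitted; the signals below are synthetic.

```python
# Minimal subsequence DTW: locate a template (e.g. the signal shape of a gait
# test) anywhere inside a long continuous recording.
import numpy as np

def subsequence_dtw(template, signal):
    """Return (best_cost, end_index) of the best match of `template`
    anywhere inside `signal` (free start and end on the signal axis)."""
    m, n = len(template), len(signal)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = 0.0                                    # match may start at any sample
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(template[i - 1] - signal[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    end = int(np.argmin(D[m, 1:])) + 1               # match may end at any sample
    return float(D[m, end]), end

# Toy usage: embed the template in a longer noisy signal.
rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 4 * np.pi, 80))
signal = np.concatenate([0.05 * rng.normal(size=150), template, 0.05 * rng.normal(size=150)])
cost, end = subsequence_dtw(template, signal + 0.05 * rng.normal(size=signal.size))
print(cost, end)   # end should be close to 150 + 80 = 230
```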
The third contribution of this thesis aimed to answer the question whether long-term real-world gait data and unsupervised standardized gait tests performed over multiple days are valuable for prospective fall risk prediction in PD. For this study, real-world and unsupervised gait test data of 40 PD patients were collected over two weeks. Falls occurring in the period after the data collection were reported during a follow-up phase of three months and allowed the participants to be divided into high and low fall risk groups according to whether they had fallen or not. Different data aggregation approaches and machine learning methods were investigated and compared using the data from gait tests or real-world gait. The classifier achieving the highest performance of 74.0% balanced accuracy was a Random Forest trained on real-world gait parameters that were aggregated over all walking bouts of all days of each participant. These findings suggest that real-world data are more valuable for fall risk prediction than standardized gait tests (68.0% balanced accuracy) and that the aggregation of data collected over multiple days provides the most accurate prediction.
The work presented in this thesis shows that a robust analysis of unsupervised IMU data is possible and underlines the potential of real-world gait analysis for the continuous monitoring of motor symptoms and fall risk in chronic diseases. This is not limited to the data of PD patients but may also serve in other conditions where real-world gait monitoring may support clinicians for diagnosis and treatment adaptation. Ultimately, the results of this thesis will help patients with an improved handling of chronic diseases and hence support them to maintain a high quality of life.
Machine learning (ML) has advanced at impressive speed. The rapid progress can be explained by the increasing availability of data and access to low-cost and high-performance computational resources. These circumstances have led to the adoption of ML in almost all scientific domains. Two prominent application fields of ML are healthcare and business. Realizing the full potential of ML to drive impact in specific domains is an ongoing research endeavor. Therefore, this cumulative dissertation aims at utilizing ML to positively impact the fields of gait analysis in Parkinson's Disease (PD), predictive business process monitoring (PBPM), and customer service. Each field is addressed by one contributed publication focusing on a specific objective.
The first objective is to define gait clusters to extract distinct gait parameters and validate their clinical relevance. This objective is addressed in our publication [P1], where we contribute novel gait clustering methods and validate their clinical relevance. We developed unsupervised methods for defining and isolating distinct gait clusters based on data collected from PD patients performing a standardized gait test. According to our study, the detailed analysis of gait parameters in distinct gait clusters can provide clinically relevant information about gait and balance impairments in patients with PD. Our study demonstrated that the defined gait clusters were more accurate at distinguishing impaired from unimpaired gait and balance in patients with Parkinson's disease than the baseline approach (analyzing all straight strides).
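As an illustration of the unsupervised clustering idea, the sketch below groups strides, described by a few stride-level features, with k-means and then summarizes parameters per cluster. The features, the choice of k, and the synthetic data are assumptions; [P1] may use different features and clustering methods.

```python
# Illustrative unsupervised grouping of strides into gait clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Toy stride features: [stride_length_m, stride_time_s, turning_angle_deg]
strides = np.vstack([
    rng.normal([1.3, 1.05, 2.0], [0.1, 0.05, 2.0], size=(100, 3)),   # straight walking
    rng.normal([0.7, 1.30, 35.0], [0.1, 0.10, 8.0], size=(40, 3)),   # turning strides
])

X = StandardScaler().fit_transform(strides)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for c in np.unique(labels):
    cluster = strides[labels == c]
    print(f"cluster {c}: n={len(cluster)}, "
          f"mean stride length={cluster[:, 0].mean():.2f} m, "
          f"mean stride time={cluster[:, 1].mean():.2f} s")
```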
The second objective is to propose a model to directly incorporate the inter-event dynamics and account for the event type imbalance to improve PBPM performance. We addressed this objective in the publication [P2]. Using a time-aware LSTM architecture, we demonstrated that modeling inter-event dynamics directly could enhance the performance of PBPM tasks. In addition, we showed that cost-sensitive learning could enhance performance in PBPM by adjusting for event type imbalance in event logs.
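One common way to realize such cost-sensitive learning is to weight the cross-entropy loss by inverse event-type frequencies, as sketched below. This is a generic illustration; the time-aware LSTM architecture of [P2] is not reproduced here, and its exact weighting scheme may differ.

```python
# Inverse-frequency class weights and a weighted cross-entropy for imbalanced
# next-event prediction; a generic sketch, not the exact formulation of [P2].
import numpy as np

def inverse_frequency_weights(event_labels, n_classes):
    counts = np.bincount(event_labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1.0))

def weighted_cross_entropy(probs, labels, class_weights):
    """probs: (samples, n_classes) predicted next-event distribution."""
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(class_weights[labels] * -np.log(picked + eps)))

# Toy event log with a rare event type (class 2).
labels = np.array([0, 0, 0, 0, 1, 1, 0, 2])
probs = np.full((len(labels), 3), 1.0 / 3.0)
w = inverse_frequency_weights(labels, 3)
print("class weights:", w)
print("weighted loss:", weighted_cross_entropy(probs, labels, w))
```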
The third objective is to combine Internet of Things (IoT) and enterprise data to predict customer escalations. The publication [P3] addressed this objective by presenting a decision support framework that combines IoT and enterprise data to predict customer escalation risks. We showed that combining IoT and enterprise data yields better prediction performance compared to each data modality individually. Additionally, we proposed a practical workflow for the presented decision support system framework.
In the pursuit of acquiring an immersive visual experience, camera technology has gone through a long evolution process. Modern digital cameras can capture high-resolution, high dynamic range images with an extended depth of field. However, capturing just the 2D spatial information of the visual scene is not enough to deceive human perception into an immersive experience. Alternatively, light-field imaging technology allows for capturing the directional information of light rays on top of the conventional 2D spatial data. This additional angular information in light-field images plays a pivotal role in many applications, including post-capture refocusing, depth estimation, 3D reconstruction, and novel view rendering. The thesis reviews the complete light-field processing pipeline, which includes data capturing, depth estimation, and novel view synthesis. Moreover, inspired by the success of deep learning in different computer vision applications, this dissertation proposes to solve multiple issues concerning light-field processing with the help of neural networks. Starting with data capturing, the first high-resolution high dynamic range light-field dataset is captured for the community to develop and test their algorithms. Additionally, an initial study explains the effect of tone-mapping on view rendering quality. Quantitative and qualitative analysis indicates that tone-mapping after view rendering yields better results than applying it before rendering, especially in the presence of non-Lambertian objects. Moreover, disparity estimation is more reliable and accurate from the raw HDR light field than from the tone-mapped light field.
Apart from HDR light fields, the thesis also presents a recurrent neural network that identifies incorrect disparity assignments, which arise due to the ill-posed nature of the problem. The proposed algorithm estimates a confidence value for each pixel location, filtering out the disparity outliers. The confidence for a given pixel is calculated only from its associated matching costs, without taking into account any additional nearby pixels, in order to keep the complexity low. These low-confidence pixels can then be corrected using reliable pixel values. Experimental results on multiple datasets show the robustness of the proposed method, which outperforms state-of-the-art confidence estimation methods. Moreover, the size of the proposed confidence measure network in terms of the number of trainable parameters is roughly 10^2 to 10^4 times smaller than that of state-of-the-art methods.
Despite the filtering and fixing of incorrect disparity assignments, current light-field processing pipelines fail to reconstruct novel views with good quality. The thesis presents a learning-based light-field view synthesis framework based on an end-to-end attention mechanism, proposing a solution to the current pipeline shortcomings. The proposed framework consists of three convolutional neural networks connected sequentially, one network each for stereo feature extraction, disparity estimation, and refinement. The refinement network utilizes convolutional block attention modules in a residual network-style architecture to enhance depth image-based rendering. The initial design of the network renders a single virtual view, which is further extended by presenting two refinement network strategies to generate multiple light-field views. The proposed method performs better than state-of-the-art light-field rendering approaches, especially in occluded areas. Moreover, introducing the attention mechanism in the refinement stage helps preserve thin structures in the scene. The experimental results show that the proposed method achieves consistent performance across diverse test datasets despite training on a content-specific dataset.
Novel view synthesis quality depends on the number of light-field perspective views used for the reconstruction. However, the redundant information in different light-field views poses challenges for storage and transmission resources. Motivated by the big advances in the field of image compression via machine learning, a compression of such data with the help of neural networks is desirable. However, neural network-based compression is a relatively new and immature research area lacking many basic functionalities, such as rate control. To close this gap, this thesis makes a first step toward multi-view compression by looking at stereo images with the purpose of reducing complexity and increasing understanding. In particular, a novel recurrent neural network-based technique is proposed for stereo image compression with discrete rate control. The main contribution of the proposed method is to investigate how the redundancy between the images can be eliminated. A key technology is state warping between the recurrent units of the stereo image networks to share mutual information. Additionally, a convolutional neural network utilizes compressed information to estimate occlusion maps, tackling discrepancies in the occluded areas. Quantitative and qualitative experimental analysis on two different datasets shows that the proposed technique saves 10-30% of the bit rate for the right image and outperforms conventional image codecs in terms of perceptual quality.
Detection, Quantification and Mitigation of Robustness Vulnerabilities in Deep Neural Networks
(2023)
Machine learning (ML) has made enormous progress in the last two decades. Specifically, Deep Neural Networks (DNNs) have led to several breakthroughs. The applications range from synthesizing high-resolution images that are indistinguishable from real photos to large-scale language models that achieve human-level performance on various tasks.
Yet, while humans can apply learned knowledge to new situations with only a few examples, neural networks often fail at this task. As a result, real-world distribution shifts in the environment, demographics, or data collection process pose severe safety risks to humans. For instance, autonomous cars may fail to adapt to unknown road conditions, and medical systems may provide incorrect diagnoses for minorities not included in the training data. Another security threat is the vulnerability of neural networks to small adversarially crafted perturbations. As such, even imperceptible changes in the data can lead to erroneous model behavior. In this cumulative dissertation, I demonstrate a literature gap regarding methods that simultaneously address real-world and adversarial distribution shifts. Therefore, I propose three objectives to increase the robustness of neural networks against both threats.
The first objective consists of the detection of potentially harmful model decisions caused by distribution shifts in the data. Here, we showed that the input-gradient geometry of neural networks can be used to detect both real-world and adversarial distribution shifts [P1]. Unlike prior work, we demonstrated the flexibility of our method by showing its effectiveness on both image and time series classification tasks.
The second objective considers the accurate quantification of network robustness against adversarial distribution shifts (attacks), which is essential to assess the worst-case risk in safety-critical applications. Toward this objective, we propose to improve two critical components of gradient-based adversarial attacks. In one contribution, we improved the convergence of gradient-descent-based optimization by including past gradient information in the optimization history [P2]. In another contribution, we introduced a novel optimization objective that leads to an increased attack success rate while simultaneously reducing the perturbation magnitude of adversarial attacks [P3].
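The idea of incorporating past gradient information into an iterative attack can be sketched on an analytically differentiable toy model, as below. The logistic classifier, the momentum scheme, and all hyperparameters are illustrative assumptions in the spirit of momentum-based attacks; the exact formulation of [P2] and [P3] and the deep-network setting are not reproduced here.

```python
# Toy momentum-style iterative attack on a logistic classifier, whose input
# gradient is available in closed form.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def momentum_attack(x, y, w, b, eps=0.5, steps=10, decay=0.9):
    """L-inf bounded attack on p(y=1|x) = sigmoid(w.x + b).
    Gradient of the cross-entropy loss w.r.t. x is (p - y) * w."""
    x_adv, g = x.copy(), np.zeros_like(x)
    alpha = eps / steps
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                                   # closed-form input gradient
        g = decay * g + grad / (np.abs(grad).sum() + 1e-12)  # accumulate past gradients
        x_adv = x_adv + alpha * np.sign(g)                   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)             # stay in the eps-ball
    return x_adv

rng = np.random.default_rng(4)
w, b = rng.normal(size=5), 0.0
x, y = rng.normal(size=5), 1
print("clean prob  :", sigmoid(w @ x + b))
print("attacked    :", sigmoid(w @ momentum_attack(x, y, w, b) + b))  # pushed toward 0
```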
Third, we present a novel approach to mitigate vulnerabilities against real-world and adversarial distribution shifts [P4]. To this end, we theoretically motivate how properties of local extrema in the loss landscape can be used to identify spurious predictions. Based on these findings, we propose the Decision Region Quantification (DRQ) algorithm that analyzes the robustness of local decision regions in the vicinity of a given data point to find the most robust prediction for a given sample.
In the elderly, a fall can lead to severe injuries with the need for hospitalization, increase morbidity, and thus reduce the overall quality of life. The prevalence of falls is particularly high in patients suffering from chronic neurodegenerative diseases such as Parkinson’s disease (PD) due to disease-specific gait and balance impairments. Hence, the assessment of a PD patient’s gait function is a fundamental part of the clinical management of this disease. In this context, wearable inertial measurement units (IMUs) enable an objective assessment of detailed gait parameters. Although instrumented gait tests are a first step to support the clinical decision-making, these assessments provide only brief snapshots of a complex disease under unnatural conditions. Therefore, a transition from laboratory snapshots to continuous long-term monitoring of gait and mobility parameters is required. Recent advances in wearable sensor technology, including longer battery life and sensor miniaturization, nowadays allow an unobtrusive integration of wearable IMUs into patients’ daily lives. However, while collecting ecologically valid data is technically feasible today, generating clinically meaningful digital mobility outcomes (DMOs) from real-world datasets is still ongoing research to which this work contributes.
Due to the variability and heterogeneity of real-world recordings, existing algorithms which have been originally developed for standardized gait tests need to be re-evaluated and possibly replaced by more suitable or adapted approaches. Therefore, the first contribution of this thesis addresses the segmentation of individual strides from continuous inertial sensor data, which is a fundamental part of a gait analysis pipeline. An existing template matching-based approach and a novel hidden Markov model (HMM)-based stride segmentation were evaluated on nearly 150,000 manually annotated real-world strides of 28 PD patients. The proposed HMM-based approach achieved a mean segmentation F1 score of 92.1 % across the entire dataset and significantly outperformed the template-matching approach. Short walking bouts (< 30 strides) resulted in F1 scores ≤ 91.1 %, whereas longer walking bouts (> 50 strides) yielded F1 scores of 96.2 % and up to 98.2 % for the longest bouts (> 200 strides). However, the quality of stride segmentation was comparable to results from standardized gait tests in the laboratory only for long walking bouts. These findings highlight the challenges of processing real-world datasets due to the increased complexity and heterogeneity of the recorded gait. Especially for short walking bouts, where a large fraction of non-steady-state strides such as initiation, termination, or turns is expected, the data-driven HMM achieved promising results for future real-world applications.
However, not only a re-evaluation of concepts such as stride segmentation is required to transfer gait analysis from the laboratory to the real world, but also the development of new DMOs must be considered. Therefore, a new gait analysis pipeline was proposed to enable the assessment of new gait-related DMOs. Specifically, the implemented pipeline allowed the parameterization of individual strides from stair ascending and descending, which is a fundamental part of our daily-life mobility and has not been studied under real-world conditions. In this context, the existing HMM was extended to a multiclass model and combined with an adapted gait event detection approach matching the needs of stair ambulation biomechanics. The proposed pipeline was evaluated on an outdoor course containing three different stair geometries with 20 young and healthy participants who completed the course multiple times at slow, preferred, and fast speeds. Compared to a pressure insole reference, the pipeline achieved an F1 score of 98.5 % and gait event timing errors below 10 ms for all conditions. The walking activity could be classified on a per stride level with an accuracy of 98.2 %, based on trajectory features. Additionally, the entire analysis pipeline was validated end-to-end on an independent dataset of 13 PD patients to test its applicability for future clinical applications.
In order to evaluate not only the technical but also the clinical validity, the pipeline was successfully transferred to a first clinical application. In their daily lives, PD patients have to constantly adapt their gait to changing environmental conditions, which includes managing stairs. Due to the unique geometric constraints of stairs, stair walking poses additional challenges to the motor and control system compared to level walking. Therefore, for the first time, objective gait parameters derived from real-world stair ambulation sequences were investigated as new sensitive outcomes for fall risk assessment in a PD patient cohort. The study revealed significant differences between fallers (N = 11) and non-fallers (N = 29) in stair ascending and stair descending parameters. These differences were less pronounced for the same parameters extracted from level walking. Gait speed during stair ambulation was reduced by 16 % on average for fallers, whereas their stance phase was increased by 20 % on average. These results highlight the clinical relevance of real-world stair ambulation performance as a source of new DMOs for fall risk assessment.
To conclude, this thesis contributes to the ongoing efforts to transfer mobile gait analysis systems from the laboratory to clinical applications in the real world. The presented contributions enable a robust assessment of detailed stride-level gait parameters to gain holistic insights into real-world mobility beyond level walking. The newly presented DMOs derived from real-world stair ambulation bouts proved their potential for fall risk assessment and may support future clinical applications to improve the health and well-being of patients suffering from gait and mobility impairments.
Measurement of the tissue sodium concentration (TSC) in human tissue offers important insights into pathological and biological processes, with applications in nephrology, neurology, and myology. The TSC can be determined non-invasively by sodium magnetic resonance imaging (23Na MRI). The aim of this work is to accelerate 23Na MRI for TSC measurements in the skeletal muscle. Reconstruction methods and adapted acquisition techniques were investigated to reduce the acquisition time for 23Na MRI. The time saved could improve the applicability of TSC measurements for use in the clinical environment and clinical studies. The main focus of this thesis is on compressed sensing (CS) reconstruction techniques and anisotropic acquisition techniques with reduced undersampling artifacts.
Here, the acquisition time reduction is achieved by undersampling, which means measuring fewer data points than are needed for an image with a certain resolution. This undersampling causes artifacts and decreases image quality. The acquisition sequences and reconstruction techniques used and developed in this thesis seek to remedy these deficiencies.
First, 3D dictionary-learning compressed sensing (3D-DLCS) was evaluated for reconstruction of undersampled 23Na MRI data acquired with a density-adapted 3D-radial sampling sequence with a cuboid field-of-view (DA-3DPR-C). The results of the study with simulations and in vivo measurements indicated that 3D-DLCS could be applicable for a four-fold acceleration of TSC measurements in the skeletal muscle. However, the quantification error decrease was only about 10% compared to the reference.
Based on these results, optimized acquisition techniques were investigated and developed for accelerated 23Na MRI. Two acquisition sequences were used as a basis: the DA-3DPR-C method and a 3D acquisition-weighted density-adapted stack-of-stars sequence (AWSOSt). Two further acquisition sequences were developed to increase the artifact incoherence: A golden ratio rotated stack-of-stars sequence based on AWSOSt and a DA-3DPR-C adaption with golden means reordered projections. Comparisons in simulations and an in vivo/vitro study showed that the quantification error is decreased and the artifact incoherence is increased by adapting acquisition trajectories using the golden angle and golden means.
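A small, generic illustration of why golden-ratio-based view ordering increases artifact incoherence is given below: consecutive radial spokes are rotated by the golden angle, so any contiguous subset of acquired spokes covers the azimuthal angles nearly uniformly. The numbers are generic and not the trajectory parameters used in the thesis.

```python
# Golden-angle spoke ordering: any prefix of the acquisition covers the
# half-circle of radial angles almost uniformly.
import numpy as np

GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

def spoke_angles(n_spokes):
    return np.mod(np.arange(n_spokes) * GOLDEN_ANGLE_DEG, 180.0)

def largest_angular_gap(angles):
    a = np.sort(angles)
    gaps = np.diff(np.concatenate([a, [a[0] + 180.0]]))
    return gaps.max()

for n in (13, 34, 89):   # e.g. after undersampling or aborting the scan early
    print(n, "spokes -> largest gap:", round(largest_angular_gap(spoke_angles(n)), 2), "deg")
```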
Additionally, three CS algorithms were compared for the adapted stack-of-stars acquisition sequence in simulations and an in vivo/vitro study: 3D-DLCS, total variation CS (TV-CS) and TV-CS with a block matching prior. In this comparison, the highest quantification error decrease of 35% was observed with 3D-DLCS for an acceleration factor of 4.1 and a measurement time of less than two minutes. The results demonstrate the impact of artifact incoherence for CS reconstruction. Furthermore, the investigations showed that applicable acquisition times are achievable by artifact-reduced acquisition and CS reconstruction.
Beyond CS, deep learning (DL) based reconstruction was explored in a preliminary study. A U-Net was trained using 350 23Na MRI calf measurements. The quantification error was decreased by 64% with the U-Net compared to the reference method. These promising results indicate that DL-based reconstruction could probably be the next step for accelerated 23Na MRI.
The analysis of clinical use cases suggests that one of the most important aspects in storing and transmitting medical image data still consists in compression, with its focus on time-efficiently reducing storage demand while maintaining image quality. Current appliances typically perform compression using common signal processing codecs such as those described in the JPEG or MPEG/ITU-T standards. As such codecs have been designed mostly for the compression of camera-captured natural scenes, efficiency can be improved by exploiting the differing image handling and image characteristics in medicine.
Using a bottom-up approach, this thesis introduces novel techniques both for pixel-wise prediction, mostly used in lossless and high-quality scenarios, and for frame-wise prediction, most useful in medium-quality scenarios. Because of the diverse image characteristics across modalities, the focus is restricted to computed tomography and, for frame prediction, in particular to dynamic 3-D+t cardiac acquisitions. Apart from traditional approaches, foremost numerical optimization such as linear and nonlinear least squares but also discrete algorithms are utilized in order to find the context region, mean, and variance of predictions, determine local image structures, weight rate against distortion, identify motion between frames, remove noise from predictors, and more. Backward-adaptive autoregression approaches are thoroughly compared and extended to 3-D images, adaptive context selection and boundary treatment, closed probability distribution estimation, and many other procedures in order to make them usable within real codecs, such as a massively parallel implementation on GPUs or the presented open-source framework Vanilc. Vanilc's compression ratio is shown to beat all algorithms from the literature with implementations available for comparison that the author is aware of. Next to alternative developments intended for use with small contexts, such as Burg or EMP predictors, a lossy application has also been designed, which on the one hand outperforms established codecs like HM or VTM at high qualities, while on the other hand featuring a noise-removing behavior that in reality even enhances image quality, as proven by phantom reconstruction simulations.
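The core of backward-adaptive prediction can be sketched very compactly: for each pixel, least-squares weights are re-estimated from a causal training window of already decoded pixels, so encoder and decoder derive identical weights without side information. The sketch below is didactic and far simpler than the autoregressive models, context selection, and variance estimation implemented in Vanilc.

```python
# Didactic backward-adaptive least-squares pixel prediction on a toy image.
import numpy as np

def causal_context(img, y, x):
    # west, north, north-west, north-east neighbours
    return np.array([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1], img[y - 1, x + 1]], float)

def predict_image(img, window=64):
    h, w = img.shape
    pred = img.astype(float).copy()
    history_ctx, history_val = [], []
    for y in range(1, h):
        for x in range(1, w - 1):
            if len(history_ctx) >= 8:                      # enough causal training samples
                A = np.array(history_ctx[-window:])
                b = np.array(history_val[-window:])
                coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
                pred[y, x] = causal_context(img, y, x) @ coeff
            history_ctx.append(causal_context(img, y, x))
            history_val.append(float(img[y, x]))
    return pred

rng = np.random.default_rng(5)
ramp = np.add.outer(np.arange(32), np.arange(32)).astype(float)
img = ramp + rng.normal(scale=0.5, size=ramp.shape)
residual = img - predict_image(img)
print("residual std:", residual[2:, 2:-2].std(), "vs. image std:", img.std())
```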
For medium-quality image compression, three deformation compensation methods are proposed to replace block-based compensation in dynamic data exhibiting heart movements. One of them models physiological 3-D muscle contractions and is again realized in Nvidia CUDA. Together with Vanilc compression of the motion information and a frequency-filtered combination with axially preceding slices, it surpasses the rate-distortion performance of modern inter predictors like the one realized in HM. A harmonized deformation inversion algorithm for applications like motion-compensated temporal filtering or intermediate image interpolation completes the thesis.
Due to their ease of use and their reliability, managed storage services "in the cloud" have become a standard way to store personal files for many users. In fact, many apps on mobile devices use local storage on the client merely as a cache for data that is fully stored on a remote server. Consequently, data from cloud storage services is an increasingly valuable source of digital evidence in forensic investigations. This document presents the results of a student project that was performed at Friedrich-Alexander-Universität Erlangen-Nürnberg in the winter term 2021/22. Six groups of students analyzed the most relevant network storage technologies (Samba, Nextcloud, Dropbox, Google Drive, OneDrive, iCloud) regarding two questions: (1) What effect does data acquisition by the client have on the data stored on the server? (2) Does the technology support delayed verification of data acquisition in any way? The two questions refer to critical aspects of forensic evidence collection, namely in what way evidence collection interferes with the evidence, and how easy it is to prove the provenance of data in a forensic investigation. In the concluding discussion we compare the different technologies and develop a taxonomy of storage services that can be used to assess other cloud storage services with regard to the evidential value of data acquired from them.
A Physics-Aware Neural Network Approach for Flow Data Reconstruction From Satellite Observations
(2022)
An accurate assessment of physical transport requires high-resolution and high-quality velocity information. In satellite-based wind retrievals, the accuracy is impaired due to noise while the maximal observable resolution is bounded by the sensors. The reconstruction of a continuous velocity field is important to assess transport characteristics and it is very challenging. A major difficulty is ambiguity, since the lack of visible clouds results in missing information and multiple velocity fields will explain the same sparse observations. It is, therefore, necessary to regularize the reconstruction, which would typically be done by hand-crafting priors on the smoothness of the signal or on the divergence of the resulting flow. However, the regularizers can smooth the solution excessively and will not guarantee that possible solutions are truly physically realizable. In this paper, we demonstrate that data recovery can be learned by a neural network from numerical simulations of physically realizable fluid flows, which can be seen as a data-driven regularization. We show that the learning-based reconstruction is especially powerful in handling large areas of missing or occluded data, outperforming traditional models for data recovery. We quantitatively evaluate our method on numerically-simulated flows, and additionally apply it to a Guadalupe Island case study—a real-world flow data set retrieved from satellite imagery of stratocumulus clouds.
Every day, organizations following a collect-everything mentality generate, process, and store an ever-increasing amount of data. With the increasing amount of data available, it is becoming more difficult and complex to analyze it in a way that creates beneficial knowledge. This analysis is usually done via queries, i. e. the queries themselves also contain knowledge in turn.
The goal of this dissertation is to make the knowledge contained in the queries available. Often, this knowledge is not immediately apparent but is tacit expert knowledge that usually exists only in the heads of a few domain experts. As a solution, this dissertation presents QRep, a query repository that can manage SQL queries and the knowledge they contain in the form of query metadata. The functionality provided by QRep goes far beyond that of a conventional database system. It can derive multiple query metadata automatically. For metadata that do not allow this, it provides a mechanism to semi-automatically derive the underlying query knowledge via evolutionary domain-specific policy rules.
This dissertation first provides a conceptual data model for the management of query metadata. Second, it describes the mapping of this schema partly to a relational schema and a multi-relational directed property graph. To enable a performant retrieval of query metadata, QRep internally accesses tree-structured metadata via graph traversals and tabular metadata via SQL. For simple, uniform user access to the query metadata, QRep provides a domain-specific language that can be easily adjusted and extended. Its fundamental parts are so-called basic query-processing patterns, which can be nested and combined via Boolean operators to form arbitrary query-processing patterns.
The domain-specific language is one component of the policy rules. These rules enable the externalization of tacit expert knowledge regarding queries. Furthermore, they do not require profound technical knowledge and can be defined evolutionarily at runtime. A policy rule is based on a conditional rule and automatically classifies queries matching a processing pattern according to its consequent part. For this, QRep incrementally aligns the basic patterns in the policy rule with the metadata of the query and stores the classification result as additional metadata if this alignment evaluates to True.
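The idea of combining basic query-processing patterns with Boolean operators into a policy rule can be illustrated with a toy example, as below. QRep's actual domain-specific language operates on derived query metadata rather than raw SQL text; the patterns, rule, and query here are simplified stand-ins.

```python
# Toy policy rule: basic patterns (predicates over a query) combined with
# Boolean operators classify a matching query.
import re

def has_join(sql):        return re.search(r"\bJOIN\b", sql, re.I) is not None
def has_group_by(sql):    return re.search(r"\bGROUP\s+BY\b", sql, re.I) is not None
def touches_table(name):  return lambda sql: re.search(rf"\b{name}\b", sql, re.I) is not None

def all_of(*patterns):    return lambda sql: all(p(sql) for p in patterns)
def any_of(*patterns):    return lambda sql: any(p(sql) for p in patterns)

# Hypothetical rule: "queries that aggregate over the sales table are reporting queries".
reporting_rule = {
    "pattern": all_of(has_group_by, touches_table("sales")),
    "classification": "reporting query",
}

query = "SELECT region, SUM(amount) FROM sales GROUP BY region"
if reporting_rule["pattern"](query):
    print("classified as:", reporting_rule["classification"])
```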
The pilot implementation of QRep features a conventional three-tier client-server architecture and provides a user-friendly graphical user interface. It can be deployed in a minimally intrusive way to arbitrary existing IT landscapes to facilitate integration into existing projects. The average latency for deriving metadata for a query is 366 ms, and 906 ms for aligning a query with all policy rules. For long-running queries, these latencies are negligible compared to their execution time.
Under the premise of "good form", everyday objects in the post-war period were honored with various design awards or shown in exhibitions in order to raise awareness of good design. The drinking glasses from this period are the focus here. What makes a good glass? The author pursues this question on the basis of guidelines written at the time and of award-winning glasses. In addition, she describes the development of the digital toolset that was used for researching and comparing the objects. This includes a typology of drinking glasses as well as a precise attribution of form.
This design-historical study shows what contemporary research on a specific subject area in museums could look like.
Log files are plain text files written by any modern computer system. Their content is determined by logging statements during software development. Thus, any kind of event can be collected during runtime. As a result, logs contain a wealth of precious information that is used for various analyses. However, current log processing methods assume static log structures and contents. Due to today's focus on agile software development paradigms, source code and the logging statements contained therein can constantly change. This leads to failing preprocessing and incorrect data points for subsequent analyses. If those failures are not detected, faulty information will be extracted, which will result in erroneous insights.
This dissertation targets successful log file processing from different angles using artificial intelligence approaches. First, missing data points induced by changing software are addressed. Using Magnetic Resonance Imaging (MRI) examination data as an example, new variables are added to the logging mechanism with growing requirements. As a consequence, logs from systems with earlier software versions are missing the new features. Therefore, we propose classification techniques to learn feature correlations from complete data sets. Afterwards, these trained models are applied to impute data points where the respective feature is missing. We prove the effectiveness of a feed-forward neural network by successfully determining the desired feature for more than 94% of the MRI exams.
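The imputation idea can be sketched compactly: train a classifier on complete records where the feature exists and predict it for older records where it is missing. The features, data, and network size below are synthetic stand-ins for the MRI exam logs used in the dissertation, and the reported 94% refers to that data, not to this sketch.

```python
# Sketch of imputing a missing categorical feature from complete records.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 6))                       # numeric exam features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # feature only logged by newer versions

# "new" records contain the feature, "old" records are missing it.
X_new, X_old, y_new, y_old = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_new, y_new)                           # learn correlations from complete logs
imputed = model.predict(X_old)                    # impute for old-version logs
print("imputation accuracy on held-out records:", (imputed == y_old).mean())
```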
Furthermore, missing information can also originate from failing parsers. Despite being available within the raw logs, crucial data points might not be extracted correctly. Since software changes can entail adaptations of the log file structure, they can pose insurmountable challenges to existing log parsing methods. We investigate Hidden Markov Models and Deep Learning methods in order to process log files automatically and reliably despite those challenges. We study different log file data sets from various systems with manifold log changes and outperform state-of-the-art parsers. Thus, we propose a novel pipeline for flexible parsing. It contains a stateful Long Short-Term Memory (LSTM) network to model adaptability regarding log changes. We call it FlexParser and achieve an F1-score of 92% or higher for all studied data sets.
Successfully parsed log files hold manifold chances to find actionable insights. Therefore, we process the parsing results of MRI log files in order to predict hardware failures. High image quality is crucial for medical diagnosis; however, it depends not only on the selected imaging parameters but also on flawless hardware. Accordingly, we train Deep Learning models on image features and the respective recorded hardware conditions. Since the available data points from failing components are limited, we employ different data augmentation techniques. Furthermore, we investigate Ensemble Learning to combine insights from different models and compare the results with those achieved by time series methods. In conclusion, we propose an Ensemble Learning pipeline which reliably detects hardware failures, achieving an F1-score of 94.14%.
Invasive Computing
(2022)
Invasive computing is a paradigm for designing and programming future parallel computing systems. For systems with 1,000 or more cores on a chip, resource-aware programming is of utmost importance to obtain high utilisation as well as computational, energy and power efficiency. Invasive computing provides a programmer with explicit handles to specify and argue about resource requirements desired or required in different phases of execution: In an invade phase, an application asks the operating system to allocate a set of processor, memory and communication resources to be claimed. In a subsequent infect phase, the parallel workload is spread and executed on the obtained claim of resources. Finally, if the degree of parallelism should become lower again, a retreat operation frees the claim, and the application resumes a sequential execution. To support this idea of self-adaptive and resource-aware programming, not only did new programming concepts, languages, compilers, and operating systems need to be developed, but also revolutionary architectural changes in the design of MPSoCs (multiprocessor systems-on-a-chip) to efficiently support invasion, infection, and retreat operations. This book gives a comprehensive overview of all aspects of invasive computing.
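The three phases can be mocked up schematically, as below, to illustrate the programming model only. The real invasive computing stack provides these operations at the language, compiler, and operating-system level; the Python names and the process pool standing in for claimed cores are purely illustrative assumptions.

```python
# Schematic mock-up of the invade / infect / retreat phases; not the actual
# invasive computing language or runtime.
from multiprocessing import Pool

class Claim:
    """A set of resources granted by the (mocked) resource manager."""
    def __init__(self, n_cores):
        self.n_cores = n_cores

def invade(requested_cores, available_cores=8):
    # Ask for resources; the system may grant fewer than requested.
    return Claim(min(requested_cores, available_cores))

def infect(claim, kernel, work_items):
    # Spread the parallel workload onto the claimed resources.
    with Pool(claim.n_cores) as pool:
        return pool.map(kernel, work_items)

def retreat(claim):
    # Free the claim again; execution continues sequentially.
    claim.n_cores = 0

def kernel(x):
    return x * x

if __name__ == "__main__":
    claim = invade(requested_cores=4)
    results = infect(claim, kernel, range(16))
    retreat(claim)
    print(results)
```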
Body-worn sensors, so-called wearables, are getting more and more popular in the sports domain. Wearables offer real-time feedback to athletes on technique and performance, while researchers can generate insights into the biomechanics and sports physiology of athletes in real-world sports environments outside of laboratories. One of the first sports disciplines in which many athletes have been using wearable devices is endurance running. With the rising popularity of smartphones, smartwatches and inertial measurement units (IMUs), many runners started to track their performance and keep a digital training diary. Due to the high number of runners worldwide who transferred their wearable data to online fitness platforms, large databases were created, which enable Big Data analysis of running data. This kind of analysis offers the potential to conduct longitudinal sports science studies on a larger number of participants than ever before.
In this dissertation, studies showing how to extract endurance running-related parameters from the raw data of foot-mounted IMUs are presented, as well as a Big Data study with running data from a fitness platform.