
Open Access


004 Data processing; Computer science

Refine

Author

  • Freiling, Felix (7)
  • Meyer-Wegener, Klaus (4)
  • Spreitzenbarth, Michael (4)
  • Bajramovic, Edita (3)
  • Dewald, Andreas (3)
  • Frinken, Marius (3)
  • German, Reinhard (3)
  • Hagenhoff, Svenja (3)
  • Riehle, Dirk (3)
  • Sommer, Christoph (3)

Year of publication

  • 2021 (1)
  • 2020 (9)
  • 2019 (16)
  • 2018 (6)
  • 2017 (8)
  • 2016 (9)
  • 2015 (6)
  • 2014 (10)
  • 2013 (13)
  • 2012 (18)

Document Type

  • Doctoral Thesis (165)
  • Report (28)
  • Study Thesis (9)
  • Article (8)
  • Working Paper (4)
  • Master's Thesis (3)
  • Book (1)
  • Habilitation (1)

Language

  • English (138)
  • German (81)

Has Fulltext

  • yes (219)

Keywords

  • Betriebssystem (12)
  • Computertomographie (10)
  • Maschinelles Lernen (8)
  • - (7)
  • Echtzeitsystem (7)
  • Eingebettetes System (7)
  • Rekonstruktion (7)
  • Visualisierung (7)
  • Bildverarbeitung (6)
  • Operating System (6)

Institute

  • Technische Fakultät -ohne weitere Spezifikation- (140)
  • Technische Fakultät (36)
  • Department Informatik (25)
  • Medizinische Fakultät -ohne weitere Spezifikation- (5)
  • Medizinische Fakultät (4)
  • Department Elektrotechnik-Elektronik-Informationstechnik (2)
  • Fakultätsübergreifend / Sonstige Einrichtung (2)
  • Zentrale Universitätseinrichtung -ohne weitere Spezifikation- (2)
  • Department Medienwissenschaften und Kunstgeschichte (1)
  • Department Physik (1)

219 search hits

Multi-modal Medical Image Processing with Applications in Hybrid X-ray/Magnetic Resonance Imaging (2021)
Stimpel, Bernhard
Modern medical imaging allows for a detailed insight into the human body. The wide range of imaging methods enables the acquisition of a large variety of information, but the individual modalities are usually limited to a small part of it. Therefore, often several acquisition types in different modalities are necessary to obtain sufficient information for the assessment. The evaluation of this extensive information poses great challenges for clinical users. In addition to the time expenditure, the identification of correlations across multiple data sets is a difficult task for human observers. This highlights the urgency of holistic processing of the accruing information. The simultaneous evaluation and processing of all available information thus not only has the potential to uncover previously unimagined correlations but is also an important step towards relieving the burden on clinical personnel. In this thesis, we investigate multiple approaches for the processing of multi-modal medical image data in different application areas. First, we will focus on hybrid X-ray and magnetic resonance (MR) imaging. The combination of these modalities has great potential especially in interventional imaging due to the combination of fast, high-resolution X-ray imaging and the high contrast diversity of magnetic resonance imaging. For further processing of this data, however, it is often advantageous to have the information from both modalities in one domain. Therefore, we investigate the possibility of a deep learning-based projection-to-projection translation of MR projection images to corresponding X-ray-like views. In the course of this work, we show that the characteristics of projection images pose special challenges to the methods of image synthesis. We tackle these by weighting the objective function with a focus on high-frequency structures and a corresponding adaptation of the network architecture. Both modifications show clear improvements compared to conventional approaches, quantitative as well as qualitative. Second, we deal with the topic of comprehensibility in the course of deep learning-based processing of multi-modal image data. Specifically, we investigate the combination of established deep learning approaches with known operators, in this case the guided filter. The conducted experiments show that this combination allows for a processing that performs less manipulation of the image content, is more robust to degraded input data, and ensures a higher level of protection against adversarial attacks. All this can be achieved with little or no loss of pure performance. Third, we are concerned with an approach for the optimization of image processing methods based solely on feedback from a human user. This approach addresses a difficult problem of image processing, namely the automated evaluation of image quality. While human observers can rarely explicitly provide a reference as the target of the optimization process, their ability to judge results is usually excellent. We use this to set up an objective function in a forced-choice experiment, which is based only on the judgment of a user. We show that the presented strategy can be used successfully for optimization, from simple parameterized operators up to complex neural networks.
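The first contribution above hinges on weighting the synthesis objective towards high-frequency structures. As a rough illustration of that weighting idea only (the thesis's actual loss and network are not reproduced here; the Laplacian-based weight map and the parameter alpha are assumptions), a minimal NumPy sketch:

```python
import numpy as np
from scipy.ndimage import laplace

def high_frequency_weighted_l1(pred, target, alpha=4.0):
    """L1 loss whose per-pixel weight grows with the local high-frequency
    content of the target projection (edges, fine structures)."""
    hf = np.abs(laplace(target.astype(np.float64)))   # high-pass response
    hf = hf / (hf.max() + 1e-12)                       # normalize to [0, 1]
    weight = 1.0 + alpha * hf                          # emphasize structures
    return float(np.mean(weight * np.abs(pred - target)))

# toy usage: a noisy "synthesized" projection vs. its target
rng = np.random.default_rng(0)
target = rng.random((64, 64))
pred = target + 0.05 * rng.standard_normal((64, 64))
print(high_frequency_weighted_l1(pred, target))
```

In an actual training loop, such a weight map would multiply the per-pixel loss inside the network objective rather than living in a standalone function.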
A scalable and extensible checkpointing scheme for massively parallel simulations (2019)
Kohl, Nils ; Hötzer, Johannes ; Schornbaum, Florian ; Bauer, Martin ; Godenschwager, Christian ; Köstler, Harald ; Nestler, Britta ; Rüde, Ulrich
Realistic simulations in engineering or in the materials sciences can consume enormous computing resources and thus require the use of massively parallel supercomputers. The probability of a failure increases both with the runtime and with the number of system components. For future exascale systems, it is therefore considered critical that strategies are developed to make software resilient against failures. In this article, we present a scalable, distributed, diskless, and resilient checkpointing scheme that can create and recover snapshots of a partitioned simulation domain. We demonstrate the efficiency and scalability of the checkpoint strategy for simulations with up to 40 billion computational cells executing on more than 400 billion floating point values. A checkpoint creation is shown to require only a few seconds and the new checkpointing scheme scales almost perfectly up to more than 260,000 (2^18) processes. To recover from a diskless checkpoint during runtime, we realize the recovery algorithms using ULFM MPI. The checkpointing mechanism is fully integrated in a state-of-the-art high-performance multi-physics simulation framework. We demonstrate the efficiency and robustness of the method with a realistic phase-field simulation originating in the material sciences and with a lattice Boltzmann method implementation.
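To make the partner-replication idea behind diskless checkpointing concrete, here is a minimal mpi4py sketch in which paired ranks keep an in-memory copy of each other's block data. It is an illustration under simplifying assumptions (toy data, simple rank pairing) and does not show the ULFM-based recovery path described in the article:

```python
"""Minimal sketch of diskless 'buddy' checkpointing: each rank keeps a copy of a
partner rank's block data in memory, so a failed rank's state could be restored
from its partner. Illustration only; not the scheme of the article."""
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_block = np.full(1024, float(rank))   # this rank's simulation data (toy)
partner = rank ^ 1                         # pair ranks (0<->1, 2<->3, ...)
if partner >= size:
    partner = rank                         # odd rank count: last rank has no partner

def create_checkpoint(block):
    """Exchange blocks with the partner; the received copy is the checkpoint."""
    backup = np.empty_like(block)
    if partner == rank:
        backup[:] = block                  # degenerate case: keep a local copy
    else:
        comm.Sendrecv(block, dest=partner, recvbuf=backup, source=partner)
    return backup

backup_of_partner = create_checkpoint(local_block)
if rank == 0:
    print("checkpoint created; rank 0 holds a copy of rank", partner)
```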
Model-Based and Data-Driven Geometry Alignment for Angiography Systems (2020)
Preuhs, Alexander
Tomography is the reconstruction of internal structures based on a sensory signal. X-ray cone-beam computed tomography (CBCT) measures the X-ray absorption with a source-detector gantry mounted on a C-shaped arm that rotates around the object. During the rotation, projection images are acquired. To obtain a tomographic reconstruction, the projection data is filtered and back-projected using the geometry information from the scan trajectory. This technique is of high relevance for stroke applications, where a hemorrhagic stroke is to be distinguished from an ischemic stroke. By performing the stroke diagnostic on the interventional CBCT system, no patient transfer is required, which minimizes the door-to-groin time. However, due to the prolonged acquisition time of CBCT compared to diagnostic computed tomography (CT), the likelihood of patient motion increases. This creates a demand for rigid motion compensation and geometry alignment algorithms. In this thesis, we investigate model-based and data-driven approaches for the compensation of geometry misalignment. Grangeat’s theorem provides a measure to compare the geometric consistency of two projections. In an ideal scenario, the derivative of corresponding epipolar line integrals is equal. Geometric inaccuracies and motion lead to unmatched epipolar line pairs and impair a corresponding consistency measure. We present algorithmic extensions for the computation of Grangeat’s consistency based on a pair of projection images. Using these extensions, we derive analytic motion gradients describing the change of consistency with respect to a rigid motion. Grangeat’s theorem is known to be efficient for out-plane motion compensation within circular trajectories; however, in-plane motion is barely detectable. We devise the X-trajectory, which is sensitive to both in-plane and out-plane motion. The X-trajectory can be computed from a regular short-scan, if the object is symmetric, by mirroring the trajectory at the symmetry plane. This requires the estimation of the object’s symmetry plane. We show that Grangeat’s theorem is suitable for the estimation of symmetry planes using an anthropomorphic head phantom. With experiments using digital phantoms and real acquisitions, we show that the X-trajectory improves in-plane motion compensation. Hence, symmetry is a powerful concept with encouraging initial results when applied to head imaging, which is worth further investigation to validate its clinical applicability. We also investigate data-driven approaches, where we learn a cost function directly from the clinical data. In this context, we use an appearance learning approach, where a network is applied to regress the reprojection error (RPE) from the reconstructed slice images. The RPE is available during training of the network, because the ground-truth geometry as well as the motion-affected geometry are known. The RPE measures the reconstruction-relevant deviations and is convex. We devise a siamese triplet network to regress the mean reprojection error (mRPE) as well as a projection-wise RPE. The latter provides a rough estimate of which parts of the trajectory are affected by motion, whereas the mRPE is used as optimization target. The motion-compensated geometry is found by a constrained optimization of the mRPE, where only projections are optimized that are classified by the network as motion-corrupted. Using a motion compensation benchmark, we show that our approach is superior compared to state-of-the-art image quality metrics (IQMs).
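Since the data-driven part of the thesis optimizes the mean reprojection error, a small sketch of how an mRPE between an ideal and a motion-affected geometry can be computed may help. The projection matrices and 3-D points below are toy values, not the thesis's setup:

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3-D points X (N x 4) with a 3x4 matrix P to 2-D pixels."""
    x = X @ P.T
    return x[:, :2] / x[:, 2:3]

def mean_reprojection_error(P_true, P_motion, X):
    """Average 2-D distance between projections under the ideal and the
    motion-affected geometry, over all views and points."""
    errs = [np.linalg.norm(project(Pt, X) - project(Pm, X), axis=1).mean()
            for Pt, Pm in zip(P_true, P_motion)]
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = np.c_[rng.uniform(-50, 50, (20, 3)), np.ones(20)]          # 3-D points (mm)
K = np.array([[1200.0, 0, 256], [0, 1200.0, 256], [0, 0, 1]])  # toy intrinsics
P_true = [K @ np.c_[np.eye(3), [[0], [0], [600 + i]]] for i in range(4)]
# motion-affected geometry: a small in-plane shift added to each view
P_motion = [P + K @ np.c_[np.zeros((3, 3)), [[0.5], [0], [0]]] for P in P_true]
print("mRPE [px]:", mean_reprojection_error(P_true, P_motion, X))
```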
Gegen die Diskussion mit den drei Unbekannten – Daten, Algorithmen und Digitalisierung (2020)
Hagenhoff, Svenja
The inflationary use of currently prominent terms such as data centration or the digital age, without the effort of naming and explaining how the phenomena they refer to are to be understood and by which characteristics they are marked, does nothing more than obscure meaning without any chance of gaining insight or forming a well-founded opinion. Algorithms, data, and digitization are three of the candidates that are frequently used but only rarely explained in a well-founded and differentiated manner. This article addresses this desideratum. It deals with these 'three unknowns' and attempts to grasp and substantiate the terms and the phenomena they refer to, and to formulate what exactly is 'new' and what constitutes the qualitative difference to previous states (an age without digitization).
Mapping eines deutschen, klinischen Datensatzes nach OMOP Common Data Model (2020)
Lang, Lukas
Background and objectives: There is great potential in the secondary use of clinical medical data stored in a patient's electronic health record or in a hospital information system. To exploit this potential and to use the data across institutions or even internationally, a standardized, overarching IT infrastructure is needed that can represent the divergent data in a homogenized form. For this purpose, among others, the US-American OMOP Common Data Model was developed; this thesis examines whether it is suitable for use with a German clinical data set. Methods: After background research, the data set to be mapped was first defined. A workflow scheme for the mapping process, which is supported by the tools provided by OMOP, was then developed. The mapping was carried out according to this scheme and subsequently extended manually. The entire process was documented in the wiki of the Institute of Medical Informatics. Finally, an ETL document summarizing the mapping results was generated with the software Rabbit-in-a-Hat. In addition, a quantitative evaluation of the mapping was carried out. Results and observations: Of the 59 data fields in the five tables of the data set to be mapped, 46 (78.0%) could be mapped to data fields in 13 OMOP CDM tables. For 15 data fields, there were no congruent CDM data fields, so they were mapped to the OBSERVATION table according to their content. In OMOP CDM, 100 data fields were used in the mapping, corresponding to 63.3% of all data fields of the 13 OMOP CDM tables. The data fields were represented in OMOP CDM in 63.0% of cases as a concept/value pair, in 6.5% as a concept only, and in 30.5% by transferring the data field value. For three values from the source data set, no concept could be found in the standard vocabularies. In total, 71 of the 100 OMOP data fields used contain values and 29 contain concepts, with 55.2% being fixed mappings to exactly one concept and 44.8% being mappings to more than one concept. Information was lost at several points in the mapping process. Conclusions: The OMOP Common Data Model is suitable for mapping German clinical data sets and could thus be an appropriate base model for a data warehouse for pooling German health data. However, the mapping process revealed problems of information loss and deviation from the standard vocabularies. Future work should test the practicability of the mapping for more extensive data sets and optimize compatibility with the standard vocabularies.
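For readers unfamiliar with OMOP, a hedged sketch of what the concept/value-pair mapping discussed above amounts to in code (simplified OBSERVATION columns; the concept IDs are placeholders, not a vetted vocabulary mapping):

```python
from datetime import date
from itertools import count

_observation_ids = count(1)

def to_observation_row(person_id, concept_id, value, obs_date):
    """Represent one source data field as a (simplified) OMOP CDM OBSERVATION row,
    i.e. as a concept/value pair. Numeric values go to value_as_number, everything
    else to value_as_string. Concept IDs here are placeholders only."""
    numeric = isinstance(value, (int, float)) and not isinstance(value, bool)
    return {
        "observation_id": next(_observation_ids),
        "person_id": person_id,
        "observation_concept_id": concept_id,
        "observation_date": obs_date.isoformat(),
        "observation_type_concept_id": 0,            # placeholder type concept
        "value_as_number": value if numeric else None,
        "value_as_string": None if numeric else str(value),
    }

print(to_observation_row(42, 1234567, 36.6, date(2020, 3, 1)))        # numeric field
print(to_observation_row(42, 7654321, "non-smoker", date(2020, 3, 1)))  # string field
```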
Determination and Enhancement of the Forming Limit Curve for Sheet Metal Materials using Machine Learning (2020)
Jaremenko, Christian
Future legal standards for European automobiles will require a considerable reduction in CO2 emissions by 2021. In order to meet these requirements, an optimization of the automobiles is required, comprising technological improvements of the engine and aerodynamics, or even more important, weight reductions by using light-weight components. The properties of light-weight materials differ considerably from those of conventional materials and therefore, it is essential to correctly define the formability of high-strength steel or aluminum alloys. In sheet metal forming, the forming capacity is determined by means of the forming limit curve that specifies the maximum forming limits for a material. However, current methods are based on heuristics and have the disadvantage that only a very limited portion of the evaluation area is considered. Moreover, the methodology of the industry standard is user-dependent with simultaneously varying reproducibility of the results. Consequently, a large safety margin from the experimentally determined forming limit curves is required in process design. This thesis introduces pattern recognition methods for the determination of the forming limit curve. The focus of this work is the development of a methodology that circumvents the previous disadvantages of location-, time-, user- and material dependencies. The dependency on the required a priori knowledge is successively reduced by incrementally improving the proposed methods. The initial concept proposes a supervised classification approach based on established textural features in combination with a classifier and addresses a four-class problem consisting of the homogeneous forming, the diffuse and local necking, as well as the crack class. In particular for the relevant class of local necking, a sensitivity of up to 92% is obtained for high-strength materials. Since a supervised procedure would require expert annotations for each new material, an unsupervised classification method to determine the local necking is preferred, so that anomaly detection is feasible by means of predefined features. A probabilistic forming limit curve can thus be defined in combination with Gaussian distributions and consideration of the forming progression. In order to further reduce the necessary prior knowledge, data-driven features are learned based on unsupervised deep learning methods. These features are adapted specifically to the respective forming sequences of the individual materials and are potentially more robust and characteristic in comparison to the predefined features. However, it was discovered that the feature space is not well-regularized and thus not suitable for unsupervised clustering procedures. Consequently, the last methodology introduces a weakly supervised deep learning approach. For this purpose, several images of the beginning and end of the forming sequences are used to learn optimal features in a supervised setup while regularizing the feature space. Through unsupervised clustering, this facilitates the class membership determination for individual frames of the forming sequences and the definition of the probabilistic forming limit curve. Moreover, this approach enables a visual examination and interpretation of the actual necking area.
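The unsupervised stage described above fits Gaussian distributions to features of the stable forming phase and flags deviations. A minimal NumPy sketch of that general idea follows (the features, threshold, and data here are invented, not the thesis's):

```python
import numpy as np

def fit_gaussian(features):
    """Fit mean/covariance to features (frames x dims) of homogeneous forming."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def necking_onset(sequence, mu, cov_inv, threshold=3.0):
    """Index of the first frame whose Mahalanobis distance to the
    homogeneous-forming model exceeds the threshold (None if never)."""
    d = sequence - mu
    maha = np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))
    hits = np.flatnonzero(maha > threshold)
    return int(hits[0]) if hits.size else None

rng = np.random.default_rng(2)
homogeneous = rng.normal(0, 1, (200, 3))                    # early, stable frames
mu, cov_inv = fit_gaussian(homogeneous)
drift = rng.normal(0, 1, (100, 3)) + np.linspace(0, 6, 100)[:, None]  # later frames
print("estimated onset frame:", necking_onset(drift, mu, cov_inv))
```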
Coalgebraic Semantics and Minimization in Sets and Beyond (2020)
Wißmann, Thorsten
The theory of coalgebras provides a uniform view on state-based systems, which are omnipresent throughout computer science. The notion of coalgebras is parametric in the choice of a functor F on a category. Depending on this choice, the notion of an F-coalgebra instantiates to different kinds of state-based systems, for example, Markov chains, deterministic automata, and transition systems. The present thesis contributes to the coalgebraic framework in two aspects: 1. the characterization of more systems in coalgebraic terms, and 2. the development of generic methods applicable to different systems that are coalgebras. In the preliminaries, we recall the basic coalgebraic definitions and further categorical notions that are needed later. We illustrate the notions using the well-studied examples of coalgebras in sets. Part I is dedicated to the semantics of two new (families of) instances of the generic framework of coalgebras beyond sets. By embedding these systems into the coalgebraic framework, we make existing generic coalgebraic methods applicable for the respective kind of state-based systems: In Chapter 3 in Part I, we characterize the orbit-finite coalgebraic behaviours in nominal sets by relating them to the well-understood finite behaviours in sets, where orbit-finite is the canonical finiteness notion in nominal sets. Concretely, we define a special class of functors on nominal sets and call them localizable liftings of a set-functor. Then we characterize the rational fixpoint of such a functor, which captures the semantics of finitary coalgebras, i.e. orbit-finite coalgebras in nominal sets. We prove that the rational fixpoint of localizable liftings lifts from the rational fixpoint of the corresponding set functor. Secondly, we establish a sufficient property on functors F on nominal sets such that the rational fixpoint of quotient functors is given by quotienting the rational fixpoint of F. By combining these two observations, we obtain an explicit description of the rational fixpoint, i.e. the finitary semantics, of a large family of functors involving name binding, exponentiation, finite branching, and polynomial constructs. Their orbit-finite behaviours are thus equivalence classes of the finite coalgebra semantics of their set-theoretic representation. For example, for the functor modelling λ-terms, the orbit-finite behaviours are α-equivalence classes of rational λ-trees, which are possibly infinite λ-terms that have a finite representation as a coalgebra in sets. In Chapter 4 in Part I, we model coalgebraic systems with upgrades. These are systems that run in a specific version. For each version, only a certain set of transitions are available, and the higher the version, the more transitions are available. The corresponding notion of bisimilarity, called ‘conditional bisimulation’, has been defined previously for a particular type of system called conditional transition system. In Chapter 4 we define it generically for a functor that satisfies certain axioms, generalizing the concrete notion. For this, we consider functors on the Kleisli category of the reader monad on the category of partially ordered sets. In this Kleisli category, we prove for coalgebras satisfying a sufficient condition called ‘upgrade preservation’ that the generic notion of coalgebraic bisimilarity is characterized precisely by conditional bisimilarity.
Furthermore, we reduce the computation of the behavioural equivalence in the Kleisli category to that in the base category of partially ordered sets. Part II is dedicated to the development of minimization methods for state-based systems on a coalgebraic level of generality. The minimization of a system consists of two tasks. One task is to restrict the system to those states that are reachable from a distinguished initial state. This task is called reachability construction or simply reachability. The other task of minimization is to identify states that exhibit the same behaviour. This task removes the redundancy from the system and is usually called bisimilarity minimization or also partition refinement, describing the algorithm scheme often used to solve this task. The presented methods range from abstract categorical constructions to efficient algorithmic implementations. In Chapter 5 in Part II, we construct the reachable subcoalgebra of a given pointed coalgebra. The point of the coalgebra serves as the initial state. The construction is iterative, has at most countably many steps, works in a general categorical setting, and is proven correct for an existing concise definition of reachability of a pointed coalgebra. We discuss instances in sets and beyond. In the special case of Set, the reachability in a coalgebra is equivalent to the usual graph-theoretical reachability in the so-called canonical graph of the coalgebra. Furthermore, our construction resembles the standard breadth-first search that computes the reachable hull in the canonical graph. We also discuss examples beyond sets. While instances in nominal sets or multisorted sets are direct, more assumptions need to be fulfilled to instantiate the construction in Kleisli categories. As an alternative, we reduce the reachability problem for coalgebras in the Kleisli category to reachability in the base category. In Chapter 6 in Part II, we discuss the task of bisimilarity minimization of coalgebras using partition refinement algorithms. We solve this task by a categorical construction that is subsequently refined towards an efficient algorithmic implementation and a running partition refinement tool, CoPaR. For many concrete system types, concrete partition refinement algorithms have been developed throughout the past 50 years. These efficient algorithms typically run in O((m + n) ⋅ log n) time, where n is the number of states in the system and m is the number of edges. It is a common assumption that every state is involved in at least one edge, m ≥ n, and so one typically calls them m ⋅ log n algorithms. In Chapter 6 we start by showing their similarities looking at two concrete example runs of the respective algorithms for (labelled) transition systems and Markov chains. We then formalize the most significant similarities by defining a categorical partition refinement construction that works in every regular category and computes the behavioural equivalence in a coalgebra. The construction is parametric in a heuristic that tells the algorithm which information to process next in the partition refinement. By a special choice for this heuristic, we obtain an existing partition refinement algorithm for coalgebras by König and Küpper [84] as an instance. By another heuristic that involves size considerations, we formalize the so-called ‘process the smaller half’ strategy that is used in all efficient partition refinement algorithms and that contributes to their low run-time complexity. 
As a next step towards a generic and efficient algorithm for coalgebras, we introduce zippable functors and instantiate the construction to sets. These assumptions allow us to make an optimization where we compute the refined partitions incrementally. Finally, we formalize the remaining functor-specific aspects of the algorithm in a so-called refinement interface that we implement for many functors of interest. We then design a generic algorithm in pseudocode that is only given as parameters the input coalgebra and an implementation of the abstract refinement interface of the functor. This algorithm is proven correct on the generic level, i.e. that it indeed computes the behavioural equivalence in the input coalgebra. Moreover, we analyse the run-time complexity of the algorithm and show that it runs in O((m + n) ⋅ log n) time, where n is the number of states and m the number of edges in the input coalgebra. This directly instantiates to m ⋅ log n partition refinement algorithms for transition systems, Markov chains (possibly with weights in a cancellative monoid), deterministic automata, and to colour refinement for graphs (also called the 1-dimensional Weisfeiler-Lehman algorithm), where each of these instances matches the run-time of the best concrete algorithm known. In order to cover weighted systems with weights in a non-cancellative monoid, we generalize the complexity analysis of the main algorithm, relaxing the complexity requirements of the functor’s refinement interface. In this relaxed setting, we can handle weighted systems with weights in a (possibly non-cancellative) monoid M in O((m + n) ⋅ log n ⋅ log min(|M|, m)). Since zippable functors are not closed under composition, we define a separate modularity construction to handle composites of such system types. This construction introduces explicit intermediate states, possibly affecting the run-time complexity. For example, Segala systems combine non-deterministic and probabilistic behaviour, and the generic algorithm minimizes them in m_p ⋅ log m_a where m_p is the number of probabilistic edges and m_a the number of non-deterministic edges. This instance matches the run-time of an explicit algorithm developed independently and at the same time [53] and improves the previously best-known algorithm for Segala systems [17]. With the modularity reduction, the generic framework also supports minimization of weighted tree automata under backwards bisimulation in O(m ⋅ log m) run-time, improving the previously best-known algorithm [62]. A comparison of all instances can be found in Table 6.1 on page 145. A case-study of an actual tool implementation called CoPaR shows that our generic category-based algorithm can be of use in practice and is capable of handling input coalgebras with millions of edges within a few minutes. In Chapter 7 in Part II, we compare the two previously discussed aspects of minimality. We show that they are dual concepts by defining an abstract notion of minimality and minimization in a category. The instance of this general minimization in the category of pointed coalgebras is the reachability construction and its instance in the dual category is bisimilarity minimization. Thus, these two minimization methods have many properties in common – up to duality. Finally, we consider the minimization of F-coalgebras under both reachability and bisimilarity. If F preserves inverse images, we show that we obtain the same minimized coalgebra no matter in which order we solve the two minimization tasks.
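To make the partition-refinement principle concrete, a naive quadratic-time sketch for labelled transition systems follows; it illustrates only the splitting idea, not the O((m + n) ⋅ log n) 'process the smaller half' algorithm or the categorical construction of the thesis:

```python
def bisimilarity_blocks(succ):
    """Naive partition refinement for a labelled transition system: split blocks
    until two states share a block iff their outgoing edges reach the same
    blocks per label. succ maps each state to a list of (label, target) pairs."""
    states = list(succ)
    block = {s: 0 for s in states}                 # start with a single block

    def partition(assignment):
        groups = {}
        for s, b in assignment.items():
            groups.setdefault(b, set()).add(s)
        return {frozenset(g) for g in groups.values()}

    while True:
        sig = {s: (block[s], frozenset((a, block[t]) for a, t in succ[s]))
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new_block = {s: ids[sig[s]] for s in states}
        if partition(new_block) == partition(block):
            return partition(new_block)
        block = new_block

# deadlock states 1 and 3 are bisimilar, hence so are 0 and 2; 4 differs (label b)
succ = {0: [("a", 1)], 1: [], 2: [("a", 3)], 3: [], 4: [("b", 1)]}
print(bisimilarity_blocks(succ))   # blocks {0, 2}, {1, 3}, {4}
```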
Data-Driven Approaches for Tempo and Key Estimation of Music Recordings (2020)
Schreiber, Hendrik
In recent years, we have witnessed the creation of large digital music collections, accessible, for example, via streaming services. Efficient retrieval from such collections, which goes beyond simple text searches, requires automated music analysis methods. Creating such methods is a central part of the research area Music Information Retrieval (MIR). In this thesis, we propose, explore, and analyze novel data-driven approaches for the two MIR analysis tasks tempo and key estimation for music recordings. Tempo estimation is often defined as determining the number of times a person would “tap” per time interval when listening to music. Key estimation labels music recordings with a chord name describing its tonal center, e.g., C major. Both tasks are well established in MIR research. To improve tempo estimation, we focus mainly on shortcomings of existing approaches, particularly estimates on the wrong metrical level, known as octave errors. We first propose novel methods using digital signal processing and traditional feature engineering. We then re-formulate the signal-processing pipeline as a deep computational graph with trainable weights. This allows us to take a purely data-driven approach using supervised machine learning (ML) with convolutional neural networks (CNN). We find that the same kinds of networks can also be used for key estimation by changing the orientation of directional filters. To improve our understanding of these systems, we systematically explore network architectures for both global and local estimation, with varying depths and filter shapes, as well as different ways of splitting datasets for training, validation, and testing. In particular, we investigate the effects of learning on different splits of cross-version datasets, i.e., datasets that contain multiple recordings of the same pieces. For training and evaluation the proposed data-driven approaches rely on curated datasets covering certain key and tempo ranges as well as genres. Datasets are therefore another focus of this work. Additionally to creating or deriving new datasets for both tasks, we evaluate the quality and suitability of popular tempo datasets and metrics, and conclude that there is ample room for improvement. To promote better, transparent evaluation, we propose new metrics and establish a large open and public repository containing evaluation code, reference annotations, and estimates.
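As an illustration of the remark that the same kind of network serves tempo or key estimation by changing the orientation of its directional filters, a toy Keras sketch (architecture, kernel sizes, and class counts are assumptions, not the thesis's models):

```python
"""Toy illustration: the same CNN building block serves tempo or key estimation
depending on the orientation of its directional filters - temporal (1 x k)
kernels for tempo, spectral (k x 1) kernels for key. Input is assumed to be a
mel-spectrogram patch of shape (bands, frames, 1)."""
import tensorflow as tf

def directional_cnn(task, bands=40, frames=256, classes=25):
    kernel = (1, 5) if task == "tempo" else (5, 1)   # along time vs. along frequency
    inp = tf.keras.Input(shape=(bands, frames, 1))
    x = tf.keras.layers.Conv2D(16, kernel, activation="relu", padding="same")(inp)
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)
    x = tf.keras.layers.Conv2D(32, kernel, activation="relu", padding="same")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    out = tf.keras.layers.Dense(classes, activation="softmax")(x)
    return tf.keras.Model(inp, out, name=f"{task}_estimator")

directional_cnn("tempo").summary()
directional_cnn("key", classes=24).summary()
```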
Computer-aided Diagnosis for Magnetically Guided Capsule Endoscopy (2020)
Mewes, Philip
Digestive diseases and disorders are diverse with some being more common and of short duration, whereas others may be life threatening if left untreated. They are among the most common problems that people are afflicted with at least once in their lifetime. As in many other clinical disciplines, diagnosis and treatment of digestive diseases are driven towards minimally invasive approaches. Flexible endoscopy, though today's gold standard, is still characterized by a considerable level of invasiveness, patient discomfort and the need for anesthesia. In cases with a given diagnostic indication, passive video capsule endoscopy (VCE), developed in the early 2000s, may significantly decrease the invasiveness in performing a differential diagnosis of upper and lower gastrointestinal tract diseases. However, capsule endoscopes have the drawback of not being actively steerable inside the gastrointestinal tract, which limits their diagnostic abilities. Magnetically guided capsule endoscopy (MGCE), which is the imaging modality used in this thesis, has overcome this problem by providing steerability of the capsule endoscope. Although the feasibility of MGCE procedures has been proven in two clinical studies, the quality of the examination depends on the skill of the operator and his ability to detect visual signs of possible pathologies in real time. During a MGCE procedure, the physician needs to focus on two images simultaneously as diagnosis and image acquisition happen mostly at the same time. In comparison, VCE images can be reviewed at a speed suitable for the physician and the reviewing process can be paused whenever necessary. Therefore, a system for assisting the physician in scanning and interpreting images in real time can be beneficial for MGCE. To that end, in this thesis a computer-aided diagnosis (CAD) system, comprised of multiple computational steps, is developed specifically for MGCE images. The first part of this work focuses on the segmentation of bubbles, particles and other debris, which frequently appear in MGCE images, and interfere with the CAD process. The developed approach is capable of segmenting image areas with such content with an average accuracy of 87.8% and is, therefore, providing an image region of interest for further processing. In the second part of this thesis a novel system for semantic and topological classification of MGCE images is presented. Based on a series of processing steps and extracted features, images are grouped into different categories. These categories are, for example, the anatomical location of the acquisition, or the image pose relative to anatomical structures in the upper gastrointestinal tract. An average accuracy of over 80% for all classification stages could be achieved. Such information facilitates the post-procedure review process or provides strong a priori knowledge for CAD algorithms. In the third part of this work a CAD method is developed that can automatically detect two different pathologies in MGCE images. The objective of this technique is to operate during the intervention. It indicates in real time the presence of pathological structures to the physician, who can then decide whether to more closely examine the indicated area. The presented approach yields a 95% average sensitivity. The last part of this work focuses on the detection of pre-cancerous signs. 
To that end, a staining technique, so-called chromoendoscopy, is applied to MGCE and evaluated in order to highlight pathologies that are barely visible to the human observer. This work charts a path for the application of chromoendoscopy to MGCE via an ex-vivo animal study, in order to improve the visibility of pre-cancerous changes in cells and tissues. In summary, in this thesis I designed a diverse set of tools for computer-aided diagnosis in MGCE. The developed CAD system offers both real-time interventional support and facilitation of off-line processing.
Computer-Aided Tumor Diagnosis of Microscopy Images (2020)
Aubreville, Marc
For the diagnosis of medical images, computer-aided methods can help to lower time requirements and reduce human error by highlighting regions of particular interest, by aiding inexperienced physicians in their interpretation, or just by providing new visualizations using secondary data. In this work, several new methods to support medical experts in the diagnosis of microscopy images are presented. The first part of the thesis focuses on the processing of images acquired using confocal laser endomicroscopy. This relatively novel imaging modality provides in-vivo, in-situ microscopy at very high magnification and hence allows for microstructural analysis of tissue. A common problem with this imaging modality is the occurrence of image deteriorations. Motion artifacts are amongst the most common of these and are especially challenging to detect algorithmically. This thesis investigates multiple methods for the detection of these impediments within the image. Besides a classical machine learning pipeline, involving features known from literature and novel, specially designed features for motion artifact detection, a custom deep learning architecture that achieved an accuracy of 94% and outperformed traditional approaches is introduced. Confocal laser endomicroscopy can be used, amongst other relevant questions, for the diagnosis of squamous cell carcinoma (a tumor of the epithelium). This malignancy has a high prevalence in the head and neck region and often occurs in the upper respiratory and digestive tract. Automatic detection of these tumors using non-invasive methods could enable usage for screening by medical personnel inexperienced in the modality and thus provide possibilities for earlier detection, which could improve therapeutical outcomes. In this thesis, new, deep learning-based methods are presented and shown to achieve classification accuracies of greater than 90%, which is similar to human experts on the same modality. The second part of this thesis is about the detection of cells undergoing cell division (mitotic figures) in hematoxylin- and eosin-stained bright-field microscopy images. While it is a highly relevant task with many algorithmic competitions held on the topic, it is far from being solved. We were able to show that one crucial factor limiting the development of clinical-grade solutions is the availability of data sets of sufficient quantity and quality. In this thesis, we present new methods to efficiently create datasets for microscopy cell annotations and demonstrate their capabilities by introducing a newly created data set of unprecedented size that the methods allowed us to build. Using detection algorithms trained using the data set, it was, for the first time, possible to perform mitotic figure detection on completely labeled whole slide images of tumor tissue. In the technical validation of the data set, which was built using specimens from canine cutaneous mast cell tumors, an F1-score of 0.82 was found. An essential part of tumor grading is the selection of the mitotically most active area of tissue. Performing the mitotic count in this area is especially meaningful since the corresponding mitotic activity is known to be strongly correlated with tumor proliferation and is thus highly relevant for prognosis. This task was never before assessed algorithmically due to the lack of suitable data sets. Three novel approaches were described and compared for this purpose and shown to outperform human experts significantly. 
Finally, the work investigates domain-transfer and multi-domain applications of the generated models to a broader range of tissue types and species. When training and cross-domain-evaluating on two further novel whole slide image data sets of canine mammary carcinoma and human meningioma, a substantial domain shift was found. Besides the exploitation of this shift for detection, the work describes and evaluates new algorithmic approaches for unsupervised domain adaptation.
Simulation-based uncertainty correlation modeling in reliability analysis (2018)
Khosravi, Faramarz ; Müller, Malte ; Glaß, Michael ; Teich, Jürgen
Due to destructive effects like temperature and radiation, today’s embedded systems have to deal with unreliable components. The intensity of these effects depends on uncertain aspects like environmental or usage conditions such that highly safety-critical systems are pessimistically designed for worst-case mission profiles. These uncertain aspects may affect several components simultaneously, implying correlation across uncertainties in their reliability. This paper enables a state-of-the-art uncertainty-aware reliability analysis technique to consider multiple arbitrary correlations; in other words, components’ reliability is affected by several uncertain aspects to different degrees. This analysis technique combines reliability models such as binary decision diagrams with a Monte Carlo simulation, and derives the uncertainty distribution of the system’s reliability with insights on the mean, quantile intervals, and so on. The proposed correlation method aims at generating correlated samples from the uncertainty distribution of components’ reliability such that the shape and statistical properties of each individual distribution remain unchanged. Experimental results confirm that the proposed correlation model enables the employed uncertainty-aware analysis to accurately calculate uncertainty at system level.
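One common way to realize the described correlation method, generating correlated samples while leaving each marginal distribution unchanged, is a Gaussian copula. The following sketch assumes Beta marginals and a series system purely for illustration:

```python
"""Sketch of the correlation idea: draw correlated samples whose marginals stay
exactly the components' own reliability distributions (Beta marginals chosen
only for illustration) by passing correlated Gaussians through a copula."""
import numpy as np
from scipy import stats

def correlated_reliability_samples(marginals, corr, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                      # target correlation (approx.)
    z = rng.standard_normal((n, len(marginals))) @ L.T
    u = stats.norm.cdf(z)                             # correlated uniforms
    return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

marginals = [stats.beta(50, 2), stats.beta(80, 3)]    # per-component reliability
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
samples = correlated_reliability_samples(marginals, corr)
system = samples.prod(axis=1)                         # e.g. a simple series system
print("mean:", system.mean(), "5% quantile:", np.quantile(system, 0.05))
```

Because the copula only transforms ranks, each component's marginal keeps its shape and statistical properties, which is the stated requirement of the proposed correlation method.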
Consistency Conditions, Compressed Sensing and Machine Learning for Limited Angle Tomography (2020)
Huang, Yixing
Cone-beam computed tomography (CBCT) is a widely used imaging technology for medical diagnosis and interventions nowadays. Compared with conventional 2-D X-ray images, 3-D volumes reconstructed by CBCT provide cross-sectional images and offer much better visualization of patients' anatomical structures and disease information. In clinical applications, angiographic C-arm devices are commonly used to acquire 3-D images for image-guided therapies. To acquire the necessary data for image reconstruction, typically a short scan is performed. However, in practice, the gantry rotation of a C-arm system might be restricted by other system parts or external obstacles. In these cases, the limited angle reconstruction problem arises. Image reconstruction from data acquired in an insufficient angular range is called limited angle tomography. Due to missing data, artifacts, typically in the form of streaks, will degrade image quality, causing boundary distortion, edge blurring, and intensity bias. These artifacts may lead to misinterpretation of images. Therefore, artifact reduction in limited angle tomography has essential clinical value. In the course of this thesis, various algorithms are proposed to deal with data insufficiency in limited angle tomography. The first category of algorithms restores missing data based on data consistency conditions. Specifically, iterative Papoulis-Gerchberg algorithms based on the well-known Helgason-Ludwig consistency condition (HLCC) and Fourier consistency conditions of sinograms, as well as a regression-filtering-fusion method based on HLCC, are proposed. These algorithms are able to reduce most artifacts for the Shepp-Logan phantom and a clinical image. However, this approach is suited for parallel-beam limited angle tomography only, since HLCC is derived for parallel-beam computed tomography. The second category of algorithms uses compressed sensing technologies. The iterative reweighted total variation (wTV) algorithm is adapted for use in limited angle tomography. While it is able to reduce small streaks well, it is rather inept at reducing large streaks due to scale limitation. Therefore, two implementations of scale-space anisotropic total variation (ssaTV) are proposed to overcome the limitations of wTV. Both ssaTV implementations take advantage of scale-space optimization and the anisotropic distribution of streak orientations in limited angle tomography. Therefore, they both reduce streak artifacts more effectively and efficiently than wTV. The last category of algorithms uses machine learning techniques, including traditional machine learning and deep learning. In the traditional machine learning part, a reduced error-pruning tree (REPTree) is proposed to predict artifacts from extracted mean-variance-median, Hessian, and shift-variant data loss features. Although REPTree achieves very good performance on the validation Shepp-Logan data, it still requires further improvement for clinical applications. Instead, deep learning, particularly the U-Net, achieves very impressive results in 120-degree cone-beam limited angle tomography on clinical data. However, its robustness is still a concern. Our experiments demonstrate that the U-Net is sensitive to Poisson noise and adversarial examples. The robustness to Poisson noise can be improved by retraining on data with noise. However, the retrained U-Net model is still susceptible to adversarial examples. To make the U-Net robust, data consistency deep learning reconstruction is proposed, which utilizes deep learning reconstructions as prior images and further constrains them to be consistent with the measured projection data to improve image quality.
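As a small, concrete illustration of the Helgason-Ludwig consistency idea used in the first category of algorithms, the zeroth-order condition states that every parallel-beam projection must have the same total mass. A scikit-image sketch with a toy phantom and toy angles:

```python
"""Simplest Helgason-Ludwig consistency check for a parallel-beam sinogram: the
zeroth moment (total mass) of every projection must be identical. Restored
projections in limited-angle data can be checked or constrained this way."""
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, rescale

image = rescale(shepp_logan_phantom(), 0.25)          # small Shepp-Logan phantom
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)                  # columns = projections p(theta, s)

mass = sinogram.sum(axis=0)                           # zeroth HLCC moment per angle
print("relative spread of zeroth moment:", mass.std() / mass.mean())  # ~0 if consistent

# a badly restored projection violates the condition visibly
sinogram[:, 10] *= 1.2
mass = sinogram.sum(axis=0)
print("after corrupting one projection:", mass.std() / mass.mean())
```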
Secure Logging in Operational Instrumentation and Control Systems (2019)
Bajramovic, Edita
With Industrial Control Systems being increasingly networked, the need for sound forensic capabilities for such systems increases, including reliable log file analysis as a vital part of such investigations. However, manipulating log files is one of the steps a knowledgeable attacker can take to prevent visibility of system events and to hide traces of attacker actions in those systems. Therefore, secure logging is advisable for an effective preparation of digital forensics investigations. In addition, implementing digital forensics readiness in nuclear power plants allows efficient digital forensics investigations and proper gathering of digital forensics evidence while minimizing investigation costs. These capabilities are necessary to adequately prevent and quickly detect any security incident and perform further digital forensics investigations with complete evidence. If an attacker is able to modify log entries or blocks of log entries, critical digital evidence is lost. Within this thesis, we first evaluate the presence of digital forensics readiness in critical infrastructures, including nuclear power plants, and briefly discuss existing digital forensics readiness approaches. As systems in critical infrastructures, such as nuclear power plants, are sophisticated, adequate preparedness is essential in order to respond to cybersecurity incidents before they happen. Due to the importance of safety in these systems, manual approaches are favored over automated techniques. All required tasks and activities and expected results must also be properly documented. Application Security Controls can be one approach to properly document forensic controls. However, Application Security Controls must be evaluated further to ensure forensic applicability, as considerable alternatives also exist. In order to demonstrate the value of such forensic Application Security Controls, we analyze a server system of an Operational Instrumentation & Control system in terms of digital evidence. Based on the analysis results, we derive recommendations to improve the overall digital forensic readiness and the security hardening of Linux server systems in the Operational Instrumentation & Control system. Then, we introduce our formal system model and the types of attackers that can access and manipulate logs and the logging device. Here, we also give a brief overview of some existing secure logging approaches and compare them with each other. The goal is to standardize the requirements of secure logging approaches and analyze which unified security guarantees are realized by these existing approaches under strong attacker models. Later, we extend our secure logging model by using a blockchain as the secure logging protocol, apply the new model to industrial settings, and build a simple prototype as proof of concept. In an evaluation of the new model and the corresponding prototype, we show the potential, but also the challenges, of this approach. Further, we take a deeper look into existing algorithms for secure logging and integrate them into a single parameterized algorithm. This log authentication and verification algorithm combines their security guarantees, and its parametrization yields the previous algorithms as special cases. Even with different file formats and a common purpose, log files generally have similar structures. To this end, we evaluated three common log file types (syslog, Windows Event Log, and SQLite browser histories). Based on this evaluation, we developed a simple unified representation of log files that allows analysis independently of their format. As visualization of log files is helpful to find proper evidence, we have developed a simple log file visualization tool. This tool helps to identify evidence of system time manipulation.
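A minimal sketch of the hash-chained, blockchain-style logging idea discussed above, in which each entry commits to its predecessor so later tampering is detectable. Keys, fields, and messages are illustrative assumptions, and a real scheme would need stronger guarantees (e.g. key evolution against an attacker who learns the key) than this toy:

```python
"""Hash-chained append-only log: each entry's MAC covers its predecessor's MAC,
so deleting or modifying an entry breaks verification of everything after it."""
import hashlib, hmac, json, time

SECRET = b"device-specific-logging-key"                # assumption: pre-shared key

def append_entry(log, message):
    prev = log[-1]["mac"] if log else "genesis"
    entry = {"ts": time.time(), "msg": message, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    prev = "genesis"
    for e in log:
        body = {k: e[k] for k in ("ts", "msg", "prev")}
        expected = hmac.new(SECRET, json.dumps(body, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        if e["prev"] != prev or e["mac"] != expected:
            return False
        prev = e["mac"]
    return True

log = []
for msg in ("operator login", "setpoint changed", "operator logout"):
    append_entry(log, msg)
print(verify(log))            # True
log[1]["msg"] = "nothing happened"
print(verify(log))            # False - tampering detected
```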
Wearable computing applications in eHealth (2019)
Leutheuser, Heike
Non-communicable diseases are the leading cause of death and disability worldwide. Almost two thirds of them are linked to the following risk factors: physical inactivity, unhealthy diets, tobacco use, and harmful use of alcohol. At the same time, everyone is increasingly surrounded by wearables, like smartphones and smartwatches, making wearable computing an integral part of everyday life. Addressing non-communicable diseases with the help of wearables therefore seems a promising approach. In this thesis, methods and algorithms for three wearable computing applications in eHealth are presented: I) mobile breathing analysis, II) mobile electrocardiogram (ECG) analysis, and III) inertial measurement unit (IMU)-based activity recognition. Respiratory inductance plethysmography (RIP) provides an unobtrusive and mobile method for measuring breathing characteristics, avoiding measurement with flowmeters (FMs), which alter the natural breathing pattern. The output of RIP devices needs to be adjusted to result in correct ventilatory tidal volumes. State-of-the-art methods for adjusting RIP data can only be applied after the actual measurement, as they require the simultaneous data acquisition of RIP and FM. In this thesis, novel adjustment algorithms were created that enable the use of RIP alone, and thus its use outside the controlled laboratory environment. Respiratory diseases and infections are among the leading causes of death. In future work, it has to be investigated how RIP devices can effectively be used for patients suffering from these conditions. The aim of mobile ECG analysis was to provide real-time algorithms for two particular scopes. The first scope dealt with identifying arrhythmic beats using only the time instances of successive heartbeats – the RR intervals. In the second scope, three highly accurate single-lead, instantaneous P- and T-wave detection algorithms were compared. These two applications could help in addressing cardiovascular diseases, which are the number one cause of death worldwide. An effective and easy-to-handle arrhythmia classification algorithm could send people at risk to a physician early. Highly accurate P- and T-wave detection algorithms requiring only a single lead are beneficial for all ECG monitoring fields and, at the same time, enhance the patient’s comfort. IMU-based activity recognition provides an objective method for classifying activities, addressing the risk factor of physical inactivity. Therefore, a common, publicly available dataset, DaLiAc, was created to enable the comparison of activity recognition algorithms (http://www.activitynet.org/). Using the benchmark dataset DaLiAc, a hierarchical classification system was created that outperformed six state-of-the-art activity recognition algorithms. Recognizing single activities of daily living might increase individuals' awareness and encourage them to increase their physical activity, as physical inactivity is one of the four leading risk factors for non-communicable diseases. In this thesis, three different wearable computing applications addressing different non-communicable diseases were presented. Wearables have been continuously used for health monitoring in recent years – with a still increasing trend – as they bring certain benefits to the user in daily life. This will grow in the future, fostered by ongoing inventions, accumulating knowledge, technological progress, and progressive digitalization.
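To illustrate the first ECG scope, classification from RR intervals alone, here is a toy screening rule that flags beats whose RR interval deviates strongly from the local median. The window and tolerance are invented and this is not the thesis's classifier:

```python
import numpy as np

def flag_irregular_beats(rr, window=9, tol=0.20):
    """rr: RR intervals in seconds. Returns a boolean array, True = suspicious
    (interval deviates from the local median by more than tol, e.g. a premature
    beat followed by a compensatory pause)."""
    rr = np.asarray(rr, dtype=float)
    flags = np.zeros(rr.size, dtype=bool)
    for i in range(rr.size):
        lo, hi = max(0, i - window // 2), min(rr.size, i + window // 2 + 1)
        neighbours = np.r_[rr[lo:i], rr[i + 1:hi]]
        local_median = np.median(neighbours) if neighbours.size else rr[i]
        flags[i] = abs(rr[i] - local_median) > tol * local_median
    return flags

rr = [0.80, 0.82, 0.81, 0.55, 1.05, 0.80, 0.79, 0.81]   # premature beat + pause
print(flag_irregular_beats(rr))
```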
Sprechaktbasiertes Adaptives Fallmanagement (2019)
Tenschert, Johannes
Speech acts have been seen for decades as a way to improve the design of interactive systems through their focus on pragmatic intention. Early prototypes, however, were isolated applications with limited inference capabilities, which hindered their practical use. With the drastic increase in knowledge work and knowledge-intensive processes, the world of work has also changed considerably. However, the goals, interactions, and intermediate and final results of knowledge-intensive processes are typically scattered across many participating systems, and important information is often not documented. Adaptive case management is intended to support knowledge-intensive processes whose course only becomes clear during execution. We apply speech act theory in adaptive case management because it allows a context-aware representation of interactions while it is negligible for this context whether the process is structured, semi-structured, or ad hoc. Through a general representation of intentions, this enables inference regardless of whether an interaction or activity was modeled or documented ad hoc. Scattered process information, which today is connected by the responsible knowledge workers, can thus be processed better in an overall context. This thesis substantiates that speech acts in business processes are widespread and diverse. For representative domains of knowledge work, it examines whether this diversity still permits a set of speech acts that is manageable for modeling and documentation. It also investigates requirements and expectations regarding adaptive case management systems, with a more precise classification of the knowledge workers being supported. From this, it derives a speech-act-based approach to adaptive case management for complex work with high interdependence. This approach supports knowledge-intensive processes even when no process model is available in advance. Activities are initially assumed to be ad hoc. As needed, routine work can be simplified or automated using process models. Furthermore, speech-act-based techniques for semantic annotations, modeling, and business rules for compliance monitoring as well as integration are introduced. This makes it possible to connect structured, semi-structured, and ad hoc operations in an overarching knowledge-intensive process that is supported by warning as well as preventive control mechanisms.
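To make the idea of speech-act-annotated interactions tangible, a small illustrative sketch follows (data structure and compliance rule are invented, not the thesis's system): interactions carry their illocution type, so a warn-mode compliance check works whether the step was modelled or recorded ad hoc.

```python
"""Illustrative only: case interactions recorded with their speech-act type,
plus a simple warn-mode compliance rule over those annotations."""
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SpeechAct:
    illocution: str          # e.g. "assertive", "directive", "commissive", "declarative"
    sender: str
    receiver: str
    content: str
    timestamp: datetime

def commitments_without_fulfilment(case):
    """Every commissive ('I will do X') should be followed by a later
    declarative/assertive act by the same sender mentioning X."""
    open_commitments = []
    for act in sorted(case, key=lambda a: a.timestamp):
        if act.illocution == "commissive":
            open_commitments.append(act)
        elif act.illocution in ("declarative", "assertive"):
            open_commitments = [c for c in open_commitments
                                if not (c.sender == act.sender and c.content in act.content)]
    return open_commitments

case = [
    SpeechAct("directive", "customer", "clerk", "review contract", datetime(2019, 1, 7, 9)),
    SpeechAct("commissive", "clerk", "customer", "review contract", datetime(2019, 1, 7, 10)),
]
print([a.content for a in commitments_without_fulfilment(case)])   # unfulfilled commitment
```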
Analyse verbreiteter Anwendungen zum Lesen von elektronischen Büchern (2019)
Benenson, Zinaida ; Berger, Frederik ; Cherepantsev, Anatoliy ; Datsevich, Sergey ; Do, Long ; Eckert, Moritz ; Elsberger, Tassilo ; Freiling, Felix ; Frinken, Marius ; Glameyer, Hendrik ; Hafner, Steffen ; Hagenhoff, Svenja ; Hammer, Andreas ; Hammerl, Stefan ; Hantke, Florian ; Heindorf, Felix ; Henninger, Marcel ; Höfer, Daniel ; Kuhrt, Phillip ; Lattka, Maximilian ; Leis, Tobias ; Leubner, Christian ; Leyrer, Katharina ; Lindenmeier, Christian ; Del Medico, Katharina ; Moussa, Denise ; Nissen, Michael ; Öz, Alina ; Ottmann, Jenny ; Reif, Katharina ; Ripley, Nora ; Roth, Armin ; Schilling, Joschua ; Schleicher, Robert ; Schulz, David ; Stephan, Milan ; Volkert, Christoph ; Wehner, Max ; Wild, Matthias ; Wirth, Johannes ; Wolf, Julian ; Wunder, Julia ; Zlatanovic, Jovana
The market share of electronic books (e-books) in the book market is growing steadily. To read e-books, special reading environments are required, which can be realized as software (in the browser or as a dedicated application) or as a dedicated device (e-reader). These reading environments are suited to collect data about reading behavior. As part of a university course, the software reading environments of the two German market leaders, Kindle and Tolino, were examined. This report summarizes the results of these analyses. The result is a comprehensive inventory of the digital traces that arise from the use of these programs. The versions of the respective web applications and Android apps, as well as of the Kindle Windows client, that were current at the time of the investigation were examined. The results were produced in an exercise accompanying the lecture "Fortgeschrittene forensische Informatik II" in the winter semester 2018/19 at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), which was conducted jointly by the Chair of Computer Science 1 (Lehrstuhl für Informatik 1) and the Institute for the Study of the Book (Institut für Buchwissenschaft) at FAU.
Forensic Analysis of the Resilient File System (ReFS) Version 3.4 (2019)
Prade, Paul ; Groß, Tobias ; Dewald, Andreas
ReFS is a modern file system developed by Microsoft whose internal structures and behavior are not officially documented. Although there have been some efforts to decipher its data structures, some of these findings have since become deprecated and can no longer be applied to current ReFS versions. In this work, general concepts and internal structures found in ReFS are examined and documented. Based on these structures and the processes by which they are modified, approaches to recover (deleted) files from ReFS-formatted file systems are shown. We also evaluated our implementation and the allocation strategy of ReFS with respect to accuracy, runtime, and the ability to recover older file states. In addition, we extended The Sleuth Kit to allow it to parse ReFS partitions and built a file carver based on that extension.
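For orientation, this is how a file-system image is typically walked with The Sleuth Kit's Python bindings (pytsk3); note that stock TSK does not parse ReFS, which is exactly what the extension described above adds, and the image path below is a placeholder:

```python
"""Generic directory walk over a disk image with pytsk3 (The Sleuth Kit bindings).
Works for file systems stock TSK supports (NTFS, FAT, ext, ...); ReFS would
require an extended TSK build such as the one described in the report."""
import pytsk3

img = pytsk3.Img_Info("evidence.dd")          # raw image path (placeholder)
fs = pytsk3.FS_Info(img)

def list_directory(directory, indent=0):
    for entry in directory:
        name = entry.info.name.name.decode(errors="replace")
        if name in (".", ".."):
            continue
        print(" " * indent + name)
        # recurse into subdirectories when metadata is present
        if entry.info.meta and entry.info.meta.type == pytsk3.TSK_FS_META_TYPE_DIR:
            list_directory(entry.as_directory(), indent + 2)

list_directory(fs.open_dir(path="/"))
```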
Non-invasive Imaging Methods for Digital Humanities, Medicine, and Quality Assessment (2019)
Stromer, Daniel
Non-invasive imaging methods are capable of revealing content that is not visible to the naked eye. In this thesis, we present solutions for three common areas of application: digitization of cultural heritage, medicine, and quality assessment. The success of these approaches is furthermore driven by image processing methods that make the data accessible to visual inspection. Our work therefore focuses on combining appropriate imaging techniques with suitable data processing algorithms. For the digitization of cultural heritage, we propose complete pipelines, from scanning to the final two-dimensional renderings of the writing. Such end-to-end processing pipelines are presented for historical books written with metallic inks, which have been in use since the Roman Empire. The method processes X-ray computed tomography scans of closed books such that the pages finally become readable to the naked eye. The second example is a bamboo scroll as used in ancient China: a virtual cleaning step followed by a virtual unwrapping step reveals the hidden content of heavily soiled documents. This method is likewise based on X-rays to generate the volumetric data. Optical coherence tomography is a modality widely used by ophthalmologists to investigate retinal layers. Highly accurate segmentation, as well as a common basis on which researchers can evaluate methods, is a major issue in this field. To tackle this problem, we developed a software framework in a collaboration between four institutions, including clinical partners. We integrated an automatic approach for the initial segmentation of three relevant retinal layers; inaccuracies can be eliminated with a novel manual refinement algorithm that produces highly accurate results with a minimal amount of labor. In addition, visualization techniques can be used to illustrate the results immediately. For quality assessment of photovoltaic cells, electroluminescence imaging can be utilized to reveal defects in multicrystalline solar cells. To automatically detect cracks in these cells, algorithmic approaches are used to segment such structures. We propose a segmentation technique that produces accurate results with a run time of less than a second per cell. As this field is mainly industry-driven, we address the lack of open-source algorithms by making ours publicly available so that anyone can compare and evaluate novel techniques. For all proposed methods, we show extensive quantitative and qualitative evaluations and discuss challenges as well as improvements achieved with the novel techniques. We are confident that the presented work is a valuable contribution to the field of non-invasive imaging and, furthermore, highlights the need to harmonize the acquisition techniques with subsequent data processing.
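As a toy baseline for the last application, the following is a minimal sketch of crack-candidate segmentation in an electroluminescence image using plain thresholding and connected components; it is an illustrative assumption, not the segmentation technique proposed in the thesis.

```python
# A minimal sketch of crack-candidate segmentation in an electroluminescence
# image of a solar cell: smooth, threshold dark pixels, keep elongated blobs.
import numpy as np
from scipy import ndimage

def segment_cracks(cell_img: np.ndarray, k: float = 2.0,
                   min_extent: int = 20) -> np.ndarray:
    """Return a boolean mask of dark, elongated structures (crack candidates)."""
    img = cell_img.astype(np.float64)
    smooth = ndimage.median_filter(img, size=5)        # suppress sensor noise
    dark = smooth < smooth.mean() - k * smooth.std()   # cracks appear dark
    dark = ndimage.binary_opening(dark, iterations=1)  # drop isolated pixels
    labels, n = ndimage.label(dark)
    mask = np.zeros_like(dark)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        extent = max(ys.ptp(), xs.ptp()) + 1            # bounding-box extent
        if extent >= min_extent:                        # keep elongated blobs
            mask[labels == i] = True
    return mask
```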
Human Activity Recognition in Daily Life and Sports Using Inertial Sensors (2019)
Schuldhaus, Dominik
Human Activity Recognition (HAR) deals with the automatic recognition of physical activities and plays a major role in the health and sports sector. Knowledge about the performed activities can be used to monitor compliance with physical activity recommendations, investigate the causes of physical activity behavior, implement sport-specific training programs, and replicate the physical demands of sport competitions. Currently available tools for HAR often rely on questionnaires, which suffer from reliability problems when activities are recalled. In this thesis, algorithms for HAR are introduced and evaluated that apply machine learning techniques to inertial sensor data. Daily as well as sport-specific activities are considered, including sitting, washing dishes, climbing stairs, and kicking in soccer. Besides the development and implementation of algorithms, necessary extensions to the design of HAR systems are identified and future research directions are provided.
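For illustration, the following is a minimal sketch of a typical sliding-window HAR pipeline on inertial data; window length, features, and classifier are generic assumptions and do not reproduce the specific design developed in the thesis.

```python
# A minimal sketch of a sliding-window HAR pipeline: window the raw signal,
# extract simple statistical features per window, and train a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows(signal: np.ndarray, labels: np.ndarray, size: int = 128, step: int = 64):
    """Yield (window, majority_label) pairs from an (N, channels) signal
    with integer-coded activity labels per sample."""
    for start in range(0, len(signal) - size + 1, step):
        win = signal[start:start + size]
        lab = np.bincount(labels[start:start + size]).argmax()
        yield win, lab

def features(win: np.ndarray) -> np.ndarray:
    """Per-channel mean, standard deviation, and mean absolute value."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0),
                           np.abs(win).mean(axis=0)])

def train_har(signal: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    X, y = zip(*((features(w), l) for w, l in windows(signal, labels)))
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.asarray(X), np.asarray(y))
    return clf
```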
Formale Verifikationsmethode für reale Schaltungen und Systeme (2019)
Denguir, Mohamed
The functional safety of circuits and systems can be assured cost-effectively by means of verification. In doing so, the functions belonging to the real structure are to be modeled in a structure-preserving way. By structure-preserving modeling we mean the agreement of the formally derived function with the function actually realized. Modeling real systems with structure-preserving functions, that is, by describing their behavior, leads to a representation as a signal flow graph (SFG). This thesis presents a structure-preserving verification method named "Strukturtreue Modellierung anhand von Signalflussgraphen" (structure-preserving modeling based on signal flow graphs), which is validated on a digital circuit. The presented verification method produces a quaternary vector list (QVL) as a database. This database encodes the phase lists of the weighted edges of an SFG. Search functions can be programmed that traverse the database and determine criteria such as test coverage, fault coverage, and test stringency. The lists generated by a complex reality can, however, grow to large dimensions. To reduce memory requirements and computation time, it should be possible to compact the database maximally without loss of information. This thesis presents a method for lossless compaction and explains it by example. The presented verification method ultimately yields a compacted data model, from which the available information on test coverage, and thus information about the faults that actually occur in an assembly system, can be extracted and used quickly. This thesis further proposes an optimal, directed procedure for replacing potentially faulty components in a diagnosis flow. For this purpose, the dissertation introduces the concept of a self-confirming method called "Fehlerbaumanalyse" (fault tree analysis). If this database is additionally encoded numerically, the separation of test results can support the desired target projection. However, feeding in high-dimensional data, in which quite a few elements of a data set are irrelevant or less relevant than others, usually leads to results that are confusing and difficult to interpret, that is, results in which the desired target data can only be processed further at great effort. It is therefore sensible to employ methods that classify the individual dimensions of the data sets according to their relevance and reduce them advantageously. In this dissertation, the SEDA data analysis for the separation of data is presented and illustrated with an application example.
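To illustrate what lossless compaction of such vector lists can mean, the following is a minimal, hypothetical sketch in which rows agreeing in all but one position are merged into a set-valued row; expanding the compacted rows reproduces exactly the original rows. The actual QVL encoding and compaction rules of the dissertation may differ.

```python
# A minimal, hypothetical sketch of lossless vector-list compaction: two rows
# that agree in every position except one are merged by taking the union of
# the values in that position. Expanding the set-valued positions back into
# all combinations reproduces exactly the original rows.
from itertools import product
from typing import FrozenSet, List, Optional, Tuple

Row = Tuple[FrozenSet[str], ...]

def as_row(values: Tuple[str, ...]) -> Row:
    """Wrap a concrete vector as a row of singleton value sets."""
    return tuple(frozenset([v]) for v in values)

def merge(a: Row, b: Row) -> Optional[Row]:
    """Merge rows differing in exactly one position, else return None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + (a[i] | b[i],) + a[i + 1:]

def compact(rows: List[Row]) -> List[Row]:
    """Greedily merge rows until no further merge applies."""
    rows = list(dict.fromkeys(rows))            # drop duplicates, keep order
    merged = True
    while merged:
        merged = False
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                m = merge(rows[i], rows[j])
                if m is not None:
                    rows = [r for k, r in enumerate(rows) if k not in (i, j)] + [m]
                    merged = True
                    break
            if merged:
                break
    return rows

def expand(row: Row) -> List[Tuple[str, ...]]:
    """Recover the concrete vectors represented by a compacted row."""
    return [tuple(c) for c in product(*row)]

# Example: compact([as_row(("0", "1", "1")), as_row(("0", "0", "1"))])
# yields a single row whose second position is the set {"0", "1"}.
```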