
Open Access


Department Informatik

Author

  • Freiling, Felix (5)
  • Bajramovic, Edita (3)
  • Dewald, Andreas (3)
  • Frinken, Marius (3)
  • Militzer, Arne (3)
  • Preclik, Tobias (3)
  • Riehle, Dirk (3)
  • Schröder-Preikschat, Wolfgang (3)
  • Breininger, Katharina (2)
  • Capraro, Maximilian (2)

Year of publication

  • 2021 (3)
  • 2020 (7)
  • 2019 (19)
  • 2018 (13)
  • 2017 (16)
  • 2016 (9)
  • 2015 (14)
  • 2014 (15)
  • 2013 (7)
  • 2012 (1)

Document Type

  • Doctoral Thesis (77)
  • Report (17)
  • Article (3)
  • Moving Images (3)
  • Working Paper (2)
  • Conference Proceeding (1)
  • Habilitation (1)
  • Other (1)

Language

  • English (83)
  • German (22)

Has Fulltext

  • yes (105)

Keywords

  • Maschinelles Lernen (7)
  • Hochleistungsrechnen (6)
  • Parallelisierung (5)
  • Computertomographie (4)
  • Machine Learning (4)
  • Medizinische Bildgebung (4)
  • Mehrkernprozessor (4)
  • Simulation (4)
  • - (3)
  • Betriebssystem (3)

Institute

  • Department Informatik (105)
  • Department Physik (1)
  • Fakultätsübergreifend / Sonstige Einrichtung -ohne weitere Spezifikation- (1)
  • Technische Fakultät (1)

105 search hits

Digital Cues in Multimedia Forensics (2021)
Mullan, Patrick
This work presents several contributions to the scientific field of digital multimedia forensics. The field addresses questions of authentication examination and source identification of multimedia files. All investigated multimedia entities are visual information carriers, i.e., images and videos. The subfield of authentication examination proposes methods which allow verifying or falsifying the authenticity of the probed media object. Questions concerning source identification help to identify the source of the media file, where the term source offers different levels of granularity, for instance, the make or the model of a camera. Finding answers to questions of this kind requires operating with characteristic cues that suit the investigated context. Such cues, or features, can be derived from the byte stream of the file as well as from the visual content, i.e., the decoded pixels. The selected feature domain depends on the exact problem statement. Analysis of pixel data is especially revealing if the manipulation is of good quality. In contrast, applications that work with encoded files can be efficiently deployed to work on large collections of media files. Oftentimes, file-based methods are also helpful if the quality of the visual content makes working with pixel-based applications rather difficult. Therefore, this dissertation subsumes several of our published works from the field of digital multimedia forensics, which propose newly developed approaches for both authentication examination and source identification. Additionally, the suggested approaches work in different domains. Through this diversity, we are able to cover a broad field of research questions and to solve different types of challenges. The first part of this work focuses on source identification. We first illustrate that the current literature does not cover developments introduced by the omnipresent use of smartphones as imaging devices. Smartphones work with exchangeable software imaging routines. Hence, we suggest introducing a software-components axis orthogonal to the hardware-components axis, i.e., proposing a new level of source identification. With the help of a large-scale study on Apple devices, we demonstrate that different software versions have characteristic properties. We use this finding to predict software versions by resorting to machine learning. The influence of software when taking the image and when postprocessing is also investigated. Further, we illustrate how images from unknown models can be automatically associated with the make of the source device. The second part of the dissertation focuses on videos and their representation in modern video codecs. We discuss neural networks in terms of their ability to identify double compression in videos. Double compression is a necessary step that happens automatically when manipulating a video. The introduced method is likewise capable of predicting a specific parameter, the so-called "group of pictures" length, in the first compression of a double-compressed video. We further introduce methods which are capable of locating manipulations in the pixel domain, for example, persons spliced into the footage. We demonstrate that the same method can be successfully used to probe whether video sequences originate from the same video. The methods are benchmarked against state-of-the-art solutions and consistently outperform them.
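For illustration of the machine-learning step described above, a minimal scikit-learn sketch of predicting a device make from file-level metadata features could look as follows; the feature names, values and labels are invented for this example and are not taken from the thesis.

```python
# Hypothetical sketch: predicting a device make from file-level metadata
# features with a standard classifier. Features, values and labels are
# invented for illustration; they are not the features used in the thesis.
from sklearn.ensemble import RandomForestClassifier

# Each row: [jpeg_quality, exif_tag_count, thumbnail_present, quant_table_id]
X = [
    [92, 38, 1, 3],
    [85, 24, 0, 1],
    [95, 41, 1, 3],
    [80, 22, 0, 1],
    [90, 37, 1, 3],
    [83, 25, 0, 1],
]
y = ["make_A", "make_B", "make_A", "make_B", "make_A", "make_B"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[91, 36, 1, 3]]))  # expected to resemble "make_A"
```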
Machine Learning and Deformation Modeling for Workflow-Compliant Image Fusion during Endovascular Aortic Repair (2021)
Breininger, Katharina
Fluoroscopy-guided endovascular aortic repair (EVAR) has become the predominant treatment strategy for elective repair of abdominal aortic aneurysms in many western countries. During the procedure, stent grafts are implanted into the vasculature to reduce the pressure on the vessel wall and prevent a potentially fatal aneurysm rupture. The fusion of preoperative information with intraoperative fluoroscopy has garnered considerable interest as a means to reduce the use of nephrotoxic contrast agent and to decrease radiation exposure and procedure time, thus limiting the negative side effects of the procedure. A rigid overlay of pre- and intraoperative images, however, disregards the substantial deformations caused by the endovascular instruments. This thesis proposes and analyses different approaches to maintaining the usefulness of image fusion during EVAR by identifying and modeling the instrument-induced deformation. Particular attention is given to compliance with the interventional workflow, specifically in terms of underlying assumptions, requirements and computational costs. An algorithmic pipeline is developed that allows for the segmentation of relevant instruments in fluoroscopic images, the reconstruction of the instrument shape from single X-ray projections and the intraoperative deformation modeling based on this information. For instrument segmentation, a deep learning approach is proposed that is able to reliably identify and distinguish stent grafts, guidewires and catheters in a multi-task setting. In contrast to prior methods applied to these tasks, the approach requires neither an explicit model of the stent graft nor a handcrafted segmentation pipeline for each instrument. To allow for deformation modeling in 3-D, a method is designed that recovers the 3-D instrument shape from a single projection image. This avoids cumbersome repositioning of the fluoroscopic C-arm system. The approach estimates a second, virtual view of the wire based on the preoperative information that takes the intraoperative vessel deformation into account. To model the deformation based solely on the instrument shape, an as-rigid-as-possible modeling is devised that allows accounting for the interaction between the instrument and a surface model of the vessel in a flexible manner. This is extended by a semi-automatic approach that adapts the deformation in a "one-click" scenario and further increases the accuracy of the deformation modeling. In contrast to previous methods, a bone-based initial alignment of pre- and intraoperative data suffices for accurate deformation modeling. Other approaches that assess the deformation are either based on computationally expensive finite element analysis, require a contrast-enhanced acquisition of the aortic vessel tree or demand complex user interaction. The pipeline is able to adapt the preoperative information to match the intraoperative deformation without the need for contrast injections. Still, available information can be integrated by using the semi-automatic method, resulting in a high in-plane accuracy of 0.5 mm at relevant anatomical landmarks. While each step of the proposed pipeline constitutes a value of its own, the proposed methods can be applied successively and allow for an adaption from X-ray segmentation to 3-D deformation modeling in less than 10 s, integrating smoothly with the interventional workflow.
The results on clinical data show the potential to further improve navigation, reduce the use of nephrotoxic contrast agents and decrease radiation exposure, ultimately increasing the safety of both patients and medical personnel.
Beiträge zur szenarienbegleiteten Entwicklung von automatisierten Fahrfunktionen (2021)
Bock, Florian
The complexity of developing modern vehicles is rising continuously, not least because of the current topic of automated driving functions. In contrast to earlier driver assistance systems, which only act on the vehicle in a supporting role, future systems must be able to take complete control of the vehicle. This places high demands on the quality and robustness of the systems and, consequently, on their development processes. Nevertheless, information is still exchanged between the individual development phases manually and in text-based form (e.g., as a requirements specification). Previous attempts to improve this with model-based approaches mostly failed because of the developers' reluctance to adopt them, owing to the initial learning and migration effort. Since simulations are increasingly relied upon to test modern driving functions, the scenario descriptions required for them are also gaining importance. So far, these are created manually, either as text or directly as a simulation model. In addition, a reliable estimate of the necessary test effort, especially for multi-sensor systems, is indispensable at an early stage for solid project planning. Such project planning also includes selecting the software development processes, methods and tools to be used. The approaches available for this, however, neither offer a quick graphical overview nor can they compare very dissimilar approaches. The taxonomy presented in this work provides this capability on the basis of the V-model and additional annotations. Since neither the customer expectations towards automated driving systems nor the developers' assessments of development methods remain constant, the results of three surveys on these topics are presented as an up-to-date stocktaking within this work. To streamline the transitions between development phases on the one hand, while keeping the adoption effort for users low on the other, two iterative text-based concepts for creating requirements and scenario descriptions are furthermore presented, each with a suitable domain-specific language in the JetBrains Meta Programming System (MPS) framework. Already available approaches are also text-based, but they are neither multilingual, nor do they use data at different levels of abstraction, nor can they automatically generate the hand-over artifacts (e.g., models) needed for development. Although solutions for test effort estimation for multi-sensor systems already exist, they require either the physical sensor models or implementation details. This work therefore presents an approach that can determine the test effort required for a defined confidence level as early as the specification phase, based solely on the sensor characteristics. These five concepts contribute to the scenario-supported development of automated driving functions.
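The abstract mentions estimating the required test effort for a defined confidence level from sensor characteristics alone; the exact model is not given here. Purely as a generic illustration, assuming a simple binomial "success-run" argument (an assumption, not necessarily the approach of the thesis), the number of failure-free test cases needed to demonstrate a failure rate below a bound at a given confidence level can be computed like this:

```python
import math

def required_tests(p_max: float, confidence: float) -> int:
    """Failure-free test cases needed to claim a failure probability below
    p_max at the given confidence level (simple success-run argument;
    illustrative assumption, not the thesis's actual model)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

# Example: failure rate below 1e-4 per test case with 95 % confidence.
print(required_tests(1e-4, 0.95))  # roughly 3e4 failure-free runs
```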
Identifikation relevanter Verkehrssituationen für die szenarienbasierte Entwicklung automatisierter Fahrfunktionen (2020)
Sippl, Christoph Sebastian
Automated driving will fundamentally change future mobility. Advancing digitalization forms the basis for technical innovations and for automating the driving task. Numerous high-profile demonstrations by various companies and research groups show the technical feasibility of automated driving functions. Above all, the increasing complexity of future driving functions poses great challenges for development, testing and release. Such systems are expected to take over the driving task completely within defined traffic domains. The driving function must safely handle all situations that occur. Identifying all traffic situations relevant for this cannot be accomplished with known methods of situation analysis. Such methods do not focus on typical, normal and uncritical traffic situations, even though these are necessary for the requirements analysis and specification of automated driving functions. To cope with the increasing complexity, research and the technical literature propose scenario-based methods for developing automated driving functions. This thesis presents a method for identifying typical traffic situations. The methodology is based on a human decision-making model and comprises a systematic procedure. It takes expert knowledge into account, as well as situation features that are relevant to the specific function and development. The systematic procedure uses simulation methods for data acquisition as well as constraint programming. The constraint satisfaction problem for finding relevant situations is thus described in a declarative way. The validation shows that relevant and typical situations can be identified which might not be thought of in an unstructured procedure during the requirements analysis and specification of the target system. Together with the end-to-end scenario-based development approach SBSE, the situation identification shows great potential for the development of automated driving functions. Moreover, the presented approach forms the basis for further research in the field of scenario-based development.
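The constraint-based search can be pictured with a toy example: candidate situations are enumerated from a few hypothetical attributes and filtered by declaratively stated relevance constraints. The attributes and constraints below are invented for illustration and are not those of the thesis.

```python
# Toy sketch of a declarative situation search (attributes/constraints invented).
from itertools import product

ego_speeds = [30, 50, 70]          # km/h
other_road_users = ["none", "pedestrian", "cyclist", "oncoming_car"]
occlusions = [False, True]
maneuvers = ["go_straight", "turn_left", "turn_right"]

# Declarative relevance constraints, each a predicate over one candidate situation.
constraints = [
    lambda s: not (s["other"] == "none" and not s["occluded"]),     # something to react to
    lambda s: not (s["maneuver"] == "turn_left" and s["other"] != "oncoming_car" and not s["occluded"]),
    lambda s: s["ego_speed"] <= 50 or s["other"] != "pedestrian",   # pedestrians only in the low-speed domain
]

candidates = (
    {"ego_speed": v, "other": o, "occluded": occ, "maneuver": m}
    for v, o, occ, m in product(ego_speeds, other_road_users, occlusions, maneuvers)
)
relevant = [s for s in candidates if all(c(s) for c in constraints)]
print(len(relevant), "relevant situations out of", 3 * 4 * 2 * 3)
```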
Tomosynthesis in Orthopedics: Ultra-Small-Angle Tomosynthesis and Applications Beyond (2020)
Luckner, Christoph
Deformities and degenerations of the spine and extremities are among the most common diseases in orthopedics. To determine the pathology and its severity, musculoskeletal measurements are usually performed in 2-D radiographs of standing patients. However, conventional X-ray images have the disadvantage that effects like magnification and distortion can impair the quality and reliability of the measurements performed. Also, awareness of the hazards of ionizing radiation and the associated risks has increased vastly. This leads to a higher demand for low-dose protocols that do not compromise image quality. In this thesis, both issues are addressed: low-dose imaging and true-to-scale mapping. To this end, a slot-scanning technique based on an ultra-small-angle tomosynthesis reconstruction is proposed and evaluated. For image acquisition, a commercially available twin robotic radiographic system is used. The system is equipped with a parallel-shift trajectory of the X-ray source and the detector, which allows imaging the entire body. During the evaluation, several important measures are addressed. First, the implemented motion compensation to alleviate image artifacts due to patient motion is evaluated. In a simulation study, it is shown that it is possible to correct sinusoidal motion up to an amplitude of 16 mm with a negligible mean residual motion of 0.34 mm. Regarding in-plane spatial resolution, it was found that resolutions between 8 and 18 line pairs per centimeter can be achieved depending on the acquisition settings. Various MTF-based filter kernels have been investigated that offer a certain image impression. It was shown that the creation of an isotropic resolution comes at the cost of an anisotropic noise appearance. Using an analytical model, it was validated that the suggested slot size of 50 mm on the detector is sufficient to reconstruct images that do not suffer from blurring artifacts. Using a model motivated by X-ray physics, a scatter rejection coefficient as well as the dose-saving potential are calculated. Comparing the slot scan to conventional X-ray images with an anti-scatter grid, dose savings of up to 71% were reported without significantly affecting the image quality in terms of signal-to-noise ratio. Lastly, the ability to create true-to-scale images is investigated. Here, an analytical model is combined with a phantom study, which shows that neither magnification nor distortion in the scanning direction is present in the reconstructed image. Finally, three applications to extend the proposed approach and to further improve imaging in orthopedics are presented. First, an advanced collimator control is proposed which combines tomosynthesis with the slot scan to alleviate anatomical occlusion, e.g. in the shoulder girdle area. Second, an approach to fuse a 2-D slot scan with additional 3-D information of the knee is presented. This method is dedicated to improving preoperative implant planning. Lastly, an autofocus-based method to generate smart radiographs from tomosynthesis reconstructions is proposed that aims to simplify the reading process.
Measuring Inner Source Collaboration (2020)
Capraro, Maximilian
Inner source (IS) is the use of open source software development practices and the establishment of open source-like communities within an organization. The organization may still develop proprietary software but internally opens up its development. IS promises to resolve problems of traditional software development by easing software reuse and enabling parties within an organization to collaborate across organizational boundaries. However, it is unclear what elements constitute IS (problem I) and how to measure the presence and magnitude of IS collaboration (problem II). The large majority of research articles on IS to date are limited to qualitative results regarding IS. There are as yet no quantitative studies on IS collaboration exploring how much IS collaboration takes place or how IS practices affect it (problem III). We followed a three-phase research approach to address these problems. First, we performed an extensive literature survey and analyzed 43 IS publications. We found that four key elements constitute IS (shared cultural values, open development environment, communities around software, IS-specific scenarios) but that IS programs and projects differ on at least five dimensions (addressing problem I). Second, we developed the patch-flow method (and a software tool implementing it) for measuring IS collaboration. Patch-flow is the flow of code contributions across organizational boundaries ("silos") such as organizational unit or cost center boundaries. We evaluated the method using case study research with a non-trivial industry organization and found it to be viable and useful to practitioners (addressing problem II). Third, we performed a multiple-case study with three large software organizations running a total of five IS programs. We identified the IS practices used and the resulting patch-flow. We found patch-flow to exist in all organizations but that only a fraction of all code contributions to IS projects constitutes patch-flow. We observed that the number of IS practices implemented correlates with the distance of the parties involved in collaboration. This indicates that IS is particularly suited to enable collaboration between parties of high distance in an organization (addressing problem III). This thesis delivers a holistic definition of IS and the first classification framework for IS programs and projects. Researchers can use such a framework to reason about the generalizability of their results more precisely. The patch-flow measurement method is the first of its kind to measure and quantify IS collaboration and can serve as a base for further quantitative analyses of IS collaboration. The exploration of the patch-flow in the three industry cases can serve as an example and benchmark for practitioners.
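Patch-flow, as defined above, is the flow of code contributions across organizational boundaries. A minimal sketch of counting it from contribution records (field names and sample data are hypothetical, not the thesis's tool) could be:

```python
# Minimal sketch: counting patch-flow, i.e. contributions whose author belongs
# to a different organizational unit than the receiving project. Field names
# and the sample data are hypothetical.
from collections import Counter

contributions = [
    {"author_unit": "BU-A", "project_unit": "BU-A"},  # internal contribution
    {"author_unit": "BU-A", "project_unit": "BU-B"},  # crosses a silo -> patch-flow
    {"author_unit": "BU-C", "project_unit": "BU-B"},  # crosses a silo -> patch-flow
    {"author_unit": "BU-B", "project_unit": "BU-B"},
]

patch_flow = [c for c in contributions if c["author_unit"] != c["project_unit"]]
share = len(patch_flow) / len(contributions)
by_pair = Counter((c["author_unit"], c["project_unit"]) for c in patch_flow)

print(f"{len(patch_flow)} of {len(contributions)} contributions are patch-flow ({share:.0%})")
print(by_pair)
```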
Development of a Fully-Convolutional-Network Architecture for the Detection of Defective LED Chips in Photoluminescence Images (2020)
Stern, Maike
Nowadays, light-emitting diodes (LEDs) can be found in a large variety of applications, from standard LEDs in domestic lighting solutions to advanced chip designs in automobiles, smart watches and video walls. The advances in chip design also affect the test processes, where the execution of certain contact measurements is hampered by ever-decreasing chip dimensions or even rendered impossible by the chip design. For instance, wafer probing determines the electrical and optical properties of all LED chips on a wafer by contacting each and every chip with a prober needle. Chip designs without a contact pad on the surface, however, elude wafer probing, and while electrical and optical properties can be determined by sample measurements, defective LED chips are distributed randomly over the wafer. Here, advanced data analysis methods provide a new approach to gather defect information from already available non-contact measurements. Photoluminescence measurements, for example, record a brightness image of an LED wafer, where conspicuous brightness values indicate defective chips. To extract this defect information from photoluminescence images, a computer-vision algorithm is required that transforms photoluminescence images into defect maps. In other words, each and every pixel of a photoluminescence image must be classified into a class category via semantic segmentation, where so-called fully-convolutional-network algorithms represent the state-of-the-art method. However, the aforementioned task poses several challenges: on the one hand, each pixel in a photoluminescence image represents an LED chip and thus, pixel-fine output resolution is required. On the other hand, photoluminescence images show a variety of brightness values from wafer to wafer in addition to local areas of differing brightness. Additionally, clusters of defective chips assume various shapes, sizes and brightness gradients and thus, the algorithm must reliably recognise objects at multiple scales. Finally, not all salient brightness values correspond to defective LED chips, requiring the algorithm to distinguish salient brightness values corresponding to measurement artefacts, non-defect structures and defects, respectively. In this dissertation, a novel fully-convolutional-network architecture was developed that allows the accurate segmentation of defective LED chips in highly variable photoluminescence wafer images. For this purpose, the basic fully-convolutional-network architecture was modified with regard to the given application and advanced architectural concepts were incorporated so as to enable a pixel-fine output resolution and a reliable segmentation of defect structures at multiple scales. Altogether, the developed dense ASPP Vaughan architecture achieved a pixel accuracy of 97.5 %, a mean pixel accuracy of 96.2 % and a defect-class accuracy of 92.0 %, trained on a dataset of 136 input-label pairs, and thereby showed that fully-convolutional-network algorithms can be a valuable contribution to data analysis in industrial manufacturing.
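The reported pixel accuracy, mean pixel accuracy and defect-class accuracy are standard semantic-segmentation metrics. A small NumPy sketch of how such values can be computed from a label map and a prediction (labels and data invented, not code from the dissertation) is:

```python
import numpy as np

def segmentation_accuracies(pred, target, num_classes):
    """Pixel accuracy, mean per-class accuracy and per-class accuracies
    from integer label maps (standard definitions, illustrative only)."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        conf[t, p] += 1                                   # rows: ground truth, cols: prediction
    pixel_acc = np.trace(conf) / conf.sum()
    per_class = np.diag(conf) / np.maximum(conf.sum(axis=1), 1)
    return pixel_acc, per_class.mean(), per_class

# Tiny example: 0 = background, 1 = defect (hypothetical labels).
target = np.array([[0, 0, 1], [0, 1, 1]])
pred   = np.array([[0, 0, 1], [0, 0, 1]])
print(segmentation_accuracies(pred, target, num_classes=2))
```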
C-arm Cone-Beam Computed Tomography Reconstruction for Knee Imaging under Weight-Bearing Conditions (2020)
Bier, Bastian
Medical imaging is essential for the assessment of osteoarthritis and overall knee health. For that purpose, radiographs of the knees of standing patients are commonly acquired. These suffer, however, from projective transformation and thus do not allow conclusions to be drawn about the complex 3-D joint anatomy. Conversely, compared to many 3-D capable imaging modalities, imaging under load is easily feasible using X-rays. This is beneficial, since it has been shown that this reflects the knee joint under more realistic conditions. Recently, a 3-D imaging protocol has been proposed that enables cone-beam computed tomography (CBCT) reconstruction of the knees acquired under weight-bearing conditions. To this end, the C-arm rotates on a horizontal trajectory around the standing patient. Involuntary patient motion and scattered radiation deteriorate the reconstructions' image quality substantially. In this thesis, novel concepts and methods are proposed to further develop this imaging protocol in order to improve the reconstructions. In a first approach, a primary modulator-based scatter correction method has been transferred to a clinical C-arm CBCT system. The method is a suitable candidate to be applied to projection images of the knees, since it is capable of estimating heterogeneous scatter distributions. To this end, extensions to an existing method have been developed to compensate for the system wobble and the automatic exposure control of the C-arm systems. In multiple experiments, it is demonstrated that the primary modulator method works on clinical C-arm scanners and also for imaging under load. A current state-of-the-art motion estimation method is based on metallic fiducial markers that are placed on the patient's skin. The marker placement is, however, a tedious and time-consuming process. Therefore, two marker-free alternatives are proposed. In a first attempt, a range camera is utilized to track the patient surface simultaneously with a CBCT image acquisition. Using point cloud registration of the acquired depth frames, transformations can be computed that correspond to the patient motion and can be integrated directly into the image reconstruction. In a simulation study, results comparable to the marker-based method could be achieved. Yet, initial real data experiments on a clinical scanner did not achieve satisfying image quality, even though part of the motion could be estimated. Nevertheless, the promising results make this method a precursor to future research. Although this method is marker-free, a prepared environment is required. Hence, another purely image-based motion estimation approach has been investigated. The idea is to replace the positions of the fiducial markers with those of anatomical landmarks present in the projections. Anatomical landmark detection in X-ray images from different directions is difficult and, to the best of our knowledge, has not been investigated yet. For this purpose, a novel deep learning-based approach has been developed. In a first evaluation, the method was tested on X-ray images of the pelvis. Here, it could be demonstrated that the detection accuracy sufficed to initialize a 2-D/3-D registration. Subsequently, the approach is transferred to knee projection images, where the good detection results served as input for the motion estimation. Despite limited results on real data acquisitions, the achieved improvements in image quality are an indicator of a successful future application to motion estimation.
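The range-camera approach computes rigid transformations between depth frames by point cloud registration. A compact sketch of the underlying rigid alignment step for corresponding points (the Kabsch/Procrustes solution; correspondences and data here are synthetic and only illustrative, not the thesis's pipeline) is:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    assuming row-wise correspondences (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.0, 2.0, -1.0])
R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true, atol=1e-6), t_est)
```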
Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging (2019)
Stimpel, Bernhard ; Syben, Christopher ; Würfl, Tobias ; Breininger, Katharina ; Hoelter, Philip ; Dörfler, Arnd ; Maier, Andreas
Hybrid X-ray and magnetic resonance (MR) imaging offers large potential in interventional medical imaging applications due to the broad variety of contrasts of MRI combined with the fast imaging of X-ray-based modalities. To fully utilize the potential of the vast amount of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from another in this case is an ill-posed problem due to ambiguous signals and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution to MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher resolution layers to allow for an accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with a natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with a sharp appearance and reduces erroneously synthesized fine details.
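As a rough illustration of the idea of weighting the loss towards high-frequency structures (not the exact scheme of the paper), a pixel-wise error can be weighted by the local gradient magnitude of the target projection:

```python
import numpy as np

def high_frequency_weighted_l1(pred, target, alpha=1.0):
    """Pixel-wise L1 loss weighted towards edges/fine details of the target.
    Illustrative sketch only; not the paper's exact weighting scheme."""
    gy, gx = np.gradient(target.astype(np.float64))
    weight = 1.0 + alpha * np.hypot(gx, gy)          # emphasise high-frequency regions
    return np.mean(weight * np.abs(pred - target)) / np.mean(weight)

# Toy example with random stand-ins for projection images.
rng = np.random.default_rng(0)
target = rng.random((64, 64))
pred = target + 0.01 * rng.standard_normal((64, 64))
print(high_frequency_weighted_l1(pred, target))
```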
Operating-System Support for Efficient Fine-Grained Concurrency in Applications (2020)
Erhardt, Christoph
The steadily advancing trend towards multi- and manycore computing architectures bears enormous challenges for developers of application software. To be able to make efficient use of the raw parallelism provided by the hardware, programs must explicitly cater for that fact. The classic programming model of a multithreaded application process, which consists of a number of control flows (threads) managed and scheduled by the operating-system kernel within a shared address space, is being increasingly stretched to its limits: on the one hand, creating threads and switching between them is not sufficiently lightweight; on the other hand, structuring a parallel application around threads is often cumbersome and puts needless obstacles in the programmer’s way. A suitable alternative to multithreaded programming is the use of a so-called concurrency platform that supports developers in articulating applications as a conglomeration of fine-grained concurrent activities. Concurrency platforms come with a runtime system that is responsible for dispatching the lightweight work packages to the available computing resources. Such runtime systems generally build upon the abstractions provided by an underlying commodity operating system such as Linux – that is, upon threads as abstractions of processor cores. This construction results in a number of disadvantages: for instance, the operating system’s scheduler acts without consulting the runtime system, thus making decisions that are potentially unfavourable from the application’s point of view; the coexistence of multiple parallel application processes causes problematic reciprocal interference; blocking system calls cause a temporary loss of parallelism. This thesis presents AtroPOS, the design of an atrophied parallel operating system that is specially geared towards supporting concurrency platforms on manycore systems. AtroPOS is a derivative of the OctoPOS operating system and has undergone comprehensive further development; it rests on the paradigm of invasive computing and adopts its fundamental concepts: resource-aware programming, exclusive allocation of processor cores to applications, tailoring and dynamic reconfigurability. The operating-system kernel provides a boiled-down set of essential low-level abstractions on top of which arbitrary runtime libraries can be placed. InvRT, the invasive runtime system that supports executing applications of invasive computing, was developed as a reference runtime library. By default, AtroPOS makes the existing physical processor cores directly available to the application; their virtualisation is strictly optional and there is no notion of threads. The scheduling of user control flows is carried out purely on the user level by the runtime system without involving the operating-system kernel; this allows for the efficient handling even of very fine-grained concurrency within the application. System calls that may block within the kernel have asynchronous invocation semantics and return immediately upon blocking so that loss of parallelism during the waiting time is ruled out by design. Notification of completed system operations is carried out by means of a generic mechanism that passes user-defined data structures upward to the application and can be used by the runtime system to construct arbitrary synchronisation data structures such as futures. The same versatile mechanism is harnessed on tiled computing systems to allow parts of a distributed application to communicate with one another. 
In addition, AtroPOS offers configurable vertical isolation: the strict separation of the operating-system kernel from the application can be enabled and disabled in a coarse- and fine-grained manner, and both statically and dynamically. With this, type-safe applications can issue system calls as ordinary function calls and thus lower their direct and indirect costs. The aforementioned concepts were implemented in the AtroPOS kernel and the InvRT runtime system in the context of this thesis; they were evaluated with the aid of micro-benchmarks and various application suites. Moreover, the runtime library of the parallel programming language Cilk Plus – an extension of C/C++ – was ported to the AtroPOS interface in order to showcase the versatility of the approach.
Methods for Quantification and Respiratory Motion Management in SPECT Imaging (2020)
Sanders, James
Single photon emission computed tomography (SPECT) is a medical imaging modality used to visualize the distribution of radioactive tracers in a patient's body. While SPECT's utility as a diagnostic modality has long been established, its use for planning and managing nuclear medicine therapies has grown in recent years. With these new applications comes a need for absolute quantitation of the amount of radioactivity in tissues. When small, detailed structures are being imaged, a major confounding factor for the quantification task is respiratory motion, which blurs images and leads to underestimation. This thesis seeks to contribute new methods for quantitation and respiratory motion management in SPECT imaging. We first briefly describe the underlying principles that enable SPECT image formation, as well as physical nonidealities and physiological aspects of respiratory motion that confound it. Following this, we introduce methods for analytical and iterative image reconstruction before surveying the techniques that have been developed to correct for these confounding processes. A benefit of these corrections is the enabling of absolute quantitation, and in the next chapter we propose a quantification protocol for Lu-177, an isotope frequently used in radionuclide therapies. After characterizing the protocol in a phantom experiment meant to establish a parameter set offering a favorable bias-variance trade-off, we validate the results with an in vivo patient study. We found that our protocol delivered mean errors relative to truth in the bladder of 10.1%. We then move to the task of respiratory motion management, the first step of which is obtaining a surrogate signal representing a patient's respiratory state over time. After describing five data-driven methods for extracting such a signal, we compare their performance in a phantom experiment and with a collective of cardiac patient scans. We then expand upon this by taking the best-performing method -- a dimensionality reduction-based approach using Laplacian Eigenmaps (LE) -- and augment it with post-processing steps to make it fit for fully-automated operation in clinical practice. Following this, we present results from a follow-up patient validation on a larger collective with 67 scans indicating that the LE-based approach correlates well with a clinically-accepted sensor-based method. To provide an independent assessment of surrogate signal quality, we then analyze respiratory-gated acquisitions from two types of SPECT scans used for therapy planning: selective internal radionuclide therapy (SIRT) planning scans with Tc-99m- MAA and dosimetry acquisitions for Lu-177-based radionuclide therapies. The results show that data-driven LE surrogates allow recovery of meaningful respiratory motion, and we report preliminary results indicating the type of clinical benefits that compensating for this motion might possibly provide. As a final contribution, we propose an algorithm to improve the robustness of respiratory motion estimation in SPECT projections using a sequence-based estimation scheme and a motion model driven by the surrogate signal itself. In a simulation study, we show that our proposed Sequence-based Motion Model (S-MM) algorithm reduces estimation variance compared to two comparison methods. Furthermore, in a collective of 20 patient Tc-99m-MAA liver scans, S-MM reduces respiratory motion blur more consistently and to a greater extent than the comparison methods. 
We conclude the thesis with a summary and an outlook on possible future work.
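A minimal sketch of extracting a one-dimensional surrogate from a stack of projection frames with a Laplacian Eigenmaps embedding (here via scikit-learn's SpectralEmbedding, on synthetic data; only an illustration of the principle, not the thesis's pipeline) could look like this:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Synthetic stand-in for a gated acquisition: 200 frames of 32x32 "projections"
# whose content drifts with a slow pseudo-respiratory phase (data invented).
rng = np.random.default_rng(0)
n_frames, size = 200, 32
phase = np.sin(2 * np.pi * 0.05 * np.arange(n_frames))          # ~respiration
frames = (phase[:, None, None] * np.linspace(0, 1, size)[None, :, None]
          + 0.05 * rng.standard_normal((n_frames, size, size)))

# Laplacian Eigenmaps: embed each (flattened) frame into one dimension.
X = frames.reshape(n_frames, -1)
surrogate = SpectralEmbedding(n_components=1, n_neighbors=10).fit_transform(X).ravel()

# The surrogate should correlate (up to sign) with the underlying phase.
print(abs(np.corrcoef(surrogate, phase)[0, 1]))
```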
Automatic Code Generation for Massively Parallel Applications in Computational Fluid Dynamics (2019)
Kuckuk, Sebastian
Solving partial differential equations (PDEs) is a fundamental challenge in many application domains in industry and academia alike. With increasingly large problems, efficient and highly scalable implementations become more and more crucial. Today, facing this challenge is more difficult than ever due to the increasingly heterogeneous hardware landscape. One promising approach is developing domain-specific languages (DSLs) for a set of applications. Using code generation techniques then allows targeting a range of hardware platforms while concurrently applying domain-specific optimizations in an automated fashion. The present work aims to further the state of the art in this field. As domain, we choose PDE solvers and, in particular, those from the group of geometric multigrid methods. To avoid too broad a focus, we restrict ourselves to methods working on structured and patch-structured grids. We face the challenge of handling a domain as complex as ours, while providing different abstractions for diverse user groups, by splitting our external DSL ExaSlang into multiple layers, each specifying different aspects of the final application. Layer 1 is designed to resemble LaTeX and allows inputting continuous equations and functions. Their discretization is expressed on layer 2. It is complemented by algorithmic components which can be implemented in a Matlab-like syntax on layer 3. All information provided to this point is summarized on layer 4, enriched with particulars about data structures and the employed parallelization. Additionally, we support automated progression between the different layers. All ExaSlang input is processed by our jointly developed Scala code generation framework to ultimately emit C++ code. We particularly focus on how to generate applications parallelized with, e.g., MPI and OpenMP that are able to run on workstations and large-scale clusters alike. We showcase the applicability of our approach by implementing simple test problems, like Poisson's equation, as well as relevant applications from the field of computational fluid dynamics (CFD). In particular, we implement scalable solvers for the Stokes, Navier-Stokes and shallow water equations (SWE) discretized using finite differences (FD) and finite volumes (FV). For the case of Navier-Stokes, we also extend our implementation towards non-uniform grids, thereby enabling static mesh refinement, and advanced effects such as the simulated fluid being non-Newtonian and non-isothermal.
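For readers unfamiliar with the method class targeted by ExaSlang, a plain NumPy two-grid cycle for the 1-D Poisson problem illustrates the structure of a geometric multigrid solver; this is a textbook sketch, not generated ExaSlang output.

```python
import numpy as np

def jacobi(u, f, h, iters, omega=2/3):
    """Damped Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h, pre=3, post=3):
    """One two-grid cycle: pre-smooth, restrict the residual, solve on the
    coarse grid, prolongate the correction, post-smooth."""
    u = jacobi(u, f, h, pre)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2      # residual
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)                                              # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    A = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                        # exact coarse solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # linear prolongation
    return jacobi(u + e, f, h, post)

n = 65                              # fine-grid points (2^k + 1)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)    # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # error at discretisation level
```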
Optimized Integration of Electric Vehicles in Low Voltage Distribution Grids (2019)
Spitzer, Martin ; Schlund, Jonas ; Apostolaki-Iosifidou, Elpiniki ; Pruckner, Marco
All over the world, the reduction of greenhouse gas (GHG) emissions, especially in the transportation sector, is becoming more and more important. Electric vehicles will be one of the key factors in mitigating GHG emissions due to their higher efficiency compared to internal combustion engine vehicles. On the other hand, uncoordinated charging will put more strain on electrical distribution grids, and possible congestion in the grid becomes more likely. In this paper, we analyze the impact of uncoordinated charging, as well as optimization-based coordination strategies, on the voltage stability and phase unbalances of a representative European semi-urban low voltage grid. To this end, we model the low voltage grid as a three-phase system and take realistic arrival and departure times of the electric vehicle fleet into account. Subsequently, we compare different coordinated charging strategies with regard to their optimization objectives, e.g., cost reduction or GHG emission reduction. Results show that possible congestion problems can be solved by coordinated charging. Additionally, depending on the objective, the costs can be reduced by more than 50% and the GHG emissions by around 40%.
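A deliberately simplified sketch of cost-oriented coordinated charging (single vehicle, fixed prices, no grid constraints; all numbers invented and unrelated to the paper's data) is to fill the cheapest available time slots first:

```python
# Simplified cost-oriented charging schedule for one EV (numbers invented):
# charge the required energy in the cheapest slots of the plug-in window,
# respecting the maximum charging power. Grid constraints are ignored here.
def cheapest_slot_schedule(prices_eur_per_kwh, energy_kwh, p_max_kw, slot_hours=1.0):
    need = energy_kwh
    schedule = [0.0] * len(prices_eur_per_kwh)
    for idx in sorted(range(len(prices_eur_per_kwh)), key=lambda i: prices_eur_per_kwh[i]):
        if need <= 0:
            break
        e = min(p_max_kw * slot_hours, need)   # energy charged in this slot
        schedule[idx] = e
        need -= e
    return schedule

prices = [0.30, 0.28, 0.22, 0.18, 0.17, 0.19, 0.25, 0.29]   # arrival..departure, hourly
plan = cheapest_slot_schedule(prices, energy_kwh=20.0, p_max_kw=11.0)
cost = sum(p * e for p, e in zip(prices, plan))
print(plan, round(cost, 2))
```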
Design and Evaluation of Ethernet-based E/E-Architectures for Latency- and Safety-critical Applications (2019)
Smirnov, Fedor
In recent years, there has been a tremendous number of innovations in car electronics. New infotainment and driver assistance features introduce an ever increasing amount of data that has to be transmitted via the in-car communication network. With its huge bandwidth advantage over other communication protocols, Ethernet offers an interesting opportunity to meet the increasing bandwidth and latency requirements of modern car communication networks and is generally seen as the most promising solution for future automotive systems. The strict real-time and reliability requirements of modern ADAS may be addressed by protocol extensions like Ethernet TSN which offer new mechanisms like time-triggered traffic or seamlessly redundant message transmission. The complexity of automotive communication networks, regarding both the size of the networks and the number of configuration parameters like the sending period or the priority of messages, will certainly further increase in the future and necessitates design automation already today. Yet, existing approaches for automated network design cannot be applied to the design of automotive Ethernet (TSN) networks, as they do not account for their special features such as the introduction of transmission schedules, virtually isolated subnetworks, redundant transmissions, and, in particular, Ethernet's lack of real-time and reliability guarantees. In this context, this thesis, for the first time, presents a system-level design approach for automotive Ethernet networks where the multi-dimensional solution search space created by the many---oftentimes non-linear and possibly conflicting---design objectives from the automotive domain is explored within a DSE to find not one, but multiple high-quality designs. This approach enables an automated design and evaluation of Ethernet-based E/E architectures, in particular for latency- and safety-critical applications, and is based on contributions from the areas of formal analysis, constraint-based restriction of the search space, and the injection of problem-specific knowledge into the optimization. During network design, the evaluation of design decisions plays an important role, especially for the timing and the reliability of message transmissions within the network. Existing approaches for timing analysis provide safe timing guarantees for strict-priority Ethernet networks, but are not applicable for networks with TSN-specific features like time-triggered traffic or transmission preemption. To cope with these novel network features, this thesis extends existing timing analysis approaches, so that the timing of the scheduled traffic and, in particular, the interference imposed on unscheduled traffic are considered. The timing analysis is, moreover, complemented by preprocessing techniques that significantly reduce the time required for the analysis of each network design. While a lot of work can be found on the formal analysis of permanent hardware errors and their impact on the system reliability, the influence of transient errors has, so far, attracted less attention from the scientific community. This thesis provides a contribution in this area by proposing a formal analysis approach for the analysis of transient errors which is specifically tailored to the error-detection mechanism used in automotive networks. The proposed approach combines timing and reliability analysis and demonstrates that temporal redundancy can be used as an effective means to improve transmission reliability. 
Especially for problems like the optimization of automotive networks, where the search space is huge and the evaluation of a single solution can take considerable amounts of time, excluding infeasible solutions from the evaluation space has been shown to significantly accelerate the optimization process. Based on SAT-Decoding, an existing approach for the hybrid optimization of constrained problems, this work contributes constraint systems that formally describe Ethernet networks with overlap-free transmission schedules, message routes that are created with respect to a given VLAN partitioning, and a redundant routing without communication loops. These constraint sets enable the automatic creation of network designs which are valid with respect to application-specific requirements, which makes a design optimization of these networks possible in the first place. Over the years, a great pool of experience has been built up by design and analysis experts. With the third area of contributions, this work proposes novel means of making parts of this problem-specific knowledge accessible to the optimizer. The thesis contributes Artificial Gene Design, a novel approach that extends SAT-Decoding and enables the optimizer to directly adjust problem characteristics with a high relevance for the design objectives. The application of AGD is demonstrated using the optimization of redundant routings with respect to the transmission reliability as an example. Furthermore, this thesis shows how topology-specific knowledge can be considered during the formulation of routing constraints to significantly reduce the number of encoding variables, resulting in a smaller search space and a faster convergence towards the (Pareto-)optimal solutions.
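One of the contributed constraint systems describes overlap-free transmission schedules. As a toy illustration only (frame parameters invented, and much simpler than the thesis's SAT-based formulation), the overlap-freedom of a single link's time-triggered schedule can be checked by expanding all frame occurrences within the hyperperiod:

```python
# Toy check (frame parameters invented): verify that the time-triggered windows
# of periodic frames on one link never overlap within the hyperperiod.
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def overlap_free(frames):
    """frames: list of (offset_us, duration_us, period_us) on the same link."""
    hyper = reduce(lcm, (p for _, _, p in frames))
    windows = []
    for off, dur, per in frames:
        windows += [(off + k * per, off + k * per + dur) for k in range(hyper // per)]
    windows.sort()
    return all(prev_end <= start for (_, prev_end), (start, _) in zip(windows, windows[1:]))

frames = [(0, 10, 100), (20, 10, 50), (40, 10, 100)]   # (offset, duration, period) in µs
print(overlap_free(frames))
```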
Secure Logging in Operational Instrumentation and Control Systems (2019)
Bajramovic, Edita
With Industrial Control Systems being increasingly networked, the need for sound forensic capabilities for such systems increases, including reliable log file analysis as a vital part of such investigations. However, manipulating log files is one of the steps a knowledgeable attacker can take to prevent visibility of system events and to hide traces of attacker actions in those systems. Therefore, secure logging is advisable for an effective preparation of digital forensics investigations. In addition, implementing digital forensics readiness in nuclear power plants allows efficient digital forensics investigations and proper gathering of digital forensics evidence while minimizing investigation costs. These capabilities are necessary to adequately prevent and quickly detect any security incident and perform further digital forensics investigations with complete evidence. If an attacker is able to modify log entries or blocks of log entries, critical digital evidence is lost. Within this thesis, we first evaluate the presence of digital forensics readiness in critical infrastructures, including nuclear power plants, and briefly discuss existing digital forensics readiness approaches. As systems in critical infrastructures, such as those in nuclear power plants, are sophisticated, adequate preparedness is essential in order to respond to cybersecurity incidents before they happen. Due to the importance of safety in these systems, manual approaches are favored over automated techniques. All required tasks and activities and the expected results must also be properly documented. Application Security Controls can be one approach to properly document forensic controls. However, Application Security Controls must be evaluated further to ensure forensic applicability, as considerable alternatives also exist. In order to demonstrate the value of such forensic Application Security Controls, we analyze a server system of an Operational Instrumentation & Control system in terms of digital evidence. Based on the analysis results, we derive recommendations to improve the overall digital forensic readiness and the security hardening of Linux server systems in the Operational Instrumentation & Control system. Then, we introduce our formal system model and the types of attackers that can access and manipulate logs and the logging device. Here, we also give a brief overview of some existing secure logging approaches and compare them with each other. The goal is to standardize the requirements of secure logging approaches and analyze which unified security guarantees are realized by these existing approaches under strong attacker models. Later, we extend our secure logging model by using blockchain as a secure logging protocol, apply the new model to industrial settings, and build a simple prototype as a proof of concept. In an evaluation of the new model and the corresponding prototype, we show the potential, but also the challenges, of this approach. Further, we take a deeper look into existing algorithms for secure logging and integrate them into a single parameterized algorithm. This log authentication and verification algorithm contains a combination of security guarantees, and its parametrization yields the set of previous algorithms. Even with different file formats and a common purpose, log files generally have similar structures. To this end, we evaluated three common log file types (syslog, Windows event log and SQLite browser histories).
Based on this evaluation, we developed a simple unified representation of log files that allows analyses to be performed independently of their format. As visualization of log files is helpful for finding proper evidence, we also developed a simple log file visualization tool. This tool helps to identify evidence of system time manipulation.
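The core idea behind making log manipulation detectable can be illustrated with a minimal hash chain, in which each entry's hash covers the previous entry's hash. This is an illustrative sketch, not one of the algorithms analyzed in the thesis; a real scheme would additionally protect the chain head, e.g. with keys or an external anchor.

```python
import hashlib, json

def append_entry(chain, message):
    """Append a log entry whose hash covers the message and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"seq": len(chain), "msg": message, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash and check the chaining; detects any past modification."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("seq", "msg", "prev")}
        ok = (entry["prev"] == prev_hash and
              entry["hash"] == hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest())
        if not ok:
            return False
        prev_hash = entry["hash"]
    return True

log = []
for msg in ["pump started", "valve closed", "operator login"]:
    append_entry(log, msg)
print(verify(log))               # True
log[1]["msg"] = "valve opened"   # tampering with a past entry ...
print(verify(log))               # ... is detected: False
```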
Wearable computing applications in eHealth (2019)
Leutheuser, Heike
Non-communicable diseases are the leading cause of death and disability worldwide. Almost two thirds of them are linked to the following risk factors: physical inactivity, unhealthy diets, tobacco use, and harmful use of alcohol. At the same time, everyone is increasingly surrounded by wearables, such as smartphones and smartwatches, making wearable computing an integral part of everyday life. Addressing non-communicable diseases with the help of wearables thus seems to be one potential solution. In this thesis, methods and algorithms for three wearable computing applications in eHealth are presented: I) mobile breathing analysis, II) mobile electrocardiogram (ECG) analysis, and III) inertial measurement unit (IMU)-based activity recognition. Respiratory inductance plethysmography (RIP) provides an unobtrusive and mobile method for measuring breathing characteristics, avoiding measurements with flowmeters (FMs), which alter the natural breathing pattern. The output of RIP devices needs to be adjusted to result in correct ventilatory tidal volumes. State-of-the-art methods for adjusting RIP data can only be applied after the actual measurement, as they require simultaneous data acquisition with RIP and FM. In this thesis, novel adjustment algorithms were created that enable the use of RIP alone, and thus its use outside the controlled laboratory environment. Respiratory diseases and infections are among the leading causes of death. In future work, it has to be investigated how RIP devices can effectively be used for patients suffering from these conditions. The aim of the mobile ECG analysis was to provide real-time algorithms for two particular scopes. The first scope dealt with identifying arrhythmic beats using only the time instances of successive heartbeats, the RR intervals. In the second scope, three highly accurate single-lead, instantaneous P- and T-wave detection algorithms were compared. These two applications could help in addressing cardiovascular diseases, which are the number one cause of death worldwide. An effective and easy-to-handle arrhythmia classification algorithm could send people at risk to a physician early. Highly accurate P- and T-wave detection algorithms requiring only a single lead are beneficial for all ECG monitoring fields and, at the same time, enhance the patient's comfort. IMU-based activity recognition provides an objective method for classifying activities, addressing the risk factor of physical inactivity. To this end, a common, publicly available dataset, DaLiAc, was created to enable the comparison of activity recognition algorithms (http://www.activitynet.org/). Using the benchmark dataset DaLiAc, a hierarchical classification system was created that outperformed six state-of-the-art activity recognition algorithms. Recognizing single activities of daily living might increase individuals' awareness and encourage them to increase their physical activity, as physical inactivity is one of the four leading risk factors for non-communicable diseases. In this thesis, three different wearable computing applications addressing different non-communicable diseases were presented. Wearables have been used continuously for health monitoring in recent years, with a still increasing trend, as they bring certain benefits to the user in daily life. This will expand in the future, fostered by ongoing inventions, accumulating knowledge, technological progress, and progressive digitalization.
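As background to the RIP adjustment problem, the conventional baseline (which the thesis goes beyond, since it needs a simultaneous flowmeter recording) calibrates the two RIP bands against flowmeter-derived volumes by least squares. The sketch below uses synthetic signals and only illustrates that baseline, not the novel algorithms of the thesis.

```python
import numpy as np

# Generic baseline sketch (not the thesis's novel algorithms): calibrate the two
# RIP bands (ribcage, abdomen) against a flowmeter-derived volume by least
# squares, V ~ a*ribcage + b*abdomen + c. All signals here are synthetic.
rng = np.random.default_rng(0)
n = 500
ribcage = np.sin(np.linspace(0, 20 * np.pi, n)) + 0.05 * rng.standard_normal(n)
abdomen = 0.8 * np.sin(np.linspace(0, 20 * np.pi, n) + 0.3) + 0.05 * rng.standard_normal(n)
volume_fm = 0.45 * ribcage + 0.55 * abdomen + 0.1          # "ground truth" from the flowmeter

X = np.column_stack([ribcage, abdomen, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, volume_fm, rcond=None)
volume_rip = X @ coeffs                                     # adjusted RIP volume estimate
print(coeffs, np.max(np.abs(volume_rip - volume_fm)))
```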
Sprechaktbasiertes Adaptives Fallmanagement (2019)
Tenschert, Johannes
Speech acts have been seen for decades as a way to improve the design of interactive systems through their focus on pragmatic intention. Early prototypes, however, were isolated applications with limited inference capabilities, which hampered their practical use. With the drastic increase in knowledge work and knowledge-intensive processes, the world of work has also changed considerably. Yet the goals, interactions, intermediate results and outcomes of knowledge-intensive processes are typically scattered across many of the systems involved, and important information is often not documented. Adaptive case management is intended to support knowledge-intensive processes whose course only becomes clear during execution itself. We apply speech act theory in adaptive case management because it allows a context-aware representation of interactions while it is negligible, for this context, whether the process is structured, semi-structured or ad hoc. Through a general representation of intentions, this enables inference regardless of whether an interaction or activity was modeled or documented ad hoc. Scattered process information, which today is connected by the knowledge workers in charge, can thus be processed better in an overall context. This work substantiates that speech acts in business processes are widespread and diverse. For representative domains of knowledge work, it examines whether this diversity still permits a set of speech acts that is manageable for modeling and documentation. It also investigates requirements and expectations towards adaptive case management systems, with a more precise classification of the knowledge workers to be supported. From this, it derives a speech-act-based approach to adaptive case management for complex work with high interdependence. This approach supports knowledge-intensive processes even when no process model is available in advance. Activities are initially assumed to be ad hoc. As needed, routine work can be simplified or automated through process models. Furthermore, speech-act-based techniques for semantic annotations, modeling and business rules for compliance monitoring as well as integration are introduced. This makes it possible to connect structured, semi-structured and ad hoc operations in one overarching knowledge-intensive process that is supported by warning as well as preventing controls.
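A minimal, purely hypothetical data sketch of a speech-act record in an adaptive case file (illocution types and field names invented, not the thesis's schema) shows how the pragmatic intention of each interaction can be documented independently of whether it was modeled or ad hoc:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical sketch of a speech-act record in an adaptive case file;
# illocution types and field names are illustrative, not the thesis's schema.
@dataclass
class SpeechAct:
    illocution: str                    # e.g. "request", "promise", "assertion"
    speaker: str
    addressee: str
    proposition: str                   # the propositional content of the act
    case_id: str
    timestamp: datetime = field(default_factory=datetime.now)
    in_reply_to: Optional[int] = None  # index of a prior act, e.g. a promise answering a request

case_log = [
    SpeechAct("request", "clerk", "expert", "assess damage claim #4711", case_id="C-1"),
    SpeechAct("promise", "expert", "clerk", "assessment by Friday", case_id="C-1", in_reply_to=0),
]

# Simple inference over intentions: which requests have not been answered yet?
answered = {a.in_reply_to for a in case_log if a.in_reply_to is not None}
open_requests = [a for i, a in enumerate(case_log) if a.illocution == "request" and i not in answered]
print(open_requests)
```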
Analyse verbreiteter Anwendungen zum Lesen von elektronischen Büchern (2019)
Benenson, Zinaida ; Berger, Frederik ; Cherepantsev, Anatoliy ; Datsevich, Sergey ; Do, Long ; Eckert, Moritz ; Elsberger, Tassilo ; Freiling, Felix ; Frinken, Marius ; Glameyer, Hendrik ; Hafner, Steffen ; Hagenhoff, Svenja ; Hammer, Andreas ; Hammerl, Stefan ; Hantke, Florian ; Heindorf, Felix ; Henninger, Marcel ; Höfer, Daniel ; Kuhrt, Phillip ; Lattka, Maximilian ; Leis, Tobias ; Leubner, Christian ; Leyrer, Katharina ; Lindenmeier, Christian ; Del Medico, Katharina ; Moussa, Denise ; Nissen, Michael ; Öz, Alina ; Ottmann, Jenny ; Reif, Katharina ; Ripley, Nora ; Roth, Armin ; Schilling, Joschua ; Schleicher, Robert ; Schulz, David ; Stephan, Milan ; Volkert, Christoph ; Wehner, Max ; Wild, Matthias ; Wirth, Johannes ; Wolf, Julian ; Wunder, Julia ; Zlatanovic, Jovana
The market share of electronic books (e-books) in the book market is growing steadily. To read e-books, special reading environments are required, which can be realized as software (in the browser or as a dedicated application) or as a special-purpose device (e-reader). These reading environments are suited to collecting data about reading behavior. As part of a university course, the software reading environments of the two German market leaders, Kindle and Tolino, were examined. This report summarizes the results of these analyses. The result is a comprehensive inventory of the digital traces that arise from using the programs. The versions of the respective web applications and Android apps that were current at the time of the investigation were examined, as well as the Kindle Windows client. The results were produced in an exercise accompanying the lecture "Fortgeschrittene forensische Informatik II" in the winter semester 2018/19 at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), which was held jointly by the Lehrstuhl für Informatik 1 and the Institut für Buchwissenschaft at FAU.
Forensic Analysis of the Resilient File System (ReFS) Version 3.4 (2019)
Prade, Paul ; Groß, Tobias ; Dewald, Andreas
ReFS is a modern file system developed by Microsoft, and its internal structures and behavior are not officially documented. Although there have been some efforts to decipher its data structures, some of these findings have since become deprecated and can no longer be applied to current ReFS versions. In this work, general concepts and internal structures found in ReFS are examined and documented. Based on these structures and the processes by which they are modified, approaches to recover (deleted) files from ReFS-formatted file systems are shown. We also evaluated our implementation and the allocation strategy of ReFS with respect to accuracy, runtime and the ability to recover older file states. In addition, we extended The Sleuth Kit, allowing it to parse ReFS partitions, and built a carver based on that extension of The Sleuth Kit.
A Standards-based Framework for Test-driven Agile Simulation (2019)
Schneider, Vitali
In view of increasing the efficiency of development processes in the field of software and systems engineering, model-driven techniques are coming into ever more widespread use. On the one hand, abstract graphical models help to master the complexity of the system under development. On the other hand, formal models serve as the source for analysis and automated synthesis of a system. Model-based transformation engines and generators allow the specifications to be defined platform- and target-language-independently and to be automatically mapped to the desired target platform. Test-driven development is a promising approach in the field of agile software development. In this method, the development process is based on relatively short iteration cycles with preceding test specifications. Because the actual implementation is consistently carried out in compliance with the previously written tests, this method leads to a higher test coverage at early development stages and thus contributes to the overall quality assurance of the resulting system. This thesis introduces the concept of Test-driven Agile Simulation (TAS) as a consistent evolution of systems engineering methods through the combination of test- and model-driven development techniques with model-driven simulation. With the help of simulations, performance evaluations and validation of the modeled system can be carried out in the early stages of the development process, even when no program code or fully implemented system is yet available. The primary goal of this approach is to combine the advantages of the above techniques to enable a holistic model-based approach to systems engineering with improved quality assurance. In particular, special attention is paid to the modeling and validation of the overall system, taking into account the effects of communication between its components. The whole approach is founded upon the widespread and established standards for Model-Driven Architecture (MDA) provided by the Object Management Group (OMG). Using the OMG's standard modeling language UML in combination with specialized extension profiles, it is possible to specify requirements, system models and test models in a uniform, formal, and standard-compliant manner. The creation and presentation of the essential elements of such specifications are largely done with the help of graphical diagrams, such as class, composite structure, state, and activity diagrams. To facilitate behavioral modeling using detailed activity diagrams, TAS provides support for the textual activity language Alf, which is also a standard provided by the OMG. UML models can be used at different levels of abstraction for specification as well as for analysis. In the TAS approach, these models are automatically transformed into executable simulation code, which can then be executed primarily to ensure the required behavior of the system and the correctness of the tests. In this way, by running tests early on the simulated model, the mutual validation of the system and test specification is performed. The simulation of the modeled system also provides insights into the expected dynamic behavior of the system in terms of functional as well as non-functional properties. To support the TAS approach, a versatile integrated tool environment is provided by our framework SimTAny. The framework offers seamless support for modeling, transforming, simulating, and testing UML-based specification models.
In addition to the modeling methodology of TAS, the realization of the framework itself is largely based on the standards of the OMG. Model-based approaches and standardized transformation languages are widely applied in the different components of SimTAny. Other helpful features of SimTAny include the traceability of requirements across modeled elements and automatically generated code artifacts, as well as the management and design of simulation experiments. The service-oriented architecture of the framework also makes it possible to meet the challenges of distributed development processes. This also simplifies extending the framework's functionality and integrating it into existing development environments and processes.