Informatik
Cardiovascular magnetic resonance imaging is the gold standard for cardiac function assessment. Quantification of clinical results (CRs) requires precise segmentation. Clinicians compare CRs statistically to ensure reproducibility, while developers of convolutional neural networks compare their results via segmentation metrics. The aim of this work was to introduce software capable of automatic multilevel comparison. A multilevel analysis covering segmentations and CRs builds on a generic software backend. Metrics and CRs are calculated with geometric accuracy, and segmentations and CRs are linked so that errors and their effects can be tracked. An interactive GUI makes the software accessible to different user groups. The software's multilevel comparison was tested on a use case based on cardiac function assessment. It shows good reader agreement in CRs and segmentation metrics (Dice > 90%). Decomposing the differences by cardiac position revealed excellent agreement in midventricular slices (> 90%) but poorer segmentations in apical (> 71%) and basal (> 74%) slices. Further decomposition by contour type located the largest millilitre differences in the basal right cavity (> 3 ml). Visual inspection showed that these differences were caused by differing basal slice choices. The software thus illuminated reader differences on several levels, and the production of spreadsheets and figures on metric values and CR differences was automated. A multilevel reader comparison is feasible and can be extended to other cardiac structures in the future.
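For reference, the Dice Similarity Coefficient used as a segmentation metric above can be computed per slice from two binary masks as 2|A∩B| / (|A| + |B|). The following NumPy sketch is only an illustration; variable names and array sizes are hypothetical and not taken from the software described.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks (1 = structure, 0 = background)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Illustrative comparison of two readers' contours on one midventricular slice
reader1 = np.zeros((256, 256), dtype=np.uint8)
reader2 = np.zeros((256, 256), dtype=np.uint8)
reader1[100:160, 100:160] = 1
reader2[105:165, 102:162] = 1
print(f"Dice = {dice_coefficient(reader1, reader2):.3f}")
```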
Cardiac magnetic resonance (CMR) examinations require standardization to achieve reproducible results. Quality control, as established in other industries such as in-vitro diagnostics, could therefore be of essential value. One such method is the statistical detection of long-term drifts of clinically relevant measurements. Starting in 2010, reports from all CMR examinations of a high-volume center were stored in a hospital information system. Quantitative parameters of the left ventricle were analyzed over time with moving averages of different window sizes, and influencing factors on the acquisition and on the downstream analysis were captured. 26,902 patient examinations were exported from the clinical information system. The moving median was compared to predefined tolerance ranges, which revealed a total of 50 potentially quality-relevant changes ("alerts") in stroke volume (SV), end-diastolic volume (EDV) and left ventricular mass (LVM). Potential causes such as changes of staff, scanner relocation and software changes were found not to be responsible for the alerts, and no other influencing factors were identified retrospectively. Statistical quality assurance systems based on moving average control charts may provide an important step towards the reliability of quantitative CMR. A prospective evaluation is needed for effective root cause analysis of quality-relevant alerts.
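The moving-median control-chart idea can be sketched as follows: a rolling median over a window of consecutive examinations is compared against a predefined tolerance range, and an alert is raised whenever it leaves that range. The window size, tolerance limits and synthetic data below are purely illustrative and do not reflect the study's actual settings.

```python
import numpy as np
import pandas as pd

def moving_median_alerts(values: pd.Series, window: int, lower: float, upper: float) -> pd.Series:
    """Mark examinations where the moving median leaves the tolerance range [lower, upper]."""
    rolling_median = values.rolling(window=window, min_periods=window).median()
    return (rolling_median < lower) | (rolling_median > upper)

# Illustrative use: end-diastolic volume (EDV) over consecutive examinations
rng = np.random.default_rng(0)
edv = pd.Series(150 + rng.normal(0, 15, size=500))  # synthetic EDV values in ml
edv.iloc[300:] += 20                                # simulated long-term drift
alerts = moving_median_alerts(edv, window=100, lower=140, upper=160)
print("first alert at examination index:", alerts.idxmax() if alerts.any() else None)
```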
The manual and often time-consuming segmentation of the myocardium in cardiovascular magnetic resonance is increasingly automated using convolutional neural networks (CNNs). This study proposes a cascaded segmentation (CASEG) approach to improve automatic image segmentation quality. First, an object detection algorithm predicts a bounding box (BB) for the left ventricular myocardium, whose 1.5-fold enlargement defines the region of interest (ROI). Then, the ROI image section is fed into a U-Net based segmentation. Two CASEG variants were evaluated: one using only the ROI-cropped image (cropU) and the other using a two-channel image that additionally contains the original BB image section (crinU). Both were compared to a classical U-Net segmentation (refU). All networks share the same hyperparameters and were tested on basal and midventricular slices of native and contrast-enhanced (CE) MOLLI T1 maps. The Dice Similarity Coefficient improved significantly (p < 0.05) in cropU and crinU compared to refU (81.06%, 81.22%, 72.79% for native and 80.70%, 79.18%, 71.41% for CE data), while no significant improvement (at the p < 0.05 level) was achieved in the mean absolute error of the T1 time (11.94 ms, 12.45 ms, 14.22 ms for native and 5.32 ms, 6.07 ms, 5.89 ms for CE data). In conclusion, CASEG provides improved geometric concordance but needs further improvement in the quantitative outcome.
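The ROI step of CASEG, i.e. enlarging the predicted bounding box by a factor of 1.5 around its centre and cropping the corresponding image section before it is fed to the segmentation U-Net, can be sketched as below. Function names, image sizes and coordinates are illustrative and not taken from the CASEG code base.

```python
import numpy as np

def enlarge_bbox(x0, y0, x1, y1, factor, height, width):
    """Enlarge a bounding box (pixel coordinates) by `factor` around its centre,
    clipped to the image extent."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * factor / 2.0
    half_h = (y1 - y0) * factor / 2.0
    nx0 = int(max(0, round(cx - half_w)))
    ny0 = int(max(0, round(cy - half_h)))
    nx1 = int(min(width, round(cx + half_w)))
    ny1 = int(min(height, round(cy + half_h)))
    return nx0, ny0, nx1, ny1

def crop_roi(image: np.ndarray, bbox, factor=1.5):
    """Crop the region of interest defined by the enlarged bounding box."""
    h, w = image.shape[:2]
    x0, y0, x1, y1 = enlarge_bbox(*bbox, factor, h, w)
    return image[y0:y1, x0:x1]

# Illustrative use on a synthetic T1 map with a detected box around the myocardium
t1_map = np.random.rand(288, 384)
roi = crop_roi(t1_map, bbox=(150, 100, 230, 180), factor=1.5)
print(roi.shape)  # the cropped section that would be fed into the U-Net
```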
Many challenges of today’s science are parametric optimization problems that are extremely complex and computationally intensive to calculate. At the same time, the hardware for high-performance computing is becoming increasingly powerful. Geneva is a framework for parallel optimization of large-scale problems with highly nonlinear quality surfaces in grid and cloud environments. To harness the immense computing power of high-performance computing clusters, we have developed a new networking component for Geneva—the so-called MPI Consumer—which makes Geneva suitable for HPC. Geneva is most prominent for its evolutionary algorithm, which requires repeatedly evaluating a user-defined cost function. The MPI Consumer parallelizes the computation of the candidate solutions’ cost functions by sending them to remote cluster nodes. By using an advanced multithreading mechanism on the master node and by using asynchronous requests on the worker nodes, the MPI Consumer is highly scalable. Additionally, it provides fault tolerance, which is usually not the case for MPI programs but becomes increasingly important for HPC. Moreover, the MPI Consumer provides a framework for the intuitive implementation of fine-grained parallelization of the cost function. Since the MPI Consumer conforms to the standard paradigm of HPC programs, it vastly improves Geneva’s user-friendliness on HPC clusters. This article gives insight into Geneva’s general system architecture and the system design of the MPI Consumer as well as the underlying concepts. Geneva—including the novel MPI Consumer—is publicly available as an open source project on GitHub ( https://github.com/gemfony/geneva ) and is currently used for fundamental physics research at GSI in Darmstadt, Germany.
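The master/worker pattern described above, in which the master distributes candidate solutions and the workers evaluate the user-defined cost function in parallel, can be illustrated with a minimal mpi4py sketch. This is only a conceptual analogue in Python: Geneva and its MPI Consumer are C++ components with their own scheduling, multithreading and fault-tolerance mechanisms, and every name below is hypothetical.

```python
# Minimal master/worker sketch of parallel cost-function evaluation with mpi4py.
# Run with e.g.: mpirun -n 4 python evaluate.py
from mpi4py import MPI
import numpy as np

def cost_function(candidate: np.ndarray) -> float:
    """User-defined quality function (here: a simple quadratic bowl)."""
    return float(np.sum(candidate ** 2))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Master: create one candidate solution per rank and distribute them
    population = [np.random.uniform(-5, 5, size=10) for _ in range(size)]
else:
    population = None

candidate = comm.scatter(population, root=0)   # each rank receives one candidate
fitness = cost_function(candidate)             # all ranks evaluate in parallel
all_fitness = comm.gather(fitness, root=0)     # master collects the results

if rank == 0:
    print("best fitness in this generation:", min(all_fitness))
```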
Proceedings CERC 2023 (2024)
In an era marked by unprecedented technological advancements, the 2023 Collaborative European Research Conference (CERC) convened in Barcelona, Spain, on June 9-10, 2023, as a hybrid event. This gathering underscored the imperative of interdisciplinary collaboration across Europe, bringing together researchers from diverse fields to address the multifaceted challenges and opportunities presented by rapid innovation.
The conference featured a keynote address that delved into the swift evolution of artificial intelligence (AI) and its profound societal implications. The discourse highlighted the integration of AI across various professions, emphasizing the necessity for human oversight to navigate ethical considerations and mitigate potential risks. The keynote also examined the European Union's proactive stance on AI regulation, particularly through the forthcoming AI Act, which aims to establish a robust framework for the responsible development and deployment of AI technologies.
The proceedings encompass a wide array of research contributions, reflecting the conference's commitment to fostering knowledge transfer and interdisciplinary exchange. Topics span from data processing and machine learning to e-healthcare innovations and the societal impacts of emerging technologies. Notably, discussions on AI's role in healthcare, legal frameworks, and education underscore the critical need for ethical standards and regulatory measures to ensure that technological progress aligns with societal well-being.
In this article, selected new directions in knowledge-based artificial intelligence (AI) and machine learning (ML) are presented: ontology development methodologies and tools, automated engineering of WordNets, innovations in semantic search, and automated machine learning (AutoML). Knowledge-based AI and ML complement each other ideally, as the strengths of each compensate for the weaknesses of the other. This is demonstrated via selected corporate use cases: anomaly detection, efficient modeling of supply networks, circular economy, and semantic enrichment of technical information.
Using random linear network coding (RLNC) in asynchronous networks with one-to-many information flow has already been proven to be a valid approach to maximize the channel capacities. Message-based consensus protocols such as practical Byzantine fault tolerance (pBFT) adhere partially to said scenario. Protocol phases with many-to-many communication, however, still suffer from quadratic growth in the number of required transmissions to reach consensus. We show that an enhancement in the data transmission behavior in the quadratic phases is possible through combining RLNC with pBFT as one hybrid protocol. We present several experiments conducted on random network topologies. We conclude that using RLNC-based data transmission offers a significantly better performance under specific circumstances, which depend on the number of participating network nodes and the chosen coding parameters. Applying the same approach to other combinations of message-based consensus and network coding protocols promises not only a gain in performance, but may also improve robustness and security and open up new application scenarios for RLNC, e.g., running it on the application layer.
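As a rough illustration of the RLNC building block used above, the sketch below encodes packets as random linear combinations over GF(2) and checks decodability via the rank of the collected coefficient vectors; practical deployments often use larger fields such as GF(2^8), and the pBFT message flow itself is not modelled here. All names and sizes are illustrative.

```python
import numpy as np

def rlnc_encode(packets: np.ndarray, rng: np.random.Generator):
    """Create one coded packet as a random linear combination of the source packets over GF(2)."""
    coeffs = rng.integers(0, 2, size=packets.shape[0], dtype=np.uint8)
    coded = np.zeros(packets.shape[1], dtype=np.uint8)
    for c, p in zip(coeffs, packets):
        if c:
            coded ^= p  # addition over GF(2) is XOR
    return coeffs, coded

def gf2_rank(matrix: np.ndarray) -> int:
    """Rank over GF(2) via Gaussian elimination; decoding succeeds at full rank."""
    m = matrix.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

rng = np.random.default_rng(1)
source = rng.integers(0, 256, size=(4, 32), dtype=np.uint8)   # 4 source packets of 32 bytes
received = [rlnc_encode(source, rng) for _ in range(6)]        # collect 6 coded packets
coeff_matrix = np.array([c for c, _ in received], dtype=np.uint8)
print("decodable:", gf2_rank(coeff_matrix) == source.shape[0])
```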
CERC 2021 Proceedings (2024)
CERC 2021, the first online CERC conference, provided an opportunity to welcome not only our European friends and colleagues but also participants from across the globe. Munster Technological University has made a notable impact in Artificial Intelligence, Cybersecurity, and Computer Science research, largely due to national, European, and international funding and partnerships. It feels fitting, therefore, that CERC is once again being hosted by our university.
The papers selected for presentation and publication cover a wide range of topics, including Visual Computing, Data Processing and Machine Learning, E-Healthcare and Smart Diagnostics, E-Learning and Education, and Engineering and Society.
Throughout this conference, we have received invaluable support from the programme committee and my fellow programme chairs, especially Prof. Udo Bleimann for his unwavering support, and Prof. Huiru Zheng, Prof. Ingo Stengel, Dr. Haiying Wang, and Prof. Stefanie Regier for their dedication to the review process. We are also grateful to Dirk Burkhardt and Dr. Robert Loew for their efforts in setting up the website, managing the conference system, and preparing the programme and proceedings.
A special thanks goes to Munster Technological University, Ulster University, Hochschule Karlsruhe, and Hochschule Darmstadt for their essential support of this conference.
Dr Haithem Afli
Over the years, certifications such as Product Owner (PO) and Scrum Master (SM) have become increasingly important in the agile environment. At the same time, as with many other forms of professional development, compact courses and certifications alone are unlikely to be sufficient to fill the roles attested by these certificates effectively; additional competencies are required. A wide range of supplementary courses and training paths now exists for this purpose as well. These, however, are typically again structured modularly by knowledge and competence areas. Practice, in contrast, is characterized precisely by being process-oriented, so the various knowledge building blocks and competence areas from training must be applied in ever-new constellations and combinations, adapted to the situation, rather than in cleanly separated modules. Overall, this results in a gap between what the certificate courses, including advanced courses, can convey and what is additionally needed in practice to perform the roles of a PO or SM competently. Using the role of a PO as an example, this contribution develops a proposal for how this gap could be closed on the basis of an initial qualification. To this end, a work-integrated, process-oriented on-the-job qualification is presented that focuses on the otherwise missing immediate application and situational combination of the various knowledge and competence areas from the certificate courses, under the guidance of experienced mentors. The content and basic structure of the on-the-job qualification were developed and validated using a qualitative-empirical approach. The approach enables agile companies to close the gap between the certification content and the actual PO tasks in practice, so that POs can become effective in their role more quickly. The approach can also easily be transferred to the role of the SM.
In order to generate a machine learning algorithm (MLA) that can support ophthalmologists with the diagnosis of glaucoma, a carefully selected dataset that is based on clinically confirmed glaucoma patients as well as borderline cases (e.g., patients with suspected glaucoma) is required. The clinical annotation of datasets is usually performed at the expense of the data volume, which results in poorer algorithm performance. This study aimed to evaluate the application of an MLA for the automated classification of physiological optic discs (PODs), glaucomatous optic discs (GODs), and glaucoma-suspected optic discs (GSODs). Annotation of the data to the three groups was based on the diagnosis made in clinical practice by a glaucoma specialist. Color fundus photographs and 14 types of metadata (including visual field testing, retinal nerve fiber layer thickness, and cup–disc ratio) of 1168 eyes from 584 patients (POD = 321, GOD = 336, GSOD = 310) were used for the study. Machine learning (ML) was performed in the first step with the color fundus photographs only and in the second step with the images and metadata. Sensitivity, specificity, and accuracy of the classification of GSOD vs. GOD and POD vs. GOD were evaluated. Classification of GOD vs. GSOD and GOD vs. POD performed in the first step had AUCs of 0.84 and 0.88, respectively. By combining the images and metadata, the AUCs increased to 0.92 and 0.99, respectively. By combining images and metadata, excellent performance of the MLA can be achieved despite having only a small amount of data, thus supporting ophthalmologists with glaucoma diagnosis.
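One common way to combine fundus images with tabular metadata, as described above, is a two-branch network in which CNN image features are concatenated with the metadata vector before a classification head. The following PyTorch sketch is only an illustrative stand-in and does not reproduce the study's actual MLA, architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class FundusWithMetadata(nn.Module):
    """Toy two-branch classifier: a small CNN encodes the fundus photograph,
    its features are concatenated with the 14 metadata values, and a dense
    head predicts POD / GSOD / GOD."""
    def __init__(self, n_metadata: int = 14, n_classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 32 image features
        )
        self.head = nn.Sequential(
            nn.Linear(32 + n_metadata, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.cnn(image), metadata], dim=1)
        return self.head(features)

# Illustrative forward pass with a random fundus image and metadata vector
model = FundusWithMetadata()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 14))
print(logits.shape)  # torch.Size([2, 3])
```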