For energy systems research, software models are a core element of scenario analysis. The research project UNSEEN aimed to compute an unprecedented number of model-based energy scenarios in order to better assess uncertainties, in particular using linear-optimization energy system models. To this end, extensive parameter variations were applied to energy scenarios, and the key methodological obstacle in this context was addressed: the computational tractability of the underlying mathematical optimization problems. The predecessor project BEAM-ME laid the foundation for using high-performance computing (HPC) to solve these models through the development and application of the open-source solver PIPS-IPM++. In UNSEEN, this solver was the central component of a workflow for generating, solving, and multi-criteria evaluation of energy scenarios, implemented on the JUWELS supercomputer at Forschungszentrum Jülich. For the efficient generation and communication of model instances for mathematical optimization methods on HPC, a further workflow component was developed by GAMS Software GmbH: the scenario generator. The further development of solution algorithms for linear-optimization energy system models focused on mixed-integer optimization problems, which must be solved to model concrete infrastructures and measures for implementing the energy transition. The associated algorithm development work was led by Technische Universität Berlin, which was supported in the design and implementation of these methods by the Zuse Institute Berlin.
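The abstract does not reproduce the workflow itself. As a minimal sketch of the core idea of parameter variation over linear-optimization scenario runs, the following Python snippet sweeps one uncertain parameter of a toy dispatch LP. All names, numbers, and the use of scipy are illustrative assumptions; the actual project generated GAMS model instances solved with PIPS-IPM++ on JUWELS.

```python
# Hypothetical sketch: scenario sweep over a toy linear dispatch model.
import numpy as np
from scipy.optimize import linprog

def solve_dispatch(gas_cost):
    # Decision variables: output of [wind, gas] in MWh.
    c = np.array([5.0, gas_cost])     # generation cost per MWh
    A_eq = np.array([[1.0, 1.0]])     # wind + gas ...
    b_eq = np.array([100.0])          # ... must cover 100 MWh of demand
    bounds = [(0, 60), (0, 100)]      # capacity limit per technology
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun, res.x

# Parameter variation: assess cost uncertainty across gas-price scenarios.
for gas_cost in np.linspace(20, 80, 4):
    total, x = solve_dispatch(gas_cost)
    print(f"gas @ {gas_cost:5.1f} EUR/MWh -> cost {total:8.1f}, dispatch {x}")
```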
Rolling stock is one of the major assets of a railway transportation company. Hence, its utilization should be as efficient and effective as possible. Railway undertakings face rolling stock scheduling challenges in different forms, from rather idealized weekly strategic problems to very concrete operational ones. Consequently, a vast number of optimization models with different features and objectives exists. Thorlacius et al. (2015) provide a comprehensive and valuable collection of the technical requirements, models, and methods considered in the scientific literature. We contribute an update including recent works. The main focus of the paper is to present a classification and elaboration of the major features that our solver R-OPT is able to handle. Moreover, the basic optimization model and algorithmic ingredients of R-OPT are discussed. Finally, we present computational results for a cargo application at SBB CARGO AG and for passenger operations at other railway undertakings in Europe to show the capabilities of R-OPT.
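The abstract does not spell out R-OPT's optimization model. As a loose illustration of the kind of core problem involved, rolling stock circulation is often cast as a min-cost flow in which every trip must be covered by a vehicle and connection arcs carry deadhead/turnaround cost; the toy sketch below (all names and costs hypothetical, not R-OPT's formulation) uses networkx.

```python
# Hypothetical illustration: cover two trips T1, T2 with vehicles from a
# depot at minimum cost (pull-out = 100 per vehicle, connection = 5).
import networkx as nx

G = nx.DiGraph()
# Trip coverage via node demands: each trip's departure node must receive
# exactly one unit of vehicle flow, which its arrival node emits again.
for trip in ("T1", "T2"):
    G.add_node(f"{trip}_dep", demand=1)    # one vehicle must arrive here
    G.add_node(f"{trip}_arr", demand=-1)   # ... and continues from here
G.add_node("depot", demand=0)

G.add_edge("depot", "T1_dep", weight=100)  # pull-out = using a vehicle
G.add_edge("depot", "T2_dep", weight=100)
G.add_edge("T1_arr", "T2_dep", weight=5)   # feasible connection T1 -> T2
G.add_edge("T1_arr", "depot", weight=0)    # pull-in
G.add_edge("T2_arr", "depot", weight=0)

flow = nx.min_cost_flow(G)
# One vehicle covers both trips: depot -> T1 -> T2 -> depot, cost 105.
print(flow)
```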
Synergistic approach of multi-energy models for a European optimal energy system management tool
(2021)
Motivation: The ever-rising volume of patients, the high maintenance cost of operating rooms, and the time-consuming analysis of surgical skills are fundamental problems that hamper the practical training of the next generation of surgeons. For obvious economic reasons, hospitals prefer to keep surgeons busy in real operations rather than having them train young surgeons. One fundamental need in surgical training is to reduce the time a senior surgeon requires to review the endoscopic procedures performed by a young surgeon, while minimizing subjective bias in the evaluation. The unprecedented performance of deep learning ushers in a new age of data-driven automatic analysis of surgical skills.
Method: Deep learning is capable of efficiently analyzing thousands of hours of laparoscopic video footage to provide an objective assessment of surgical skills. However, the traditional end-to-end setting of deep learning (video in, skill assessment out) is not explainable. Our strategy is to utilize the surgical process modeling framework to divide the surgical process into understandable components. This provides the opportunity to employ deep learning for superior yet automatic detection and evaluation of several aspects of laparoscopic cholecystectomy such as surgical tool and phase detection.
We employ ZIBNet for the detection of surgical tool presence. ZIBNet combines pre-processing that addresses the imbalance in tool usage, a transfer-learned 50-layer residual network (ResNet-50), and temporal smoothing. To encode the temporal evolution of tool usage over the entire video sequence, which relates to the surgical phases, Long Short-Term Memory (LSTM) units are employed to capture long-term dependencies.
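As a hedged sketch of such a pipeline (illustrative layer sizes and hyperparameters, not the published ZIBNet configuration): a transfer-learned ResNet-50 produces per-frame multi-label tool logits, and an LSTM reads the tool signature over the sequence to predict surgical phases.

```python
# Illustrative sketch only; temporal smoothing and imbalance-aware
# pre-processing from the described method are omitted here.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_TOOLS, NUM_PHASES = 7, 7  # Cholec80: 7 tool types, 7 surgical phases

# Transfer-learned ResNet-50: replace the ImageNet head with a 7-way
# multi-label tool-presence head (a frame can show several tools at once).
backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_TOOLS)
tool_loss = nn.BCEWithLogitsLoss()  # independent per-tool probabilities

class PhaseLSTM(nn.Module):
    """LSTM over per-frame tool signatures -> per-frame phase logits."""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(NUM_TOOLS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_PHASES)

    def forward(self, tool_logits):          # (batch, frames, NUM_TOOLS)
        h, _ = self.lstm(torch.sigmoid(tool_logits))
        return self.head(h)                  # (batch, frames, NUM_PHASES)

frames = torch.randn(2, 16, 3, 224, 224)    # toy batch: 2 clips x 16 frames
logits = backbone(frames.flatten(0, 1)).view(2, 16, NUM_TOOLS)
targets = torch.randint(0, 2, (2, 16, NUM_TOOLS)).float()  # dummy labels
loss = tool_loss(logits, targets)            # multi-label tool loss
phases = PhaseLSTM()(logits)                 # per-frame phase prediction
```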
Dataset: We used the Cholec80 dataset, which consists of 80 videos of laparoscopic cholecystectomy performed by 13 surgeons, split equally into training and testing sets. In these videos, up to three different tools (out of 7 tool types) can be present in a single frame.
Results: The mean average precision of the detection of all tools is 93.5, ranging from 86.8 to 99.3 per tool, a significant improvement (p < 0.01) over the previous state of the art. We observed that less frequent tools such as the scissors, irrigator, and specimen bag are more closely related to phase transitions. The overall precision (recall) of the detection of all surgical phases is 79.6 (81.3).
Conclusion: While this is not the end goal of surgical skill analysis, the development of such a technological platform is an essential step toward a data-driven, objective understanding of surgical skills. In the future, we plan to investigate surgeon-in-the-loop analysis and feedback for surgical skill assessment.