TY - THES
A1 - Hoffmann, Marie
T1 - Approximate Algorithms for Distributed Systems
N2 - Peer-to-peer (P2P) systems form a special class of distributed systems. Typically, nodes in a P2P system are flat and share the same responsibilities. In this thesis we focus on three problems that occur in P2P systems: the storage of data replicas, quantile computation on distributed data streams, and churn rate estimation. Data replication is one of the oldest techniques for maintaining stored data in a P2P system and for replying to read requests. Applications that use data replication include distributed databases. They are part of an abstract overlay network and do not see the underlying network topology. The question is how to place a set of data replicas in a distributed system such that response times and failure probabilities become minimal without a priori knowledge of the topology of the underlying hardware nodes. We show how to utilize an agglomerative clustering procedure to reach this goal. State-of-the-art algorithms for the aggregation of distributed data or data streams require synchronization at some point, or merge data aggregates hierarchically, which is at odds with the basic principle of P2P systems. We test whether randomized communication and merging of data aggregates can produce the same results. These data aggregates are used to answer quantile queries. Constituting and maintaining a P2P overlay network requires frequent message passing. A central goal is to minimize the number of maintenance messages, since they consume bandwidth that is then unavailable to other applications. The lower bound on the frequency of maintenance messages depends heavily on the churn rate of the peers. We show how to estimate the mean lifetime of peers and to reduce the frequency of maintenance messages without destabilizing the infrastructure of the constituting overlay.
KW - peer-to-peer, machine learning, approximate, clustering, quantile, linear regression
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42370
ER -
TY - THES
A1 - Keidel, Stefan
T1 - Snapshots in Scalaris
N2 - One of the biggest obstacles to the practical use of Scalaris, a scalable implementation of a distributed hash table with support for transactions, is the lack of a procedure for capturing a consistent state of the entire system. In this thesis we present a simple protocol that accomplishes this task and that, owing to the approach we have chosen, is easy to implement. As a starting point, we select from a number of "classical" snapshot algorithms a procedure designed by Mattern in 1993, which is based on the algorithm of Lai and Yang. This decision is based on a thorough analysis of the protocols, taking the architecture of the existing software into account. In the next step, we use our full knowledge of the internals of the Scalaris transaction system to simplify the procedure with respect to usability and implementation complexity, without weakening the requirements on the captured state. Instead of a loose collection of the local states of the individual participating nodes, we are ultimately able to produce one large key-value table as the result, which is consistent, can easily be processed further, and corresponds to a state the system could once have been in.
After implementing the procedure in software, we evaluate the results with respect to their impact on the performance of the overall system and discuss possible further developments.
KW - scalaris
KW - dht
KW - algorithm
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42282
ER -
TY - THES
A1 - Krause, Jan
T1 - Investigation of Options to Handle 3D MRI Data via Convolutional Neural Networks: Application in Knee Osteoarthritis Classification
KW - Machine Learning
KW - Computational Diagnosis
KW - Knee Osteoarthritis
Y1 - 2021
ER -
TY - THES
A1 - Witzig, Jakob
T1 - Reoptimization Techniques in MIP Solvers
N2 - Many optimization problems can be modeled as Mixed Integer Programs (MIPs). In general, MIPs cannot be solved efficiently, since solving MIPs is NP-hard; see, e.g., Schrijver (2003). Common methods for solving NP-hard problems are branch-and-bound and column generation. In the case of column generation, the original problem is decomposed or reformulated into one or more smaller subproblems, which are easier to solve. Each of these subproblems is solved separately and recurrently, which can be interpreted as solving a sequence of optimization problems. In this thesis, we consider a sequence of MIPs that differ only in their respective objective functions. Furthermore, we assume that each of these MIPs is solved with a branch-and-bound algorithm. This thesis aims to determine whether the solving process of a given sequence of MIPs can be accelerated by reoptimization. By reoptimization we mean starting the solving process of a MIP in this sequence from a given frontier of a search tree corresponding to another MIP in the sequence. At the beginning we introduce an LP-based branch-and-bound algorithm. This algorithm is inspired by the reoptimization algorithm of Hiller, Klug, and the author of this thesis (2013). Since most state-of-the-art MIP solvers make decisions based on dual information, which leads to the loss of feasible solutions after changing the objective function, we present a technique to guarantee optimality despite using this information. A decision is based on dual information if it is valid for at least one feasible solution, whereas a decision is based on primal information if it is valid for all feasible solutions. Afterwards, we consider representing the search frontier of the tree by a set of nodes of a given size; we call this the Tree Compression Problem. Moreover, we present a criterion characterizing the similarity of two objective functions. To evaluate our reoptimization approach, we extend the well-known and well-maintained MIP solver SCIP into an LP-based branch-and-bound framework, introduce two heuristics for solving the Tree Compression Problem, and add a primal heuristic that is especially suited to column generation. Finally, we present computational experiments on several problem classes, e.g., Vertex Coloring and k-Constrained Shortest Path. Our experiments show that straightforward reoptimization, i.e., without additional heuristics, provides no benefit in general. However, in combination with the techniques and methods presented in this thesis, we can accelerate the solving of a given sequence by a factor of up to 14. For this purpose it is essential to take the differences between the objective functions into account and to restart the reoptimization, i.e., to solve the subproblem from scratch, if the objective functions are not similar enough.
We conclude by discussing the possibility of parallelizing the processing of the search frontier at the beginning of each solving process.
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-54067
ER -
TY - THES
A1 - Shestakov, Alexey
T1 - A Deep Learning Method for Automated Detection of Meniscal Tears in Meniscal Sub-Regions in 3D MRI Data
N2 - This work presents a fully automated pipeline centered around a deep neural network, together with a method to train that network efficiently, which enables accurate detection of lesions in meniscal anatomical sub-regions. The network architecture is based on a transformer encoder/decoder. It is trained on DESS and tuned on IW TSE 3D MRI scans sourced from the Osteoarthritis Initiative. Furthermore, it is trained in a multi-label and multi-task fashion, using an auxiliary detection head. The former enables implicit localisation of meniscal defects, which, to the best of my knowledge, has not yet been reported elsewhere. The latter enables efficient learning on the entire 3D MRI volume. Thus, the proposed method does not require any expert knowledge at inference time. Aggregated inference results from two datasets yielded overall AUC-ROC values of 0.90, 0.91, and 0.93 for meniscal lesion detection anywhere in the knee, in the medial meniscus, and in the lateral meniscus, respectively. These results compare very well with related work, even though only a fraction of the data was utilized. Clinical applicability and benefit are yet to be determined.
KW - Machine Learning
KW - Computational Diagnosis
KW - Knee Osteoarthritis
Y1 - 2021
ER -
TY - THES
A1 - Paskin, Martha
T1 - Estimating 3D Shape of the Head Skeleton of Basking Sharks Using Annotated Landmarks on a 2D Image
N2 - Basking sharks are thought to be one of the most efficient filter-feeding fish in terms of the throughput of water filtered through their gills. Details about the underlying morphology of their branchial region have not been studied due to various challenges in acquiring real-world data. The present thesis aims to facilitate this by developing a mathematical shape model that constructs the 3D structure of the head skeleton of a basking shark from annotated landmarks on a single 2D image. This is an ill-posed problem, as estimating the depth of a 3D object from a single 2D view is, in general, not possible. To reduce this ambiguity, we create a set of pre-defined training shapes in 3D from CT scans of basking sharks. First, the damaged structures of the sharks in the scans are corrected by solving a set of optimization problems before they are used as accurate 3D representations of the object. Then, two approaches are employed for the 2D-to-3D shape-fitting problem: an Active Shape Model approach and a Kendall's shape space approach. The former represents a shape as a point in a high-dimensional Euclidean space, whereas the latter represents a shape as an equivalence class of points in this Euclidean space. The Kendall's shape space approach is a novel technique that has not yet been applied in this context, and a comprehensive comparison of the two approaches suggests it to be superior for the problem at hand. This can be credited to an improved interpolation of the training shapes.
N2 - Basking sharks are among the most efficient filter feeders in terms of the volume of water filtered through their gills.
The branchial region of these animals has a distinctive morphology, which has so far not been studied comprehensively because real-world data on these animals are difficult to acquire. The present thesis aims to enable this by developing a mathematical shape model that allows the 3D structure of the head skeleton to be reconstructed from landmarks placed on a single 2D image. The depth estimation of the landmarks from a 2D projection that this requires is an underdetermined problem. We resolve this by incorporating training shapes, which we obtain from CT scans of basking sharks. However, the condition of the scanned specimens requires a preliminary correction step, which we carry out by means of an optimization approach, before the extracted structures can serve as 3D training shapes. To reconstruct the 3D structure of the head skeleton from 2D landmarks, we compare two approaches: the so-called Active Shape Model (ASM) approach and an approach based on Kendall's shape space. Whereas a shape in the ASM approach is represented by a point in a high-dimensional Euclidean space, a shape in Kendall's shape space corresponds to an equivalence class of points in this Euclidean space. The application of Kendall's shape space to the problem described here is new, and a comprehensive comparison of the two methods shows that this approach leads to better results for this particular application. We attribute this to the superior interpolation of the training shapes in this space.
Y1 - 2022
UR - https://arxiv.org/abs/2207.12687
ER -