Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
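The vertebra-level analysis described above relies on overlap metrics computed per label. Below is a minimal sketch, not the official VerSe evaluation code, of a per-vertebra Dice score over two integer-labelled masks; the function name and the assumption that label 0 is background are illustrative.

```python
import numpy as np

def per_vertebra_dice(gt: np.ndarray, pred: np.ndarray) -> dict:
    """Dice coefficient for each vertebra label present in the ground truth.

    Assumes integer label volumes of equal shape, with 0 = background."""
    scores = {}
    for label in np.unique(gt):
        if label == 0:
            continue
        gt_mask, pred_mask = gt == label, pred == label
        denom = gt_mask.sum() + pred_mask.sum()
        overlap = np.logical_and(gt_mask, pred_mask).sum()
        scores[int(label)] = 2.0 * overlap / denom if denom else 0.0
    return scores
```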
Purpose: Despite the success of total knee arthroplasty, a significant proportion of patients remain dissatisfied. One explanation may be a shape mismatch between the pre- and post-operative distal femur. The purpose of this study was to investigate a method to match a statistical shape model (SSM) to intra-operatively acquired point cloud data from a surgical navigation system, and to validate it against pre-operative magnetic resonance imaging (MRI) data from the same patients.
Methods: A total of 10 patients who underwent navigated total knee arthroplasty also had an MRI scan less than 2 months pre-operatively. The standard surgical protocol, which includes partial digitization of the distal femur, was followed. Two different methods were employed to fit the SSM to the digitized point cloud data, based on (1) the Iterative Closest Point (ICP) algorithm and (2) Gaussian Mixture Models (GMM). The available MRI data were manually segmented, and the reconstructed three-dimensional surfaces were used as ground truth against which the statistical shape model fit was compared.
Results: For both approaches, the difference between the statistical shape model-generated femur and the surface generated from MRI segmentation averaged less than 1.7 mm, with maximum errors occurring in less clinically important areas.
Conclusion: The results demonstrated good correspondence with the distal femoral morphology even for sparse data sets. Application of this technique will allow retrospective measurement of the mismatch between pre- and post-operative femurs on any case performed using the surgical navigation system, and it could be integrated into the surgical navigation unit to provide real-time feedback.
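The ICP variant described in the Methods alternates between closest-point matching and a rigid alignment update. The following is a minimal rigid-ICP sketch, assuming the digitized points and the model surface are given as NumPy point arrays; the paper's full method additionally optimizes the SSM shape coefficients, which is omitted here, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source: np.ndarray, target: np.ndarray, iters: int = 50) -> np.ndarray:
    """Rigidly align sparse digitized points (source) to a model surface (target)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)               # closest model point per digitized point
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)  # cross-covariance for the Kabsch solution
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_m        # apply the rigid update
    return src
```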
In this article we investigate methods to solve a fundamental task in gas transportation, namely the validation of nominations problem: given a gas transmission network consisting of passive pipelines and active, controllable elements, and given an amount of gas at every entry and exit point of the network, find operational settings for all active elements such that there exists a network state meeting all physical, technical, and legal constraints.
We describe a two-stage approach to solve the resulting complex and numerically difficult feasibility problem. The first phase consists of four distinct algorithms applying linear programming techniques and methods for complementarity constraints to compute possible settings for the discrete decisions. The second phase employs a precise continuous programming model of the gas network. Using this setup, we are able to compute high-quality solutions for real-world industrial instances that are significantly larger than the networks that have appeared in the mathematical programming literature before.
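The physics behind the feasibility check is dominated by the pressure loss along passive pipes, commonly approximated by the Weymouth equation, which relates squared pressures to the signed square of the flow. A minimal sketch for a single pipe with a lumped resistance coefficient c (an illustration of the constraint, not the paper's model):

```python
def pipe_outlet_pressure(p_in: float, q: float, c: float) -> float:
    """Weymouth approximation: p_in**2 - p_out**2 = c * q * abs(q).

    p_in: inlet pressure, q: signed flow, c: lumped pipe resistance (all assumed)."""
    p_out_sq = p_in ** 2 - c * q * abs(q)
    if p_out_sq <= 0.0:
        raise ValueError("flow not sustainable at this inlet pressure")
    return p_out_sq ** 0.5
```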
The modeling flexibility and the optimality guarantees provided by mixed-integer programming greatly aid the design of robust and future-proof decision support systems. The complexity of industrial-scale supply chain optimization, however, often poses limits to the application of general mixed-integer programming solvers. In this paper we describe algorithmic innovations that help to ensure that MIP solver performance matches the complexity of the large supply chain problems and the tight time limits encountered in practice. Our computational evaluation is based on a diverse set of instances modeling real-world scenarios supplied by our industry partner SAP.
Background: High-throughput proteomics techniques, such as mass spectrometry (MS)-based approaches, produce very high-dimensional data-sets. In a clinical setting one is often interested in how mass spectra differ between patients of different classes, for example spectra from healthy patients vs. spectra from patients having a particular disease. Machine learning algorithms are needed to (a) identify these discriminating features and (b) classify unknown spectra based on this feature set. Since the acquired data is usually noisy, the algorithms should be robust against noise and outliers, while the identified feature set should be as small as possible.
Results: We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that allows us to identify a minimal discriminating set of features from mass spectrometry data-sets. We show (1) how our method performs on artificial and real-world data-sets, (2) that its performance is competitive with standard (and widely used) algorithms for analyzing proteomics data, and (3) that it is robust against random and systematic noise. We further demonstrate the applicability of our algorithm to two previously published clinical data-sets.
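SPA itself is built on compressed sensing; as a rough stand-in for the sparsity idea, and not the SPA algorithm, an l1-penalized linear classifier also drives most feature weights to zero and leaves a small discriminating feature set. A sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a spectra matrix: 100 spectra, 5000 features,
# with two features (10 and 42) actually carrying the class signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))
y = (X[:, 10] + X[:, 42] > 0).astype(int)

# The l1 penalty enforces sparsity; C controls how aggressively.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("selected features:", np.flatnonzero(clf.coef_[0]))
```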
Mass spectrometry-based serum metabolic profiling is a promising tool to analyse complex cancer-associated metabolic alterations, which may broaden our pathophysiological understanding of the disease and may function as a source of new cancer-associated biomarkers. Highly standardized serum samples of patients suffering from colon cancer (n = 59) and controls (n = 58) were collected at the University Hospital Leipzig. We based our investigations on amino acid screening profiles using electrospray tandem-mass spectrometry. Metabolic profiles were evaluated using the Analyst 1.4.2 software. General, comparative, and equivalence statistics were performed with R 2.12.2. Eleven out of 26 serum amino acid concentrations were significantly different between colorectal cancer patients and healthy controls. We found a model including CEA, glycine, and tyrosine to be the best discriminator and superior to CEA alone, with an AUROC of 0.878 (95% CI 0.815–0.941). Our serum metabolic profiling in colon cancer revealed multiple significant disease-associated alterations in the amino acid profile with promising diagnostic power. Further large-scale studies are necessary to elucidate the potential of our model to also discriminate between cancer and potential differential diagnoses. In conclusion, serum glycine and tyrosine in combination with CEA are superior to CEA alone for the discrimination between colorectal cancer patients and controls.
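Multi-marker panels such as the CEA/glycine/tyrosine model are typically combined into a single score by a fitted classifier and then assessed via the area under the ROC curve. A minimal sketch on entirely synthetic data (the study's data and model are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical marker matrix: columns stand in for CEA, glycine, tyrosine.
rng = np.random.default_rng(1)
X = rng.normal(size=(117, 3))
y = (X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=117) > 0).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
print("AUROC:", roc_auc_score(y, scores))  # in-sample here; cross-validate in practice
```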
We propose a mathematical optimization model and its solution for joint chance-constrained DC Optimal Power Flow. In this application, it is particularly important that transmission limits are satisfied with high probability, even in the case of uncertain or fluctuating feed-in from renewable energy sources. In critical network situations where the network risks overload, renewable energy feed-in has to be curtailed by the transmission system operator (TSO). The TSO can reduce the feed-in in discrete steps at each network node. The proposed optimization model minimizes curtailment while ensuring that transmission limits are maintained with high probability. The latter is modeled via (joint) chance constraints that are computationally challenging. Thus, we propose a solution approach based on a robust safe approximation of these constraints, in which probabilistic constraints are replaced by robust constraints with suitably defined uncertainty sets constructed from historical data. The ability to control the power feed-in discretely then leads to a robust optimization problem with decision-dependent uncertainties, i.e., the uncertainty sets depend on decision variables. We propose an equivalent mixed-integer linear reformulation for box uncertainties with an exact linearization of the bilinear terms. Finally, we present numerical results for different test cases from the Nesta archive, as well as for a real network. We consider the discrete curtailment of solar feed-in, for which we use real-world weather and network data. The experimental tests demonstrate the effectiveness of this method, and run times are very fast. Moreover, on average the calculated robust solutions lead to only a small increase in curtailment compared to nominal solutions.
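For box uncertainty sets, the robust counterpart of a single linear transmission-limit constraint has a simple closed form: the worst case of a @ (x + u) <= b over all |u_i| <= d_i is attained at a corner of the box, giving a @ x + |a| @ d <= b. A minimal sketch of this check (illustrative only; the paper's formulation additionally makes the set widths depend on the discrete curtailment decisions):

```python
import numpy as np

def robust_limit_satisfied(a: np.ndarray, x: np.ndarray, d: np.ndarray, b: float) -> bool:
    """Check a @ (x + u) <= b for every u with |u_i| <= d_i (box uncertainty).

    The worst case adds |a_i| * d_i per coordinate, hence the closed form."""
    return float(a @ x + np.abs(a) @ d) <= b
```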
This paper describes three presolving techniques for solving mixed integer programming problems (MIPs) that were implemented in the academic MIP solver SCIP. The task of presolving is to reduce the problem size and strengthen the formulation, mainly by eliminating redundant information and exploiting problem structures. The first method fixes continuous singleton columns and extends results known from duality fixing. The second analyzes and exploits pairwise dominance relations between variables, whereas the third detects isolated subproblems and solves them independently. The performance of the presented techniques is demonstrated on two MIP test sets. One contains all benchmark instances from the last three MIPLIB versions, while the other consists of real-world supply chain management problems. The computational results show that the combination of all three presolving techniques almost halves the solving time for the considered supply chain management problems. For the MIPLIB instances we obtain a speedup of 20% on affected instances without degrading performance on the remaining problems.
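The basic form of duality fixing, which the first presolving technique extends, is easy to state for a problem in the form min c @ x s.t. A @ x >= b, lb <= x <= ub. A minimal sketch (illustrative only; SCIP's implementation covers many more cases, including the continuous singleton columns mentioned above):

```python
import numpy as np

def duality_fixing(c: np.ndarray, A: np.ndarray, lb: np.ndarray, ub: np.ndarray) -> dict:
    """Fix variables whose movement can never help: for min c @ x, A @ x >= b,
    if c[j] >= 0 and column j is non-positive, raising x[j] worsens both the
    objective and every constraint, so x[j] = lb[j] is optimal (symmetrically,
    x[j] = ub[j] when c[j] <= 0 and the column is non-negative)."""
    fixings = {}
    for j in range(A.shape[1]):
        col = A[:, j]
        if c[j] >= 0 and np.all(col <= 0) and np.isfinite(lb[j]):
            fixings[j] = lb[j]
        elif c[j] <= 0 and np.all(col >= 0) and np.isfinite(ub[j]):
            fixings[j] = ub[j]
    return fixings
```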
Background: Metabolomics, one of the most rapidly growing technologies in the '-omics' field, denotes the comprehensive analysis of low molecular-weight compounds and their pathways. Cancer-specific alterations of the metabolome can be detected by high-throughput mass-spectrometric metabolite profiling and serve as a considerable source of new markers for the early differentiation of malignant diseases as well as their distinction from benign states. However, a comprehensive framework for the statistical evaluation of marker panels in a multi-class setting has not yet been established.
Methods: We collected serum samples of 40 pancreatic carcinoma patients, 40 controls, and 23 pancreatitis patients according to standard protocols and generated amino acid profiles by routine mass spectrometry. In an intrinsic three-class bioinformatic approach, we compared these profiles, evaluated their selectivity, and computed multi-marker panels combined with the conventional tumor marker CA 19-9. Additionally, we tested for non-inferiority and superiority to determine the diagnostic surplus value of our multi-metabolite marker panels.
Results: Compared to CA 19-9 alone, the combined amino acid-based metabolite panel had a superior selectivity for the discrimination of healthy controls, pancreatitis, and pancreatic carcinoma patients [volume under the ROC surface (VUS) = 0.891 (95% CI 0.794–0.968)].
Conclusions: We combined highly standardized samples, a three-class study design, a high-throughput mass-spectrometric technique, and a comprehensive bioinformatic framework to identify metabolite panels selective for all three groups in a single approach. Our results suggest that metabolomic profiling necessitates appropriate evaluation strategies and, despite all its current limitations, can deliver marker panels with high selectivity even in multi-class settings.
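The volume under the ROC surface (VUS) reported above generalizes the AUROC to three ordered classes: it is the probability that scores drawn from the three classes appear in the correct order, and it can be estimated by counting ordered score triples. A minimal brute-force sketch on synthetic scores (not the study's evaluation code):

```python
import numpy as np
from itertools import product

def vus(low, mid, high) -> float:
    """Empirical VUS: fraction of score triples, one per class, in strictly
    increasing order (low class < intermediate class < high class)."""
    hits = sum(a < b < c for a, b, c in product(low, mid, high))
    return hits / (len(low) * len(mid) * len(high))

# Well-separated synthetic scores yield a VUS near 1; random scores give ~1/6.
rng = np.random.default_rng(2)
print(vus(rng.normal(0, 1, 40), rng.normal(2, 1, 23), rng.normal(4, 1, 40)))
```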
MIPLIB 2003 (2006)
The recently imposed new gas market liberalization rules in Germany have led to a change in the business of gas network operators. While network operator and gas vendor were previously united, they have been forced to split into independent companies. The network has to be open to any other gas trader at the same conditions, and free network capacities have to be identified and publicly offered in a non-discriminatory way. We discuss how these changing paradigms lead to new and challenging mathematical optimization problems. These include the validation of nominations, which asks whether the network's capacity is sufficient to transport a specific amount of flow; the verification of booked capacities and the detection of available freely allocable capacities; and the topological extension of the network with new pipelines or compressors in order to increase its capacity. To solve each of these problems and provide meaningful results for practice, a mixture of different mathematical aspects has to be addressed, such as combinatorics, stochasticity, uncertainty, and nonlinearity. Currently, no numerical solver is available that can deal with such blended problems out of the box. The main goal of our research is to develop such a solver that is, moreover, able to solve instances of realistic size. In this article, we describe the main ingredients of our prototypical software implementations.