TY - GEN
A1 - Quer, Jannes
A1 - Donati, Luca
A1 - Keller, Bettina
A1 - Weber, Marcus
T1 - An automatic adaptive importance sampling algorithm for molecular dynamics in reaction coordinates
N2 - In this article we propose an adaptive importance sampling scheme for dynamical quantities of high-dimensional, metastable complex systems. The main idea of this article is to combine a method from molecular dynamics simulation, Metadynamics, with a theorem from stochastic analysis, Girsanov's theorem. The proposed algorithm has two advantages compared to a standard estimator of dynamic quantities: firstly, it is possible to produce estimators with a lower variance and, secondly, we can speed up the sampling. One of the main problems in building importance sampling schemes for metastable systems is to find the metastable region in order to manipulate the potential accordingly. Our method circumvents this problem by using an assimilated version of the Metadynamics algorithm and thus creates a non-equilibrium dynamics which is used to sample the equilibrium quantities.
T3 - ZIB-Report - 17-09
KW - Adaptive Importance Sampling
KW - Molecular Dynamics
KW - Metastability
KW - Variance Reduction
KW - Non Equilibrium Sampling
KW - Metadynamics
KW - Girsanov
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-62075
SN - 1438-0064
ER -
TY - GEN
A1 - Baum, Daniel
A1 - Lindow, Norbert
A1 - Hege, Hans-Christian
A1 - Lepper, Verena
A1 - Siopi, Tzulia
A1 - Kutz, Frank
A1 - Mahlow, Kristin
A1 - Mahnke, Heinz-Eberhard
T1 - Revealing hidden text in rolled and folded papyri
N2 - Ancient Egyptian papyri are often folded, rolled up or kept as small packages, sometimes even sealed. Physically unrolling or unfolding these packages might severely damage them. We demonstrate a way to get access to the hidden script without physical unfolding by employing computed tomography and mathematical algorithms for virtual unrolling and unfolding. Our algorithmic approaches are combined with manual interaction. This provides the necessary flexibility to enable the unfolding of even complicated and partly damaged papyrus packages. In addition, it allows us to cope with challenges posed by the structure of ancient papyrus, which is rather irregular compared to other writing substrates like metallic foils or parchment. Unfolding of packages is done in two stages. In the first stage, we virtually invert the physical folding process step by step until the partially unfolded package is topologically equivalent to a scroll or a papyrus sheet folded only along one fold line. To minimize distortions at this stage, we apply the method of moving least squares. In the second stage, the papyrus is simply flattened, which requires the definition of a medial surface. We have applied our software framework to several papyri. In this work, we present the results of applying our approaches to mockup papyri that were either rolled or folded along perpendicular fold lines. In the case of the folded papyrus, our approach represents the first attempt to address the unfolding of such complicated folds.
T3 - ZIB-Report - 17-02
KW - unfolding, papyri, computed tomography
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61826
SN - 1438-0064
ER -
TY - GEN
A1 - Shinano, Yuji
A1 - Achterberg, Tobias
A1 - Berthold, Timo
A1 - Heinz, Stefan
A1 - Koch, Thorsten
A1 - Winkler, Michael
T1 - Solving Open MIP Instances with ParaSCIP on Supercomputers using up to 80,000 Cores
N2 - This paper describes how we solved 12 previously unsolved mixed-integer programming (MIP) instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest-scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing, and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
T3 - ZIB-Report - 15-53
KW - Mixed Integer Programming
KW - Parallel processing
KW - Node merging
KW - Racing ParaSCIP
KW - Ubiquity Generator Framework
KW - MIPLIB
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-56404
SN - 1438-0064
ER -
TY - GEN
A1 - Fujii, Koichi
A1 - Ito, Naoki
A1 - Kim, Sunyoung
A1 - Kojima, Masakazu
A1 - Shinano, Yuji
A1 - Toh, Kim-Chuan
T1 - Solving Challenging Large Scale QAPs
N2 - We report our progress on the project for solving larger scale quadratic assignment problems (QAPs). Our main approach to solving large-scale NP-hard combinatorial optimization problems such as QAPs is a parallel branch-and-bound method, efficiently implemented on a powerful computer system using the Ubiquity Generator (UG) framework, which can utilize more than 100,000 cores. Lower bounding procedures incorporated in the branch-and-bound method play a crucial role in solving the problems. For a strong lower bounding procedure, we employ the Lagrangian doubly nonnegative (DNN) relaxation and the Newton-bracketing method developed by the authors’ group. In this report, we describe some basic tools used in the project, including the lower bounding procedure and branching rules, and present some preliminary numerical results. Our next target problem is QAPs with dimension at least 50, as we have succeeded in solving tai30a and sko42 from QAPLIB for the first time.
T3 - ZIB-Report - 21-02
KW - QAP
KW - Parallel Branch-and-Bound
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-81303
SN - 1438-0064
ER -
TY - GEN
A1 - Schäfer, Patrick
T1 - Bag-Of-SFA-Symbols in Vector Space (BOSS VS)
N2 - Time series classification mimics the human understanding of similarity. When it comes to larger datasets, state-of-the-art classifiers reach their limits in terms of unreasonable training or testing times. One representative example is the 1-nearest-neighbor DTW classifier (1-NN DTW), which is commonly used as the benchmark to compare against and has several shortcomings: it has quadratic time complexity and it degenerates in the presence of noise. To reduce the computational complexity, lower bounding techniques and, more recently, a nearest centroid classifier have been introduced. Still, execution times to classify moderately sized datasets on a single core are on the order of hours. We present our Bag-Of-SFA-Symbols in Vector Space (BOSS VS) classifier, which is robust and accurate due to its invariance to noise, phase shifts, offsets, amplitudes and occlusions.
We show that it is as accurate as state-of-the-art classifiers while being multiple orders of magnitude faster. Using the BOSS VS allows for mining massive time series datasets and for real-time analytics.
T3 - ZIB-Report - 15-30
KW - Time Series
KW - Classification
KW - Data Mining
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-54984
SN - 1438-0064
ER -
TY - GEN
A1 - Hasler, Tim
A1 - Peters-Kottig, Wolfgang
T1 - Vorschrift oder Thunfisch? – Zur Langzeitverfügbarkeit von Forschungsdaten
N2 - „Ich mache ihm ein Angebot, das er nicht ablehnen kann.” Diese Aussage aus einem gänzlich anderen Kontext lässt sich recht treffend übertragen als Wunsch von Dienstleistern und Zweck von Dienstleistungen für Datenproduzenten im Forschungsdatenmanagement. Zwar wirkt Druck zur Datenübergabe nicht förderlich, die Eröffnung einer Option aber sehr wohl. Im vorliegenden Artikel geht es um das Verständnis der Nachhaltigkeit von Forschung und ihren Daten anhand der Erkenntnisse und Erfahrungen aus der ersten Phase des DFG-Projekts EWIG. [Fn 01] Eine Auswahl von Fallstricken beim Forschungsdatenmanagement wird anhand der Erkenntnisse aus Expertengesprächen und eigenen Erfahrungen beim Aufbau von LZA-Workflows vorgestellt. Erste Konzepte in EWIG zur Datenübertragung aus unterschiedlich strukturierten Datenquellen in die „Langfristige Domäne” werden beschrieben.
N2 - "I'm gonna make him an offer he can't refuse." This quote from a completely different context aptly captures both the wish of service providers and the purpose of services for data producers in the field of research data management. Although pressure is not the leverage of choice if you want researchers to deposit their research data in some kind of repository, offering an option does the trick quite well. In this article we present some of the concepts for sustainability of research and its data from the first phase of the project EWIG, funded by the Deutsche Forschungsgemeinschaft. A selection of pitfalls in research data management is presented based on the findings from expert interviews and our own experiences in the construction of LTP workflows. First concepts in EWIG to transfer data from differently structured data sources into the "Permanent Domain" are described.
T3 - ZIB-Report - 13-70
KW - Langzeitverfügbarkeit
KW - Forschungsdatenmanagement
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-43010
UR - http://libreas.eu/ausgabe23/08hasler/
SN - 1438-0064
ER -
TY - GEN
A1 - Dercksen, Vincent J.
A1 - Hege, Hans-Christian
A1 - Oberlaender, Marcel
T1 - The Filament Editor: An Interactive Software Environment for Visualization, Proof-Editing and Analysis of 3D Neuron Morphology
N2 - Neuroanatomical analysis, such as classification of cell types, depends on reliable reconstruction of large numbers of complete 3D dendrite and axon morphologies. At present, the majority of neuron reconstructions are obtained from preparations in a single tissue slice in vitro, thus suffering from cut-off dendrites and, more dramatically, cut-off axons. In general, axons can innervate volumes of several cubic millimeters and may reach path lengths of tens of centimeters. Thus, their complete reconstruction requires in vivo labeling, histological sectioning and imaging of large fields of view.
Unfortunately, anisotropic background conditions across such large tissue volumes, as well as faintly labeled thin neurites, result in incomplete or erroneous automated tracings and even lead experts to make annotation errors during manual reconstructions. Consequently, tracing reliability constitutes the major bottleneck for reconstructing complete 3D neuron morphologies. Here, we present a novel set of tools, integrated into a software environment named ‘Filament Editor’, for creating reliable neuron tracings from sparsely labeled in vivo datasets. The Filament Editor allows for simultaneous visualization of complex neuronal tracings and image data in a 3D viewer, proof-editing of neuronal tracings, alignment and interconnection across sections, and morphometric analysis in relation to 3D anatomical reference structures. We illustrate the functionality of the Filament Editor on the example of in vivo labeled axons and demonstrate that for the exemplary dataset the final tracing results after proof-editing are independent of the expertise of the human operator.
T3 - ZIB-Report - 13-75
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-43157
SN - 1438-0064
ER -
TY - GEN
A1 - Shinano, Yuji
T1 - The Ubiquity Generator Framework: 7 Years of Progress in Parallelizing Branch-and-Bound
N2 - Mixed integer linear programming (MIP) is a general form to model combinatorial optimization problems and has many industrial applications. The performance of MIP solvers has improved tremendously in the last two decades and these solvers have been used to solve many real-world problems. However, against the backdrop of modern computer technology, parallelization is of pivotal importance. In this respect, ParaSCIP is the most successful parallel MIP solver in terms of solving previously unsolvable instances from the well-known benchmark instance set MIPLIB by using supercomputers. It solved two instances from MIPLIB2003 and 12 from MIPLIB2010 for the first time to optimality by using up to 80,000 cores on supercomputers. ParaSCIP has been developed by using the Ubiquity Generator (UG) framework, which is a general software package to parallelize any state-of-the-art branch-and-bound based solver. This paper discusses 7 years of progress in parallelizing branch-and-bound solvers with UG.
T3 - ZIB-Report - 17-60
KW - Parallelization, Branch-and-bound, Mixed Integer Programming, UG, ParaSCIP, FiberSCIP, ParaXpress, FiberXpress, SCIP-Jack
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-65545
SN - 1438-0064
ER -
TY - GEN
A1 - Fujii, Koichi
A1 - Ito, Naoki
A1 - Kim, Sunyoung
A1 - Kojima, Masakazu
A1 - Shinano, Yuji
A1 - Toh, Kim-Chuan
T1 - 大規模二次割当問題への挑戦
T2 - 統計数理研究所共同研究リポート 453 最適化:モデリングとアルゴリズム33 2022年3月 「大規模二次割当問題への挑戦」 p.84-p.92
N2 - The quadratic assignment problem is known to have a weak linear relaxation, and a variety of relaxation techniques have been devised to strengthen it. We introduce one of these, the doubly nonnegative (DNN) relaxation, together with the Newton-bracketing method, a solution approach for it that has been the subject of active research in recent years, and report on the implementation of a branch-and-bound method based on them as well as on results of numerical experiments.
T2 - Solving Large Scale QAPs with DNN-based Branch-and-bound : a progress report
T3 - ZIB-Report - 22-11
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-86779
SN - 1438-0064
ER -
TY - GEN
A1 - Weber, Britta
A1 - Tranfield, Erin M.
A1 - Höög, Johanna L.
A1 - Baum, Daniel
A1 - Antony, Claude
A1 - Hyman, Tony
A1 - Verbavatz, Jean-Marc
A1 - Prohaska, Steffen
T1 - Automated stitching of microtubule centerlines across serial electron tomograms
N2 - Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections; that is, lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest point algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable into the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can nonetheless be achieved with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts’ opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort.
T3 - ZIB-Report - 14-41
KW - electron tomography
KW - microtubules
KW - serial sectioning
KW - image analysis
KW - geometry reconstruction
KW - image and geometry alignment
KW - point correspondence
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-52958
SN - 1438-0064
ER -