Nano-optical scattering problems play an important role in our modern, technologically driven society. Computers, smartphones, and all kinds of electronic devices are manufactured by the semiconductor industry, whose production relies on photomasks as well as on optical process control. The digital world, e.g. the world wide web, is based on optical interconnects, and so-called quantum computers based on optics are expected to be the next generation of computers. Moreover, global economic progress demands new and sustainable energy resources, and one option is to harness the power stored in the optical radiation from the sun. Additionally, understanding fundamental physics such as the optical properties of asymmetric, or chiral, structures could promote future innovations in engineering. In order to understand and manipulate these kinds of processes, physics provides a well-established model: Maxwell's equations. Stated by James Clerk Maxwell in 1862, this description of the interaction of light and matter still provides a profound basis for the analysis of electromagnetic phenomena. However, real-world problems cannot be solved by simple analytical means. Rather, computer simulations are needed to obtain solutions of the physical model. Finding suitable methods to solve these problems opens up a wide variety of possibilities: some methods require long computing times, while others demand large amounts of memory. This is why the field of numerics deals with the question of which method is optimally suited for a specific problem.
The aim of this work is to investigate the applicability of the so-called Fourier Modal Method (FMM) to nano-optical scattering problems in general. Since simple analytical solutions do not exist for most physical problems of current interest, we use the Finite Element Method (FEM) to cross-check the performance of the FMM. Mathematics provides reliable procedures to control the numerical errors of the FEM. Yet up to now it has not been possible to rigorously classify the quality of the Fourier Modal Method's results. It is not fully understood whether investing more and more computing resources yields more accurate results. So we have to ask ourselves: does the numerical method invariably converge? In spite of this uncertainty, the FMM is a well-established method dating back to the 1980s. It has recently been used to optimize the performance of solar cells [19] as well as to improve the optical properties of so-called single-photon sources [41], which are essential for quantum cryptography. The latter is a promising candidate to increase digital security and revolutionise cryptographic techniques. Furthermore, with the help of the Fourier Modal Method an important issue in optics has been partly resolved: angular filtering of light was made possible by using a mirror which becomes transparent at a certain viewing angle [77]. In addition, an improved numerical technique to design so-called Photonic Crystal waveguides based on the FMM was developed recently [15]. Photonic Crystals are used in the field of optical bio-sensing and for the construction of novel semiconductor devices. Moreover, approaches to link the FMM and the FEM try to combine the advantages of both methods to obtain fast and accurate results [81]. These ideas are closely related to the well-known concept of Domain Decomposition within the FEM [88].
Here, one possibility to couple domains is to use the scattering matrix formalism, as is done in the FMM. In the scope of this convergence study, we state Maxwell's equations, particularly for periodic geometries. We describe two physical phenomena of nano-optics, namely chirality and opto-electrical coupling, and define the errors of our simulations. Afterwards, the two investigated methods are analysed with respect to their general properties, and a way to unify the physical modelling when using both algorithms is presented. With the help of various numerical experiments, we explore the convergence characteristics of the FMM and draw conclusions about the ability of this approach to provide accurate results and, consequently, its potential for research on technological innovations.
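For orientation, the equation that both methods discretize can be stated compactly. The following is a minimal sketch in LaTeX notation, assuming a non-magnetic material with relative permittivity \varepsilon(\mathbf{r}), vacuum wavenumber k_0 = 2\pi/\lambda, and, for periodic geometries, Bloch-periodic boundary conditions with lattice vector \mathbf{a} and incident wave vector \mathbf{k}:

\nabla \times \left( \nabla \times \mathbf{E}(\mathbf{r}) \right) - k_0^2\, \varepsilon(\mathbf{r})\, \mathbf{E}(\mathbf{r}) = 0,
\qquad
\mathbf{E}(\mathbf{r} + \mathbf{a}) = e^{\, i\, \mathbf{k} \cdot \mathbf{a}}\, \mathbf{E}(\mathbf{r}).

The FEM discretizes this curl-curl equation on a spatial mesh, while the FMM expands the field in a Fourier (plane-wave) basis; the convergence study compares these two discretizations of the same problem.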
Coding da Vinci, the first German culture hackathon, was organized jointly by the Open Knowledge Foundation Deutschland, the Servicestelle Digitalisierung Berlin, Wikimedia Deutschland, and the Deutsche Digitale Bibliothek. Between the end of April and the beginning of July, 150 participants (coders, web designers, culture enthusiasts, and others) worked on websites, mobile apps, games, hardware projects, and other applications of open data. The data were provided by 16 cultural institutions. Over the course of the hackathon, 17 working prototypes were developed from these data, presented publicly, and five of them were awarded prizes. But what exactly is a hackathon? How do cultural institutions and hackers come together? How does a cultural institution arrive at open data? What challenges and opportunities does a hackathon offer the cultural sector? What new quality emerges from participatory access to digital cultural heritage and the opportunity to work with data? What remains to be done to preserve the results in the long term? These questions are discussed on the basis of the results of CdV. Under the motto "lessons learned", we venture an outlook on Coding da Vinci 2015.
Rays and sharks are cartilaginous fishes. Most of their cartilaginous skeleton is covered with calcified tiles that improve its stability. These tiles are called tesserae and enclose areas of uncalcified cartilage. Because of the special properties of the tesserae, biologists are interested in understanding the shape and structure of tessellated cartilage.
This thesis presents a segmentation pipeline for the separation of tesserae on the cartilaginous skeleton of rays and sharks. The pipeline consists of an automatic initial segmentation step followed by manual error correction by the user. The initial segmentation is based on the contour tree, a data structure that tracks the evolution of level sets in a dataset as the iso-value changes. The presented segmentation concepts are not limited to tesserae but are also applicable to similar kinds of tiled structures. The input datasets are given as micro-CT scans.
The contribution of this thesis is the development of this segmentation pipeline. The pipeline uses a newly developed, fast version of the contour-tree-based segmentation algorithm that, after a preprocessing step, does not need to iterate over all voxels in the dataset. Visualizations and computations are carried out with the software system ZIBAmira. The algorithms used are either implemented as new ZIBAmira modules or extend already existing ones.
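To make the role of the contour tree more concrete, the following minimal Python sketch (all names are illustrative; this is not the thesis' ZIBAmira implementation) tracks how superlevel-set components of a gray-value volume appear and merge while the iso-value is swept downwards, using a union-find structure over the voxels sorted by intensity:

import numpy as np

def neighbors(i, shape):
    """Yield flat indices of the face-connected neighbors of flat index i."""
    idx = np.unravel_index(i, shape)
    for axis in range(len(shape)):
        for d in (-1, 1):
            n = list(idx)
            n[axis] += d
            if 0 <= n[axis] < shape[axis]:
                yield np.ravel_multi_index(n, shape)

def superlevel_components(volume, threshold):
    """Label connected superlevel-set components, merging in intensity order."""
    flat = volume.ravel()
    order = np.argsort(flat)[::-1]            # visit brightest voxels first
    parent = {}                               # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    labels = -np.ones(flat.size, dtype=int)   # -1 marks background
    for i in order:
        if flat[i] < threshold:
            break
        parent[i] = i                         # a new component is born
        for j in neighbors(i, volume.shape):
            if j in parent:                   # neighbor already activated
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri           # two components merge
    for i in parent:
        labels[i] = find(i)
    return labels.reshape(volume.shape)

vol = np.random.rand(20, 20, 20)
labels = superlevel_components(vol, threshold=0.8)
print(len(np.unique(labels)) - 1, "components above the iso-value")

Recording at which iso-values components are born and merge yields exactly the join-tree half of the contour tree; the thesis' fast variant additionally avoids iterating over all voxels after preprocessing.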
Modern solving software for mixed-integer programming (MIP) incorporates numerous algorithmic components whose behavior is controlled by user parameter choices, and whose usefulness varies dramatically with the progress of the solving process. In this thesis, our aim is to construct a phase-based solver that dynamically reacts to phase transitions with an appropriate change of its component behavior. To this end, we decompose the branch-and-bound solving process into three distinct phases: the objective of the first phase is to find a feasible solution; during the second phase, a sequence of incumbent solutions is constructed until the incumbent is eventually optimal; proving optimality is the central objective of the remaining third phase. Based on the MIP solver SCIP, we construct a phase-based solver to make use of the phase concept in two steps: First, we identify promising components for every solving phase individually and show that their combination is beneficial on a test bed of practical MIP instances. We then present and evaluate three heuristic criteria to make use of the phase-based solver in practice, where it is not possible to distinguish between the last two phases before the termination of the solving process.
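The phase concept can be illustrated with a small control sketch. All names and the concrete switching criterion below are illustrative assumptions, not SCIP's actual interface; in particular, the end of the improvement phase can only be guessed before termination, here via a stalling proxy:

from enum import Enum

class Phase(Enum):
    FEASIBILITY = 1   # no incumbent solution found yet
    IMPROVEMENT = 2   # incumbent exists, but likely not yet optimal
    PROOF = 3         # incumbent believed optimal; close the dual gap

def current_phase(incumbent, gap, stall_nodes, stall_limit=1000):
    """Guess the solving phase from observable solver statistics.

    Since the transition from improvement to proof cannot be detected
    exactly, a heuristic proxy is used: the incumbent has not improved
    for `stall_limit` branch-and-bound nodes.
    """
    if incumbent is None:
        return Phase.FEASIBILITY
    if gap > 0 and stall_nodes < stall_limit:
        return Phase.IMPROVEMENT
    return Phase.PROOF

phase = current_phase(incumbent=42.0, gap=0.03, stall_nodes=250)
print(phase)   # Phase.IMPROVEMENT

A phase-based solver would query such a criterion during the search and, on a detected transition, switch component settings, e.g., emphasize primal heuristics early and cutting planes or strong bounding late.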
Many optimization problems can be modeled as Mixed Integer Programs (MIPs). In general, MIPs cannot be solved efficiently, since solving MIPs is NP-hard; see, e.g., Schrijver, 2003. Common methods for solving NP-hard problems are branch-and-bound and column generation. In the case of column generation, the original problem is decomposed or reformulated into one or more smaller subproblems, which are easier to solve. Each of these subproblems is solved separately and repeatedly, which can be interpreted as solving a sequence of optimization problems.
In this thesis, we consider a sequence of MIPs which differ only in their respective objective functions, and we assume that each of these MIPs is solved with a branch-and-bound algorithm. This thesis aims to determine whether the solving process of a given sequence of MIPs can be accelerated by reoptimization. By reoptimization we mean starting the solving process of a MIP of this sequence at a given frontier of a search tree corresponding to another MIP of this sequence.
At the beginning we introduce an LP-based branch-and-bound algorithm, inspired by the reoptimizing algorithm of Hiller, Klug, and the author of this thesis, 2013. Since most state-of-the-art MIP solvers base decisions on dual information, which leads to the loss of feasible solutions after changing the objective function, we present a technique to guarantee optimality despite using this information. A decision is based on dual information if it is valid for at least one feasible solution, whereas a decision is based on primal information if it is valid for all feasible solutions. Afterwards, we consider representing the search frontier of the tree by a set of nodes of a given size; we call this the Tree Compression Problem. Moreover, we present a criterion characterizing the similarity of two objective functions. To evaluate our reoptimization approach, we extend the well-known and well-maintained MIP solver SCIP to an LP-based branch-and-bound framework, introduce two heuristics for solving the Tree Compression Problem, and add a primal heuristic that is especially fitted to column generation. Finally, we present computational experiments on several problem classes, e.g., Vertex Coloring and k-Constrained Shortest Path. Our experiments show that straightforward reoptimization, i.e., without additional heuristics, provides no benefit in general. However, in combination with the techniques and methods presented in this thesis, we can accelerate the solving of a given sequence by a factor of up to 14. For this purpose it is essential to take the differences between the objective functions into account and to restart the reoptimization, i.e., solve the subproblem from scratch, if the objective functions are not similar enough. Finally, we discuss the possibility of parallelizing the solving of the search frontier at the beginning of each solving process.
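The restart decision can be illustrated as follows. This is a minimal sketch in which the similarity measure (cosine of the objective coefficient vectors) and the threshold are assumptions chosen for illustration, not the exact criterion developed in the thesis:

import numpy as np

def objective_similarity(c_old, c_new):
    """Cosine similarity of two objective coefficient vectors."""
    c_old = np.asarray(c_old, dtype=float)
    c_new = np.asarray(c_new, dtype=float)
    return float(c_old @ c_new / (np.linalg.norm(c_old) * np.linalg.norm(c_new)))

# Decision rule: warm-start from the stored search frontier only if the
# objectives are similar enough, otherwise solve the MIP from scratch.
c_prev = [1.0, 2.0, 0.5]
c_next = [1.1, 1.9, 0.4]
restart = objective_similarity(c_prev, c_next) < 0.8
print(f"similarity = {objective_similarity(c_prev, c_next):.3f}, restart = {restart}")

The intuition is that a frontier built under one objective prunes the search space well only if the next objective ranks solutions similarly; when the objectives diverge, the stored frontier misleads the search and a fresh tree is cheaper.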
We investigate the Robust Multiperiod Network Design Problem, a generalization of the Capacitated Network Design Problem (CNDP) that, besides establishing flow routing and network capacity installation as in a canonical CNDP, also considers a planning horizon made up of multiple time periods and protection against fluctuations in traffic volumes. As a remedy against traffic volume uncertainty, we propose a Robust Optimization model based on Multiband Robustness (Büsing and D'Andreagiovanni, 2012), a refinement of classical Gamma-Robustness by Bertsimas and Sim (2004) that uses a system of multiple deviation bands. Since the resulting optimization problem may prove very challenging even for instances of moderate size when tackled by a state-of-the-art optimization solver, we propose a hybrid primal heuristic that combines a randomized fixing strategy inspired by ant colony optimization with an exact large neighbourhood search. Computational experiments on a set of realistic instances from the SNDlib (2010) show that our heuristic runs fast and produces solutions of very high quality with low optimality gaps.
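The interplay of the two heuristic components can be sketched as follows; all names, weights, and update rules are illustrative assumptions, not the paper's exact procedure. The idea: variables are fixed at random with probabilities biased by pheromone values, and the remaining restricted problem is then handed to an exact solver as the large-neighbourhood-search step:

import random

def randomized_fixing(var_names, pheromone, fix_fraction=0.7):
    """Pick variables to fix, biased towards high pheromone values.

    Sampling is with replacement, so duplicates collapse and the
    returned set may be slightly smaller than the target size.
    """
    k = int(fix_fraction * len(var_names))
    weights = [pheromone[v] for v in var_names]
    return set(random.choices(var_names, weights=weights, k=k))

def update_pheromone(pheromone, fixed, solution_quality, evaporation=0.1):
    """Reinforce fixings that appeared in good solutions; evaporate the rest."""
    for v in pheromone:
        pheromone[v] *= (1.0 - evaporation)
        if v in fixed:
            pheromone[v] += solution_quality
    return pheromone

pher = {"x1": 1.0, "x2": 1.0, "x3": 1.0}
fixed = randomized_fixing(list(pher), pher, fix_fraction=0.7)
pher = update_pheromone(pher, fixed, solution_quality=0.5)
print(fixed, pher)

In the full heuristic, the unfixed variables define the neighbourhood that the exact MIP solver explores; good solutions feed back into the pheromone values, steering subsequent fixings.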
Reversible Markov chains are the basis of many applications. However, computing transition probabilities from a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and lose important properties, such as a real-valued spectrum. In this paper, we show how to find the reversible Markov chain closest to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem.
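The structure of this minimization can be sketched directly. The following is a minimal sketch under the simplifying assumption that the stationary distribution pi is fixed in advance (the paper's exact formulation may differ); with pi fixed, the detailed-balance constraint is linear and the problem becomes a convex quadratic program:

import numpy as np
import cvxpy as cp

def nearest_reversible(P, pi):
    """Closest (Frobenius norm) reversible stochastic matrix to P,
    assuming the stationary distribution pi is given and fixed."""
    n = P.shape[0]
    X = cp.Variable((n, n), nonneg=True)
    D = np.diag(pi)
    constraints = [
        cp.sum(X, axis=1) == 1,   # rows are probability distributions
        D @ X == X.T @ D,         # detailed balance: pi_i X_ij = pi_j X_ji
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(X - P)), constraints)
    prob.solve()
    return X.value

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
pi = np.array([0.5, 0.3, 0.2])    # assumed target stationary distribution
print(nearest_reversible(P, pi))

Note that the identity matrix always satisfies both constraints, so the program is feasible, and detailed balance together with unit row sums implies that pi is indeed stationary for the resulting chain.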
Recent years have seen an increased interest in non-equilibrium molecular dynamics (NEMD) simulations, especially for molecular systems with periodic forcing by external fields, e.g., in the context of studying the effects of electromagnetic radiation on human body tissue. Lately, an NEMD method with local thermostatting has been proposed that allows for studying non-equilibrium processes in a statistically reliable and thermodynamically consistent way. In this article, we demonstrate how to construct Markov State Models (MSMs) for such NEMD simulations. MSM building is well established for systems in equilibrium, where MSMs with just a few (macro-)states allow for an accurate reproduction of the essential kinetics of the molecular system under consideration. Non-equilibrium MSMs have been lacking so far. This article presents how to construct them and illustrates their validity and usefulness for the case of the conformation dynamics of alanine dipeptide in an external electric field.
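The equilibrium MSM construction that this work builds on can be summarized in a few lines. This is a minimal sketch assuming an already discretized trajectory (a sequence of state indices) and an arbitrary example lag time; it illustrates the standard estimator, not the article's NEMD-specific construction:

import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-normalized transition matrix estimated from a discrete
    state trajectory at the given lag time."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1                       # count observed transitions
    return C / C.sum(axis=1, keepdims=True)

dtraj = np.array([0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1])
T = estimate_msm(dtraj, n_states=3, lag=1)
print(T)                                   # each row sums to 1

For periodically forced systems, the transition statistics additionally depend on the phase of the external field, which is what the non-equilibrium MSMs discussed in the article have to account for.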
Sensory-evoked signal flow, at cellular and network levels, is primarily determined by the synaptic wiring of the underlying neuronal circuitry. Measurements of synaptic innervation, connection probabilities, and the sub-cellular organization of synaptic inputs are thus among the most active fields of research in contemporary neuroscience. Methods to measure these quantities range from electrophysiological recordings and reconstructions of dendrite-axon overlap at light-microscopic levels to dense circuit reconstructions of small volumes at electron-microscopic resolution. However, quantitative and complete measurements at subcellular resolution and mesoscopic scales, capturing all local and long-range synaptic inputs and outputs of any neuron within an entire brain region, are beyond present methodological limits. Here, we present a novel concept, implemented within an interactive software environment called NeuroNet, which allows (i) integration of sparsely sampled (sub)cellular morphological data into an accurate anatomical reference frame of the brain region(s) of interest, (ii) up-scaling to generate an average dense model of the neuronal circuitry within the respective brain region(s), and (iii) statistical measurements of synaptic innervation between all neurons within the model. We illustrate our approach by generating a dense average model of the entire rat vibrissal cortex, providing the required anatomical data, and we illustrate how to measure synaptic innervation statistically. Comparing our results with data from paired recordings in vitro and in vivo, as well as with reconstructions of synaptic contact sites at light- and electron-microscopic levels, we find that our in silico measurements are in line with previous results.
The integrated line planning and passenger routing problem is an important planning problem in the service design of public transport. A major challenge is the treatment of transfers. A main property of a line system is its connectivity. In this paper we show that analysing the connectivity aspect of a line plan gives a new idea for handling the transfer aspect of the line planning problem.