In times of the liberalized electricity market, power plant constructors and operators must carefully weigh the choice of primary energy and the design of each individual power plant with regard to emissions and, not least, the construction and subsequent operating costs. In this market environment, simulations under various scenario assumptions are indispensable. The tension between construction costs, the availability that partly depends on them, maintenance philosophy, and operating costs motivates the maintenance optimization and availability-prediction simulations that are the subject of this thesis. The focus of this work is the planning of the time-based maintenance strategy, in which a component is inspected after a prescribed interval derived from operating and manufacturer experience. This problem is particularly challenging because synergy effects lead to strong interactions between the components. Each component has its own theoretically optimal inspection period, but within the overall plant it may be cheaper, because of required isolations or downtimes, to maintain several components together and to advance or postpone an individual assembly. The resulting interactions form a nonlinear, mixed-integer calculation rule for the cost estimate. A new approach for optimizing this maintenance schedule was developed in this thesis. After it was established that classical optimization methods cannot solve this maintenance optimization problem satisfactorily, a solution based on genetic algorithms was developed. In parallel with the maintenance optimization, and building on previous work at the chair, the methodology for predicting the availability of complex power plant systems was further developed and extended by the new component "storage" (possibly with losses). With the storage model, the behavior of a storage unit is integrated into the effort-reduced Monte Carlo method.
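A minimal sketch of the genetic-algorithm idea described above, under a deliberately simplified cost model in which components inspected in the same period share one outage cost (all numbers, names, and operators below are illustrative assumptions, not the thesis implementation):

```python
import random

# Illustrative per-component data: standalone optimal inspection periods
# (months) and a penalty per month of deviation from that optimum.
OPTIMAL = [12, 18, 24, 36]
PENALTY = [40, 25, 30, 15]
SHUTDOWN_COST = 100.0   # cost of one common outage (synergy effect)

def cost(periods):
    """Nonlinear, mixed-integer cost: deviation penalties plus one outage
    cost per *distinct* inspection period (shared outages are cheaper)."""
    deviation = sum(p * abs(x - o) for p, x, o in zip(PENALTY, periods, OPTIMAL))
    return deviation + SHUTDOWN_COST * len(set(periods))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind):
    child = list(ind)
    i = random.randrange(len(child))
    child[i] = max(1, child[i] + random.choice([-6, -3, 3, 6]))
    return child

def evolve(pop_size=40, generations=200):
    pop = [[random.randrange(6, 48) for _ in OPTIMAL] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # rank individuals by cost
        elite = pop[: pop_size // 4]          # keep the best quarter
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=cost)

best = evolve()
print("inspection periods:", best, "cost:", cost(best))
```

The trade-off the sketch exhibits is exactly the one described above: aligning inspection periods saves outage costs but moves components away from their individual optima.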
Signalized flows - optimizing traffic signals and guideposts and related network flow problems
(2011)
Guideposts and traffic signals are important devices for controlling inner-city traffic, and their optimized operation is essential for efficient traffic flow without congestion. In this thesis, we develop a mathematical model for guideposts and traffic signals in the context of network flow theory. Guideposts lead to confluent flows, in which each node of the network may have at most one outgoing flow-carrying arc. The complexity of finding maximum confluent flows is studied, and several polynomial-time algorithms for special graph classes are developed. For traffic signal optimization, a cyclically time-expanded model is suggested which allows the simultaneous optimization of offsets and traffic assignment. Thus, the influence of offsets on travel times can be accounted for directly. The potential of the presented approach is demonstrated by simulation of real-world instances.
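To make the confluence condition concrete, here is a minimal sketch that checks it for a flow given as arc values on a directed graph (the encoding and names are illustrative, not from the thesis):

```python
from collections import defaultdict

def is_confluent(flow):
    """Confluence condition: every node has at most one outgoing
    flow-carrying arc. `flow` maps arcs (u, v) to flow values."""
    carrying = defaultdict(int)
    for (u, v), value in flow.items():
        if value > 0:            # only arcs actually carrying flow count
            carrying[u] += 1
    return all(count <= 1 for count in carrying.values())

# Node 'a' splits its flow onto two arcs -> not confluent.
print(is_confluent({('a', 'b'): 2, ('a', 'c'): 1}))  # False
print(is_confluent({('a', 'b'): 3, ('b', 'c'): 3}))  # True
```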
The subject of this thesis is the development and testing of silicon strip detectors for the high-luminosity upgrade of the tracking detector of the ATLAS experiment at the Large Hadron Collider. Special emphasis is devoted to understanding the impact of mechanical stress on the electrical properties and the particle detection performance of detector modules.
First, simulations were performed to estimate the maximum stress expected on a sensor operated at -30 °C within the future silicon strip tracking detector ITk at ATLAS. The maximum stress in a worst-case scenario is expected to be 27 MPa. Tensile strength tests were performed to estimate the maximum stress that can be applied to a silicon strip sensor. Silicon shards with a thickness and dopant concentration corresponding to the ITk sensor specifications break at >23 MPa, wafers at >700 MPa, and sensors at ~400 MPa. The large variation leads to the assumption that the tensile strength, which depends strongly on the quality of the crystal lattice, is determined by the different cutting technologies. Wafers irradiated with a fluence equivalent to the lifetime dose of an ITk sensor show no stress dependence of the Young's modulus. The tensile strength of irradiated wafers is decreased by ~6.6 %. No damage to silicon sensors from mechanical stress is expected for sensor modules installed in the ITk.
The electrical properties of silicon strip sensors were studied under applied mechanical stress of up to 60 MPa on ATLAS07 sensors. The specifications of these sensors are similar to those of the strip sensors in the barrel region of the future silicon strip tracker of the ATLAS detector. At 50 MPa, the leakage current changes by -1.7 %, the bias resistance by +0.8 %, and the interstrip resistance by -25 %. The depletion voltage and the implant resistance are not affected by mechanical stress. Except for the interstrip resistance, the results can be explained by piezoresistive effects.
Silicon strip modules were built and studied in particle test beams. Each module consists of an ATLAS07 or an ATLAS12 sensor and an analogue readout, allowing the influence of stress on the module performance to be studied. The sensor module noise is independent of the applied stress. An effect of stress on the signal strength was observed: the signal strength of the ATLAS07 sensor module decreased and that of the ATLAS12 sensor module increased, with a slope of ~0.6 MPa^-1. With applied stress, the average cluster size of the ATLAS07 sensor module increased by 0.25 % MPa^-1 and that of the ATLAS12 sensor module decreased by 0.06 % MPa^-1.
The thesis discusses the problems of database development and maintenance and presents an approach to conceptual tuning realized by conceptual design using the HERM/RADD notation. The RADD design tool was designed in order to develop HERM specifications graphically. RADD adds semantics and operations to the design which are not directly annotated in the graphical specification, such as "afunctional" dependencies and SQL operations and procedures. The RADD/raddstar system extends the graphical specification of the database schema with the possibility to specify operations and with invocations for transforming the schema, evaluating transactions, and optimizing the schema, each according to the implicit requirements modeled graphically and the explicit requirements specified by means of the conceptual specification language (CSL). CSL serves as the command-line interface of RADD/raddstar. The graphical RADD schema as well as the CSL specifications are compiled by the system into terms of the RADD* data model, and these terms are used for the further evaluation steps. The actions performed by RADD/raddstar (schema transformation, transaction and cost evaluation, schema optimization) are based on rules that can be developed and modified by the user using CSL.
This thesis investigates the efficient analysis, especially the model checking, of bounded stochastic Petri nets (SPNs), which can be augmented with reward structures. An SPN induces a continuous-time Markov chain (CTMC). A reward structure associates a reward with each state of the CTMC and defines a Markov reward model (MRM). The Continuous Stochastic Reward Logic (CSRL) allows the definition of sophisticated properties of CTMCs and MRMs which can be verified automatically by a model checker.
CSRL model checking can be realized on top of established numerical analysis techniques for CTMCs which are based on the multiplication of a matrix and a vector. However, as these techniques operate on a matrix and a vector whose size is at least the number of reachable states, the well-known state space explosion problem remains a challenge.
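A minimal sketch of the kind of matrix-vector-based CTMC analysis referred to above: transient analysis by uniformization, a standard textbook method, shown here with a small explicit generator matrix (illustrative only, not the thesis implementation):

```python
import numpy as np

def transient(Q, pi0, t, eps=1e-10):
    """Transient distribution pi(t) of a CTMC via uniformization:
    pi(t) = sum_k Poisson(Lambda*t; k) * pi0 * P^k, with P = I + Q/Lambda."""
    Lam = max(-Q[i, i] for i in range(Q.shape[0]))   # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam
    w = np.exp(-Lam * t)                             # Poisson weight, k = 0
    acc, total, v, k = w * pi0, w, pi0.copy(), 0
    while total < 1.0 - eps:                         # truncate the series
        k += 1
        v = v @ P                                    # the matrix-vector products
        w *= Lam * t / k
        acc += w * v
        total += w
    return acc

# Two-state CTMC: 0 -> 1 with rate 2, 1 -> 0 with rate 1.
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
print(transient(Q, np.array([1.0, 0.0]), t=5.0))     # approaches (1/3, 2/3)
```

The dense matrix `P` here is exactly what becomes infeasible under state space explosion, which motivates the symbolic representations discussed next.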
Several approaches, such as the use of Multi-terminal Decision Diagrams or Kronecker products to represent the matrix, have been investigated so far. They often enable efficient CTMC analysis and are available in a number of tools.
As an alternative to these established techniques, I enhance the idea of computing the matrix entries on the fly, using a symbolic state space representation. The set of state transitions defining the matrix is enumerated by firing the transitions of the given SPN in all reachable states. The reachable states are encoded by means of Interval Decision Diagrams (IDDs).
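A minimal sketch of the on-the-fly idea: instead of storing the rate matrix, its entries are regenerated for each multiplication by firing the SPN transitions in every reachable state. The IDD encoding itself is omitted and all names are illustrative:

```python
# Each SPN transition is a triple (guard, update, rate); states are markings.
transitions = [
    (lambda m: m[0] > 0, lambda m: (m[0] - 1, m[1] + 1), 2.0),  # t1
    (lambda m: m[1] > 0, lambda m: (m[0] + 1, m[1] - 1), 1.0),  # t2
]

def matvec_on_the_fly(x, states, index):
    """Compute y = x * R without materializing R: each matrix entry
    R[s, s'] = rate is produced by firing an enabled transition in s."""
    y = [0.0] * len(states)
    for i, m in enumerate(states):
        for guard, update, rate in transitions:
            if guard(m):                  # transition enabled in marking m
                j = index[update(m)]      # index of the successor marking
                y[j] += rate * x[i]
    return y

states = [(1, 0), (0, 1)]
index = {m: i for i, m in enumerate(states)}
print(matvec_on_the_fly([1.0, 0.0], states, index))   # [0.0, 2.0]
```

Only the state encoding and the net itself are kept in memory; the matrix never exists explicitly, which is the point of the approach.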
Further, I discuss crucial aspects of the implementation of the first multi-threaded symbolic CSRL model checker, which is based on the developed technique and is available in the tool MARCIE. An experimental comparison with the probabilistic model checker PRISM over a large number of experiments empirically demonstrates the efficiency of the approach and its implementation, especially when investigating biological models.
The International Linear Collider offers many interesting challenges, concerning both the physics of elementary particles and the development of accelerator and detector technologies. In this thesis, we investigate two rather separate topics: the precision measurement of the Higgs boson mass and of its coupling to the neutral gauge boson Z, and the research and development of sensors for BeamCal, a sub-detector system of the ILC detector. After the Higgs boson has been found, it is important to determine its properties with high precision. We employ the Higgs-strahlung process for this purpose: a virtual Z boson is created in the electron-positron collision and emits a Higgs boson while becoming on-shell. Using the so-called recoil technique, we determine the Higgs boson mass by reconstructing the Z boson momentum and using the center-of-mass energy of the colliding leptons. This technique makes it possible to measure the Higgs boson mass without considering the Higgs boson decay, i.e. it can be applied even to an invisibly decaying Higgs boson. Monte Carlo studies including a full detector simulation and a full event reconstruction were performed to simulate the impact of a realistic detector model on the precision of the Higgs boson mass and production cross-section measurements. In addition, an analytical estimate of the influence of a given detector performance on the Higgs boson mass measurement uncertainty is given. We included a complete sample of background events predicted by the Standard Model which may have a detector response similar to the signal events. A probabilistic method is used for the signal-background separation. Several other probabilistic methods were used to investigate and improve the measurement of the Higgs-strahlung cross-section and of the Higgs boson mass from the recoil mass spectrum obtained after the signal-background separation. For a Higgs boson mass of 120 GeV, a center-of-mass energy of 250 GeV, and an integrated luminosity of 50/fb, a relative uncertainty of 10% is obtained for the cross-section measurement and a precision of 118 MeV for the Higgs boson mass. The original motivation to use the recoil technique for a Higgs boson mass measurement independent of its decay modes could not be completely confirmed: for Higgs boson masses of 180 GeV and 350 GeV, statistics corresponding to 50/fb are not sufficient to achieve the necessary significance of the recoil mass peak above the background.

The BeamCal is a calorimeter in the very forward region, about 3 m away from the nominal interaction point and surrounding the beam pipe. Due to its location, many beamstrahlung pair particles will hit this calorimeter, which is a challenge for the operational reliability of the sensors under such harsh radiation conditions. We investigated single-crystal and polycrystalline CVD diamond, gallium arsenide, and radiation-hard silicon as sensor candidates for their radiation hardness and found that diamond and gallium arsenide are promising. We used a 10 MeV electron beam of a few nA to irradiate the samples under investigation up to doses of 5 MGy for diamond, about 1.5 MGy for gallium arsenide, and about 90 kGy for silicon. We measured the CCD at regular intervals to characterize the impact of the absorbed dose on the size of the signal generated by electrons from a Sr-90 source crossing the sensor.
Additional measurements, such as the dark current and the CCD as functions of the voltage, completed the characterization of the sensor candidates. For the single-crystal CVD diamond, the thermally stimulated current was also measured to determine, among other quantities, the defect density created by irradiation. In the diamond samples, evidence for strong polarization effects inside the material was found and investigated in more detail. A phenomenological model based on semiconductor physics was developed to describe the sensor properties as a function of the applied electric field, the dose, and the dose rate. Its predictions were compared with the results of the measurements. Several parameters, such as time scales and cross-sections, were determined using this model, which led to ongoing investigations.
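A minimal illustration of the recoil technique described in the abstract above, using the standard recoil-mass relation (a textbook formula; the numbers below are illustrative, not results from the thesis). The Higgs mass follows from the reconstructed Z alone, with no Higgs decay information:

```python
import math

def recoil_mass(sqrt_s, e_z, p_z):
    """Recoil mass from the Z boson's reconstructed energy and momentum:
    M_rec^2 = s + M_Z^2 - 2*sqrt(s)*E_Z, with M_Z^2 = E_Z^2 - |p_Z|^2.
    Valid in the e+e- center-of-mass frame."""
    m_z_sq = e_z**2 - p_z**2
    m_rec_sq = sqrt_s**2 + m_z_sq - 2.0 * sqrt_s * e_z
    return math.sqrt(max(m_rec_sq, 0.0))

# Illustrative numbers (GeV): sqrt(s) = 250 and a Z candidate consistent
# with m_Z ~ 91.2 GeV; the recoil mass then comes out near 120 GeV.
print(recoil_mass(250.0, 112.83, 66.45))   # ~120.0
```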
For many countries, gas turbine technology is one of the key technologies for reducing climate-damaging pollutant emissions. The profitability of such facilities, however, is highly dependent on the price of the fossil fuel used, which is why there is a constant need for increased efficiency. The potential for increasing the efficiency of the individual components is essentially limited by factors that reduce operating life. The goal of this thesis is to develop methods for improved automated structural design optimization, demonstrated on compressor airfoils. Special attention is paid to avoiding the excitation of failure-critical eigenmodes by detecting them automatically. This is achieved by introducing a method based on self-organizing neural networks which projects the eigenmodes of arbitrary airfoil geometries onto standard surfaces, thereby making them comparable. Another neural network is applied to identify eigenmodes that have been defined as critical for operating life. The failure rate of such classifiers is significantly reduced by introducing a newly developed initialization method based on principal components. A structural optimization is set up which shifts the eigenfrequency bands of critical modes in such a way that the risk of resonance with engine orders is minimized. In order to ensure the practical relevance of the optimization results, the structural optimization is coupled with an aerodynamic optimization in a combined process. Conformity between the loaded hot geometry used by the aerodynamic design assessment and the unloaded cold geometry used by the structural design assessment is ensured by a loaded-to-unloaded geometry transformation. For this purpose, an innovative method is introduced which, unlike the established time-consuming iterative approach, uses negative density for a direct transformation that takes only a few seconds, making it applicable within an optimization. Additionally, in order for the optimal designs to be robust against manufacturing variations, a method is developed which allows assessing the maximum production tolerance of a design beyond which design variations are likely to violate design constraints. In contrast to the commonly used failure rate, the production tolerance is a valid requirement for suppliers with respect to expensive parts produced in low quantities, and it is therefore a more suitable optimization objective.
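A minimal sketch of the kind of principal-component-based initialization mentioned above, assuming a simple classifier layer whose weights are seeded with the leading principal components of the training data instead of random values (illustrative only; the thesis method is not reproduced here):

```python
import numpy as np

def pca_init(X, n_units):
    """Seed first-layer weights with the top principal components
    of the (centered) training data."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_units].T                 # shape: (n_features, n_units)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # illustrative training set
W = pca_init(X, n_units=4)
hidden = np.tanh(X @ W)                   # first-layer activations
print(W.shape, hidden.shape)              # (16, 4) (200, 4)
```

The intent of such an initialization is to start training from directions that already capture the dominant variance of the data, rather than from arbitrary random weights.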
Starting from the existing idea of lignocellulose pulping in an alkaline polyol (Alkaline Polyol Pulping = AlkaPolP), this thesis developed a pulping process for lignocelluloses in alkaline glycerol, together with the downstream processing of the product streams, and showed that these can form the starting point of a promising biorefinery concept. It was demonstrated that the AlkaPolP process is suitable for all types of lignocellulose and delignifies them almost completely within a few minutes.
Within the downstream processing of the pulping products, the enzymatic hydrolysis of the pulp, the lignin precipitation, and the purification of the lignin filtrate were examined in more detail.
A comparison with literature data of other lignocellulose fractionation processes showed that the AlkaPolP process achieves at least similar product yields and qualities. Especially for the pulping of recalcitrant softwood, the AlkaPolP process is clearly superior to most other processes.
After the design and construction of a reactive extruder, operating conditions were found under which softwood chips could be effectively pulped in alkaline glycerol in a stable continuous process over several hours.
Based on literature data and the experiments performed, an overall process was designed which, in addition to the process stages already investigated, comprises a work-up of the lignin filtrate by electrodialysis and several distillation steps, and which enables the regeneration of the feed materials as well as the separation of the carboxylic acids formed during pulping.
The results of this work lay the foundation for the further development of the AlkaPolP process. It was shown that this pulping process, which can be run continuously and is suitable for all types of lignocellulose, is a promising starting point for a sustainable biorefinery concept.
This thesis analyzes the suitability of artificial neural networks for modeling the complex process behavior of a flat glass melting plant. The identification and training of the neural process models are carried out with measurement data from a flat glass melting tank. The focus is on the evaluation of a suitable network structure and the tuning of the network parameters; the influence of the individual network parameters on the accuracy of the networks is examined in detail. Using test data, it is demonstrated that the quality-determining temperatures and the glass level can be computed with sufficient accuracy by artificial neural networks.
Based on the developed neural process models, a model-based predictive control strategy is then described. In addition to the choice of the cost criterion and of the optimization algorithm for computing future manipulated variables, guidelines for the dimensioning of the available controller parameters are derived.
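A minimal sketch of such a model-predictive loop, assuming a trained neural process model `f(x, u)` that predicts the next process state (the model stand-in, the quadratic cost, and the random-search optimizer are all illustrative assumptions, not the strategy developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, u):
    """Stand-in for the trained neural process model: predicts the next
    state (e.g. a quality-determining temperature) from state x, control u."""
    return 0.9 * x + 0.1 * np.tanh(u)

def mpc_step(x0, setpoint, horizon=5, candidates=256):
    """Pick the first control move of the candidate sequence that minimizes
    a quadratic tracking cost over the prediction horizon."""
    best_u0, best_cost = 0.0, np.inf
    for _ in range(candidates):            # simple random-search optimizer
        u_seq = rng.uniform(-2.0, 2.0, size=horizon)
        x, cost = x0, 0.0
        for u in u_seq:
            x = f(x, u)                    # roll the model forward
            cost += (x - setpoint) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u0, best_cost = u_seq[0], cost
    return best_u0                         # receding horizon: apply only u[0]

x = 0.0
for _ in range(3):
    u = mpc_step(x, setpoint=0.5)
    x = f(x, u)
    print(round(float(x), 3))              # state moves toward the setpoint
```

The receding-horizon structure (optimize a sequence, apply only its first element, re-optimize) is the generic pattern of model-based predictive control; the cost criterion and optimizer are exactly the design choices the thesis discusses.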
The investigation of novel structure-to-property relations of many transition metal trihalides MX₃ by downscaling them to the promising monolayer limit is still pending. However, the production of two-dimensional MX₃ sheets that are both highly crystalline and thin is an experimental challenge. This thesis focuses on rational synthesis planning and the derived targeted preparation of thin MX₃ nanosheets (≤ 100 nm) on suitable substrates by chemical vapor transport (CVT), as well as their characterization by complementary analytical methods. CVT of nanosheets directly on substrates benefits from short timescales, low material consumption, and only few structural distortions. To determine optimal growth conditions, the CVT processes of the investigated compounds were first simulated using the Calphad method (program package TRAGMIN). In this way, the transport-effective gas species and the temperature-dependent dominating vapor transport equilibria were calculated to optimize the growth process in a direct and straightforward way. Based on the prior simulation results, single-crystalline sheets of MCl₃ (M = Ru, Mo, Ti, Cr) and CrX₃ (X = I, Br, Cl) were successfully prepared at temperatures between 573 and 1023 K on YSZ (yttria-stabilized zirconia) or sapphire substrates. The adjustable CVT parameters (transport duration, temperatures, and amount of starting material) were optimized with respect to the targeted synthesis of either bulk crystals or nanosheets on substrates. Microsheets with thicknesses of less than 4 μm (α-TiCl₃), nanosheets about 20 nm thin (α-RuCl₃, CrCl₃, and CrI₃), and ultrathin flakes (≈ 3 nm, α-MoCl₃ and CrBr₃) were obtained by CVT. As a highlight, monolayers of α-RuCl₃ and CrCl₃ were successfully isolated by means of a subsequent delamination. The morphology and dimensions of the MX₃ sheets were characterized by optical and electron microscopy, highlighting their two-dimensional nature. The desired composition (M:X = 1:3), high crystallinity, and phase purity of thick and thin MX₃ platelets were subsequently confirmed by several X-ray spectroscopy and diffraction techniques. For the MX₃ nanosheets, a slight increase (α-RuCl₃, α-MoCl₃, and CrBr₃) or decrease (CrCl₃) in phonon energies was observed in comparison to their bulk counterparts. The magnetic properties of CrCl₃ micro- and nanosheets were determined to be solely ferromagnetic and thus different from those of the bulk samples. Finally, the structure-to-property relations were investigated for a first example: the catalytic properties of α-TiCl₃ microsheets were studied by gas-phase polymerization of ethylene. By downscaling the catalyst thickness via CVT, an activity improvement of 24 % in comparison to bulk α-TiCl₃ was obtained.