In times of the liberalized electricity market, power plant builders and operators must carefully weigh the choice of primary energy and the design of each individual power plant with regard to emissions and, not least, the construction and subsequent operating costs. In this market environment, simulations under the assumption of different scenarios are indispensable. The tension between construction costs, the availability that partly depends on them, the maintenance philosophy and the operating costs is the motivation for the maintenance optimization and the availability-prognosis simulations that are the subject of this thesis.

The focus of this work lies on planning the time-based maintenance strategy, in which a component is inspected after a prescribed interval derived from operating and manufacturer experience. This problem is particularly demanding because synergy effects lead to strong interactions between the components. Each component has its own theoretically optimal inspection period, but in the overall plant it may be cheaper, because of isolations (lock-outs) or downtimes, to maintain several components together and thereby to bring forward or postpone an individual assembly. The resulting interactions yield a nonlinear, mixed-integer calculation rule for the cost estimate.

For the optimization of this maintenance planning, a new approach was developed in the present thesis. After it was established that classical optimization methods cannot solve this maintenance-optimization problem satisfactorily, a solution based on genetic algorithms was developed. In parallel to the maintenance optimization, the methodology for the availability prognosis of complex power plant installations was further developed, building on previous work at the chair, and extended by the new component "storage" (optionally with losses). With the storage model, the behaviour of a storage unit is integrated into the effort-reduced Monte Carlo method.
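As a rough illustration of the genetic-algorithm approach described above, the following minimal sketch encodes the inspection periods of a handful of components as the genome and uses a simplified cost model in which inspections falling in the same week share a fixed outage cost. All component data, cost figures and the cost function are invented placeholders, not the thesis's plant model.

```python
# Hypothetical sketch: genetic algorithm for time-based maintenance planning.
# All numbers and the cost model are illustrative assumptions.
import random

N_COMPONENTS = 6
HORIZON = 260                                   # planning horizon in weeks (~5 years)
PERIOD_CHOICES = list(range(10, 131, 10))       # admissible inspection periods in weeks
INSPECTION_COST = [4.0, 3.0, 5.0, 2.0, 6.0, 3.5]           # per-inspection cost (made up)
WEAR_PENALTY = [0.002, 0.001, 0.003, 0.001, 0.004, 0.002]  # failure-risk weight per component
OUTAGE_COST = 10.0                              # fixed cost per week in which any inspection happens

def total_cost(periods):
    """Nonlinear, mixed-integer style cost estimate for one maintenance plan."""
    outage_weeks = set()
    cost = 0.0
    for i, p in enumerate(periods):
        events = list(range(p, HORIZON + 1, p))
        cost += INSPECTION_COST[i] * len(events)          # planned inspections
        cost += WEAR_PENALTY[i] * p ** 2 * len(events)    # risk cost grows with the interval
        outage_weeks.update(events)
    return cost + OUTAGE_COST * len(outage_weeks)         # shared outage weeks are paid only once

def evolve(pop_size=40, generations=200, mutation_rate=0.2):
    population = [[random.choice(PERIOD_CHOICES) for _ in range(N_COMPONENTS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=total_cost)                   # lower cost = fitter
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_COMPONENTS)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:           # mutate one gene
                child[random.randrange(N_COMPONENTS)] = random.choice(PERIOD_CHOICES)
            children.append(child)
        population = survivors + children
    return min(population, key=total_cost)

if __name__ == "__main__":
    best = evolve()
    print("best inspection periods (weeks):", best, "estimated cost:", round(total_cost(best), 1))
```

The shared outage cost is what couples the components: the fittest plans tend to align inspection periods so that several components are maintained in the same week, mirroring the synergy effects described above.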
Signalized flows - optimizing traffic signals and guideposts and related network flow problems
(2011)
Guideposts and traffic signals are important devices for controlling inner-city traffic, and their optimized operation is essential for efficient traffic flow without congestion. In this thesis, we develop a mathematical model for guideposts and traffic signals in the context of network flow theory. Guideposts lead to confluent flows, in which each node of the network may have at most one outgoing flow-carrying arc. The complexity of finding maximum confluent flows is studied, and several polynomial-time algorithms for special graph classes are developed. For traffic signal optimization, a cyclically time-expanded model is suggested which allows the simultaneous optimization of offsets and traffic assignment. Thus, the influence of offsets on travel times can be taken into account directly. The potential of the presented approach is demonstrated by simulation of real-world instances.
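To make the confluence condition mentioned above concrete, here is a minimal, hypothetical check in Python; the graph representation and flow values are illustrative and not taken from the thesis.

```python
# Sketch: a flow is confluent if every node has at most one outgoing arc carrying flow.
from collections import defaultdict

def is_confluent(flow):
    """flow: dict mapping arcs (u, v) -> non-negative flow value."""
    out_carrying = defaultdict(int)
    for (u, _v), value in flow.items():
        if value > 0:
            out_carrying[u] += 1
            if out_carrying[u] > 1:
                return False
    return True

# Example: node 'a' splits its flow over two arcs, so the flow is not confluent.
flow = {("a", "b"): 2, ("a", "c"): 1, ("b", "t"): 2, ("c", "t"): 1}
print(is_confluent(flow))   # False
```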
The subject of this thesis is the development and testing of silicon strip detectors for the high-luminosity upgrade of the tracking detector of the ATLAS experiment at the Large Hadron Collider. Special emphasis is placed on understanding the impact of mechanical stress on the electrical properties and the particle detection performance of detector modules.
First, simulations were performed to estimate the maximum stress expected on a sensor operated at -30 °C within the future silicon strip tracking detector ITk at ATLAS. The maximum stress in a worst-case scenario is expected to be 27 MPa. Tensile strength tests were carried out to estimate the maximum stress which can be applied to a silicon strip sensor. Silicon shards with a thickness and dopant concentration corresponding to the ITk sensor specifications break at >23 MPa, wafers at >700 MPa and sensors at ~400 MPa. The large variations lead to the assumption that the tensile strength, which depends strongly on the quality of the crystal lattice, is governed by the different cutting technologies. Wafers irradiated with a fluence equivalent to the lifetime dose of an ITk sensor show no stress dependence of the Young's modulus. The tensile strength of irradiated wafers is decreased by ~6.6 %. No damage to silicon sensors from mechanical stress is expected for sensor modules installed in the ITk.
The electrical properties of silicon strip sensors were studied under applied mechanical stress of up to 60 MPa on ATLAS07 sensors. The specifications of these sensors are similar to those of the strip sensors in the barrel region of the future silicon strip tracker of the ATLAS detector. At 50 MPa, the leakage current changes by -1.7 %, the bias resistance by +0.8 % and the interstrip resistance by -25 %. The depletion voltage and the implant resistance are not affected by mechanical stress. Except for the interstrip resistance, the results can be explained by piezoresistive effects.
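For orientation, the piezoresistive interpretation referred to above is usually written in the following textbook form, stated here only for illustration; the coefficients are not those determined in the thesis:

\[ \frac{\Delta\rho}{\rho} \;=\; \pi_{l}\,\sigma_{l} \;+\; \pi_{t}\,\sigma_{t}, \]

where \(\sigma_{l}\) and \(\sigma_{t}\) are the longitudinal and transverse stress components and \(\pi_{l}\), \(\pi_{t}\) the corresponding piezoresistive coefficients of the silicon.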
Silicon strip modules were built and studied in particle test beams. These modules consist of an ATLAS07 or an ATLAS12 sensor and an analogue readout to study the influence of stress on the module performance. The sensor module noise is independent of the applied stress. An effect of stress on the signal strength was observed: the ATLAS07 sensor module signal strength decreased and the ATLAS12 sensor module signal strength increased with a slope of ~0.6 MPa^-1. The average cluster size of the ATLAS07 sensor module increased by 0.25 % MPa^-1 and that of the ATLAS12 sensor module decreased by 0.06 % MPa^-1 with applied stress.
The thesis discusses the problems of database development and maintenance and presents an approach to conceptual tuning realized by conceptual design using the HERM/RADD notation. The RADD design tool was designed to develop HERM specifications graphically. RADD adds semantics and operations to the design that are not directly annotated on the graphical specification, such as "afunctional" dependencies and SQL operations and procedures. The RADD/raddstar system extends the graphical specification of the database schema with the possibility to specify operations and with invocations for transforming the schema, evaluating transactions, and optimizing the schema, each according to the implicit requirements modeled graphically and the explicit requirements specified by means of the conceptual specification language (CSL). CSL serves as the command line interface of RADD/raddstar. The graphical RADD schema as well as the CSL specifications are compiled by the system into terms of the RADD* data model, and these terms are used for further evaluation actions. The actions performed by RADD/raddstar (schema transformation, transaction and cost evaluation, schema optimization) are based on rules that can be developed and modified by the user using the CSL.
MARCIE manual
(2016)
This manual gives an overview of MARCIE – Model Checking And Reachability analysis done effiCIEntly. MARCIE was originally developed as a symbolic model checker for stochastic Petri nets, building on its predecessor IDDMC (Interval Decision Diagram based Model Checking), which had previously been developed for the qualitative analysis of bounded Place/Transition nets extended by special arcs. Over the last years, the tool has been enriched to also allow quantitative analysis of extended stochastic Petri nets. We concentrate here on the user's viewpoint. For a detailed introduction to the relevant formalisms, formal definitions and algorithms, we refer to the related literature.
This thesis investigates the efficient analysis, especially the model checking, of bounded stochastic Petri nets (SPNs) which can be augmented with reward structures. An SPN induces a continuous-time Markov chain (CTMC). A reward structure associates a reward with each state of the CTMC and thus defines a Markov reward model (MRM). The Continuous Stochastic Reward Logic (CSRL) makes it possible to define sophisticated properties of CTMCs and MRMs which can be verified automatically by a model checker.
CSRL model checking can be realized on top of established numerical analysis techniques for CTMCs which are based on the multiplication of a matrix and a vector. However, as these techniques consider a matrix and a vector at least of the size of the number of reachable states, it remains challenging to deal with the well-known state space explosion problem.
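As a reminder of why matrix-vector products are the computational core, transient CTMC analysis via uniformization (a standard technique, stated here only for illustration and not taken from the thesis) evaluates

\[ \pi(t) \;=\; \sum_{k=0}^{\infty} e^{-\Lambda t}\,\frac{(\Lambda t)^{k}}{k!}\;\pi(0)\,P^{k}, \qquad P = I + \frac{Q}{\Lambda},\quad \Lambda \ge \max_{i}\,|q_{ii}|, \]

where Q is the generator matrix of the CTMC; each summand requires one further vector-matrix multiplication with P, and both P and the iteration vectors have the size of the reachable state space.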
Several approaches, for instance the use of Multi-terminal Decision Diagrams or Kronecker products to represent the matrix, have been investigated so far. They often enable an efficient implementation of CTMC analysis and are available in a couple of tools.
As an alternative to these established techniques, I enhance the idea of an on-the-fly computation of the matrix entries, deploying a symbolic state space representation. The set of state transitions defining the matrix is enumerated by firing the transitions of the given SPN for all reachable states. The reachable states are encoded by means of Interval Decision Diagrams (IDDs).
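The following minimal Python sketch illustrates the on-the-fly idea under simplifying assumptions: an explicit list of reachable states and a generic SPN transition interface with is_enabled, fire and rate. It is not MARCIE's actual IDD-based implementation.

```python
# Sketch: compute y = x * Q without ever materializing the rate matrix Q.
# The matrix entries are regenerated in every multiplication by firing the
# SPN transitions for all reachable states (hypothetical interface).

def on_the_fly_matvec(x, states, index, transitions):
    """x           : list of floats, one entry per reachable state
       states      : list of reachable markings (e.g. tuples of token counts)
       index       : dict mapping a marking to its position in `states`
       transitions : iterable of objects with is_enabled(s), fire(s), rate(s)"""
    y = [0.0] * len(states)
    for i, s in enumerate(states):
        exit_rate = 0.0
        for t in transitions:
            if t.is_enabled(s):
                s_next = t.fire(s)
                r = t.rate(s)
                y[index[s_next]] += x[i] * r      # off-diagonal entry q(s, s')
                exit_rate += r
        y[i] -= x[i] * exit_rate                  # diagonal entry q(s, s) = -sum of rates
    return y
```

Only the state space encoding and the current iteration vectors need to be kept in memory; the matrix itself never exists explicitly.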
Further, I discuss crucial aspects of the implementation of the first multi-threaded symbolic CSRL model checker, which is based on the developed technique and is available in the tool MARCIE. An experimental comparison with the probabilistic model checker PRISM on a large number of experiments empirically demonstrates the efficiency of the approach and its implementation, especially when investigating biological models.
The International Linear Collider offers many interesting challenges concerning the physics of elementary particles as well as the development of accelerator and detector technologies. In this thesis, we investigate two rather separate topics - the precision measurement of the Higgs boson mass and of its coupling to the neutral gauge boson Z, and the research and development of sensors for BeamCal, a sub-detector system of the ILC detector. After the Higgs boson has been found, it is important to determine its properties with high precision. We employ the Higgs-strahlung process for this purpose. A virtual Z boson is created in the electron-positron collision and becomes on-shell by emitting a Higgs boson. Using the so-called recoil technique, we determine the Higgs boson mass by reconstructing the Z boson momentum and using the center-of-mass energy of the colliding leptons. This technique allows the Higgs boson mass to be measured without considering the Higgs boson decay, i.e. it can be applied even to an invisibly decaying Higgs boson. Monte-Carlo studies including a full detector simulation and a full event reconstruction were performed to simulate the impact of a realistic detector model on the precision of the Higgs boson mass and production cross-section measurement. Also, an analytical estimate of the influence of a given detector performance on the Higgs boson mass measurement uncertainty is given. We included a complete sample of background events predicted by the Standard Model which may have a detector response similar to the signal events. A probabilistic method is used for the signal-background separation. Several other probabilistic methods were used to investigate and improve the measurement of the Higgs-strahlung cross-section and the Higgs boson mass from the recoil mass spectrum obtained after the signal-background separation. For a Higgs boson mass of 120 GeV, a center-of-mass energy of 250 GeV and an integrated luminosity of 50/fb, a relative uncertainty of 10% is obtained for the cross-section measurement, and a precision of 118 MeV for the Higgs boson mass. The original motivation to use the recoil technique for a Higgs boson mass measurement independent of its decay modes could not be completely confirmed. For Higgs boson masses of 180 GeV and 350 GeV, the statistics corresponding to 50/fb are not sufficient to achieve the necessary significance of the recoil mass peak above the background. The BeamCal is a calorimeter in the very forward region, about 3 m away from the nominal interaction point and surrounding the beam pipe. Due to its location, a large number of beamstrahlung pair particles will hit this calorimeter, which represents a challenge for the operational reliability of the sensors under such harsh radiation conditions. We investigated single-crystal and polycrystalline CVD diamond, gallium arsenide and radiation-hard silicon as sensor candidates for their radiation hardness and found that diamond and gallium arsenide are promising. We used a 10 MeV electron beam of a few nA to irradiate the samples under investigation up to doses of 5 MGy for diamond, up to about 1.5 MGy for gallium arsenide and up to about 90 kGy for silicon. We measured the CCD at regular intervals to characterize the impact of the absorbed dose on the size of the signal, which is generated by electrons of a Sr-90 source crossing the sensor.
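The recoil technique referred to above rests on a standard kinematic relation, stated here only for illustration; the symbols are the usual ones and not definitions from the thesis:

\[ m_{\mathrm{recoil}}^{2} \;=\; s + m_{Z}^{2} - 2\sqrt{s}\,E_{Z}, \]

where \(\sqrt{s}\) is the center-of-mass energy and \(E_{Z}\), \(m_{Z}\) are the reconstructed energy and mass of the Z boson. For Higgs-strahlung events the recoil mass peaks at the Higgs boson mass, independently of how the Higgs boson decays.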
Additional measurements such as the dark current and the CCD as functions of the voltage completed the characterization of the sensor candidates. For the single-crystal CVD diamond, the thermally stimulated current was also measured to determine, among other things, the defect density created by irradiation. In the diamond samples, evidence for strong polarization effects inside the material was found and investigated in more detail. A phenomenological model based on semiconductor physics was developed to describe the sensor properties as a function of the applied electric field, the dose and the dose rate. Its predictions were compared with the results of the measurements. Several parameters such as time scales and cross-sections were determined using this model, which led to ongoing investigations.
For many countries, gas turbine technology is one of the key technologies for the reduction of climate-damaging pollutant emissions. The profitability of such facilities, however, is highly dependent on the price of the fossil fuel used, which is why there is a constant need for increased efficiency. The potential for increasing the efficiency of the individual components is essentially limited by factors which reduce operating life. The goal of this thesis is to develop methods for improved automated structural design optimization, demonstrated on compressor airfoils. Special attention is paid to avoiding the excitation of failure-critical eigenmodes by detecting them automatically. This is achieved by introducing a method based on self-organizing neural networks which enables the projection of eigenmodes of arbitrary airfoil geometries onto standard surfaces, thereby making them comparable. Another neural network is applied to identify eigenmodes which have been defined as critical for operating life. The failure rate of such classifiers is significantly reduced by introducing a newly developed initialization method based on principal components.

A structural optimization is set up which shifts the eigenfrequency bands of critical modes in such a way that the risk of resonance with engine orders is minimized. In order to ensure the practical relevance of the optimization results, the structural optimization is coupled with an aerodynamic optimization in a combined process. Conformity between the loaded hot geometry used in the aerodynamic design assessment and the unloaded cold geometry used in the structural design assessment is ensured by a loaded-to-unloaded geometry transformation. For this purpose, an innovative method is introduced which, unlike the established time-consuming iterative approach, uses negative density for a direct transformation that takes only a few seconds, hence making it applicable within an optimization. Additionally, in order for the optimal designs to be robust against manufacturing variations, a method is developed which allows assessing the maximum production tolerance of a design beyond which possible design variations are likely to violate design constraints. In contrast to the usually applied failure rate, the production tolerance is a valid requirement for suppliers with respect to expensive parts produced in low quantities, and is therefore a more suitable optimization objective.
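To illustrate the resonance-avoidance objective mentioned above, here is a minimal, hypothetical margin check in the spirit of a Campbell diagram; the frequencies, engine orders and the margin definition are invented for illustration and are not the thesis's optimization setup.

```python
# Sketch: an eigenfrequency is considered at risk if it lies too close to an
# engine-order excitation line (EO * rotor speed) at the operating point.

def resonance_margins(eigenfrequencies_hz, rotor_speed_hz, engine_orders):
    """Return, for each eigenfrequency, its smallest relative distance to any
    engine-order excitation frequency."""
    margins = []
    for f in eigenfrequencies_hz:
        excitations = [eo * rotor_speed_hz for eo in engine_orders]
        margins.append(min(abs(f - fe) / fe for fe in excitations))
    return margins

# Example: three blade modes, rotor at 200 Hz, engine orders 1..6.
modes = [410.0, 820.0, 1180.0]
print(resonance_margins(modes, 200.0, range(1, 7)))
# A structural optimization would try to push small margins above a chosen threshold.
```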
Le philosophe Pyrrhonien : comédie en trois actes ; texte établi d'après l'édition d'Angers, 1761
(2009)
The comedy "Le Philosophe Pyrrhonien" was published in 1761 in Angers, in the north of France. The author of the work is Monsieur Martin, a professor at the collège of Château-Gontier. With his play, Martin seeks to refute the scepticism of the Greek philosopher Pyrrho (~360 - ~270 BC) and to acquaint his audience with Aristotle's theory of knowledge and ethics. The work, published at the height of the Enlightenment, is therefore also, in a veiled form, a testimony to the philosophical debates of its time.
In the course of the increasing digitalization and networking of business processes within and outside the company, as well as the increasing dynamics in the entire order processing process, the possibilities and limits of classical enterprise information systems for supporting decision-makers need to be questioned. Besides common spreadsheet tools, companies have so far used various information systems. Classical systems include Enterprise Resource Planning (ERP) systems, Supply Chain Management (SCM) systems and Advanced Planning and Scheduling (APS) systems, as well as Manufacturing Execution Systems (MES).
Previous studies have already identified various problems of common systems in the context of implementation, system performance, system benefit and interoperability. In particular, more project-oriented companies with customer-specific make-to-order production criticize the systems' lack of adaptability to changed business processes. Companies therefore partly rely on individually developed systems to counteract these problems.
To investigate the status quo of enterprise decision support systems, a study was conducted which provides an updated view of the problems and of the requirements for improving the systems. The results were analyzed for the individual systems and additionally with consideration of the order processing type present in the company. It becomes apparent that, in particular, the support in the case of problems and disruptions in the order processing process must be improved.