The thesis discusses the problems of database development and maintenance and presents an approach to conceptual tuning realized by conceptual design using the HERM/RADD notation. The RADD design tool was designed to develop HERM specifications graphically. RADD adds semantics and operations to the design that are not directly annotated on the graphical specification, such as "afunctional" dependencies and SQL operations and procedures. The RADD/raddstar system extends the graphical specification of the database schema with the possibility to specify operations, together with invocations for transforming the schema, evaluating transactions, and optimizing the schema, each according to the implicit requirements modeled graphically and the explicit requirements specified by means of the conceptual specification language (CSL). CSL serves as the command-line interface of RADD/raddstar. The graphical RADD schema as well as the CSL specifications are compiled by the system into terms of the RADD* data model, and these terms are used for further evaluation steps. The actions performed by RADD/raddstar (schema transformation, transaction and cost evaluation, schema optimization) are based on rules that can be developed and modified by the user via CSL.
In this thesis we study efficient time integration methods for linear parabolic PDEs to solve practical problems that arise in a variety of real-world applications. The classical construction of numerical methods for solving PDEs is based on the method of lines, which leads to a large sparse semi-discretised system of ODEs to which any numerical method for initial value ODE problems can be applied. When dealing with parabolic-type problems, the underlying ODE systems are known to be stiff. Therefore, in the context of linear model problems, the use of implicit schemes is usually considered to be the best choice in practice.
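As a rough illustration of the method-of-lines setting described above (not the thesis's own solver), the following Python sketch semi-discretises the 1D heat equation u_t = u_xx on (0, 1) with homogeneous Dirichlet boundary conditions and advances the resulting stiff ODE system with the implicit Euler scheme; the grid size, time step and initial condition are arbitrary choices made only for illustration.

```python
import numpy as np

# Spatial grid for u_t = u_xx on (0, 1) with u(0) = u(1) = 0 (illustrative choices).
n = 100                      # number of interior grid points
h = 1.0 / (n + 1)            # mesh width
x = np.linspace(h, 1 - h, n)

# Method of lines: second-order finite differences yield the stiff ODE system u' = A u.
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.sin(np.pi * x)        # initial condition
tau = 1e-3                   # time step
M = np.eye(n) - tau * A      # implicit Euler: (I - tau*A) u_{k+1} = u_k

for _ in range(100):         # integrate up to t = 0.1
    u = np.linalg.solve(M, u)

# The exact solution of this mode is exp(-pi^2 t) * sin(pi x); print the max error.
print(np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x))))
```

Each implicit Euler step requires a linear solve with a matrix of the full spatial dimension, which is exactly the cost that becomes problematic in the large-scale settings discussed next.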
However, this statement does not hold for some relevant real-world applications. In particular, implicit schemes can cause high computational costs under certain model conditions. The model problems considered here come with various settings, ranging from many different initial conditions, through long-term simulations with relatively frequent model updates, to very large-scale problems whose matrix size can exceed several million. For this reason, we are interested in sophisticated and computationally efficient numerical methods that balance approximation accuracy against computational and storage complexity.
Even today it remains a challenging task to devise a numerical method that combines high accuracy, robustness and computational efficiency for the model problem to be solved. Therefore, the main objective is to find a simple and efficient ODE integration scheme for each individual model problem. On this basis, we first give a comprehensive introduction to the state-of-the-art methods that are often used in practice. In this framework, we investigate in detail the theoretical and numerical foundations of two popular techniques, namely fast explicit methods and model order reduction techniques. This is essential for a full understanding of the numerical methods, and also for identifying the numerical method best suited to the intended purpose.
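To make the second of these two techniques concrete, here is a minimal sketch of projection-based model order reduction via proper orthogonal decomposition (POD) for a linear system u' = A u; this is a generic textbook construction rather than the specific reduction scheme developed in the thesis, and the system, snapshot generation and sizes are invented for illustration.

```python
import numpy as np

def pod_reduce(A, snapshots, r):
    """Project u' = A u onto the leading r POD modes of the snapshot matrix."""
    # The left singular vectors of the snapshot matrix span the reduced space.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :r]                    # n x r orthonormal basis
    A_r = V.T @ A @ V               # r x r reduced operator
    return V, A_r

# Illustrative data: a random stable system and snapshots from a coarse explicit run.
rng = np.random.default_rng(0)
n = 200
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
u = rng.standard_normal(n)
tau = 1e-2
snaps = []
for _ in range(50):
    u = u + tau * (A @ u)           # explicit Euler, only to generate snapshots
    snaps.append(u.copy())
V, A_r = pod_reduce(A, np.column_stack(snaps), r=20)

# Time stepping now happens in the small reduced space: z' = A_r z, with u ≈ V z.
z = V.T @ snaps[-1]
z = z + tau * (A_r @ z)
print(V.shape, A_r.shape)
```

The benefit is that, once the basis is built, every subsequent time step only involves the small r x r operator, which is why such techniques pay off for repeated simulations with many initial conditions.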
Our second goal is then to efficiently solve the practical problems that arise in connection with shape correspondence, geothermal energy storage and image osmosis filtering. For each application we specify a complete setup and, in order to provide an efficient and accurate numerical approximation, give a thorough discussion of the various numerical solvers along with many technical details and our own adaptations. We validate our numerical findings through many experiments using synthetic and real-world data. In addition, the thesis provides a complete and detailed description of these powerful methods, which can be very useful for tackling similar problems of interest in many applications.
In 2012 a group of researchers proposed a basic research initiative to the German Research Foundation (DFG) as a priority programme (SPP) with the name "Wireless 100 Gbps and beyond". The main goal of this initiative was to investigate architectures, technologies and methods that go well beyond the state of the art. The target of 100 Gbps was set far beyond the (at that time) achievable 1 Gbps, so that promising results could not be obtained simply by tuning a few parameters; we wanted to find breakthrough solutions. When we started working on the proposal, we discussed the challenges that had to be addressed in order to advance wireless communication speeds significantly. With the fundamental Shannon limit in mind, we discussed how the 100 Gbps target could be reached.
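For reference, the Shannon limit referred to here bounds the capacity of a band-limited channel with additive white Gaussian noise by

\[ C = B \log_2\!\left(1 + \tfrac{S}{N}\right) \quad \text{bit/s}, \]

so reaching 100 Gbps requires some combination of much more bandwidth, a higher signal-to-noise ratio, or parallel spatial channels; for instance, a 10 GHz channel at 30 dB SNR tops out at roughly 10 GHz × log2(1001) ≈ 100 Gbps.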
This thesis investigates the efficient analysis, in particular the model checking, of bounded stochastic Petri nets (SPNs), which can be augmented with reward structures. An SPN induces a continuous-time Markov chain (CTMC). A reward structure associates a reward with each state of the CTMC and thereby defines a Markov reward model (MRM). The Continuous Stochastic Reward Logic (CSRL) makes it possible to specify sophisticated properties of CTMCs and MRMs, which can be verified automatically by a model checker.
CSRL model checking can be realized on top of established numerical analysis techniques for CTMCs, which are based on the multiplication of a matrix and a vector. However, as these techniques operate on a matrix and a vector whose dimensions are at least the number of reachable states, coping with the well-known state space explosion problem remains a challenge.
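As background to these matrix-vector based techniques, the following Python sketch shows the standard uniformization method for the transient analysis of a small CTMC; the generator matrix and parameters are made up for illustration, and the truncation of the Poisson sum is deliberately naive.

```python
import numpy as np

def transient(Q, pi0, t, eps=1e-12):
    """Transient distribution pi(t) of a CTMC with generator Q via uniformization."""
    lam = max(-np.diag(Q))            # uniformization rate >= largest exit rate
    P = np.eye(Q.shape[0]) + Q / lam  # DTMC obtained by uniformization
    w = np.exp(-lam * t)              # Poisson weight for k = 0
    acc, vec, k = w * pi0, pi0.copy(), 0
    while w > eps and k < 10_000:     # naive truncation of the Poisson sum
        k += 1
        vec = vec @ P                 # one matrix-vector multiplication per term
        w *= lam * t / k
        acc += w * vec
    return acc

# Illustrative 3-state generator (rows sum to zero) and initial distribution.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 0.0,  2.0, -2.0]])
print(transient(Q, np.array([1.0, 0.0, 0.0]), t=1.5))
```

The dense matrix here is only for readability; in the techniques discussed above it is precisely this matrix that cannot be stored explicitly once the state space explodes.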
Several approaches, for instance the use of Multi-terminal Decision Diagrams or Kronecker products to represent the matrix, have been investigated so far. They often enable an efficient implementation of CTMC analysis and are available in a number of tools.
As an alternative to these established techniques, I enhance the idea of computing the matrix entries on the fly, deploying a symbolic state space representation. The set of state transitions defining the matrix is enumerated by firing the transitions of the given SPN for all reachable states. The reachable states are encoded by means of Interval Decision Diagrams (IDDs).
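To convey the flavour of such an on-the-fly computation, the sketch below performs a vector-matrix product y = x·Q for a tiny SPN by firing its transitions state by state instead of storing Q; it uses a plain list of reachable markings rather than an IDD encoding, and the net itself is invented for illustration.

```python
import numpy as np

# A tiny SPN, invented for illustration: markings are tuples of token counts,
# each transition has a pre/post vector and a firing rate.
transitions = [
    {"pre": (1, 0), "post": (0, 1), "rate": 2.0},   # t1: move a token from p1 to p2
    {"pre": (0, 1), "post": (1, 0), "rate": 1.0},   # t2: move it back
]
states = [(1, 0), (0, 1)]                 # reachable markings (IDD-encoded in MARCIE)
index = {s: i for i, s in enumerate(states)}

def fire(state, t):
    """Return the successor marking if t is enabled in the given marking, else None."""
    if all(s >= p for s, p in zip(state, t["pre"])):
        return tuple(s - p + q for s, p, q in zip(state, t["pre"], t["post"]))
    return None

def vector_times_Q(x):
    """Compute y = x * Q without ever materializing the generator matrix Q."""
    y = np.zeros(len(states))
    for i, s in enumerate(states):
        for t in transitions:
            succ = fire(s, t)
            if succ is not None:
                j = index[succ]
                y[j] += x[i] * t["rate"]   # off-diagonal entry Q[i, j]
                y[i] -= x[i] * t["rate"]   # diagonal entry accumulates the exit rate
    return y

print(vector_times_Q(np.array([1.0, 0.0])))   # effect of the row for the initial marking
```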
Further, I discuss crucial aspects of the implementation of the first multi-threaded symbolic CSRL model checker, which is based on the developed technique and is available in the tool MARCIE. An experimental comparison with the probabilistic model checker PRISM over a large number of experiments empirically demonstrates the efficiency of the approach and its implementation, especially when investigating biological models.
Assessing alternative agricultural water management strategies requires long-term field trials or vast data collection for model calibration and simulation. This work assesses whether an uncalibrated agro-hydrological model driven by global input datasets for climate, soil and crop information can serve as a decision support tool for crop water management under data scarcity. The study employs the Cool Farm Tool Water (CFTW) at eight eddy covariance sites of the FLUXNET2015 dataset. CFTW is tested using global (CFTWglobal) and local (CFTWlocal) input datasets under current and alternative management scenarios. Results show that the use of global datasets for estimating daily evapotranspiration had little effect on the median Root Mean Square Error (RMSE) (CFTWglobal: 1.70 mm, CFTWlocal: 1.79 mm), whereas the median model bias is considerably larger (CFTWglobal: -18.6%, CFTWlocal: -4.3%). Furthermore, the periods of water stress were little affected by the use of local or global data (median accuracy: 0.84), whereas the use of global data inputs led to a significant overestimation of irrigation water requirements (median difference: 110 mm). Model performance improves predominantly through the use of more representative local precipitation data, followed by local reference evapotranspiration and soil data for some European growing seasons. We identify model outputs that can support decision-making when relying on global data, such as periods of water stress and the daily dynamics of water use. However, our findings also emphasize the difficulty of overcoming data scarcity in decision-making for agricultural water management. Furthermore, we provide recommendations for enhancing model performance, which may increase the accessibility of reliable decision support tools in the future.
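For clarity, the two headline error metrics quoted above can be computed as in the short Python snippet below; this is a generic illustration of the standard RMSE and percent-bias definitions with made-up daily evapotranspiration values, not the study's own evaluation code.

```python
import numpy as np

def rmse(sim, obs):
    """Root mean square error between simulated and observed daily values."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def percent_bias(sim, obs):
    """Relative model bias in percent; negative values indicate underestimation."""
    return float(100.0 * np.sum(sim - obs) / np.sum(obs))

# Made-up daily evapotranspiration series (mm/day), for illustration only.
obs = np.array([3.1, 4.0, 3.6, 2.9, 4.4])
sim = np.array([2.8, 3.5, 3.3, 2.7, 4.0])
print(rmse(sim, obs), percent_bias(sim, obs))
```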