Numerical simulators, such as finite element models, have become increasingly capable of predicting the behaviour of structures and components owing to more sophisticated underlying mathematical models and advanced computing power. A common challenge, however, lies in calibrating these models in terms of their unknown or uncertain parameters. When measurements exist, this can be achieved by comparing the model response against measured data. Besides uncertain model parameters, phenomena like damage can give rise to further uncertainties; in particular, quasi-brittle materials such as concrete experience damage in a heterogeneous manner due to various imperfections, e.g. in geometry and boundary conditions. This hampers an accurate prediction of the damaged behaviour of real structures composed of such materials.
In this study, which draws on a data-driven approach, we use the force version of the finite element model updating method (FEMU-F) to incorporate measured displacements into the identification of the damage parameters, in order to cope with this heterogeneity. In this method, instead of conducting a forward evaluation of the model and comparing the model response (displacements) against the data, we impose displacements on the model and compare the resulting force residuals with measured reaction forces. To account for uncertainties in the measurement of displacements, we endow this approach with a penalty term reflecting the discrepancy between measured and imposed displacements, where the latter are treated as unknown random variables to be identified as well. A Variational Bayesian approach is used to approximate the posterior distributions of the parameters. The underlying damage model considered in this work is a gradient-enhanced damage model.
We first establish the identification procedure through two virtual examples, where synthetic data (displacements) are generated over a spatially dense set of points in the domain. The procedure is then validated on an experimental case study, namely a three-point bending experiment with displacement measurements obtained from a digital image correlation (DIC) analysis.
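To make the structure of this approach concrete, the following minimal sketch assembles a FEMU-F-style cost combining the force residual with the displacement penalty; the assembly routine, the penalty weight beta, and the toy two-spring example are illustrative assumptions, and in the study the parameters and imposed displacements are inferred with Variational Bayes rather than by minimizing this cost directly.

```python
import numpy as np

def femu_f_cost(theta, u, assemble_k, f_meas, u_meas, beta):
    """Sketch of a FEMU-F objective: force residual plus displacement penalty.

    theta      : damage parameters to identify
    u          : imposed nodal displacements (treated as unknowns as well)
    assemble_k : user-supplied routine returning the stiffness matrix K(theta)
    f_meas     : measured reaction forces
    u_meas     : measured displacements (e.g. from DIC)
    beta       : penalty weight reflecting the displacement-measurement noise
    """
    K = assemble_k(theta)      # stiffness for the current parameters
    r_f = K @ u - f_meas       # force residual instead of a displacement residual
    r_u = u - u_meas           # discrepancy between imposed and measured u
    return r_f @ r_f + beta * (r_u @ r_u)

# Toy usage: two springs in series with one scalar damage variable
assemble_k = lambda d: (1.0 - d[0]) * np.array([[2.0, -1.0], [-1.0, 1.0]])
print(femu_f_cost(np.array([0.1]), np.array([1.0, 2.0]), assemble_k,
                  np.array([0.0, 0.9]), np.array([1.0, 2.1]), beta=10.0))
```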
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the FAIR data principles is required in many research projects. The definition of FAIR data is very general, and many challenges arise when implementing it for a specific application or project, or even when setting up a standardized procedure within a working group, a company or a research community. In this contribution, we give an overview of our experience with different methods, tools and procedures.
We begin with a motivation of potential use cases for FAIR data of increasing complexity, starting from a reproducible research paper, over collaborative projects with multiple participants such as round-robin tests, up to data-based models within standardization codes, applications in machine learning, and parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data are discussed. On the one hand, this includes how to define actual data structures and in particular metadata schemas; on the other hand, two different systems for storing the data are compared. The first is openBIS, an open-source electronic lab notebook and PostgreSQL-based data management system. A second option is a semantic representation using RDF-based ontologies for the domain of interest.
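As a minimal illustration of the second option, the snippet below records a single rheological measurement as RDF triples with rdflib; the namespace and all class and property names are hypothetical placeholders rather than an existing domain ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical mini-vocabulary for rheological measurements
EX = Namespace("https://example.org/rheology#")

g = Graph()
g.bind("ex", EX)

run = EX["measurement_001"]
g.add((run, RDF.type, EX.RheologicalMeasurement))
g.add((run, EX.material, Literal("additively manufactured mortar")))
g.add((run, EX.temperature, Literal(20.0, datatype=XSD.double)))
g.add((run, EX.restingTimeMinutes, Literal(30, datatype=XSD.integer)))

print(g.serialize(format="turtle"))   # human-readable Turtle serialization
```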
In a third section, requirements for workflow tools to automate data processing are discussed, and their integration into a reproducible data analysis is presented, with an outlook on the information that needs to be stored as metadata in the database.
Finally, the presented procedures are demonstrated for the calibration of a temperature-dependent constitutive model for additively manufactured mortar. Metadata schemas for a rheological measurement setup are derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behaviour, the automatic workflow that uses the stored data for model parameter estimation is demonstrated.
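A minimal sketch of that last step, assuming a simple linear structural build-up law tau(t) = tau0 + A_thix * t and stand-in arrays in place of the data pulled from openBIS; the actual constitutive model and workflow tooling of the talk may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """Linear thixotropy law: static yield stress grows with resting time."""
    tau0, a_thix = params
    return tau0 + a_thix * t

def residuals(params, t, tau_meas):
    return model(params, t) - tau_meas

# In the automated workflow these arrays would be queried from the openBIS
# database together with their metadata; here they are synthetic stand-ins.
t = np.array([0.0, 10.0, 20.0, 40.0, 60.0])               # resting time [min]
tau_meas = np.array([120.0, 255.0, 395.0, 650.0, 905.0])  # yield stress [Pa]

fit = least_squares(residuals, x0=[100.0, 10.0], args=(t, tau_meas))
print(f"tau0 = {fit.x[0]:.1f} Pa, A_thix = {fit.x[1]:.2f} Pa/min")
```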
Appropriate monitoring of transportation infrastructures (e.g. bridges) is of utmost importance to ensure safe operating conditions. Accurate and reliable assessment of such structures can be achieved through the integration of data from non-destructive testing, advanced modeling and model updating techniques. The Bayesian framework has been widely used for updating engineering and mechanical models due to its probabilistic description of information, in which the posterior probability distribution reflects the knowledge about the model parameters of interest inferred from the data. For most real-life applications, the computation of the true posterior involves integrals that are analytically intractable; in practice, the implementation of Bayesian inference therefore requires approximation methods.
This paper investigates the application of Variational Bayesian inference for structural model parameter identification and updating, based on measurements from a real experimental setup. The Variational Bayesian method circumvents the issue of evaluating intractable integrals by using a factorized approximation of the true posterior (mean-field approximation) and by choosing a family of conjugate distributions that facilitates the calculations. Inference in the Variational Bayesian framework is cast as an optimization problem: finding the parameters of the factorized posterior that minimize its Kullback-Leibler divergence with respect to the exact posterior. The variational approach is an efficient alternative to sampling methods such as Markov chain Monte Carlo, since the latter's accuracy depends on drawing a sufficient number of samples from the posterior distribution (each requiring an evaluation of the forward problem, which can be quite expensive).
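As a toy illustration of the mean-field idea (deliberately much simpler than the structural model of the paper), the sketch below runs the classical coordinate-ascent updates for the unknown mean and precision of Gaussian data under a conjugate Normal-Gamma prior.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 0.5, size=200)   # synthetic data: true mean 2.0, std 0.5
n, xbar = len(x), x.mean()

# Normal-Gamma prior hyperparameters (broad, weakly informative)
mu0, beta0, a0, b0 = 0.0, 1.0, 1.0, 1.0

e_lam = 1.0                          # initial guess for E[precision]
for _ in range(50):                  # coordinate ascent on q(mu) q(lambda)
    # q(mu) = N(mu_n, 1/lam_n)
    mu_n = (beta0 * mu0 + n * xbar) / (beta0 + n)
    lam_n = (beta0 + n) * e_lam
    # q(lambda) = Gamma(a_n, b_n)
    a_n = a0 + 0.5 * (n + 1)
    b_n = b0 + 0.5 * (np.sum((x - mu_n) ** 2) + n / lam_n
                      + beta0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n))
    e_lam = a_n / b_n                # update E[lambda] for the next sweep

print(f"posterior mean: {mu_n:.3f}, posterior std estimate: {e_lam**-0.5:.3f}")
```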
The durability of concrete structures and their performance over the lifetime are strongly influenced by many interacting phenomena, such as mechanical degradation due to fatigue loading, loss of prestress, degradation due to chemical reactions, or creep and shrinkage. Failure due to cyclic loading is generally not instantaneous, but characterized by a steady damage accumulation.
Many constitutive models for concrete are currently available that are applicable to specific loading regimes, different time scales and different resolution scales. A key limitation is that these models often do not address issues related to fatigue on a structural level. Very few models in the literature reproduce the deterioration of concrete under repeated loading-unloading cycles.
The objective of this paper is to present numerical methods for the simulation of concrete under fatigue loading using a temporal multiscale method.
First, a continuum damage model for concrete is developed with a focus on fatigue under compressive stresses. This includes the possibility to model stress redistributions and capture size effects. In contrast to cycle-based approaches, where damage is accumulated based on the number of full stress cycles, a strain-based approach is developed that can capture cyclic degradation under variable loading cycles, including different amplitudes and loading frequencies. Second, a multiscale approach in time is presented to enable structural computations of fatigue failure with a reduced computational effort. The damage rate within the short time scale corresponding to a single cycle is computed based on a Fourier-based approach. This evolution equation is then solved on the long time scale using different time integration schemes.
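The long-scale integration can be pictured with the following sketch of an explicit adaptive cycle-jump scheme; the damage growth law and the step-control constant are hypothetical stand-ins for the Fourier-based single-cycle evaluation described above.

```python
import numpy as np

def damage_rate_per_cycle(d, amplitude):
    """Stand-in for the short-scale problem: resolving one loading cycle
    (in the talk via a Fourier-based approach) yields a damage rate dD/dN."""
    return 1e-5 * amplitude * (1.0 + 5.0 * d)   # hypothetical growth law

d, n_cycles, d_max = 0.0, 0.0, 0.99
while d < d_max:
    rate = damage_rate_per_cycle(d, amplitude=1.0)
    # Adaptive jump: limit the damage increment per jump to 0.01
    delta_n = min(0.01 / rate, 1.0e4)
    d = min(d + rate * delta_n, d_max)
    n_cycles += delta_n

print(f"fatigue life estimate: {n_cycles:.0f} cycles")
```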
Lifetime aspects of concrete structures, including fatigue failure, were traditionally only of minor importance. Because of the growing interest in fully exploiting the capacities of concrete, its fatigue failure under compression has become an issue. A variety of interacting phenomena, such as loss of prestress, degradation due to chemical reactions, or creep and shrinkage, influence the fatigue resistance. Failure due to cyclic loads is generally not instantaneous, but characterized by a steady damage accumulation. Therefore, a reliable numerical model to predict the performance of concrete over its lifetime is required, which accurately captures loading-order effects and full three-dimensional stress states.
Many constitutive models for concrete are currently available that are applicable to specific loading regimes, different time scales and different resolution scales.
However, a key limitation of those models is that they generally do not address issues related to fatigue on a structural level. Very few models in the literature reproduce the deterioration of concrete under repeated loading-unloading cycles. This is due to the computational effort necessary to explicitly resolve every cycle, which exceeds the currently available computational resources. The limitation can only be overcome by the application of multiscale methods in time.
The objective of the paper is the development of numerical methods for the simulation of concrete under fatigue loading using temporal multiscale methods.
First, a continuum damage model for concrete is developed with a focus on fatigue under compressive stresses. This includes the possibility to model stress redistributions and capture size effects. In contrast to cycle-based approaches, where damage is accumulated based on the number of full stress cycles, a strain-based approach is developed that can capture cyclic degradation under variable loading cycles, including different amplitudes and loading frequencies. The model is designed to represent failure under static loading as a particular case of fatigue failure after a single loading cycle. As a consequence, most of the material parameters can be deduced from static tests. Only a limited set of additional constitutive parameters is required to accurately describe the evolution under fatigue loading. Another advantage of the proposed model is the possibility to directly incorporate other multi-physics effects, such as creep and shrinkage or thermal loading, on the constitutive level.
Second, a multiscale approach in time is presented to enable structural computations of fatigue failure with a reduced computational effort. The damage rate within the short time scale corresponding to a single cycle is computed based on a Fourier-based approach. This evolution equation is then solved on the long time scale using different implicit and explicit time integration schemes. Their performance and some limitations for specific loading regimes are discussed.
Finally, the developed methods will be validated and compared to experimental data.
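To illustrate the difference between the two families of schemes on the long time scale, the sketch below advances a hypothetical damage rate dD/dN with both a forward (explicit) and a backward (implicit) Euler step; the rate function and the jump size are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import brentq

def rate(d):
    """Hypothetical long-scale damage rate dD/dN, increasing with damage."""
    return 1e-5 * (1.0 + 5.0 * d)

def explicit_step(d, dn):
    """Forward Euler: evaluate the rate at the current damage state."""
    return d + dn * rate(d)

def implicit_step(d, dn):
    """Backward Euler: solve d_new = d + dn * rate(d_new) with a root finder."""
    return brentq(lambda d_new: d_new - d - dn * rate(d_new), d, 1.0)

d_exp = d_imp = 0.0
dn = 1000.0                      # jump 1000 cycles per long-scale step
for _ in range(30):
    d_exp = explicit_step(d_exp, dn)
    d_imp = implicit_step(d_imp, dn)

print(f"after {30 * dn:.0f} cycles: explicit D = {d_exp:.3f}, "
      f"implicit D = {d_imp:.3f}")
```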
Combination of model reduction and adaptive subset simulation for structural reliability problems
(2019)
A safe and robust design is a key criterion when building a structure or a component. Ensuring this criterion can either be performed by fulfilling prescribed safety margins, or by using a full probabilistic approach with a computation of the failure probability. The latter approach is particularly well suited for complex problems with an interaction of different physical phenomena that can be described in a numerical model. The bottleneck in this approach is the computational effort. Sampling methods such as Markov chain Monte Carlo are often used to evaluate the system reliability. Due to small failure probabilities (e.g. 10^-6) and complex physical models with an already extensive computational effort for a single set of parameters, these methods are prohibitively expensive.

The focus of this contribution is to demonstrate the advantages of combining model reduction techniques with variance-reducing adaptive sampling procedures. In the developed method, a modification of the adaptive subset simulation based on Papaioannou et al. 2015 is used and coupled with a limit state function based on Proper Generalized Decomposition (PGD) (Chinesta et al. 2011). In the subset simulation, the failure probability is expressed as a product of larger conditional failure probabilities. The intermediate failure events are chosen as a decreasing sequence. Instead of solving each conditional probability with a Markov chain approach, an importance sampling approach is used. It is shown that the accuracy of the estimation depends mainly on the number of samples in the last sub-problem.

For model reduction, the PGD approach is used to solve the structural problem a priori for a given parameter space (physical space plus all random parameters). The PGD approach results in an approximation of the problem output within a prescribed range of all input parameters (load factor, material properties, ...). The approximation of the solution in a separated form allows an evaluation of the limit state function within the sampling algorithm at almost no cost. This coupled PGD-adaptive subset simulation approach is used to estimate the failure probability of examples with different complexity. The convergence, the error propagation, as well as the reduction in computational time are discussed.
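A minimal subset simulation sketch in standard normal space is given below; the analytic limit state stands in for the point at which the coupled method would call the cheap PGD approximation, and the conditional sampling per level uses a simple preconditioned Crank-Nicolson move rather than the importance sampling variant developed in this contribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(u):
    """Hypothetical limit state in standard normal space (failure: g <= 0).
    The exact failure probability here is Phi(-3.5) ~= 2.33e-04."""
    return 3.5 - u.sum(axis=-1) / np.sqrt(u.shape[-1])

n, p0, dim = 2000, 0.1, 2            # samples per level, level probability
u = rng.standard_normal((n, dim))
gv = g(u)
p_f = 1.0

while True:
    level = np.quantile(gv, p0)      # intermediate failure threshold
    if level <= 0.0:                 # final level reached
        p_f *= np.mean(gv <= 0.0)
        break
    p_f *= p0
    seeds = u[gv <= level]           # seeds for the next conditional level
    steps = int(np.ceil(n / len(seeds)))
    u_new, g_new = [], []
    for s in seeds:                  # pCN chains restricted to {g <= level}
        x, gx = s, g(s)
        for _ in range(steps):
            # 0.8^2 + 0.6^2 = 1, so the proposal preserves N(0, I)
            cand = 0.8 * x + 0.6 * rng.standard_normal(dim)
            gc = g(cand)
            if gc <= level:          # accept only if still in the domain
                x, gx = cand, gc
            u_new.append(x.copy())
            g_new.append(gx)
    u, gv = np.array(u_new)[:n], np.array(g_new)[:n]

print(f"estimated P_f ~= {p_f:.2e}")
```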
A safe and robust performance is a key criterion when building and maintaining structures and components. Ensuring this criterion at different stages of the lifetime can be supported by applying continuous monitoring concepts. These usually serve multiple purposes, including the determination of material parameters for the design phase, the evaluation of the actual loading and environmental conditions (instead of using conservative estimates that are usually larger), and evaluating or predicting the true performance of the structure (thus decreasing the model bias). In this context, a digital twin of the structure has many benefits. It allows the introduction of virtual sensors to "measure" sensor information that is, e.g., inaccessible or unmeasurable. In order to efficiently use monitoring techniques in the context of a digital twin, it is important to consider the complete chain of information, including the choice of sensors, the data processing and structuring, the modelling assumptions, the numerical simulation, and finally the stochastic nature of the model prediction.

In this presentation, challenges in this context are discussed with a specific focus on Bayesian model updating of the digital twin, accounting for both parameter updates and the model bias that results from the limitations of the modelling assumptions. A bottleneck in this approach is the computational effort related to sampling methods such as Markov chain Monte Carlo, which require many evaluations of the forward model. An alternative to the expensive computation of the forward model for updating the digital twin is the combination with model reduction techniques such as the Proper Generalized Decomposition [1, 2]. The results are illustrated for several examples and scales, ranging from digital twins for material tests in the lab, over lab-scale structural digital twins, up to damage identification in field experiments.
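The computational benefit of coupling the updating with a reduced model can be sketched as follows: a random-walk Metropolis chain updates a single stiffness parameter, and every step calls only a cheap explicit surrogate standing in for the separated PGD form; the model, the synthetic data, and the omission of an explicit bias term are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate_deflection(theta):
    """Stand-in for a PGD-type reduced model: deflection as an explicit,
    cheap-to-evaluate function of the log-stiffness parameter theta."""
    return 5.0 / np.exp(theta)

# Synthetic sensor data around the 'true' parameter value
theta_true, sigma = 0.4, 0.05
y = surrogate_deflection(theta_true) + sigma * rng.standard_normal(20)

def log_post(theta):
    """Standard normal prior on theta plus a Gaussian likelihood."""
    r = y - surrogate_deflection(theta)
    return -0.5 * theta**2 - 0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: each step costs a surrogate call, not an FE solve
theta, lp, samples = 0.0, log_post(0.0), []
for _ in range(5000):
    cand = theta + 0.1 * rng.standard_normal()
    lp_c = log_post(cand)
    if np.log(rng.random()) < lp_c - lp:
        theta, lp = cand, lp_c
    samples.append(theta)

print(f"posterior mean of theta: {np.mean(samples[1000:]):.3f}")
```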