One of the most important goals in civil engineering is to guarantee the safety of constructions. National standards prescribe a required failure probability on the order of 10⁻⁶ (e.g., DIN EN 1990:2010-12). The estimation of these failure probabilities is the central task of structural reliability analysis. Generally, it is not possible to compute the failure probability analytically.
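For context, the quantity in question is commonly written as an integral of the joint input density over the implicitly defined failure domain (standard notation, not quoted from the paper):

```latex
P_f = \mathbb{P}\left[ g(\mathbf{X}) \le 0 \right]
    = \int_{\{\mathbf{x}\,:\,g(\mathbf{x}) \le 0\}} f_{\mathbf{X}}(\mathbf{x}) \,\mathrm{d}\mathbf{x}
```

Here g is the limit state function (g ≤ 0 indicating failure), X the vector of random inputs, and f_X its joint density; since the integration domain is only defined implicitly through g, a closed-form evaluation is generally out of reach.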
Therefore, simulation-based methods as well as approaches based on surrogate models or response surfaces have been developed. Nevertheless, these methods still require a few thousand evaluations of the structure, usually with finite element (FE) simulations, making reliability analysis computationally expensive for relevant applications.
The aim of this contribution is to increase the efficiency of structural reliability analysis by exploiting the advantages of model reduction techniques. Model reduction is a popular concept to decrease the computational effort of complex numerical simulations while maintaining reasonable accuracy. Coupling a reduced model with an efficient variance-reducing sampling algorithm significantly reduces the computational cost of the reliability analysis without a relevant loss of accuracy.
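As a rough illustration of the sampling ingredient, the sketch below estimates a small failure probability by importance sampling around a known design point; the limit state function, distributions, and design point are deliberately simple stand-ins (a reduced model as described in the abstract would take the place of g):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def g(x):
    # Illustrative limit state function in standard normal space
    # (a reduced-order structural model would be evaluated here instead);
    # failure is the event g(x) <= 0.
    return 3.5 - (x[..., 0] + x[..., 1]) / np.sqrt(2.0)

# Shift the sampling density to the design point (most likely failure
# point), assumed known here; in practice it is found adaptively.
design_point = np.array([3.5, 3.5]) / np.sqrt(2.0)

n = 10_000
samples = rng.standard_normal((n, 2)) + design_point

# Importance weights: nominal density over proposal density.
nominal = stats.multivariate_normal(mean=[0.0, 0.0]).pdf(samples)
proposal = stats.multivariate_normal(mean=design_point).pdf(samples)
weights = nominal / proposal

p_f = np.mean((g(samples) <= 0.0) * weights)
print(f"estimated P_f = {p_f:.2e}")  # exact value: Phi(-3.5) = 2.33e-4
```

Because the proposal density concentrates samples near the failure surface, a few thousand samples suffice where crude Monte Carlo would need millions for a probability of 10⁻⁴ or below.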
One of the most important goals in civil engineering is to guarantee the safety of constructions. Standards prescribe a required failure probability on the order of 10⁻⁴ to 10⁻⁶. Generally, it is not possible to compute the failure probability analytically.
Therefore, many approximation methods have been developed to estimate the failure probability. Nevertheless, these methods still require a large number of evaluations of the investigated structure, usually finite element (FE) simulations, making full probabilistic design studies infeasible for relevant applications. The aim of this paper is to increase the efficiency of structural reliability analysis by means of reduced-order models. The developed method paves the way for using full probabilistic approaches in industrial applications. In the proposed PGD (Proper Generalized Decomposition) reliability analysis, the solution of the structural computation is obtained directly by evaluating the PGD solution for a specific parameter set, without computing a full FE simulation. Additionally, an adaptive importance sampling scheme is used to minimize the total number of required samples. The accuracy of the failure probability depends on the accuracy of the PGD model (mainly influenced by the mesh discretization and the mode truncation) as well as on the number of samples in the sampling algorithm. Therefore, a general iterative PGD reliability procedure is developed to automatically verify the accuracy of the computed failure probability. It is based on a goal-oriented refinement of the PGD model around the adaptively approximated design point. The methodology is applied and evaluated for 1D and 2D examples. The computational savings compared to the method based on an FE model are shown, and the influence of the accuracy of the PGD model on the failure probability is studied.
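The property that makes the direct evaluation cheap is the separated form of the PGD solution; in generic notation (the concrete ansatz in the paper may differ):

```latex
u(\mathbf{x}, p_1, \dots, p_d) \;\approx\; u_M(\mathbf{x}, p_1, \dots, p_d)
  \;=\; \sum_{i=1}^{M} F_i^{0}(\mathbf{x}) \prod_{j=1}^{d} F_i^{j}(p_j)
```

Once the modes F_i^j have been computed offline, evaluating the response for a new parameter set (p_1, ..., p_d) only requires evaluating and multiplying low-dimensional functions, so each sample of the reliability analysis costs a negligible fraction of an FE solve.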
One of the main challenges regarding our civil infrastructure is its efficient operation over the complete design lifetime while complying with standards and safety regulations. Thus, costs for maintenance or replacement must be optimized while still ensuring specified safety levels. This requires an accurate estimate of the current state as well as a prognosis for the remaining useful life. Currently, this is often done by regular manual or visual inspections at constant intervals. However, the critical sections are often not directly accessible or cannot be instrumented at all. Model-based approaches, in which a digital twin of the structure is set up, can be used instead. For these approaches, a key challenge is the calibration and validation of the numerical model based on uncertain measurement data. The aim of this contribution is to increase the efficiency of model updating by using the advantages of model reduction (Proper Generalized Decomposition, PGD) and to apply the derived method to the efficient identification of a random stiffness field of a real bridge.
Numerical models built as virtual twins of a real structure (digital twins) are considered the future of monitoring systems. Their setup requires the estimation of unknown parameters, which are not directly measurable. Stochastic model identification is then essential, but it can be computationally costly and even infeasible when it comes to real applications. Efficient surrogate models, such as reduced-order models, can be used to overcome this limitation and provide real-time model identification. Since their numerical accuracy influences the identification process, the optimal surrogate not only has to be computationally efficient, but also accurate with respect to the identified parameters. This work aims at automatically controlling the numerical accuracy of a Proper Generalized Decomposition (PGD) surrogate for parameter identification. For this purpose, a sequence of Bayesian model identification problems, in which the surrogate's accuracy is iteratively increased, is solved with a variational Bayesian inference procedure. The effect of the numerical accuracy on the resulting posterior probability density functions is analyzed through two metrics: the Bayes factor (BF) and a criterion based on the Kullback-Leibler (KL) divergence. The approach is demonstrated on a simple test example and on two structural problems. The latter aim at identifying spatially distributed damage, modeled with a PGD surrogate extended to log-normal random fields, in two different structures: a truss with synthetic data and a small reinforced bridge with real measurement data. For all examples, the evolution of the KL-based and BF criteria with increasing accuracy is shown, and their convergence indicates when model refinement no longer affects the identification results.
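Since variational Bayesian inference typically returns a (multivariate) Gaussian approximation of the posterior, a KL-based stopping criterion between two successive refinement levels can be evaluated in closed form; a minimal sketch with illustrative numbers and tolerance (not values from the paper):

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)
        + diff @ cov1_inv @ diff
        - k
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

# Posterior approximations from two successive PGD refinement levels
# (illustrative numbers only):
mu_coarse, cov_coarse = np.array([1.00, 0.50]), np.diag([0.040, 0.010])
mu_fine,   cov_fine   = np.array([1.02, 0.49]), np.diag([0.038, 0.011])

tol = 1e-2  # assumed tolerance on the change of the posterior
if kl_gaussian(mu_fine, cov_fine, mu_coarse, cov_coarse) < tol:
    print("posterior converged: stop refining the surrogate")
else:
    print("posterior still changing: refine the PGD surrogate further")
```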
Using digital twins for decision making is a very promising concept which combines simulation models with corresponding experimental sensor data in order to support maintenance decisions or to investigate the reliability. The quality of the prognosis strongly depends on both the data quality and the quality of the digital twin. The latter comprises the modeling assumptions as well as the correct parameters of these models. This article discusses the challenges when applying this concept to real measurement data for a demonstrator bridge in the lab, including the data management, the iterative development of the simulation model, and the identification/updating procedure using Bayesian inference with a potentially large number of parameters. The investigated scenarios include both the iterative identification of the structural model parameters and scenarios related to damage identification. In addition, the article aims at providing all models and data in a reproducible way such that other researchers can use this setup to validate their methodologies.
FAIR (findable, accessible, interoperable, and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the FAIR data principles is required in many research projects. The definition of data being FAIR is very general, and when implementing it for a specific application or project, or even setting up a standardized procedure within a working group, a company, or a research community, many challenges arise. In this contribution, an overview of our experience with different methods, tools, and procedures is given.
We begin with a motivation of potential use cases for the application of FAIR data with increasing complexity, starting from a reproducible research paper, over collaborative projects with multiple participants such as Round-Robin tests, up to data-based models within standardization codes, applications in machine learning, or parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data are discussed. On the one hand, this includes a discussion of how to define actual data structures and in particular metadata schemas; on the other hand, two different systems for storing the data are presented. The first one is openBIS, an open-source lab notebook and PostgreSQL-based data management system. The second option is a semantic representation using RDF-based ontologies for the domain of interest.
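To make the second option concrete, the snippet below shows what an RDF representation of a single measurement could look like using the rdflib Python library; the namespace and property names are made up for illustration and are not the ontology used in the project:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical domain namespace (illustrative, not the project ontology).
EX = Namespace("https://example.org/rheology#")

g = Graph()
g.bind("ex", EX)

# Describe one rheological measurement as a set of triples.
m = EX["measurement_001"]
g.add((m, RDF.type, EX.ViskomatMeasurement))
g.add((m, EX.material, Literal("printable mortar")))
g.add((m, EX.ambientTemperature, Literal(20.0, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```

The benefit over a plain relational schema is that such triples can be queried together with domain ontologies (e.g., via SPARQL), which directly supports the interoperability and reusability aspects of FAIR.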
In a third section, requirements for workflow tools to automate data processing are discussed, and their integration into reproducible data analyses is presented, with an outlook on the information that needs to be stored as metadata in the database.
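As a sketch of the kind of metadata such a workflow step could write back to the database, consider the following; the field names are illustrative, not a fixed schema from the contribution:

```python
import hashlib
import json
import sys
from datetime import datetime, timezone

def provenance_record(data: bytes, source_name: str, script_name: str) -> dict:
    """Metadata a workflow step could store next to its result in the
    database (illustrative field names, not a fixed schema)."""
    return {
        "source_dataset": source_name,
        "source_sha256": hashlib.sha256(data).hexdigest(),
        "processing_script": script_name,
        "python_version": sys.version.split()[0],
        "executed_at": datetime.now(timezone.utc).isoformat(),
    }

raw = b"time,torque\n0,0.8\n10,1.4\n"  # placeholder measurement export
print(json.dumps(provenance_record(raw, "viskomat_run_01.csv", "calibrate.py"), indent=2))
```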
Finally, the presented procedures are demonstrated for the calibration of a temperature-dependent constitutive model for additively manufactured mortar. Metadata schemas for a rheological measurement setup are derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behaviour, the automatic workflow that uses the stored data for model parameter estimation is demonstrated.
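A minimal sketch of the final calibration step; the abstracts describe Bayesian inference, but a plain least-squares fit stands in here, and the data points, model form, and parameter names are illustrative placeholders rather than values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder for the retrieval step: in the described workflow the
# measurement series and its metadata would be queried from openBIS.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])  # rest time [min]
tau = np.array([0.8, 1.4, 2.1, 2.9, 3.5, 4.2, 4.8])      # yield stress [kPa]

def build_up(t, tau0, a_thix):
    # Simplified linear structural build-up model (illustrative only;
    # the calibrated model in the paper is temperature dependent).
    return tau0 + a_thix * t

params, cov = curve_fit(build_up, t, tau, p0=[1.0, 0.1])
print("tau0 = %.3f kPa, A_thix = %.4f kPa/min" % tuple(params))
print("parameter standard deviations:", np.sqrt(np.diag(cov)))
```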
Despite the advances in hardware and software techniques, standard numerical methods fail to provide real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additively manufactured structure, a simplified experiment in the form of a bead-on-plate experiment is performed to get a first insight into the process and to set the parameters suitably. In this work, a reduced-order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation, required multiple times in parameter estimation, control, and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties in separating space and time, a key ingredient of PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems, allowing the derivation of an efficient separated representation of the temperature. The results are verified against a standard finite element model, showing excellent agreement. The reduced-order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for welding experiments using synthetic as well as real measurement data.
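The separation difficulty can be seen from the generic forms involved (notation here is generic, not quoted from the paper): the sought PGD ansatz is a sum of products of space, time, and parameter modes, while a moving source depends on space and time only through the combination x − v t:

```latex
T(\mathbf{x}, t, \boldsymbol{\mu}) \;\approx\; \sum_{i=1}^{M} X_i(\mathbf{x})\,\Theta_i(t) \prod_{j} M_i^{j}(\mu_j),
\qquad
q(\mathbf{x}, t) \;=\; \hat{q}\big(\mathbf{x} - \mathbf{v}\,t\big)
```

A source of this traveling form has no short exact expansion as a sum of products of a pure space function and a pure time function (for a Gaussian source, the cross term exp(c·x t) alone would require infinitely many modes), which is why a dedicated space-time separation strategy is needed.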
FenicsXConcrete (2023)
Structural build-up describes the stability and early-age strength development of fresh mortar used in 3D printing. It is influenced by several factors, namely the composition of the printable material, the printing regime, and the ambient conditions. The existing modelling approaches for structural build-up usually define the model parameters for a specific material composition without considering the influence of the ambient conditions. The goal of this contribution is to explicitly include the temperature dependency in the modelling approach. Temperature changes have a significant impact on the structural build-up process: an increase in temperature leads to a faster dissolution of cement phases and accelerates hydration. The proposed extended model includes the temperature dependency using the Arrhenius theory. The new model parameters are successfully calibrated based on Viskomat measurement data using Bayesian inference. Furthermore, a higher impact of the temperature in the re-flocculation stage than in the structuration stage is observed.
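A minimal sketch of an Arrhenius-type temperature scaling of stage-specific build-up rates; the activation energy and rate values are illustrative placeholders, not the parameters calibrated in the paper:

```python
import numpy as np

R = 8.314  # universal gas constant [J/(mol K)]

def arrhenius_factor(T, T_ref=293.15, E_a=40e3):
    """Arrhenius rate scaling relative to a reference temperature.

    E_a (activation energy, J/mol) is an illustrative value; in the paper
    the corresponding parameters are calibrated from Viskomat data.
    """
    return np.exp(-E_a / R * (1.0 / T - 1.0 / T_ref))

# Build-up rates at the reference temperature (made-up values): a faster
# re-flocculation stage followed by a slower structuration stage.
A_reflocc, A_struct = 0.50, 0.05  # [kPa/min]

for T in (283.15, 293.15, 303.15):  # 10, 20, 30 degrees C
    f = arrhenius_factor(T)
    print(f"T = {T - 273.15:4.1f} C: rate factor {f:.2f} -> "
          f"re-flocc {A_reflocc * f:.3f}, struct {A_struct * f:.4f} kPa/min")
```

At the reference temperature the factor equals one; higher temperatures accelerate both stages, consistent with the faster dissolution of cement phases described above.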