Simulation-based digital twins have emerged as a powerful tool for evaluating the mechanical response of bridges. As virtual representations of physical systems, digital twins can provide a wealth of information that complements traditional inspection and monitoring data. By incorporating virtual sensors and predictive maintenance strategies, they have the potential to improve our understanding of the behavior and performance of bridges over time. However, as bridges age and undergo regular loading and extreme events, their structural characteristics change, often differing from the predictions of their initial design. Digital twins must be continuously adapted to reflect these changes. In this article, we present a Bayesian framework for updating simulation-based digital twins in the context of bridges. Our approach integrates information from measurements to account for inaccuracies in the simulation model and quantify uncertainties. Through its implementation and assessment, this work demonstrates the potential for digital twins to provide a reliable and up-to-date representation of bridge behavior, helping to inform decision-making for maintenance and management.
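The abstract does not spell out the updating equations; purely as an illustration of the kind of Bayesian update it describes, here is a minimal random-walk Metropolis sketch in Python that updates a single stiffness parameter of a hypothetical beam model from noisy deflection measurements (the model, values, and noise level are all invented, not taken from the paper):

```python
import numpy as np

# Hypothetical forward model: midspan deflection of a simply supported
# beam under a point load, parameterized by the Young's modulus E that
# we want to update from measurements.
def forward(E, F=1.0e5, L=20.0, I=0.05):
    return F * L**3 / (48.0 * E * I)

rng = np.random.default_rng(0)
sigma_meas = 1.0e-4                                   # sensor noise [m]
y_obs = forward(30.0e9) + rng.normal(0.0, sigma_meas, size=5)

def log_posterior(E):
    if E <= 0.0:
        return -np.inf
    # Gaussian likelihood around the model prediction ...
    log_lik = -0.5 * np.sum((y_obs - forward(E)) ** 2) / sigma_meas**2
    # ... and a wide log-normal prior on the stiffness.
    log_prior = -0.5 * (np.log(E / 30.0e9) / 0.5) ** 2
    return log_lik + log_prior

# Random-walk Metropolis: the chain quantifies the remaining parameter
# uncertainty of the updated digital twin.
E, lp, samples = 25.0e9, log_posterior(25.0e9), []
for _ in range(20000):
    E_prop = E + 5.0e8 * rng.normal()
    lp_prop = log_posterior(E_prop)
    if np.log(rng.random()) < lp_prop - lp:
        E, lp = E_prop, lp_prop
    samples.append(E)
print(f"posterior mean E = {np.mean(samples[5000:]):.3e} Pa")
```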
The rapidly growing importance of FAIR and open data for quality assurance, but also for the reusability of data and for scientific progress, creates an enormous need for action in research and development. Accordingly, a wide range of ambitious activities is currently under way, e.g. concerning the creation of ontologies and knowledge graphs. The know-how is evolving rapidly, and implementation approaches emerge in parallel in different disciplines and with different objectives, resulting in rather heterogeneous approaches.
This publication focuses on work that is currently being advanced as a holistic approach for materials data within the digitalization initiative „Plattform MaterialDigital". The authors address building-material-related aspects in the joint project „LeBeDigital - Lebenszyklus von Beton" (life cycle of concrete). The objective is the digital description of the material behaviour of concrete over the complete production process of a precast element, integrating data and models within a workflow for probabilistic material and process optimization.
We report on the approach taken and the experience gained, while also drawing attention to the often underestimated complexity of the topic.
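The abstract mentions ontologies and knowledge graphs without giving a schema; as a purely hypothetical sketch of the idea, concrete test data could be recorded as RDF triples like this (the namespace and property names are invented for illustration and are not those of Plattform MaterialDigital or LeBeDigital):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical namespace -- NOT the actual project ontology.
EX = Namespace("https://example.org/concrete#")

g = Graph()
g.bind("ex", EX)

# A compressive-strength test on one specimen, expressed as triples.
specimen = EX["specimen_001"]
test = EX["test_001"]
g.add((specimen, RDF.type, EX.ConcreteSpecimen))
g.add((test, RDF.type, EX.CompressiveStrengthTest))
g.add((test, EX.performedOn, specimen))
g.add((test, EX.strengthMPa, Literal(42.5, datatype=XSD.double)))
g.add((test, EX.ageDays, Literal(28, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```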
In recent years, the use of simulation-based digital twins for monitoring and assessment of complex mechanical systems has greatly expanded. Their potential to increase the information obtained from limited data makes them an invaluable tool for a broad range of real-world applications. Nonetheless, there usually exists a discrepancy between the predicted response and the measurements of the system once built. One of the main contributors to this difference, in addition to miscalibrated model parameters, is the model error. Quantifying this so-called model bias (as well as proper values for the model parameters) is critical for the reliable performance of digital twins. Model bias identification is ultimately an inverse problem where information from measurements is used to update the original model. Bayesian formulations can tackle this task. Including the model bias as a parameter to be inferred enables the use of a Bayesian framework to obtain a probability distribution that represents the uncertainty between the measurements and the model. Simultaneously, this procedure can be combined with a classic parameter updating scheme to account for the trainable parameters in the original model.
This study evaluates the effectiveness of different model bias identification approaches based on Bayesian inference methods. This includes more classical approaches such as direct parameter estimation using MCMC in a Bayesian setup, as well as more recent proposals such as stat-FEM or orthogonal Gaussian Processes. Their potential use in digital twins, generalization capabilities, and computational cost are extensively analyzed.
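As a loose illustration of the additive model-bias idea (a generic Gaussian process regression on the residual, not the specific stat-FEM or orthogonal-GP formulations evaluated in the study; all values are synthetic):

```python
import numpy as np

# Synthetic setup: a "true" response, a deliberately biased model, and
# noisy sensor readings at a few locations (all values invented).
x_sens = np.linspace(0.0, 1.0, 8)
truth = np.sin(2 * np.pi * x_sens)
model = 0.9 * np.sin(2 * np.pi * x_sens) + 0.1   # miscalibrated + biased
y_obs = truth + 0.02 * np.random.default_rng(1).normal(size=x_sens.size)

# Squared-exponential kernel for the bias GP.
def kernel(xa, xb, ell=0.2, s2=0.1):
    return s2 * np.exp(-0.5 * (xa[:, None] - xb[None, :])**2 / ell**2)

# GP regression on the residual y - model: the posterior mean is the
# inferred model bias, the posterior variance its uncertainty.
resid = y_obs - model
K = kernel(x_sens, x_sens) + 0.02**2 * np.eye(x_sens.size)
x_new = np.linspace(0.0, 1.0, 50)
Ks = kernel(x_new, x_sens)
bias_mean = Ks @ np.linalg.solve(K, resid)
bias_var = kernel(x_new, x_new).diagonal() - np.einsum(
    "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
```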
In materials and component research, artificial intelligence methodologies will lead to massive upheavals in the coming years. The processes of material development, material processing, lifetime prediction and material characterization will change significantly. By combining AI methods and new forms of knowledge representation, the data-based management of product life cycles will take on new qualities. To address this emerging field of research, Fraunhofer IWM set up the online workshop »AI Methods for Fatigue Behavior Assessment and Component Lifetime Prediction«.
In this paper, the imperialist competitive optimization algorithm is improved by damage functions to detect damage in a model steel frame test structure for offshore applications. A finite element model of the test structure is developed, validated and updated using the proposed method. As there are far more design variables, which are related to the stiffness of each finite element, than measured mode shapes, the problem is underdetermined. Therefore, damage functions are used to regularize the problem and decrease the number of design variables. A new objective function is proposed for the algorithm using the mode shapes and their l1 norm. The first ten measured mode shapes are used to solve the problem. It is shown that the proposed method is capable of predicting the damage locations with acceptable accuracy.
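The abstract does not state the objective function explicitly; a generic sketch of the kind of mode-shape misfit with an l1 term it describes might look as follows (the normalization and weighting are assumptions, not the paper's exact definition):

```python
import numpy as np

def objective(theta, phi_meas, model_modes, alpha=0.01):
    """Hypothetical damage-identification objective.

    theta       -- stiffness-reduction design variables (one per damage
                   function, not per finite element)
    phi_meas    -- list of measured mode shapes
    model_modes -- callable returning model mode shapes for given theta
    alpha       -- weight of the sparsity-promoting l1 term
    """
    phi_model = model_modes(theta)
    # Mode-shape residual over the first ten measured modes ...
    misfit = sum(
        np.linalg.norm(pm / np.linalg.norm(pm) - pc / np.linalg.norm(pc))**2
        for pm, pc in zip(phi_meas, phi_model)
    )
    # ... plus an l1 penalty that favours few damaged regions.
    return misfit + alpha * np.linalg.norm(theta, 1)
```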
A safe and robust performance is a key criterion when building and maintaining structures and components. Ensuring this criterion at different stages of the lifetime can be supported by applying continuous monitoring concepts. The latter can usually serve multiple purposes, including the determination of material parameters for the design phase, the evaluation of the actual loading/environmental conditions (instead of using conservative estimates that are usually larger) and evaluating or predicting the true performance of the structure (thus decreasing the model bias). In this context, a digital twin of the structure has many benefits. In addition, it allows the introduction of virtual sensors to “measure” information that is, e.g., inaccessible or unmeasurable. In the limit, the remaining useful life of a structure can be interpreted as a property that can be “measured” indirectly via the numerical model in combination with real sensor data. In order to efficiently use monitoring techniques in the context of a digital twin, it is important to consider the complete chain of information, including the choice of sensors, the data processing and structuring, the modelling assumptions, the numerical simulation and, finally, the stochastic nature of the model prediction. In this presentation, challenges in this context are discussed with a specific focus on Bayesian model updating of the digital twin, accounting for both parameter updates and the model bias that results from the limitations of the modelling assumptions. A bottleneck in this approach is the computational effort related to sampling methods such as Markov chain Monte Carlo, which require many evaluations of the forward model. An alternative to the expensive computation of the forward model for updating the digital twin is the combination with model reduction techniques such as the Proper Generalized Decomposition. The results are illustrated for several examples and scales, ranging from digital twins for material tests in the lab, over lab-scale structural digital twins, up to damage identification in field experiments.
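The talk names Proper Generalized Decomposition as the reduction technique; as a purely illustrative sketch, a PGD-style surrogate stores the solution as a sum of separated modes, so that evaluating it for a new parameter value costs a few vector operations instead of a full forward solve (the shapes, mode counts, and random modes below are invented):

```python
import numpy as np

# Offline stage (assumed precomputed): separated representation
# u(x, theta) ~ sum_i F_i(x) * g_i(theta), stored as spatial modes F
# and parametric mode functions g_i.
n_modes, n_dofs = 10, 500
rng = np.random.default_rng(2)
F = rng.normal(size=(n_modes, n_dofs))             # spatial modes F_i(x)
g = [np.polynomial.Chebyshev(rng.normal(size=4))   # parametric modes g_i
     for _ in range(n_modes)]

def surrogate(theta):
    """Online stage: evaluate the separated representation for one
    parameter value -- cheap enough to sit inside an MCMC loop."""
    weights = np.array([gi(theta) for gi in g])
    return weights @ F

u = surrogate(0.3)   # full-field response for theta = 0.3
```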
A three-phase transport model for high-temperature concrete simulations validated with X-ray CT data
(2021)
Concrete exposure to high temperatures induces thermo-hygral phenomena, causing water phase changes, buildup of pore pressure and vulnerability to spalling. In order to predict these phenomena under various conditions, a three-phase transport model is proposed. The model is validated on X-ray CT data up to 320 °C, showing good agreement of the temperature profiles and moisture changes. A dehydration description, traditionally derived from thermogravimetric analysis, was replaced by a formulation based on data from neutron radiography. In addition, previous approaches, which treat porosity and dehydration evolution as independent processes, do not fulfil the solid mass balance. As a consequence, a new formulation is proposed that introduces the porosity as an independent variable, ensuring that this balance is satisfied.
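The abstract refers to the solid mass balance without stating it; in generic notation (not necessarily the paper's), a balance of the type it implies reads, with solid density $\rho_s$, porosity $\phi$, and a dehydration source term $\dot m_{\mathrm{dehydr}}$:

```latex
\frac{\partial}{\partial t}\bigl[(1-\phi)\,\rho_s\bigr] = -\dot m_{\mathrm{dehydr}}
```

Prescribing porosity and dehydration independently generally violates this equation, which motivates treating the porosity as an independent variable.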
Numerical models built as virtual twins of a real structure (digital twins) are considered the future of monitoring systems. Their setup requires the estimation of unknown parameters, which are not directly measurable. Stochastic model identification is then essential, which can be computationally costly and even unfeasible when it comes to real applications. Efficient surrogate models, such as reduced-order methods, can be used to overcome this limitation and provide real time model identification. Since their numerical accuracy influences the identification process, the optimal surrogate not only has to be computationally efficient, but also accurate with respect to the identified parameters. This work aims at automatically controlling the Proper Generalized Decomposition (PGD) surrogate’s numerical accuracy for parameter identification. For this purpose, a sequence of Bayesian model identification problems, in which the surrogate’s accuracy is iteratively increased, is solved with a variational Bayesian inference procedure. The effect of the numerical accuracy on the resulting posterior probability density functions is analyzed through two metrics, the Bayes Factor (BF) and a criterion based on the Kullback-Leibler (KL) divergence. The approach is demonstrated by a simple test example and by two structural problems. The latter aims to identify spatially distributed damage, modeled with a PGD surrogate extended for log-normal random fields, in two different structures: a truss with synthetic data and a small, reinforced bridge with real measurement data. For all examples, the evolution of the KL-based and BF criteria for increased accuracy is shown and their convergence indicates when model refinement no longer affects the identification results.
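As a small numerical illustration of the KL-based criterion (the Bayes factor criterion is not sketched here), and under the simplifying assumption of Gaussian posteriors, which the paper does not necessarily make, the divergence between posteriors from successive surrogate refinements has a closed form:

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) ) in closed form."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

# Hypothetical 1D posteriors (mean, std) for increasing PGD accuracy.
posteriors = [(1.30, 0.20), (1.12, 0.11), (1.10, 0.10), (1.099, 0.10)]

# Stop refining once the posterior no longer changes appreciably.
for (m0, s0), (m1, s1) in zip(posteriors, posteriors[1:]):
    print(f"KL to next refinement level: {kl_gauss(m1, s1, m0, s0):.4f}")
```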
Numerical simulators, such as finite element models, have become increasingly capable of predicting the behaviour of structures and components owing to more sophisticated underlying mathematical models and advanced computing power. A common challenge lies, however, in calibrating these models in terms of their unknown/uncertain parameters. When measurements exist, this can be achieved by comparing the model response against measured data. Besides uncertain model parameters, phenomena like damage can give rise to further uncertainties; in particular, quasi-brittle materials, like concrete, experience damage in a heterogeneous manner due to various imperfections, e.g. in geometry and boundary conditions. This hampers an accurate prediction of the damaged behaviour of real structures that comprise such materials.
In this study, which draws from a data-driven approach, we use the force version of the finite element model updating method (FEMU-F) to incorporate measured displacements into the identification of the damage parameters, in order to cope with heterogeneity. In this method, instead of conducting a forward evaluation of the model and comparing the model response (displacements) against the data, we impose displacements on the model and compare the resulting force residuals with measured reaction forces. To account for uncertainties in the measurement of displacements, we endow this approach with a penalty term that reflects the discrepancy between measured and imposed displacements, where the latter are treated as unknown random variables to be identified as well. A variational Bayesian approach is used as an approximating tool for computing the posterior parameters. The underlying damage model considered in this work is a gradient-enhanced damage model.
We first establish the identification procedure through two virtual examples, where synthetic data (displacements) are generated at a spatially dense set of points over the domain. The procedure is then validated on an experimental case study, namely a three-point bending experiment with displacement measurements resulting from a digital image correlation (DIC) analysis.
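A schematic of the FEMU-F idea described above, simplified to a linear static system (the quadratic penalty below is an assumption for illustration, not the paper's exact functional):

```python
import numpy as np

def femu_f_cost(theta, u_hat, u_meas, f_meas, assemble_K, beta=1.0):
    """Sketch of a FEMU-F style cost with displacement penalty.

    theta      -- damage/material parameters entering the stiffness
    u_hat      -- imposed displacements (treated as unknowns too)
    u_meas     -- measured displacements (e.g. from DIC)
    f_meas     -- measured reaction forces
    assemble_K -- callable returning the stiffness matrix K(theta)
    beta       -- penalty weight on the displacement discrepancy
    """
    K = assemble_K(theta)
    # Force residual: impose displacements, compare resulting forces ...
    r_force = K @ u_hat - f_meas
    # ... plus a penalty for deviating from the measured displacements.
    r_disp = u_hat - u_meas
    return r_force @ r_force + beta * (r_disp @ r_disp)
```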
Concrete is one of the most important building materials worldwide. The safety of structures built from concrete is of utmost importance in daily life. As a consequence, accurate predictions of the structural behavior over the entire lifetime of concrete structures are required to ensure a prescribed safety level. A lack of exact models and/or stochastically varying constitutive parameters is compensated by large safety factors.
The nonlinear structural performance is strongly related to the constitutive behavior of concrete. Arbitrarily complex models can be used to describe the macroscopic constitutive behavior of concrete. The parameters in these models often lack any physical meaning. Consequently, the fitting can only be performed by an inverse analysis. In contrast, models on finer scales are able to simulate the physical phenomena more accurately and are thus better suited to understand the failure mechanisms. In addition, the macroscopically observed strong nonlinearities can at least partially be explained by the direct modeling of the material heterogeneities on finer scales.
The presentation discusses several phenomena that are strongly related to the internal microstructure of concrete. This includes the discrepancy between the unique results of a numerical model and the stochastic scatter observed in real experiments. A short discussion is given on the generation of random mesoscale geometries, which model aggregates and mortar matrix explicitly, and on random fields. The strong nonlinearities, especially for stresses close to the peak strength, are usually the result of failure in the mortar matrix or the interfacial transition zone, whereas the aggregates are inert and can often be modeled accurately by a linear elastic model. The different constitutive properties lead to eigenstresses that strongly influence the macroscopic behavior. In addition, this effect is even more pronounced when dealing with multiphysics phenomena such as drying, creep and shrinkage, fatigue or thermal problems. It will be demonstrated for several examples that simple models on the fine scale can be superimposed and coupled to obtain a macroscopically nonlinear behavior, where the superposition principle no longer holds. Finally, a short discussion on upscaling techniques to couple mesoscale models with large-scale structural problems is given.
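As an illustrative sketch of random mesoscale geometry generation, here is a simple random sequential addition of circular aggregates in 2D (the radii, counts, and rejection rule are invented for illustration, not the presentation's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

def place_aggregates(box=100.0, radii=(8, 6, 4), n_per_radius=(6, 12, 30),
                     max_tries=10000):
    """Random sequential addition: drop circles largest-first, rejecting
    any placement that overlaps an already placed aggregate."""
    placed = []  # (x, y, r)
    for r, n in zip(radii, n_per_radius):
        count, tries = 0, 0
        while count < n and tries < max_tries:
            tries += 1
            x, y = rng.uniform(r, box - r, size=2)
            if all((x - px)**2 + (y - py)**2 >= (r + pr)**2
                   for px, py, pr in placed):
                placed.append((x, y, r))
                count += 1
    return placed

aggregates = place_aggregates()
volume_fraction = sum(np.pi * r**2 for _, _, r in aggregates) / 100.0**2
```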
In this paper, a new methodology based on the Hill–Mandel lemma in an FE² sense is proposed that is able to deal with localized deformations. This is achieved by decomposing the displacement field of the fine scale model into a homogeneous part, fluctuations, and a cracking part based on additional degrees of freedom (X¹), namely the crack opening in normal and tangential directions. Based on this decomposition, the Hill–Mandel lemma is extended to relate coarse and fine scale energies using the assumption of separation of scales, such that the fine scale model is not required to have the same size as the corresponding macroscopic integration point. In addition, a procedure is introduced to mimic periodic boundary conditions in the linear elastic range by adding additional shape functions for the boundary nodes that represent the difference between periodic boundary conditions and pure displacement boundary conditions due to the same macroscopic strain. In order to decrease the computational effort, an adaptive strategy is proposed that allows different macroscopic integration points to be resolved at different levels on the fine scale.
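For reference, the classical Hill–Mandel macro-homogeneity condition that the paper extends equates the macroscopic stress power with the volume average of its fine-scale counterpart (standard notation, not necessarily the paper's):

```latex
\bar{\boldsymbol{\sigma}} : \dot{\bar{\boldsymbol{\varepsilon}}}
  \;=\; \frac{1}{|\Omega|} \int_{\Omega} \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}} \,\mathrm{d}\Omega
```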
A regularized model for impact in explicit dynamics applied to the split Hopkinson pressure bar
(2016)
In the numerical simulation of impact phenomena, artificial oscillations can occur due to an instantaneous change of velocity in the contact area. In this paper, a nonlinear penalty regularization is used to avoid these oscillations. A particular focus is the investigation of higher order methods in space and time to increase the computational efficiency. The spatial discretization is realized by higher order spectral element methods that are characterized by a diagonal mass matrix. The time integration scheme is based on a half-explicit Runge–Kutta scheme of fourth order. For the conditionally stable scheme, the critical time step is influenced by the penalty regularization. A framework is presented to adjust the penalty stiffness and the time step for a specific mesh to avoid oscillations. The methods presented in this paper are applied to 1D simulations of a split Hopkinson pressure bar, which is commonly used for the investigation of materials under dynamic loading.
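A rough sketch of the two coupled ingredients, nonlinear penalty force and stability limit (the cubic force law and the single-DOF estimate below are generic textbook forms used for illustration, not the paper's specific regularization or the stability bound of the half-explicit Runge–Kutta scheme):

```python
import numpy as np

def penalty_force(gap, k_pen, p=3):
    """Nonlinear penalty contact force: zero for open gaps, a smooth
    polynomial push-back for penetration (gap < 0)."""
    return k_pen * np.maximum(-gap, 0.0)**p

def critical_dt(m_node, k_eff, safety=0.9):
    """Explicit schemes are conditionally stable: the tangent penalty
    stiffness k_eff raises the highest frequency and thus lowers the
    admissible time step (simple single-DOF estimate)."""
    omega_max = np.sqrt(k_eff / m_node)
    return safety * 2.0 / omega_max
```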
Constitutive modeling of creep-fatigue interaction for normal strength concrete under compression
(2015)
Conventional approaches to model fatigue failure are based on a characterization of the lifetime as a function of the loading amplitude. The Wöhler diagram in combination with a linear damage accumulation assumption predicts the lifetime for different loading regimes. Using this phenomenological approach, the evolution of damage and inelastic strains and a redistribution of stresses cannot be modeled. The gradual degradation of the material is assumed not to alter the stress state. Using the Palmgren–Miner rule for damage accumulation, order effects resulting from the non-linear response are generally neglected.
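For reference, the Palmgren–Miner rule mentioned above predicts failure once the summed cycle ratios reach unity, with $n_i$ cycles applied at amplitude level $i$ and $N_i$ the corresponding Wöhler lifetime:

```latex
D \;=\; \sum_i \frac{n_i}{N_i} \;\geq\; 1
```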
In this work, a constitutive model for concrete using continuum damage mechanics is developed. The model includes rate-dependent effects and realistically reproduces gradual performance degradation of normal strength concrete under compressive static, creep and cyclic loading in a unified framework. The damage evolution is driven by inelastic deformations and captures strain rate effects observed experimentally. Implementation details are discussed. Finally, the model is validated by comparing simulation and experimental data for creep, fatigue and triaxial compression.