7.7 Modellierung und Simulation
3D concrete printing is an innovative construction technology offering the potential to enable an efficient production of individual structures with less consumption of resources. The technology is expected to shape future construction practice, automate the building process and help reach the climate goals in civil engineering. From the design of a structure to the printed component, many individual steps based on different software are required, which must be repeated for each new or changed structure. First, the geometry of the structure is created in a CAD program. Second, the print path is defined in a slicer software creating the machine code for the printer (G-Code). Finally, the structure can be printed. Furthermore, a numerical model of the printed structure is necessary for process optimization and control. In that way, the number of test prints can be reduced, costs can be saved, and the component behavior can be predicted. For those purposes, an automated workflow is required that allows all steps, or individual steps, to be run without interacting with each individual software program. Furthermore, changes in parameters or the exchange of parts (using a different design or a different printer) must be possible in a simple manner. In the presented work, such an automated workflow is developed based on the example of a parametrized wall element for extrusion-based concrete printing. The investigated wall structure is parametrized using the global geometry parameters: height, width, thickness, radius, kind of infill structure (honeycomb, zig-zag) and number of repeated infills. All steps mentioned above are implemented via Python interfaces using pydoit as the workflow tool. General interfaces with prescribed input and output files are defined, allowing adaptations for different software and tools. The described workflow is tested by performing a test series investigating the influence of the infill structure on the mechanical properties of the test walls.
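The following dodo.py sketch only illustrates how such a pydoit-based chain of CAD, slicing and simulation steps could be wired together through prescribed input and output files; all script names, file names and parameter values are hypothetical and not taken from the actual implementation.

```python
# dodo.py -- hypothetical pydoit tasks chaining geometry creation, slicing and simulation.
# Script and file names are placeholders, not the tools used in the published workflow.

WALL_PARAMS = {"height": 2.0, "width": 1.0, "thickness": 0.05,
               "infill": "honeycomb", "n_infills": 4}


def task_geometry():
    """Create the parametrized CAD geometry of the wall element."""
    return {
        "actions": [f"python create_geometry.py --infill {WALL_PARAMS['infill']} --out wall.step"],
        "targets": ["wall.step"],
    }


def task_slice():
    """Generate the print path (G-Code) from the geometry."""
    return {
        "actions": ["python slice_geometry.py wall.step wall.gcode"],
        "file_dep": ["wall.step"],
        "targets": ["wall.gcode"],
    }


def task_simulate():
    """Run the numerical simulation of the printing process based on the print path."""
    return {
        "actions": ["python simulate_print.py wall.gcode results/wall.xdmf"],
        "file_dep": ["wall.gcode"],
        "targets": ["results/wall.xdmf"],
    }
```

Running `doit` then re-executes only the steps whose inputs have changed, so a modified geometry parameter triggers re-slicing and re-simulation without manual interaction with the individual programs.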
Interlaboratory studies are common tools for collecting comparable data to implement standards for new materials or testing technologies. In the case of construction materials, these studies form the basis for recommendations and design codes. Depending on the study, the amount of data collected can be enormous, making manual handling and evaluation difficult. On the other hand, the importance of the FAIR (findable, accessible, interoperable, and reusable) principles for scientific data management, published by Wilkinson et al. in 2016, is constantly growing and changing the view on data usage.
The benefits of using data management tools such as data stores/repositories or electronic laboratory notebooks are many. Data is stored in a structured and accessible way (at least within a group) and data loss due to staff turnover is reduced. Such tools usually provide interfaces for data publishing and analysis. In this way, data can be reused years later to generate new knowledge with future insights. On the other hand, there are many challenges in setting up a data repository, such as selecting suitable software tools, defining the data structure, enabling data access and understanding by others, and ensuring maintenance.
This talk discusses the advantages and challenges of setting up and applying a data repository, using the interlaboratory study on the mechanical properties of printed concrete structures carried out in RILEM TC 304-ADC as an example. First, the definition of a suitable data structure including all information is discussed. The tool-dependent upload process is then described. Here, the data management system openBIS (open source software developed by ETH Zurich) is used. Since in most cases an open compute platform allowing access from different organisations is not possible or available due to data protection and maintenance issues, tool-independent export options are discussed and compared. Finally, the different query and analysis possibilities are demonstrated.
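As a purely illustrative sketch, such an upload to openBIS can be scripted via its Python API pybis; the server URL, object and dataset types, property codes and file names below are hypothetical.

```python
# Hypothetical sketch: registering an interlaboratory test result in openBIS via pybis.
from pybis import Openbis

o = Openbis("https://openbis.example.org")        # placeholder server URL
o.login("username", "password", save_token=True)

# Register an experimental step with structured metadata ...
sample = o.new_sample(
    type="EXPERIMENTAL_STEP",                      # assumed object type
    space="RILEM_TC_304_ADC",
    experiment="/RILEM_TC_304_ADC/PRINTED_CONCRETE/COMPRESSION_TESTS",
    props={"$name": "lab_A_specimen_01"},
)
sample.save()

# ... and attach the raw measurement file as a dataset.
dataset = o.new_dataset(type="RAW_DATA", sample=sample,
                        files=["lab_A_specimen_01.csv"])
dataset.save()

o.logout()
```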
In the field of computational science and engineering, workflows often entail the application of various software, for instance, for simulation or pre- and postprocessing. Typically, these components have to be combined in arbitrarily complex workflows to address a specific research question. In order for peer researchers to understand, reproduce and (re)use the findings of a scientific publication, several challenges have to be addressed. For instance, the employed workflow has to be automated and information on all used software must be available for a reproduction of the results. Moreover, the results must be traceable and the workflow documented and readable to allow for external verification and greater trust. In this paper, existing workflow management systems (WfMSs) are discussed regarding their suitability for describing, reproducing and reusing scientific workflows. To this end, a set of general requirements for WfMSs were deduced from user stories that we deem relevant in the domain of computational science and engineering. On the basis of an exemplary workflow implementation, publicly hosted at GitHub (https://github.com/BAMresearch/NFDI4IngScientificWorkflowRequirements), a selection of different WfMSs is compared with respect to these requirements, to support fellow scientists in identifying the WfMSs that best suit their requirements.
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2024)
In this paper, a methodology for fine scale modeling of large scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
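In generic notation (not necessarily that of the paper), the additive split mentioned above reads

```latex
u = u^{c} + u^{f}, \qquad u^{c} \in V^{c}, \quad u^{f} \in V^{f},
```

where the coarse scale part $u^{c}$ is resolved globally and the fine scale correction $u^{f}$ is approximated in the local reduced spaces constructed per subdomain.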
Thermal transient problems, essential for modeling applications like welding and additive metal manufacturing, are characterized by a dynamic evolution of temperature. Accurately simulating these phenomena is often computationally expensive, thus limiting their applications, for example for model parameter estimation or online process control. Model order reduction, a solution to preserve the accuracy while reducing the computation time, is explored. This article addresses challenges in developing reduced order models using the proper generalized decomposition (PGD) for transient thermal problems, with a specific treatment of the moving heat source within the reduced model. Factors affecting accuracy, convergence, and computational cost, such as discretization methods (finite element and finite difference), a dimensionless formulation, the size of the heat source, and the inclusion of material parameters as additional PGD variables, are examined across progressively complex examples. The results demonstrate the influence of these factors on the PGD model’s performance and emphasize the importance of their consideration when implementing such models. For the thermal example, it is demonstrated that a PGD model with a finite difference discretization in time, a dimensionless representation, a mapping for the moving heat source, and no separation of the spatial domain yields the best approximation to the full order model.
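As a reminder of the underlying idea, in generic notation the PGD seeks a separated representation of the temperature field in which the material parameters appear as additional coordinates,

```latex
T(\mathbf{x}, t, p_1, \dots, p_k) \;\approx\;
\sum_{i=1}^{N} X_i(\mathbf{x})\, \Theta_i(t) \prod_{j=1}^{k} P_i^{j}(p_j),
```

so that, once the modes are computed, evaluating the reduced model for new parameter values only requires combining precomputed low-dimensional functions.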
Software-driven scientific workflows are often characterized by a complex interplay of various pieces of software executed in a particular order. The output of a computational step may serve as input to a subsequent computation, which requires them to be processed sequentially with a proper mapping of outputs to inputs. Other computations are independent of each other and can be executed in parallel. Thus, one of the main tasks of a workflow tool is a proper and efficient scheduling of the individual processing steps.
Each processing step, just as the workflow itself, typically processes some input and produces output data. Apart from changing the input data to operate on, processing steps can usually be configured by a set of parameters to change their behavior. Moreover, the behavior of a processing step is determined by its source code and/or executable binaries/packages that are called within it. Beyond this, the computation environment not only has a significant influence on its behavior, but is also crucial in order for the processing step to work at all. The environment includes the versions of the interpreters or compilers, as well as all third-party libraries and packages that contribute to the computations carried out in a processing step.
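A minimal illustration, not tied to any particular workflow tool, of how an execution order can be derived from the data dependencies between processing steps using the Python standard library (the step names are made up):

```python
# Sketch: derive a schedule for processing steps from their dependencies.
from graphlib import TopologicalSorter

# Each step maps to the set of steps whose outputs it consumes.
dependencies = {
    "preprocess": set(),                    # independent of "mesh", could run in parallel
    "mesh": set(),
    "simulate": {"preprocess", "mesh"},     # needs the outputs of both
    "postprocess": {"simulate"},
}

sorter = TopologicalSorter(dependencies)
sorter.prepare()
while sorter.is_active():
    ready = list(sorter.get_ready())        # steps whose dependencies are satisfied
    print("can run in parallel:", ready)
    for step in ready:                      # here a workflow tool would dispatch them to workers
        sorter.done(step)
```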
Despite the advances in hardware and software techniques, standard numerical methods fail in providing real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additive manufacturing structure, a simplified experiment in the form of a bead-on-plate experiment is performed to get a first insight into the process and to set parameters suitably. In this work, a reduced order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation required multiple times in parameter estimation, control, and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties separating space and time, a key ingredient in PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems allowing the derivation of an efficient separated representation of the temperature.
The results are verified against a standard finite element model showing excellent agreement. The reduced order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for the welding experiment using synthetic as well as real measurement data.
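The following toy example only sketches how a cheap reduced order model can be embedded in a Metropolis-type sampler for Bayesian parameter estimation; the forward model, prior bounds and noise level are invented for illustration and do not correspond to the actual welding model.

```python
# Toy Metropolis sampler calibrating a placeholder reduced order model against data.
import numpy as np

rng = np.random.default_rng(0)

def rom(theta, t):
    """Placeholder forward model standing in for the PGD reduced order model."""
    power, decay = theta
    return power * np.exp(-decay * t)

t_obs = np.linspace(0.0, 10.0, 20)
y_obs = rom((1500.0, 0.3), t_obs) + rng.normal(0.0, 10.0, t_obs.size)   # synthetic data

def log_posterior(theta):
    if not (0.0 < theta[0] < 5000.0 and 0.0 < theta[1] < 2.0):          # uniform prior bounds
        return -np.inf
    residual = y_obs - rom(theta, t_obs)
    return -0.5 * np.sum((residual / 10.0) ** 2)                        # Gaussian likelihood

theta = np.array([1000.0, 0.5])
samples = []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, [20.0, 0.02])                    # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta.copy())

print("posterior mean:", np.mean(samples[1000:], axis=0))               # discard burn-in
```

Because every likelihood evaluation calls the forward model, replacing a full finite element solve by a reduced order model is what makes such sampling loops affordable.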
Concrete is one of the most important construction materials worldwide and is characterized by an enormous adaptability to changing requirements. This goes hand in hand with a high and continuously increasing complexity with regard to raw materials, mix designs and the production process. Consequently, exploiting the technical and environmental potential of concrete construction requires the highest level of expertise from the individual actors in the construction industry.
PGDrome
(2023)
FenicsXConcrete
(2023)
Additive manufacturing (AM) has revolutionized the manufacturing industry, offering a new paradigm to produce complex geometries and parts with customized properties. Among the different AM techniques, the wire arc additive manufacturing (WAAM) process has gained significant attention due to its high deposition rate and low equipment cost. However, the process is characterized by a complex thermal history making it challenging to simulate it in real-time for online process control and optimization.
In this context, a reduced order model (ROM) using the proper generalized decomposition (PGD) method [1] is proposed as a powerful tool to overcome the limitations of conventional numerical methods and enable the real-time simulation of the temperature field of WAAM processes. These simulations use a moving heat source leading to a parametric problem that is hard to separate, which is handled by applying a novel mapping approach [2]. This procedure makes it possible to create a simple separated representation of the model, which also allows multiple layers to be simulated.
In this contribution, a PGD model is derived for the temperature field simulation of the WAAM process. A good agreement with a standard finite element method is shown. The reduced model is further used in a stochastic model parameter estimation using Bayesian inference, speeding up calibrations and ultimately leading to a calibrated real-time simulation.
Blast experiments on reinforced concrete structures are often limited to small structures and therefore simple shock waves. Such experiments are carried out at the Bundesanstalt für Materialforschung und -prüfung (BAM) and the structural response is investigated using several measuring methods. Complex load scenarios that occur as a result of reflection of the shock wave in larger structures are harder to realise in practice. Numerical simulations for the propagation of the shock wave and the structural response can therefore be an alternative method for the investigation of blast loads on complex structures.
For the simulation of concrete under impact and blast loads, several local constitutive models exist that are formulated as plasticity models with softening taken into account by introducing a scalar damage field. Local damage models however often lead to mesh-dependent results which do not converge with mesh refinement. In order to achieve meaningful predictions from numerical experiments, independence from the mesh is needed.
In this contribution, the JH2 model (Johnson and Holmquist 1994) with a parameter set for concrete is investigated in a simple blast load scenario. The shockwave is implemented as a simplified Friedlander-curve and the overpressures are applied as a boundary condition for the structural simulation. In order to account for large displacements that can occur during blast loads, an updated Lagrangian formulation is utilised. A Runge-Kutta method with adaptive time stepping is used to advance the solution in time. The open source FEM software FEniCS (Logg et al. 2012) is used together with an implementation of the JH2 model which has been developed at BAM. An extensive convergence analysis with both timestep- and mesh-refinement is carried out to show the mesh dependency.
In order to make the results independent of the mesh, possible nonlocal versions of the JH2 model with gradient enhancement are presented. Since many damage models for concrete share the damage mechanism of the JH2 model, the application of the regularisation methods to more complex material models, like the RHT model (Grunwald et al. 2017), is also discussed. The advantages of a gradient-enhanced formulation for simulating the dynamic strength increase of concrete, as suggested in (Häußler-Combe and Kitzig 2009), are discussed as well.
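For reference, the simplified Friedlander waveform mentioned above is commonly written (in generic notation) as

```latex
p(t) = p_0 + P_s \left(1 - \frac{t}{t_d}\right) e^{-b\,t/t_d}, \qquad 0 \le t \le t_d,
```

with ambient pressure $p_0$, peak overpressure $P_s$, positive phase duration $t_d$ and decay coefficient $b$; this overpressure history is what is applied as a boundary condition on the loaded faces of the structure.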
Simulation-based digital twins have emerged as a powerful tool for evaluating the mechanical response of bridges. As virtual representations of physical systems, digital twins can provide a wealth of information that complements traditional inspection and monitoring data. By incorporating virtual sensors and predictive maintenance strategies, they have the potential to improve our understanding of the behavior and performance of bridges over time. However, as bridges age and undergo regular loading and extreme events, their structural characteristics change, often differing from the predictions of their initial design. Digital twins must be continuously adapted to reflect these changes. In this article, we present a Bayesian framework for updating simulation-based digital twins in the context of bridges. Our approach integrates information from measurements to account for inaccuracies in the simulation model and quantify uncertainties. Through its implementation and assessment, this work demonstrates the potential for digital twins to provide a reliable and up-to-date representation of bridge behavior, helping to inform decision-making for maintenance and management.
In this contribution, a methodology for fine scale modeling of large scale structures is proposed, which combines the variational multiscale method [1], domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Based on the work of Buhr and Smetana [2], local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
In the field of computational science and engineering, workflows often entail the application of various software, for instance, for simulation or pre- and postprocessing.
Typically, these components have to be combined in arbitrarily complex workflows to address a specific research question. In order for peer researchers to understand, reproduce and (re)use the findings of a scientific publication, several challenges have to be addressed. For instance, the employed workflow has to be automated and information on all used software must be available for a reproduction of the results. Moreover, the results must be traceable and the workflow documented and readable to allow for external verification and greater trust.
In this paper, existing workflow management systems (WfMSs) are discussed regarding their suitability for describing, reproducing and reusing scientific workflows. To this end, a set of general requirements for WfMSs were deduced from user stories that we deem relevant in the domain of computational science and engineering. On the basis of an exemplary workflow implementation, publicly hosted at GitHub (https://github.com/BAMresearch/NFDI4IngScientificWorkflowRequirements), a selection of different WfMSs is compared with respect to these requirements, to support fellow scientists in identifying the WfMSs that best suit their requirements.
The rapidly increasing importance of FAIR and open data for quality assurance, but also for the reusability of data and for the advancement of knowledge, creates an enormous need for action in research and development. In connection with this, a wide range of ambitious activities are currently underway, e.g. concerning the creation of ontologies and knowledge graphs. The know-how is developing rapidly, and implementation approaches are emerging in parallel in different research communities and with different objectives, so that rather heterogeneous approaches result.
This publication focuses on work that is currently being pursued as a largely holistic approach for materials data within the digitalization initiative "Plattform MaterialDigital". The authors work on construction-material-related aspects in the joint project "LeBeDigital - Lebenszyklus von Beton" (life cycle of concrete). The objective is the digital description of the material behaviour of concrete over the complete production process of a precast element, with an integration of data and models within a workflow for probabilistic material and process optimization.
The procedure and the experience gained are reported, not without drawing attention to the often underestimated complexity of the topic.
Finite element (FE) models are widely used to capture the mechanical behavior of structures. Uncertainties in the underlying physics and unknown parameters of such models can heavily impact their performance. Thus, to satisfy high precision and reliability requirements, the performance of such models is often validated using experimental data. In such model updating processes, uncertainties in the incoming measurements should be accounted for as well. In this context, Bayesian methods have been recognized as a powerful tool for addressing different types of uncertainties. Quasi-brittle materials subjected to damage pose a further challenge due to the increased uncertainty and complexity involved in modeling crack propagation effects. In this respect, techniques such as Digital Image Correlation (DIC) can provide full-field displacement measurements that are able to reflect the crack path up to a certain accuracy. In this study, DIC-based full field measurements are incorporated into a finite element model updating approach to calibrate unknown/uncertain parameters of an ansatz constitutive model. In contrast to the standard FEMU, where measured displacements are compared to the displacements from the FE model response, in the force version of the standard FEMU, termed FEMU-F [1], displacements are applied as Dirichlet constraints. This enables the evaluation of the internal forces, which are then compared to measured external forces, thus quantifying the fulfillment of the momentum balance equation as a metric for the model discrepancy. In the present work, the FEMU-F approach is further equipped with a Bayesian technique that accounts for uncertainties in the measured displacements as well. Via this modification, displacements are treated as unknown variables to be subsequently identified, while they are allowed to deviate from the measured values up to a certain measurement accuracy. To be able to identify many unknown variables, including constitutive parameters and the aforementioned displacements, the Variational Bayesian technique proposed in [2] is utilized as an approximative technique. A numerical example of a three-point bending case study is presented first to demonstrate the effectiveness of the proposed approach. The parameters of a gradient-enhanced damage material model [4] are identified using noisy synthetic data, and the effect of measurement noise is studied. The ability of the suggested approach to identify constitutive parameters is then validated using real experimental data from a three-point bending test from [3]. The full field displacements required as input to the inference setup are extracted through a digital image correlation (DIC) analysis of the provided raw images.
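Schematically, and in generic notation rather than that of the cited works, the FEMU-F discrepancy measure compares internal forces computed from the imposed measured displacements with the measured external forces,

```latex
\mathbf{r}(\boldsymbol{\theta}) =
\mathbf{f}_{\mathrm{int}}\!\left(\hat{\mathbf{u}};\,\boldsymbol{\theta}\right) - \hat{\mathbf{f}}_{\mathrm{ext}},
```

where $\hat{\mathbf{u}}$ are the DIC displacements applied as Dirichlet constraints, $\boldsymbol{\theta}$ the constitutive parameters and $\hat{\mathbf{f}}_{\mathrm{ext}}$ the measured reaction forces; in the Bayesian variant described here, $\hat{\mathbf{u}}$ is additionally treated as uncertain.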
Bayesian updating of constitutive laws for Finite Element simulation using full field measurements
(2023)
Finite element (FE) models have been recognized as powerful tools for predicting the mechanical behaviour of engineered systems. As a prerequisite, those models need to be improved with respect to various uncertainties, most notably concerning underlying physics assumptions and unknown parameters. This is very often accomplished by comparing the performance of a model (e.g. the model response) against available data measured from real experiments. A further challenge in doing so, however, is accounting for the uncertainties of the measured data. Bayesian methods have been widely considered and utilized as a suitable approach for coping with and quantifying the aforementioned uncertainties. Phenomena like damage, in particular in quasi-brittle materials, introduce further uncertainties due to the complexity underlying the crack propagation phenomenon. This implies that fitting a numerical model and an associated constitutive law that can adequately describe such effects is non-trivial. The standard approach of finite element model updating (FEMU) is therefore modified to account for tracking of the crack propagation, as recorded during an experiment under increasing loading, via full field displacement measurements. The latter are fed as Dirichlet constraints to an available finite element model, leading to the evaluation of force residuals, which quantify the accuracy of the model. This approach, known as FEMU-F (force version of the standard FEMU) [1], is here further equipped with a Bayesian technique which accounts for the measurement uncertainties in the full field displacements. This is achieved by penalizing the discrepancy between the measured displacements and the modeled Dirichlet constraints, where the latter are considered as further unknowns. We specifically employ the Variational Bayesian technique proposed in [2] as an approximating tool for the estimation of posterior parameters, including displacement variables that are allowed to deviate from the measurements. A Markov chain Monte Carlo (MCMC) method is also used for sampling the posterior distribution of the unknown model parameters. The model updating procedure is first demonstrated through a numerically simulated example of three-point bending, where the parameters of a gradient-enhanced damage material model [4] are identified in accordance with synthetic noisy data (displacements and reaction forces). For the validation, experimental data from a three-point bending test are used, where full field displacements are collected through a digital image correlation (DIC) analysis (raw data taken from [3]). The data is then used for the parameter identification of a gradient damage constitutive law, which is employed as an ansatz model.
Concrete has a long history in the construction industry and is currently one of the most widely used building materials. Unfortunately, the concrete industry has a significant impact on the environment, contributing about 9% of the total anthropogenic greenhouse gas (GHG) emissions. Concrete is a highly complex composite material. However, the main source of concrete's GHG emissions is the cement. This leads to two main strategies when trying to reduce the environmental impact. The first is to reduce the cement within the concrete mix. This can be done by substituting it using additives or increasing the amount of aggregates. Usually this will lead to decreased material properties, like compressive strength or stiffness. The second option is to reduce the amount of required concrete by optimizing the topology of the structure. However, this might require a higher compressive strength. In addition, there are other properties, like workability, which need to be considered. All in all, this leads to a highly complex optimization problem, which requires the estimation of effective concrete properties based on the mixture as input to a predictive simulation.
We present an automated workflow framework which combines experimental data with simulations, calibrates the simulation and performs the desired optimization. This workflow includes classical FE models, design guidelines based on model codes, as well as data driven methods. The chosen example is a beam, for which the concrete mixture is optimized to reduce GHG emissions. The first step is an estimation of material parameters, based on experimental data. This includes measures of stochastic distribution, allowing the quantification of the quality of the estimated parameters. The second step is the optimization. It takes into account constraints like the loading capacity after 28 days, the maximum allowed temperature during cement hydration and the maximum time till demoulding. The applied models include a Mori-Tanaka-based homogenization method to estimate effective concrete parameters, an FE simulation including the evolution of the concrete compressive strength and stiffness, the temperature field, displacements, and stress. This research shows a way towards a more performance-oriented material design.
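A schematic sketch of the optimization step is given below; the surrogate functions for the global warming potential and the 28-day strength are entirely hypothetical stand-ins for the calibrated homogenization and FE models of the workflow, and the bounds and coefficients are invented.

```python
# Sketch: minimize the GWP of a concrete mix subject to a strength constraint.
import numpy as np
from scipy.optimize import minimize

def gwp(x):
    """Hypothetical global warming potential [kg CO2-eq per m^3] from cement and aggregate content."""
    cement, aggregate = x
    return 0.9 * cement + 0.005 * aggregate

def strength_28d(x):
    """Hypothetical surrogate for the predicted 28-day compressive strength [MPa]."""
    cement, aggregate = x
    return 0.15 * cement - 0.01 * aggregate

constraints = [{"type": "ineq", "fun": lambda x: strength_28d(x) - 40.0}]   # >= 40 MPa
bounds = [(200.0, 450.0), (1600.0, 2000.0)]                                  # cement, aggregate [kg/m^3]

result = minimize(gwp, x0=np.array([400.0, 1700.0]),
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("optimal mix:", result.x, "GWP:", result.fun)
```

In the actual workflow the constraint functions would be evaluated by the calibrated homogenization and FE models (or surrogates thereof), and further constraints such as the maximum hydration temperature and the demoulding time would be added in the same way.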
Additive manufacturing (AM) has revolutionized the manufacturing industry, offering a new paradigm to produce complex geometries and parts with customized properties. Among the different AM techniques, the wire arc additive manufacturing (WAAM) process has gained significant attention due to its high deposition rate and low equipment cost. However, the process is characterized by a complex thermal history, dynamic metallurgy, and mechanical behaviour that make it challenging to simulate it in real-time for online process control and optimization.
In this context, a reduced order model (ROM) using the proper generalized decomposition (PGD) method is proposed as a powerful tool to overcome the limitations of conventional numerical methods and enable the real-time simulation of the temperature field of WAAM processes. However, the simulation of a moving heat source leads to a parametric problem that is hard to separate, which is handled by applying a novel mapping approach. Using this procedure, it is possible to create a simple separated representation of the model, which also allows multiple layers to be simulated.
In this contribution, a PGD model is derived for the WAAM procedure simulating the temperature field. A good agreement with a standard finite element method is shown. The reduced model is further used in a stochastic model parameter estimation using Bayesian inference, speeding up calibrations and ultimately leading to a calibrated real-time simulation.
Multiscale modeling of heterogeneous structures based on a localized model order reduction approach
(2023)
Many of today’s problems in engineering demand reliable and accurate prediction of failure mechanisms of mechanical structures. Thus, it is necessary to take into account the heterogeneous structure on the smaller scale, to capture the underlying physical phenomena. However, this poses a great challenge to the numerical solution since the computational cost is significantly increased by resolving the smaller scale in the model. Moreover, in applications where scale separation as the basis of classical homogenization schemes does not hold, the influence of the smaller scale on the larger scale has to be modelled directly. This work aims to develop an efficient concurrent methodology to model heterogeneous structures combining the variational multiscale method (VMM) [1] and model order reduction techniques (e. g. [2]). First, the influence of the smaller scale on the larger scale can be taken into account following the additive split of the displacement field as in the VMM. Here, also a decomposition of the global domain into subdomains, each containing a fine grid discretization of the smaller scale, is introduced. Second, local reduced approximation spaces for the smaller scale solution are constructed by exploring possible solutions for each subdomain based on the concept of oversampling [3]. The associated transfer operator is approximated by random sampling [4]. Herein, we propose to incorporate the actual physical behaviour of the structure of interest in the training data by drawing random samples from a multivariate normal distribution with the solution of a reduced global problem as mean. The local reduced spaces are designed such that local contributions of each subdomain can be coupled in a conforming way. Thus, the resulting global system is sparse and reduced in size compared to the direct numerical simulation, leading to a faster solution of the problem.
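A small numpy sketch of the sampling idea described above: boundary data for the oversampling problem is drawn from a multivariate normal distribution centred on the restriction of the global reduced solution, instead of from uncorrelated noise. The array sizes, the stand-in for the global solution and the covariance kernel are made up for illustration.

```python
# Sketch: correlated versus uncorrelated random boundary data for an oversampling problem.
import numpy as np

rng = np.random.default_rng(42)

n_dofs = 40                                      # boundary DoFs of one oversampling domain
s = np.linspace(0.0, 1.0, n_dofs)                # boundary coordinate
u_global = np.sin(np.pi * s)                     # stand-in for the restricted global reduced solution

# Correlated samples: exponential kernel over the boundary coordinate, centred on u_global ...
sigma, corr_length = 0.05, 0.2
cov = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_length)
correlated = rng.multivariate_normal(mean=u_global, cov=cov, size=20)

# ... versus uncorrelated samples with the same pointwise variance.
uncorrelated = u_global + rng.normal(0.0, sigma, size=(20, n_dofs))

# Each row is one training sample of boundary data used to build a local reduced space.
print(correlated.shape, uncorrelated.shape)
```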
In recent years, the use of simulation-based digital twins for monitoring and assessment of complex mechanical systems has greatly expanded. Their potential to increase the information obtained from limited data makes them an invaluable tool for a broad range of real-world applications. Nonetheless, there usually exists a discrepancy between the predicted response and the measurements of the system once built. One of the main contributors to this difference, in addition to miscalibrated model parameters, is the model error. Quantifying this so-called model bias (as well as proper values for the model parameters) is critical for the reliable performance of digital twins. Model bias identification is ultimately an inverse problem where information from measurements is used to update the original model. Bayesian formulations can tackle this task. Including the model bias as a parameter to be inferred enables the use of a Bayesian framework to obtain a probability distribution that represents the uncertainty between the measurements and the model. Simultaneously, this procedure can be combined with a classic parameter updating scheme to account for the trainable parameters in the original model.
This study evaluates the effectiveness of different model bias identification approaches based on Bayesian inference methods. This includes more classical approaches such as direct parameter estimation using MCMC in a Bayesian setup, as well as more recent proposals such as stat-FEM or orthogonal Gaussian Processes. Their potential use in digital twins, generalization capabilities, and computational cost is extensively analyzed.
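The underlying statistical model can be summarized, in the generic Kennedy-O'Hagan form rather than the exact formulation of each compared approach, as

```latex
\mathbf{y}_{\mathrm{obs}}(\mathbf{x}) =
\eta(\mathbf{x}, \boldsymbol{\theta}) + \delta(\mathbf{x}) + \boldsymbol{\varepsilon},
\qquad \delta \sim \mathcal{GP}\!\left(0,\, k(\mathbf{x}, \mathbf{x}')\right),
\quad \boldsymbol{\varepsilon} \sim \mathcal{N}(0, \sigma^2 I),
```

where $\eta$ is the simulation model with parameters $\boldsymbol{\theta}$, $\delta$ the model bias (here written as a Gaussian process) and $\boldsymbol{\varepsilon}$ the measurement noise; the compared Bayesian formulations differ mainly in how $\delta$ is parameterized and inferred.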
For extrusion-based 3D concrete printing, the early age mechanical behavior is influenced by various time dependent phenomena: structural build-up, plasticity as well as viscosity. The structural build-up governs the stability and early-age strength development of fresh printable cementitious materials and thereby influences the printability, buildability, and open time of the printing process. Generally, it is influenced by a number of factors, i.e. the composition of the printable material, the printing regime, and the ambient conditions (temperature, humidity, etc.). There are several approaches to model the structural build-up of cementitious materials. All models are based on a time-dependent internal structural parameter describing the flocculation state, which is assumed to be zero after mixing and increases with time. The approaches differ in the definition of the time dependency (linear, exponential, bi-linear). Usually, the parameters are defined for a specific material composition without considering the influence of ambient conditions.
In this contribution, the bi-linear structural build-up model [Kruger et al., Construction and Building Materials 224, 2019] is extended by the temperature influence. Temperature changes will occur in real-life printing processes due to changing ambient conditions (summer, winter, day, night) as well as the printing process itself (pressure changes etc.) and have a significant impact on the structural build-up process: an increase of the temperature leads to a faster dissolution of cement phases, accelerates hydration and boosts the Brownian motion. For that reason, the model parameters are made temperature dependent using an Arrhenius function. Furthermore, the proposed extended model is calibrated based on measurement data using Bayesian inference. A very good agreement of the predicted model data with the measured control data was reached. Additionally, the structural build-up model is integrated into a viscoelastic and elastoplastic mechanical model, simulating the whole mechanical behavior during layer deposition.
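A common way to introduce such a temperature dependence (the exact parameterization used in the contribution may differ) is an Arrhenius-type scaling of a model parameter $A$ relative to a reference temperature $T_{\mathrm{ref}}$,

```latex
A(T) = A_{\mathrm{ref}}\,
\exp\!\left[\frac{E_A}{R}\left(\frac{1}{T_{\mathrm{ref}}} - \frac{1}{T}\right)\right],
```

with activation energy $E_A$, universal gas constant $R$ and absolute temperatures $T$ and $T_{\mathrm{ref}}$; the parameters $A_{\mathrm{ref}}$ and $E_A$ are then the quantities calibrated from the measurement data via Bayesian inference.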
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the main principles of FAIR data is required in many research projects. The definition of data being FAIR is very general. When implementing it for a specific application or project, or even setting a standardized procedure within a working group, a company or a research community, many challenges arise. In this contribution, an overview of our experience with different methods and tools is outlined.
We begin with a motivation on potential use cases for the application of FAIR data with increasing complexity starting from a reproducible research paper over collaborative projects with multiple participants such as Round-Robin tests up to data-based models within standardization codes, applications in machine learning or parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data (including metadata schema) are discussed. The first one is the openBIS system, which is an open-source lab notebook and PostgreSQL based data management system. A second option is a semantic representation using RDF based on ontologies for the domain of interest.
In a third section, requirements for workflow tools to automate data processing are discussed and their integration into reproducible data analysis is presented with an outlook on required information to be stored as metadata in the database.
Finally, the presented procedures are exemplarily demonstrated for the calibration of a temperature dependent constitutive model for additively manufactured mortar. A metadata schema for a rheological measurement setup is derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behavior, the automatic workflow to use the stored data for model parameter estimation is demonstrated.
Structural build-up describes the stability and early-age strength development of fresh mortar used in 3D printing. It is influenced by several factors, i.e. the composition of the printable material, the printing regime, and the ambient conditions. The existing modelling approaches for structural build-up usually define the model parameters for a specific material composition without considering the influence of the ambient conditions. The goal of this contribution is to explicitly include the temperature dependency in the modelling approach. Temperature changes have a significant impact on the structural build-up process: an increase of the temperature leads to a faster dissolution of cement phases and accelerates hydration. The proposed extended model includes temperature dependency using the Arrhenius theory. The new model parameters are successfully calibrated based on Viskomat measurement data using Bayesian inference. Furthermore, a higher impact of the temperature in the re-flocculation stage than in the structuration stage is observed.
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2023)
In this paper, a methodology for fine scale modeling of large scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
Blast tests are indispensable for investigations of accidental or intentional explosions and for evaluating the level of protection of people and equipment within critical infrastructure. Current capabilities for detailed blast effects assessment are limited to performing full-scale field testing, which, for complex scenarios, is highly resource intensive. In this regard, reliable numerical simulations are an effective alternative option. A discussion of the scope and challenges of using numerical tools for a technical safety assessment of reinforced concrete structures under blast loading is presented. Different coupling possibilities between shock wave simulations and structural simulations are described with the help of practical examples. An outlook on the development of new methods for structural simulations currently being researched at BAM concludes the presentation.
Using digital twins for decision making is a very promising concept which combines simulation models with corresponding experimental sensor data in order to support maintenance decisions or to investigate the reliability. The quality of the prognosis strongly depends on both the data quality and the quality of the digital twin. The latter comprises both the modeling assumptions as well as the correct parameters of these models. This article discusses the challenges when applying this concept to real measurement data for a demonstrator bridge in the lab, including the data management, the iterative development of the simulation model as well as the identification/updating procedure using Bayesian inference with a potentially large number of parameters. The investigated scenarios include both the iterative identification of the structural model parameters as well as scenarios related to a damage identification. In addition, the article aims at providing all models and data in a reproducible way such that other researchers can use this setup to validate their methodologies.
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the main principles of FAIR data is required in many research projects. The definition of data being FAIR is very general, and when implementing it for a specific application or project, or even setting a standardized procedure within a working group, a company or a research community, many challenges arise. In this contribution, an overview of our experience with different methods, tools and procedures is outlined.
We begin with a motivation on potential use cases for the applications of FAIR data with increasing complexity starting from a reproducible research paper over collaborative projects with multiple participants such as Round-Robin tests up to data-based models within standardization codes, applications in machine learning or parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data are discussed. On the one hand, this includes a discussion on how to define actual data structures and in particular metadata schemata; on the other hand, two different systems for storing the data are discussed. The first one is the openBIS system, which is an open-source lab notebook and PostgreSQL based data management system. A second option is a semantic representation using RDF based on ontologies for the domain of interest.
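A minimal rdflib sketch of what such a semantic representation could look like; the namespace and the class and property names are hypothetical and do not refer to a specific published ontology.

```python
# Sketch: represent a single rheological measurement as RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

CO = Namespace("https://example.org/concrete-ontology#")   # hypothetical ontology namespace

g = Graph()
g.bind("co", CO)

measurement = CO["measurement_001"]
g.add((measurement, RDF.type, CO.RheologicalMeasurement))
g.add((measurement, CO.hasTemperature, Literal(20.0, datatype=XSD.double)))
g.add((measurement, CO.hasStructurationRate, Literal(0.45, datatype=XSD.double)))
g.add((measurement, CO.usesMaterial, CO["mortar_mix_A"]))

print(g.serialize(format="turtle"))
```

Such triples can be queried with SPARQL and linked to terms of domain ontologies, which is what makes the representation interoperable beyond a single database.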
In a third section, requirements for workflow tools to automate data processing are discussed and their integration into reproducible data analysis is presented with an outlook on required information to be stored as metadata in the database.
Finally, the presented procedures are exemplarily demonstrated for the calibration of a temperature dependent constitutive model for additively manufactured mortar. Metadata schemata for a rheological measurement setup are derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behaviour, the automatic workflow to use the stored data for model parameter estimation is demonstrated.
Data provenance - from experimental data to trustworthy simulation models and standards
Jörg F. Unger, Annika Robens-Radermacher, Erik Tamsen, Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the main principles of FAIR data is required in many research projects. The definition of data being FAIR is very general, and when implementing it for a specific application or project, or even setting a standardized procedure within a working group, a company or a research community, many challenges arise. In this contribution, an overview of our experience with different methods, tools and procedures is outlined. We begin with a motivation on potential use cases for the application of FAIR data with increasing complexity, starting from a reproducible research paper over collaborative projects with multiple participants such as Round-Robin tests up to data-based models within standardization codes, applications in machine learning or parameter estimation of physics-based simulation models. In a second part, different options for structuring the data are discussed. On the one hand, this includes a discussion on how to define actual data structures and in particular metadata schemata; on the other hand, two different systems for storing the data are discussed. The first one is the openBIS system, which is an open-source lab notebook and PostgreSQL based data management system. A second option is a semantic representation using RDF based on ontologies for the domain of interest. In a third section, requirements for workflow tools to automate data processing are discussed and their integration into reproducible data analysis is presented with an outlook on required information to be stored as metadata in the database.
Multiscale modeling of heterogeneous structures based on a localized model order reduction approach
(2022)
Many of today’s problems in engineering demand reliable and accurate prediction of failure mechanisms of mechanical structures. Thus, it is necessary to take into account the heterogeneous structure on the smaller scale, to capture the underlying physical phenomena. However, this poses a great challenge to the numerical solution since the computational cost is significantly increased by resolving the smaller scale in the model. Moreover, in applications where scale separation as the basis of classical homogenization schemes does not hold, the influence of the smaller scale on the larger scale has to be modelled directly. This work aims to develop an efficient concurrent methodology to model heterogeneous structures combining the variational multiscale method (VMM) and model order reduction techniques. First, the influence of the smaller scale on the larger scale can be taken into account following the additive split of the displacement field as in the VMM. Here, also a decomposition of the global domain into subdomains, each containing a fine grid discretization of the smaller scale, is introduced. Second, local reduced approximation spaces for the smaller scale solution are constructed by exploring possible solutions for each subdomain based on the concept of oversampling and the solution of the associated transfer operator problem. Herein, we propose to choose the training data based on the solution of a reduced global problem to incorporate the actual physical behaviour of the structure of interest and to extend it by random samples to ensure sufficient approximation capabilities in general. The local reduced spaces are designed such that local contributions of each subdomain can be coupled in a conforming way. Thus, the resulting global system is sparse and reduced in size compared to the direct numerical simulation, leading to a faster solution of the problem.
The authors gratefully acknowledge financial support by the German Research Foundation (DFG), project number 394350870, and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC Grant agreement No. 818473).
Numerical simulators, such as finite element models, have become increasingly capable of predicting the behaviour of structures and components owing to more sophisticated underlying mathematical models and advanced computing power. A common challenge lies, however, in calibrating these models in terms of their unknown/uncertain parameters. When measurements exist, this can be achieved by comparing the model response against measured data. Besides uncertain model parameters, phenomena like damage can give rise to further uncertainties; in particular, quasi-brittle materials, like concrete, experience damage in a heterogeneous manner due to various imperfections, e.g. in geometry and boundary conditions. This makes an accurate prediction of the damaged behaviour of real structures that comprise such materials difficult.
In this study, which draws from a data-driven approach, we use the force version of the finite element model updating method (FEMU-F) to incorporate measured displacements into the identification of the damage parameters, in order to cope with heterogeneity. In this method, instead of conducting a forward evaluation of the model and comparing the model response (displacements) against the data, we impose displacements on the model and compare the resulting force residuals with measured reaction forces. To account for uncertainties in the measurement of displacements, we endow this approach with a penalty term, which reflects the discrepancy between measured and imposed displacements, where the latter are treated as unknown random variables to be identified as well. A Variational Bayesian approach is used as an approximating tool for computing posterior parameters. The underlying damage model considered in this work is a gradient-enhanced damage model.
We first establish the identification procedure through two virtual examples, where synthetic data (displacements) are generated over a spatially dense set of points in the domain. The procedure is then validated on an experimental case study, namely a 3-point bending experiment with displacement measurements resulting from a digital image correlation (DIC) analysis.
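As an illustration of the FEMU-F idea described above, the deterministic core of such an objective could look as follows. The callable internal_forces is a hypothetical stand-in for the gradient-enhanced damage FE model, and the actual study embeds this residual structure in a variational Bayesian scheme rather than a plain least-squares cost.

import numpy as np

def femu_f_cost(theta, u_imposed, u_measured, f_measured,
                internal_forces, penalty_weight):
    # Force residual: reactions of the model (for damage parameters theta and
    # imposed displacements u_imposed) vs. measured reaction forces.
    r_force = internal_forces(theta, u_imposed) - f_measured
    # Penalty: discrepancy between imposed and measured (noisy) displacements.
    r_disp = u_imposed - u_measured
    return np.dot(r_force, r_force) + penalty_weight * np.dot(r_disp, r_disp)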
The aim of the project LeBeDigital is to present opportunities of digitalization for concrete applications and to show a way towards a performance-oriented material design. Due to the high complexity of the manufacturing process of concrete and the range of parameters affecting the effective composite properties, a global optimization is challenging. Currently, most optimization is carried out only on a narrow scope related to the respective players, e.g. a mix optimization for a target strength, or a design optimization for minimum weight using a given mix. A path toward a full global optimization requires a reproducible chain of data, accessible to all contributors. We propose a framework based on an ontology, which automatically combines experimental data with numerical simulations. This not only simplifies experimental knowledge transfer, but also allows the model calibration and the resulting simulation predictions to be reproducible and interpretable. In addition to an optimized set of parameters, this setup allows studying the quality and uncertainty of the data and models, as well as giving information about optimal experiments to improve the data set. We will present the proposed optimization workflow, using the example of a precast concrete element. The contribution will focus on the workflow and the challenges of an interoperable FEM formulation.
With increasing focus on industrialized processing, investigating, understanding, and modelling the structural build-up of cementitious materials becomes more important. The structural build-up governs the key property of fresh printable materials -- buildability -- and it influences the mechanical properties after the deposition. The structural build-up rate can be adjusted by optimization of the mixture composition and the use of concrete admixtures. Additionally, it is known that the environmental conditions, i.e. humidity and temperature, have a significant impact on the kinetics of cement hydration and the resulting hardened properties, such as shrinkage, cracking resistance etc. In this study, small amplitude oscillatory shear (SAOS) tests are applied to examine the structural build-up rate of cement paste subjected to different temperatures under controlled humidity. The results indicate a significant influence of the ambient temperature on the re-flocculation rate (Rthix), while the structuration rate (Athix) is almost unaffected. A bi-linear thixotropy model extended by temperature-dependent parameters and coupled with a linear viscoelastic material model is proposed to simulate the mechanical behaviour considering the structural build-up during the SAOS test.
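A bi-linear structural build-up law of the kind referred to above is commonly written for the static yield stress (or, analogously, for the storage modulus); the notation and the switching time t_rf below are generic assumptions, and in the temperature-dependent extension Rthix and Athix become functions of temperature:

\tau_0(t) = \tau_{0,0} + R_\mathrm{thix}\, t \quad (t \le t_\mathrm{rf}), \qquad \tau_0(t) = \tau_{0,0} + R_\mathrm{thix}\, t_\mathrm{rf} + A_\mathrm{thix}\,(t - t_\mathrm{rf}) \quad (t > t_\mathrm{rf}).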
Numerical simulations are essential in predicting the behavior of systems in many engineering fields and industrial sectors. The development of accurate virtual representations of actual physical products or processes (also known as digital twins) allows huge savings in cost and resources. In fact, digital twins would allow reducing the number of real, physical prototypes, tests, and experiments, thus also increasing the sustainability of production processes and products’ lifetime. Standard numerical methods fail in providing real time simulations, especially for complex processes such as additive manufacturing applications.
This work aims to use a reduced order model for efficient wire arc additive manufacturing simulations, calibrations and real-time process control. Model reduction, e.g. the proper generalized decomposition [1,2] method, is a popular concept to decrease the computational effort. A new mapping approach [3] was applied to simulate a moving heat source with the proper generalized decomposition. Using this procedure, even complex models can be simulated in real time. The physical model is then calibrated with the use of a stochastic model updating process and the reduced order model, leading to an optimized real-time simulation.
In this contribution, a proper generalized decomposition model for a bead-on-plate wire arc additive manufacturing process is presented. It is coupled with a stochastic model updating process identifying the heat source characteristics as well as the boundary conditions of the transient thermal problem, where the heat source shape is modelled using a Goldak heat source.
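For reference, the Goldak heat source mentioned above is usually the double-ellipsoidal model; its front-quadrant power density in the standard formulation (symbols follow the common convention and are not taken from this contribution) reads

q_f(x,y,z) = \frac{6\sqrt{3}\, f_f\, Q}{a\, b\, c_f\, \pi\sqrt{\pi}} \exp\!\left(-\frac{3x^2}{a^2} - \frac{3y^2}{b^2} - \frac{3z^2}{c_f^2}\right),

with an analogous expression with c_r and f_r for the rear quadrant, f_f + f_r = 2, and Q the effective heat input; the semi-axes a, b, c_f, c_r are typical parameters identified in such a model updating.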
Numerical models built as virtual twins of a real structure (digital twins) are considered the future of monitoring systems. Their setup requires the estimation of unknown parameters, which are not directly measurable. Stochastic model identification is then essential, which can be computationally costly and even unfeasible when it comes to real applications. Efficient surrogate models, such as reduced-order models, can be used to overcome this limitation and provide real time model identification. Since their numerical accuracy influences the identification process, the optimal surrogate not only has to be computationally efficient, but also accurate with respect to the identified parameters. This work aims at automatically controlling the Proper Generalized Decomposition (PGD) surrogate’s numerical accuracy for parameter identification. For this purpose, a sequence of Bayesian model identification problems, in which the surrogate’s accuracy is iteratively increased, is solved with a variational Bayesian inference procedure. The effect of the numerical accuracy on the resulting posterior probability density functions is analyzed through two metrics, the Bayes Factor (BF) and a criterion based on the Kullback-Leibler (KL) divergence. The approach is demonstrated by a simple test example and by two structural problems. The latter aims to identify spatially distributed damage, modeled with a PGD surrogate extended for log-normal random fields, in two different structures: a truss with synthetic data and a small, reinforced bridge with real measurement data. For all examples, the evolution of the KL-based and BF criteria for increased accuracy is shown and their convergence indicates when model refinement no longer affects the identification results.
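For Gaussian posterior approximations of the model parameters, as typically produced by such a variational Bayesian procedure, the KL-based criterion between two successive posteriors has a closed form; the standard formula below is stated only for illustration, and the work's exact criterion may combine parameter and noise blocks differently:

KL\big(\mathcal{N}(\mu_0,\Sigma_0)\,\|\,\mathcal{N}(\mu_1,\Sigma_1)\big) = \tfrac{1}{2}\left[\operatorname{tr}(\Sigma_1^{-1}\Sigma_0) + (\mu_1-\mu_0)^\top \Sigma_1^{-1}(\mu_1-\mu_0) - k + \ln\frac{\det\Sigma_1}{\det\Sigma_0}\right],

where k is the parameter dimension; a small value for increasing surrogate accuracy indicates that further refinement no longer changes the identification result.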
Concrete has a long history in the construction industry and is currently one of the most widely used building materials. Especially precast concrete elements are frequently utilized in construction projects for standardized applications, increasing the quality of the composite material, as well as reducing the required building time. Despite the accumulated knowledge, continuous research and development in this field is essential due to the complexity of the composite combined with the ever-growing number of applications and requirements. Especially in view of global climate change, design aspects such as CO2 emissions and resource efficiency require new mix designs and optimization strategies. A result of the material’s high complexity and heterogeneity on multiple scales is that utilizing the full potential with changing demands is highly challenging, even for the established industry. We propose a framework based on an ontology, which automatically combines experimental data with numerical simulations. This not only simplifies experimental knowledge transfer, but also allows the model calibration and the resulting simulation predictions to be reproducible and interpretable. This research shows a way towards a more performance-oriented material design. Within this talk we present our workflow for an automated simulation of a precast element, demonstrating the interaction of the ontology and the finite element simulation. We show the automatic calibration of our early-age concrete model [1, 2], to improve the prediction of the optimal time for the removal of the formwork.
Multiscale modeling of heterogeneous structures based on a localized model order reduction approach
(2022)
Many of today’s problems in engineering demand reliable and accurate prediction of failure mechanisms of mechanical structures. Herein, it is necessary to take into account the heterogeneous structure on the lower scale, to capture the underlying physical phenomena. However, this poses a great challenge to the numerical solution as the computational cost is significantly increased by resolving the lower scale in the model. Moreover, in applications where scale separation as the basis of classical homogenization schemes does not hold, the influence of the lower scale on the upper scale has to be modelled directly. This work aims to develop an efficient concurrent methodology to model heterogeneous structures combining the variational multiscale method (VMM) [1] and model order reduction techniques (e. g. [2]). First, the influence of the lower scale on the upper scale can be taken into account following the additive split of the displacement field as in the VMM. Here, also a decomposition of the global domain into subdomains, each containing a fine grid discretization of the lower scale, is introduced. Second, reduced approximation spaces for the upper and lower scale solution are constructed by exploring possible solutions for each subdomain based on a representative unit cell. The local reduced spaces are designed such that local contributions of each subdomain can be coupled in a conforming way. Thus, the resulting global system is sparse and reduced in size compared to the direct numerical simulation, leading to a faster solution of the problem. The authors gratefully acknowledge financial support by the German Research Foundation (DFG), project number 394350870, and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC Grant agreement No. 818473).
There is rising attention on using numerical models for efficient structural monitoring and ensuring the structure's safety. Setting up virtual models as twins of real structures requires a model identification process calculating the unknown model parameters, which mostly are only indirectly measurable. This is a computationally very costly inverse optimization process, which often makes it unfeasible for real applications. Efficient surrogate models such as reduced order models can be used to overcome this limitation. But the influence of the model accuracy on the identification process then has to be considered. The aim is to automatically control the influence of the model's accuracy on the identification. Here, a variational Bayesian inference approach [3] is coupled with a reduced forward model using the Proper Generalized Decomposition (PGD) method. The influence of the model accuracy on the inference result is studied and measured. Therefore, besides the commonly used Bayes factor, the Kullback-Leibler divergences between the predicted posterior pdfs are proposed. In an adaptive inference procedure, the surrogate's accuracy is iteratively increased, and the convergence of the posterior pdf is analysed. The proposed adaptive identification process is applied to the identification of spatially distributed damage modeled by a random field for a simple example with synthetic data as well as a small, reinforced bridge with real measurement data. It is shown that the proposed criteria can mirror the influence of the model accuracy and can be used to automatically select a sufficiently accurate surrogate model.
PGD model with domain mapping of Bead-on-Plate weld simulation for wire arc additive manufacturing
(2022)
Numerical simulations are essential in predicting the behavior of systems in many engineering fields and industrial sectors. The development of accurate virtual representations of actual physical products or processes allows huge savings in cost and resources. In fact, digital twins would allow reducing the number of real, physical prototypes, tests, and experiments, thus also increasing the sustainability of the production processes and products’ lifetime. Standard numerical methods fail in providing real time simulations, especially for complex processes such as additive manufacturing applications.
This work aims to build up a reduced order model for efficient wire arc additive manufacturing simulations by using the proper generalized decomposition (PGD) [1,2] method. Model order reduction is a popular concept to decrease the computational effort, where each evaluation of the reduced forward model is faster than evaluations using classical methods, even for complex models. The simulation of a moving heat source leads to a parametric problem that is hard to separate, which is solved by a new mapping approach [3]. Using this procedure, it is possible to create a simple separated representation of the forward model.
In this contribution, a PGD model is derived for the first part of wire arc additive manufacturing: the bead-on-plate weld. An excellent agreement with a standard finite element method is shown. The reduced model is further used in a model calibration setup, speeding up calibrations and ultimately leading to an optimized real-time simulation.
Numerical models are an essential tool in predicting and monitoring the behavior of civil structures. Inferring the model parameters is a challenging task as they are often measured indirectly and are affected by uncertainties. Digital twins couple those models with real-world data and can introduce additional, systematic sensor uncertainties related to the sensor calibration, i.e. uncertain offsets and calibration factors.
In this work, the challenges of data processing, parameter identification, model selection and damage detection are explored using a lab-scale cable stayed bridge demonstrator. By combining force measurements in the cables with displacement measurements from both laser and stereo-photogrammetry systems, the elastic parameters of a three-dimensional finite element beam model are inferred.
Depending on the number of sensors and the number of datasets used, parametrizing the sensor offsets and factors leads to models with over 100 parameters. With a real-time solution of the problem in mind, a highly efficient analytical variational Bayesian approach is used to solve it within seconds. An analysis of the required assumptions and limitations of the approach, especially w.r.t. the computed evidence, is provided by a comparison with dynamic nested sampling in a simplified problem.
Finally, by inferring the value of additional damage parameters along the bridge, the method is successfully used to detect the location of an artificially introduced weak spot in the demonstrator bridge.
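A minimal sketch of how such sensor calibration terms can enter the parameter vector is given below; the names and the simple affine sensor model y = a*u + b are illustrative assumptions, not the demonstrator's actual implementation.

import numpy as np

def residual(theta, n_elastic, sensors):
    # theta: elastic parameters followed by one (factor, offset) pair per sensor.
    # sensors: list of (model_response, y_measured), where model_response(elastic)
    # returns the FE model prediction at that sensor (placeholder callable).
    elastic = np.asarray(theta[:n_elastic])
    calib = np.asarray(theta[n_elastic:]).reshape(-1, 2)
    res = []
    for (model_response, y_meas), (factor, offset) in zip(sensors, calib):
        # affine sensor model: systematic calibration factor and offset
        res.append(factor * model_response(elastic) + offset - y_meas)
    return np.concatenate([np.atleast_1d(r) for r in res])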
The amount of data generated worldwide is constantly increasing. These data come from a wide variety of sources and systems, are processed differently, have a multitude of formats, and are stored in an untraceable and unstructured manner, predominantly in natural language in data silos. This problem can be equally applied to the heterogeneous research data from materials science and engineering. In this domain, ways and solutions are increasingly being generated to smartly link material data together with their contextual information in a uniform and well-structured manner on platforms, thus making them discoverable, retrievable, and reusable for research and industry. Ontologies play a key role in this context. They enable the sustainable representation of expert knowledge and the semantically structured filling of databases with computer-processable data triples.
In this perspective article, we present the project initiative Materials-open-Laboratory (Mat-o-Lab) that aims to provide a collaborative environment for domain experts to digitize their research results and processes and make them fit for data-driven materials research and development. The overarching challenge is to generate connection points to further link data from other domains to harness the promised potential of big materials data and harvest new knowledge.
A safe and robust performance is a key criterion when building and maintaining structures and components. Ensuring this criterion at different stages of the lifetime can be supported by applying continuous monitoring concepts. The latter usually can serve multiple purposes, including the determination of material parameters for the design phase, the evaluation of the actual loading/environmental conditions (instead of using conservative estimates that are usually larger) and evaluating or predicting the true performance of the structure (thus decreasing the model bias). In this context, a digital twin of the structure has many benefits. In addition, it allows the introduction of virtual sensors to “measure” sensor information that is e.g. inaccessible or unmeasurable. In the limit, the remaining useful life of a structure can be interpreted as a property that can be “measured” indirectly via the numerical model in combination with real sensor data. In order to efficiently use monitoring techniques in the context of a digital twin, it is important to consider the complete chain of information including the choice of sensors, the data processing and structuring, the modelling assumptions, the numerical simulation and finally the stochastic nature of the model prediction. In this presentation, challenges in this context are discussed with a specific focus on Bayesian model updating of the digital twin, accounting for both parameter updates as well as the model bias that results from the limitations of the modelling assumptions. A bottleneck in this approach is the computational effort related to sampling methods such as Markov chain Monte Carlo methods that require many evaluations of the forward model. An alternative to the expensive computation of the forward model for updating the digital twin is the combination with model reduction techniques such as the Proper Generalized Decomposition. The results are illustrated for several examples and scales, ranging from digital twins for material tests in the lab over lab-scale structural digital twins up to damage identification in field experiments.
A safe and robust performance is a key criterion when building and maintaining structures and components. Ensuring this criterion at different stages of the lifetime can be supported by applying continuous monitoring concepts. The latter usually can serve multiple purposes, including the determination of material parameters for the design phase, the evaluation of the actual loading/environmental conditions (instead of using conservative estimates that are usually larger) and evaluating or predicting the true performance of the structure (thus decreasing the model bias). In this context, a digital twin of the structure has many benefits. It allows the introduction of virtual sensors to “measure” sensor information that is e.g. inaccessible or unmeasurable. In order to efficiently use monitoring techniques in the context of a digital twin, it is important to consider the complete chain of information including the choice of sensors, the data processing and structuring, the modelling assumptions, the numerical simulation and finally the stochastic nature of the model prediction. In this presentation, challenges in this context are discussed with a specific focus on Bayesian model updating of the digital twin, accounting for both parameter updates as well as the model bias that results from the limitations of the modelling assumptions. A bottleneck in this approach is the computational effort related to sampling methods such as Markov chain Monte Carlo methods that require many evaluations of the forward model. An alternative to the expensive computation of the forward model for updating the digital twin is the combination with model reduction techniques such as the Proper Generalized Decomposition [1, 2]. The results are illustrated for several examples and scales, ranging from digital twins for material tests in the lab over lab-scale structural digital twins up to damage identification in field experiments.
In materials and component research, artificial intelligence methodologies will lead to massive upheavals in the coming years. The processes of material development, material processing, lifetime prediction and material characterization will change significantly. By combining AI methods and new forms of knowledge representation, the data-based management of product life cycles will take on new qualities. To address this emerging field of research, Fraunhofer IWM set up the online workshop »AI Methods for Fatigue Behavior Assessment and Component Lifetime Prediction«.
In this paper, the imperialist competitive optimization algorithm is improved by damage functions to detect damage in a model steel frame test structure for offshore applications. A finite element model of the test structure is developed, validated and updated using the proposed method. As there are many more design variables, which are related to the stiffness of each finite element, than measured mode shapes, the problem is underdetermined. Therefore, damage functions are used to regularize the problem and decrease the number of design variables. A new objective function is proposed for the algorithm using the mode shapes and their l1 norm. The first ten measured mode shapes are used to solve the problem. It is shown that the proposed method is capable of predicting the damage locations with acceptable accuracy.
A three-phase transport model for high-temperature concrete simulations validated with X-ray CT data
(2021)
Concrete exposure to high temperatures induces thermo-hygral phenomena, causing water phase changes, buildup of pore pressure and vulnerability to spalling. In order to predict these phenomena under various conditions, a three-phase transport model is proposed. The model is validated on X-ray CT data up to 320 °C, showing good agreement of the temperature profiles and moisture changes. A dehydration description, traditionally derived from thermogravimetric analysis, was replaced by a formulation based on data from neutron radiography. In addition, previous approaches, which treat porosity and dehydration evolution as independent processes, do not fulfil the solid mass balance. As a consequence, a new formulation is proposed that introduces the porosity as an independent variable, ensuring the latter condition.
In analyzing large scale structures, it is necessary to take into account the material heterogeneity for accurate failure prediction. However, this greatly increases the number of degrees of freedom in the numerical model, making the simulation infeasible. Moreover, in applications where scale separation as the basis of classical homogenization schemes does not hold, the influence of the fine scale on the coarse scale has to be modelled directly.
This work aims to develop an efficient methodology to model heterogeneous structures combining the variational multiscale method and model order reduction techniques.
Superposition based methods assume a split of the solution field into coarse and fine scale contributions. In deriving practical methods, some form of localization is necessary to eliminate the fine scale part from the coarse scale equation. Hund and Ramm discussed different locality constraints and resulting solution procedures in the context of solid mechanics. Particularly, zero jump conditions ensuring continuity of the fine scale solution, which are enforced by a Lagrange-type method, lead to a coupled solution procedure.
In this contribution, a combination of the variational multiscale method and model order reduction techniques is applied to model the influence of the fine scale on the coarse scale directly. First, possible coarse and fine scale solutions are explored for a representative volume element (RVE), specific to the material of interest, to construct local approximation spaces. The local spaces are designed such that local contributions of RVEs can be coupled in a conforming way. Therefore, the resulting global system takes the effect of the fine scale on the coarse scale into account, is sparse and reduced in size compared to the direct numerical simulation.
The authors gratefully acknowledge financial support by the German Research Foundation (DFG), project number 394350870, and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC Grant agreement No. 818473).
In analyzing large scale structures, it is necessary to take into account the material heterogeneity for accurate failure prediction. However, this greatly increases the number of degrees of freedom in the numerical model, thus making the simulation infeasible. Moreover, in applications where scale separation as the basis of classical homogenization schemes does not hold, the influence of the fine scale on the coarse scale has to be modelled directly.
This work aims to develop an efficient methodology to model heterogeneous structures combining the variational multiscale method and model order reduction techniques. Superposition-based methods assume a split of the solution field into coarse and fine scale contributions. In deriving practical methods, some form of localization is necessary to eliminate the fine-scale part from the coarse-scale equation. Hund and Ramm [2] discussed different locality constraints and in particular zero jump conditions enforced by a Lagrange-type method leading to a coupled solution scheme.
In this contribution, a combination of the variational multiscale method and model order reduction techniques is applied to model the influence of the fine scale on the coarse scale directly. First, possible coarse and fine scale solutions are explored for a representative volume element (RVE), specific to the material of interest, to construct local approximation spaces. For the local fine scale spaces different choices are presented, which ensure continuity between adjacent coarse grid elements. Therefore, the resulting global system takes into account the effect of the fine scale on the coarse scale, is sparse and has much lower dimension compared to the full system in the direct numerical simulation.
The authors gratefully acknowledge financial support by the German Research Foundation (DFG), project number 394350870. This result is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 818473).
The main challenge in using numerical models as digital twins in real applications for prognosis purposes, such as reliability analysis, is the calibration and validation of the models based on uncertain measurement data. Uncertainties are not limited to the measurement data, but the numerical model itself will not be perfect due to the modelling assumptions.
In this contribution, a probabilistic inference method for model calibration, based on Bayes' theorem, is used to face that issue. Such inference approaches include uncertainties on the data as well as on the model parameters, allowing the computation of an a posteriori distribution for the model parameters as well as a noise term reflecting the measured data. However, such probabilistic inference methods require many evaluations of the numerical forward model for different model parameters. An improvement of the efficiency is obtained by replacing the forward model with a reduced model. Model reduction, e.g. the proper generalized decomposition (PGD) method, is a popular concept to decrease the computational effort, where each evaluation of the reduced forward model is a far less costly function evaluation.
The heterogeneous spatial distribution of material parameters in the forward model is described by a lognormal random field. This allows identifying a variable stiffness over the spatial directions by identifying the random field variables from the given measurement data. These changes can e.g. be caused by damage. The lognormal field is approximated by a series expansion for the PGD problem.
The derived efficient model identification procedure is demonstrated using a real, reinforced, pre-stressed demonstrator bridge and stereophotogrammetry measurement data. A digital twin for that demonstrator bridge is built up using a set of measurement data and verified by testing against additional measurement data. The PGD model error with respect to the FEM model is discussed based on an importance sampling analysis computing the Bayes factor.
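The series expansion of the lognormal field mentioned above is commonly a truncated Karhunen-Loève expansion; the following numpy sketch, with an exponential covariance kernel on a 1D grid, is one possible instance (kernel, grid and parameter values are assumptions for illustration):

import numpy as np

def lognormal_field_sample(x, mean_log, std_log, corr_length, xi):
    # Sample a lognormal stiffness field E(x) = exp(g(x)) on grid points x,
    # with g a Gaussian field represented by a truncated KL expansion.
    # xi: standard-normal coefficients, len(xi) = number of retained modes.
    C = std_log**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)
    lam, phi = np.linalg.eigh(C)                  # discrete KL eigenpairs
    idx = np.argsort(lam)[::-1][:len(xi)]         # keep the dominant modes
    g = mean_log + phi[:, idx] @ (np.sqrt(lam[idx]) * xi)
    return np.exp(g)

# usage: one stiffness sample on 50 points with 5 KL modes
x = np.linspace(0.0, 10.0, 50)
E = lognormal_field_sample(x, mean_log=np.log(30e3), std_log=0.1,
                           corr_length=2.0, xi=np.random.standard_normal(5))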
The main challenge in using numerical models as digital twins in real applications is the calibration and validation of the model based on uncertain measurement data. Therefore, model updating approaches, which are inverse optimization processes, are applied. This requires a huge number of computations of the same numerical model with slightly different model parameters. For that reason, model updating becomes computationally very expensive for real applications.
Model reduction, e.g. the proper generalized decomposition method, is a popular concept to decrease the computational effort of complex numerical simulations. Therefore, a reduced model of the structure of interest is derived and used as a surrogate model in a variational Bayesian procedure to create a very efficient digital twin of the structure.
An efficient model updating approach by means of a PGD reduced model with a random field of material stiffness parameters is shown. The random field allows calibrating the model considering parameter changes over the spatial directions. These changes can be caused by local damage as well as by the production process. As an exemplary application, a demonstrator bridge is used. Digital twins can reduce the costs for maintenance and inspections, especially for the costly civil infrastructure with high requirements on its performance over the whole lifetime. At present, the state of the structure is determined by regular manual and visual inspections at constant intervals. However, the critical sections are often not directly accessible or impossible to instrument at all. In this case, model-based approaches, where a digital twin is set up, can improve the process. Based on this digital twin, a prognosis of the future performance of the structure, e.g. the failure probability, can be computed.
The influences of the reduction degree, the mesh discretization as well as the correlation length on the PGD Bayesian approach are studied by means of the digital twin of a simple pre-stressed concrete two-span bridge.
Many of today’s problems in engineering demand reliable and accurate prediction of failure mechanisms of mechanical structures. Herein it is necessary to take into account the often heterogeneous structure on the fine scale, to capture the underlying physical phenomena. Despite ever increasing computational resources, resolving the fine scales in a direct numerical simulation is prohibitive. This work aims to develop an efficient approach to modeling nonlinear heterogeneous structures using the variational multiscale method (VMM) and model order reduction (MOR).
The VMM assumes an additive split of the solution into coarse and fine scale contributions. It has been applied to a damage mechanics-based material model for concrete-like materials. Herein, suitable boundary conditions for the fine scale which enable localization phenomena to evolve are discussed. As such, zero jump conditions between fine scale solutions are proposed, which are enforced pointwise by a Lagrange-type method, leading to a coupled solution procedure.
In this contribution, possible extensions of the VMM with reduced order modeling are presented. In the linear case, assuming the fine scale solution to be zero on coarse scale element boundaries allows for static condensation and a decoupled solution procedure. Based on this, an efficient localized training strategy will be developed. For the nonlinear case, the situation of coupled non-conforming spaces, i.e. finite element and reduced order spaces for the fine scales, arises. Thus, the imposition of suitable fine scale interface conditions in the weak sense by the use of Lagrange multipliers is investigated. Specific problems in solid mechanics are used to illustrate the performance of the above approaches.
The authors gratefully acknowledge financial support by the German Research Foundation (DFG), project number 394350870, and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC Grant agreement No. 818473).
Fatigue models that accurately resolve the complex three-dimensional failure mechanisms of concrete are numerically expensive. Especially the calibration of fatigue parameters to existing Wöhler lines requires solving for thousands or millions of cycles, and a naive cycle-by-cycle integration is not feasible. The proposed adaptive cycle jump methods provide a remedy to this challenge. They greatly reduce the numerical effort of fatigue simulations and provide the basis for the further development of such models.
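A minimal sketch of the cycle-jump idea follows: the damage state is extrapolated explicitly over a jump of several cycles, with the jump size controlled by a tolerance on the damage increment. The adaptive schemes of the actual work are more elaborate; the callable damage_rate_per_cycle is a placeholder for resolving one loading cycle with the FE model.

def cycle_jump_integration(damage0, damage_rate_per_cycle, n_cycles,
                           max_increment=0.01, n_min=1, n_max=10000):
    # Integrate dD/dN with adaptive cycle jumps until n_cycles or failure (D = 1).
    D, N = damage0, 0
    while N < n_cycles and D < 1.0:
        rate = damage_rate_per_cycle(D, N)
        # jump as many cycles as the damage-increment tolerance allows
        dN = n_max if rate <= 0 else min(max(int(max_increment / rate), n_min), n_max)
        dN = min(dN, n_cycles - N)
        D, N = min(D + rate * dN, 1.0), N + dN
    return D, N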
One of the main challenges regarding our civil infrastructure is its efficient operation over the complete design lifetime while complying with standards and safety regulations. Thus, costs for maintenance or replacements must be optimized while still ensuring specified safety levels. This requires an accurate estimate of the current state as well as a prognosis for the remaining useful life. Currently, this is often done by regular manual or visual inspections at constant intervals. However, the critical sections are often not directly accessible or impossible to instrument at all. Model-based approaches can be used, where a digital twin of the structure is set up. For these approaches, a key challenge is the calibration and validation of the numerical model based on uncertain measurement data. The aim of this contribution is to increase the efficiency of model updating by using the advantage of model reduction (Proper Generalized Decomposition, PGD) and applying the derived method for efficient model identification of a random stiffness field of a real bridge.
The quality of a model - and thus its predictive capabilities - is influenced by numerous uncertainties. They include possibly unknown boundary and initial conditions, noise in the data used for its calibration and uncertainties in the model itself. Here, the latter part is not only restricted to uncertain model parameters, but also refers to the choice of the model itself. Inferring these uncertainties in an automatic way allows for an adaptation of the model to new data sets and for a reliable, reproducible model assessment. Note that similar concepts apply at the structural level, where a continuously updated digital twin allows virtual measurements at inaccessible positions of the structure and a simulation-based lifetime prediction.
This work presents an inference workflow that describes the difference of measured data and simulated model responses with a generic interface that is independent from the specific model or even the geometry and can easily incorporate multiple data sources. A variational Bayesian inference algorithm is then used to a) calibrate a set of models to given data and to b) identify the best fitting one. The developed concepts are applied to a bridge demonstrator equipped with displacement sensors, force sensors and a stereophotogrammetry system to perform a system identification of the material parameters as well as a real-time identification of a moving load.
Simulating high-cycle fatigue with continuum models offers the possibility to model stress redistributions and consider 3D stress states, and it simplifies extensions to multi-physics problems. The computational cost of conventional cycle-by-cycle time integrations is reduced by reformulating the fatigue problem as an ordinary differential equation for the material state and solving it with high-order adaptive time integration schemes. The computational cost of calculating the change of the material state in one cycle is further reduced by a high-order fatigue-specific time integration. The approach is exemplarily demonstrated for a fatigue extension of the implicit gradient-enhanced damage model in 3D and compared to experimental Wöhler lines.
The simulation of the structural response for impact scenarios strongly requires an accurate simulation of both the impact event as well as the subsequent wave propagation. The numerical modeling of the impact event is intrinsically ill-posed due to the instantaneous changes of velocities in the contact area, leading to unbounded accelerations for decreasing time steps, which causes oscillations in the contact stresses. These oscillations then propagate into the bulk material. For rate-dependent materials, like concrete, they might lead to significant errors and a wrong prediction of the structural response. A regularization is thus required to avoid oscillations in the contact stresses. Another issue is related to the numerical computation of the contact conditions. In impact simulations, the nonlinear contact computation needs to be evaluated in every time step. A segmentation technique of the contact area is accurate but time consuming and may result in a bottleneck for the simulation and implementation, especially for 3D problems. The modeling of the subsequent wave propagation requires small time steps, which is primarily due to accuracy reasons. Implicit schemes are thus not affordable. Explicit time integration schemes are efficient only for diagonal mass matrices, as in this case no solution of a linear system is required. In this work, a coupled finite element - Non-Uniform Rational B-Spline (FE-NURBS) approach is applied to impact problems. The coupled approach uses an intermediate NURBS layer to compute the contact forces between the contacting bodies discretized by FEs. The advantages of a smooth isogeometric contact formulation are used to compute the contact forces. A segmentation of the contact area is avoided and an efficient element-based integration is used. The impact event is regularized using a mesh-dependent nonlinear penalty approach. The penalty function is a polynomial which ensures a smooth transition between the non-contact and the contact state during the impact. For finer meshes, the penalty regularization becomes stiffer while still avoiding artificial oscillations in the contact stresses. Efficient higher order space and time discretizations are used to model the wave propagation. Explicit time integration is combined with higher order spectral element spatial discretization.
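One possible instance of such a mesh-dependent polynomial penalty law (purely illustrative, not the exact polynomial used in this work) lets the contact pressure grow quadratically up to a regularization depth g_0 that scales with the local element size h and continue linearly, C1-continuously, beyond it:

p(g) = \frac{\varepsilon}{2 g_0}\, g^2 \quad (0 \le g \le g_0), \qquad p(g) = \varepsilon\left(g - \tfrac{g_0}{2}\right) \quad (g > g_0), \qquad g_0 \propto h,

so that the contact response stiffens upon mesh refinement while the transition from the non-contact to the contact state remains smooth.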
One of the most important goals in civil engineering is to guarantee the safety of the construction. Standards prescribe a required failure probability in the order of 10⁻⁴ to 10⁻⁶. Generally, it is not possible to compute the failure probability analytically.
Therefore, many approximation methods have been developed to estimate the failure probability. Nevertheless, these methods still require a large number of evaluations of the investigated structure, usually finite element (FE) simulations, making full probabilistic design studies not feasible for relevant applications. The aim of this paper is to increase the efficiency of structural reliability analysis by means of reduced order models. The developed method paves the way for using full probabilistic approaches in industrial applications. In the proposed PGD reliability analysis, the solution of the structural computation is directly obtained from evaluating the PGD solution for a specific parameter set without computing a full FE simulation. Additionally, an adaptive importance sampling scheme is used to minimize the total number of required samples. The accuracy of the failure probability depends on the accuracy of the PGD model (mainly influenced by the mesh discretization and mode truncation) as well as the number of samples in the sampling algorithm. Therefore, a general iterative PGD reliability procedure is developed to automatically verify the accuracy of the computed failure probability. It is based on a goal-oriented refinement of the PGD model around the adaptively approximated design point. The methodology is applied and evaluated for 1D and 2D examples. The computational savings compared to the method based on a FE model are shown and the influence of the accuracy of the PGD model on the failure probability is studied.
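The basic importance-sampling estimator underlying such PGD reliability analyses is sketched below; the adaptive design-point search and the goal-oriented PGD refinement loop of the paper are omitted, and the limit state in the usage line is a toy example, not taken from the paper.

import numpy as np

def importance_sampling_pf(limit_state, design_point, n_samples=10_000, seed=0):
    # Estimate Pf = P(g(U) <= 0) in standard normal space with a Gaussian
    # importance density centred at an (approximate) design point.
    # limit_state(u) would be evaluated via the cheap PGD surrogate.
    rng = np.random.default_rng(seed)
    dp = np.asarray(design_point, dtype=float)
    u = rng.standard_normal((n_samples, dp.size)) + dp
    g = np.array([limit_state(ui) for ui in u])
    # weights = standard normal density / shifted density (constants cancel)
    log_w = -0.5 * np.sum(u**2 - (u - dp)**2, axis=1)
    return float(np.mean((g <= 0.0) * np.exp(log_w)))

# toy usage: linear limit state g(u) = 3.5 - u[0], exact Pf = Phi(-3.5)
pf = importance_sampling_pf(lambda u: 3.5 - u[0], design_point=[3.5, 0.0])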
In this paper, the impact problem and the subsequent wave propagation are considered. For the contact discretization an intermediate non-uniform rational B-spline (NURBS) layer is added between the contacting finite element bodies, which allows a smooth contact formulation and efficient element-based integration.
The impact event is ill-posed and requires a regularization to avoid propagating stress oscillations. A nonlinear mesh-dependent penalty regularization is used, where the stiffness of the penalty regularization increases upon mesh refinement. Explicit time integration methods are well suited for wave propagation problems, but are efficient only for diagonal mass matrices. Using a spectral element discretization in combination with a NURBS contact layer, the bulk part of the mass matrix is diagonal.
Concrete is a complex material and can be modeled on various spatial and temporal scales. While simulations on coarse scales are practical for engineering applications, a deeper understanding of the material is gained on finer scales. This is at the cost of an increased numerical effort that can be reduced by the three methods developed and used in this work, each corresponding to one publication.
The coarse spatial scale is related to fully homogenized models. The material is described in a phenomenological approach and the numerous parameters sometimes lack a physical meaning. Resolving the three-phase mesoscopic structure consisting of aggregates, the mortar matrix and the interfaces between them allows describing similar effects with simpler models.
One of the most important goals in civil engineering is to guarantee the safety of constructions. National standards prescribe a required failure probability in the order of 10⁻⁶ (e.g. DIN EN 1990:2010-12). The estimation of these failure probabilities is the key point of structural reliability analysis. Generally, it is not possible to compute the failure probability analytically. Therefore, simulation-based methods as well as methods based on surrogate modelling or response surface methods have been developed. Nevertheless, these methods still require a few thousand evaluations of the structure, usually with finite element (FE) simulations, making reliability analysis computationally expensive for relevant applications.
The aim of this contribution is to increase the efficiency of structural reliability analysis by using the advantages of model reduction techniques. Model reduction is a popular concept to decrease the computational effort of complex numerical simulations while maintaining a reasonable accuracy. Coupling a reduced model with an efficient variance reducing sampling algorithm significantly reduces the computational cost of the reliability analysis without a relevant loss of accuracy.
In this paper, the impact problem and the subsequent wave propagation are considered. For the contact discretization an intermediate NURBS layer is added between the contacting finite element bodies, which allows a smooth contact formulation and efficient element‐based integration. The impact event is ill‐posed and requires a regularization to avoid propagating stress oscillations. A nonlinear mesh dependent penalty regularization is used, where the stiffness of the penalty regularization increases upon mesh refinement. Explicit time integration methods are well suited for wave propagation problems, but are efficient only for diagonal mass matrices. Using a spectral element discretization and the coupled FE‐NURBS approach the bulk part of the mass matrix is diagonal.
The key point of structural reliability analysis is the estimation of the failure probability. This probability is defined as the integral over the failure domain which is given by a limit state function. Usually, this function is only implicitly given by an underlying finite element simulation of the structure. It is generally not possible to solve the integral analytically. For that reason, numerical methods based on sampling and surrogates have been developed. Nevertheless, these sampling methods still require a few thousand calculations of the underlying finite element model, making reliability analysis computationally expensive for relevant applications.
Coupling a reduced order model (proper generalized decomposition) with an efficient variance reducing sampling algorithm can reduce the computational cost of reliability analysis drastically. In the proposed method, an importance sampling technique is coupled with a reduced structural model by means of PGD to estimate the failure probability. Instead of calculating the design point e.g. with optimization algorithms, the design point is adaptively estimated by using the idea of subset simulation. The failure probability is estimated in an iterative scheme based on adaptively computing the design point and refining the PGD model.
One of the most important goals in civil engineering is to guarantee the safety of constructions. National standards prescribe a required failure probability in the order of 10⁻⁶ (e.g. DIN EN 1990:2010-12). The estimation of these failure probabilities is the key point of structural reliability analysis. Generally, it is not possible to compute the failure probability analytically.
Therefore, simulation-based methods as well as methods based on surrogate modeling or response surface methods have been developed. Nevertheless, these methods still require a few thousand evaluations of the structure, usually with finite element (FE) simulations, making reliability analysis computationally expensive for relevant applications.
The aim of this contribution is to increase the efficiency of structural reliability analysis by using the advantages of model reduction techniques. Model reduction is a popular concept to decrease the computational effort of complex numerical simulations while maintaining a reasonable accuracy. Coupling a reduced model with an efficient variance reducing sampling algorithm significantly reduces the computational cost of the reliability analysis without a relevant loss of accuracy.
The efficiency of structural model updating and the subsequent reliability analysis is increased by using the advantages of reduced order models. Coupling a reduced model of the structure of interest with a Bayesian model updating approach or a reliability analysis to estimate the failure probability reduces the computational cost of such complex analyses drastically.
In this work, a probabilistic framework for the identification of traffic loads on concrete bridge structures is presented using data from a FE structural model in combination with a finite volume approach for traffic load modelling. The identification approach uses Bayesian inference to identify traffic loads from measured sensor data from travelling load experiments performed at BAM. The work focuses on the load identification part of the framework, utilizing global structural response measurements only. The obtained information on traffic loads can be forwarded to further analysis such as fatigue and structure state estimation or model updating.
A hyper reduced domain decomposition approach for modeling nonlinear heterogeneous structures
(2019)
Many of today's problems in engineering demand reliable and accurate prediction of failure mechanisms of mechanical structures. Herein it is necessary to take into account the often heterogeneous structure on the fine scale, to capture the underlying physical phenomena. However, an increase of accuracy by resolving the fine scale inevitably leads to an increase in computational cost. In the context of multiscale simulations, the FE2 method is widely used. In a two-level computation, the fine scale is represented by a boundary value problem for a representative volume element (RVE), which is then solved in each integration point of the macro scale to determine the macroscopic response. However, the FE2 approach in general is computationally expensive and problematic in the special case of concrete structures. Here, rather large RVEs are necessary to sufficiently represent the meso-structure, such that separation of scales cannot be assumed.
Therefore, the aim is to develop an efficient approach to modeling nonlinear heterogeneous structures using domain decomposition and reduced order modeling.
Spalling of concrete structures is a serious issue for their safety. A better understanding of the pore water distribution and state during a fire is a prerequisite for numerical approaches to such problems.
Temperature-driven water transport in concrete consists of multiple phenomena, such as convection, diffusion, adsorption and dehydration. Distinguishing the different influences experimentally is difficult because typically they cannot be disentangled.
A common experimental setup approximates a one-dimensional flow and places temperature and pressure gauges along the propagation direction. For direct information about the water content inside a sample, methods such as NMR or neutron radiography are necessary.
A multiphase model for the flow in porous media is presented, with dehydration and changes in the pore size distribution taken into consideration. NMR measurements for temperature-driven flow have been performed. The numerical and experimental results are compared for water transport at temperatures below the critical point. Since both the finite-element model and the experiment allow the distinction between adsorbed, capillary and bulk water, a more fine-grained view of the pore water state is obtained.
The key point of structural reliability analysis is the estimation of the failure probability (Pf), typically a rare event. This probability is defined as the integral over the failure domain which is given by a limit state function. Usually, this function is only implicitly given by an underlying finite element simulation of the structure. It is generally not possible to solve the integral for Pf analytically. For that reason, simulation-based methods as well as methods based on surrogate modeling (or response surface methods) have been developed. Nevertheless, these variance reducing methods still require a few thousand calculations of the underlying finite element model, making reliability analysis computationally expensive for real applications.
Elastic waves in inhomogeneous meshes avoiding numerical artifacts
Elastic waves in solids resulting from damage processes, e.g. microcracking, are used to monitor the integrity of structures. The numerical modelling of these acoustic emission processes is hindered by the different scales involved. Crack opening is a fast process and the size of the damaged zone is small, leading to small time steps and fine meshes in a numerical finite element simulation. On the other hand, the relevant wave propagation takes place on a much larger spatial scale, e.g. covering the distance between emission source and sensor.
To avoid numerical oscillations, the mesh size at the emission source has to be coupled to its time scale. Using higher order spectral elements can be beneficial with respect to the needed number of degrees of freedom. To make the computation of an acoustic emission process feasible, one is led to coarsen the mesh for larger distances from the source. Solution components with a higher frequency will be reflected at mesh density steps. The mesh coarsening has to be done in a way that avoids or minimizes this kind of reflections.
To get more insight into the propagation characteristics of the numerical solution, dispersion curves are calculated for different element types assuming a structured mesh with constant element size. Coupling two meshes with different mesh densities will then lead to frequency dependent reflections at the boundary similar to the coupling of different materials.
The starting mesh density is dictated by the time scale of the acoustic emission source. The largest allowable element size must resolve the highest-frequency, i.e. smallest-wavelength, components of the propagating signal, which are determined by the bandwidth of the sensor.
Still coarser meshes may be used when high frequency components are propagated by a different method.
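As an illustrative order-of-magnitude estimate (all numbers assumed, not taken from the contribution), for a wave speed c = 4000 m/s and a sensor bandwidth f_max = 500 kHz:

\lambda_{\min} = c / f_{\max} = (4000\ \mathrm{m/s}) / (500\ \mathrm{kHz}) = 8\ \mathrm{mm}, \qquad h_{\max} \approx \lambda_{\min} / n \approx 0.8\text{-}1.6\ \mathrm{mm} \ \text{for}\ n = 5\text{-}10\ \text{nodes per wavelength},

where the required number of nodes per wavelength n depends on the element order.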
Accurate and reliable assessment of transportation infrastructures (e.g. bridges) is crucial for ensuring public safety. Simulation-based engineering analyses can be used to assess and predict the health state of structures. Although some of the structural parameters necessary for such simulations cannot be directly measured, they can be inferred from Non-Destructive Testing data, in a typical inverse problem formulation. However, any prediction based on engineering models will never be an exact representation of reality, since many sources of uncertainties can be present: poor physical representation of the problem (model bias); measurement errors; uncertainty on inferred parameters; etc. Uncertainty quantification is, therefore, of utmost importance to ensure reliability of any decision based on the simulations.
This paper investigates the application of the Modular Bayesian framework to perform structural parameter calibration while estimating a function for the model bias. The data for the tests comes from an experimental setup and is represented as a combination of a physics-based model, a model bias term and an additive measurement error (considered to be known). In this approach, the inference problem is divided into three modules: the first and the second estimate the optimal hyper-parameters of Gaussian Processes (GP) to replace the physics-based model and the model bias term, respectively; the third computes the posterior distribution of the structural parameters of interest, taking into account the GPs from the previous modules. By computing a bias-correction function and calibrating the parameters, the modular framework aims at improving the accuracy of the predictions.
Combination of model reduction and adaptive subset simulation for structural reliability problems
(2019)
A safe and robust design is a key criterion when building a structure or a component. Ensuring this criterion can either be performed by fulfilling prescribed safety margins, or by using a full probabilistic approach with a computation of the failure probability. The latter approach is particularly well suited for complex problems with an interaction of different physical phenomena that can be described in a numerical model. The bottleneck in this approach is the computational effort. Sampling methods such as Markov chain Monte Carlo methods are often used to evaluate the system reliability. Due to small failure probabilities (e.g. 10⁻⁶) and complex physical models with an already extensive computational effort for a single set of parameters, these methods are prohibitively expensive. The focus of this contribution is to demonstrate the advantages of combining model reduction techniques with variance-reducing adaptive sampling procedures. In the developed method, a modification of the adaptive subset simulation based on Papaioannou et al. 2015 is used and coupled with a limit state function based on the Proper Generalized Decomposition (PGD) (Chinesta et al. 2011). In the subset simulation, the failure probability is expressed as a product of larger conditional failure probabilities. The intermediate failure events are chosen as a decreasing sequence. Instead of solving each conditional probability with a Markov chain approach, an importance sampling approach is used. It is shown that the accuracy of the estimation depends mainly on the number of samples in the last sub-problem. For model reduction, the PGD approach is used to solve the structural problem a priori for a given parameter space (physical space plus all random parameters). The PGD approach results in an approximation of the problem output within a prescribed range of all input parameters (load factor, material properties, ...). The approximation of the solution in a separated form allows an evaluation of the limit state function within the sampling algorithm at almost no cost. This coupled PGD - adaptive subset simulation approach is used to estimate the failure probability of examples with different complexity. The convergence, the error propagation as well as the reduction in computational time are discussed.
A continuum damage model for concrete is developed with a focus on fatigue under compressive stresses. This includes the possibility to model stress redistributions and capture size effects. In contrast to cycle-based approaches, where damage is accumulated based on the number of full stress cycles, a strain-based approach is developed that can capture cyclic degradation under variable loading cycles including different amplitudes and loading frequencies. The model is designed to represent failure under static loading as a particular case of fatigue failure after a single loading cycle. As a consequence, most of the material parameters can be deduced from static tests. Only a limited set of additional constitutive parameters is required to accurately describe the evolution under fatigue loading. Another advantage of the proposed model is the possibility to directly incorporate other multi-physics effects such as creep and shrinkage or thermal loading on the constitutive level. A multiscale approach in time is presented to enable structural computations of fatigue failure with a reduced computational effort. The damage rate within the short time scale corresponding to a single cycle is computed based on a Fourier-based approach. This evolution equation is then solved on the long time scale using different implicit and explicit time integration schemes. Their performance and some limitations for specific loading regimes are discussed.
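The temporal two-scale idea can be sketched as a simple "cycle-jump" scheme: one load cycle is resolved on the short time scale to obtain a damage rate per cycle, which is then integrated explicitly over many cycles on the long time scale. The sketch below is not the Fourier-based scheme of the model; the 1-D damage law and all parameter values are hypothetical.

```python
# Minimal cycle-jump sketch: dD/dN from one resolved cycle, explicit long-scale integration.
import numpy as np

eps_amp, n_sub = 1.2e-3, 100            # strain amplitude, points per resolved cycle
k_fat, eps_0 = 0.5, 1.0e-3              # hypothetical fatigue parameters (rate factor, threshold)

def damage_rate_per_cycle(D):
    """Integrate a simple strain-driven damage law over one resolved cycle."""
    eps = eps_amp * np.sin(np.linspace(0.0, 2.0 * np.pi, n_sub))
    drive = np.clip(np.abs(eps) - eps_0, 0.0, None)   # damage grows only above the threshold
    return (1.0 - D) * k_fat * np.trapz(drive, dx=1.0 / n_sub)

def integrate_long_scale(n_cycles, dN=50):
    """Explicit integration of dD/dN on the long (cycle-count) scale with cycle jumps dN."""
    D, history = 0.0, []
    for N in range(0, n_cycles, dN):
        D = min(D + dN * damage_rate_per_cycle(D), 1.0)
        history.append((N + dN, D))
    return history

for N, D in integrate_long_scale(200_000)[::500]:
    print(f"cycle {N:7d}: damage = {D:.3f}")
```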
Quasi-brittle materials exhibit strain softening. Their modeling requires regularized constitutive formulations to avoid instabilities on the material level. A commonly used model is the implicit gradient-enhanced damage model. For complex geometries, it still shows structural instabilities when integrated with classical backward Euler schemes. An alternative is the implicit–explicit (IMPL-EX) integration scheme. It consists of the extrapolation of internal variables followed by an implicit calculation of the solution fields. The solution procedure for the nonlinear gradient-enhanced damage model is thus transformed into a sequence of problems that are algorithmically linear in every time step. Therefore, they require a single Newton–Raphson iteration per time step to converge. This provides both additional robustness and computational acceleration. The introduced extrapolation error is controlled by adaptive time-stepping schemes. This paper introduces and assesses two novel classes of error-control schemes that provide further performance improvements. In a three-dimensional compression test for a mesoscale model of concrete, the presented scheme was about 40 times faster than an adaptive backward Euler time integration.
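A minimal sketch of the IMPL-EX mechanics for a single scalar internal variable is given below: the internal variable is extrapolated from the two previous steps, the step is solved with this frozen value, and the extrapolation error drives the step-size adaptation. The error-control rule shown is a generic heuristic, not one of the two classes introduced in the paper, and the "implicit solve" is replaced by a known reference evolution for illustration.

```python
# IMPL-EX sketch for a scalar internal variable kappa with adaptive time stepping.
import numpy as np

def implex_step(kappa_nm1, kappa_n, dt_n, dt_np1):
    """Explicit extrapolation of the internal variable for the new step."""
    return kappa_n + dt_np1 / dt_n * (kappa_n - kappa_nm1)

def adapt_step(dt, error, tol, fmin=0.5, fmax=2.0):
    """Shrink/grow the time step so the extrapolation error tracks the tolerance."""
    factor = np.clip(np.sqrt(tol / max(error, 1e-16)), fmin, fmax)
    return factor * dt

target = lambda t: 1.0 - np.exp(-3.0 * t)   # hypothetical "true" evolution of kappa
t, dt, tol = 0.0, 0.05, 1e-3
kappa_nm1, kappa_n = target(0.0), target(0.05)
while t < 1.0:
    kappa_tilde = implex_step(kappa_nm1, kappa_n, dt, dt)   # extrapolated (explicit) value
    kappa_new = target(t + dt)                              # stand-in for the implicit solve
    error = abs(kappa_new - kappa_tilde)                    # extrapolation error estimate
    t += dt
    kappa_nm1, kappa_n = kappa_n, kappa_new
    print(f"t = {t:.3f}, dt = {dt:.4f}, extrapolation error = {error:.2e}")
    dt = adapt_step(dt, error, tol)                         # step size for the next step
```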
In this contribution a Proper Generalized Decomposition (PGD) model is derived for linear elastic structures with Young's modulus as an extra coordinate. Next, the computed PGD solution is used in a Bayesian model updating algorithm, resulting in a probability distribution of the sought model parameter for given synthetic data. The future goal is to extend the method to identify arbitrary material parameters based on experimental data.
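A minimal sketch of the idea follows. For a linear elastic bar with a single Young's modulus E, the displacement separates exactly as u(x, E) = F(x) · (1/E), i.e. a one-mode separated (PGD-like) representation, so the parametric solution can be reused at negligible cost inside a grid-based Bayesian update from synthetic sensor data. Geometry, load, sensor positions and noise level are hypothetical.

```python
# Separated parametric solution of a 1-D bar reused for Bayesian updating of E.
import numpy as np

rng = np.random.default_rng(2)

# bar of length L under end load P with cross-section A: u(x, E) = P*x/(A*E)
L, A, P = 1.0, 0.01, 50e3                     # [m], [m^2], [N]
x_sensors = np.array([0.25, 0.5, 0.75, 1.0])  # sensor positions [m]
F = P * x_sensors / A                         # spatial mode F(x); parametric part is 1/E

def u(E):
    return F / E                              # "online" evaluation: almost free

# synthetic measurements for a true modulus, with additive Gaussian noise
E_true, sigma = 30e9, 1e-7                    # [Pa], noise std [m]
y = u(E_true) + rng.normal(0.0, sigma, x_sensors.size)

# Bayesian update on a grid of E values (uniform prior over the grid)
E_grid = np.linspace(15e9, 45e9, 600)
log_lik = np.array([-0.5 * np.sum((y - u(E)) ** 2) / sigma ** 2 for E in E_grid])
post = np.exp(log_lik - log_lik.max())
post /= np.trapz(post, E_grid)
print("posterior mean of E [GPa]:", np.trapz(E_grid * post, E_grid) / 1e9)
```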
Lifetime aspects, including fatigue failure of concrete structures, were traditionally only of minor importance. Because of the growing interest in fully exploiting the capacity of concrete, its fatigue failure under compression has become an issue. A variety of interacting phenomena, such as loss of prestress, degradation due to chemical reactions, or creep and shrinkage, influence the fatigue resistance. Failure due to cyclic loads is generally not instantaneous, but characterized by a steady damage accumulation. Therefore, a reliable numerical model to predict the performance of concrete over its lifetime is required, which accurately captures load-sequence effects and full three-dimensional stress states.
Many constitutive models for concrete are currently available, which are applicable to specific loading regimes, different time scales and different resolution scales.
However, a key limitation of those models is that they generally do not address issues related to fatigue on a structural level. Very few models can be found in the literature that reproduce the deterioration of concrete under repeated loading-unloading cycles. This is due to the computational effort necessary to explicitly resolve every cycle, which exceeds the currently available computational resources. This limitation can only be overcome by the application of multiscale methods in time.
The objective of the paper is the development of numerical methods for the simulation of concrete under fatigue loading using temporal multiscale methods.
First, a continuum damage model for concrete is developed with a focus on fatigue under compressive stresses [1]. This includes the possibility to model stress redistributions and capture size effects. In contrast to cycle-based approaches, where damage is accumulated based on the number of full stress cycles, a strain-based approach is developed that can capture cyclic degradation under variable loading cycles including different amplitudes and loading frequencies. The model is designed to represent failure under static loading as a particular case of fatigue failure after a single loading cycle. As a consequence, most of the material parameters can be deduced from static tests. Only a limited set of additional constitutive parameters is required to accurately describe the evolution under fatigue loading. Another advantage of the proposed model is the possibility to directly incorporate other multi-physics effects such as creep and shrinkage or thermal loading on the constitutive level.
Second, a multiscale approach in time is presented to enable structural computations of fatigue failure with a reduced computational effort. The damage rate within the short time scale corresponding to a single cycle is computed based on a Fourier-based approach [2]. This evolution equation is then solved on the long time scale using different implicit and explicit time integration schemes. Their performance and some limitations for specific loading regimes are discussed.
Finally, the developed methods will be validated and compared to experimental data.
[1] Vitaliy Kindrachuk, Marc Thiele, Jörg F. Unger. Constitutive modeling of creep-fatigue interaction for normal strength concrete under compression, International Journal of Fatigue, 78:81-94, 2015
[2] Vitaliy Kindrachuk, Jörg F. Unger. A Fourier transformation-based temporal integration scheme for viscoplastic solids subjected to fatigue deterioration, International Journal of Fatigue, 100:215-228, 2017
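To illustrate the constitutive starting point, the following is a generic 1-D strain-based isotropic damage update, not the specific model of [1]: the history variable kappa tracks the largest equivalent strain, damage follows an exponential softening law, and damage stays frozen on unloading. All parameter values are illustrative.

```python
# Generic 1-D strain-driven isotropic damage update with exponential softening.
import numpy as np

E, eps_0, eps_f = 30e3, 1.0e-4, 1.0e-2   # stiffness [MPa], damage onset strain, softening parameter

def damage(kappa):
    """Exponential softening law: d = 0 below onset, approaching 1 for large kappa."""
    if kappa <= eps_0:
        return 0.0
    return 1.0 - eps_0 / kappa * np.exp(-(kappa - eps_0) / eps_f)

def stress_update(eps, kappa):
    """Strain-driven update: kappa is non-decreasing (irreversible damage)."""
    kappa = max(kappa, abs(eps))
    sigma = (1.0 - damage(kappa)) * E * eps
    return sigma, kappa

# monotonic loading followed by unloading: damage does not heal on unloading
kappa = 0.0
for eps in np.concatenate([np.linspace(0.0, 8e-4, 9), np.linspace(8e-4, 0.0, 5)]):
    sigma, kappa = stress_update(eps, kappa)
    print(f"eps = {eps:.1e}, sigma = {sigma:6.2f} MPa, d = {damage(kappa):.3f}")
```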
The bridges in the federal transport network are predominantly in an adequate to good condition. However, maintenance and rehabilitation costs are rising because many bridges have reached a considerable age and heavy-goods traffic is constantly increasing. Techniques for estimating the remaining service life of bridges and for continuously monitoring their structural behaviour and the success of rehabilitation measures are therefore urgently needed for safe and economical operation. To evaluate suitable holistic approaches, the project BLEIB (Bewertung, Lebensdauerprognose und Instandsetzung von Brückenbauwerken; assessment, lifetime prediction and rehabilitation of bridge structures) was launched at BAM.
A central outcome of the project is an externally post-tensioned reinforced concrete bridge, built as a two-span girder with a total length of 24 m, developed for testing a wide range of sensor systems, validating numerical models, and trialling rehabilitation and strengthening measures. To simulate different degrees of damage, the prestress of the bridge can be varied. The bridge is loaded with movable weights and excited to vibrate by a shaker.
The bridge model was deliberately damaged by reducing the prestress of the structure step by step to zero for the first time. Under its self-weight the bridge deformed, which initiated cracking in the concrete. The tensile stress previously carried by the prestress was gradually transferred to the concrete. Once the tensile stresses exceeded the relatively low tensile strength of the concrete, it began to crack and the passive (non-prestressed) reinforcement took over the stresses. This test was accompanied, among other methods, by acoustic emission measurements. The cracking process could thus be detected early, with the prestress recorded simultaneously, and the cracks could be located. The results correlate well with the stereo-photogrammetric deformation measurements of the structure.