7.7 Modellierung und Simulation
Filter
Document type
- Talk (55)
- Journal article (19)
- Contribution to conference proceedings (8)
- Research data set (6)
- Dissertation (2)
- Poster presentation (2)
Keywords
- Model order reduction (7)
- Concrete (6)
- Model calibration (6)
- Variational multiscale method (6)
- Digital twin (5)
- Multiscale methods (5)
- Proper generalized decomposition (5)
- Additive manufacturing (4)
- Model updating (4)
- Structural build-up (4)
Organizational unit of BAM
- 7 Bauwerkssicherheit (92)
- 7.7 Modellierung und Simulation (92)
- 5 Werkstofftechnik (5)
- 5.2 Metallische Hochtemperaturwerkstoffe (5)
- 5.5 Materialmodellierung (4)
- 7.1 Baustoffe (4)
- 7.4 Baustofftechnologie (4)
- 8 Zerstörungsfreie Prüfung (4)
- 9 Komponentensicherheit (4)
- 9.3 Schweißtechnische Fertigungsverfahren (4)
Paper of the Month
- yes (2)
Invited talk
- no (55)
3D concrete printing is an innovative construction technology with the potential to enable efficient production of individual structures at lower resource consumption. The technology is expected to shape future construction practice, automate the building process, and help reach the climate goals in civil engineering. From the design of a structure to the printed component, many individual steps based on different software are required, and they must be repeated for each new or changed structure. First, the geometry of the structure is created in a CAD program. Second, the print path is defined in a slicer software that creates the machine code for the printer (G-code). Finally, the structure can be printed. Furthermore, a numerical model of the printed structure is necessary for process optimization and control. In that way, the number of test prints can be reduced, costs can be saved, and the component behavior can be predicted. For those purposes, an automated workflow that allows running all steps, or individual steps, without interacting with each individual software program is required. Furthermore, changes in parameters or the exchange of parts (using a different design or a different printer) must be possible in a simple manner. In the presented work, such an automated workflow is developed, based on the example of a parametrized wall element for extrusion-based concrete printing. The investigated wall structure is parametrized using the global geometry parameters: height, width, thickness, radius, kind of infill structure (honeycomb, zigzag) and number of repeated infills. All above-mentioned steps are implemented via Python interfaces using pydoit as the workflow tool. General interfaces with prescribed input and output files are defined, allowing adaptations for different software and tools. The described workflow is tested by performing a test series investigating the influence of the infill structure on the mechanical properties of the test walls.
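As an illustration of such a pydoit-based chain, the following is a minimal sketch of a dodo.py with one task per step; the helper scripts (create_geometry.py, slice_geometry.py, run_simulation.py), file names, and parameter values are hypothetical placeholders, not the actual BAM implementation.

```python
"""dodo.py -- minimal pydoit sketch of the automated printing workflow."""
import json

# Hypothetical global geometry parameters of the wall element.
WALL_PARAMS = {"height": 2.0, "width": 1.0, "thickness": 0.3,
               "radius": 0.05, "infill": "honeycomb", "n_infill": 4}

def task_parameters():
    """Write the parameter set that all downstream steps depend on."""
    def write():
        with open("wall.json", "w") as f:
            json.dump(WALL_PARAMS, f)
    return {"actions": [write], "targets": ["wall.json"]}

def task_geometry():
    """Step 1: create the parametrized CAD geometry."""
    return {"actions": ["python create_geometry.py wall.json wall.step"],
            "file_dep": ["wall.json"], "targets": ["wall.step"]}

def task_slice():
    """Step 2: generate the print path (G-code) with the slicer."""
    return {"actions": ["python slice_geometry.py wall.step wall.gcode"],
            "file_dep": ["wall.step"], "targets": ["wall.gcode"]}

def task_simulate():
    """Step 3: run the numerical model of the printed structure."""
    return {"actions": ["python run_simulation.py wall.gcode results.h5"],
            "file_dep": ["wall.gcode"], "targets": ["results.h5"]}
```

Running `doit` re-executes only tasks whose file dependencies changed, so modifying a geometry parameter re-triggers exactly the affected downstream steps.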
Interlaboratory studies are common tools for collecting comparable data to implement standards for new materials or testing technologies. In the case of construction materials, these studies form the basis for recommendations and design codes. Depending on the study, the amount of data collected can be enormous, making manual handling and evaluation difficult. At the same time, the importance of the FAIR (findable, accessible, interoperable, and reusable) principles for scientific data management, published by Wilkinson et al. in 2016, is constantly growing and changing the view on data usage.
The benefits of using data management tools such as data stores/repositories or electronic laboratory notebooks are many. Data is stored in a structured and accessible way (at least within a group), and data loss due to staff turnover is reduced. Such tools usually support data publishing and analysis interfaces. In this way, data can be reused years later to generate new knowledge with future insights. On the other hand, setting up a data repository involves many challenges, such as selecting suitable software tools, defining the data structure, enabling data access and understanding by others, and ensuring maintenance.
This talk discusses the advantages and challenges of setting up and applying a data repository, using the interlaboratory study on the mechanical properties of printed concrete structures carried out in RILEM TC 304-ADC as an example. First, the definition of a suitable data structure including all information is discussed. The tool-dependent upload process is then described. Here, the data management system openBIS (open-source software developed by ETH Zurich) is used. Since in most cases an open compute platform allowing access from different organisations is not possible or available due to data protection and maintenance issues, tool-independent export options are discussed and compared. Finally, the different query and analysis possibilities are demonstrated.
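To make the idea of a "suitable data structure" concrete, the following is a minimal sketch of one machine-readable record for a single test result; all field names and values are illustrative assumptions, not the schema actually used in the study or in the openBIS instance.

```python
# One interlaboratory test result as a plain, tool-independent JSON record.
import json

record = {
    "study": "RILEM TC 304-ADC interlaboratory study",
    "laboratory": "LAB-01",                      # anonymized participant id
    "specimen": {
        "id": "LAB-01-W03-C2",
        "material": "printable mortar",
        "printing_direction": "perpendicular",   # loading vs. layer direction
        "age_days": 28,
    },
    "test": {
        "type": "compression",
        "standard": "EN 12390-3",                # assumed reference procedure
        "strength_mpa": 42.7,
        "e_modulus_gpa": 31.2,
    },
}

with open("LAB-01-W03-C2.json", "w") as f:
    json.dump(record, f, indent=2)               # tool-independent export
```

Keeping each record as plain JSON leaves the data readable outside any specific repository software, which matters exactly for the export scenario discussed above.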
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2024)
In this paper, a methodology for fine scale modeling of large scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
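For readers unfamiliar with the basic mechanism, the following is a minimal numpy sketch of building a local reduced space from solutions of an oversampling problem with random boundary data; the 1D Laplace operator stands in for the local elasticity problem, and all sizes and tolerances are illustrative assumptions.

```python
# Local reduced basis via POD of random-boundary-condition snapshots.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                                   # interior dofs
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # 1D stiffness

# Snapshots: harmonic extensions of random Dirichlet boundary data,
# i.e. solve K u = 0 with boundary values moved to the right-hand side.
n_samples = 50
snapshots = np.zeros((n, n_samples))
for j in range(n_samples):
    g0, g1 = rng.standard_normal(2)   # random boundary values
    rhs = np.zeros(n)
    rhs[0], rhs[-1] = g0, g1          # eliminated boundary contributions
    snapshots[:, j] = np.linalg.solve(K, rhs)

# POD: dominant left singular vectors span the local reduced space.
# (In the actual method, snapshots are first restricted to the target
# subdomain of the oversampling problem.)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 1.0 - 1e-8)) + 1
basis = U[:, :rank]
print(f"retained {rank} of {n_samples} modes")
```

The toy example already shows the point of the approach: the fifty random snapshots compress to a two-dimensional local space.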
Thermal transient problems, essential for modeling applications like welding and additive metal manufacturing, are characterized by a dynamic evolution of temperature. Accurately simulating these phenomena is often computationally expensive, thus limiting their applications, for example for model parameter estimation or online process control. Model order reduction, a solution to preserve the accuracy while reducing the computation time, is explored. This article addresses challenges in developing reduced order models using the proper generalized decomposition (PGD) for transient thermal problems with a specific treatment of the moving heat source within the reduced model. Factors affecting accuracy, convergence, and computational cost, such as discretization methods (finite element and finite difference), a dimensionless formulation, the size of the heat source, and the inclusion of material parameters as additional PGD variables, are examined across progressively complex examples. The results demonstrate the influence of these factors on the PGD model's performance and emphasize the importance of their consideration when implementing such models. For the thermal example, it is demonstrated that a PGD model with a finite difference discretization in time, a dimensionless representation, a mapping for the moving heat source, and no separation of the spatial domain yields the best approximation to the full order model.
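The core PGD mechanics can be sketched in a few lines. The toy example below, assuming a 1D heat equation with homogeneous Dirichlet conditions, zero initial condition and a separable source, builds a separated approximation U ≈ Σ X_i T_i^T by an alternating-direction fixed point, with finite differences in both space and time; sizes, the conductivity and the iteration counts are illustrative.

```python
# Minimal PGD for u_t = kappa * u_xx + f on a space-time grid.
import numpy as np

nx, nt, kappa, dt = 100, 200, 1.0, 0.01
h = 1.0 / (nx + 1)                                   # interior nodes only
A = kappa * (2*np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1)) / h**2
Dt = (np.eye(nt) - np.eye(nt, k=-1)) / dt            # backward Euler, u(t=0)=0
x = np.linspace(h, 1.0 - h, nx)
F = np.outer(np.sin(np.pi * x), np.ones(nt))         # separable source f(x,t)

# Seek U ~ sum_i X_i T_i^T solving U Dt^T + A U = F (columns = time slices).
rng = np.random.default_rng(0)
modes, R = [], F.copy()
for _ in range(5):                                   # enrichment sweeps
    X, T = rng.standard_normal(nx), np.ones(nt)
    for _ in range(20):                              # alternating fixed point
        a, b = X @ X, X @ (A @ X)                    # project onto X: time problem
        T = np.linalg.solve(a * Dt + b * np.eye(nt), R.T @ X)
        c, d = T @ (Dt @ T), T @ T                   # project onto T: space problem
        X = np.linalg.solve(c * np.eye(nx) + d * A, R @ T)
    modes.append((X, T))
    R = R - np.outer(X, Dt @ T) - np.outer(A @ X, T) # update residual

U_pgd = sum(np.outer(X, T) for X, T in modes)

# Reference: full-order implicit Euler with the same discretization.
U_ref, u = np.zeros((nx, nt)), np.zeros(nx)
for j in range(nt):
    u = np.linalg.solve(np.eye(nx) / dt + A, F[:, j] + u / dt)
    U_ref[:, j] = u
print("relative error:", np.linalg.norm(U_pgd - U_ref) / np.linalg.norm(U_ref))
```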
Software-driven scientific workflows are often characterized by a complex interplay of various pieces of software executed in a particular order. The output of a computational step may serve as input to a subsequent computation, which requires them to be processed sequentially with a proper mapping of outputs to inputs. Other computations are independent of each other and can be executed in parallel. Thus, one of the main tasks of a workflow tool is a proper and efficient scheduling of the individual processing steps.
Each processing step, just as the workflow itself, typically processes some input and produces output data. Apart from changing the input data to operate on, processing steps can usually be configured by a set of parameters to change their behavior. Moreover, the behavior of a processing step is determined by its source code and/or executable binaries/packages that are called within it. Beyond this, the computation environment not only has a significant influence on its behavior, but is also crucial in order for the processing step to work at all. The environment includes the versions of the interpreters or compilers, as well as all third-party libraries and packages that contribute to the computations carried out in a processing step.
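The scheduling task described above can be illustrated with a small sketch: a topological sort of the processing-step graph that also exposes which steps are independent and may run in parallel. The step names are illustrative placeholders.

```python
# Dependency-aware scheduling of workflow steps with the standard library.
from graphlib import TopologicalSorter  # Python >= 3.9

# step -> set of steps whose outputs it consumes
dag = {
    "preprocess":  set(),
    "simulate":    {"preprocess"},
    "postprocess": {"simulate"},
    "plot":        {"simulate"},        # independent of "postprocess"
}

ts = TopologicalSorter(dag)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())        # these steps may run concurrently
    print("run in parallel:", ready)
    for step in ready:                  # a real tool would dispatch workers here
        ts.done(step)
```

With this graph, "postprocess" and "plot" are reported as ready in the same batch, which is exactly the parallelism a workflow tool exploits.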
Despite the advances in hardware and software techniques, standard numerical methods fail in providing real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additive manufacturing structure, a simplified experiment in the form of a bead-on-plate experiment is performed to get a first insight into the process and to set parameters suitably. In this work, a reduced order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation required multiple times in parameter estimation, control, and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties separating space and time, a key ingredient in PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems, allowing the derivation of an efficient separated representation of the temperature. The results are verified against a standard finite element model showing excellent agreement. The reduced order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for welding experiments using synthetic as well as real measurement data.
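The calibration loop that such a fast model enables can be sketched as follows: a plain random-walk Metropolis sampler estimating a heat-source power from noisy virtual-sensor temperatures. The linear `rom` stand-in, the true value and the noise level are synthetic assumptions for illustration, not the published model.

```python
# Bayesian parameter estimation with a cheap surrogate model.
import numpy as np

rng = np.random.default_rng(1)

def rom(power):
    """Stand-in for the reduced model: temperatures at 5 virtual sensors."""
    return power * np.array([0.9, 0.7, 0.5, 0.3, 0.1])

true_power, sigma = 2.0, 0.05
data = rom(true_power) + sigma * rng.normal(size=5)   # synthetic measurement

def log_post(power):
    if power <= 0.0:
        return -np.inf                                # positivity prior
    r = data - rom(power)
    return -0.5 * np.sum(r**2) / sigma**2             # Gaussian likelihood

samples, p = [], 1.0
lp = log_post(p)
for _ in range(20000):                                # random-walk Metropolis
    q = p + 0.05 * rng.normal()
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    samples.append(p)

post = np.array(samples[5000:])                       # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```

Each posterior sample costs one surrogate evaluation; with a full-order thermal model in place of `rom`, the same loop would be far too slow for online use, which is the point of the reduction.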
Concrete is one of the most important construction materials worldwide and is characterized by an enormous adaptability to changing requirements. This goes along with a high and continuously increasing complexity with respect to raw materials, mix designs and the production process. Consequently, exploiting the technical and environmental potential of concrete construction requires the highest level of expertise from the individual actors in the construction industry.
PGDrome
(2023)
FenicsXConcrete
(2023)
Additive manufacturing (AM) has revolutionized the manufacturing industry, offering a new paradigm to produce complex geometries and parts with customized properties. Among the different AM techniques, the wire arc additive manufacturing (WAAM) process has gained significant attention due to its high deposition rate and low equipment cost. However, the process is characterized by a complex thermal history, making it challenging to simulate in real time for online process control and optimization.
In this context, a reduced order model (ROM) using the proper generalized decomposition (PGD) method [1] is proposed as a powerful tool to overcome the limitations of conventional numerical methods and enable the real-time simulation of the temperature field of WAAM processes. These simulations use a moving heat source leading to a hardly separable parametric problem, which is handled by applying a novel mapping approach [2]. This procedure makes it possible to create a simple separated representation of the model, which allows the simulation of multiple layers.
In this contribution, a PGD model is derived for the temperature field simulation of the WAAM process. A good agreement with a standard finite element method is shown. The reduced model is further used in a stochastic model parameter estimation using Bayesian inference, speeding up calibrations and ultimately leading to a calibrated real-time simulation.
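Why a mapping helps can be checked numerically in a few lines: a moving Gaussian source has a high separated rank in fixed coordinates but becomes rank one in a source-attached frame s = x - v t. The grids, torch speed and source width below are illustrative assumptions, and the simple Galilean shift stands in for the mapping of [2].

```python
# Separated rank of a moving heat source, fixed frame vs. moving frame.
import numpy as np

x = np.linspace(0.0, 10.0, 400)
t = np.linspace(0.0, 5.0, 200)
v, r = 1.5, 0.3                                      # torch speed and radius

X, T = np.meshgrid(x, t, indexing="ij")
q_fixed = np.exp(-((X - v * T) / r) ** 2)            # fixed frame
q_moving = np.exp(-(X / r) ** 2) * np.ones_like(T)   # frame s = x - v*t

def numerical_rank(M, tol=1e-8):
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

print("rank in fixed frame: ", numerical_rank(q_fixed))    # large
print("rank in moving frame:", numerical_rank(q_moving))   # 1
```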
Blast experiments on reinforced concrete structures are often limited to small structures and therefore simple shock waves. Such experiments are carried out at the Bundesanstalt für Materialforschung und -prüfung (BAM) and the structural response is investigated using several measuring methods. Complex load scenarios that occur as a result of reflection of the shock wave in larger structures are harder to realise in practice. Numerical simulations for the propagation of the shock wave and the structural response can therefore be an alternative method for the investigation of blast loads on complex structures.
For the simulation of concrete under impact and blast loads, several local constitutive models exist that are formulated as plasticity models with softening taken into account by introducing a scalar damage field. Local damage models, however, often lead to mesh-dependent results which do not converge with mesh refinement. In order to achieve meaningful predictions from numerical experiments, independence from the mesh is needed.
In this contribution, the JH2 model (Johnson and Holmquist 1994) with a parameter set for concrete is investigated in a simple blast load scenario. The shock wave is implemented as a simplified Friedlander curve and the overpressures are applied as a boundary condition for the structural simulation. In order to account for large displacements that can occur during blast loads, an updated Lagrangian formulation is utilised. A Runge-Kutta method with adaptive time stepping is used to advance the solution in time. The open source FEM software FEniCS (Logg et al. 2012) is used together with an implementation of the JH2 model which has been developed at BAM. An extensive convergence analysis with both time-step and mesh refinement is carried out to show the mesh dependency.
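For reference, a minimal sketch of the modified Friedlander overpressure curve, p(t) = p_max (1 - t/t_d) exp(-b t/t_d), that is applied as the load boundary condition; the peak overpressure, positive-phase duration and decay coefficient below are illustrative values, not the parameters of the experiments.

```python
# Simplified (modified) Friedlander blast overpressure history.
import numpy as np

def friedlander(t, p_max=1.0e5, t_d=2.0e-3, b=1.0):
    """Overpressure in Pa; values turn negative after t_d (suction phase)."""
    p = p_max * (1.0 - t / t_d) * np.exp(-b * t / t_d)
    return np.where(t < 0.0, 0.0, p)     # zero before the wave arrives

t = np.linspace(0.0, 6.0e-3, 300)
p = friedlander(t)   # applied as a time-dependent traction in the FE model
```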
In order to make the results independent of the mesh, possible nonlocal versions of the JH2 model with gradient enhancement are presented. Since many damage models for concrete share the damage mechanism of the JH2 model, the application of the regularisation methods to more complex material models, like the RHT model (Grunwald et al. 2017), is also discussed. The advantages of a gradient-enhanced formulation for simulating the dynamic strength increase of concrete, as suggested in (Häußler-Combe and Kitzig 2009), are discussed as well.
Simulation-based digital twins have emerged as a powerful tool for evaluating the mechanical response of bridges. As virtual representations of physical systems, digital twins can provide a wealth of information that complements traditional inspection and monitoring data. By incorporating virtual sensors and predictive maintenance strategies, they have the potential to improve our understanding of the behavior and performance of bridges over time. However, as bridges age and undergo regular loading and extreme events, their structural characteristics change, often differing from the predictions of their initial design. Digital twins must be continuously adapted to reflect these changes. In this article, we present a Bayesian framework for updating simulation-based digital twins in the context of bridges. Our approach integrates information from measurements to account for inaccuracies in the simulation model and quantify uncertainties. Through its implementation and assessment, this work demonstrates the potential for digital twins to provide a reliable and up-to-date representation of bridge behavior, helping to inform decision-making for maintenance and management.
In this contribution, a methodology for fine scale modeling of large scale structures is proposed, which combines the variational multiscale method [1], domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Based on the work of Buhr and Smetana [2], local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
In the field of computational science and engineering, workflows often entail the application of various software, for instance, for simulation or pre- and postprocessing.
Typically, these components have to be combined in arbitrarily complex workflows to address a specific research question. In order for peer researchers to understand, reproduce and (re)use the findings of a scientific publication, several challenges have to be addressed. For instance, the employed workflow has to be automated and information on all used software must be available for a reproduction of the results. Moreover, the results must be traceable and the workflow documented and readable to allow for external verification and greater trust.
In this paper, existing workflow management systems (WfMSs) are discussed regarding their suitability for describing, reproducing and reusing scientific workflows. To this end, a set of general requirements for WfMSs were deduced from user stories that we deem relevant in the domain of computational science and engineering. On the basis of an exemplary workflow implementation, publicly hosted at GitHub (https://github.com/BAMresearch/NFDI4IngScientificWorkflowRequirements), a selection of different WfMSs is compared with respect to these requirements, to support fellow scientists in identifying the WfMSs that best suit their requirements.
The rapidly increasing importance of FAIR and open data for quality assurance, but also for the reusability of data and the advancement of knowledge, creates an enormous need for action in research and development. In this connection, a wide variety of ambitious activities are currently under way, for example concerning the creation of ontologies and knowledge graphs. The know-how is developing rapidly, and implementation approaches are emerging in parallel in different communities and with different objectives, resulting in rather heterogeneous approaches.
This publication focuses on work that is currently being pursued as an approach as holistic as possible for materials data within the digitalization initiative "Plattform MaterialDigital". The authors work on construction-material-related aspects in the joint project "LeBeDigital - Lebenszyklus von Beton" (life cycle of concrete). The objective is the digital description of the material behaviour of concrete over the complete production process of a precast element, with an integration of data and models within a workflow for probabilistic material and process optimization.
The procedure and the experience gained along the way are reported, not without drawing attention to the often underestimated complexity of the subject.
Finite element (FE) models are widely used to capture the mechanical behavior of structures. Uncertainties in the underlying physics and unknown parameters of such models can heavily impact their performance. Thus, to satisfy high precision and reliability requirements, the performance of such models is often validated using experimental data. In such model updating processes, uncertainties in the incoming measurements should be accounted for as well. In this context, Bayesian methods have been recognized as a powerful tool for addressing different types of uncertainties. Quasi-brittle materials subjected to damage pose a further challenge due to the increased uncertainty and complexity involved in modeling crack propagation effects. In this respect, techniques such as Digital Image Correlation (DIC) can provide full-field displacement measurements that are able to reflect the crack path up to a certain accuracy. In this study, DIC-based full-field measurements are incorporated into a finite element model updating approach to calibrate unknown/uncertain parameters of an ansatz constitutive model. In contrast to the standard FEMU, where measured displacements are compared to the displacements from the FE model response, in the force version of the standard FEMU, termed FEMU-F [1], displacements are applied as Dirichlet constraints. This enables the evaluation of the internal forces, which are then compared to measured external forces, thus quantifying the fulfillment of the momentum balance equation as a metric for the model discrepancy. In the present work, the FEMU-F approach is further equipped with a Bayesian technique that accounts for uncertainties in the measured displacements as well. Via this modification, displacements are treated as unknown variables to be subsequently identified, while they are allowed to deviate from the measured values up to a certain measurement accuracy. To be able to identify many unknown variables, including constitutive parameters and the aforementioned displacements, the variational Bayesian technique proposed in [2] is utilized as an approximate technique. A numerical example of a three-point bending case study is presented first to demonstrate the effectiveness of the proposed approach. The parameters of a gradient-enhanced damage material model [4] are identified using noisy synthetic data, and the effect of measurement noise is studied. The ability of the suggested approach to identify constitutive parameters is then validated using real experimental data from a three-point bending test from [3]. The full-field displacements required as input to the inference setup are extracted through a digital image correlation (DIC) analysis of the provided raw images.
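The deterministic core of FEMU-F can be sketched on a toy problem: measured displacements are imposed as Dirichlet data, the internal force at the loaded node is evaluated, and the stiffness parameter is chosen to minimize the force residual (the Bayesian variant described above additionally treats the displacements themselves as uncertain). All numbers below are synthetic, and the two-spring chain is an illustrative stand-in for the FE model.

```python
# FEMU-F on a chain of two linear springs with a shared stiffness E.
import numpy as np

u_meas = np.array([0.0, 0.6, 1.1])   # "DIC" nodal displacements (fixed end at 0)
f_meas = 1.2                          # measured external force at the free end

def internal_force(E):
    """Internal force at the loaded node for stiffness parameter E."""
    K = E * np.array([[ 1, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]])
    return (K @ u_meas)[-1]           # displacements imposed as Dirichlet data

# Force residual as the model-discrepancy metric; 1D least squares over E.
E_grid = np.linspace(0.5, 5.0, 500)
res = [(internal_force(E) - f_meas) ** 2 for E in E_grid]
E_hat = E_grid[int(np.argmin(res))]
print(f"identified stiffness E = {E_hat:.3f}")   # exact value here: 2.4
```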
Bayesian updating of constitutive laws for Finite Element simulation using full field measurements
(2023)
Developed finite element (FE) models have been recognized as powerful tools for predicting the mechanical behaviour of engineered systems. As a prerequisite, those models need to be improved with respect to various uncertainties; most notably, concerning underlying physics assumptions and unknown parameters. This is very often accomplished by comparing the performance of a model (e.g. the model response) against available data measured from real experiments. In doing so, however, another challenge emerges, which is accounting for the uncertainties of the measured data. Bayesian methods have been widely considered and utilized as a suitable approach for coping with and quantifying the aforementioned uncertainties. Phenomena like damage - in particular in quasi-brittle materials - introduce further uncertainties due to the complexity underlying the crack propagation phenomenon. This implies that the fitting of a numerical model and an associated constitutive law that can adequately describe such effects is non-trivial. The standard approach of finite element model updating (FEMU) is therefore modified to account for tracking of the crack propagation, as recorded during an experiment under increasing loading, via full-field displacement measurements. The latter are fed as Dirichlet constraints to an available finite element model, leading to the evaluation of force residuals, which quantify the accuracy of the model. This approach - known as FEMU-F (the force version of the standard FEMU) [1] - is here further equipped with a Bayesian technique which accounts for the measurement uncertainties in the full-field displacements. This is achieved by penalizing the discrepancy between the measured displacements and the modeled Dirichlet constraints, where the latter are considered as further unknowns. We specifically employ the variational Bayesian technique proposed in [2] as an approximating tool for the estimation of posterior parameters, including displacement variables that are allowed to deviate from the measurements. A Markov chain Monte Carlo (MCMC) method is also used for sampling the posterior distribution of the unknown model parameters. The model updating procedure is first demonstrated through a numerically simulated example of three-point bending, where the parameters of a gradient-enhanced damage material model [4] are identified in accordance with synthetic noisy data (displacements and reaction forces). For the validation, experimental data from a three-point bending test are used, where full-field displacements are collected through a digital image correlation (DIC) analysis (raw data taken from [3]). The data is then used for the parameter identification of a gradient damage constitutive law, which is employed as an ansatz model.