The High-Fidelity Generalized Method of Cells (HFGMC) is one technique, distinct from traditional finite-element approaches, for accurately simulating nonlinear composite material behavior. In this work, the HFGMC global system of equations for doubly periodic repeating unit cells with nonlinear constituents has been reduced in size through the novel application of a Petrov-Galerkin Proper Orthogonal Decomposition order-reduction scheme in order to improve its computational efficiency. Order-reduced models of an E-glass/Nylon 12 composite led to a 4.8–6.3x speedup in the equation assembly/solution runtime while maintaining model accuracy. This corresponded to a 21–38% reduction in total runtime. The significant difference in assembly/solution and total runtimes was attributed to the evaluation of integration point inelastic field quantities; this step was identical between the unreduced and order-reduced models. Nonetheless, order-reduced techniques offer the potential to significantly improve the computational efficiency of multiscale calculations.
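To illustrate the general idea behind such order reduction, the following is a minimal NumPy sketch of a Petrov-Galerkin POD projection of a generic linear system; the snapshot data, the operator, and the choice of test space are placeholders and do not represent the actual HFGMC equations or the scheme used in the paper.

```python
import numpy as np

# Placeholder snapshot matrix (columns = stored full-order solutions)
n_dof, n_modes = 2000, 10
snapshots = np.random.rand(n_dof, 50)

# POD trial basis from the SVD of the snapshots
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :n_modes]                       # trial (reduced) basis

# Placeholder full-order system K u = f
K = np.eye(n_dof) + 1e-3 * np.random.rand(n_dof, n_dof)
f = np.random.rand(n_dof)

# Petrov-Galerkin projection: the test space W differs from the trial space
# (here the least-squares-type choice W = K V serves as an example)
W = K @ V
K_red = W.T @ K @ V                      # n_modes x n_modes reduced operator
f_red = W.T @ f

u_red = np.linalg.solve(K_red, f_red)    # cheap reduced solve
u_approx = V @ u_red                     # lift back to the full-order space
```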
PGDrome
(2023)
One of the most important goals in civil engineering is to guarantee the safety of the construction. Standards prescribe a required failure probability in the order of 10⁻⁴ to 10⁻⁶. Generally, it is not possible to compute the failure probability analytically.
Therefore, many approximation methods have been developed to estimate the failure probability. Nevertheless, these methods still require a large number of evaluations of the investigated structure, usually finite element (FE) simulations, making full probabilistic design studies not feasible for relevant applications. The aim of this paper is to increase the efficiency of structural reliability analysis by means of reduced order models. The developed method paves the way for using full probabilistic approaches in industrial applications. In the proposed PGD reliability analysis, the solution of the structural computation is directly obtained from evaluating the PGD solution for a specific parameter set without computing a full FE simulation. Additionally, an adaptive importance sampling scheme is used to minimize the total number of required samples. The accuracy of the failure probability depends on the accuracy of the PGD model (mainly influenced by the mesh discretization and mode truncation) as well as the number of samples in the sampling algorithm. Therefore, a general iterative PGD reliability procedure is developed to automatically verify the accuracy of the computed failure probability. It is based on a goal-oriented refinement of the PGD model around the adaptively approximated design point. The methodology is applied and evaluated for 1D and 2D examples. The computational savings compared to the method based on a full FE model are shown and the influence of the accuracy of the PGD model on the failure probability is studied.
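As a rough illustration of how a cheap surrogate enables such sampling-based estimates, the sketch below estimates a failure probability by importance sampling around a crudely approximated design point; the limit state function is a stand-in for evaluating a PGD solution, and the adaptive refinement described in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # Stand-in for evaluating a PGD surrogate at the parameter set x;
    # failure is defined as g(x) <= 0 (illustrative linear limit state).
    return 3.0 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

dim, n_samples = 2, 10_000

# Crude design-point approximation from a small pilot run
pilot = rng.standard_normal((1_000, dim))
x_star = pilot[np.argmin(limit_state(pilot))]

# Importance sampling density: standard normal shifted to the design point
samples = rng.standard_normal((n_samples, dim)) + x_star
log_w = -samples @ x_star + 0.5 * x_star @ x_star   # weight f(x)/h(x) for shifted Gaussians
p_f = np.mean((limit_state(samples) <= 0) * np.exp(log_w))
print(f"estimated failure probability: {p_f:.2e}")
```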
One of the main challenges regarding our civil infrastructure is its efficient operation over the complete design lifetime while complying with standards and safety regulations. Thus, costs for maintenance or replacements must be optimized while still ensuring specified safety levels. This requires an accurate estimate of the current state as well as a prognosis for the remaining useful life. Currently, this is often done by regular manual or visual inspections at constant intervals. However, the critical sections are often not directly accessible or cannot be instrumented at all. Model-based approaches can be used where a digital twin of the structure is set up. For these approaches, a key challenge is the calibration and validation of the numerical model based on uncertain measurement data. The aim of this contribution is to increase the efficiency of model updating by using the advantage of model reduction (Proper Generalized Decomposition, PGD) and applying the derived method for efficient model identification of a random stiffness field of a real bridge.
Numerical models built as virtual twins of a real structure (digital twins) are considered the future of monitoring systems. Their setup requires the estimation of unknown parameters, which are not directly measurable. Stochastic model identification is then essential, which can be computationally costly and even unfeasible when it comes to real applications. Efficient surrogate models, such as reduced-order methods, can be used to overcome this limitation and provide real-time model identification. Since their numerical accuracy influences the identification process, the optimal surrogate not only has to be computationally efficient, but also accurate with respect to the identified parameters. This work aims at automatically controlling the Proper Generalized Decomposition (PGD) surrogate's numerical accuracy for parameter identification. For this purpose, a sequence of Bayesian model identification problems, in which the surrogate's accuracy is iteratively increased, is solved with a variational Bayesian inference procedure. The effect of the numerical accuracy on the resulting posterior probability density functions is analyzed through two metrics, the Bayes Factor (BF) and a criterion based on the Kullback-Leibler (KL) divergence. The approach is demonstrated by a simple test example and by two structural problems. The latter aims to identify spatially distributed damage, modeled with a PGD surrogate extended for log-normal random fields, in two different structures: a truss with synthetic data and a small, reinforced bridge with real measurement data. For all examples, the evolution of the KL-based and BF criteria for increased accuracy is shown and their convergence indicates when model refinement no longer affects the identification results.
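The KL-based stopping criterion can be made concrete with a small sketch. Assuming the variational posteriors are approximately multivariate Gaussians, the divergence between the posteriors obtained with two successive surrogate refinement levels is available in closed form; all numbers below are illustrative.

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    d = len(mu0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Posterior approximations from two surrogate accuracy levels (illustrative values)
mu_coarse, cov_coarse = np.array([1.00, 0.50]), np.diag([0.040, 0.090])
mu_fine,   cov_fine   = np.array([1.02, 0.48]), np.diag([0.035, 0.085])

tol = 1e-2
kl = kl_gauss(mu_fine, cov_fine, mu_coarse, cov_coarse)
print("refine surrogate further" if kl > tol else "posterior converged", f"KL = {kl:.3e}")
```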
Using digital twins for decision making is a very promising concept which combines simulation models with corresponding experimental sensor data in order to support maintenance decisions or to investigate the reliability. The quality of the prognosis strongly depends on both the data quality and the quality of the digital twin. The latter comprises both the modeling assumptions as well as the correct parameters of these models. This article discusses the challenges when applying this concept to real measurement data for a demonstrator bridge in the lab, including the data management, the iterative development of the simulation model as well as the identification/updating procedure using Bayesian inference with a potentially large number of parameters. The investigated scenarios include both the iterative identification of the structural model parameters as well as scenarios related to a damage identification. In addition, the article aims at providing all models and data in a reproducible way such that other researchers can use this setup to validate their methodologies.
Despite the advances in hardware and software techniques, standard numerical methods fail in providing real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additive manufacturing structure, a simplified experiment in the form of a bead-on-plate experiment is performed to get a first insight into the process and to set parameters suitably. In this work, a reduced order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation required multiple times in parameter estimation, control, and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties separating space and time, a key ingredient in PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems allowing the derivation of an efficient separated representation of the temperature.
The results are verified against a standard finite element model showing excellent agreement. The reduced order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for welding experiments using synthetic as well as real measurement data.
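To convey why each online evaluation of such a reduced model is cheap, the following sketch evaluates a separated (PGD-style) representation of the temperature with an extra parameter coordinate; the modes and the parameter are placeholders, not the actual quantities computed in the paper.

```python
import numpy as np

# Hypothetical stored PGD modes: T(x, t, q) ~ sum_i F_i(x) * G_i(t) * H_i(q),
# where q could be a process parameter such as the heat source power.
n_x, n_t, n_q, n_modes = 500, 200, 30, 8
F = np.random.rand(n_x, n_modes)            # spatial modes (placeholders)
G = np.random.rand(n_t, n_modes)            # temporal modes (placeholders)
H = np.random.rand(n_q, n_modes)            # parameter modes (placeholders)
q_grid = np.linspace(500.0, 1500.0, n_q)    # parameter grid used for interpolation

def evaluate(q):
    """Online evaluation for one parameter value: interpolation plus sums of products,
    no transient finite element solve is required."""
    h = np.array([np.interp(q, q_grid, H[:, i]) for i in range(n_modes)])
    return (F * h) @ G.T                    # space-time temperature field, shape (n_x, n_t)

T = evaluate(900.0)                         # cheap compared to a full transient FE solve
```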
Thermal transient problems, essential for modeling applications like welding and additive metal manufacturing, are characterized by a dynamic evolution of temperature. Accurately simulating these phenomena is often computationally expensive, thus limiting their applications, for example for model parameter estimation or online process control. Model order reduction, a solution to preserve the accuracy while reducing the computation time, is explored. This article addresses challenges in developing reduced order models using the proper generalized decomposition (PGD) for transient thermal problems with a specific treatment of the moving heat source within the reduced model. Factors affecting accuracy, convergence, and computational cost, such as discretization methods (finite element and finite difference), a dimensionless formulation, the size of the heat source, and the inclusion of material parameters as additional PGD variables are examined across progressively complex examples. The results demonstrate the influence of these factors on the PGD model's performance and emphasize the importance of their consideration when implementing such models. For the thermal example, it is demonstrated that a PGD model with a finite difference discretization in time, a dimensionless representation, a mapping for the moving heat source, and a non-separated spatial domain yields the best approximation to the full order model.
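One common way to handle a moving heat source, which may or may not coincide with the mapping used in the article, is a change of variables into a frame travelling with the torch; the source then becomes stationary in the computational coordinates at the price of a convective term:

```latex
\rho c_p \left( \frac{\partial T}{\partial t} - v \, \frac{\partial T}{\partial \xi} \right)
  = k \, \nabla^2 T + q(\xi, y, z),
\qquad \xi = x - v t ,
```

where v is the travel speed along x and q is the now stationary heat input, e.g. a Gaussian or Goldak-type source.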
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2023)
In this paper, a methodology for fine scale modeling of large scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
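The training step of such localized reduced spaces can be caricatured with a toy linear algebra sketch: a random symmetric positive definite matrix stands in for the oversampling problem, random boundary data stand in for the (here uncorrelated) training samples, and a POD of the resulting interior responses yields the local basis. The correlated sampling informed by a global reduced problem, which is the actual contribution of the paper, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy oversampling problem: SPD "stiffness" matrix with boundary (first n_b)
# and interior (remaining n_i) degrees of freedom.
n_b, n_i = 40, 160
A = rng.random((n_b + n_i, n_b + n_i))
K = A @ A.T + (n_b + n_i) * np.eye(n_b + n_i)
K_ib, K_ii = K[n_b:, :n_b], K[n_b:, n_b:]

# Extensions of random boundary data act as training snapshots on the subdomain
n_train = 50
g = rng.standard_normal((n_b, n_train))          # random (uncorrelated) boundary conditions
snapshots = np.linalg.solve(K_ii, -K_ib @ g)     # interior responses, shape (n_i, n_train)

# POD of the snapshots gives the local reduced space
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 1.0 - 1e-6)) + 1
local_basis = U[:, :n_modes]
print(f"kept {n_modes} of {n_train} snapshots as local basis vectors")
```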
Structural build-up describes the stability and early-age strength development of fresh mortar used in 3D printing. It is influenced by several factors, i.e. the composition of the printable material, the printing regime, and the ambient conditions. The existing modelling approaches for structural build-up usually define the model parameters for a specific material composition without considering the influence of the ambient conditions. The goal of this contribution is to explicitly include the temperature dependency in the modelling approach. Temperature changes have a significant impact on the structural build-up process: an increase of the temperature leads to a faster dissolution of cement phases and accelerates hydration. The proposed extended model includes temperature dependency using the Arrhenius theory. The new model parameters are successfully calibrated based on Viskomat measurement data using Bayesian inference. Furthermore, a higher impact of the temperature in the re-flocculation stage than in the structuration stage is observed.
Despite the advances in hardware and software techniques, standard numerical methods fail in providing real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additive manufacturing structure, a simplified experiment in the form of a bead-on-plate experiment is performed to get a first insight into the process and to set parameters suitably. In this work, a reduced order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation required multiple times in parameter estimation, control and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties separating space and time, a key ingredient in PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems allowing the derivation of an efficient separated representation of the temperature. The results are verified against a standard finite element model showing excellent agreement. The reduced order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for welding experiments using synthetic as well as real measurement data.
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2024)
In this paper, a methodology for fine scale modeling of large scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to the uncorrelated sampling.
The key point of structural reliability analysis is the estimation of the failure probability (Pf), typically a rare event. This probability is defined as the integral over the failure domain, which is given by a limit state function. Usually, this function is only implicitly given by an underlying finite element simulation of the structure. It is generally not possible to solve the integral for Pf analytically. For that reason, simulation-based methods as well as methods based on surrogate modeling (or response surface methods) have been developed. Nevertheless, these variance reducing methods still require a few thousand calculations of the underlying finite element model, making reliability analysis computationally expensive for real applications.
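For reference, the quantity in question can be written as an integral of an indicator function over the input distribution, together with its crude Monte Carlo estimator:

```latex
P_f = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x}) \,\mathrm{d}\mathbf{x}
    = \int \mathbb{1}\!\left[ g(\mathbf{x}) \le 0 \right] f_{\mathbf{X}}(\mathbf{x}) \,\mathrm{d}\mathbf{x},
\qquad
\hat{P}_f^{\mathrm{MC}} = \frac{1}{N} \sum_{k=1}^{N} \mathbb{1}\!\left[ g(\mathbf{x}_k) \le 0 \right].
```

Since the coefficient of variation of the crude estimator is roughly the square root of (1 − Pf)/(N·Pf), a failure probability of 10⁻⁶ would require on the order of 10⁸ model evaluations for a 10 % coefficient of variation, which is why variance reduction and surrogate models are needed.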
The efficiency of structural model updating and the subsequent reliability analysis is increased by using the advantages of reduced order models. Coupling a reduced model of the structure of interest with a Bayesian model updating approach or a reliability analysis to estimate the failure probability reduces the computational cost of such complex analyses drastically.
One of the most important goals in civil engineering is to guarantee the safety of constructions. National standards prescribe a required failure probability in the order of 10⁻⁶ (e.g. DIN EN 199:2010-12). The estimation of these failure probabilities is the key point of structural reliability analysis. Generally, it is not possible to compute the failure probability analytically.
Therefore, simulation-based methods as well as methods based on surrogate modeling or response surface methods have been developed. Nevertheless, these methods still require a few thousand evaluations of the structure, usually with finite element (FE) simulations, making reliability analysis computationally expensive for relevant applications.
The aim of this contribution is to increase the efficiency of structural reliability analysis by using the advantages of model reduction techniques. Model reduction is a popular concept to decrease the computational effort of complex numerical simulations while maintaining a reasonable accuracy. Coupling a reduced model with an efficient variance reducing sampling algorithm significantly reduces the computational cost of the reliability analysis without a relevant loss of accuracy.
The key point of structural reliability analysis is the estimation of the failure probability. This probability is defined as the integral over the failure domain, which is given by a limit state function. Usually, this function is only implicitly given by an underlying finite element simulation of the structure. It is generally not possible to solve the integral analytically. For that reason, numerical methods based on sampling and surrogates have been developed. Nevertheless, these sampling methods still require a few thousand calculations of the underlying finite element model, making reliability analysis computationally expensive for relevant applications.
Coupling a reduced order model (proper generalized decomposition) with an efficient variance reducing sampling algorithm can reduce the computational cost of reliability analysis drastically. In the proposed method, an importance sampling technique is coupled with a reduced structural model by means of PGD to estimate the failure probability. Instead of calculating the design point e.g. with optimization algorithms, the design point is adaptively estimated by using the idea of subset simulation. The failure probability is estimated in an iterative scheme based on adaptively computing the design point and refining the PGD model.
The main challenge in using numerical models as digital twins in real applications is the calibration and validation of the model based on uncertain measurement data. Therefore, model updating approaches, which are inverse optimization processes, are applied. This requires a large number of computations of the same numerical model with slightly different model parameters. For that reason, model updating becomes computationally very expensive for real applications.
Model reduction, e.g. the proper generalized decomposition method, is a popular concept to decrease the computational effort of complex numerical simulations. Therefore, a reduced model of the structure of interest is derived and used as a surrogate model in a variational Bayesian procedure to create a very efficient digital twin of the structure.
An efficient model updating approach by means of a PGD reduced model with a random field for the material stiffness parameters is shown. The random field allows the model to be calibrated with parameters that vary in space. These variations can be caused by local damage as well as by the production process. As an exemplary application, a demonstrator bridge is used. Digital twins can reduce the costs for maintenance and inspections, especially for costly civil infrastructure with high performance requirements over the whole lifetime. Currently, the state of the structure is determined by regular manual and visual inspections at constant intervals. However, the critical sections are often not directly accessible or cannot be instrumented at all. In this case, model-based approaches where a digital twin is set up can improve the process. Based on this digital twin, a prognosis of the future performance of the structure, e.g. the failure probability, can be computed.
The influences of the reduction degree, the mesh discretization as well as the correlation length in the PGD Bayesian approach are studied by means of the digital twin of a simple pre-stressed concrete two-span bridge.
Interlaboratory studies are common tools for collecting comparable data to implement standards for new materials or testing technologies. In the case of construction materials, these studies form the basis for recommendations and design codes. Depending on the study, the amount of data collected can be enormous, making manual handling and evaluation difficult. On the other hand, the importance of the FAIR (findable, accessible, interoperable, and reusable) principles for scientific data management, published by Wilkinson et al. in 2016, is constantly growing and changing the view on data usage.
The benefits of using data management tools such as data stores/repositories or electronic laboratory notebooks are many. Data is stored in a structured and accessible way (at least within a group) and data loss due to staff turnover is reduced. Tools usually support data publishing and analysis interfaces. In this way, data can be reused years later to generate new knowledge with future insights. On the other hand, there are many challenges in setting up a data repository, such as selecting suitable software tools, defining the data structure, enabling data access and understanding by others, and ensuring maintenance, among others.
This talk discusses the advantages and challenges of setting up and applying a data repository using the interlaboratory study on the mechanical properties of printed concrete structures carried out in RILEM TC 304-ADC as an example. First, the definition of a suitable data structure including all information is discussed. The tool-dependent upload process is then described. Here, the data
management system openBIS (open source software developed by ETH Zurich) is used. Since in most cases an open compute platform allowing access from different organisations is not possible or available due to data protection and maintenance issues, tool-independent export options are discussed and compared. Finally, the different query and analysis possibilities are demonstrated.
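As a small illustration of what tool-independent reuse of such an export can look like, the snippet below loads a hypothetical CSV export of the interlaboratory results and compares labs; the file name and column names are invented for this sketch and are not the actual export format of the study.

```python
import pandas as pd

# Hypothetical export of the interlaboratory study, one row per tested specimen
df = pd.read_csv("ring_test_export.csv")
summary = (df.groupby(["lab", "print_direction"])["compressive_strength_MPa"]
             .agg(["count", "mean", "std"]))
print(summary)
```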
FenicsXConcrete
(2023)
For extrusion-based 3D concrete printing, the early age mechanical behavior is influenced by various time dependent phenomena: structural build-up, plasticity as well as viscosity. The structural build-up governs the stability and early-age strength development of the fresh printable cementitious materials and thereby influences the printability, buildability, and open time of the printing process. Generally, it is influenced by a number of factors, i.e. the composition of the printable material, the printing regime, and the ambient conditions (temperature, humidity, etc.). There are several approaches to model the structural build-up of cementitious materials. All models are based on a time-dependent internal structural parameter describing the flocculation state, which is assumed to be zero after mixing and increases with time. The approaches differ in the definition of the time dependency (linear, exponential, bi-linear). Usually, the parameters are defined for a specific material composition without considering the influence of ambient conditions.
In this contribution, the bi-linear structural build-up model [Kruger et al., Construction and Building Materials 224, 2019] is extended by the temperature influence. Temperature changes will occur in real-life printing processes due to changing ambient conditions (summer, winter, day, night) as well as the printing process itself (pressure changes etc.) and have a significant impact on the structural build-up process: an increase of the temperature leads to a faster dissolution of cement phases, accelerates hydration and boosts the Brownian motion. For that reason, the model parameters are modeled as temperature dependent using an Arrhenius function. Furthermore, the proposed extended model is calibrated based on measurement data using Bayesian inference. A very good agreement of the predicted model data with the measured control data was reached. Additionally, the structural build-up model is integrated into a viscoelastic and elastoplastic mechanical model, simulating the whole mechanical behavior during layer deposition.
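The combination of a bi-linear build-up law with Arrhenius-type temperature scaling could look roughly like the sketch below. The functional form follows the commonly cited bi-linear model (re-flocculation rate Rthix up to a re-flocculation time, structuration rate Athix afterwards); all parameter values and activation energies are purely illustrative and are not the calibrated values from this contribution.

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_scale(value_ref, e_a, temp, temp_ref=293.15):
    """Scale a reference rate parameter to temperature temp (K) via an Arrhenius factor."""
    return value_ref * np.exp(-e_a / R_GAS * (1.0 / temp - 1.0 / temp_ref))

def static_yield_stress(t, tau0, r_thix, a_thix, t_rf, temp, e_a_r=30e3, e_a_a=30e3):
    """Bi-linear structural build-up with temperature-dependent rates (illustrative)."""
    r_t = arrhenius_scale(r_thix, e_a_r, temp)     # re-flocculation rate at temp
    a_t = arrhenius_scale(a_thix, e_a_a, temp)     # structuration rate at temp
    t = np.asarray(t, dtype=float)
    return np.where(t <= t_rf,
                    tau0 + r_t * t,
                    tau0 + r_t * t_rf + a_t * (t - t_rf))

# Example: build-up over one hour at 30 degrees Celsius (all values invented)
tau = static_yield_stress(np.linspace(0.0, 3600.0, 7),
                          tau0=200.0, r_thix=1.5, a_thix=0.3, t_rf=300.0, temp=303.15)
print(tau)
```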
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the main principles of FAIR data is required in many research projects. The definition of data being FAIR is very general. When implementing it for a specific application or project, or even setting a standardized procedure within a working group, a company or a research community, many challenges arise. In this contribution, an overview of our experience with different methods and tools is outlined.
We begin with a motivation of potential use cases for the application of FAIR data with increasing complexity, starting from a reproducible research paper, through collaborative projects with multiple participants such as Round-Robin tests, up to data-based models within standardization codes, applications in machine learning or parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data (including metadata schema) are discussed. The first one is the openBIS system, which is an open-source lab notebook and PostgreSQL based data management system. A second option is a semantic representation using RDF based on ontologies for the domain of interest.
In a third section, requirements for workflow tools to automate data processing are discussed and their integration into reproducible data analysis is presented with an outlook on required information to be stored as metadata in the database.
Finally, the presented procedures are exemplarily demonstrated for the calibration of a temperature dependent constitutive model for additively manufactured mortar. A metadata schema for a rheological measurement setup is derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behavior, the automatic workflow to use the stored data for model parameter estimation is demonstrated.
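To make the idea of such a metadata schema tangible, the following is a hypothetical record for one rheological measurement; the field names and values are invented for illustration and do not correspond to the actual openBIS object types used in the project. A record of this shape could equally be mapped onto an RDF representation.

```python
# Hypothetical metadata record for one rheological (e.g. Viskomat/SAOS) measurement
measurement_metadata = {
    "object_type": "RHEOLOGY_MEASUREMENT",
    "properties": {
        "material_mix_id": "MIX-042",            # link to the mortar composition record
        "device": "Viskomat NT",
        "protocol": "SAOS",                       # small amplitude oscillatory shear
        "ambient_temperature_C": 30.0,
        "relative_humidity_percent": 65.0,
        "resting_time_min": 5.0,
        "operator": "ORCID:0000-0000-0000-0000",
        "date": "2023-05-10",
    },
    "dataset": {
        "file": "saos_mix042_30C.csv",            # raw moduli vs. time data
        "columns": ["time_s", "storage_modulus_Pa", "loss_modulus_Pa"],
    },
}
```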
There is a growing interest in using numerical models for efficient structural monitoring and for ensuring the structure's safety. Setting up virtual models as twins of real structures requires a model identification process calculating the unknown model parameters, which are mostly only indirectly measurable. This is a computationally very costly inverse optimization process, which often makes it unfeasible for real applications. Efficient surrogate models such as reduced order models can be used to overcome this limitation. However, the influence of the model accuracy on the identification process then has to be considered. The aim is to automatically control the influence of the model's accuracy on the identification. Here, a variational Bayesian inference approach [3] is coupled with a reduced forward model using the Proper Generalized Decomposition (PGD) method. The influence of the model accuracy on the inference result is studied and measured. Therefore, besides the commonly used Bayes factor, the Kullback-Leibler divergences between the predicted posterior pdfs are proposed. In an adaptive inference procedure, the surrogate's accuracy is iteratively increased, and the convergence of the posterior pdf is analysed. The proposed adaptive identification process is applied to the identification of spatially distributed damage, modeled by a random field, for a simple example with synthetic data as well as a small, reinforced bridge with real measurement data. It is shown that the proposed criteria can mirror the influence of the model accuracy and can be used to automatically select a sufficiently accurate surrogate model.
With increasing focus on industrialized processing, investigating, understanding, and modelling the structural build-up of cementitious materials becomes more important. The structural build-up governs the key property of fresh printable materials -- buildability -- and it influences the mechanical properties after the deposition. The structural build-up rate can be adjusted by optimization of the mixture composition and the use of concrete admixtures. Additionally, it is known that the environmental conditions, i.e. humidity and temperature, have a significant impact on the kinetics of cement hydration and the resulting hardened properties, such as shrinkage, cracking resistance etc. In this study, small amplitude oscillatory shear (SAOS) tests are applied to examine the structural build-up rate of cement paste subject to different temperatures under controlled humidity. The results indicate significant influences of the ambient temperature on the intensity of the re-flocculation rate (Rthix), while the structuration rate (Athix) is almost not affected. A bi-linear thixotropy model extended by temperature dependent parameters coupled with a linear viscoelastic material model is proposed to simulate the mechanical behaviour considering the structural build-up during the SAOS test.
With increasing focus on industrialized processing, investigating, understanding, and modelling the structural build-up of cementitious materials becomes more important. The structural build-up governs the key property of fresh printable materials -- buildability -- and it influences the mechanical properties after the deposition. The structural build-up rate can be adjusted by optimization of the mixture composition and the use of concrete admixtures. Additionally, it is known that the environmental conditions, i.e. humidity and temperature, have a significant impact on the kinetics of cement hydration and the resulting hardened properties, such as shrinkage, cracking resistance etc. In this study, small amplitude oscillatory shear (SAOS) tests are applied to examine the structural build-up rate of cement paste subject to different temperatures under controlled humidity. The results indicate significant influences of the ambient temperature on the intensity of the re-flocculation rate (Rthix), while the structuration rate (Athix) is almost not affected. A bi-linear thixotropy model extended by temperature dependent parameters coupled with a linear viscoelastic material model is proposed to simulate the mechanical behaviour considering the structural build-up during the SAOS test.
The main challenge in using numerical models as digital twins in real applications for prognosis purposes, such as reliability analysis, is the calibration and validation of the models based on uncertain measurement data. Uncertainties are not limited to the measurement data, but the numerical model itself will not be perfect due to the modelling assumptions.
In this contribution, a probabilistic inference method for model calibration, based on Bayes' theorem, is used to address that issue. Such inference approaches include uncertainties in the data as well as in the model parameters, allowing the computation of a posterior distribution for the model parameters as well as of a noise term reflecting the measured data. However, such probabilistic inference methods require many evaluations of the numerical forward model for different model parameters. An improvement of the efficiency is obtained by replacing the forward model with a reduced model. Model reduction, e.g. the proper generalized decomposition (PGD) method, is a popular concept to decrease the computational effort, where each evaluation of the reduced forward model is a much less costly pure function evaluation.
The heterogeneous spatial distribution of material parameters in the forward model is described by a lognormal random field. This allows a spatially varying stiffness to be identified by inferring the random field variables from the given measurement data. Such variations can, for example, be caused by damage. The lognormal field is approximated as a series expansion for the PGD problem.
The derived efficient model identification procedure is demonstrated using a real reinforced, prestressed demonstrator bridge and stereophotogrammetry measurement data. A digital twin for that demonstrator bridge is built using a set of measurement data and verified by testing against additional measurement data. The PGD model error with respect to the FEM model is discussed based on an importance sampling analysis computing the Bayes Factor.
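For the lognormal random field mentioned above, a generic truncated series expansion of the underlying Gaussian field can be sketched as follows; the covariance model, correlation length and truncation are illustrative and are not necessarily those used in the contribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D coordinate along the structure and a hypothetical exponential covariance
# for the Gaussian log-stiffness field
x = np.linspace(0.0, 25.0, 200)                 # position along the bridge axis [m]
ell, sigma, mu = 5.0, 0.2, np.log(30e9)         # correlation length, std dev, mean of log E

C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1][:10]             # keep the 10 dominant modes

def stiffness_field(theta):
    """Realisation of E(x) for a vector of standard normal random variables theta."""
    g = mu + eigvec[:, idx] @ (np.sqrt(eigval[idx]) * theta)
    return np.exp(g)                            # lognormal stiffness field

E = stiffness_field(rng.standard_normal(10))
```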
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the main principles of FAIR data is required in many research projects. The definition of data being FAIR is very general, and when implementing it for a specific application or project, or even setting a standardized procedure within a working group, a company or a research community, many challenges arise. In this contribution, an overview of our experience with different methods, tools and procedures is outlined.
We begin with a motivation of potential use cases for the application of FAIR data with increasing complexity, starting from a reproducible research paper, through collaborative projects with multiple participants such as Round-Robin tests, up to data-based models within standardization codes, applications in machine learning or parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data are discussed. On the one hand, this includes a discussion on how to define actual data structures and in particular metadata schemas, and on the other hand, two different systems for storing the data are discussed. The first one is the openBIS system, which is an open-source lab notebook and PostgreSQL-based data management system. A second option is a semantic representation using RDF based on ontologies for the domain of interest.
In a third section, requirements for workflow tools to automate data processing are discussed and their integration into reproducible data analysis is presented with an outlook on required information to be stored as metadata in the database.
Finally, the presented procedures are exemplarily demonstrated for the calibration of a temperature dependent constitutive model for additively manufactured mortar. Metadata schemata for a rheological measurement setup are derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behaviour, the automatic workflow to use the stored data for model parameter estimation is demonstrated.