Simulation-based digital twins have emerged as a powerful tool for evaluating the mechanical response of bridges. As virtual representations of physical systems, digital twins can provide a wealth of information that complements traditional inspection and monitoring data. By incorporating virtual sensors and predictive maintenance strategies, they have the potential to improve our understanding of the behavior and performance of bridges over time. However, as bridges age and undergo regular loading and extreme events, their structural characteristics change, often differing from the predictions of their initial design. Digital twins must therefore be continuously adapted to reflect these changes. In this article, we present a Bayesian framework for updating simulation-based digital twins in the context of bridges. Our approach integrates information from measurements to account for inaccuracies in the simulation model and to quantify uncertainties. Through its implementation and assessment, this work demonstrates the potential of digital twins to provide a reliable and up-to-date representation of bridge behavior, helping to inform decision-making for maintenance and management.
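To make the updating step concrete, the following is a minimal generic sketch of Bayes' rule for model parameters; the symbols (parameters theta, measurements d) are our notation and are not taken from the article itself.

```latex
% Bayesian updating of model parameters \theta given measurements d:
% the posterior is the prior reweighted by the measurement likelihood.
\pi(\theta \mid d)
  = \frac{\pi(d \mid \theta)\,\pi(\theta)}
         {\int \pi(d \mid \theta')\,\pi(\theta')\,\mathrm{d}\theta'}
```

In frameworks of this kind, the likelihood typically compares simulated sensor outputs with measured data under an assumed noise model, so that both model inaccuracies and measurement scatter enter the updated digital twin.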
Data provenance - from experimental data to trustworthy simulation models and standards
Jörg F. Unger, Annika Robens-Radermacher, Erik Tamsen
Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the FAIR data principles is required in many research projects. The definition of FAIR data is very general, and many challenges arise when implementing it for a specific application or project, or even when setting up a standardized procedure within a working group, a company or a research community. In this contribution, we give an overview of our experience with different methods, tools and procedures. We begin by motivating potential use cases for FAIR data of increasing complexity, starting from a reproducible research paper, over collaborative projects with multiple participants such as round-robin tests, up to data-based models within standardization codes, applications in machine learning, or parameter estimation of physics-based simulation models. In a second part, different options for structuring the data are discussed. On the one hand, this includes a discussion of how to define actual data structures and in particular metadata schemata; on the other hand, two different systems for storing the data are discussed. The first is openBIS, an open-source lab-notebook and PostgreSQL-based data management system. A second option is a semantic representation using RDF-based ontologies for the domain of interest. In a third section, requirements for workflow tools to automate data processing are discussed, and their integration into reproducible data analysis is presented, with an outlook on the information that needs to be stored as metadata in the database.
In the field of computational science and engineering, workflows often entail the application of various pieces of software, for instance for simulation or for pre- and postprocessing.
Typically, these components have to be combined in arbitrarily complex workflows to address a specific research question. In order for peer researchers to understand, reproduce and (re)use the findings of a scientific publication, several challenges have to be addressed. For instance, the employed workflow has to be automated and information on all used software must be available for a reproduction of the results. Moreover, the results must be traceable and the workflow documented and readable to allow for external verification and greater trust.
In this paper, existing workflow management systems (WfMSs) are discussed regarding their suitability for describing, reproducing and reusing scientific workflows. To this end, a set of general requirements for WfMSs was deduced from user stories that we deem relevant in the domain of computational science and engineering. On the basis of an exemplary workflow implementation, publicly hosted at GitHub (https://github.com/BAMresearch/NFDI4IngScientificWorkflowRequirements), a selection of different WfMSs is compared with respect to these requirements, to support fellow scientists in identifying the WfMSs that best suit their needs.
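As a rough illustration of the requirements named above (automation, recorded software versions, traceability), the following is a hypothetical sketch only; the exemplary workflow in the cited GitHub repository is defined with actual workflow management systems, and the step scripts named here are placeholders.

```python
"""Minimal sketch of an automated multi-step workflow with recorded provenance."""
import json
import platform
import subprocess
import sys

def run_step(name, cmd):
    # Execute one processing step and fail loudly if it does not succeed,
    # so that downstream steps never consume missing or stale outputs.
    print(f"[{name}] {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

# Record the software environment so the run can later be traced and reproduced.
provenance = {"python": sys.version, "platform": platform.platform()}
with open("provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)

# Steps run in dependency order: each step's output feeds the next one.
run_step("preprocess", [sys.executable, "preprocess.py"])    # hypothetical script
run_step("simulate", [sys.executable, "simulate.py"])        # hypothetical script
run_step("postprocess", [sys.executable, "postprocess.py"])  # hypothetical script
```

A dedicated WfMS replaces such a hand-written driver with declarative step definitions, caching, and automatic scheduling, which is precisely what the paper's comparison evaluates.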
The rapidly growing importance of FAIR and open data for quality assurance, but also for the reusability of data and for scientific progress, creates an enormous need for action in research and development. In connection with this, a wide range of ambitious activities is currently under way, for example concerning the creation of ontologies and knowledge graphs. The know-how is evolving rapidly, and implementation approaches are emerging in parallel in different disciplines and with different objectives, so that rather heterogeneous methodologies result.
This publication focuses on work that is currently being advanced as an approach for material data, as holistic as possible, within the digitalization initiative "Plattform MaterialDigital". The authors work on building-material-related aspects in the joint project "LeBeDigital - Lebenszyklus von Beton" (life cycle of concrete). The objective is the digital description of the material behavior of concrete across the complete production process of a precast element, with an integration of data and models within a workflow for probabilistic material and process optimization.
We report on the approach taken and the experience gained, not without drawing attention to the often underestimated complexity of the topic.
Using digital twins for decision making is a very promising concept which combines simulation models with corresponding experimental sensor data in order to support maintenance decisions or to investigate reliability. The quality of the prognosis strongly depends on both the data quality and the quality of the digital twin. The latter comprises both the modeling assumptions and the correct parameters of these models. This article discusses the challenges of applying this concept to real measurement data for a demonstrator bridge in the lab, including the data management, the iterative development of the simulation model, and the identification/updating procedure using Bayesian inference with a potentially large number of parameters. The investigated scenarios include both the iterative identification of the structural model parameters and scenarios related to damage identification. In addition, the article aims at providing all models and data in a reproducible way such that other researchers can use this setup to validate their methodologies.
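The identification/updating procedure mentioned above can be illustrated with a toy sampler. The sketch below is a stand-in only: a one-parameter forward model replaces the article's finite element model, and the names, noise level and prior are our assumptions, not the article's setup.

```python
"""Sketch of Bayesian model updating with a random-walk Metropolis sampler."""
import numpy as np

rng = np.random.default_rng(0)

def deflection(stiffness, loads):
    # Hypothetical forward model: deflection inversely proportional to stiffness.
    return loads / stiffness

# Synthetic "sensor" data generated with a ground-truth stiffness of 2.0.
loads = np.linspace(1.0, 10.0, 20)
data = deflection(2.0, loads) + rng.normal(0.0, 0.05, loads.size)

def log_posterior(stiffness, sigma=0.05):
    if stiffness <= 0.0:
        return -np.inf  # prior support: stiffness must be positive
    residual = data - deflection(stiffness, loads)
    return -0.5 * np.sum((residual / sigma) ** 2)  # flat prior on (0, inf)

# Random-walk Metropolis: propose, accept with probability min(1, posterior ratio).
samples, current = [], 1.0
for _ in range(5000):
    proposal = current + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

print(f"posterior mean stiffness ~ {np.mean(samples[1000:]):.3f}")
```

With many parameters, as in the bridge demonstrator, such plain random-walk sampling becomes inefficient, which is one reason the identification procedure itself is a subject of the article.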
FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that following the FAIR data principles is required in many research projects. The definition of FAIR data is very general, and many challenges arise when implementing it for a specific application or project, or even when setting up a standardized procedure within a working group, a company or a research community. In this contribution, we give an overview of our experience with different methods, tools and procedures.
We begin by motivating potential use cases for FAIR data of increasing complexity, starting from a reproducible research paper, over collaborative projects with multiple participants such as round-robin tests, up to data-based models within standardization codes, applications in machine learning, or parameter estimation of physics-based simulation models.
In a second part, different options for structuring the data are discussed. On the one hand, this includes a discussion of how to define actual data structures and in particular metadata schemata; on the other hand, two different systems for storing the data are discussed. The first is openBIS, an open-source lab-notebook and PostgreSQL-based data management system. A second option is a semantic representation using RDF-based ontologies for the domain of interest.
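To illustrate the second option, the following sketch builds a few RDF triples with the rdflib Python library; the namespace and the terms (ConcreteSpecimen, compressiveStrength) are hypothetical placeholders, not the domain ontology discussed in the contribution.

```python
"""Sketch of a semantic (RDF) representation of one measurement record."""
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("https://example.org/concrete#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Describe one (hypothetical) specimen and a measured property as triples.
g.add((EX.specimen1, RDF.type, EX.ConcreteSpecimen))
g.add((EX.specimen1, EX.compressiveStrength, Literal(42.5)))
g.add((EX.specimen1, EX.measuredBy, EX.rheometer1))

# Serialize as Turtle; an ontology-backed store would allow querying via SPARQL.
print(g.serialize(format="turtle"))
```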
In a third section, requirements for workflow tools to automate data processing are discussed, and their integration into reproducible data analysis is presented, with an outlook on the information that needs to be stored as metadata in the database.
Finally, the presented procedures are demonstrated, by way of example, for the calibration of a temperature-dependent constitutive model for additively manufactured mortar. Metadata schemata for a rheological measurement setup are derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behaviour, the automatic workflow that uses the stored data for model parameter estimation is demonstrated.
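As a rough idea of what such a metadata schema can look like, the following sketch defines one record in plain Python; every field name is an illustrative assumption, since the actual schema lives in the openBIS instance described above.

```python
"""Sketch of a metadata record for a rheological measurement."""
from dataclasses import dataclass, asdict
import json

@dataclass
class RheologyMetadata:
    sample_id: str              # identifier of the mortar sample (hypothetical)
    device: str                 # rheometer used for the measurement
    temperature_c: float        # measurement temperature in degrees Celsius
    shear_rate_1_per_s: float   # applied shear rate in 1/s
    operator: str               # person who performed the measurement

record = RheologyMetadata(
    sample_id="mortar-batch-01",  # hypothetical identifiers throughout
    device="rotational-rheometer",
    temperature_c=20.0,
    shear_rate_1_per_s=0.1,
    operator="jdoe",
)

# Serialized form of the record, as it could be attached to the raw data file.
print(json.dumps(asdict(record), indent=2))
```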
Despite the advances in hardware and software techniques, standard numerical methods fail to provide real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additively manufactured structure, a simplified experiment in the form of a bead-on-plate experiment is performed to get a first insight into the process and to set parameters suitably. In this work, a reduced order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation, which is required multiple times in parameter estimation, control and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties in separating space and time, a key ingredient in PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems, allowing the derivation of an efficient separated representation of the temperature. The results are verified against a standard finite element model, showing excellent agreement. The reduced order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for welding experiments using synthetic as well as real measurement data.
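The separated representation at the heart of PGD can be stated generically as follows; the notation is ours and not taken verbatim from the paper.

```latex
% PGD seeks the temperature field as a finite sum of products of
% spatial modes F_i and temporal modes G_i:
T(\mathbf{x}, t) \approx \sum_{i=1}^{N} F_i(\mathbf{x})\, G_i(t)
```

A moving heat source of the form q(x - vt) couples space and time and does not directly admit such a product form, which is why a dedicated space-time separation strategy, as developed in the paper, is required.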
Software-driven scientific workflows are often characterized by a complex interplay of various pieces of software executed in a particular order. The output of a computational step may serve as input to a subsequent computation, which requires the two steps to be executed sequentially with a proper mapping of outputs to inputs. Other computations are independent of each other and can be executed in parallel. Thus, one of the main tasks of a workflow tool is a proper and efficient scheduling of the individual processing steps.
Each processing step, just as the workflow itself, typically processes some input and produces output data. Apart from changing the input data to operate on, processing steps can usually be configured by a set of parameters to change their behavior. Moreover, the behavior of a processing step is determined by its source code and/or executable binaries/packages that are called within it. Beyond this, the computation environment not only has a significant influence on its behavior, but is also crucial in order for the processing step to work at all. The environment includes the versions of the interpreters or compilers, as well as all third-party libraries and packages that contribute to the computations carried out in a processing step.
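The scheduling task described in the two preceding paragraphs reduces, at its core, to ordering a dependency graph. The sketch below implements this core idea with Kahn's topological-sort algorithm; the task names and edges are hypothetical, and real workflow tools add caching, parallel execution and environment isolation on top.

```python
"""Sketch of dependency-aware scheduling via topological sorting."""
from collections import deque

# Edges map each task to the tasks that consume its output.
edges = {
    "preprocess": ["simulate_a", "simulate_b"],  # two independent simulations
    "simulate_a": ["postprocess"],
    "simulate_b": ["postprocess"],
    "postprocess": [],
}

# Count incoming edges; tasks with none are ready to run (possibly in parallel).
indegree = {task: 0 for task in edges}
for downstream in edges.values():
    for task in downstream:
        indegree[task] += 1

ready = deque(task for task, n in indegree.items() if n == 0)
order = []
while ready:
    task = ready.popleft()
    order.append(task)  # here a real tool would execute the processing step
    for nxt in edges[task]:
        indegree[nxt] -= 1
        if indegree[nxt] == 0:
            ready.append(nxt)

print(order)  # ['preprocess', 'simulate_a', 'simulate_b', 'postprocess']
```

Note that `simulate_a` and `simulate_b` become ready at the same time, which is exactly the parallelism a workflow tool exploits.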
FenicsXConcrete (2023)