FAIR (findable, accessible, interoperable and reusable) data usage is one of the main principles that many research and funding organizations include in their strategic plans, which means that adherence to the FAIR principles is required in many research projects. The definition of FAIR data is very general, and many challenges arise when implementing it for a specific application or project, or even when setting up a standardized procedure within a working group, a company or a research community. In this contribution, we give an overview of our experience with different methods, tools and procedures.
We begin by motivating potential use cases for FAIR data with increasing complexity, starting from a reproducible research paper, over collaborative projects with multiple participants such as round-robin tests, up to data-based models within standardization codes, applications in machine learning, or parameter estimation for physics-based simulation models.
In a second part, different options for structuring the data (including metadata schemas) are discussed. The first is the openBIS system, an open-source electronic lab notebook and PostgreSQL-based data management system. A second option is a semantic representation using RDF, based on ontologies for the domain of interest.
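As a minimal illustration of the second option, the sketch below builds a small RDF graph for a single rheological measurement using rdflib. The namespace, class and property names are hypothetical placeholders, not the project's actual domain ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical namespace; a real domain ontology would define these terms.
CON = Namespace("https://example.org/concrete#")

g = Graph()
g.bind("con", CON)

# Describe one rheological measurement as a semantic resource.
m = CON["measurement_001"]
g.add((m, RDF.type, CON.RheologicalMeasurement))
g.add((m, CON.hasTemperature, Literal(20.0, datatype=XSD.double)))   # degrees Celsius
g.add((m, CON.hasYieldStress, Literal(1250.0, datatype=XSD.double))) # Pa
g.add((m, CON.measuredOn, Literal("2023-05-04", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```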
In a third part, requirements for workflow tools to automate data processing are discussed, and their integration into reproducible data analysis is presented, with an outlook on the information that needs to be stored as metadata in the database.
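A minimal sketch of the kind of provenance metadata such a workflow step might record is shown below; the field names and the helper function are illustrative assumptions, not a fixed schema from the contribution.

```python
import hashlib
import json
import platform
from datetime import datetime, timezone
from pathlib import Path

# Create a tiny input file so the example is self-contained.
Path("measurement_001.csv").write_text("time_s,torque_Nmm\n0,1.2\n60,1.5\n")

def run_step(step_name, input_file, func):
    """Run one workflow step and return its result plus provenance metadata."""
    data = Path(input_file).read_bytes()
    result = func(data)
    provenance = {
        "step": step_name,
        "input_file": str(input_file),
        # A content hash makes the exact input verifiable and findable later.
        "input_sha256": hashlib.sha256(data).hexdigest(),
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "python_version": platform.python_version(),
    }
    return result, provenance

rows, provenance = run_step("parse_rheometer_csv", "measurement_001.csv",
                            lambda b: b.decode().splitlines())
print(json.dumps(provenance, indent=2))
```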
Finally, the presented procedures are demonstrated using the example of the calibration of a temperature-dependent constitutive model for additively manufactured mortar. A metadata schema for a rheological measurement setup is derived and implemented in an openBIS database. After a short review of a potential numerical model predicting the structural build-up behavior, the automated workflow that uses the stored data for model parameter estimation is demonstrated.
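To make the parameter-estimation step concrete, here is a minimal least-squares sketch assuming a linear Roussel-type structural build-up model, tau0(t) = tau0_0 + A_thix * t. The model form, parameter names, and synthetic data are assumptions for illustration, not the contribution's actual model or measurements.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical linear structural build-up model (Roussel-type thixotropy):
# static yield stress tau0 grows with resting time t at rate A_thix.
def model(theta, t):
    tau0_0, a_thix = theta
    return tau0_0 + a_thix * t

# Synthetic "measured" yield stresses [Pa] at resting times [s],
# standing in for data retrieved from the openBIS database.
t_data = np.array([0.0, 300.0, 600.0, 900.0, 1200.0])
tau_data = np.array([105.0, 160.0, 230.0, 280.0, 345.0])

def residuals(theta):
    return model(theta, t_data) - tau_data

fit = least_squares(residuals, x0=[100.0, 0.1])
print(f"tau0_0 = {fit.x[0]:.1f} Pa, A_thix = {fit.x[1]:.3f} Pa/s")
```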
Concrete has a long history in the construction industry and is currently one of the most widely used building materials. Unfortunately, the concrete industry has a significant environmental impact, contributing about 9% of total anthropogenic greenhouse gas (GHG) emissions. Concrete is a highly complex composite material; however, the main source of its GHG emissions is the cement. This leads to two main strategies for reducing the environmental impact. The first is to reduce the cement content of the concrete mix, either by substituting it with additives or by increasing the amount of aggregates. Usually this degrades material properties such as compressive strength or stiffness. The second option is to reduce the amount of required concrete by optimizing the topology of the structure, which, however, may require higher compressive strength. In addition, other properties such as workability need to be considered. All in all, this leads to a highly complex optimization problem, which requires the estimation of effective concrete properties based on the mixture as input to a predictive simulation.
We present an automated workflow framework that combines experimental data with simulations, calibrates the simulation, and performs the desired optimization. This workflow includes classical FE models, design guidelines based on model codes, and data-driven methods. The chosen example is a beam whose concrete mixture is optimized to reduce GHG emissions. The first step is an estimation of material parameters based on experimental data, including measures of the stochastic distribution, which allows the quality of the estimated parameters to be quantified. The second step is the optimization, which takes into account constraints such as the loading capacity after 28 days, the maximum allowed temperature during cement hydration, and the maximum time until demoulding. The applied models include a Mori-Tanaka-based homogenization method to estimate effective concrete parameters and an FE simulation covering the evolution of concrete compressive strength and stiffness, the temperature field, displacements, and stresses. This research shows a way towards a more performance-oriented material design.
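In spirit, the constrained mix optimization could look like the following sketch: minimizing a GHG proxy over the mix composition subject to a strength constraint. The cost and strength functions are invented placeholders, not the project's calibrated homogenization or FE models.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surrogate models; the real workflow would call the
# calibrated homogenization and FE models instead.
def ghg_emissions(x):
    cement, aggregate = x                    # kg per m^3 of concrete
    return 0.9 * cement + 0.01 * aggregate   # kg CO2-eq per m^3, invented factors

def strength_28d(x):
    cement, aggregate = x
    return 0.12 * cement - 0.005 * aggregate  # MPa, invented relation

# Constraint: at least 40 MPa compressive strength after 28 days.
constraints = [{"type": "ineq", "fun": lambda x: strength_28d(x) - 40.0}]
bounds = [(200.0, 500.0), (1500.0, 2000.0)]   # plausible mix ranges

res = minimize(ghg_emissions, x0=[350.0, 1800.0],
               bounds=bounds, constraints=constraints)
print(f"cement = {res.x[0]:.0f} kg/m^3, GHG = {res.fun:.0f} kg CO2-eq/m^3")
```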
Concrete has a long history in the construction industry and is currently one of the most widely used building materials. Precast concrete elements in particular are frequently used in construction projects for standardized applications, increasing the quality of the composite material and reducing the required building time. Despite the accumulated knowledge, continuous research and development in this field is essential due to the complexity of the composite combined with the ever-growing number of applications and requirements. Especially in view of global climate change, design aspects such as CO2 emissions and resource efficiency require new mix designs and optimization strategies. As a result of the material's high complexity and heterogeneity on multiple scales, exploiting its full potential under changing demands is highly challenging, even for the established industry. We propose a framework based on an ontology, which automatically combines experimental data with numerical simulations. This not only simplifies experimental knowledge transfer, but also makes the model calibration and the resulting simulation predictions reproducible and interpretable. This research shows a way towards a more performance-oriented material design. In this talk, we present our workflow for an automated simulation of a precast element, demonstrating the interaction of the ontology and the finite element simulation. We show the automatic calibration of our early-age concrete model [1, 2] to improve the prediction of the optimal time for the removal of the formwork.
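A minimal sketch of how a calibrated strength-evolution curve could be used to predict the demoulding time is shown below, assuming a Model Code-type maturity law f_c(t) = f_c28 * exp(s * (1 - sqrt(28/t))). The law and all numbers are illustrative assumptions, not the early-age model of [1, 2].

```python
import numpy as np
from scipy.optimize import brentq

# Model Code-type strength evolution, t in days. Parameters are
# illustrative, as if calibrated from the stored experimental data.
f_c28 = 45.0   # 28-day compressive strength [MPa]
s = 0.25       # cement-dependent shape parameter [-]

def f_c(t_days):
    return f_c28 * np.exp(s * (1.0 - np.sqrt(28.0 / t_days)))

# Formwork may be removed once a required strength is reached.
f_required = 15.0  # MPa, assumed demoulding criterion

t_demould = brentq(lambda t: f_c(t) - f_required, 0.05, 28.0)
print(f"predicted demoulding time: {t_demould * 24:.1f} h")
```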
The aim of the project LeBeDigital is to present opportunities of digitalization for concrete applications and to show a way towards a performance-oriented material design. Due to the high complexity of the manufacturing process of concrete and the range of parameters affecting the effective composite properties, a global optimization is challenging. Currently, most optimization is carried out only within the narrow scope of the respective players, e.g. a mix optimization for a target strength, or a design optimization for minimum weight using a given mix. Enabling a path toward a full global optimization requires a reproducible chain of data, accessible to all contributors.
We propose a framework based on an ontology, which automatically combines experimental data with numerical simulations. This not only simplifies experimental knowledge transfer, but also makes the model calibration and the resulting simulation predictions reproducible and interpretable. In addition to an optimized set of parameters, this setup allows studying the quality and uncertainty of the data and models, as well as identifying optimal experiments to improve the data set.
We will present the proposed optimization workflow using the example of a precast concrete element. The contribution will focus on the workflow and the challenges of an interoperable FEM formulation.
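As a hedged illustration of how an interoperable FE step might pull calibrated inputs from the ontology, the sketch below queries a small RDF graph for material parameters via SPARQL. The graph layout, namespace, and property names are hypothetical, not the project's knowledge graph.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical namespace and properties; the real knowledge graph would
# use the project's domain ontology instead.
CON = Namespace("https://example.org/concrete#")

g = Graph()
mix = CON["mix_042"]
g.add((mix, RDF.type, CON.ConcreteMix))
g.add((mix, CON.youngsModulus, Literal(32.0e9, datatype=XSD.double)))  # Pa
g.add((mix, CON.poissonRatio, Literal(0.2, datatype=XSD.double)))

# A SPARQL query retrieves the calibrated parameters that an FE
# simulation would consume as input.
query = """
PREFIX con: <https://example.org/concrete#>
SELECT ?E ?nu WHERE {
  ?mix a con:ConcreteMix ;
       con:youngsModulus ?E ;
       con:poissonRatio ?nu .
}
"""
for row in g.query(query):
    print(f"E = {float(row.E):.2e} Pa, nu = {float(row.nu)}")
```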
The rapidly increasing importance of FAIR and open data for quality assurance, but also for the reusability of data and the advancement of knowledge, creates an enormous need for action in research and development. In response, a wide range of ambitious activities is currently under way, e.g. concerning the creation of ontologies and knowledge graphs. The know-how is evolving rapidly, and implementation approaches are emerging in parallel in different disciplines and with different objectives, resulting in rather heterogeneous approaches.
This publication focuses on work that is currently being advanced as a largely holistic approach for materials data within the digitalization initiative "Plattform MaterialDigital". The authors work on building-material aspects in the joint project "LeBeDigital - Lebenszyklus von Beton" (life cycle of concrete). The objective is the digital description of the material behaviour of concrete over the complete manufacturing process of a precast element, with an integration of data and models within a workflow for probabilistic material and process optimization.
We report on the approach taken and the experience gained, not without drawing attention to the often underestimated complexity of the subject.