Numerical models built as virtual twins of a real structure (digital twins) are considered the future of monitoring systems. Their setup requires the estimation of unknown parameters, which are not directly measurable. Stochastic model identification is then essential, but it can be computationally costly and even infeasible in real applications. Efficient surrogate models, such as reduced-order models, can be used to overcome this limitation and provide real-time model identification. Since their numerical accuracy influences the identification process, the optimal surrogate not only has to be computationally efficient, but also accurate with respect to the identified parameters. This work aims at automatically controlling the numerical accuracy of the Proper Generalized Decomposition (PGD) surrogate for parameter identification. For this purpose, a sequence of Bayesian model identification problems, in which the surrogate's accuracy is iteratively increased, is solved with a variational Bayesian inference procedure. The effect of the numerical accuracy on the resulting posterior probability density functions is analyzed through two metrics: the Bayes factor (BF) and a criterion based on the Kullback-Leibler (KL) divergence. The approach is demonstrated on a simple test example and on two structural problems. The latter aim to identify spatially distributed damage, modeled with a PGD surrogate extended for log-normal random fields, in two different structures: a truss with synthetic data and a small reinforced bridge with real measurement data. For all examples, the evolution of the KL-based and BF criteria with increasing accuracy is shown, and their convergence indicates when further model refinement no longer affects the identification results.
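To make the convergence check concrete, the following is a minimal sketch (not the authors' implementation) of how the KL divergence between two successive Gaussian variational posteriors could be monitored as the surrogate is refined; the helper name `kl_gaussian`, the parameter values, and the tolerance are hypothetical.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians."""
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Hypothetical variational posteriors from two successive surrogate refinements
mu_coarse, cov_coarse = np.array([1.05, 0.48]), np.diag([0.020, 0.010])
mu_fine,   cov_fine   = np.array([1.02, 0.50]), np.diag([0.018, 0.009])

tol = 1e-2  # assumed convergence threshold
kl = kl_gaussian(mu_coarse, cov_coarse, mu_fine, cov_fine)
if kl < tol:
    print(f"KL = {kl:.4f}: refinement no longer changes the posterior, stop.")
else:
    print(f"KL = {kl:.4f}: refine the surrogate further.")
```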
Numerical simulators, such as finite element models, have become increasingly capable of predicting the behaviour of structures and components, owing to more sophisticated underlying mathematical models and advanced computing power. A common challenge, however, lies in calibrating these models with respect to their unknown or uncertain parameters. When measurements exist, this can be achieved by comparing the model response against the measured data. Besides uncertain model parameters, phenomena like damage give rise to further uncertainties; in particular, quasi-brittle materials such as concrete experience damage in a heterogeneous manner due to various imperfections, e.g. in geometry and boundary conditions. This makes an accurate prediction of the damaged behaviour of real structures comprising such materials difficult.
In this study, which follows a data-driven approach, we use the force version of the finite element model updating method (FEMU-F) to incorporate measured displacements into the identification of the damage parameters and thereby cope with this heterogeneity. Instead of conducting a forward evaluation of the model and comparing the model response (displacements) against the data, we impose displacements on the model and compare the resulting force residuals with measured reaction forces. To account for uncertainties in the measured displacements, we endow this approach with a penalty term that reflects the discrepancy between measured and imposed displacements, where the latter are treated as unknown random variables to be identified as well. A variational Bayesian approach is used to approximate the posterior distributions of the parameters. The underlying damage model considered in this work is a gradient-enhanced damage model.
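To illustrate the force-residual idea, here is a minimal deterministic sketch of an FEMU-F-style objective for a single damage parameter on a toy two-element bar; it deliberately omits the variational Bayesian treatment and the gradient-enhanced damage model of the study, and all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1D bar with two linear elements (node 0 fixed); stiffness scaled by (1 - d)
E, A, L = 30e3, 1.0, 1.0

def stiffness(d):
    k = (1.0 - d) * E * A / (L / 2)            # element stiffness
    return np.array([[2 * k, -k],
                     [-k,     k]])             # free DOFs: nodes 1 and 2

u_meas = np.array([0.8e-3, 1.6e-3])            # synthetic "measured" displacements
f_meas = stiffness(0.3) @ u_meas               # synthetic "measured" forces (true d = 0.3)

def residuals(x, w=1e3):
    d, u = x[0], x[1:]
    force_residual = stiffness(d) @ u - f_meas    # FEMU-F: compare force residuals
    penalty = w * (u - u_meas)                    # discrepancy to measured displacements
    return np.concatenate([force_residual, penalty])

x0 = np.concatenate(([0.0], u_meas))           # start from undamaged state, measured u
sol = least_squares(residuals, x0)
print("identified damage parameter:", sol.x[0])   # should recover d close to 0.3
```

The imposed displacements are optimized alongside the damage parameter, with the penalty weight playing the role that the measurement-uncertainty model plays in the probabilistic formulation.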
We first establish the identification procedure on two virtual examples, in which synthetic displacement data are generated at a spatially dense set of points over the domain. The procedure is then validated on an experimental case study, namely a three-point bending experiment with displacement measurements obtained from a digital image correlation (DIC) analysis.
The amount of data generated worldwide is constantly increasing. These data come from a wide variety of sources and systems, are processed in different ways, exist in a multitude of formats, and are stored in an untraceable and unstructured manner, predominantly as natural language in data silos. The same problem applies to the heterogeneous research data of materials science and engineering. In this domain, approaches and solutions are increasingly being developed to link materials data with their contextual information in a uniform and well-structured manner on platforms, thus making them discoverable, retrievable, and reusable for research and industry. Ontologies play a key role in this context: they enable the sustainable representation of expert knowledge and the semantically structured population of databases with machine-processable data triples.
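As a small illustration of what such data triples could look like, the following sketch uses the rdflib Python library; the namespace, the class and property names, and the test values are purely hypothetical and are not part of any Mat-o-Lab vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical namespace and terms, only to illustrate triple-based storage
MAT = Namespace("https://example.org/material#")

g = Graph()
g.bind("mat", MAT)

specimen = MAT["specimen_042"]
g.add((specimen, RDF.type, MAT.TensileTestSpecimen))
g.add((specimen, MAT.material, Literal("S355 structural steel")))
g.add((specimen, MAT.yieldStrengthMPa, Literal(362.5, datatype=XSD.double)))
g.add((specimen, MAT.testedAccordingTo, Literal("ISO 6892-1")))

print(g.serialize(format="turtle"))
```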
In this perspective article, we present the project initiative Materials-open-Laboratory (Mat-o-Lab), which aims to provide a collaborative environment for domain experts to digitize their research results and processes and make them fit for data-driven materials research and development. The overarching challenge is to create connection points for linking data from other domains, in order to harness the promised potential of big materials data and to harvest new knowledge.