Half-cell potential mapping (HP) is the most popular non-destructive testing (NDT) method for detecting active corrosion in reinforced concrete. HP is influenced by parameters such as moisture and chloride gradients in the component, and its sensitivity to spatially small but dangerous pitting corrosion is low. In this study we show how additional measurement information can be exploited through multi-sensor data fusion to improve detection performance and to automate data evaluation. The fusion is based on supervised machine learning (SML), i.e. methods that recognize relationships in (sensor) data based on given labels. We use SML to distinguish areas labeled "defective" and "intact" in our dataset, which consists of 18 measurements, each containing half-cell potential, ground penetrating radar, microwave moisture, and Wenner resistivity data. Exact labels for changing environmental conditions were determined in a laboratory study on a reinforced concrete slab that was deteriorated in a controlled and accelerated manner. The deterioration progress was monitored continuously, and corrosion was induced at a predefined location. The detection results are quantified and statistically evaluated. Data fusion shows a significant improvement over the best single method (HP). We describe the challenges of data-driven approaches in non-destructive testing and show possible solutions.
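As a minimal sketch of the fusion idea, assuming co-registered sensor readings stacked into one feature vector per measurement grid point; the feature layout, the synthetic data, and the random-forest classifier are illustrative assumptions, not the study's exact pipeline:

```python
# Minimal sketch of supervised data fusion for corrosion detection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per grid point; columns stack the co-located sensor readings:
# half-cell potential, GPR amplitude, microwave moisture, Wenner
# resistivity (synthetic placeholder data).
X = rng.normal(size=(500, 4))
y = rng.integers(0, 2, size=500)  # labels: 0 = "intact", 1 = "defective"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"fused-sensor ROC-AUC: {scores.mean():.2f}")
```

On real data, comparing this fused-feature score against a classifier trained on the HP column alone quantifies the gain from fusion.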
Concrete is a complex material. Its properties evolve over time, especially at early age, and depend on environmental conditions, i.e. temperature and moisture, as well as on the composition of the material.
This leads to a variety of macroscopic phenomena such as hydration/solidification/hardening, creep and shrinkage, thermal strains, damage, and inelastic deformations. Most of these phenomena are characterized by a specific set of model assumptions, and often an additive decomposition of strains into elastic, plastic, shrinkage, and creep components is performed. Each of these phenomena is investigated separately, and a number of respective independent models have been designed; the interactions are then accounted for by adding appropriate correction factors or additional models for the particular interaction. This paper discusses the importance of reconsidering, even in the experimental phase, the model assumptions required to generalize experimental data into the models used in design codes. It is especially underlined that the complex macroscopic behaviour of concrete is strongly influenced by its multiscale and multiphysics nature, and two examples of interacting phenomena (shrinkage and fatigue) are discussed.
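The additive decomposition referred to above is conventionally written, under the small-strain assumption (the thermal term is included here for completeness), as

```latex
\varepsilon_{\mathrm{tot}} = \varepsilon_{\mathrm{el}} + \varepsilon_{\mathrm{pl}} + \varepsilon_{\mathrm{sh}} + \varepsilon_{\mathrm{cr}} + \varepsilon_{\mathrm{th}}
```

with subscripts denoting the elastic, plastic, shrinkage, creep, and thermal components. The paper's argument is precisely that the interaction terms neglected by this sum can become significant, e.g. when shrinkage and fatigue loading act simultaneously.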
The amount of data generated worldwide is constantly increasing. These data come from a wide variety of sources and systems, are processed differently, have a multitude of formats, and are stored in an untraceable and unstructured manner, predominantly in natural language in data silos. The same problem applies to the heterogeneous research data of materials science and engineering. In this domain, ways and solutions are increasingly being developed to smartly link material data with their contextual information in a uniform and well-structured manner on platforms, thus making them discoverable, retrievable, and reusable for research and industry. Ontologies play a key role in this context: they enable the sustainable representation of expert knowledge and the semantically structured filling of databases with computer-processable data triples.
In this perspective article, we present the project initiative Materials-open-Laboratory (Mat-o-Lab) that aims to provide a collaborative environment for domain experts to digitize their research results and processes and make them fit for data-driven materials research and development. The overarching challenge is to generate connection points to further link data from other domains to harness the promised potential of big materials data and harvest new knowledge.
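To illustrate the data triples mentioned above, here is a minimal sketch using rdflib; the namespace URL, class, and property names are hypothetical placeholders, not the Mat-o-Lab vocabulary:

```python
# Minimal sketch: representing one material measurement as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("https://example.org/materials#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

specimen = EX["specimen_0001"]
g.add((specimen, RDF.type, EX.TensileSpecimen))
g.add((specimen, EX.material, Literal("EN AW-6061")))
g.add((specimen, EX.yieldStrengthMPa, Literal(276.0, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```

Once stored this way, the data can be queried with SPARQL alongside triples from other domains, which is what makes the cross-domain linking described above possible.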
This paper presents a novel approach for developing sustainable building materials through Sequential Learning. Data sets with a total of 1367 formulations of different types of alkali-activated building materials, including fly ash- and blast furnace slag-based concretes, together with their respective compressive strengths and CO2 footprints, were compiled from the literature to develop and evaluate this approach. Utilizing these data, a comprehensive computational study was undertaken to evaluate the efficacy of the proposed material design methodologies, simulating laboratory conditions reflective of real-world scenarios. The results indicate a significant reduction in development time and lower research costs enabled by machine learning predictions. This work challenges common practices in data-driven materials development for building materials: our results show that the training data required for data-driven design may be much smaller than commonly suggested, and that establishing a practical design framework is more important than choosing more accurate models. The approach can be implemented immediately in practical applications and can translate into significant advances in sustainable building materials development.
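A minimal sketch of such a sequential-learning loop, assuming a Gaussian-process surrogate and an optimistic upper-bound acquisition rule; both choices and the synthetic design space are illustrative, the paper benchmarks several such strategies:

```python
# Minimal sketch of sequential learning for formulation screening.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

# Synthetic design space: rows = candidate mixes, columns = mix proportions.
X_pool = rng.uniform(size=(200, 5))
true_strength = X_pool @ np.array([30, 20, 10, 5, 2]) + rng.normal(0, 1, 200)

labeled = list(rng.choice(200, size=5, replace=False))  # small initial batch

for _ in range(10):  # each iteration stands in for one lab experiment
    gp = GaussianProcessRegressor().fit(X_pool[labeled], true_strength[labeled])
    mu, sigma = gp.predict(X_pool, return_std=True)
    utility = mu + 1.96 * sigma            # optimistic upper bound
    utility[labeled] = -np.inf             # never re-test a known mix
    labeled.append(int(np.argmax(utility)))

print("best strength found:", true_strength[labeled].max().round(1))
```

The point of the loop is that each "experiment" is chosen by the model, so far fewer formulations need to be tested than in exhaustive screening.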
This data article introduces a dataset comprising 1630 alkali-activated concrete (AAC) mixes, compiled from 106 literature sources. The dataset underwent extensive curation to address feature redundancy, transcription errors, and duplicate data, yielding refined data ready for further data-driven science in the field of AAC, an effort that constitutes a novelty in this field. The carbon footprint associated with each material used in the AAC mixes, as well as the corresponding CO2 footprint of every mix, were approximated using data from two published articles. Serving as a foundation for future expansions and rigorous data applications, this dataset enables the characterization of AAC properties through machine learning algorithms or serves as a benchmark for performance comparison among different formulations. In summary, the dataset provides a resource for researchers focusing on AAC and related materials and offers insights into the environmental benefits of substituting traditional Portland concrete with AAC.
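A minimal sketch of the kind of curation steps described above, using pandas; the column names and toy values are hypothetical, not the published dataset schema:

```python
# Minimal sketch: removing duplicate mixes and redundant features.
import pandas as pd

df = pd.DataFrame({
    "fly_ash_kg_m3": [300, 300, 250],
    "slag_kg_m3":    [100, 100, 150],
    "strength_MPa":  [45.0, 45.0, 52.0],
    "strength_psi":  [6527, 6527, 7542],  # redundant: unit conversion only
})

df = df.drop_duplicates()               # duplicate rows from re-transcription
df = df.drop(columns=["strength_psi"])  # feature redundancy
print(df)
```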
In recent decades, the number of components in concrete has grown, particularly in formulations aimed at reducing carbon footprints. Innovations include diverse binders, supplementary cementitious materials, activators, concrete admixtures, and recycled aggregates. These developments target not only the enhancement of material properties but also the mitigation of the ecological and economic impacts of concrete, the most extensively used material by humankind. However, these advancements also introduce greater variability in the composition of raw materials. The material's behavior is significantly influenced by its nanoscale properties, which can pose challenges for accurate characterization. Together with the increasingly inconsistent composition of raw materials, this makes experimental tuning of formulations ever more necessary, while the increased compositional complexity makes finding the ideal formulation through trial and error challenging. Inverse design (ID) techniques offer a solution by allowing a comprehensive search of the entire design space to create new and improved concrete formulations. In this publication, we introduce the concept of ID and demonstrate how our open-source app "SLAMD" provides all necessary steps of the workflow to adopt it in the laboratory, lowering the application barriers. The intelligent screening process, guided by a predictive model, leads to a more efficient and effective data-driven material design process, resulting in a reduced carbon footprint and improved material quality while considering socio-economic factors in materials design.
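A minimal sketch of inverse design as model-guided screening, assuming a linear surrogate, synthetic data, and an arbitrary strength/CO2 trade-off weight; none of this reflects the SLAMD implementation:

```python
# Minimal sketch: enumerate candidates, predict, rank by a combined objective.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

X_known = rng.uniform(size=(50, 4))                 # already-tested mixes
y_known = X_known @ np.array([40, 25, 10, 5]) + rng.normal(0, 2, 50)
co2_factors = np.array([0.9, 0.3, 0.1, 0.05])       # per-component CO2 weights

model = Ridge().fit(X_known, y_known)

X_candidates = rng.uniform(size=(10_000, 4))        # full design space
strength = model.predict(X_candidates)
co2 = X_candidates @ co2_factors

score = strength - 20.0 * co2                       # trade-off objective
best = np.argsort(score)[::-1][:5]
print("top candidate indices:", best)
```

The inversion happens in the ranking step: instead of asking what property a given mix has, the model is swept over the whole design space to find the mixes that best satisfy the target.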
High-strength aluminum alloys used in aerospace and automotive applications obtain their strength through precipitation hardening. Achieving the desired mechanical properties requires precise control over the nanometer-sized precipitates. However, the microstructure of these alloys changes over time due to aging, leading to a deterioration in strength. Typically, the size, number, and distribution of precipitates are determined by manual analysis for a quantitative assessment of microstructural changes, which is subjective and time-consuming. In our work, we introduce a progressive and automatable approach that enables a more efficient, objective, and reproducible analysis of precipitates. The method involves several sequential steps using an image repository containing dark-field transmission electron microscopy (DF-TEM) images depicting various aging states of an aluminum alloy. During the process, precipitate contours are generated and quantitatively evaluated, and the results are comprehensibly transferred into semantic data structures. The use and deployment of Jupyter Notebooks, along with the beneficial implementation of Semantic Web technologies, significantly enhances the reproducibility and comparability of the findings. This work serves as an exemplar of FAIR image and research data management.
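A minimal sketch of the segment-and-measure step, assuming Otsu thresholding and connected-component labeling on a synthetic stand-in image; the published pipeline's actual segmentation method is not specified here:

```python
# Minimal sketch: quantify bright precipitates in a DF-TEM-like image.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(3)

# Synthetic stand-in for a dark-field TEM image: bright blobs on noise.
img = rng.normal(0.1, 0.05, (256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx in rng.integers(20, 236, size=(30, 2)):
    img += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)

mask = img > filters.threshold_otsu(img)   # segment bright precipitates
labels = measure.label(mask)               # connected components = contours
areas = [r.area for r in measure.regionprops(labels)]
print(f"precipitates: {len(areas)}, mean area: {np.mean(areas):.1f} px")
```

The per-precipitate measurements (count, area, distribution) are exactly the quantities that would then be serialized into the semantic data structures mentioned above.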
Large language models (LLMs) such as GPT-4 have caught the interest of many scientists. Recent studies suggest that these models could be useful in chemistry and materials science. To explore these possibilities, we organized a hackathon. This article chronicles the projects built as part of this hackathon. Participants employed LLMs for various applications, including predicting properties of molecules and materials, designing novel interfaces for tools, extracting knowledge from unstructured data, and developing new educational applications. The diverse topics, and the fact that working prototypes could be generated in less than two days, highlight that LLMs will profoundly impact the future of our fields. The rich collection of ideas and projects also indicates that the applications of LLMs are not limited to materials science and chemistry but offer potential benefits to a wide range of scientific disciplines.