Engineering sciences and associated activities
Understanding the chemistry and nature of individual chemical bonds is essential for materials design. Bonding analysis via the LOBSTER software package has provided valuable insights into the properties of materials for thermoelectric and catalysis applications. Thus, the data generated from bonding analysis becomes an invaluable asset that can be utilized as features in large-scale data analysis and machine learning of material properties. However, no systematic study has so far conducted high-throughput materials simulations to curate and validate bonding data obtained from LOBSTER. Here we present an approach to constructing such a large database of quantum-chemical bonding information.
A deep insight into the chemistry and nature of individual chemical bonds is essential for understanding materials. Bonding analysis is expected to provide important features for large-scale data analysis and machine learning of material properties. Such information on chemical bonds can be calculated using the LOBSTER (www.cohp.de) software package, which post-processes data from modern density functional theory computations by projecting plane-wave-based wave functions onto a local atomic orbital basis. We have performed bonding analysis on 1520 compounds (insulators and semiconductors) using a fully automated workflow combining the VASP and LOBSTER software packages. We then automatically evaluated the data with LobsterPy (https://github.com/jageo/lobsterpy) and provide the results as a database. The projected densities of states and bonding indicators are benchmarked against VASP projections and available heuristics, respectively. Lastly, we illustrate the predictive power of bonding descriptors by constructing a machine-learning model for phononic properties, which improves prediction accuracy by 27 % (in mean absolute error) compared to a benchmark model that differs only in not relying on any quantum-chemical bonding features.
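For readers who want to reproduce the automatic evaluation step for a single compound, a minimal LobsterPy call might look like the following sketch; it assumes a completed LOBSTER run in the working directory, and the argument names follow the LobsterPy documentation (they should be checked against the installed version):

```python
# Minimal sketch, assuming a finished LOBSTER calculation in the current
# directory; argument names follow the LobsterPy documentation.
from lobsterpy.cohp.analyze import Analysis
from lobsterpy.cohp.describe import Description

analysis = Analysis(
    path_to_poscar="POSCAR",
    path_to_icohplist="ICOHPLIST.lobster",
    path_to_cohpcar="COHPCAR.lobster",
    path_to_charge="CHARGE.lobster",
    which_bonds="cation-anion",  # summarize cation-anion bonds only
)
description = Description(analysis_object=analysis)
description.write_description()  # prints an automatic textual bonding summary
```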
This database consists of bonding data computed using LOBSTER for 1520 solid-state compounds (insulators and semiconductors). It comprises two kinds of JSON files. The smaller, lightweight JSON files contain summarized bonding information for each compound. The files are named after the corresponding ID numbers in the Materials Project database.
Here we also provide the larger computational-data JSON files for 700 compounds. Each of these files contains the data of all important LOBSTER computation output files, stored as a dictionary.
This database consists of bonding data computed using LOBSTER for 1520 solid-state compounds (insulators and semiconductors). The files are named after the corresponding ID numbers in the Materials Project database.
Here we provide the larger computational-data JSON files for the remaining 820 compounds. Each file contains the data of all important LOBSTER computation output files, stored as a dictionary.
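A hypothetical way to inspect one of the JSON files in Python; the file name stands in for an actual Materials Project ID, and the exact schema should be taken from the database documentation:

```python
# Hypothetical example: "mp-1234.json.gz" stands in for an actual
# Materials Project ID; monty's loadfn transparently reads gzipped JSON.
from monty.serialization import loadfn

summary = loadfn("mp-1234.json.gz")
print(sorted(summary))  # top-level keys of the bonding summary
```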
The presentation describes a novel approach to dynamically adjusting the weaving motion of the electrode in narrow-gap GMAW.
An event-driven arc sensor is used to dynamically adjust the weaving angle to variations in gap width by detecting each groove sidewall independently and in real time. The approach presented requires only minimal user configuration for spray-arc or pulsed-arc transfer modes and can effectively be used in double- and single-sided weaving applications. Furthermore, displacements of the welding torch with respect to the groove center line or the contact-tip-to-workpiece distance are compensated.
Industry 4.0 is all about interconnectivity, sensor-enhanced process control, and data-driven systems. Process analytical technology (PAT) such as online nuclear magnetic resonance (NMR) spectroscopy is gaining in importance, as it increasingly contributes to automation and digitalization in production. In many cases, however, a classical evaluation of process data and their transformation into knowledge has so far not been possible or not economical because the available datasets are too small. When developing an automated method applicable in process control, sometimes only the basic data of a limited number of batch tests from typical product and process development campaigns are available. However, these datasets are not large enough for training machine-supported procedures. In this work, to overcome this limitation, a new procedure was developed that allows physically motivated multiplication of the available reference data in order to obtain a sufficiently large dataset for training machine learning algorithms. The underlying example chemical synthesis was measured and analyzed with both application-relevant low-field NMR and high-field NMR spectroscopy as the reference method. Artificial neural networks (ANNs) have the potential to infer valuable process information from relatively limited input data. However, in order to predict concentrations under complex conditions (many reactants and wide concentration ranges), larger ANNs and, therefore, a larger training dataset are required. We demonstrate that a moderately complex problem with four reactants can be addressed using ANNs in combination with the presented PAT method (low-field NMR) and with the proposed approach to generate meaningful training data.
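The following is a minimal, illustrative sketch of the general idea, not the authors' exact procedure: synthetic mixture spectra are generated as concentration-weighted combinations of measured pure-component spectra plus noise, and an ANN is trained to map spectra back to concentrations. All array shapes and names are assumptions for illustration:

```python
# Illustrative sketch only: all shapes, names, and the noise model are
# assumptions, not the authors' procedure.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_points, n_components = 512, 4
pure = rng.random((n_components, n_points))  # stand-in pure-component spectra

# Physically motivated augmentation: mixture spectra are approximated as
# concentration-weighted sums of pure spectra plus measurement noise.
conc = rng.random((5000, n_components))
spectra = conc @ pure + rng.normal(0.0, 0.01, (5000, n_points))

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(spectra, conc)  # learn the mapping spectrum -> concentrations
```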
We created a workflow that fully automates bonding analysis using Crystal Orbital Hamilton Populations, which are bond-weighted densities of states. This enables understanding of crystalline material properties based on chemical bonding information. To facilitate data analysis and machine-learning research, our tools include automatic plots, automated text output, and output in machine-readable format.
Automated bonding analysis software has been developed based on Crystal Orbital Hamilton Populations to facilitate high-throughput bonding analysis and machine learning on bonding features. This work presents the software and discusses its applications to simple and complex materials such as GaN, NaCl, the oxynitrides XTaO2N (X = Ca, Ba, Sr) and Yb14Mn1Sb11.
The overall interest in nanotoxicity, triggered by the increasing use of nanomaterials in the material and life sciences, and the synthesis of an ever-increasing number of new functional nanoparticles call for standardized test procedures [1,2] and for efficient approaches to screen the potential genotoxicity of these materials. Aiming at the development of fast, easy-to-use, automated microscopic methods for the determination of the genotoxicity of different types of nanoparticles, we assess the potential of the fluorometric γH2AX assay for this purpose. This assay, which can be run on an automated microscopic detection system, relies on the detection of DNA double strand breaks as a sign of genotoxicity [3]. Here, we provide first results obtained with broadly used nanomaterials like CdSe/CdS and InP/ZnS quantum dots as well as iron oxide, gold, and polymer particles of different surface chemistry with previously tested colloidal stability and different cell lines like Hep-2 and 8E11 cells, which reveal a dependence of the genotoxicity on the chemical composition as well as the surface chemistry of these nanomaterials. These studies will also be used to establish nanomaterials as positive and negative genotoxicity controls or standards for assay performance validation for users of this fluorometric genotoxicity assay. In the future, after proper validation, this microscopic platform technology will be expanded to other typical toxicity assays.
Bonds and local atomic environments are crucial descriptors of material properties. They have been used to create design rules and heuristics and as features in machine learning of materials properties. Implementations and algorithms (e.g., ChemEnv and LobsterEnv) for identifying local atomic environments based on geometrical characteristics and quantum-chemical bonding analysis are nowadays available. Fully automatic workflows and analysis tools have been developed to use quantum-chemical bonding analysis on a large scale. The lecture will demonstrate how our tools, which assess local atomic environments and perform automatic bonding analysis, help to develop new machine learning models and a new intuitive understanding of materials. Furthermore, other recent workflow contributions to the Materials Project software infrastructure (pymatgen, atomate2) related to phonons and machine-learning potentials will be discussed.
The TED-GC-MS analysis is a two-step method. A sample is first decomposed in a thermogravimetric analyzer (TGA) and the gaseous decomposition products are then trapped on a solid-phase adsorber. Subsequently, the solid-phase adsorber is analyzed with thermal desorption gas chromatography mass spectrometry (TDU-GC-MS). This method is ideally suited for the analysis of polymers and their degradation processes. Here, a new, entirely automated system is introduced which enables high sample throughput and reproducible, automated fractionated collection of decomposition products. Strengths and limitations of the system configuration are elaborated via three examples focused on practical challenges in materials analysis and identification: i) separate analysis of the components of a wood-plastic composite material, ii) quantitative determination of the weight concentration of the constituents of a polymer blend and iii) quantitative analysis of model samples of microplastics in suspended particulate matter.
Automated Wall Thickness Evaluation for Turbine Blades Using Robot-Guided Ultrasonic Array Imaging
(2024)
Nondestructive testing has become an essential part of the maintenance of modern gas turbine blades and vanes since it provides an increase in both safety against critical failure and efficiency of operation. Targeted repairs of the blade’s airfoil require localized wall thickness information. This information, however, is hard to obtain by nondestructive testing due to the complex shapes of surfaces, cavities, and material characteristics. To address this problem, we introduce an automated nondestructive testing system that scans the part using an immersed ultrasonic array probe guided by a robot arm. For imaging, we adopt a two-step, surface-adaptive Total Focusing Method (TFM) approach.
For each test position, the TFM allows us to identify the outer surface, followed by calculating an adaptive image of the interior of the part, where the inner surface’s position and shape are obtained. To handle the large volumes of data, the surface features are automatically extracted from the TFM images using specialized image processing algorithms. Subsequently, the collection of 2D extracted surface data is merged and smoothed in 3D space to form the outer and inner surfaces, facilitating wall thickness evaluation. With this approach, representative zones on two gas turbine vanes were tested, and the reconstructed wall thickness values were evaluated via comparison with reference data from an optical scan. For the test zones on two turbine vanes, average errors ranging from 0.05 mm to 0.1 mm were identified, with a standard deviation of 0.06–0.16 mm.
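At its core, the TFM is a delay-and-sum over all transmit-receive element pairs of a full matrix capture. The following bare-bones sketch omits the surface-adaptive two-step refinement described above and assumes a constant sound speed and a linear array; all variable names are illustrative:

```python
# Bare-bones delay-and-sum TFM, assuming a constant sound speed and a
# linear array at z = 0 with n_tx == n_rx == len(elem_x); illustrative only.
import numpy as np

def tfm_image(fmc, t, elem_x, grid_x, grid_z, c):
    """fmc: (n_tx, n_rx, n_samples) A-scans; t: (n_samples,) sample times [s];
    elem_x: (n_rx,) element x-positions [m]; c: sound speed [m/s]."""
    n_tx, n_rx, n_samples = fmc.shape
    image = np.zeros((len(grid_z), len(grid_x)))
    for ix, x in enumerate(grid_x):
        for iz, z in enumerate(grid_z):
            d = np.hypot(elem_x - x, z)            # element-to-pixel distances
            for tx in range(n_tx):
                tof = (d[tx] + d) / c              # tx -> pixel -> each rx
                idx = np.clip(np.searchsorted(t, tof), 0, n_samples - 1)
                image[iz, ix] += fmc[tx, np.arange(n_rx), idx].sum()
    return np.abs(image)
```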
Whereas the characterization of nanomaterials using different analytical techniques is often highly automated and standardized, the sample preparation that precedes it causes a bottleneck in nanomaterial analysis, as it is performed manually. Usually, this pretreatment depends on the skills and experience of the analysts. Furthermore, adequate reporting of the sample preparation is often missing. In this overview, some solutions for techniques widely used in nano-analytics to overcome this problem are discussed. Two examples of sample preparation optimization by automation are presented, which demonstrate that this approach leads to increased analytical confidence. Our first example is motivated by the need to exclude human bias and focuses on the development of automation in sample introduction. To this end, a robotic system has been developed which can prepare stable and homogeneous nanomaterial suspensions amenable to a variety of well-established analytical methods, such as dynamic light scattering (DLS), small-angle X-ray scattering (SAXS), field-flow fractionation (FFF) or single-particle inductively coupled plasma mass spectrometry (sp-ICP-MS). Our second example addresses biological samples, such as cells exposed to nanomaterials, which are still challenging for reliable analysis. An air-liquid interface has been developed for the exposure of biological samples to nanomaterial-containing aerosols. The system exposes transmission electron microscopy (TEM) grids under reproducible conditions, whilst also allowing characterization of aerosol composition with mass spectrometry. Such an approach enables correlative measurements combining biological with physicochemical analysis. These case studies demonstrate that standardization and automation of sample preparation setups, combined with appropriate measurement processes and data reduction, are crucial steps towards more reliable and reproducible data.
The talk "Automation in computational materials science" deals with the current state of automation in the field of computational materials science. It illustrates how automation can, for example, be used to speed up the search for new ferroelectric and spintronic materials. Furthermore, it lists current tools for automation and challenges in the field.
Automation simplifies the use of computational materials science software and makes it accessible to a wide range of users. This enables high-throughput calculations and makes it easier for non-specialists to enter computational materials science. However, increasing automation also poses threats that should be considered while interacting with automated procedures.
Over the past couple of decades, Non-Destructive Testing (NDT) has seen a significant increase in the use of automation. In addition to increased reliability, objectivity, consistency, repeatability, productivity, and so on, automating parts of the process is expected to decrease the potential for human error. However, the literature on human-automation interaction suggests that automation is not only associated with benefits, but also with new risks and risk sources. First, this paper will present the methodology used to identify, for the first time, possible risks associated with mechanised data acquisition and corresponding data evaluation. Moreover, it will highlight possible risks, their causes, consequences, and ways of preventing them. Second, those preventive measures will be further analysed by examining new risks that can arise from their implementation, i.e. potential for failure arising from (a) working with automated defect-detection and sizing aids, (b) implementing human redundancy, and (c) improvement of the inspection procedures without due consideration of the procedure users. And third, some optimisation strategies will be provided. The purpose of this work is to show that mechanised testing is associated with potential for failure and that the sources of those risks go beyond single inspectors and need to be looked at in the interaction of people with other systems, i.e. the technology, the team and, most importantly, the organisation.
Invited for this month’s cover are researchers from Bundesanstalt für Materialforschung und -prüfung (Federal Institute for Materials Research and Testing) in Germany, Friedrich Schiller University Jena, Université catholique de Louvain, University of Oregon, Science & Technology Facilities Council, RWTH Aachen University, Hoffmann Institute of Advanced Materials, and Dartmouth College. The cover picture shows a workflow for automatic bonding analysis with Python tools (green python). The bonding analysis itself is performed with the program LOBSTER (red lobster). The starting point is a crystal structure, and the results are automatic assessments of the bonding situation based on Crystal Orbital Hamilton Populations (COHP), including automatic plots and text outputs. Coordination environments and charges are also assessed. More information can be found in the Research Article by J. George, G. Hautier, and co-workers.
Bonds and local atomic environments are crucial descriptors of material properties. They have been used to create design rules and heuristics for materials. More and more frequently, they are used as features in machine learning. Implementations and algorithms (e.g., ChemEnv and LobsterEnv) for identifying these local atomic environments based on geometrical characteristics and quantum-chemical bonding analysis are nowadays available. Fully automatic workflows and analysis tools have been developed to use quantum-chemical bonding analysis on a large scale and for machine-learning approaches. The latter relates to a general trend toward automation in density functional-based materials science. The lecture will demonstrate how our tools, which assess local atomic environments, helped to test and develop heuristics, design rules, and an intuitive understanding of materials.
Bonds and local atomic environments are crucial descriptors of material properties. They have been used to create design rules and heuristics and as features in machine learning of materials properties. Implementations and algorithms (e.g., ChemEnv and LobsterEnv) for identifying local atomic environments based on geometrical characteristics and quantum-chemical bonding analysis are nowadays available. Fully automatic workflows and analysis tools have been developed to use quantum-chemical bonding analysis on a large scale. The lecture will demonstrate how our tools, which assess local atomic environments and perform automatic bonding analysis, help to develop new machine learning models and a new intuitive understanding of materials.[5,6] Furthermore, the general trend toward automation in density functional-based materials science and some of our recent contributions will be discussed.
Chemical bonding and coordination environments are crucial descriptors of material properties. They have previously been applied to creating chemical design guidelines and chemical heuristics. They are currently being used as features in machine learning more and more frequently. I will discuss implementations and algorithms (ChemEnv and LobsterEnv) for identifying these coordination environments based on geometrical characteristics and quantum-chemical analysis of chemical bonds. I will demonstrate how these techniques helped in testing chemical heuristics like the Pauling rules and thereby improved our understanding of chemistry. I will also show how these tools can be used to create new design guidelines and a new understanding of chemistry. To use quantum-chemical bonding analysis on a large scale and for machine-learning approaches, fully automatic workflows and analysis tools have been developed. After presenting the capabilities of these tools, I will also point out how these developments relate to the general trend towards automation in the field of density functional-based materials science.
Talk about my recent research on data-driven chemical understanding with geometrical and quantum-chemical bonding analysis.
Civilization and modern societies would not be possible without manmade materials. Considering their production volumes, their supporting role in nearly all industrial processes, and the impact of their sourcing and production on the environment, metals and alloys are and will be of prominent importance for the clean energy transition. The focus of materials discovery must move to more specialized, application-tailored green alloys that outperform the legacy materials not only in performance but also in sustainability and resource efficiency. This white paper summarizes a joint Canadian-German initiative aimed at developing a materials acceleration platform (MAP) focusing on the discovery of new alloy families that will address this challenge. We call our initiative the “Build to Last Materials Acceleration Platform” (B2L-MAP) and present in this perspective our concept of a three-tiered self-driving laboratory that is composed of a simulation-aided pre-selection module (B2L-select), an artificial intelligence (AI)-driven experimental lead generator (B2L-explore), and an upscaling module for durability assessment (B2L-assess). The resulting tool will be used to identify and subsequently demonstrate novel corrosion-resistant alloys at scale for three key applications of critical importance to an offshore, wind-driven hydrogen plant (reusable electrical contacts, offshore infrastructure, and oxygen evolution reaction catalysts).
Many microstructural features exhibit non-trivial geometries, which can only be derived to a limited extent from two-dimensional images. For example, graphite arrangements in lamellar gray cast iron have complex geometries, and the same is true for additively manufactured materials and three-dimensional conductive path structures. Some can be visualized using tomographic methods, but some cannot, due to weak contrast and/or lack of resolution when analyzing macroscopic objects. Classic metallography can help but must be expanded to the third dimension. The method of reconstructing three-dimensional structures from serial metallographic sections is certainly not new. However, the effort required to manually assemble many individual sections into image stacks is very high and stands in the way of frequent application. For this reason, an automated, robot-supported 3D metallography system is being developed at BAM, which carries out the steps of repeated preparation and image acquisition on polished specimens.
Preparation includes grinding, polishing and, optionally, etching of the polished surface. Image acquisition comprises autofocused light-microscopic imaging at several magnification levels. The image stacks obtained are then pre-processed, segmented, and converted into 3D models, which ultimately resemble microtomographic models, but with high resolution at large volume. Contrasting by classical chemical etching reveals structures that cannot be resolved using tomographic methods. The integration of further imaging and measuring methods into this system is underway. Some examples will be discussed in the presentation.
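As a rough illustration of the reconstruction step, a minimal Python sketch using scikit-image might stack aligned section images into a volume and label a segmented phase; the file paths and the simple Otsu threshold are assumptions, and real data would first require registration and artifact correction:

```python
# Rough sketch with assumed file paths; real stacks need registration and
# artifact correction before segmentation.
import glob
import numpy as np
from skimage import filters, io, measure

slices = [io.imread(f, as_gray=True) for f in sorted(glob.glob("sections/*.tif"))]
volume = np.stack(slices, axis=0)                 # (n_sections, ny, nx)

binary = volume < filters.threshold_otsu(volume)  # e.g. a dark graphite phase
labels = measure.label(binary)                    # connected 3D regions
print(f"{labels.max()} connected 3D features found")
```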
An approach to developing an arc sensor for gap-width estimation during automated NG-GMAW with a weaving electrode motion is introduced, combining arc sensor readings with optical measurements of the groove shape to allow precise analysis of the process. The two test specimens welded for this study were designed to feature a variable groove geometry in order to maximize the efficiency of the experimental effort, resulting in 1696 individual weaving-cycle records with associated arc sensor measurements, process parameters, and groove shape information. Gap width was varied from 18 to 25 mm, and wire feed rates in the range of 9 to 13 m/min were used in the course of this study. Artificial neural networks were used as a modelling tool to derive an arc sensor for gap-width estimation suitable for online process control that can adapt to changes in process parameters as well as in the weaving motion of the electrode. Wire feed rate, weaving current, sidewall dwell currents, and angles were used as inputs to calculate the gap width. Evaluation of the proposed arc sensor model shows very good estimation capabilities for parameters sufficiently covered during the experiments.
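A minimal, illustrative sketch of such a regression setup (not the authors' exact network or data) could look as follows; the synthetic input features stand in for the recorded weaving-cycle quantities:

```python
# Illustrative only: synthetic stand-ins for the 1696 recorded weaving
# cycles; the real model, features, and data differ.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: wire feed rate, weaving current, left/right sidewall dwell
# currents, left/right sidewall angles (all synthetic here).
X = rng.random((1696, 6))
gap = 18.0 + 7.0 * X[:, 1] + rng.normal(0.0, 0.2, 1696)  # mm, fabricated relation

X_tr, X_te, y_tr, y_te = train_test_split(X, gap, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out cycles: {model.score(X_te, y_te):.2f}")
```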
By automatically recording as much information as possible in automated laboratory setups, reproducibility and traceability of experiments are vastly improved. This presentation shows what such an approach means for the quality of experiments in an X-ray scattering laboratory and an automated synthesis set-up.
We present Jobflow, a domain-agnostic Python package for writing computational workflows tailored for high-throughput computing applications. With its simple decorator-based approach, functions and class methods can be transformed into compute jobs that can be stitched together into complex workflows. Jobflow fully supports dynamic workflows where the full acyclic graph of compute jobs is not known until runtime, such as compute jobs that launch other jobs based on the results of previous steps in the workflow. The results of all Jobflow compute jobs can be easily stored in a variety of filesystem- and cloud-based databases without the data storage process being part of the underlying workflow logic itself. Jobflow has been intentionally designed to be fully independent of the choice of workflow manager used to dispatch the calculations on remote computing resources. At the time of writing, Jobflow workflows can be executed either locally or across distributed compute environments via an adapter to the FireWorks package, and Jobflow fully supports the integration of additional workflow execution adapters in the future.
Jobflow is a free, open-source library for writing and executing workflows. Complex workflows can be defined using simple Python functions and executed locally or on arbitrary computing resources using the FireWorks workflow manager.
Some features that distinguish Jobflow are dynamic workflows, easy composition and connection of workflows, and the ability to store workflow outputs across multiple databases.
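A minimal example in the style of the Jobflow documentation, executed locally:

```python
# Minimal flow: the second job consumes the not-yet-computed output of the
# first, so Jobflow resolves the dependency at runtime.
from jobflow import Flow, job
from jobflow.managers.local import run_locally

@job
def add(a, b):
    return a + b

first = add(1, 2)
second = add(first.output, 3)  # reference to the future output of `first`
flow = Flow([first, second])

responses = run_locally(flow)
print(responses[second.uuid][1].output)  # -> 6
```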
The LOBSTER (Deringer et al., 2011; Maintz et al., 2013, 2016; Nelson et al., 2020) software aids in extracting quantum-chemical bonding information from materials by projecting the plane-wave based wave functions from density functional theory (DFT) onto an atomic orbital basis. LobsterEnv, a module implemented in pymatgen (Ong et al., 2013) by some of the authors of this package, facilitates the use of quantum-chemical bonding information obtained from LOBSTER calculations to identify neighbors and coordination environments. LobsterPy is a Python package that offers a set of convenient tools to further analyze and summarize the LobsterEnv outputs in the form of JSONs that are easy to interpret and process. These tools enable the estimation of (anti)bonding contributions, generation of textual descriptions, and visualization of LOBSTER computation results. Since its first release, both LobsterPy and LobsterEnv capabilities have been extended significantly. Unlike earlier versions, which could only automatically analyze Crystal Orbital Hamilton Populations (COHPs) (Dronskowski & Blöchl, 1993), both can now also analyze Crystal Orbital Overlap Populations (COOP) (Hughbanks & Hoffmann, 1983) and the Crystal Orbital Bond Index (COBI) (Müller et al., 2021). Extracting the information about the most important orbitals contributing to the bonds is optional, and users can enable it as needed. Additionally, bonding-based features for machine-learning (ML) studies can be engineered via the sub-packages “featurize” and “structuregraphs”. Alongside its Python interface, it also provides an easy-to-use command line interface (CLI) that runs automatic analysis of the computations and generates a summary of results and publication-ready figures. LobsterPy has been used to produce the results in Ngo et al. (2023), Chen et al. (2024), and Naik et al. (2023), and it is also part of the Atomate2 (2023) bonding-analysis workflow for generating bonding analysis data in a format compatible with the Materials Project (Jain et al., 2013) API.
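As a sketch of the feature-engineering path mentioned above, the “featurize” sub-package can be used roughly as follows; the class and argument names follow the published documentation and should be verified against the installed LobsterPy version, and the calculation directory name is hypothetical:

```python
# Sketch based on the published documentation; verify names against the
# installed LobsterPy version. "mp-1234" is a hypothetical calc directory.
from lobsterpy.featurize.core import FeaturizeLobsterpy

featurizer = FeaturizeLobsterpy(path_to_lobster_calc="mp-1234", bonds="all")
df = featurizer.get_df()       # one row of bonding-based ML features
print(df.columns.tolist())
```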
McSAS3
(2023)
McSAS3 is a refactored version of the original McSAS (see DOI 10.1107/S1600576715007347). This software fits scattering patterns to obtain size distributions without assumptions on the size distribution form. The refactored version has some neat features:
- Multiprocessing is included, spread out over as many cores as there are repetitions!
- Full state of the optimization is stored in an organized HDF5 state file.
- Histogramming is separate from optimization and a result can be re-histogrammed as many times as desired.
- SasModels allows a wide range of models to be used
- If SasModels does not work (e.g. because of gcc compiler issues on Windows or Mac), an internal sphere model is supplied
- Simulated data of the scattering of a special shape can also be used as a McSAS fitting model. Your models are infinite!
- 2D fitting also works.
High-throughput computations are nowadays an established way to suggest new candidate materials for applications to experimentalists. Due to new packages for automation and access to databases of computed materials properties, these studies have become increasingly complex in recent years. Besides suggesting new candidate materials for applications, they also offer a way to understand materials properties based on chemical bonds. For example, we have recently used orbital-based bonding analysis to understand the results of high-throughput studies for spintronic materials, ferroelectric materials and photovoltaic materials in detail. To do so, we have developed Python tools for high-throughput bonding analysis with the programs VASP and LOBSTER (see www.cohp.de). They are based on the Python packages pymatgen, atomate, and custodian. This implementation will be discussed within the talk. We also expect that these tools offer possibilities to arrive at new descriptors for materials properties based on chemical bonds.
In recent years, many protocols in computational materials science have been automated and made available within software packages (primarily Python-based). This ranges from the automation of simple heuristics (oxidation states, coordination environments) to the automation of protocols involving multiple DFT and post-processing tools, such as (an)harmonic phonon computations or bonding analysis. Once made available, such developments shorten project time frames and open new possibilities. For example, we can now easily perform data-driven tests of well-known rules and heuristics or develop quantum-chemistry-based materials descriptors for machine learning approaches. These tests and descriptors can have applications related to magnetic ground-state predictions of materials relevant for spintronic applications or to predicting thermal properties relevant for thermal management in electronics. Combining high-throughput ab initio computations with fitting, fine-tuning machine learning models and predictions of such models within complex workflows is also possible and promises further acceleration in the field. In this talk, I will show our latest efforts to link automation with data-driven chemistry and materials science.
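As an illustration of how such automated protocols are dispatched in practice, a combined VASP + LOBSTER workflow might be set up roughly as follows with atomate2 and jobflow; the maker and module names follow the atomate2 documentation and should be checked against the installed version:

```python
# Sketch following the atomate2 documentation; names should be verified
# against the installed version.
from pymatgen.core import Structure
from atomate2.vasp.flows.lobster import VaspLobsterMaker
from jobflow import run_locally

structure = Structure.from_file("POSCAR")   # any starting structure
flow = VaspLobsterMaker().make(structure)   # DFT runs + LOBSTER post-processing
run_locally(flow, create_folders=True)
```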
The PMD Core Ontology (PMDco) is a comprehensive set of building blocks produced via consensus building. The ontological building blocks provide a framework representing knowledge about fundamental concepts used in Materials Science and Engineering (MSE) today. The PMDco is a mid-level ontology that establishes connections between narrower MSE application ontologies and domain neutral concepts used in already established broader (top-level) ontologies. The primary goal of the PMDco design is to enable interoperability between various other MSE-related ontologies and other common ontologies.
PMDco's class structure is both comprehensive and extensible, rendering it an efficient tool to structure MSE knowledge. The PMDco serves as a semantic middle layer, unifying common MSE concepts via semantic mapping to other semantic representations using well-known key terms from the MSE domain. The PMDco enables straightforward documentation and tracking of scientific data generation and, in consequence, enables high-quality FAIR data that allows for precise reproducibility of scientific experiments.
The design of PMDco is based on the W3C Provenance Ontology (PROV-O), which provides a standard framework for capturing the production, derivation, and attribution of resources. Via this foundation, the PMDco enables the integration of data from various data origins and the representation of complex workflows.
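A minimal rdflib sketch of the PROV-O pattern that the PMDco builds on (entities generated by activities); the example IRIs are hypothetical, and a real PMDco graph would specialize these PROV-O terms with its own MSE classes:

```python
# Sketch only: example IRIs are hypothetical; PROV-O terms are standard.
from rdflib import Graph, Namespace
from rdflib.namespace import PROV, RDF

EX = Namespace("https://example.org/")      # hypothetical data namespace

g = Graph()
g.bind("prov", PROV)
run = EX["tensile_test_42"]                 # hypothetical experiment run
dataset = EX["stress_strain_42"]            # hypothetical result dataset
g.add((run, RDF.type, PROV.Activity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((dataset, PROV.wasGeneratedBy, run))  # the provenance link PMDco builds on
print(g.serialize(format="turtle"))
```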
In summary, the PMDco is a valuable advancement for researchers and practitioners in MSE domains. It provides a common MSE vocabulary to represent and share knowledge, allowing for efficient collaboration and promoting interoperability between diverse domains. Its design allows for the systematic integration of data and metadata, enabling seamless tracing of science data. Overall, the PMDco is a crucial step towards a unified and comprehensive understanding of the MSE domain in general.