Engineering sciences and associated activities
Over the past couple of decades, Non-Destructive Testing (NDT) has seen a significant increase in the use of automation. In addition to increased reliability, objectivity, consistency, repeatability, productivity, and so on, automating parts of the process is expected to decrease the potential for human error. However, the literature on human-automation interaction suggests that automation is associated not only with benefits, but also with new risks and risk sources. First, this paper will present the methodology used to identify, for the first time, possible risks associated with mechanised data acquisition and corresponding data evaluation. Moreover, it will highlight possible risks, their causes, consequences, and ways of preventing them. Second, those preventive measures will be further analysed by examining new risks that can arise from their implementation, i.e. the potential for failure that can arise from (a) working with automated defect-detection and sizing aids, (b) implementing human redundancy, and (c) improving inspection procedures without due consideration of the procedure users. Third, some optimisation strategies will be provided. The purpose of this work is to show that mechanised testing is associated with potential for failure, and that the sources of those risks go beyond single inspectors and need to be examined in the interaction of people with other systems, i.e. the technology, the team and, most importantly, the organisation.
An approach to develop an arc sensor for gap-width estimation during automated NG-GMAW with a weaving electrode motion is introduced, combining arc sensor readings with optical measurement of the groove shape to allow precise analyses of the process. The two test specimens welded for this study were designed to feature a variable groove geometry in order to maximize the efficiency of the experimental effort, resulting in 1696 individual weaving cycle records with associated arc sensor measurements, process parameters and groove shape information. Gap width was varied from 18 to 25 mm, and wire feed rates in the range of 9 to 13 m/min were used in the course of this study. Artificial neural networks were used as a modelling tool to derive an arc sensor for gap-width estimation suitable for online process control that can adapt to changes in process parameters as well as changes in the weaving motion of the electrode. Wire feed rate, weaving current, sidewall dwell currents and angles were used as inputs to calculate the gap width. Evaluation of the proposed arc sensor model shows very good estimation capabilities for parameters sufficiently covered during the experiments.
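Framed this way, the estimator is a small regression problem. A minimal sketch, assuming scikit-learn's MLPRegressor and synthetic stand-in data in place of the 1696 measured weaving-cycle records (all value ranges and the toy feature-target relationship are assumptions, not the published model), could look like this:

```python
# Minimal sketch: ANN regression from arc-sensor features to gap width.
# Synthetic stand-in data; ranges and the toy relationship are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1696  # same count as the measured weaving-cycle records
X = np.column_stack([
    rng.uniform(9, 13, n),     # wire feed rate [m/min]
    rng.uniform(150, 250, n),  # weaving current [A] (assumed range)
    rng.uniform(150, 250, n),  # left sidewall dwell current [A] (assumed)
    rng.uniform(150, 250, n),  # right sidewall dwell current [A] (assumed)
    rng.uniform(10, 40, n),    # weaving angle [deg] (assumed range)
])
# Toy target spanning the studied 18-25 mm gap range, for illustration only.
y = 18.0 + 1.0 * (X[:, 0] - 9) + 0.1 * (X[:, 4] - 10) + rng.normal(0, 0.2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"R^2 on held-out cycles: {model.score(X_test, y_test):.2f}")
```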
The overall interest in nanotoxicity, triggered by the increasing use of nanomaterials in the material and life sciences, and the synthesis of an ever-increasing number of new functional nanoparticles call for standardized test procedures [1,2] and for efficient approaches to screen the potential genotoxicity of these materials. Aiming at the development of fast, easy-to-use, automated microscopic methods for determining the genotoxicity of different types of nanoparticles, we assess the potential of the fluorometric γH2AX assay for this purpose. This assay, which can be run on an automated microscopic detection system, relies on the detection of DNA double-strand breaks as a sign of genotoxicity [3]. Here, we provide first results obtained with broadly used nanomaterials such as CdSe/CdS and InP/ZnS quantum dots as well as iron oxide, gold, and polymer particles of different surface chemistry and previously tested colloidal stability, and with different cell lines such as Hep-2 and 8E11 cells. These results reveal a dependence of the genotoxicity on the chemical composition as well as the surface chemistry of these nanomaterials. These studies will also be used to establish nanomaterials as positive and negative genotoxicity controls or standards for assay performance validation for users of this fluorometric genotoxicity assay. In the future, after proper validation, this microscopic platform technology will be expanded to other typical toxicity assays.
The presentation describes a novel approach to dynamically adjusting the weaving motion of the electrode in narrow gap GMAW.
An event-driven arc sensor is used to dynamically adjust the weaving angle to variations in gap width by detecting each groove sidewall independently and in real time. The approach presented requires only minimal user configuration for spray-arc or pulsed-arc transfer modes and can effectively be used in double- and single-sided weaving applications. Furthermore, displacements of the welding torch with regard to the groove center line, as well as variations in contact-tip-to-workpiece distance, are compensated.
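Conceptually, the adjustment can be pictured as an event-driven control loop. The following is a purely illustrative sketch, assuming a proportional correction law, a nominal gap and hypothetical signal names; it is not the published implementation:

```python
# Illustrative sketch of event-driven weaving-angle adaptation.
# The correction law, gain and nominal gap are assumptions; the real
# sensor detects each sidewall independently from the arc signal.
from dataclasses import dataclass

@dataclass
class SidewallEvent:
    side: str        # "left" or "right"
    position: float  # estimated sidewall position [mm] in the torch frame

class WeavingController:
    def __init__(self, weaving_angle_deg: float,
                 target_gap_mm: float = 20.0, gain: float = 0.5):
        self.weaving_angle_deg = weaving_angle_deg
        self.target_gap_mm = target_gap_mm  # nominal gap width (assumed)
        self.gain = gain                    # proportional gain (assumed)
        self._last = {"left": None, "right": None}

    def on_sidewall_event(self, event: SidewallEvent) -> float:
        """Update the weaving angle each time a sidewall is detected."""
        self._last[event.side] = event.position
        left, right = self._last["left"], self._last["right"]
        if left is not None and right is not None:
            gap_mm = right - left
            # Widen or narrow the weave proportionally to the gap error.
            self.weaving_angle_deg += self.gain * (gap_mm - self.target_gap_mm)
        return self.weaving_angle_deg

controller = WeavingController(weaving_angle_deg=25.0)
controller.on_sidewall_event(SidewallEvent("left", -11.0))
print(controller.on_sidewall_event(SidewallEvent("right", 11.0)))  # -> 26.0
```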
The TED-GC-MS analysis is a two-step method. A sample is first decomposed in a thermogravimetric analyzer (TGA), and the gaseous decomposition products are trapped on a solid-phase adsorber. Subsequently, the solid-phase adsorber is analyzed with thermal desorption gas chromatography mass spectrometry (TDU-GC-MS). This method is ideally suited for the analysis of polymers and their degradation processes. Here, a new, entirely automated system is introduced which enables high sample throughput and reproducible, automated fractionated collection of decomposition products. Strengths and limitations of the system configuration are elaborated via three examples focused on practical challenges in materials analysis and identification: i) separate analysis of the components of a wood-plastic composite material, ii) quantitative determination of the weight concentrations of the constituents of a polymer blend, and iii) quantitative analysis of model samples of microplastics in suspended particulate matter.
Industry 4.0 is all about interconnectivity, sensor-enhanced process control, and data-driven systems. Process analytical technology (PAT), such as online nuclear magnetic resonance (NMR) spectroscopy, is gaining in importance, as it increasingly contributes to automation and digitalization in production. Up to now, however, a classical evaluation of process data and their transformation into knowledge has in many cases been impossible or uneconomical because the available datasets are too small. When developing an automated method applicable in process control, sometimes only the basic data of a limited number of batch tests from typical product and process development campaigns are available. Such datasets are not large enough for training machine-supported procedures. In this work, to overcome this limitation, a new procedure was developed that allows physically motivated multiplication of the available reference data in order to obtain a sufficiently large dataset for training machine learning algorithms. The underlying example chemical synthesis was measured and analyzed both with application-relevant low-field NMR and with high-field NMR spectroscopy as the reference method. Artificial neural networks (ANNs) have the potential to infer valuable process information from relatively limited input data. However, in order to predict concentrations under complex conditions (many reactants and wide concentration ranges), larger ANNs and, therefore, a larger training dataset are required. We demonstrate that a moderately complex problem with four reactants can be addressed using ANNs in combination with the presented PAT method (low-field NMR) and with the proposed approach to generate meaningful training data.
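The data-multiplication idea, in the variant that combines measured pure-component spectra (approach (i) of the accompanying dataset below), can be sketched as follows; spectral axis length, concentration ranges and noise level are illustrative assumptions, not the published values:

```python
# Sketch: multiply limited reference data into an ANN training set by
# forming physically motivated linear combinations of measured
# pure-component spectra plus noise. All shapes/ranges are assumed.
import numpy as np

rng = np.random.default_rng(42)
n_points = 512          # spectral axis length (assumed)
n_components = 4        # four reactants, as in the abstract
n_synthetic = 10_000    # size of the generated training set (assumed)

# Stand-ins for measured pure-component spectra (rows = components).
pure_spectra = rng.random((n_components, n_points))

# Random concentrations within assumed calibration ranges.
concentrations = rng.uniform(0.0, 1.0, (n_synthetic, n_components))

# Mixture spectra = concentration-weighted sums + measurement noise.
mixtures = concentrations @ pure_spectra
mixtures += rng.normal(scale=0.01, size=mixtures.shape)

# `mixtures` are the ANN inputs; `concentrations` are the targets.
print(mixtures.shape, concentrations.shape)  # (10000, 512) (10000, 4)
```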
Data set of low-field NMR spectra of continuous synthesis of nitro-4’-methyldiphenylamine (MNDPA). 1H spectra (43 MHz) were recorded as single scans.
Two different approaches for generating artificial neural network training data for the prediction of reactant concentrations were used: (i) training data based on combinations of measured pure-component spectra and (ii) training data based on a spectral model.
Synthetic low-field NMR spectra
The first four columns in the MAT files represent the component areas of each reactant within the synthetic mixture spectrum.
Xi (“pure component spectra dataset”)
Xii (“spectral model dataset”)
Experimental low-field NMR spectra from MNDPA-Synthesis
This data set represents low-field NMR spectra recorded during continuous synthesis of nitro-4’-methyldiphenylamine (MNDPA). Reference values from high-field NMR results are included.
High-throughput computations are nowadays an established way to suggest new candidate materials to experimentalists. Thanks to new packages for automation and access to databases of computed materials properties, these studies have become increasingly complex in recent years. Besides suggesting new candidate materials for applications, they also offer a way to understand materials properties based on chemical bonds. For example, we have recently used orbital-based bonding analysis to understand in detail the results of high-throughput studies for spintronic, ferroelectric and photovoltaic materials. To do so, we have developed Python tools for high-throughput bonding analysis with the programs VASP and LOBSTER (see www.cohp.de). They are based on the Python packages pymatgen, atomate, and custodian. This implementation will be discussed in the talk. We also expect that these tools offer possibilities to arrive at new bond-based descriptors for materials properties.
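As a flavour of what such tooling looks like, here is a minimal sketch of preparing a LOBSTER run from finished VASP output using pymatgen's Lobsterin helper; the file locations and the chosen option are assumptions for illustration:

```python
# Sketch: generate a LOBSTER input file from an existing VASP calculation
# using pymatgen's LOBSTER I/O helper. Paths and the option are assumed.
from pymatgen.io.lobster import Lobsterin

lobsterin = Lobsterin.standard_calculations_from_vasp_files(
    POSCAR_input="POSCAR",
    INCAR_input="INCAR",
    POTCAR_input="POTCAR",
    option="standard",  # standard COHP/COOP analysis settings
)
lobsterin.write_lobsterin(path="lobsterin")
# LOBSTER is then run in the same directory, and its output files
# (COHPCAR.lobster, ICOHPLIST.lobster, ...) are post-processed further.
```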
Automation simplifies the use of computational materials science software and makes it accessible to a wide range of users. This enables high-throughput calculations and makes it easier for non-specialists to enter computational materials science. However, increasing automation also poses threats that should be considered while interacting with automated procedures.
The talk "Automation in computational materials science" deals with the current state of automation in the field of computational materials science. It illustrates how automation can, for example, be used to speed up the search for new ferroelectric materials and spintronic materials. Furthermore, it lists current tools for automation and challenges in the field.
The Meticulous Approach: Fully traceable X-ray scattering data via a comprehensive lab methodology
(2021)
To find out whether experimental findings are real, you need to be able to repeat them. For a long time, however, papers and datasets did not necessarily include sufficient detail to accurately repeat experiments, contributing to a reproducibility crisis. It is here that the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration) tries to implement change, at least for small- and wide-angle X-ray scattering (SAXS/WAXS).
In the MOUSE project, we have combined a) a comprehensive laboratory workflow with b) a heavily modified, highly automated Xenocs Xeuss 2.0 instrument. This combination allows us to collect fully traceable scattering data with a well-documented data flow (akin to what is found at the more automated beamlines). With two full-time researchers, the lab collects and interprets thousands of datasets on hundreds of samples for dozens of projects per year, supporting many users along the entire process, from sample selection and preparation to the analysis of the resulting data.
While these numbers cannot hold a candle to those achieved by our hardworking compatriots at the synchrotron beamlines, the laboratory approach does allow us to continually modify and fine-tune the overall methodology. Over the last three years, we have thus incorporated FAIR principles, traceability, automated processing and data curation strategies, as well as a host of good scattering practices, into the MOUSE system. We have concomitantly expanded our purview as specialists to include increased responsibility for the entire scattering aspect of the resulting publications, to ensure full exploitation of the data quality whilst avoiding common pitfalls.
This talk will discuss the MOUSE project [1] as implemented to date, and will introduce foreseeable upgrades and changes. These upgrades include better pre-experiment sample scattering predictions to filter projects on the basis of their suitability, exploitation of the measurement database for detecting long-term changes and automatically flagging datasets, and enhancing MC fitting with sample scattering simulations for better matching of odd-shaped scatterers.
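To make "fully traceable data" concrete, the following is a minimal, hypothetical sketch (not the MOUSE implementation; all group and field names are invented for illustration) of how raw counts, instrument state and processing provenance can live together in one self-describing HDF5 file:

```python
# Hypothetical sketch of traceable data storage: raw data, instrument
# metadata and a provenance log in one HDF5 file, so every derived
# dataset can be traced back. Names are illustrative, not the MOUSE schema.
import h5py
import numpy as np
from datetime import datetime, timezone

detector_image = np.zeros((1024, 1024), dtype=np.int32)  # stand-in frame

with h5py.File("measurement_0001.h5", "w") as f:
    raw = f.create_group("raw")
    raw.create_dataset("detector_image", data=detector_image, compression="gzip")

    meta = f.create_group("instrument")
    meta.attrs["wavelength_nm"] = 0.1542           # Cu K-alpha (example)
    meta.attrs["sample_detector_distance_mm"] = 1500.0
    meta.attrs["configuration_id"] = "saxs_long"   # hypothetical label

    prov = f.create_group("provenance")
    prov.attrs["operator"] = "researcher_id"
    prov.attrs["timestamp_utc"] = datetime.now(timezone.utc).isoformat()
    prov.attrs["processing_pipeline"] = "pipeline v1.2 (example)"
```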
Automated bonding analysis software has been developed based on Crystal Orbital Hamilton Populations to facilitate high-throughput bonding analysis and machine-learning of bonding features. This work presents the software and discusses its applications to simple and complex materials such as GaN, NaCl, the oxynitrides XTaO2N (X=Ca, Ba, Sr) and Yb14Mn1Sb11.
Chemical bonding and coordination environments are crucial descriptors of material properties. They have previously been applied to creating chemical design guidelines and chemical heuristics, and they are being used as features in machine learning more and more frequently. I will discuss implementations and algorithms (ChemEnv and LobsterEnv) for identifying these coordination environments based on geometrical characteristics and quantum-chemical analysis of chemical bonds. I will demonstrate how these techniques helped in testing chemical heuristics like the Pauling rule and thereby improved our understanding of chemistry. I will also show how these tools can be used to create new design guidelines and a new understanding of chemistry. To use quantum-chemical bonding analysis on a large scale and for machine-learning approaches, fully automatic workflows and analysis tools have been developed. After presenting the capabilities of these tools, I will also point out how these developments relate to the general trend towards automation in the field of density-functional-based materials science.
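For the geometry-based side, a minimal sketch of identifying coordination environments with pymatgen's ChemEnv module might look like this; the input file name and cutoff values are assumptions for illustration:

```python
# Sketch: identify local coordination environments with pymatgen's
# ChemEnv module. Input file and cutoffs are illustrative assumptions.
from pymatgen.core import Structure
from pymatgen.analysis.chemenv.coordination_environments.coordination_geometry_finder import (
    LocalGeometryFinder,
)
from pymatgen.analysis.chemenv.coordination_environments.chemenv_strategies import (
    SimplestChemenvStrategy,
)
from pymatgen.analysis.chemenv.coordination_environments.structure_environments import (
    LightStructureEnvironments,
)

structure = Structure.from_file("POSCAR")  # any crystal structure file
lgf = LocalGeometryFinder()
lgf.setup_structure(structure=structure)

# Compute candidate environments, then reduce them with a simple strategy.
se = lgf.compute_structure_environments(maximum_distance_factor=1.41)
strategy = SimplestChemenvStrategy(distance_cutoff=1.4, angle_cutoff=0.3)
lse = LightStructureEnvironments.from_structure_environments(
    strategy=strategy, structure_environments=se
)
print(lse.coordination_environments[0])  # likely environments for site 0
```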
Whereas the characterization of nanomaterials using different analytical techniques is often highly automated and standardized, the sample preparation that precedes it causes a bottleneck in nanomaterial analysis, as it is performed manually. Usually, this pretreatment depends on the skills and experience of the analysts. Furthermore, adequate reporting of the sample preparation is often missing. In this overview, some solutions for techniques widely used in nano-analytics to overcome this problem are discussed. Two examples of sample preparation optimization by automation are presented, which demonstrate that this approach leads to increased analytical confidence. Our first example is motivated by the need to exclude human bias and focuses on the development of automation in sample introduction. To this end, a robotic system has been developed, which can prepare stable and homogeneous nanomaterial suspensions amenable to a variety of well-established analytical methods, such as dynamic light scattering (DLS), small-angle X-ray scattering (SAXS), field-flow fractionation (FFF) or single-particle inductively coupled plasma mass spectrometry (sp-ICP-MS). Our second example addresses biological samples, such as cells exposed to nanomaterials, which are still challenging for reliable analysis. An air–liquid interface has been developed for the exposure of biological samples to nanomaterial-containing aerosols. The system exposes transmission electron microscopy (TEM) grids under reproducible conditions, whilst also allowing characterization of the aerosol composition with mass spectrometry. Such an approach enables correlative measurements combining biological with physicochemical analysis. These case studies demonstrate that standardization and automation of sample preparation setups, combined with appropriate measurement processes and data reduction, are crucial steps towards more reliable and reproducible data.
Invited for this month’s cover are researchers from Bundesanstalt für Materialforschung und -prüfung (Federal Institute for Materials Research and Testing) in Germany, Friedrich Schiller University Jena, Université catholique de Louvain, University of Oregon, Science & Technology Facilities Council, RWTH Aachen University, Hoffmann Institute of Advanced Materials, and Dartmouth College. The cover picture shows a workflow for automatic bonding analysis with Python tools (green python). The bonding analysis itself is performed with the program LOBSTER (red lobster). The starting point is a crystal structure, and the results are automatic assessments of the bonding situation based on Crystal Orbital Hamilton Populations (COHP), including automatic plots and text outputs. Coordination environments and charges are also assessed. More information can be found in the Research Article by J. George, G. Hautier, and co-workers.
We created a workflow that fully automates bonding analysis using Crystal Orbital Hamilton Populations, which are bond-weighted densities of states. This enables understanding of crystalline material properties based on chemical bonding information. To facilitate data analysis and machine-learning research, our tools include automatic plots, automated text output, and output in machine-readable format.
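The automatic plots and text output mentioned above are generated by LobsterPy; a minimal sketch of its Python API might look like the following (the file paths are assumptions, and keyword names can differ between LobsterPy versions):

```python
# Sketch: automatic COHP-based bonding analysis with LobsterPy.
# Paths are assumptions; kwargs may vary across LobsterPy versions.
from lobsterpy.cohp.analyze import Analysis
from lobsterpy.cohp.describe import Description

analysis = Analysis(
    path_to_poscar="POSCAR",
    path_to_cohpcar="COHPCAR.lobster",
    path_to_icohplist="ICOHPLIST.lobster",
    path_to_charge="CHARGE.lobster",
    which_bonds="cation-anion",
)

# Machine-readable summary of the bonding situation...
summary = analysis.condensed_bonding_analysis

# ...and the automatic text description of the same analysis.
description = Description(analysis_object=analysis)
description.write_description()
```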
Measuring an X-ray scattering pattern is relatively easy, but measuring a steady stream of high-quality, useful patterns requires significant effort and good laboratory organization.
Such laboratory organization can help address the reproducibility crisis in science, and easily multiply the scientific output of a laboratory, while greatly elevating the quality of the measurements. We have demonstrated this for small- and wide-angle X-ray scattering in the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration).
With the MOUSE, we have combined a comprehensive and highly automated laboratory workflow with a heavily modified X-ray scattering instrument. This combination allows us to collect fully traceable scattering data, within a well-documented, FAIR-compliant data flow (akin to what is found at the more automated synchrotron beamlines). With two full-time researchers, our lab collects and interprets thousands of datasets, on hundreds of samples, for dozens of projects per year, supporting many users along the entire process from sample selection and preparation, to the analysis of the resulting data.
This talk will briefly introduce the foundations of X-ray scattering, present the MOUSE project, and highlight the proven utility of the methodology for materials science. Upgrades to the methodology will also be discussed, as well as possible avenues for transferring this holistic methodology to other instruments.
Civilization and modern societies would not be possible without manmade materials. Considering their production volumes, their supporting role in nearly all industrial processes, and the impact of their sourcing and production on the environment, metals and alloys are and will be of prominent importance for the clean energy transition. The focus of materials discovery must move to more specialized, application-tailored green alloys that outperform the legacy materials not only in performance but also in sustainability and resource efficiency. This white paper summarizes a joint Canadian-German initiative aimed at developing a materials acceleration platform (MAP) focusing on the discovery of new alloy families that will address this challenge. We call our initiative the “Build to Last Materials Acceleration Platform” (B2L-MAP) and present in this perspective our concept of a three-tiered self-driving laboratory that is composed of a simulation-aided pre-selection module (B2L-select), an artificial intelligence (AI)-driven experimental lead generator (B2L-explore), and an upscaling module for durability assessment (B2L-assess). The resulting tool will be used to identify and subsequently demonstrate novel corrosion-resistant alloys at scale for three key applications of critical importance to an offshore, wind-driven hydrogen plant (reusable electrical contacts, offshore infrastructure, and oxygen evolution reaction catalysts).
Chemical bonding and coordination environments are crucial descriptors of material properties. They have previously been applied to creating chemical design guidelines and chemical heuristics, and they are being used as features in machine learning more and more frequently. I will discuss implementations and algorithms (ChemEnv and LobsterEnv) for identifying these coordination environments based on geometrical characteristics and quantum-chemical analysis of chemical bonds. I will demonstrate how these techniques helped in testing chemical heuristics like the Pauling rule and thereby improved our understanding of chemistry. I will also show how these tools can be used to create new design guidelines and a new understanding of chemistry. To use quantum-chemical bonding analysis on a large scale and for machine-learning approaches, fully automatic workflows and analysis tools have been developed. After presenting the capabilities of these tools, I will also point out how these developments relate to the general trend towards automation in the field of density-functional-based materials science.
McSAS3
(2023)
McSAS3 is a refactored version of the original McSAS (see DOI 10.1107/S1600576715007347). This software fits scattering patterns to obtain size distributions without assumptions on the size distribution form. The refactored version has some neat features:
- Multiprocessing is included, spread out over as many cores as there are repetitions!
- Full state of the optimization is stored in an organized HDF5 state file.
- Histogramming is separate from optimization and a result can be re-histogrammed as many times as desired.
- Support for SasModels allows a wide range of models to be used
- If SasModels does not work (e.g. because of gcc compiler issues on Windows or Mac), an internal sphere model is supplied
- Simulated scattering data of a custom shape can also be used as a McSAS fitting model, so your choice of models is unlimited!
- 2D fitting also works.
Bonds and local atomic environments are crucial descriptors of material properties. They have been used to create design rules and heuristics and as features in machine learning of materials properties. Implementations and algorithms (e.g., ChemEnv and LobsterEnv) for identifying local atomic environments based on geometrical characteristics and quantum-chemical bonding analysis are nowadays available. Fully automatic workflows and analysis tools have been developed to use quantum-chemical bonding analysis on a large scale. The lecture will demonstrate how our tools, which assess local atomic environments and perform automatic bonding analysis, help to develop new machine learning models and a new intuitive understanding of materials. Furthermore, other recent workflow contributions to the Materials Project software infrastructure (pymatgen, atomate2) related to phonons and machine-learning potentials will be discussed.
Bonds and local atomic environments are crucial descriptors of material properties. They have been used to create design rules and heuristics for materials. More and more frequently, they are used as features in machine learning. Implementations and algorithms (e.g., ChemEnv and LobsterEnv) for identifying these local atomic environments based on geometrical characteristics and quantum-chemical bonding analysis are nowadays available. Fully automatic workflows and analysis tools have been developed to use quantum-chemical bonding analysis on a large scale and for machine-learning approaches. The latter relates to a general trend toward automation in density-functional-based materials science. The lecture will demonstrate how our tools, which assess local atomic environments, helped to test and develop heuristics, design rules and an intuitive understanding of materials.
Understanding the chemistry and nature of individual chemical bonds is essential for materials design. Bonding analysis via the LOBSTER software package has provided valuable insights into the properties of materials for thermoelectric and catalysis applications. Thus, the data generated from bonding analysis become an invaluable asset that could be utilized as features in large-scale data analysis and machine learning of material properties. However, no systematic studies exist that have conducted high-throughput materials simulations to curate and validate bonding data obtained from LOBSTER. Here we present an approach to constructing such a large database of quantum-chemical bonding information.
While the synthesis of Metal-Organic Framework (MOF) particles can be as easy as adding two solutions together, reproducibly obtaining the same particles, time and time again, is a lot harder. As laboratory-independent reproducibility is a cornerstone of the scientific method, we must put effort into finding and controlling all necessary parameters to achieve this.
An open-source Python/EPICS-controlled robotic platform (see picture) was adapted to systematically explore this for a 20 ml MOF synthesis of the zeolitic imidazolate framework-8 (ZIF-8) chemistry in methanol. Parameters that were explored included: 1) addition sequence, 2) addition speeds, 3) reaction times, 4) source chemicals, 5) stirring speeds, 6) stirring bar choice, 7) starting concentrations, and 8) workup methodologies. It was found that, by controlling these parameters, highly reproducible syntheses are obtained. Moreover, varying these parameters alone led to dramatic differences in the volume-weighted particle size means, exceeding an order of magnitude, as investigated with our in-house X-ray scattering instrument [1].
The syntheses are thoroughly documented in an automated fashion, and the synthesis libraries as well as the analysis libraries will become available in batches soon. With this library, it will be possible to extract previously unknown correlations, and other laboratories will be able to produce specific particles by following the exact procedures for the particles of their choice.
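Automated documentation of this kind boils down to recording every explored parameter in machine-readable form. A hypothetical sketch, covering the eight parameter groups listed above (field names and example values are illustrative, not the platform's actual schema), could look like:

```python
# Hypothetical sketch: machine-readable record of one ZIF-8 synthesis,
# covering the eight explored parameter groups. Field names and values
# are illustrative, not the platform's actual schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class SynthesisRecord:
    addition_sequence: list[str]    # 1) order of reagent addition
    addition_speed_ml_min: float    # 2) dosing speed
    reaction_time_min: float        # 3) reaction time
    source_chemical_lot: str        # 4) supplier/lot identifier
    stirring_speed_rpm: int         # 5) stirring speed
    stirring_bar: str               # 6) stirring bar choice
    start_concentration_mm: float   # 7) starting concentration [mM]
    workup: str                     # 8) workup methodology

record = SynthesisRecord(
    addition_sequence=["zinc_nitrate", "2-methylimidazole"],
    addition_speed_ml_min=5.0,
    reaction_time_min=60.0,
    source_chemical_lot="lot-A123",
    stirring_speed_rpm=400,
    stirring_bar="cross-shaped, 20 mm",
    start_concentration_mm=25.0,
    workup="3x methanol wash, centrifugation",
)
print(json.dumps(asdict(record), indent=2))  # one entry of a synthesis library
```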
A deep insight into the chemistry and nature of individual chemical bonds is essential for understanding materials. Bonding analysis is expected to provide important features for large-scale data analysis and machine learning of material properties. Such information on chemical bonds can be calculated using the LOBSTER (www.cohp.de) software package, which post-processes data from modern density functional theory computations by projecting plane-wave-based wave functions onto a local atomic orbital basis. We have performed bonding analysis on 1520 compounds (insulators and semiconductors) using a fully automated workflow combining the VASP and LOBSTER software packages. We then automatically evaluated the data with LobsterPy (https://github.com/jageo/lobsterpy) and provide the results as a database. The projected densities of states and bonding indicators are benchmarked against VASP projections and available heuristics, respectively. Lastly, we illustrate the predictive power of bonding descriptors by constructing a machine-learning model for phononic properties, which shows an improvement in prediction accuracy of 27% (mean absolute error) compared to a benchmark model that differs only in not relying on any quantum-chemical bonding features.
A talk about my recent research on data-driven chemical understanding with geometrical and quantum-chemical bonding analysis.
This database consists of bonding data computed using LOBSTER for 1520 solid-state compounds (insulators and semiconductors). It contains two kinds of JSON files. The smaller, lightweight JSON files hold summarized bonding information for each compound. The files are named according to their ID numbers in the Materials Project database.
Here we also provide the larger computational-data JSON files for 700 of the compounds. These files contain all important LOBSTER computation output data stored as dictionaries.
This database consists of bonding data computed using LOBSTER for 1520 solid-state compounds (insulators and semiconductors). The files are named according to their ID numbers in the Materials Project database.
Here we provide the larger computational-data JSON files for the remaining 820 compounds. These files contain all important LOBSTER computation output data stored as dictionaries.
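Because the records are plain JSON files named by Materials Project IDs, consuming the database reduces to a loop like the following (the folder name is a hypothetical placeholder):

```python
# Sketch: load the summarized bonding-information JSON files into one
# dictionary keyed by Materials Project ID. Folder name is hypothetical.
import json
from pathlib import Path

records = {}
for path in sorted(Path("lightweight_jsons").glob("mp-*.json")):
    with open(path, encoding="utf-8") as fh:
        records[path.stem] = json.load(fh)  # e.g. records["mp-149"]

print(f"Loaded {len(records)} compounds")
```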
Understanding and Machine Learning of Materials Properties with Quantum-Chemical Bonding Analysis
(2023)
Bonds and local atomic environments are crucial descriptors of material properties. They have been used to create design rules for materials and are used as features in machine learning of material properties. This talk will show how our recently developed tools, which automatically perform quantum-chemical bonding analysis and enable the study of chemical bonds and local atomic environments, accelerate and improve the development of such heuristics and machine-learned models for materials properties.