Data analysis of SAS measurements has been dominated by the classical curve fitting approach. This method finds optimal parameters of a scattering model composed of analytical expressions. SASfit represents such a classical curve fitting toolbox: it is one of the mature programs for small-angle scattering data analysis and has been available and used for many years. The latest developments [1] will be extended by improving the interoperability of the extensive database of models with third-party analysis software. An updated format of model definitions is also presented, which allows model function plug-ins to be used with the Python language.
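As a minimal sketch of what such classical curve fitting looks like in Python (placeholder data and function names, not the actual SASfit plug-in interface), a simple homogeneous-sphere model can be fitted with SciPy:

```python
# A sketch of classical curve fitting with an analytical sphere model
# (placeholder data; not the SASfit plug-in interface).
import numpy as np
from scipy.optimize import curve_fit

def sphere_intensity(q, scale, radius, background):
    """Scattering intensity of homogeneous spheres with a single radius."""
    qr = q * radius
    form = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
    return scale * form**2 + background

# synthetic measurement for the sketch: q in 1/nm, radius in nm
q = np.linspace(0.05, 3.0, 200)
rng = np.random.default_rng(0)
i_meas = sphere_intensity(q, scale=1.0, radius=5.0, background=0.01)
i_meas *= 1.0 + 0.02 * rng.normal(size=q.size)

# least-squares fit of the model parameters to the data
popt, pcov = curve_fit(sphere_intensity, q, i_meas, p0=[1.0, 4.0, 0.0])
print("fitted scale, radius, background:", popt)
```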
To complement the classical curve fitting method, the user-friendly, open-source Monte Carlo regression package McSAS was developed. Most importantly, the form-free Monte Carlo approach of McSAS means that it is not necessary to impose any further mathematical restrictions on the parameter distribution. Future developments include separating the core optimization from the GUI (allowing 'headless' integration), as well as parallel computing, which reduces the computing time in proportion to the number of available computing cores. The headless mode is demonstrated with an example of operation within interactive programming environments such as a Jupyter notebook.
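The sketch below shows the underlying idea in the spirit of McSAS (it is not the McSAS code or API): the measured intensity is modelled as a sum of many sphere contributions whose radii are randomly swapped and kept only if the fit improves, so that the resulting radii histogram approximates a form-free size distribution.

```python
# A form-free Monte Carlo regression in the spirit of McSAS (sketch only,
# not the McSAS code or API).
import numpy as np

def sphere(q, r):
    qr = q * r
    return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

def mc_fit(q, i_meas, sigma, n_contrib=200, n_iter=20000,
           r_min=1.0, r_max=50.0, seed=1):
    rng = np.random.default_rng(seed)
    radii = rng.uniform(r_min, r_max, n_contrib)
    model = sphere(q[:, None], radii[None, :]).sum(axis=1)

    def chi2(m):
        # analytical overall scaling factor minimising the residual
        s = np.sum(m * i_meas / sigma**2) / np.sum(m**2 / sigma**2)
        return np.sum(((s * m - i_meas) / sigma) ** 2)

    best = chi2(model)
    for _ in range(n_iter):
        k = rng.integers(n_contrib)           # pick one contribution
        r_new = rng.uniform(r_min, r_max)     # propose a new radius
        trial = model - sphere(q, radii[k]) + sphere(q, r_new)
        c = chi2(trial)
        if c < best:                          # keep only improvements
            best, model, radii[k] = c, trial, r_new
    return radii, best
```

A headless run inside a Jupyter notebook would then only need a call such as mc_fit(q, i_meas, sigma) and a histogram of the returned radii to obtain the form-free size distribution.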
The promising results of Monte Carlo based data analysis for determining form-free parameter distributions motivated the evaluation of the method with dynamic light scattering (DLS) data. For this purpose, the method was adapted for analyzing correlation curves such as those from multi-angle DLS measurements. The development of McDLS intends to overcome limitations of existing methods in reliably determining the modality of size distributions. An example of Monte Carlo based data analysis of multimodal DLS measurements will be presented.
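For illustration, the sketch below (assumed solvent, wavelength and size values) builds the multi-angle correlation model such an analysis has to reproduce: the field correlation function is a weighted sum of exponential decays whose rates follow from the Stokes-Einstein relation at each scattering vector.

```python
# Multi-angle DLS correlation model for a discrete size distribution
# (assumed values for water at 25 degC and a 632.8 nm laser).
import numpy as np

kB, T, eta = 1.380649e-23, 298.15, 0.89e-3    # J/K, K, Pa*s
n_medium, wavelength = 1.33, 632.8e-9         # refractive index, wavelength in m

def q_of(angle_deg):
    return 4.0 * np.pi * n_medium / wavelength * np.sin(np.radians(angle_deg) / 2.0)

def g2(tau, radii, weights, angle_deg, beta=0.8):
    """Siegert relation: g2 = 1 + beta * |g1|^2 for a sum of exponential decays."""
    d = kB * T / (6.0 * np.pi * eta * np.asarray(radii))   # Stokes-Einstein
    gamma = d * q_of(angle_deg) ** 2                        # decay rates
    g1 = np.sum(np.asarray(weights) * np.exp(-np.outer(tau, gamma)), axis=1)
    return 1.0 + beta * g1 ** 2

tau = np.logspace(-6, 0, 200)                 # lag times in s
# bimodal example: 20 nm and 150 nm radii with equal weight at three angles
curves = {a: g2(tau, [20e-9, 150e-9], [0.5, 0.5], a) for a in (30, 90, 150)}
```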
The stability of materials is a very important property for materials discovery. In recent years, we have explored several approaches for advancing the prediction of the stability of crystals (e.g., classical chemical heuristics and machine-learned interatomic potentials). For example, we have assessed a famous chemical heuristic – the Pauling rules – regarding their usefulness for structure prediction. They are of limited predictive power: rules 2 to 5 only work for 13 % of all tested oxides. Furthermore, we have shown that machine-learned interatomic potentials can be used to accurately predict phonon properties of a range of silicon allotropes and, therefore, they can also be used to assess the dynamic stability of materials. To do so, we have developed new schemes to build reference databases for machine-learned interatomic potentials.
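As a simplified illustration of how such a heuristic can be turned into a fast check (illustrative thresholds and ionic radii, not the analysis code used in the study), Pauling's first rule predicts the cation coordination number from the cation/anion radius ratio:

```python
# Pauling's first rule as a fast heuristic: predict the cation coordination
# number from the radius ratio (illustrative thresholds and radii).
def predicted_coordination(r_cation, r_anion):
    ratio = r_cation / r_anion
    if ratio < 0.155:
        return 2    # linear
    if ratio < 0.225:
        return 3    # triangular
    if ratio < 0.414:
        return 4    # tetrahedral
    if ratio < 0.732:
        return 6    # octahedral
    if ratio < 1.0:
        return 8    # cubic
    return 12       # close packed

# example: Mg2+ (approx. 0.72 A) surrounded by O2- (approx. 1.40 A)
print(predicted_coordination(0.72, 1.40))   # -> 6 (octahedral)
```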
Developments in density functional theory (DFT) calculations, their automation and therefore easier access to materials data have enabled ab initio high-throughput searches for new materials for numerous applications. These studies open up exciting opportunities to find new materials much faster than would be possible based on experimental work alone. However, performing density functional theory calculations for several thousand materials can still be very time consuming. The use of, for example, faster chemical heuristics and machine-learned interatomic potentials would make it possible to consider a much larger number of candidate materials. In addition to DFT-based high-throughput searches, the seminar will discuss two possible ways to accelerate high-throughput searches.
Using data analysis on the structures and coordination environments of 5000 oxides, we were able to investigate a chemical heuristic – the famous Pauling rules – regarding its usefulness for the fast prediction of stable materials.
We have also investigated how machine-learned interatomic potentials can be used to accelerate the prediction of (dynamically) stable materials. The use of these potentials makes vibrational properties accessible much faster than DFT-based calculations do. Our results based on a newly developed potential for silicon allotropes showed excellent agreement with DFT reference data (agreement of the frequencies within 0.1-0.2 THz).
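A minimal sketch of the underlying checks (placeholder frequency arrays, not the actual band-structure data): a structure counts as dynamically stable if no phonon mode is significantly imaginary, and the potential can be benchmarked by its maximum frequency deviation from DFT.

```python
# Dynamic stability check and MLIP-vs-DFT comparison on phonon frequencies
# (placeholder arrays, frequencies in THz).
import numpy as np

def dynamically_stable(frequencies_thz, tol=0.05):
    """Stable if no mode is significantly imaginary (negative frequency)."""
    return float(np.min(frequencies_thz)) > -tol

def max_deviation(freq_mlip_thz, freq_dft_thz):
    return float(np.max(np.abs(np.asarray(freq_mlip_thz) - np.asarray(freq_dft_thz))))

freq_dft = np.array([1.2, 3.5, 7.8, 12.1, 14.9])
freq_mlip = freq_dft + np.random.default_rng(2).normal(0.0, 0.1, freq_dft.size)
print(dynamically_stable(freq_mlip), max_deviation(freq_mlip, freq_dft))
```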
In addition, we have successfully used high-throughput calculations in the search for new candidate materials for spintronic applications and ferroelectrics.
Active Thermography (2016)
Active thermography is a nondestructive testing method for the identification of near-surface defects and signs of early deterioration. The presentation explains the potential and limits of the method and the equipment for application to concrete structures. Advanced data analysis can improve the imaging quality of the measurement. The method is usually applied once to analyze the current condition. If applied periodically, it can be used to observe a deterioration process.
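One common advanced data-analysis step, sketched below under the assumption of a recorded thermogram sequence (an illustration, not necessarily the exact processing used here), is pulse-phase thermography: a pixel-wise FFT over time yields phase images that are less sensitive to uneven heating than raw temperature images.

```python
# Pulse-phase thermography sketch: pixel-wise FFT over the thermogram
# sequence, phase image at one frequency bin (array shapes are assumptions).
import numpy as np

def phase_image(thermograms, frequency_bin=1):
    """thermograms: array of shape (n_frames, height, width) of surface temperatures."""
    spectrum = np.fft.rfft(thermograms, axis=0)     # FFT along the time axis
    return np.angle(spectrum[frequency_bin])        # phase map at the chosen bin

frames = np.random.default_rng(3).normal(size=(256, 64, 64))   # placeholder sequence
phi = phase_image(frames, frequency_bin=1)                     # (64, 64) phase map
```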
Phage display is used to find peptides that bind specifically to polypropylene (PP) surfaces. PP is one of the most commonly used plastics in the world; millions of tons are produced every year. PP binders are of particular interest because gluing or printing on PP has so far been challenging due to its low surface energy. A phage display protocol for PP was developed, followed by Next Generation DNA Sequencing of the whole phage library. Data analysis of millions of sequences yielded promising peptide candidates, which were synthesized as PEG conjugates. Fluorescence-based adsorption-elution experiments show high adsorption on PP for several sequences.
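A sketch of this kind of candidate ranking (hypothetical file names and formats): peptides are counted in the naive library and after selection on PP, and candidates are ranked by their enrichment.

```python
# Enrichment ranking of peptides from sequencing counts
# (hypothetical input files with one peptide sequence per line).
from collections import Counter

def read_peptides(path):
    """Count peptide occurrences; in practice these come from translated NGS reads."""
    with open(path) as handle:
        return Counter(line.strip() for line in handle if line.strip())

def enrichment_ranking(naive, selected, pseudo=1.0):
    total_naive = sum(naive.values())
    total_selected = sum(selected.values())
    scores = {
        pep: ((selected[pep] + pseudo) / total_selected)
             / ((naive.get(pep, 0) + pseudo) / total_naive)
        for pep in selected
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# usage sketch:
# ranking = enrichment_ranking(read_peptides("library.txt"), read_peptides("selected_pp.txt"))
```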
For reaction monitoring and process control using NMR instruments, in particular, the data need to be corrected in real time for common effects after acquisition of the FID, which requires fast interfaces and automated methods. When it comes to NMR data evaluation under industrial process conditions, the shape of signals can change drastically due to nonlinear effects. Additionally, the multiplet structure becomes more dominant because of the comparably low field strengths, which results in overlapping of multiple signals. However, the structural and quantitative information is still present but needs to be extracted by applying predictive models.
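A minimal sketch of such post-acquisition processing with NumPy (parameter values are assumptions, not the actual pipeline): exponential apodization, zero filling, Fourier transform and a simple zero-order phase correction applied to a raw FID.

```python
# Basic FID processing with NumPy: exponential apodization, zero filling,
# Fourier transform and zero-order phase correction (parameters are assumptions).
import numpy as np

def process_fid(fid, dwell_time, line_broadening_hz=1.0, phase0_rad=0.0, zero_fill=2):
    t = np.arange(fid.size) * dwell_time
    apodized = fid * np.exp(-np.pi * line_broadening_hz * t)    # exponential window
    padded = np.pad(apodized, (0, fid.size * (zero_fill - 1)))  # zero filling
    spectrum = np.fft.fftshift(np.fft.fft(padded))              # to the frequency domain
    return (spectrum * np.exp(-1j * phase0_rad)).real           # phased real spectrum
```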
We present a range of approaches for automated spectra analysis, moving from a statistical approach (i.e., Partial Least Squares Regression) to physically motivated spectral models (i.e., Indirect Hard Modeling). By using the benefits of traditional qNMR experiments, data analysis models can meet the demands of the PAT (Process Analytical Technology) community regarding low calibration effort or calibration-free methods, fast adaptation to new reactants or derivatives, and robust automation schemes.
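A hedged sketch of the statistical end of that range, using scikit-learn with synthetic array shapes (Indirect Hard Modeling would instead fit a physically motivated peak model to each spectrum):

```python
# PLS calibration mapping spectra to concentrations (synthetic shapes).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X_train = rng.normal(size=(40, 2048))            # 40 calibration spectra
y_train = rng.uniform(0.0, 1.0, size=(40, 1))    # reference concentrations

pls = PLSRegression(n_components=3)
pls.fit(X_train, y_train)
y_pred = pls.predict(X_train[:5])                # concentrations for new spectra
```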
Currently, research in chemical manufacturing is moving towards flexible plug-and-play approaches focusing on modular plants, capable of on-demand production at small scale with short down-times between individual campaigns. This approach allows for efficient use of hardware, a faster optimization of the process conditions and, thus, an accelerated introduction of new products to the market [1]. Driven mostly by the search for chemical syntheses under biocompatible conditions, so-called “click” chemistry rapidly became a growing field of research. The resulting simple one-pot reactions have so far only scarcely been accompanied by an adequate optimization via comparably straightforward and robust analysis techniques. Here we report on a fast and reliable calibration-free online high-field NMR monitoring approach for technical mixtures. It combines a versatile fluidic system, continuous-flow measurement with a time interval of 20 s per spectrum, and a robust, automated algorithm to interpret the obtained data. All spectra were acquired using a 500 MHz NMR spectrometer (Varian) with a dual-band flow probe in which 1/16 inch polymer tubing serves as a flow cell. Single-scan 1H spectra were recorded with an acquisition time of 5 s and a relaxation delay of 15 s.
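A sketch of the kind of automated interpretation step involved (variable names and ppm regions are hypothetical): a signal region of each spectrum in the 20 s series is integrated and converted into a relative concentration trend.

```python
# Relative concentration trend from integrated signal regions
# (hypothetical region limits in ppm).
import numpy as np

def integrate_region(ppm, spectrum, low_ppm, high_ppm):
    mask = (ppm >= low_ppm) & (ppm <= high_ppm)
    return abs(np.trapz(spectrum[mask], ppm[mask]))

def concentration_trend(ppm, spectra, region, reference_region):
    """Ratio of a species signal to a reference signal for each spectrum in the series."""
    return np.array([
        integrate_region(ppm, s, *region) / integrate_region(ppm, s, *reference_region)
        for s in spectra
    ])
```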
A large amount of data and information is collected in the field of non-destructive testing (NDT) in civil engineering. The weakly structured data are usually evaluated with regard to specific testing tasks (e.g. geometry determination, damage localization, quality assurance). While the data offer great economic potential, i.e. to support construction planning, monitoring and maintenance processes, the evaluation is manual and case-by-case and therefore too inefficient for broader applications. We present recent visions and approaches for how these large amounts of data should be handled in the future and how we aim to make the acquired knowledge accessible to our stakeholders. Building on initiatives in materials research, we stress the importance of further research in the field of semantic data integration and, in particular, motivate why an ontology is needed for the area of NDT in civil engineering.
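As a hedged sketch of what a first step towards such an ontology could look like, the example below builds a few RDF triples with rdflib; the class and property names are invented for illustration and do not represent an existing standard vocabulary.

```python
# Minimal RDF sketch with rdflib; class and property names are invented.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

NDT = Namespace("https://example.org/ndt-ce#")
g = Graph()
g.bind("ndt", NDT)

# minimal schema: an inspection method applied to a structural element
g.add((NDT.ActiveThermography, RDF.type, RDFS.Class))
g.add((NDT.BridgeDeck, RDF.type, RDFS.Class))
g.add((NDT.inspects, RDF.type, RDF.Property))

# one measurement instance linking method, object and finding
g.add((NDT.measurement_001, RDF.type, NDT.ActiveThermography))
g.add((NDT.deck_A, RDF.type, NDT.BridgeDeck))
g.add((NDT.measurement_001, NDT.inspects, NDT.deck_A))
g.add((NDT.measurement_001, NDT.finding, Literal("near-surface delamination")))

print(g.serialize(format="turtle"))
```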