For reaction monitoring and process control using NMR instruments in particular, the data need to be corrected in real time for common effects after acquisition of the FID, using fast interfaces and automated methods. In NMR data evaluation under industrial process conditions, the shape of signals can change drastically due to nonlinear effects. Additionally, the multiplet structure becomes more dominant because of the comparably low field strengths, which results in overlapping signals. The structural and quantitative information is still present, but it needs to be extracted by applying predictive models.
We present a range of approaches for automated spectral analysis, moving from statistical approaches (i.e., Partial Least Squares Regression) to physically motivated spectral models (i.e., Indirect Hard Modeling). By exploiting the benefits of traditional qNMR experiments, data analysis models can meet the demands of the Process Analytical Technology (PAT) community regarding low calibration effort or calibration-free methods, fast adaptation to new reactants or derivatives, and robust automation schemes.
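As a rough illustration of the statistical end of this spectrum, a PLS calibration on digitized spectra might look as follows in Python; this is a minimal sketch with placeholder data, not the authors' implementation:

```python
# Minimal PLS calibration sketch: `spectra` and `conc` are random
# placeholders standing in for real 1H NMR spectra and reference
# concentrations (e.g. from gravimetric sample preparation).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.random((40, 2048))   # n_samples x n_spectral_points
conc = rng.random(40)              # analyte concentration per sample

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, spectra, conc, cv=5,
                         scoring="neg_root_mean_squared_error")
print("cross-validated RMSE:", -scores.mean())

pls.fit(spectra, conc)
predicted = pls.predict(spectra[:1])   # prediction for a new spectrum
```

The number of latent components (here 5) is the main tuning knob and would in practice be chosen by cross-validation as well.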
Current research in chemical manufacturing is moving towards flexible plug-and-play approaches focusing on modular plants, capable of producing small batches on demand with short downtimes between individual campaigns. This approach allows for efficient use of hardware, faster optimization of process conditions, and thus an accelerated introduction of new products to the market [1]. Driven mostly by the search for chemical syntheses under biocompatible conditions, so-called "click" chemistry has rapidly become a growing field of research. The resulting simple one-pot reactions are so far only scarcely accompanied by adequate optimization via comparably straightforward and robust analysis techniques. Here we report on a fast and reliable calibration-free online high-field NMR monitoring approach for technical mixtures. It combines a versatile fluidic system, continuous-flow measurement with a time interval of 20 s per spectrum, and a robust, automated algorithm to interpret the obtained data. All spectra were acquired using a 500 MHz NMR spectrometer (Varian) with a dual-band flow probe, with a 1/16 inch polymer tubing serving as the flow cell. Single-scan 1H spectra were recorded with an acquisition time of 5 s and a relaxation delay of 15 s.
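To make the monitoring idea concrete, below is a minimal sketch of how conversion-time profiles could be extracted from such a spectral time series by region integration; the ppm windows and the equal-proton assumption are illustrative, not the published algorithm:

```python
# Illustrative sketch: tracking conversion from a series of 1H spectra
# by integrating fixed spectral regions. The ppm windows are arbitrary
# placeholders and would be set per reaction system.
import numpy as np

def integrate(ppm, spectrum, lo, hi):
    """Trapezoidal integral of a spectral region between lo and hi ppm."""
    mask = (ppm >= lo) & (ppm <= hi)
    x, y = ppm[mask], spectrum[mask]
    return np.abs(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))

def conversion(ppm, spectrum, reactant=(6.0, 6.4), product=(7.8, 8.2)):
    """Fractional conversion, assuming equal proton counts per signal."""
    a = integrate(ppm, spectrum, *reactant)
    p = integrate(ppm, spectrum, *product)
    return p / (a + p)

# One value every 20 s, matching the flow experiment's time resolution:
# profile = [conversion(ppm, s) for s in spectra_time_series]
```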
Phage display is used to find peptides that bind specifically to polypropylene (PP) surfaces. PP is one of the most commonly used plastics in the world; millions of tons are produced every year. PP binders are of particular interest because gluing or printing on PP has so far been challenging due to its low surface energy. A phage display protocol for PP was developed, followed by next-generation DNA sequencing of the whole phage library. Data analysis of millions of sequences yields promising peptide candidates, which were synthesized as PEG conjugates. Fluorescence-based adsorption-elution experiments show high adsorption on PP for several sequences.
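A minimal sketch of the enrichment-counting step of such an analysis is given below; the FASTQ file name, insert position, and insert length are assumptions, not the published pipeline:

```python
# Hedged sketch: ranking peptide candidates by frequency in the
# sequenced phage library. Insert coordinates are placeholders.
from collections import Counter
from Bio import SeqIO

def top_candidates(fastq_path, insert_start=0, insert_len=36, n=10):
    counts = Counter()
    for record in SeqIO.parse(fastq_path, "fastq"):
        insert = record.seq[insert_start:insert_start + insert_len]
        peptide = str(insert.translate())
        if "*" not in peptide:          # skip reads containing stop codons
            counts[peptide] += 1
    return counts.most_common(n)        # most enriched = most promising

# print(top_candidates("pp_phage_library.fastq"))
```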
Data analysis of SAS measurements has been dominated by the classical curve-fitting approach, which finds optimal parameters of a scattering model composed of analytical expressions. SASfit represents such a classical curve-fitting toolbox: it is one of the mature programs for small-angle scattering data analysis and has been available and in use for many years. The latest developments [1] will be extended by improving the interoperability of its extensive database of models with third-party analysis software. An updated format of model definitions is also presented, which allows model function plug-ins to be written in the Python language.
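For illustration, a Python model function for a homogeneous sphere, the kind of analytical expression such a plug-in could wrap, might look like the sketch below; the actual SASfit plug-in interface is not shown and may differ:

```python
# Sphere form factor intensity, a standard analytical SAS model.
import numpy as np

def sphere_intensity(q, radius, contrast=1.0):
    """I(q) of a homogeneous sphere of given radius (dilute limit)."""
    qr = np.asarray(q) * radius
    volume = 4.0 / 3.0 * np.pi * radius**3
    amplitude = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
    return (contrast * volume * amplitude) ** 2

q = np.logspace(-2, 0, 200)              # scattering vector, 1/nm
intensity = sphere_intensity(q, radius=5.0)
```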
To complement the classical curve-fitting method, the user-friendly open-source Monte Carlo regression package McSAS was developed. Most importantly, the form-free Monte Carlo approach of McSAS means that no further mathematical restrictions have to be imposed on the parameter distribution. Future developments include separating the core optimization from the GUI (allowing 'headless' integration), as well as parallel computing, which reduces the computing time in proportion to the number of available computing cores. The headless mode is demonstrated by an example of operation within interactive programming environments such as a Jupyter notebook.
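The core idea can be sketched as follows; this is a conceptual toy version of a form-free Monte Carlo fit, not McSAS code: a population of sphere contributions is perturbed at random, and moves are kept only when they improve the fit, so the surviving radii sample the size distribution without an assumed functional form:

```python
# Toy form-free Monte Carlo fit of sphere radii to SAS data.
import numpy as np

def sphere_i(q, r):
    qr = q * r
    v = 4 / 3 * np.pi * r**3
    return (v * 3 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

def mc_fit(q, i_obs, sigma, n_contrib=200, steps=20000, rmin=1, rmax=50):
    rng = np.random.default_rng(1)
    radii = rng.uniform(rmin, rmax, n_contrib)
    model = sum(sphere_i(q, r) for r in radii)
    scale = i_obs.sum() / model.sum()          # crude overall scaling
    chi2 = (((i_obs - scale * model) / sigma) ** 2).mean()
    for _ in range(steps):
        k = rng.integers(n_contrib)            # pick one contribution
        r_new = rng.uniform(rmin, rmax)        # propose a new radius
        trial = model - sphere_i(q, radii[k]) + sphere_i(q, r_new)
        scale = i_obs.sum() / trial.sum()
        chi2_new = (((i_obs - scale * trial) / sigma) ** 2).mean()
        if chi2_new < chi2:                    # keep improving moves only
            radii[k], model, chi2 = r_new, trial, chi2_new
    return radii    # the accepted radii sample the size distribution
```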
The promising results of Monte Carlo based data analysis for determining form-free parameter distributions motivated an evaluation of the method on dynamic light scattering (DLS) data. For this purpose, the method was adapted to analyzing correlation curves, such as those obtained from multi-angle DLS measurements. The development of McDLS aims to overcome the limitations of existing methods in reliably determining the modality of size distributions. An example of Monte Carlo based data analysis of multimodal DLS measurements will be presented.
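For context, the forward model that such an analysis inverts can be sketched as follows, assuming dilute spheres in water and the Stokes-Einstein relation; all sample parameters below are placeholders:

```python
# Field correlation function g1(tau) for a bimodal sphere population.
import numpy as np

KB, T, ETA = 1.380649e-23, 293.15, 1.0e-3   # J/K, K, Pa*s (water, 20 C)

def g1(tau, q, radii, weights):
    """g1 for a mixture of sphere sizes at scattering vector q (1/m)."""
    d = KB * T / (6 * np.pi * ETA * np.asarray(radii))  # diffusion coeffs
    gamma = d * q**2                                    # decay rates
    w = np.asarray(weights) / np.sum(weights)
    return np.sum(w[:, None] * np.exp(-np.outer(gamma, tau)), axis=0)

tau = np.logspace(-7, 0, 300)                # lag times in s
curve = g1(tau, q=2.0e7, radii=[5e-9, 50e-9], weights=[0.7, 0.3])
```

Because the decay rates scale with q², measuring at multiple angles constrains the size distribution far better than a single-angle curve.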
This poster deals with improvements and characterization of small-angle scattering limitations, by looking at the trifecta of data collection and uncertainty propagation, data analysis methodologies, and real-world tests. It is found that, with appropriate care and instrumentation, accuracies of 1% on mean nanomaterial sizes, and 10% on the size distribution width as well as the volume fraction, can be achieved.
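As one small ingredient of the uncertainty-propagation part, Poisson counting uncertainty propagated through the azimuthal average of one q-bin might be handled as sketched below; this is an assumption-laden toy, not the full correction pipeline:

```python
# Poisson counting uncertainty of an azimuthally averaged q-bin.
import numpy as np

def average_bin(counts):
    """Mean intensity and its standard error for one q-bin of pixels."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    # Poisson statistics: the variance of each pixel equals its count,
    # so var(mean) = sum(counts) / n**2.
    sigma = np.sqrt(counts.sum()) / counts.size
    return mean, sigma
```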
A large amount of data and information is collected in the field of non-destructive testing (NDT) in civil engineering. These weakly structured data are usually evaluated with regard to specific testing tasks (e.g. geometry determination, damage localization, quality assurance). While the data offer great economic potential, e.g. to support construction planning, monitoring, and maintenance processes, the evaluation is manual and case-by-case, and therefore too inefficient for broader applications. We present recent visions and approaches for how these large amounts of data should be handled in the future and how we aim to make the acquired knowledge accessible to our stakeholders. Building on initiatives in materials research, we stress the importance of further research in the field of semantic data integration and, in particular, motivate why an ontology is needed for the area of NDT in civil engineering.
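To illustrate what a formalized NDT vocabulary could look like, here is a minimal rdflib sketch; the namespace and class names are invented for illustration and do not represent an existing BAM ontology:

```python
# Hypothetical fragment of an NDT-in-civil-engineering vocabulary.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

NDT = Namespace("https://example.org/ndt#")   # placeholder namespace
g = Graph()
g.bind("ndt", NDT)

g.add((NDT.NDTMethod, RDF.type, RDFS.Class))
g.add((NDT.TestingTask, RDF.type, RDFS.Class))
g.add((NDT.UltrasonicEcho, RDFS.subClassOf, NDT.NDTMethod))
g.add((NDT.UltrasonicEcho, RDFS.label, Literal("ultrasonic echo")))
# Link a method to a testing task it can address:
g.add((NDT.UltrasonicEcho, NDT.suitableFor, NDT.GeometryDetermination))

print(g.serialize(format="turtle"))
```

Encoding such relations explicitly is what would let measurement data be queried across testing tasks rather than evaluated case-by-case.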
Introduction to SAXS (2020)
How much do we, the small-angle scatterers, influence the results of an investigation? What uncertainty do we add by our human diversity in thoughts and approaches, and is this significant compared to the uncertainty from the instrumental measurement factors?
After our previous Round Robin on data collection, we know that many laboratories can collect reasonably consistent small-angle scattering data on easy samples [1]. To investigate the next, human component, we compiled four existing datasets from globular (roughly spherical) scatterers, each exhibiting a common complication, and asked the participants to apply their usual methods and toolset to the quantification of the results (https://lookingatnothing.com/index.php/archives/3274).
The datasets came with a modicum of accompanying information to help with their interpretation, similar to what we normally receive from our collaborators. More than 30 participants reported back with volume fractions, mean sizes, and size distribution widths of the particle populations in the samples, as well as information on their self-assessed level of experience and years in the field.
While the Round Robin is still underway (until the 25th of April, 2022), the initial results already show a significant spread. Some of this spread is due to the variety in interpretation of the meaning of the requested parameters, as well as simple human errors, both of which are easy to correct for. Nevertheless, even after correcting for these differences in understanding, a significant spread remains. This highlights an urgent challenge to our community: how can we better help ourselves and our colleagues obtain more reliable results; how could we take the human factor out of the equation, so to speak?
In this talk, we will introduce the four datasets, their origins, and their challenges. Hot off the press, we will summarize the anonymized, quantified results of the Data Analysis Round Robin. (Incidentally, we will also see whether a correlation exists between experience and proximity of the result to the median.) Lastly, potential avenues for improving our field will be offered based on the findings, ranging from low-effort yet somewhat controversial improvements to high-effort foundational considerations.
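For a sense of the kind of summary statistics such a comparison invites, a hedged sketch with placeholder data: robust spread via the median absolute deviation, and a rank correlation between experience and each participant's distance to the consensus:

```python
# Toy round-robin summary; the arrays are placeholders, not real results.
import numpy as np
from scipy.stats import spearmanr

reported_size = np.array([4.8, 5.1, 5.0, 6.2, 4.9, 5.3])  # nm, per participant
experience = np.array([10, 3, 15, 1, 8, 5])                # years in the field

median = np.median(reported_size)
mad = np.median(np.abs(reported_size - median))            # robust spread
rho, p = spearmanr(experience, np.abs(reported_size - median))
print(f"median={median} nm, MAD={mad} nm, rho={rho:.2f} (p={p:.2f})")
```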