6 Materialchemie
The SPONGE
(2020)
Introduction
Good laboratory organization can help address the reproducibility crisis in science and easily multiply the scientific output of a laboratory, while greatly elevating the quality of the measurements. We have demonstrated this for small- and wide-angle X-ray scattering in the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration). In the MOUSE, we have combined a) a comprehensive laboratory workflow with b) a heavily modified, highly automated X-ray scattering instrument. This combination allows us to collect fully traceable scattering data, with a well-documented data flow (akin to what is found at the more automated beamlines). With two full-time researchers, the lab collects and interprets thousands of datasets, on hundreds of samples for dozens of projects per year, supporting many users along the entire process from sample selection and preparation to the analysis of the resulting data.
While these numbers cannot hold a candle to those achieved by our hardworking compatriots at the synchrotron beamlines, the laboratory approach does allow us to continually modify and fine-tune the integral methodology. Over the last three years, we have therefore incorporated, among other things, FAIR principles, traceability, automated processing and data curation strategies, as well as a host of good scattering practices, into the MOUSE system. We have concomitantly expanded our purview as specialists to include an increased responsibility for the entire scattering aspect of the resultant publications. This ensures full exploitation of the data quality, whilst avoiding common pitfalls.
Talk scope
This talk will present the MOUSE project as implemented to date, and will introduce foreseeable upgrades and changes. These upgrades include better pre-experiment sample scattering predictions to filter projects on the basis of their suitability, exploitation of the measurement database for detecting long-term changes and automated flagging of datasets, extending the measurement range through an Ultra-SAXS module, and enhancing Monte Carlo (MC) fitting with sample scattering simulations for better matching of odd-shaped scatterers.
The Meticulous Approach: Fully traceable X-ray scattering data via a comprehensive lab methodology
(2021)
To find out whether experimental findings are real, you need to be able to repeat them. For a long time, however, papers and datasets did not necessarily include sufficient detail to accurately repeat experiments, leading to a reproducibility crisis. It is here that the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration) tries to implement change – at least for small- and wide-angle X-ray scattering (SAXS/WAXS).
In the MOUSE project, we have combined a) a comprehensive laboratory workflow with b) a heavily modified, highly automated Xenocs Xeuss 2.0 instrument. This combination allows us to collect fully traceable scattering data, with a well-documented data flow (akin to what is found at the more automated beamlines). With two full-time researchers, the lab collects and interprets thousands of datasets, on hundreds of samples for dozens of projects per year, supporting many users along the entire process from sample selection and preparation to the analysis of the resulting data.
While these numbers cannot hold a candle to those achieved by our hardworking compatriots at the synchrotron beamlines, the laboratory approach does allow us to continually modify and fine-tune the integral methodology. Over the last three years, we have therefore incorporated, among other things, FAIR principles, traceability, automated processing and data curation strategies, as well as a host of good scattering practices, into the MOUSE system. We have concomitantly expanded our purview as specialists to include an increased responsibility for the entire scattering aspect of the resultant publications, to ensure full exploitation of the data quality, whilst avoiding common pitfalls.
This talk will discuss the MOUSE project [1] as implemented to date, and will introduce foreseeable upgrades and changes. These upgrades include better pre-experiment sample scattering predictions to filter projects on the basis of their suitability, exploitation of the measurement database for detecting long-term changes and automated flagging of datasets, and enhancing Monte Carlo (MC) fitting with sample scattering simulations for better matching of odd-shaped scatterers.
The Dark side of Science
(2021)
We all may have started out as bright-eyed students trying to do science to the best of our abilities, but over time, some of us have gradually drifted to the dark side. The dark side of science has an impressive publication rate in high-ranking journals, good success with funding agencies, and rocks the world with stellar findings. Unfortunately, these findings aren't real, either by accident or on purpose. As the presenter and his colleagues found, trying to correct or even dispute any of these findings in the literature is a supremely complex and time-consuming effort.
With no recent reduction in the frequency of such false findings, it is up to us to try to stem the flow. Besides looking at examples, we need to understand the underlying driving forces behind this dark scientific movement. By combining this understanding with a refresher of the core scientific principles, we can then develop the necessary argumentative tools and mechanisms that may prevent our own slide down the slippery slope.
This talk will therefore start out with several entertaining examples of probably accidental, as well as definitely deliberate, false scientific findings in the literature (and in particular in the field of materials research). We will then take a brief look at the possible causes for these developments, after which some tools will be presented that can help both the fresh and the well-seasoned scientist to rise up against the dark side.
A brief introduction to the efforts we have made in our lab towards AI/ML analysis of SAXS data. For this, we need to extend the data with an extensive, structured hierarchy of metadata and associated data. A practical look at the information stored in our files, and at the organization of these files in a data catalog, is presented.
By automatically recording as much information as possible in automated laboratory setups, reproducibility and traceability of experiments are vastly improved. This presentation shows what such an approach means for the quality of experiments in an X-ray scattering laboratory and an automated synthesis set-up.
This talk introduces the expanded view that comes from wide-range X-ray scattering investigations.
Compared to X-ray diffraction studies alone, the additional angular range of this technique provides information on the larger structural dimensions present in your samples. This allows for the extraction of information on the size and size distribution of nanostructural components, such as nanoparticles, nanovoids, and any other structure exhibiting an electron density contrast.
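As a back-of-the-envelope illustration of this angular-range argument (the numbers below are hypothetical and do not describe the MOUSE configuration), the standard relations q = 4π sin(θ)/λ and d ≈ 2π/q translate an accessible angular range into the size range that can be probed:

```python
import math

# Hypothetical instrument parameters, for illustration only:
wavelength_nm = 0.154            # Cu K-alpha wavelength in nanometres
two_theta_deg = (0.05, 50.0)     # assumed minimum and maximum scattering angles

def q_from_two_theta(two_theta_deg, wavelength_nm):
    """Magnitude of the scattering vector, q = 4*pi*sin(theta)/lambda."""
    theta_rad = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta_rad) / wavelength_nm

q_min = q_from_two_theta(two_theta_deg[0], wavelength_nm)
q_max = q_from_two_theta(two_theta_deg[1], wavelength_nm)

# Rule of thumb: a feature of dimension d scatters around q ~ 2*pi/d
print(f"q range: {q_min:.3f} to {q_max:.1f} 1/nm")
print(f"probed dimensions: roughly {2 * math.pi / q_max:.2f} to {2 * math.pi / q_min:.0f} nm")
```

With such an extended range, a single measurement covers both the atomic-scale diffraction signal and the nanostructural features mentioned above.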
The talk introduces the technique and the MOUSE instrument used for these investigations, and provides several real-world examples of its uses. In the latter segment of the talk, the audience is invited to choose, from a range of options, which examples capture their interest.
In this talk, the importance of metadata is underscored by real-world examples.
Metadata is essential to alleviating the reproducibility crisis in science. This implies that a wide range of metadata must be collected, with a heavy emphasis on the automated collection of such metadata. This must subsequently be organized in an intelligible, archival structure, where possible with units and uncertainties.
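To make the preceding point concrete, a minimal sketch (with hypothetical field names, not our actual metadata schema) of a metadata entry that keeps the value, unit and uncertainty together in one archivable record could look like this:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MetadataEntry:
    name: str            # what was measured or set, e.g. "sample_thickness"
    value: float
    unit: str            # unit string belonging to the value
    uncertainty: float   # expressed in the same unit as the value
    source: str          # how the value was obtained: manual, instrument, derived

entry = MetadataEntry(
    name="sample_thickness",
    value=1.02,
    unit="mm",
    uncertainty=0.05,
    source="micrometer reading, manual entry",
)

# Serialized entries remain both human-readable and machine-searchable:
print(json.dumps(asdict(entry), indent=2))
```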
Such metadata can aid in improving the usage efficiency of instrumentation, as is demonstrated on the MOUSE instrument. This metadata can now be used to connect the various aspects of the holistic experimental procedure, to gain better insights into the material structure.
A second example shows the extraction and organization of such metadata from an automated materials development platform, collected during the synthesis of 1200 samples. These metadata from the synthesis can then be linked to the results from the analysis of these samples, to find direct correlations between the synthesis parameters and the final structure of the materials.
A brief introduction is given to our data collection and organization procedure, and to why we have settled on the HDF5-based NeXus format for describing experimental data.
The link between NeXus and the SciCat data catalog is also described, showing how the NeXus metadata is automatically added as searchable metadata in the catalog.
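As a small example of what such an HDF5/NeXus structure can look like (the group layout below is illustrative and does not reproduce the full MOUSE file structure), metadata and data can be written with h5py using NeXus class attributes:

```python
import h5py
import numpy as np

with h5py.File("example_measurement.nxs", "w") as h5:
    entry = h5.create_group("entry1")
    entry.attrs["NX_class"] = "NXentry"

    sample = entry.create_group("sample")
    sample.attrs["NX_class"] = "NXsample"
    sample.create_dataset("name", data="example dispersion")
    thickness = sample.create_dataset("thickness", data=1.02)
    thickness.attrs["units"] = "mm"

    data = entry.create_group("data")
    data.attrs["NX_class"] = "NXdata"
    data.create_dataset("I", data=np.ones(100))   # placeholder intensity values
```

A harvesting step can then walk such a tree and copy selected paths and values into the searchable metadata fields of the corresponding SciCat catalog entry.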
How much do we, the small-angle scatterers, influence the results of an investigation? What uncertainty do we add by our human diversity in thoughts and approaches, and is this significant compared to the uncertainty from the instrumental measurement factors?
After our previous Round Robin on data collection, we know that many laboratories can collect reasonably consistent small-angle scattering data on easy samples [1]. To investigate the next, human component, we compiled four existing datasets from globular (roughly spherical) scatterers, each exhibiting a common complication, and asked the participants to apply their usual methods and toolset to the quantification of the results (https://lookingatnothing.com/index.php/archives/3274).
Accompanying the datasets was a modicum of information to help with the interpretation of the data, similar to what we normally receive from our collaborators. More than 30 participants reported back with volume fractions, mean sizes and size distribution widths of the particle populations in the samples, as well as information on their self-assessed level of experience and years in the field.
While the Round Robin is still underway (until the 25th of April, 2022), the initial results already show a significant spread. Some of this spread is due to the variety in interpretation of the meaning of the requested parameters, as well as to simple human errors, both of which are easy to correct for. Nevertheless, even after correcting for these differences in understanding, a significant spread remains. This highlights an urgent challenge to our community: how can we better help ourselves and our colleagues obtain more reliable results; how could we take the human factor out of the equation, so to speak?
In this talk, we will introduce the four datasets, their origins and challenges. Hot off the press, we will summarize the anonymized, quantified results of the Data Analysis Round Robin. (Incidentally, we will also see if a correlation exists between experience and proximity of the result to the median). Lastly, potential avenues for improving our field will be offered based on the findings, ranging from low-effort yet somehow controversial improvements, to high-effort foundational considerations.
Measuring an X-ray scattering pattern is relatively easy, but measuring a steady stream of high-quality, useful patterns requires significant effort and good laboratory organization.
Such laboratory organization can help address the reproducibility crisis in science, and easily multiply the scientific output of a laboratory, while greatly elevating the quality of the measurements. We have demonstrated this for small- and wide-angle X-ray scattering in the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration).
With the MOUSE, we have combined a comprehensive and highly automated laboratory workflow with a heavily modified X-ray scattering instrument. This combination allows us to collect fully traceable scattering data, within a well-documented, FAIR-compliant data flow (akin to what is found at the more automated synchrotron beamlines). With two full-time researchers, our lab collects and interprets thousands of datasets, on hundreds of samples, for dozens of projects per year, supporting many users along the entire process from sample selection and preparation, to the analysis of the resulting data.
This talk will briefly introduce the foundations of X-ray scattering, present the MOUSE project, and highlight the proven utility of the methodology for materials science. Upgrades to the methodology will also be discussed, as well as possible avenues for transferring this holistic methodology to other instruments.
This presentation highlights ongoing scientific misconduct as found in the academic literature. This includes data and image manipulation, and paper mills. Starting with an exposé of examples, it delves deeper into the causes and metrics driving this phenomenon. Finally, a range of possible tools is presented that young researchers can use to prevent themselves from sliding into dark scientific methods.
Glimpses of the Future ✨: Advancing X-ray Scattering in an Automated Materials Research Laboratory
(2023)
In our (dramatically understaffed) X-ray scattering laboratory, developing a systematic, holistic methodology [1] has let us provide scattering and diffraction information for more than 2100 samples in 200+ projects led by 120+ collaborators. Combined with automated data correction pipelines and our analysis and simulation software, this has led to more than 40 papers [2] in the last 5 years with just over 2 full-time staff members.
This year, our new, modular synthesis platform has made more than 1000 additional samples for us to analyse and catalogue. By virtue of the automation, the synthesis of these samples is automatically documented in excruciating detail, preparing them for upload and exploitation in large-scale materials databases. Having developed these proofs of concept, we find that materials research itself changes dramatically when the dull tasks in a laboratory are automated.
This talk is intended to spark ideas and invite collaborations by providing an overview of: 1) the current improvements in our wide-range X-ray scattering laboratory methodology, 2) some of our open-source analysis and simulation software, touching on scattering, diffraction and PDF, and 3) our open, modular robotic platform for systematic sample preparation. Finally, the remaining bottlenecks and points of attention across all three are highlighted.
McSAS3 is a refactored software package for fitting large batches of (X-ray or neutron) scattering data. It uses a Monte Carlo acceptance-rejection algorithm to optimize model parameters, which makes it ideal for the analysis of size-disperse scatterers.
The refactored code can exploit multiprocessing, traceably stores (multiple) results in the output file, and allows for re-histogramming of previous optimizations. Besides the analysis of large batches, it can also be integrated into automated data processing pipelines.
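To illustrate the core idea only (a deliberately simplified sketch, not the McSAS3 API), a Monte Carlo acceptance-rejection fit of polydisperse spheres can be written as a loop that perturbs one contribution at a time and keeps the change only when the goodness-of-fit improves:

```python
import numpy as np

def sphere_intensity(q, radii):
    """Unscaled, volume-squared-weighted intensity of a set of spheres."""
    qr = np.outer(q, radii)
    form = 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3
    return (form**2 * radii**6).sum(axis=1)

def mc_fit(q, i_meas, sigma, n_contrib=300, n_steps=20000, r_bounds=(1.0, 50.0), seed=None):
    rng = np.random.default_rng(seed)
    radii = rng.uniform(*r_bounds, n_contrib)     # initial guess for the contributions

    def reduced_chi2(r):
        model = sphere_intensity(q, r)
        # optimal least-squares scaling factor between model and data:
        scale = (i_meas * model / sigma**2).sum() / (model**2 / sigma**2).sum()
        return (((i_meas - scale * model) / sigma) ** 2).mean()

    best = reduced_chi2(radii)
    for _ in range(n_steps):
        trial = radii.copy()
        trial[rng.integers(n_contrib)] = rng.uniform(*r_bounds)  # perturb one sphere
        c = reduced_chi2(trial)
        if c < best:              # accept improvements, reject everything else
            radii, best = trial, c
    return radii, best            # histogram `radii` to obtain the size distribution
```

Histogramming the accepted radii then yields the size distribution; the actual McSAS3 package adds multiprocessing, traceable result storage and re-histogramming on top of this basic principle.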
The live demonstration will show how to use the software, what its limitations are, and what outcomes can look like for batches of results.
While the synthesis of Metal-Organic Framework (MOF) particles can be as easy as adding two solutions together, reproducibly obtaining the same particles, time and time again, is a lot harder. As laboratory-independent reproducibility is a cornerstone of the scientific method, we must put effort into finding and controlling all necessary parameters to achieve this.
An open-source Python/EPICS-controlled robotic platform (see picture) was adapted to systematically explore this for a 20 ml MOF synthesis of the Zeolitic Imidazolate Framework-8 (ZIF-8) chemistry in methanol. Parameters that were explored included: 1) addition sequence, 2) addition speeds, 3) reaction times, 4) source chemicals, 5) stirring speeds, 6) stirring bar choice, 7) starting concentrations, and 8) workup methodologies. It was found that, by controlling these parameters, highly reproducible syntheses can be obtained. Secondly, the variation of these parameters alone led to a dramatic difference in volume-weighted particle size means, exceeding an order of magnitude, as investigated with our in-house X-ray scattering instrument [1].
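To give an impression of how such parameter sets can be captured automatically (a hypothetical sketch; the keys below are illustrative and not the platform's actual schema), each synthesis run can be stored as a structured, serialisable recipe alongside its results:

```python
import json

# Hypothetical record for one 20 ml ZIF-8 synthesis run; keys are illustrative.
recipe = {
    "sample_id": "ZIF8-0001",
    "addition_sequence": ["linker_stock", "zinc_stock"],
    "addition_speed_ml_per_min": 5.0,
    "reaction_time_min": 30.0,
    "stirring_speed_rpm": 400,
    "stir_bar": "cross-shaped, 20 mm",
    "zinc_concentration_mmol_per_l": 25.0,
    "linker_concentration_mmol_per_l": 100.0,
    "workup": "3x methanol wash with centrifugation",
}

# Writing one such file per run keeps every synthesis fully documented and
# ready for later correlation with the scattering analysis results.
with open(f"{recipe['sample_id']}_recipe.json", "w") as fh:
    json.dump(recipe, fh, indent=2)
```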
The syntheses are thoroughly documented in an automated fashion, and the synthesis libraries as well as the analysis libraries will become available in batches soon. With this library, it will be possible to extract previously unknown correlations, and other laboratories will be able to produce specific particles by following the exact procedures for the particles of their choice.
In our (dramatically understaffed) X-ray scattering laboratory, developing a systematic, holistic methodology has let us provide scattering and diffraction information for more than 2100 samples in 200+ projects led by 120+ collaborators. Combined with automated data correction pipelines and our analysis and simulation software, this has led to more than 40 papers in the last 5 years with just over 2 full-time staff members.
This year, our new, modular synthesis platform has made more than 1000 additional samples for us to analyse and catalogue. By virtue of the automation, the synthesis of these samples is automatically documented in excruciating detail, preparing them for upload and exploitation in large-scale materials databases.
This talk is intended to spark ideas and invite collaborations by providing an overview of: 1) the current improvements in our wide-range X-ray scattering laboratory methodology, and 2) our open, modular robotic platform for systematic sample preparation.
The second talk for the Swiss Society for Crystallography (SSCr) workshop on SAXS will highlight the data processing challenges, holistic experimental workflow developments, and the pitfalls. In particular, the following items will be addressed:
- The importance of data processing and estimating uncertainty
- A universal correction pipeline – away with the headaches, at least for this step! (a minimal sketch follows after this list)
- Experiment planning part 2, some tips and advice to improve your corrected data.
- Sample preparation, background selection, some tips and advice to improve your corrected data.
- Automate for your mental well-being; electronic logbooks, measurement catalogs and workflow management software
- Life on the edge: several pitfalls to avoid…
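As an impression of what such a universal correction pipeline can look like (a deliberately simplified sketch with hypothetical step ordering and normalisation factors, not our actual correction chain), the corrections can be chained into a single function that also carries a simple uncertainty estimate:

```python
import numpy as np

def correct(i_sample, i_background, i_dark, transmission,
            thickness_m, exposure_s, flux_cps, solid_angle_sr):
    """Return intensity in absolute units (1/(m sr)) with a simple uncertainty estimate."""
    sigma = np.sqrt(np.maximum(i_sample, 1.0))        # Poisson counting uncertainty

    i = (i_sample - i_dark) / transmission            # dark current and transmission
    i = i - i_background                              # (already corrected) background
    norm = exposure_s * flux_cps * thickness_m * solid_angle_sr
    i = i / norm                                      # normalisation to absolute units
    sigma = sigma / (transmission * norm)             # simplified uncertainty propagation
    return i, sigma
```

Carrying the uncertainty estimate alongside the corrected intensities ties directly into the first point of the list above, and logging the parameters of each step keeps the corrected curve traceable back to the raw frames.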