Research Datasets of BAM
xraylib 3.1.0
(2014)
Quantitative estimation of elemental composition by spectroscopic and imaging techniques based on X-ray fluorescence requires accurate data on the interaction of X-rays with matter. Although a large number of computer codes and data sets are reported in the literature, none of them is presented in the form of freely available library functions that can easily be included in software applications for X-ray fluorescence. This work presents a compilation of data sets from different published works and an xraylib interface in the form of callable functions. Although the target applications are in X-ray fluorescence, cross sections of interactions such as photoionization, coherent scattering and Compton scattering, as well as form factors and anomalous scattering functions, are also available.
xraylib provides access to some of the most respected databases of physical data in the field of X-rays. The core of xraylib is a library, written in ANSI C, containing over 40 functions for retrieving data from these databases. This C library can be directly linked with any program written in C, C++ or Objective-C. Furthermore, the xraylib package contains bindings to several popular programming languages: Fortran 2003, Perl, Python, Java, IDL, Lua, Ruby, PHP and .NET, as well as a command-line utility which can be used as a pocket calculator. Although not officially supported, xraylib has been reported to be usable from within Matlab and LabView.
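A minimal usage sketch of the Python bindings (the element and energy values below are arbitrary illustrations; function names follow the xraylib API):

```python
import xraylib

Z = xraylib.SymbolToAtomicNumber("Fe")           # atomic number of iron
e_ka1 = xraylib.LineEnergy(Z, xraylib.KA1_LINE)  # Fe Ka1 fluorescence energy in keV
e_edge = xraylib.EdgeEnergy(Z, xraylib.K_SHELL)  # Fe K absorption edge in keV
cs = xraylib.CS_Total(Z, 10.0)                   # total cross section at 10 keV in cm2/g

print(f"Fe Ka1: {e_ka1:.3f} keV, K edge: {e_edge:.3f} keV, CS(10 keV): {cs:.2f} cm2/g")
```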
The source code is known to compile and run on the following platforms: Linux, Mac OS X, Solaris, FreeBSD and Windows.
Development occurs on Github: http://github.com/tschoonj/xraylib
Downloads are hosted by the X-ray Micro-spectroscopy and Imaging research group of Ghent University: http://lvserver.ugent.be/xraylib
Version 3.1.0 release notes:
- Database of commonly used radionuclides for X-ray sources added (new API: GetRadioNuclideDataByName, GetRadioNuclideDataByIndex, GetRadioNuclideDataList and FreeRadioNuclideData)
- numpy Python bindings added, generated with Cython. Performance basically the same as the core C library. (suggested by Matt Newville)
- docstring support added to Python bindings (suggested by Matt Newville)
- Windows SDKs now have support for Python 3.4.
- Windows 64-bit SDK now comes with IDL bindings
- Confirmed support for LabView (thanks to Dariush Hampai!)
- Universal Intel 32/64 bit Framework built for Mac OS X
- Perl support for Debian/Ubuntu
- Several bugfixes: thanks to those that reported them!
The datasets from (hard-energy) X-ray photoelectron spectroscopy, X-ray diffraction and scanning electron microscopy are related to the publication
G. Chemello, X. Knigge, D. Ciornii, B.P. Reed, A.J. Pollard, C.A. Clifford, T. Howe, N. Vyas, V.-D. Hodoroaba, J. Radnik
"Influence of the morphology on the functionalization of graphene nanoplatelets analyzed by comparative photoelectron spectroscopy with soft and hard X-rays"
Advanced Materials Interfaces (2023), DOI: 10.1002/admi.202300116.
XMI-MSIM 5.0
(2014)
XMI-MSIM is an open source tool designed for predicting the spectral response of energy-dispersive X-ray fluorescence spectrometers using Monte-Carlo simulations. It comes with a fully functional graphical user interface in order to make it as user friendly as possible. Considerable effort has been taken to ensure easy installation on all major platforms.
Development of this package was part of my PhD thesis. The algorithms were inspired by the work of my promotor Prof. Laszlo Vincze of Ghent University. Links to his and my own publications can be found in our manual.
A manuscript has been published in Spectrochimica Acta Part B that covers the algorithms that power XMI-MSIM. Please include a reference to this publication in your own work if you decide to use XMI-MSIM for academic purposes.
A second manuscript was published that covers our XMI-MSIM based quantification plug-in for PyMca. Soon information on using this plug-in will be added to the manual.
XMI-MSIM is released under the terms of the GPLv3.
Development occurs at Github: http://github.com/tschoonj/xmimsim
Downloads are hosted by the X-ray Micro-spectroscopy and Imaging research group of Ghent University: http://lvserver.ugent.be/xmi-msim
Version 5.0 release notes:
Changes:
1. Custom detector response function: build your own plug-in containing your own detector response function and load it at run-time to override the built-in routines. Instructions can be found in the manual.
2. Escape peak improvements: a new algorithm calculates the escape peak ratios based on a combined brute-force and variance-reduction approach. This ensures high accuracy even at high incoming photon energies and for thin detector crystals. Downside: it's slower…
3. Removed maximum convolution energy option. Was a bit confusing anyway.
4. Number of channels: moved from simulation controls into input-file
5. Radionuclide support added: Now you can select one or more commonly used radionuclide sources from the X-ray sources widget.
6. Advanced Compton scattering simulation: a new, alternative implementation of Compton scattering has been added, based on the work of Fernandez and Scot (http://dx.doi.org/10.1016/j.nimb.2007.04.203), which takes unpopulated atomic orbitals into account. It provides an improved simulation of the Compton profile, as well as fluorescence contributions due to the Compton effect (extremely low!), but slows the code down considerably. Advanced users only. Default: OFF
7. Plot spectra before convolution in results
8. Windows: new Inno Setup installers. Contains the headers and import libraries
9. Windows: compilers changed to GCC 4.8.1 (TDM-GCC)
10. Windows: rand_s used to generate seeds on 64-bit version (requires Vista or later)
11. Windows: new gtk runtime for the 64-bit version (see also https://github.com/tschoonj/GTK-for-Windows-Runtime-Environment-Installer)
12. Mac OS X: compilers changed to clang 5.1 (Xcode) and gfortran 4.9.1 (MacPorts)
13. Original input-files from our 2012 publication (http://dx.doi.org/10.1016/j.sab.2012.03.011) added to examples
14. Updater performs checksum verification after download
15. X-ray sources last used values stored in preferences.ini
16. xmimsimdata.h5 modified: even bigger now...
Bugfixes:
1. Windows: support for usernames with unicode characters. Fixed using customized builds of HDF5. Thanks to Takashi Omori of Techno-X for the report!
2. Spectrum import from file fixes. Was never properly tested apparently
Note:
For those that compiled XMI-MSIM from source: you will need to regenerate the xmimsimdata.h5 file with xmimsim-db. Old versions of this file will not work with XMI-MSIM 5.0.
X-Ray computed tomography (XCT) scan of 11 individual metallic powder particles, made of (Mn,Fe)2(P,Si) alloy. The data set consists of 4 single XCT scans which have been stitched together [3] after reconstruction. The powder material is an (Mn,Fe)2(P,Si) alloy with an average density of 6.4 g/cm³. The particle size range is about 100 - 150 µm with equivalent pore diameters up to 75 µm. The powder and the metallic alloy are described in detail in [1, 2].
Wide-range X-ray scattering datasets and analyses for all samples described in the 2020 publication "Gold and silver dichroic nanocomposite in the quest for 3D printing the Lycurgus cup". These datasets are composed by combining multiple small-angle x-ray scattering and wide-angle x-ray scattering curves into a single dataset. They have been analyzed using McSAS to extract polydispersities and volume fractions. They have been collected using the MOUSE project (instrument and methodology).
X-ray scattering datasets for samples described in the 2022 publication "Side chain length dependent dynamics and conductivity in self assembled ion channels". This dataset includes both raw and processed X-ray scattering data for samples ILC8, ILC10, ILC12, ILC14 and ILC16 alongside background measurement files (BKG).
X-ray scattering datasets for samples described in the 2022 publication "Molecular Mobility of Polynorbornenes with Trimethylsiloxysilyl side groups: Influence of the Polymerization Mechanism". This dataset includes both raw and processed X-ray scattering data for samples APTCN and MPTCN, alongside background measurement files (BKG).
X-ray scattering datasets for samples described in the 2020 publication "Molecular Dynamics of Janus Polynorbornenes: Glass Transitions and Nanophase Separation". This dataset includes both raw and processed X-ray scattering data for samples PTCHSiO-Pr, Bu, Hx, Oc and De, alongside background measurement files (BKG). This data was collected using the MOUSE project (instrument and methodology).
This dataset contains the processed and analysed small-angle X-ray scattering data associated with all samples from the publication "Bio-SAXS of Single-Stranded DNA-Binding Proteins: Radiation Protection by the Compatible Solute Ectoine" (https://doi.org/10.1039/D2CP05053F).
Files associated with McSAS3 analyses are included, alongside the relevant SAXS data, with datasets labelled according to the protein (G5P), its concentration (1, 2 or 4 mg/mL), and whether Ectoine is present (Ect) or absent (Pure). PEPSIsaxs simulations of the GVP monomer (PDB structure: 1GV5) and dimer are also included.
TOPAS-bioSAXS-dosimetry extension for TOPAS-nBio based particle scattering simulations can be obtained from https://github.com/MarcBHahn/TOPAS-bioSAXS-dosimetry which is further described in https://doi.org/10.26272/opus4-55751.
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 442240902 (HA 8528/2-1 and SE 2999/2-1). We acknowledge Diamond Light Source for time on Beamline B21 under Proposal SM29806. This work has been supported by iNEXT-Discovery, grant number 871037, funded by the Horizon 2020 program of the European Commission.
Scientific welding data cover a wide range of physical domains and timescales and are measured using a variety of sensors. Complex and highly specialized experimental setups at different welding institutes complicate the exchange of welding research data further. The WelDX research project aims to foster the exchange of scientific data inside the welding community by developing and establishing a new open source file format suitable for the documentation of experimental welding data and by upholding associated quality standards. In addition to fostering scientific collaboration inside the national and international welding community, an associated advisory committee will be established to oversee the future development of the file format. The proposed file format will be developed with regard to the current needs of the community regarding interoperability, data quality and performance, and will be published under an appropriate open source license. By using the file format, objectivity, comparability and reproducibility across different experimental setups can be improved.
WEBSLAMD
(2022)
The objective of SLAMD is to accelerate materials research in the wet lab through AI. Currently, the focus is on sustainable concrete and binder formulations, but it can be extended to other material classes in the future.
1. Summary
Leverage the Digital Lab and AI optimization to discover exciting new materials. Represent resources and processes and their socio-economic impact.
Calculate complex compositions and enrich them with detailed material knowledge. Integrate laboratory data and apply it to novel formulations. Tailor materials to the purpose to achieve the best solution.
Workflow
Digital Lab
Specify resources: From base materials to manufacturing processes – "Base" enables a detailed and consistent description of existing resources
Combine resources: The combination of base materials and processes offers an almost infinite optimization potential. "Blend" makes it easier to design complex configurations.
Digital Formulations: With "Formulations" you can effortlessly convert your resources into the entire spectrum of possible concrete formulations. This automatically generates a detailed set of data for AI optimization.
AI-Optimization
Materials Discovery: Integrate data from the "Digital Lab" or upload your own material data. Enrich the data with lab results and transfer the knowledge to new recipes via artificial intelligence. Leverage socio-economic metrics to identify recipes tailored to your requirements.
Trinamic TMCL IOC is a Python package designed for controlling stepper motors connected to a Trinamic board using the TMCL language (all boards supported by PyTrinamic should work; it has been tested on the TMCM 6110 and the TMCM 6214). Since it implements the TMCL protocol, it should be easy to adapt to other Trinamic motor controller boards. This package assumes the motor controller is connected over a machine network via a network-to-serial converter, but the underlying PyTrinamic package allows for other connections too.
This allows the control of attached motors via the EPICS Channel-Access virtual communications bus. If EPICS is not desired, plain Pythonic control via motion_control should also be possible. An example for this will be provided in the example.ipynb Jupyter notebook.
This package leverages Caproto for EPICS IOCs and a modified PyTrinamic library for the motor board control, and interfaces between the two via an internal set of dataclasses. Configurations for the motors and boards are loaded from YAML files (see tests/testdata/example_config.yaml).
The modifications to PyTrinamic involved extending their library with a socket interface. This was a minor modification that should eventually find its way into the official package (a pull request has been submitted).
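For orientation, the general Caproto IOC pattern that such a package builds on looks roughly as follows (a minimal sketch; the MotorIOC class, PV name and prefix are illustrative, not this package's actual API):

```python
from caproto.server import PVGroup, pvproperty, ioc_arg_parser, run

class MotorIOC(PVGroup):
    """Illustrative IOC exposing one motor-position PV over Channel Access."""
    position = pvproperty(value=0.0, doc="target motor position")

    @position.putter
    async def position(self, instance, value):
        # A real IOC would forward the move to the Trinamic board here,
        # e.g. via the modified PyTrinamic socket interface.
        return value

if __name__ == "__main__":
    ioc_options, run_options = ioc_arg_parser(desc="Demo motor IOC",
                                              default_prefix="mc0:")
    run(MotorIOC(**ioc_options).pvdb, **run_options)
```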
Data set of low-field NMR spectra of continuous synthesis of nitro-4’-methyldiphenylamine (MNDPA). 1H spectra (43 MHz) were recorded as single scans.
Two different approaches were used to generate training data for artificial neural networks predicting reactant concentrations: (i) training data based on combinations of measured pure-component spectra and (ii) training data based on a spectral model.
Synthetic low-field NMR spectra
The first 4 columns in the MAT-files represent the component areas of each reactant within the synthetic mixture spectrum.
Xi (“pure component spectra dataset”)
Xii (“spectral model dataset”)
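A short sketch for inspecting these MAT-files in Python (the file name and the assumption that the first variable holds the spectra matrix are hypothetical; adjust to the actual variable names):

```python
from scipy.io import loadmat

data = loadmat("Xi.mat")  # hypothetical file name for the pure-component dataset
names = [k for k in data if not k.startswith("__")]
print(names)              # list the variables actually stored in the MAT-file

X = data[names[0]]        # assumption: first variable holds the spectra matrix
areas = X[:, :4]          # first 4 columns: component areas of each reactant
spectra = X[:, 4:]        # remaining columns: the synthetic spectra (assumption)
```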
Experimental low-field NMR spectra from MNDPA-Synthesis
This data set represents low-field NMR-spectra recorded during continuous synthesis of nitro-4’-methyldiphenylamine (MNDPA). Reference values from high-field NMR results are included.
These files contain cell models for TOPAS/Geant4 and the inclusion of nanoparticles in particle scattering simulations. A simple spherical cell with nanoparticles can be generated in a fast manner. The user has the option to include the following organelles: nucleus, mitochondria, cell membrane. Additionally, nanoparticles can be included in the cytosol and at the surface of the nucleus and/or the mitochondria.
The C++ classes in this repository extend the functionality of the TOPAS (http://www.topasmc.org/) Monte-Carlo program, which is itself a wrapper of the Geant4 MCS Toolkit (http://geant4.org). The source code together with examples and scorers is provided.
"If you use this extension please cite the following literature:
Hahn, M.B., Zutta Villate, J.M. "Combined cell and nanoparticle models for TOPAS to study radiation dose enhancement in cell organelles." Sci Rep 11, 6721 (2021).
https://doi.org/10.1038/s41598-021-85964-2 "
Simulates X-ray and neutron scattering patterns from arbitrary shapes defined by STL files.
Features:
- Uses multithreading to compute a number of independent solutions, then uses the variance of the results to estimate an uncertainty on the output.
- Can be launched from the command line using an Excel sheet to define settings, or from a Jupyter notebook.
- Outputs scattering patterns in absolute units if the contrast is set.
- A Gaussian size distribution is available, where the relative scaling of objects for each repetition can be varied. Recommended to be used with limited width (max. 10%) to avoid artefacts.
- Writes results with settings to an archival HDF5 file.
Application examples:
This software has been used in several studies to date. For example, it has been used to simulate a model scattering pattern for a cuboid shape, which was then fed forward into the McSAS3 analysis program for analyzing scattering patterns of polydisperse cuboids. A second use was the modeling of flattened helices; in that paper, scattering-pattern features could be matched with particular morphological changes in the structure. Lastly, it has been used to validate an analytical analysis model and to explore the realistic limits of application of that model.
Test artifact for fs-LDW
(2023)
Tensile Test Ontology (TTO)
(2023)
This is the stable version 2.0.1 of the PMD ontology module of the tensile test (Tensile Test Ontology - TTO) as developed on the basis of the 2019 standard ISO 6892-1: Metallic materials - Tensile Testing - Part 1: Method of test at room temperature.
The TTO was developed in the frame of the PMD project. The TTO provides conceptualizations valid for the description of tensile tests and corresponding data in accordance with the respective standard. By using TTO for storing tensile test data, all data will be well structured and based on a common vocabulary agreed on by an expert group (generation of FAIR data), which leads to enhanced data interoperability. This comprises several data categories such as primary data, secondary data and metadata. Data will be human and machine readable. The usage of TTO facilitates data retrieval and downstream usage. Due to a close connection to the mid-level PMD core ontology (PMDco), the interoperability of tensile test data is enhanced, and data querying in combination with other aspects and data within the broad field of materials science and engineering (MSE) is facilitated.
The TTO class structure forms a comprehensible semantic layer for unified storage of data generated in a tensile test, including the possibility to record data from analysis, re-evaluation and re-use. Furthermore, extensive metadata allows data quality to be assessed and experiments to be reproduced. Following the open world assumption, object properties are deliberately kept unrestrictive and sparse.
To simulate the movement of the macroscopic magnetic moment in ferromagnetic systems under the influence of elevated temperatures, the stochastic version of the Landau-Lifshitz (LL) or the Landau-Lifshitz-Gilbert equation with a spin density of one per unit cell has to be used.
To apply the stochastic LL to micromagnetic simulations, where the spin density per unit cell is generally higher, a conversion has to be performed. OOMMF sample files (MIF) are provided which can be used to determine the Curie temperature of the classical bulk magnets iron, nickel and cobalt.
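For reference, a common form of the stochastic Landau-Lifshitz-Gilbert equation adds a thermal fluctuation field H_th to the effective field (notation follows Brown's classic treatment and is not necessarily OOMMF's exact implementation):

```latex
\frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t}
  = -\frac{\gamma}{1+\alpha^{2}}\,\mathbf{m}\times\left(\mathbf{H}_{\mathrm{eff}}+\mathbf{H}_{\mathrm{th}}\right)
    -\frac{\gamma\alpha}{1+\alpha^{2}}\,\mathbf{m}\times\left[\mathbf{m}\times\left(\mathbf{H}_{\mathrm{eff}}+\mathbf{H}_{\mathrm{th}}\right)\right],
\qquad
\left\langle H_{\mathrm{th},i}(t)\,H_{\mathrm{th},j}(t')\right\rangle
  = \frac{2\alpha k_{\mathrm{B}}T}{\mu_{0}\gamma M_{\mathrm{s}}V}\,\delta_{ij}\,\delta(t-t')
```

Here V is the volume carrying one magnetic moment; the conversion mentioned above amounts to rescaling V (and with it the thermal-field variance) from one spin per unit cell to the larger micromagnetic discretization cell.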
OpenSCAD, STL and technical drawings for the solid sample rack designed primarily for use with The MOUSE instruments.
This solid sample rack can be used in conjunction with:
- Laser-cut sample holder (10.5281/zenodo.7499437)
- Modular sample holder (10.5281/zenodo.7499416)
- Capillary flow-cell for liquid samples (10.5281/zenodo.7499421)
PDF file for the laser-cut sample holder designed primarily for use with The MOUSE instruments.
This sample holder can be used in conjunction with:
- Solid sample rack/plate (10.5281/zenodo.7499424)
- Modular sample holder (10.5281/zenodo.7499416)
- Capillary flow-cell for liquid samples (10.5281/zenodo.7499421)
OpenSCAD, STL and technical drawings for the capillary flow-through cell designed primarily for use with The MOUSE instruments.
This flow-through cell can be used in conjunction with:
- Modular sample holder (10.5281/zenodo.7499416)
- Solid sample rack/plate (10.5281/zenodo.7499424)
- Laser-cut sample holder (10.5281/zenodo.7499437)
In recent years, we have come to appreciate the astounding intricacy of the formation process of minerals from ions in aqueous solutions. In this context, a number of studies have revealed that nucleation in the calcium sulfate system is non-classical, involving the aggregation and reorganization of nanosized prenucleation particles. In a recent work we have shown that this particle-mediated nucleation pathway is actually imprinted in the resultant single micron-sized CaSO4 crystals. This property of CaSO4 minerals provides us with a unique opportunity to search for evidence of non-classical nucleation pathways in geological environments. In particular, we focused on the quintessential single crystals of anhydrite extracted from the Naica mine in Mexico. We elucidated the growth history of this mineral sample by mapping growth defects at different length scales. Based on these data we argue that the nano-scale misalignment of the structural sub-units observed in the initial calcium sulfate crystal seed propagates through different length scales in both morphological and strictly crystallographic aspects, eventually causing the formation of large mesostructured single crystals of anhydrite. Hence, the nanoparticle-mediated nucleation mechanism introduces a 'seed of imperfection', which leads to a macroscopic single crystal whose fragments do not fit together at different length scales in a self-similar manner. Consequently, anisotropic voids of various sizes are formed with very well-defined walls/edges. But at the same time the material retains its essential single-crystal nature. These findings shed new light on the longstanding concept of crystal structure.
Small-angle Scattering Data Analysis Round Robin: anonymized results, figures and Jupyter notebook
(2023)
The intent of this round robin was to find out how comparable the results from different researchers are when they analyse exactly the same processed, corrected dataset.
This zip file contains the anonymized results and the Jupyter notebook used to do the data processing, analysis and visualisation. Additionally, TEM images of the samples are included.
These are four datasets that were made available to the participants of the Small-angle Scattering data analysis round robin. The intent was to find out how comparable the results from different researchers are when they analyse exactly the same processed, corrected dataset.
In this repository, there are:
1) a PDF document with more details for the study,
2) the datasets for people to try and fit, and
3) an Excel spreadsheet to document the results.
Datasets 1 and 2 were modified from: Deumer, Jerome, & Gollwitzer, Christian. (2022). npSize_SAXS_data_PTB (Version 5) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.5886834
Datasets 3 and 4 were collected in-house on the MOUSE instrument.
SLAMD-FIB-Case-Study
(2022)
With 8% of man-made CO2 emissions, cement production is an important driver of the climate crisis. By using alkali-activated binders, part of the energy-intensive clinker production process can be dispensed with. However, because numerous chemicals are involved in the manufacturing process, the complexity of the materials increases by orders of magnitude. Finding a properly balanced cement formulation is like looking for a needle in a haystack. We have shown for the first time that artificial intelligence (AI)-based optimization of cement formulations can significantly accelerate research. The „Sequential Learning App for Materials Discovery“ (SLAMD) aims to accelerate practice transfer. With SLAMD, materials scientists have low-threshold access to AI through interactive and intuitive user interfaces. The value added by AI can be determined directly. For example, the CO2 emissions saved per ton of cement can be determined for each development cycle: the more efficient the AI optimization, the greater the savings. Our material database already includes more than 120,000 data points of alternative cements and is constantly being expanded with new parameters. We are currently driving the enrichment of the data with a life cycle analysis of the building materials. Based on a case study, we show how intuitive access to AI can drive the adoption of techniques that make a real contribution to the development of resource-efficient and sustainable building materials of the future, and how it makes it easy to identify when classical experiments are more efficient.
SI Files for "Towards automation of the polyol process for the synthesis of silver nanoparticles"
(2022)
The graphml file reaction_graph_AgNP.graphml is included. It contains topological information (Fig. 1 in the main text) about the reaction setup and metadata with reaction conditions. It is used by the Python API that controls the Chemputer.
SAXS reports. The complete report sheets generated by McSAS are included. They contain extended information characterising the size distributions and the fitting parameters.
NP3_I: saxs_report_NP3_I.pdf
NP3_II: saxs_report_NP3_II.pdf
NP3_III: saxs_report_NP3_III.pdf
NP3_IV: saxs_report_NP3_IV.pdf
NP5_I: saxs_report_NP5_I.pdf
NP5_II: saxs_report_NP5_II.pdf
NP5_III: saxs_report_NP5_III.pdf
This is a set of drawings accompanying the submitted paper entitled "Extending Synchrotron SAXS instrument ranges through addition of a portable, inexpensive USAXS module with vertical rotation axes". The parts described herein will combine with commercial off-the-shelf components to build a high precision pair of rotation stages for accurate measurement of scattering angles with a sub-microradian precision.
SASfit 0.94.12
(2023)
Small-angle scattering is an increasingly common method for characterizing particle ensembles in a wide variety of sample types and for diverse areas of application. SASfit has been one of the most comprehensive and flexible curve-fitting programs for decades, with many specialized tools for various fields.
In case study one of the CONSENS project, two aromatic substances were coupled by a lithiation reaction, a prominent example in the pharmaceutical industry. The two aromatic reactants (aniline and o-FNB) were mixed with a lithium base (LiHMDS) in a continuous modular plant to produce the desired product (Li-NDPA) and a salt (LiF). The salt precipitates, which leads to the formation of particles. The feed streams were subject to variation to drive the plant to its optimum.
The uploaded data comprise the results from four days of continuous plant operation. The days are denoted day 1-4 and correspond to the dates 2017-09-26, 2017-09-28, 2017-10-10 and 2017-10-17.
In the following, the contents of the files are explained.
This dataset is a complete set of raw, processed and analyzed data, associated with the manuscript mentioned in the title.
All associated metadata and processing history has been added. Particle size distribution analyses using McSAS are included as well.
The samples consisted of a 4.2 mass% dispersion of yttria-stabilized zirconia nanoparticles in a cross-linked matrix. The measurements show a good dispersion with minimal agglomeration. The wide-angle region shows diffraction information consistent with zirconia.
PMD Core Ontology (PMDco)
(2023)
The PMD Core Ontology (PMDco) is a comprehensive framework for representing knowledge that encompasses fundamental concepts from the domains of materials science and engineering (MSE). The PMDco has been designed as a mid-level ontology to establish a connection between specific MSE application ontologies and the domain neutral concepts found in established top-level ontologies. The primary goal of the PMDco is to promote interoperability between diverse domains. PMDco's class structure is both understandable and extensible, making it an efficient tool for organizing MSE knowledge. It serves as a semantic intermediate layer that unifies MSE knowledge representations, enabling data and metadata to be systematically integrated on key terms within the MSE domain. With PMDco, it is possible to seamlessly trace data generation. The design of PMDco is based on the W3C Provenance Ontology (PROV-O), which provides a standard framework for capturing the generation, derivation, and attribution of resources. By building on this foundation, PMDco facilitates the integration of data from various sources and the creation of complex workflows. In summary, PMDco is a valuable tool for researchers and practitioners in the MSE domains. It provides a common language for representing and sharing knowledge, allowing for efficient collaboration and promoting interoperability between diverse domains. Its design allows for the systematic integration of data and metadata, enabling seamless traceability of data generation. Overall, PMDco is a crucial step towards a unified and comprehensive understanding of the MSE domain. PMDco at GitHub: https://github.com/materialdigital/core-ontology
Here a dataset of XPS, HAXPES and SEM measurements for the physico-chemical characterization of Au nanoparticles is presented. The measurements are part of the H2020 project “NanoSolveIT”.
Here a dataset of XPS, HAXPES and SEM measurements for the physico-chemical characterization of Fe3O4 nanoparticles is presented. The measurements are part of the H2020 project “NanoSolveIT”.
PGDrome
(2023)
Optical constants of In2O3-SnO2 (Indium tin oxide, ITO)
Minenkov et al. 2024: on glass; n,k 0.191–1.69 µm
Optical constants of In2O3-SnO2 (Indium tin oxide, ITO)
Minenkov et al. 2024: on Si wafer, top; n,k 0.191–1.69 µm
Optical constants of In2O3-SnO2 (Indium tin oxide, ITO)
Minenkov et al. 2024: on Si wafer, bottom; n,k 0.191–1.69 µm
This dataset consists of indentation data measured with a conospherical tip in a Hysitron-Bruker TI980 Nanoindenter on the surface of a <100> Silicon wafer and a polished cross-sectional cut of a Zr65Cu25Al10 bulk metallic glass.
It is associated with the following publication:
Birte Riechers, Catherine Ott, Saurabh Mohan Das, Christian H. Liebscher, Konrad Samwer, Peter M. Derlet and Robert Maass "On the elastic microstructure of bulk metallic glasses" Materials and Design xxx, (2023) 111929. https://doi.org/10.1016/j.matdes.2023.111929
All experimental information can be found in this paper and in the accompanying supplementary information.
This electronic version of the data was published on the "Zenodo Data repository" found at http://zenodo.org/deposit in the community "Bundesanstalt fuer Materialforschung und -pruefung (BAM)".
The authors have copyright to these data. You are welcome to use the data for further analysis, but are requested to cite the original publication whenever use is made of the data in publications, presentations, etc.
Any questions regarding the data can be addressed to birte.riechers@bam.de who would also appreciate a note if you find the data useful.
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2024)
In this paper, a methodology for fine-scale modeling of large-scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to uncorrelated sampling.
Raw data from metabolomics experiments are initially subjected to peak identification and signal deconvolution to generate m × n raw data matrices, where the m rows are samples and the n columns are metabolites. We describe here simple statistical procedures on such multivariate data matrices, all provided as functions in the programming environment R, useful to normalize data, detect biomarkers, and perform sample classification.
Mechanical testing ontology
(2023)
The materials mechanical testing ontology (MTO) was developed by collecting the mechanical testing vocabulary from the ISO 23718 standard, as well as the standardized testing processes described for various mechanical tests of materials such as tensile testing, the Brinell hardness test, the Vickers hardness test, the stress relaxation test, and fatigue testing. Conforming to the ISO/IEC 21838-2 standard, MTO utilizes the Basic Formal Ontology (BFO), Common Core Ontology (CCO), Industrial Ontologies Foundry (IOF), Quantities, Units, Dimensions, and data Types ontologies (QUDT), and Material Science and Engineering Ontology (MSEO) as the upper-level ontologies.
McSAS3
(2023)
McSAS3 is a refactored version of the original McSAS (see DOI 10.1107/S1600576715007347). This software fits scattering patterns to obtain size distributions without assumptions on the size distribution form. The refactored version has some neat features:
- Multiprocessing is included, spread out over as many cores as the number of repetitions!
- Full state of the optimization is stored in an organized HDF5 state file.
- Histogramming is separate from optimization and a result can be re-histogrammed as many times as desired.
- SasModels allows a wide range of models to be used
- If SasModels does not work (e.g. because of gcc compiler issues on Windows or Mac), an internal sphere model is supplied
- Simulated data of the scattering of a special shape can also be used as a McSAS fitting model. Your models are infinite!
- 2D fitting also works.
Metaproteomics, the study of the collective proteome within a microbial ecosystem, has substantially grown over the past few years. This growth comes from the increased awareness that it can powerfully supplement metagenomics and metatranscriptomics analyses. Although metaproteomics is more challenging than single-species proteomics, its added value has already been demonstrated in various biosystems, such as gut microbiomes or biogas plants. Because of the many challenges, a variety of metaproteomics workflows have been developed, yet it remains unclear what the impact of the choice of workflow is on the obtained results. Therefore, we set out to compare several well-established workflows in the first community-driven, multi-lab comparison in metaproteomics: the critical assessment of metaproteome investigation (CAMPI) study. In this benchmarking study, we evaluated the influence of different workflows on sample preparation, mass spectrometry acquisition, and bioinformatic analysis on two samples: a simplified, lab-assembled human intestinal sample and a complex human fecal sample. We find that the same overall biological meaning can be inferred from the metaproteome data, regardless of the chosen workflow. Indeed, taxonomic and functional annotations were very similar across all sample-specific data sets. Moreover, this outcome was consistent regardless of whether protein groups or peptides, or differences at the spectrum or peptide level were used to infer these annotations. Where differences were observed, those originated primarily from different wet-lab methods rather than from different bioinformatic pipelines. The CAMPI study thus provides a solid foundation for benchmarking metaproteomics workflows, and will therefore be a key reference for future method improvement. [doi:10.25345/C5SX64D9M] [dataset license: CC0 1.0 Universal (CC0 1.0)]
This dataset contains raw data acquired in ultrasound measurements on a reference specimen made of concrete at Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin (Germany). The internal specimen identifier is “Pk266”. The measurements were conducted using the pulse-echo method. The upper surface of the specimen was defined as the measuring area. The aim of the measurements is to determine both the geometrical dimensions (thickness) and the position of the tendons relative to the measuring area. In addition to this, a second dataset of a second specimen, with identifier “Pk050”, has been acquired. Pk050 has the same geometrical dimensions and concrete recipe as Pk266 but does not contain tendons [Reference: https://doi.org/10.7910/DVN/9EID5D].
This dataset contains raw data acquired in ultrasound measurements on a reference specimen made of concrete at Bundesanstalt für Materialforschung und -prüfung (BAM), Berlin (Germany). The internal specimen identifier is “Pk050”. The measurements were conducted using the pulse-echo method. The upper surface of the specimen was defined as the measuring area. The aim of the measurements is to determine the geometrical dimensions (thickness) of the specimen “Pk050”. In addition to this, a dataset of a second specimen with identifier “Pk266” has been acquired. Pk266 has the same geometrical dimensions and concrete recipe as Pk050, but contains tendons [Reference: https://doi.org/10.7910/DVN/NUU0WZ].
KupferDigital mechanical testing datasets: Stress relaxation and low-cycle fatigue (LCF) tests
(2024)
The KupferDigital project deals with the development of a data ecosystem for digital materials research on the basis of ontology-based digital representations of copper and copper alloys. This document provides exemplary mechanical testing datasets for training the developed KupferDigital infrastructures. Different types of cast copper alloys were provided for this research, and their mechanical testing (stress relaxation and low-cycle fatigue) was performed in the accredited materials testing laboratory; the test results were reported according to the DIN/ISO standards and supplemented with the maximum possible metadata about the sample history, equipment, and calibration. The attached content file consists of the obtained primary raw testing data as well as the secondary datasets of these tests containing the detailed metadata of the mechanical testing methods. Such test data files are processed by the KupferDigital digital tools to be converted to standardized, machine-readable data files.
The KupferDigital project aims to develop digital methods, tools, and data space infrastructures for digitalizing the entire life cycle of copper materials. The mechanical testing process is one of the main chains of such life cycles, generating large amounts of important testing data about the mechanical properties of the materials together with the related materials and testing metadata. To train the digitalization of the mechanical testing process, different kinds of copper alloys were provided for this project, and their mechanical properties were measured by typical methods like Brinell and Vickers hardness and tensile testing. The primary raw testing data as well as the secondary datasets of these tests are provided. The detailed materials specifications, the utilized mechanical testing methods, and the provided datasets are described in the content file. The heterogeneously structured test data files are processed by the KupferDigital digital tools to be converted to standardized, machine-readable data files.
AI-reflectivity is a code based on artificial neural networks trained with simulated reflectivity data that quickly predicts film parameters from experimental X-ray reflectivity curves. This project has a common root with [ML-reflectivity](https://github.com/schreiber-lab/ML-reflectivity) and evolved in parallel. Both are linked to the following publication:
A. Greco, V. Starostin, C. Karapanagiotis, A. Hinderhofer, A. Gerlach, L. Pithan, S. Liehr, F. Schreiber, S. Kowarik, "Fast Fitting of Reflectivity Data of Growing Thin Films Using Neural Networks", J. Appl. Cryst. (2019).
For an online live demonstration using a pre-trained network, have a look at GitHub.
## Summary:
This notebook and associated datasets (including VASP details) accompany a manuscript available on arXiv (https://doi.org/10.48550/arXiv.2303.13435) and hopefully soon in a journal as a short communication as well. Most of the details needed to understand this notebook are explained in that paper, which has the same title as above. For convenience, the abstract is repeated here:
## Paper abstract:
We demonstrate a strategy for simulating wide-range X-ray scattering patterns, which spans the small- and wide scattering angles as well as the scattering angles typically used for Pair Distribution Function (PDF) analysis. Such simulated patterns can be used to test holistic analysis models, and, since the diffraction intensity is presented coupled to the scattering intensity, may offer a novel pathway for determining the degree of crystallinity.
The "Ultima Ratio" strategy is demonstrated on a 64-nm Metal Organic Framework (MOF) particle, calculated from Q < 0.01 nm⁻¹ up to Q ≈ 150 nm⁻¹, with a resolution of 0.16 Å. The computations exploit a modified 3D Fast Fourier Transform (3D-FFT), whose modifications enable the transformation of matrices at least up to 8000³ voxels in size. Multiple of these modified 3D-FFTs are combined to improve the low-Q behaviour.
The resulting curve is compared to a wide-range scattering pattern measured on a polydisperse MOF powder.
While computationally intensive, the approach is expected to be useful for simulating scattering from a wide range of realistic, complex structures, from (poly-)crystalline particles to hierarchical, multicomponent structures such as viruses and catalysts.
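The core of the FFT route to a scattering pattern can be sketched in a few lines of numpy (a plain 3D-FFT on a small grid for illustration, not the authors' modified large-volume 3D-FFT):

```python
import numpy as np

# Voxelized electron density: a cube "particle" centred in a 128^3 box
rho = np.zeros((128, 128, 128), dtype=np.float32)
rho[48:80, 48:80, 48:80] = 1.0

F = np.fft.fftshift(np.fft.fftn(rho))  # scattering amplitude on the reciprocal grid
I = np.abs(F) ** 2                     # intensity I(qx, qy, qz)

# A radial (isotropic) average over |q| yields the 1D pattern I(q);
# the box size sets the q resolution, the voxel size the maximum q.
```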
This tutorial is aimed at developers who would like to develop workflows with Jobflow. This could include contributions to atomate2 and quacc. Jobflow workflows can also be executed with Fireworks on supercomputers.
This tutorial includes information on how to write a job for Jobflow, how to connect jobs into a workflow (including dynamic features), and how to store job results in databases. The structure of the workflow is inspired by workflows that have been developed for atomate2 and quacc.
This tutorial is also connected to Google Colab so that you can execute the code via their services.
Please access the tutorial here: https://jageo.github.io/Advanced_Jobflow_Tutorial/intro.html
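A minimal sketch of the pattern the tutorial covers, using Jobflow's public API (the toy `add` job is illustrative):

```python
from jobflow import job, Flow
from jobflow.managers.local import run_locally

@job
def add(a, b):
    """A job: executed later; its output can feed downstream jobs."""
    return a + b

first = add(1, 2)
second = add(first.output, 3)      # connect jobs via output references
flow = Flow([first, second], name="toy workflow")

responses = run_locally(flow)      # run and store results in a JobStore
print(responses)
```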
IsoCor
(2022)
Despite the numerous advantages offered by the hyphenation of chromatography and electrokinetic separation methods with multicollector (MC) ICP-MS for isotope analysis, the main limitation of such systems is the decrease in precision and increase in uncertainty due to the generation of short transient signals. To minimize this limitation, most authors compare several isotope ratio calculation methods and establish a multi-step data processing routine based on the precision and accuracy of the methods. However, to the best of our knowledge, there is no universal data processing tool available that incorporates all important steps of the treatment of transient signals. Thus, we introduce a data processing application (App), IsoCor, that facilitates automatic calculation of isotope ratios from transient signals and eases selection of the most suitable method. The IsoCor App performs baseline subtraction, peak detection, mass bias correction, isotope ratio calculation and delta calculation. The feasibility and reliability of the App was proven by reproducing the results from isotope analysis of three elements (neodymium, mercury and sulfur) measured on-line via hyphenated systems. The IsoCor App provides traceability of the results to ensure quality control of the analysis.
This data set contains three different data types obtained from concrete specimens. For each specimen, the rebound numbers, ultrasonic data (ultrasonic velocity, time of flight), and destructive concrete strength are given. Two kinds of specimen geometries were tested: cubes and drilled cores. The files are labeled according to the specimen geometry as "cube" or "core" and the type of measurement data as "compressive_strength", "rn_R" and "rn_Q" for rebound numbers as well as "us" for ultrasonic data. The ultrasonic data were generated by six independent laboratories, the rebound numbers by five independent laboratories and the destructive tests by one laboratory. The designation of each specimen establishes the relationship between the different data types.
"This data set contains three different data types obtained from concrete specimens. For each specimen, the rebound numbers, ultrasonic data (ultrasonic velocity, time of flight), and destructive concrete strength are given. Two kind of specimen geometries were tested: cubes and drilled cores. The files are labeled according to the specimen geometry as "cube" or "core" and the type of measurement data as "compressive_strength", "rn_R" and "rn_Q" for rebound numbers as well as "us" for ultrasonic data. The ultrasonic data were generated by six independent laboratories, the rebound numbers by five independent laboratories and the destructive tests by one laboratory. The designation of each specimen establishes the relationship between the different data types."
Gas chromatography using atmospheric pressure chemical ionization coupled to mass spectrometry (GC/APCI-MS) is an emerging metabolomics platform, providing much-enhanced capabilities for structural mass spectrometry as compared to traditional electron ionization (EI)-based techniques. To exploit the potential of GC/APCI-MS for more comprehensive metabolite annotation, a major bottleneck in metabolomics, we here present the novel R-based tool InterpretMSSpectrum assisting in the common task of annotating and evaluating in-source mass spectra as obtained from typical full-scan experiments. After passing a list of mass-intensity pairs, InterpretMSSpectrum locates the molecular ion (M0), fragment, and adduct peaks, calculates their most likely sum formula combination, and graphically summarizes results as an annotated mass spectrum. Using (modifiable) filter rules for the commonly used methoximated-trimethylsilylated (MeOx-TMS) derivatives, covering elemental composition, typical substructures, neutral losses, and adducts, InterpretMSSpectrum significantly reduces the number of sum formula candidates, minimizing manual effort for postprocessing candidate lists. We demonstrate the utility of InterpretMSSpectrum for 86 in-source spectra of derivatized standard compounds, in which rank-1 sum formula assignments were achieved in 84% of the cases, compared to only 63% when using mass and isotope information on the M0 alone. We further use, for the first time, automated annotation to evaluate the purity of pseudospectra generated by different metabolomics preprocessing tools, showing that automated annotation can serve as an integrative quality measure for peak picking/deconvolution methods. As an R package, InterpretMSSpectrum integrates flexibly into existing metabolomics pipelines and is freely available from CRAN (https://cran.r-project.org/).
This dataset contains raw data resulting from Impact-Echo measurements at the reference concrete block "Radarplatte", located at BAM (German Federal Institute for Materials Research and Testing). This specimen has been described in detail by Niederleithinger et al. (2021), who applied muon tomography, ultrasonic echo measurements, radar and X-ray laminography to visualize its internal structure.
The Impact-Echo method is based on the excitation of the zero-group-velocity frequency of the first symmetric Lamb mode of a plate-like structure, in order to assess its thickness. Numerous publications elaborate on Impact-Echo theory; examples are Gibson and Popovics 2005, Schubert and Köhler 2008, and Abraham and Popovics 2010.
The measurements have been conducted using a setup that contains only commercially available components. The setup consists of an Olson CTG-2 concrete thickness gauge (Olson Instruments, USA) for actuation and sensing and an 8-bit NI USB-5132 digital storage oscilloscope (National Instruments, USA) combined with the Echolyst software (Schweizerischer Verein für technische Inspektionen (SVTI), Switzerland) for data acquisition.
Measurements were conducted using a grid of 23x23 points with a spacing of 50 mm. At each point 8192 samples were recorded at a sampling rate of 1 MS/s.
The dataset contains the (X,Y) location in mm of the individual measurement points as well as the raw measurement data at those points.
The data is provided in the formats *.mir/*.mhdr (Echolyst), *.npy (Python) and *.mat (Matlab) and *.csv to ease the import in various post-processing tools.
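For a first look at the data in Python, something like the following should work (the file name and the assumption that traces are stored row-wise are illustrative; the Impact-Echo thickness relation is quoted from the literature cited above):

```python
import numpy as np

traces = np.load("radarplatte_ie.npy")  # hypothetical file name; assumed shape (n_points, 8192)
fs = 1e6                                # sampling rate of 1 MS/s (from the description)

spectra = np.abs(np.fft.rfft(traces, axis=-1))
freqs = np.fft.rfftfreq(traces.shape[-1], d=1.0 / fs)
f_peak = freqs[np.argmax(spectra, axis=-1)]  # dominant frequency per measurement point

# Impact-Echo thickness estimate: d = beta * c_p / (2 * f_peak),
# with beta ~ 0.96 for plates and c_p the P-wave velocity of the concrete.
```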
This dataset contains raw data resulting from Impact-Echo measurements at the reference concrete block "IE Platte", located at BAM (German Federal Institute for Materials Research and Testing).
The specimen contains three polystyrene slabs and one polyethylene foil that act as reflectors. The specimen was produced in a three-step process. First, the base plate was cast. Second, the reflectors were taped to the base plate. Finally, the upper layer was cast on top of the base plate and reflectors. A drawing is contained in the dataset.
The Impact-Echo method is based on the excitation of the zero-group-velocity frequency of the first symmetric Lamb mode of a plate-like structure, in order to assess its thickness. Numerous publications elaborate on Impact-Echo theory; examples are Gibson and Popovics 2005, Schubert and Köhler 2008, and Abraham and Popovics 2010.
The measurements have been conducted using a setup that contains only commercially available components. The setup consists of an Olson CTG-2 concrete thickness gauge (Olson Instruments, USA) for actuation and sensing and an 8-bit NI USB-5132 digital storage oscilloscope (National Instruments, USA) combined with the Echolyst software (Schweizerischer Verein für technische Inspektionen (SVTI), Switzerland) for data acquisition.
Measurements were conducted using a grid of 29x29 points with a spacing of 50 mm. At each point 8192 samples were recorded at a sampling rate of 1 MS/s.
The dataset contains the (X,Y) location in mm of the individual measurement points as well as the raw measurement data at those points.
The data is provided in the formats *.mir/*.mhdr (Echolyst), *.npy (Python) and *.mat (Matlab) and *.csv to ease the import in various post-processing tools.
This sequence of X-ray images shows how one of the most common Italian moka pots actually works! The sequence starts with a completely prepared moka pot (water in the bottom part, coffee in the middle and the hot plate on). During the process the water starts to boil and the steam pressure pushes the hot water through the coffee into the basin at the top of the pot.
This video sequence and additional explanations can also be found on Wikipedia.
Technical drawings and documents for building a compact, heated, vacuum compatible flow-through sample holder. This holder is in use at the BAM MOUSE instrument as well as at the I22 beamline at the Diamond Light Source (see references for instrument details).
This holder has several features:
- The holder can be used in vacuum environments as well as in atmosphere
- It has two G 1/4" UNF fittings to attach HPLC tubing for (optionally) flowing a medium through the sample cell
- There are two additional (unflowed) sample positions for backgrounds and calibrants, held at the same temperature
- The low-mass design coupled with a 250W heating element can achieve heating rates of 1 degree C per second, when coupled (for example) with an Omron E5CC PID controller.
- The sample holder insert can be made from various materials depending on the application. Sealing the sample from the vacuum can be achieved using kapton, teflon or Magic tape, depending on the temperature requirements. The inlet and outlet holes will need to be punctured with a needle to enable flow.
- Large exit cones ensure a clear exit angle of at least 45 degrees two theta.
- It has been tested with temperatures up to 400 degrees C.
- Compression area has been raised and polished to ensure a good vacuum seal.
Python Materials Genomics (pymatgen) is a robust materials analysis code that defines classes for structures and molecules with support for many electronic structure codes. This open-source software package powers the Materials Project.
In this particular contribution, the handling of orbital-resolved "ICOHPLIST.lobster" files from Lobster was implemented in the software package (GitHub handle: @JaGeo).
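A short sketch of reading such a file with pymatgen (class and attribute names as in recent pymatgen versions; the file path is illustrative):

```python
from pymatgen.io.lobster import Icohplist

# Parse an (orbital-resolved) ICOHPLIST.lobster file written by Lobster
icohplist = Icohplist(filename="ICOHPLIST.lobster")

for label, entry in icohplist.icohplist.items():
    print(label, entry)  # bond label -> bond length, number of bonds, ICOHP values
```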
Related work
Laboratory Study:
Combining Signal Features of Ground-Penetrating Radar to Classify Moisture Damage in Layered Building Floors
https://doi.org/10.3390/app11198820
On-Site Study:
TBA
Doctoral Thesis:
Non-destructive classification of moisture deterioration in layered building floors using ground penetrating radar
https://doi.org/10.14279/depositonce-19306
Measurement Parameters
The GPR measurements were carried out with the SIR 20 from GSSI and a 2 GHz antenna pair (bandwidth 1 GHz to 3 GHz) in common-offset configuration. Each B-Scan consists of N A-Scans, each including 512 samples of an 11 ns time window. Survey lines were recorded with 250 A-Scans/meter, which equals a 4 mm spacing between A-Scans. No gains were applied.
Folder Description:
Lab_dry, Lab_insulDamage, Lab_screedDamage
- each contains 168 measurements (B-Scans) in .csv format on 84 dry floors, floors with insulation damage, and floors with screed damage, respectively.
- each floor setup was measured twice on two orthogonal survey lines, indicated by _Line1_ and _Line2_ in the file name.
- the file names encode the building floor setup e.g. CT50XP100 describes a 50 mm cement screed with 100 mm extruded polystyrene below
- the material codes are
CT: cement screed, CA: anhydrite screed, EP: expanded polystyrene, XP: extruded polystyrene, GW: glass wool, PS: perlites
further information can be found in the publication https://doi.org/10.3390/app11198820
OnSite_
- 5 folders containing B-Scans on 5 different practical moisture damages
- the building floor setup is encoded as in the lab data, with an additional measurement point number at the start and a damage case annotation at the end of the file name: _dry, _insulationDamage and _screedDamage
File Description:
B-Scans, Measurement files - no header
- dimension: 512 x N data points, with N being the number of A-Scans, each including 512 samples of an 11 ns time window
- survey lines were recorded with 250 A-Scans/meter, which equals a 4 mm spacing between A-Scans
Moisture References
- Moist_Reference of On-Site Locations includes the columns MeasPoint: measurement point; wt%Screed: moisture content of the screed layer in mass percent; wt%Insul: moisture content of the insulation layer in mass percent. References were obtained by drilling cores with 68 mm diameter in the center of each survey line.
- Moist_Reference_Screed of the lab data includes the columns Screed: screed material and thickness in mm; wt%Screed: moisture content of the screed layer in mass percent
- Moist Reference_Insul of the lab data includes the columns Insulation: insulation material and thickness in mm; water addition in l: water added to the insulation layer in liters; V%Insulation: water added to the insulation layer in volume percent; RH%: resulting relative humidity in the insulation layer during the measurement. These references are only available for lab measurements on insulation damages.
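For a first look at a lab B-Scan in Python (the file name is illustrative; comma-separated values without a header are assumed from the description above):

```python
import numpy as np
import matplotlib.pyplot as plt

# 512 rows (time samples of the 11 ns window) x N columns (A-Scans, 4 mm apart)
bscan = np.loadtxt("Lab_dry/CT50XP100_Line1_01.csv", delimiter=",")  # illustrative name

extent = [0.0, bscan.shape[1] * 0.004, 11.0, 0.0]  # x in metres, time in ns (0 at top)
plt.imshow(bscan, aspect="auto", extent=extent, cmap="gray")
plt.xlabel("position along survey line (m)")
plt.ylabel("two-way travel time (ns)")
plt.show()
```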
This is the stable version of the full-notch creep test ontology (OntoFNCT) that ontologically represents the full-notch creep test. OntoFNCT has been developed in accordance with the corresponding test standard ISO 16770:2019-09 Plastics - Determination of environmental stress cracking (ESC) of polyethylene - Full-notch creep test (FNCT).
The OntoFNCT provides conceptualizations that are supposed to be valid for the description of full-notch creep tests and associated data in accordance with the corresponding test standard. By using OntoFNCT for storing full-notch creep test data, all data will be well structured and based on a common vocabulary agreed on by an expert group (generation of FAIR data), which is meant to lead to enhanced data interoperability. This comprises several data categories such as primary data, secondary data and metadata. Data will be human and machine readable. The usage of OntoFNCT facilitates data retrieval and downstream usage. Due to a close connection to the mid-level PMD core ontology (PMDco), the interoperability of full-notch creep test data is enhanced, and querying in combination with other aspects and data within the broad field of materials science and engineering (MSE) is facilitated.
The class structure of OntoFNCT forms a comprehensible semantic layer for unified storage of data generated in a full-notch creep test, including the possibility to record data from analysis and re-evaluation. Furthermore, extensive metadata allows data quality and reliability to be assessed. Following the open world assumption, object properties are deliberately kept unrestrictive and sparse.
The dataset provided in this repository comprises data obtained from a series of characterization tests performed on a sheet of typical S355 (material number: 1.0577) structural steel (designation of steel according to DIN EN 10025-2:2019). The tests include methods for the determination of mechanical properties, e.g. the tensile test, the Charpy test and the sonic resonance test. This dataset is intended to be extended by the inclusion of data obtained from further test methods. Therefore, the entire dataset (concept DOI) comprises several parts (versions), each of which is addressed by a unique version DOI.
The data were generated in the frame of the digitization project Innovation Platform MaterialDigital (PMD) which, amongst other activities, aims to store data in a semantically and machine-understandable way. Therefore, the focus is on data structuring and data formats in addition to aspects in the field of materials science and engineering (MSE). Hence, this data is supposed to provide reference data as a basis for experimental data inclusion, conversion and structuring (data management and processing) that leads to semantic expressivity, as well as for MSE experts generally interested in the material properties and knowledge.
FenicsXConcrete
(2023)
This is the repository of all experimental raw data used in the Scientific Reports publication "Specific adsorption sites and conditions derived by thermal decomposition of activated carbons and adsorbed carbamazepine" by Daniel Dittmann, Paul Eisentraut, Caroline Goedecke, Yosri Wiesner, Martin Jekel, Aki Sebastian Ruhl, and Ulrike Braun.
It includes
- overview_measurements.xlsx and overview_measurements.ods containing a list of all TGA experiments (TGA, TGA-FTIR, TED-GC-MS, and ramp-kinetics)
- TED-GC-MS.zip containing gas chromatography-mass spectrometry experiment files for ChemStation and OpenChrom
- TGA.zip containing thermogravimetric analyses raw data on evolved gas analyses experiments (TGA-FTIR and TED-GC-MS)
- TGA_kinetics.zip containing thermogravimetric analyses raw data on decomposition kinetic experiments (ramp-kinetics)
- TGA-FTIR.zip containing Fourier-transform infrared spectroscopy series files for OMNIC
- XRF.zip containing X-ray fluorescence data on elemental composition
These data sets serve as models for calculating the specific surface area (BET method) using gas sorption in accordance with ISO 9277.
The present measurements were carried out with nitrogen at 77 Kelvin and argon at 87 Kelvin.
It is recommended to use the following values for the molecular cross-sectional area:
Nitrogen: 0.1620 nm²
Argon: 0.1420 nm²
Titanium dioxides certified with nitrogen sorption and additionally measured with argon for research purposes were used as sample material.
The resulting data sets are intended to serve as comparative data for users' own measurements and show the differences in sorption behaviour and evaluation between nitrogen and argon.
These data are stored in the universal AIF format (adsorption information file), which allows flexible use of the data.
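For reference, the linearized BET evaluation behind ISO 9277 can be sketched as follows (the cross-sectional areas are the values given above; the input arrays and the relative-pressure window are up to the user):

```python
import numpy as np

N_A = 6.02214076e23                            # Avogadro constant, 1/mol
SIGMA = {"N2": 0.1620e-18, "Ar": 0.1420e-18}   # cross-sectional areas, m^2

def bet_surface_area(p_rel, n_ads, gas="N2"):
    """Specific surface area from an adsorption isotherm.
    p_rel: relative pressures p/p0 (typically 0.05-0.30);
    n_ads: adsorbed amount in mol/g at those pressures."""
    y = 1.0 / (n_ads * (1.0 / p_rel - 1.0))    # linearized BET transform
    slope, intercept = np.polyfit(p_rel, y, 1)
    n_m = 1.0 / (slope + intercept)            # monolayer capacity, mol/g
    c_bet = 1.0 + slope / intercept            # BET C constant (should be positive)
    return n_m * N_A * SIGMA[gas], c_bet       # surface area in m^2/g, and C
```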
This is a set of use examples for the HDF5Translator framework. This framework lets you translate measurement files into a different (e.g. NeXus-compatible) structure, with some optional checks and conversions on the way. For an in-depth look at what it does, there is a blog post here.
The use examples provided herein are each accompanied by the measurement data necessary to test and replicate the conversion. The README.md files in each example show the steps necessary to do the conversion for each.
We encourage those who have used or adapted one or more of these examples to create their own conversion to get in touch with us, so that we may add your example to the set.
In the field of computational science and engineering, workflows often entail the application of various software, for instance for simulation or for pre- and postprocessing. Typically, these components have to be combined in arbitrarily complex workflows to address a specific research question. In order for peer researchers to understand, reproduce and (re)use the findings of a scientific publication, several challenges have to be addressed. For instance, the employed workflow has to be automated and information on all used software must be available for a reproduction of the results. Moreover, the results must be traceable and the workflow documented and readable to allow for external verification and greater trust. In this paper, existing workflow management systems (WfMSs) are discussed regarding their suitability for describing, reproducing and reusing scientific workflows. To this end, a set of general requirements for WfMSs were deduced from user stories that we deem relevant in the domain of computational science and engineering. On the basis of an exemplary workflow implementation, publicly hosted at GitHub, a selection of different WfMSs is compared with respect to these requirements, to support fellow scientists in identifying the WfMSs that best suit their requirements.