Research Datasets of BAM
With increasing wind energy capacity and the growing number of installed wind turbines, new inspection techniques are being explored to examine wind turbine rotor blades, especially during operation. A common result of surface damage phenomena (such as leading-edge erosion) is the premature transition from laminar to turbulent flow on the surface of rotor blades. In the KI-VISIR (Künstliche Intelligenz Visuell und Infrarot Thermografie – Artificial Intelligence, Visual and Infrared Thermography) project, infrared thermography is used as an inspection tool to capture so-called thermal turbulence patterns (TTP) that result from such surface contamination or damage. To complement the thermographic inspections, high-resolution photography is performed to visualise in detail the sites where these turbulence patterns initiate. A convolutional neural network (CNN) was developed and used to detect and localise the turbulence patterns. A unique dataset combining the thermograms and visual images of operational wind turbine rotor blades is provided, along with simplified annotations for the turbulence patterns. Additional tools are provided so that users can work with the data using only basic Python programming skills.
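As a sketch of the kind of minimal Python usage the dataset targets (the file names and the bounding-box annotation format are assumptions for illustration only; the dataset's own tools and documentation are authoritative):

```python
# Hypothetical sketch: overlay bounding-box annotations on a thermogram.
# File names and the CSV columns (x, y, w, h) are assumed, not taken from the dataset.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import pandas as pd

thermogram = plt.imread("thermogram_0001.png")   # assumed file name
boxes = pd.read_csv("annotations_0001.csv")      # assumed columns: x, y, w, h

fig, ax = plt.subplots()
ax.imshow(thermogram, cmap="inferno")
for _, b in boxes.iterrows():
    ax.add_patch(patches.Rectangle((b.x, b.y), b.w, b.h,
                                   edgecolor="cyan", facecolor="none"))
ax.set_title("Thermal turbulence patterns (TTP)")
plt.show()
```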
The RILEM TC 304-ADC has set up a large interlaboratory study on the mechanical properties of 3D printed concrete (ILS-mech). The study was prepared in 2022 by a preparation group, leading to a Study Plan that the TC approved on 29 November 2022 (https://doi.org/10.14459/2023mp1705940). The ILS-mech was performed in 2023. The data were collected using a pre-prepared spreadsheet template. For data management, a database derived from this template was set up in openBIS. The underlying Postgres database of openBIS was exported to the SQLite database published here, which can be shared without maintaining a server. The structure of the database is described in (doi). The results are discussed in three associated papers focusing on the overall outcomes and evaluation of the procedures (doi), the compressive test results (doi), and the tensile test results (doi).
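Because the export is a plain SQLite file, it can be inspected with standard tooling and no openBIS installation. A minimal sketch (the file name is an assumption; the actual table layout is documented in the associated paper):

```python
# Minimal sketch: list the tables of the published SQLite export.
import sqlite3

con = sqlite3.connect("ils_mech.sqlite")  # assumed file name
cur = con.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
for (table,) in cur.fetchall():
    print(table)
con.close()
```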
These data sets serve as models for calculating the specific surface area (BET method) using gas sorption in accordance with ISO 9277.
The present measurements were carried out with nitrogen at 77 K and argon at 87 K.
It is recommended to use the following values for the molecular cross-sectional areas:
Nitrogen: 0.1620 nm²
Argon: 0.1420 nm²
Expected specific surface area for nitrogen (BET): 140 to 154 m²/g
Expected specific surface area for argon (BET): 129 to 135 m²/g
Titanium dioxides certified by nitrogen sorption and additionally measured with argon for research purposes were used as sample material.
The resulting data sets are intended to serve as comparative data for users' own measurements and to show the differences in sorption behaviour and evaluation between nitrogen and argon.
These data are stored in the universal AIF (adsorption information file) format, which allows flexible use of the data.
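For users who want to reproduce the expected values from the raw isotherms, here is a minimal sketch of the linearized BET evaluation per ISO 9277 (the 0.05–0.30 relative-pressure range and variable names are conventional choices, not prescriptions from this dataset):

```python
# Minimal sketch of a BET evaluation (linearized form), assuming the isotherm is
# available as relative pressure p/p0 and adsorbed amount n in mol/g.
import numpy as np

SIGMA_N2 = 0.1620e-18  # molecular cross-sectional area of N2 in m^2 (0.1620 nm^2)
SIGMA_AR = 0.1420e-18  # molecular cross-sectional area of Ar in m^2 (0.1420 nm^2)
N_A = 6.02214076e23    # Avogadro constant in 1/mol

def bet_surface_area(p_rel, n_ads, sigma):
    """Fit y = (C-1)/(n_m*C) * x + 1/(n_m*C) and return S_BET in m^2/g."""
    mask = (p_rel >= 0.05) & (p_rel <= 0.30)   # usual BET pressure range
    x = p_rel[mask]
    y = x / (n_ads[mask] * (1.0 - x))          # BET transform
    slope, intercept = np.polyfit(x, y, 1)
    n_m = 1.0 / (slope + intercept)            # monolayer capacity in mol/g
    return n_m * N_A * sigma                   # specific surface area in m^2/g

# e.g. S = bet_surface_area(p_rel, n_ads, SIGMA_N2)
```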
These data sets serve as models for calculating the specific surface area (BET method) using gas sorption in accordance with ISO 9277.
The present measurements were carried out with nitrogen at 77 K and argon at 87 K.
It is recommended to use the following values for the molecular cross-sectional areas:
Nitrogen: 0.1620 nm²
Argon: 0.1420 nm²
Expected specific surface area for nitrogen (BET): 24 to 25 m²/g
Expected specific surface area for argon (BET): 20 m²/g
Titanium dioxides certified by nitrogen sorption and additionally measured with argon for research purposes were used as sample material.
The resulting data sets are intended to serve as comparative data for users' own measurements and to show the differences in sorption behaviour and evaluation between nitrogen and argon.
These data are stored in the universal AIF (adsorption information file) format, which allows flexible use of the data.
This dataset accompanies the following publication:
Hülagü, D., Tobias, C., Dao, R., Komarov, P., Rurack, K., Hodoroaba, V.-D., Towards 3D determination of the surface roughness of core-shell microparticles as a routine quality control procedure by scanning electron microscopy. Sci. Rep. 14, 17936 (2024), https://doi.org/10.1038/s41598-024-68797-7.
It contains SEM and AFM-in-SEM images of polystyrene (PS) core particles, polystyrene-iron oxide (PS/Fe3O4) core-shell particles, and polystyrene-iron oxide-silica (PS/Fe3O4/SiO2) core-shell-shell particles. Please refer to the publication and its supporting information for more details on the acquisition and contents of the dataset, as well as to the GitHub repository at https://github.com/denizhulagu/roughness-analysis-by-electron-microscopy.
The investigated particles were produced at BAM laboratories as previously described in:
Hülagü, D. et al. Generalized analysis approach of the profile roughness by electron microscopy with the example of hierarchically grown polystyrene–iron oxide–silica core–shell–shell particles. Adv. Eng. Mater. 24, 2101344, https://doi.org/10.1002/adem.202101344 (2022).
Tobias, C., Climent, E., Gawlitza, K. & Rurack, K. Polystyrene microparticles with convergently grown mesoporous silica shells as a promising tool for multiplexed bioanalytical assays. ACS Appl. Mater. Interfaces 13, 207, https://doi.org/10.1021/acsami.0c17940 (2020).
Spectroscopic ellipsometry was used to determine the thickness and dielectric function of an aluminium nitride (AlN) layer on a Si wafer. The layer thickness was determined to be 170 nm. The layer was provided by AIXTRON and manufactured by means of MOVPE.
The data were created using an M2000DI spectroscopic ellipsometer from Woollam Co., Inc. The analysis was done using the CompleteEASE software. The model used is a multi-peak oscillator model for the AlN layer.
The data are consistent with common database values for the material AlN.
The DACHS (Database for Automation, Characterization and Holistic Synthesis) project aims to create completely traceable experimental data, covering syntheses, measurements, analyses, and interpretations. DACHS_MOFs focuses on the synthesis and characterisation of metal-organic frameworks across multiple automation-assisted experimental series (AutoMOFs), with the overall goal of producing reproducible MOF samples through tracking of the synthesis parameters.
DACHS_MOFs is simultaneously used to test the DACHS principles.
This upload contains synthesis data from AutoMOFs_3 in HDF5 format (.h5). Each .h5 file contains detailed information on the chemical, experimental, and synthesis parameters used during the synthesis of a single AutoMOF sample.
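A minimal sketch for exploring one of the .h5 files with h5py (the file name is an assumption; the actual internal layout is documented in the upload itself):

```python
# Minimal sketch: walk one AutoMOFs .h5 file and print its group/dataset tree.
import h5py

with h5py.File("AutoMOFs_3_sample_001.h5", "r") as f:   # assumed file name
    def show(name, obj):
        kind = "group" if isinstance(obj, h5py.Group) else "dataset"
        print(f"{kind}: {name}", dict(obj.attrs))
    f.visititems(show)
```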
The DACHS (Database for Automation, Characterization and Holistic Synthesis) project aims to create completely traceable experimental data, covering syntheses, measurements, analyses, and interpretations. DACHS_MOFs focuses on the synthesis and characterisation of metal-organic frameworks across multiple automation-assisted experimental series (AutoMOFs), with the overall goal of producing reproducible MOF samples through tracking of the synthesis parameters.
DACHS_MOFs is simultaneously used to test the DACHS principles.
This upload contains synthesis data from AutoMOFs_2 in HDF5 format (.h5). Each .h5 file contains detailed information on the chemical, experimental, and synthesis parameters used during the synthesis of a single AutoMOF sample.
Fatigue test ontology (FTO)
(2024)
The Fatigue Test Ontology (FTO) has been developed to represent the fatigue testing process, testing equipment requirements, test piece characteristics, and the related testing parameters and their measurement procedures according to the DIN EN ISO 12106 standard.
Version info:
V2 was developed using the PROV-O + PMDco top-level ontologies.
V3 was developed using the BFO + IOF top-level ontologies.
Repositories:
GitLab: https://gitlab.com/kupferdigital/process-graphs/lcf-test
GitHub: https://github.com/HosseinBeygiNasrabadi/Fatigue-Test-Ontology-FTO-
MatPortal: https://matportal.org/ontologies/FTO
IndustryPortal: https://industryportal.enit.fr/ontologies/FTO
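A minimal sketch for loading the ontology with rdflib and listing its labelled classes (the local file name and the RDF/XML serialization are assumptions; download the ontology from one of the repositories listed above first):

```python
# Minimal sketch: inspect an OWL ontology such as FTO with rdflib.
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("fto.owl", format="xml")          # assumed file name and serialization
for cls in g.subjects(RDF.type, OWL.Class):
    for label in g.objects(cls, RDFS.label):
        print(cls, "-", label)
```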
The Tensile Stress Relaxation Test Ontology (TSRTO) has been developed to represent the tensile stress relaxation testing process, testing equipment requirements, test piece characteristics, and the related testing parameters and their measurement procedures according to the DIN EN ISO 10319-1 standard.
Version info:
V1 was developed using the BFO + CCO top-level ontologies.
V3 was developed using the PROV-O + PMDco top-level ontologies.
Repositories:
GitLab: https://gitlab.com/kupferdigital/process-graphs/relaxation-test
GitHub: https://github.com/HosseinBeygiNasrabadi/Tensile-Stress-Relaxation-Test-Ontology-TSRTO
MatPortal: https://matportal.org/ontologies/TSRTO
IndustryPortal: https://industryportal.enit.fr/ontologies/TSRTO
Tensile test ontology (TTO)
(2024)
The Tensile Test Ontology (TTO) has been developed to represent the tensile testing process, testing equipment requirements, test piece characteristics, and the related testing parameters and their measurement procedures according to the DIN EN ISO 6892-1 standard.
Version info:
V2 was developed using the BFO + CCO top-level ontologies.
V3 was developed using the PROV-O + PMDco top-level ontologies.
Repositories:
GitLab: https://gitlab.com/kupferdigital/process-graphs/tensile-test
GitHub: https://github.com/HosseinBeygiNasrabadi/Tensile-Test-Ontology-TTO-
MatPortal: https://matportal.org/ontologies/TTO
IndustryPortal: https://industryportal.enit.fr/ontologies/TTO
Vickers test ontology (VTO)
(2024)
The Vickers Test Ontology (VTO) has been developed to represent the Vickers testing process, testing equipment requirements, test piece characteristics, and the related testing parameters and their measurement procedures according to the DIN EN ISO 6507-1 standard.
Version info:
V2 was developed using the BFO + CCO top-level ontologies.
Repositories:
GitLab: https://gitlab.com/kupferdigital/process-graphs/vickers-hardness-test
GitHub: https://github.com/HosseinBeygiNasrabadi/Vickers-Test-Ontology-VTO-
MatPortal: https://matportal.org/ontologies/VTO
IndustryPortal: https://industryportal.enit.fr/ontologies/VTO
Brinell test ontology (BTO)
(2024)
The Brinell Test Ontology (BTO) has been developed to represent the Brinell testing process, testing equipment requirements, test piece characteristics, and the related testing parameters and their measurement procedures according to the DIN EN ISO 6506-1 standard.
Version info:
V2 was developed using the BFO + CCO top-level ontologies.
V3 was developed using the EMMO + CHAMEO top-level ontologies.
V4 was developed using the PROV-O + PMDco top-level ontologies.
V5 was developed using the BFO + IOF top-level ontologies.
Repositories:
GitLab: https://gitlab.com/kupferdigital/process-graphs/brinell-hardness-test
GitHub: https://github.com/HosseinBeygiNasrabadi/Brinell-Test-Ontology-BTO-
MatPortal: https://matportal.org/ontologies/BTO
IndustryPortal: https://industryportal.enit.fr/ontologies/BTO
The DACHS (Database for Automation, Characterization and Holistic Synthesis) project aims to create completely traceable experimental data, covering syntheses, measurements, analyses, and interpretations. DACHS_MOFs focuses on the synthesis and characterisation of metal-organic frameworks across multiple automation-assisted experimental series (AutoMOFs), with the overall goal of producing reproducible MOF samples through tracking of the synthesis parameters.
DACHS_MOFs is simultaneously used to test the DACHS principles.
This upload contains synthesis data from AutoMOFs_1 in HDF5 format (.h5). Each .h5 file contains detailed information on the chemical, experimental, and synthesis parameters used during the synthesis of a single AutoMOF sample.
This is the stable version of the full-notch creep test ontology (OntoFNCT) that ontologically represents the full-notch creep test. OntoFNCT has been developed in accordance with the corresponding test standard ISO 16770:2019-09 Plastics - Determination of environmental stress cracking (ESC) of polyethylene - Full-notch creep test (FNCT).
The OntoFNCT provides conceptualizations that are supposed to be valid for the description of full-notch creep tests and associated data in accordance with the corresponding test standard. By using OntoFNCT for storing full-notch creep test data, all data will be well structured and based on a common vocabulary agreed on by an expert group (generation of FAIR data), which is meant to lead to enhanced data interoperability. This comprises several data categories such as primary data, secondary data and metadata. Data will be human- and machine-readable. The usage of OntoFNCT facilitates data retrieval and downstream usage. Due to a close connection to the mid-level PMD core ontology (PMDco), the interoperability of full-notch creep test data is enhanced, and querying in combination with other aspects and data within the broad field of materials science and engineering (MSE) is facilitated.
The class structure of OntoFNCT forms a comprehensible semantic layer for unified storage of data generated in a full-notch creep test, including the possibility to record data from analysis and re-evaluation. Furthermore, the extensive metadata allow an assessment of data quality and reliability. Following the open-world assumption, object properties are deliberately kept sparse and only loosely restrictive.
In the field of computational science and engineering, workflows often entail the application of various software, for instance, for simulation or pre- and postprocessing. Typically, these components have to be combined in arbitrarily complex workflows to address a specific research question. In order for peer researchers to understand, reproduce and (re)use the findings of a scientific publication, several challenges have to be addressed. For instance, the employed workflow has to be automated and information on all used software must be available for a reproduction of the results. Moreover, the results must be traceable and the workflow documented and readable to allow for external verification and greater trust. In this paper, existing workflow management systems (WfMSs) are discussed regarding their suitability for describing, reproducing and reusing scientific workflows. To this end, a set of general requirements for WfMSs was deduced from user stories that we deem relevant in the domain of computational science and engineering. On the basis of an exemplary workflow implementation, publicly hosted on GitHub, a selection of different WfMSs is compared with respect to these requirements, to support fellow scientists in identifying the WfMSs that best suit their requirements.
Multiscale modeling of linear elastic heterogeneous structures via localized model order reduction
(2024)
In this paper, a methodology for fine-scale modeling of large-scale linear elastic structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. The influence of the fine scale on the coarse scale is modelled by the use of an additive split of the displacement field, addressing applications without a clear scale separation. Local reduced spaces are constructed by solving an oversampling problem with random boundary conditions. Herein, we inform the boundary conditions by a global reduced problem and compare our approach using physically meaningful correlated samples with existing approaches using uncorrelated samples. The local spaces are designed such that the local contribution of each subdomain can be coupled in a conforming way, which also preserves the sparsity pattern of standard finite element assembly procedures. Several numerical experiments show the accuracy and efficiency of the method, as well as its potential to reduce the size of the local spaces and the number of training samples compared to uncorrelated sampling.
This is a set of use examples for the HDF5Translator framework. This framework lets you translate measurement files into a different (e.g. NeXus-compatible) structure, with some optional checks and conversions on the way. For an in-depth look at what it does, see the accompanying blog post.
The use examples provided herein are each accompanied by the measurement data necessary to test and replicate the conversion. The README.md files in each example show the steps necessary to do the conversion for each.
We encourage those who have used or adapted one or more of these examples to create their own conversion to get in touch with us, so that we may add your example to the set.
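To make the idea of such a translation concrete, here is a concept-only sketch in plain h5py (this is not the HDF5Translator API; all paths and the unit attribute are illustrative, and the real per-example steps are in each README.md):

```python
# Concept sketch: copy a source dataset into a NeXus-style target path,
# attaching a unit attribute on the way. Paths are assumptions.
import h5py

with h5py.File("measurement.h5", "r") as src, h5py.File("nexus_out.h5", "w") as dst:
    data = src["/raw/detector/counts"][...]                       # assumed source path
    out = dst.create_dataset("/entry1/instrument/detector/data", data=data)
    out.attrs["units"] = "counts"
```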
Optical constants of In2O3-SnO2 (Indium tin oxide, ITO)
Minenkov et al. 2024: on glass; n,k 0.191–1.69 µm
Optical constants of In2O3-SnO2 (Indium tin oxide, ITO)
Minenkov et al. 2024: on Si wafer, top; n,k 0.191–1.69 µm
Optical constants of In2O3-SnO2 (Indium tin oxide, ITO)
Minenkov et al. 2024: on Si wafer, bottom; n,k 0.191–1.69 µm
Software-driven scientific workflows are often characterized by a complex interplay of various pieces of software executed in a particular order. The output of a computational step may serve as input to a subsequent computation, which requires the steps to be processed sequentially with a proper mapping of outputs to inputs. Other computations are independent of each other and can be executed in parallel. Thus, one of the main tasks of a workflow tool is a proper and efficient scheduling of the individual processing steps.
Each processing step, just as the workflow itself, typically processes some input and produces output data. Apart from changing the input data to operate on, processing steps can usually be configured by a set of parameters to change their behavior. Moreover, the behavior of a processing step is determined by its source code and/or executable binaries/packages that are called within it. Beyond this, the computation environment not only has a significant influence on its behavior, but is also crucial in order for the processing step to work at all. The environment includes the versions of the interpreters or compilers, as well as all third-party libraries and packages that contribute to the computations carried out in a processing step.
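As an illustration of the scheduling task described above (not taken from the publication; step names and the dependency map are hypothetical), the following sketch topologically orders a small set of processing steps and executes independent steps in parallel waves:

```python
# Minimal sketch: schedule steps by their input/output dependencies,
# running independent steps in parallel "waves".
from graphlib import TopologicalSorter
from concurrent.futures import ThreadPoolExecutor

deps = {                     # step -> set of steps whose outputs it consumes
    "preprocess": set(),
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "postprocess": {"simulate_a", "simulate_b"},
}

def run(step):
    print(f"running {step}")

ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = list(ts.get_ready())     # independent steps: run in parallel
        list(pool.map(run, ready))
        for step in ready:
            ts.done(step)
```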
KupferDigital mechanical testing datasets: Stress relaxation and low-cycle fatigue (LCF) tests
(2024)
The KupferDigital project deals with the development of a data ecosystem for digital materials research on the basis of ontology-based digital representations of copper and copper alloys. This document provides exemplary mechanical testing datasets for training the developed KupferDigital infrastructures. Different types of cast copper alloys were provided for this research, and their mechanical testing (stress relaxation and low-cycle fatigue) was performed in an accredited materials testing laboratory. The test results were reported according to the relevant DIN/ISO standards and supplemented with extensive metadata about the sample history, equipment, and calibration. The attached content file consists of the primary raw testing data as well as the secondary datasets of these tests, containing detailed metadata on the mechanical testing methods. These test data files are processed by the KupferDigital digital tools to convert them into standardized machine-readable data files.
Despite the advances in hardware and software techniques, standard numerical methods fail in providing real-time simulations, especially for complex processes such as additive manufacturing applications. A real-time simulation enables process control through the combination of process monitoring and automated feedback, which increases the flexibility and quality of a process. Typically, before producing a whole additive manufacturing structure, a simplified experiment in form of a bead-on-plate experiment is performed to get a first insight into the process and to set parameters suitably. In this work, a reduced order model for the transient thermal problem of the bead-on-plate weld simulation is developed, allowing an efficient model calibration and control of the process. The proposed approach applies the proper generalized decomposition (PGD) method, a popular model order reduction technique, to decrease the computational effort of each model evaluation required multiple times in parameter estimation, control and optimization. The welding torch is modeled by a moving heat source, which leads to difficulties separating space and time, a key ingredient in PGD simulations. A novel approach for separating space and time is applied and extended to 3D problems, allowing the derivation of an efficient separated representation of the temperature. The results are verified against a standard finite element model showing excellent agreement. The reduced order model is also leveraged in a Bayesian model parameter estimation setup, speeding up calibrations and ultimately leading to an optimized real-time simulation approach for welding experiments using synthetic as well as real measurement data.
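For orientation, the canonical separated representation sought by PGD is a sum of products of space and time modes (a schematic of the standard ansatz, not the paper's exact formulation):

```latex
T(\mathbf{x}, t) \;\approx\; \sum_{m=1}^{M} X_m(\mathbf{x}) \, \Theta_m(t)
```

The difficulty noted above stems from the moving heat source, whose argument couples space and time (schematically, $q(\mathbf{x}, t) = q_0 \, g(\mathbf{x} - \mathbf{v} t)$), so it does not factor naturally into such space-time products.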
Trinamic TMCL IOC is a Python package designed for controlling stepper motors connected to a Trinamic board using the TMCL language (all boards supported by PyTrinamic should work; it has been tested on the TMCM 6110 and the TMCM 6214). Since it implements the TMCL protocol, it should be easy to adapt to other Trinamic motor controller boards. This package assumes the motor controller is connected over a machine network via a network-to-serial converter, but the underlying PyTrinamic package allows for other connections too.
This allows the control of attached motors via the EPICS Channel Access virtual communications bus. If EPICS is not desired, plain Pythonic control via motion_control is also possible; an example of this will be provided in the example.ipynb Jupyter notebook.
This package leverages Caproto for EPICS IOCs and a modified PyTrinamic library for the motor board control, and interfaces between the two via an internal set of dataclasses. Configuration for the motors and boards is loaded from YAML files (see tests/testdata/example_config.yaml).
The modifications to PyTrinamic involved extending their library with a socket interface. This was a minor modification that should eventually find its way into the official package (a pull request has been submitted).
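As a sketch of what such a YAML-to-dataclass configuration step might look like (the keys shown here are hypothetical illustrations; the authoritative schema is the shipped tests/testdata/example_config.yaml):

```python
# Hypothetical sketch: load a motor configuration from YAML into a dataclass.
from dataclasses import dataclass
import yaml

@dataclass
class MotorConfig:
    name: str            # assumed key
    address: int         # assumed key
    steps_per_rev: int   # assumed key

with open("example_config.yaml") as fh:
    raw = yaml.safe_load(fh)

motors = [MotorConfig(**m) for m in raw.get("motors", [])]  # assumed top-level key
```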
Related work
Laboratory Study:
Combining Signal Features of Ground-Penetrating Radar to Classify Moisture Damage in Layered Building Floors
https://doi.org/10.3390/app11198820
On-Site Study:
TBA
Doctoral Thesis:
Non-destructive classification of moisture deterioration in layered building floors using ground penetrating radar
https://doi.org/10.14279/depositonce-19306
Measurement Parameters
The GPR measurements were carried out with the SIR 20 from GSSI and a 2 GHz antenna pair (bandwidth 1 GHz to 3 GHz) in common-offset configuration. Each B-scan consists of N A-scans, each comprising 512 samples over an 11 ns time window. Survey lines were recorded with 250 A-scans/meter, which equals a 4 mm spacing between adjacent A-scans. No gains were applied.
Folder Description:
Lab_dry, Lab_insulDamage, Lab_screedDamage
- each contains 168 measurements (B-scans) in .csv format on 84 floor setups: dry floors, floors with insulation damage, and floors with screed damage, respectively.
- each floor setup was measured twice along two orthogonal survey lines, indicated by _Line1_ and _Line2_ in the file name.
- the file names encode the building floor setup, e.g. CT50XP100 describes a 50 mm cement screed with 100 mm extruded polystyrene below.
- the material codes are:
CT: cement screed, CA: anhydrite screed, EP: expanded polystyrene, XP: extruded polystyrene, GW: glass wool, PS: perlite
further information can be found in the publication https://doi.org/10.3390/app11198820
OnSite_
- 5 folders containing B-scans of 5 different practical moisture damage cases
- the building floor setup is encoded as for the lab data, with an additional measurement point number at the start and a damage case annotation at the end of the file name: _dry, _insulationDamage and _screedDamage
File Description:
B-scans, measurement files - no header
- dimensions: 512 × N data points, with N being the number of A-scans and 512 the number of samples over the 11 ns time window
- survey lines were recorded with 250 A-scans/meter, which equals a 4 mm spacing between adjacent A-scans (a minimal loading sketch follows below)
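A minimal sketch for loading and displaying one of the B-scan .csv files (the file name and the comma delimiter are assumptions; check the files before use):

```python
# Minimal sketch: load one B-scan CSV (512 x N, no header) and plot it.
import numpy as np
import matplotlib.pyplot as plt

bscan = np.loadtxt("CT50XP100_Line1_.csv", delimiter=",")  # assumed name/delimiter
assert bscan.shape[0] == 512      # 512 samples per A-scan (11 ns window)

plt.imshow(bscan, aspect="auto", cmap="gray",
           extent=[0, bscan.shape[1] / 250.0, 11, 0])  # x in m (250 A-scans/m), y in ns
plt.xlabel("distance / m")
plt.ylabel("time / ns")
plt.show()
```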
Moisture References
- Moist_Reference of the on-site locations includes the columns MeasPoint: measurement point; wt%Screed: moisture content of the screed layer in mass percent; wt%Insul: moisture content of the insulation layer in mass percent. References were obtained by drilling cores of 68 mm diameter in the centre of each survey line.
- Moist_Reference_Screed of the lab data includes the columns Screed: screed material and thickness in mm; wt%Screed: moisture content of the screed layer in mass percent.
- Moist Reference_Insul of the lab data includes the columns Insulation: insulation material and thickness in mm; water addition in l: water added to the insulation layer in litres; V%Insulation: water added to the insulation layer in volume percent; RH%: resulting relative humidity in the insulation layer during the measurement. These references are only available for lab measurements on insulation damage.
PMD Core Ontology (PMDco)
(2023)
The PMD Core Ontology (PMDco) is a comprehensive framework for representing knowledge that encompasses fundamental concepts from the domains of materials science and engineering (MSE). The PMDco has been designed as a mid-level ontology to establish a connection between specific MSE application ontologies and the domain neutral concepts found in established top-level ontologies. The primary goal of the PMDco is to promote interoperability between diverse domains. PMDco's class structure is both understandable and extensible, making it an efficient tool for organizing MSE knowledge. It serves as a semantic intermediate layer that unifies MSE knowledge representations, enabling data and metadata to be systematically integrated on key terms within the MSE domain. With PMDco, it is possible to seamlessly trace data generation. The design of PMDco is based on the W3C Provenance Ontology (PROV-O), which provides a standard framework for capturing the generation, derivation, and attribution of resources. By building on this foundation, PMDco facilitates the integration of data from various sources and the creation of complex workflows. In summary, PMDco is a valuable tool for researchers and practitioners in the MSE domains. It provides a common language for representing and sharing knowledge, allowing for efficient collaboration and promoting interoperability between diverse domains. Its design allows for the systematic integration of data and metadata, enabling seamless traceability of data generation. Overall, PMDco is a crucial step towards a unified and comprehensive understanding of the MSE domain. PMDco at GitHub: https://github.com/materialdigital/core-ontology
PGDrome
(2023)
FenicsXConcrete
(2023)
These data sets serve as models for calculating the specific surface area (BET method) using gas sorption in accordance with ISO 9277.
The present measurements were carried out with nitrogen at 77 K and argon at 87 K.
It is recommended to use the following values for the molecular cross-sectional areas:
Nitrogen: 0.1620 nm²
Argon: 0.1420 nm²
Titanium dioxides certified by nitrogen sorption and additionally measured with argon for research purposes were used as sample material.
The resulting data sets are intended to serve as comparative data for users' own measurements and to show the differences in sorption behaviour and evaluation between nitrogen and argon.
These data are stored in the universal AIF (adsorption information file) format, which allows flexible use of the data.
Particle size determination of a commercially available CeO2 nano powder - SOPs and reference data
(2023)
Compilation of detailed SOPs for the characterization of a commercially available CeO2 nano powder, including
- suspension preparation (indirect and direct sonication),
- particle size determination (dynamic light scattering, DLS, and centrifugal liquid sedimentation, CLS), each with reference data.
For sample preparation and analysis by Scanning Electron Microscopy (SEM) of this powder see related works (submitted, coming soon).
The dataset provided in this repository comprises data obtained from a series of full-notch creep tests (FNCT) performed on selected high-density polyethylene (PE-HD) materials (for further details, see section 1 Materials in this document) in accordance with the corresponding standard ISO 16770.
The FNCT is one of the mechanical testing procedures used to characterize polymer materials with respect to their environmental stress cracking (ESC) behavior. It is widely applied to PE-HD materials, which are predominantly used for pipe and container applications. It is based on determining the time to failure of a test specimen under constant mechanical load in a well-defined, temperature-controlled liquid environment. The test device used here also allows for continuous monitoring of the applied force, specimen elongation and temperature.
Data file (RData) containing measurement data recorded during the production process of the Certified Reference Material BAM-A001, which contains polycyclic aromatic hydrocarbons (PAH) in olive oil. The data can be most conveniently opened using the Shiny app eCerto, which is accessible at https://www.bam.de/eCerto.
The Materials Mechanical Testing Ontology (MTO) was developed by collecting the mechanical testing vocabulary of the ISO 23718 standard, as well as the standardized testing processes described for various mechanical tests of materials, such as tensile testing, the Brinell hardness test, the Vickers hardness test, the stress relaxation test, and fatigue testing.
Version info:
V2 was developed using the BFO + CCO top-level ontologies.
V3 was developed using the PROV-O + PMDco top-level ontologies.
V4 was developed using the BFO + IOF top-level ontologies.
Repositories:
GitLab: https://gitlab.com/kupferdigital/ontologies/mechanical-testing-ontology
GitHub: https://github.com/HosseinBeygiNasrabadi/Mechanical-Testing-Ontology
MatPortal: https://matportal.org/ontologies/MTO
IndustryPortal: https://industryportal.enit.fr/ontologies/MTO
Test artifact for fs-LDW
(2023)
Metaproteomics, the study of the collective proteome within a microbial ecosystem, has substantially grown over the past few years. This growth comes from the increased awareness that it can powerfully supplement metagenomics and metatranscriptomics analyses. Although metaproteomics is more challenging than single-species proteomics, its added value has already been demonstrated in various biosystems, such as gut microbiomes or biogas plants. Because of the many challenges, a variety of metaproteomics workflows have been developed, yet it remains unclear what the impact of the choice of workflow is on the obtained results. Therefore, we set out to compare several well-established workflows in the first community-driven, multi-lab comparison in metaproteomics: the critical assessment of metaproteome investigation (CAMPI) study. In this benchmarking study, we evaluated the influence of different workflows on sample preparation, mass spectrometry acquisition, and bioinformatic analysis on two samples: a simplified, lab-assembled human intestinal sample and a complex human fecal sample. We find that the same overall biological meaning can be inferred from the metaproteome data, regardless of the chosen workflow. Indeed, taxonomic and functional annotations were very similar across all sample-specific data sets. Moreover, this outcome was consistent regardless of whether protein groups or peptides, or differences at the spectrum or peptide level were used to infer these annotations. Where differences were observed, those originated primarily from different wet-lab methods rather than from different bioinformatic pipelines. The CAMPI study thus provides a solid foundation for benchmarking metaproteomics workflows, and will therefore be a key reference for future method improvement. [doi:10.25345/C5SX64D9M] [dataset license: CC0 1.0 Universal (CC0 1.0)]
Tensile Test Ontology (TTO)
(2023)
This is the stable version 2.0.1 of the PMD ontology module of the tensile test (Tensile Test Ontology - TTO) as developed on the basis of the 2019 standard ISO 6892-1: Metallic materials - Tensile Testing - Part 1: Method of test at room temperature.
The TTO was developed in the frame of the PMD project. The TTO provides conceptualizations valid for the description of tensile tests and corresponding data in accordance with the respective standard. By using TTO for storing tensile test data, all data will be well structured and based on a common vocabulary agreed on by an expert group (generation of FAIR data), which will lead to enhanced data interoperability. This comprises several data categories such as primary data, secondary data and metadata. Data will be human- and machine-readable. The usage of TTO facilitates data retrieval and downstream usage. Due to a close connection to the mid-level PMD core ontology (PMDco), the interoperability of tensile test data is enhanced, and data querying in combination with other aspects and data within the broad field of materials science and engineering (MSE) is facilitated.
The TTO class structure forms a comprehensible semantic layer for unified storage of data generated in a tensile test, including the possibility to record data from analysis, re-evaluation and re-use. Furthermore, the extensive metadata allow an assessment of data quality and the reproduction of experiments. Following the open-world assumption, object properties are deliberately kept sparse and only loosely restrictive.
SASfit 0.94.12
(2023)
Small-angle scattering is an increasingly common method for characterizing particle ensembles in a wide variety of sample types and for diverse areas of application. SASfit has been one of the most comprehensive and flexible curve-fitting programs for decades, with many specialized tools for various fields.
The datasets from (hard X-ray) photoelectron spectroscopy, X-ray diffraction and scanning electron microscopy are related to the publication
G. Chemello, X. Knigge, D. Ciornii, B.P. Reed, A.J. Pollard, C.A. Clifford, T. Howe, N. Vyas, V.-D. Hodoroaba, J. Radnik
"Influence of the morphology on the functionalization of graphene nanoplatelets analyzed by comparative photoelectron spectroscopy with soft and hard X-rays"
Advanced Materials Interfaces (2023), DOI: 10.1002/admi.202300116.
Gas chromatography using atmospheric pressure chemical ionization coupled to mass spectrometry (GC/APCI-MS) is an emerging metabolomics platform, providing much-enhanced capabilities for structural mass spectrometry as compared to traditional electron ionization (EI)-based techniques. To exploit the potential of GC/APCI-MS for more comprehensive metabolite annotation, a major bottleneck in metabolomics, we here present the novel R-based tool InterpretMSSpectrum assisting in the common task of annotating and evaluating in-source mass spectra as obtained from typical full-scan experiments. After passing a list of mass-intensity pairs, InterpretMSSpectrum locates the molecular ion (M0), fragment, and adduct peaks, calculates their most likely sum formula combination, and graphically summarizes results as an annotated mass spectrum. Using (modifiable) filter rules for the commonly used methoximated-trimethylsilylated (MeOx-TMS) derivatives, covering elemental composition, typical substructures, neutral losses, and adducts, InterpretMSSpectrum significantly reduces the number of sum formula candidates, minimizing manual effort for postprocessing candidate lists. We demonstrate the utility of InterpretMSSpectrum for 86 in-source spectra of derivatized standard compounds, in which rank-1 sum formula assignments were achieved in 84% of the cases, compared to only 63% when using mass and isotope information on the M0 alone. We further use, for the first time, automated annotation to evaluate the purity of pseudospectra generated by different metabolomics preprocessing tools, showing that automated annotation can serve as an integrative quality measure for peak picking/deconvolution methods. As an R package, InterpretMSSpectrum integrates flexibly into existing metabolomics pipelines and is freely available from CRAN (https://cran.r-project.org/).
Raw data from metabolomics experiments are initially subjected to peak identification and signal deconvolution to generate raw data matrices m × n, where m are samples and n are metabolites. We describe here simple statistical procedures on such multivariate data matrices, all provided as functions in the programming environment R, useful to normalize data, detect biomarkers, and perform sample classification.
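The functions described there are provided in R; purely as a language-agnostic illustration of the three steps named above (normalization, biomarker detection, classification) on a toy m × n matrix, a Python sketch might look like this (all data and parameter choices are invented for illustration):

```python
# Illustrative sketch: median normalization, a t-test biomarker screen, and a
# simple classifier on an m x n matrix (m samples, n metabolites). Toy data only.
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.lognormal(size=(20, 100))        # toy data: 20 samples, 100 metabolites
groups = np.array([0] * 10 + [1] * 10)   # two sample classes

X_norm = X / np.median(X, axis=1, keepdims=True)   # per-sample normalization
t, p = stats.ttest_ind(X_norm[groups == 0], X_norm[groups == 1], axis=0)
biomarkers = np.argsort(p)[:5]           # candidates with the smallest p-values

acc = cross_val_score(LinearDiscriminantAnalysis(),
                      X_norm[:, biomarkers], groups, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```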
Metabolomics, the analysis of potentially all small molecules within a biological system, has become a valuable tool for biomarker identification and the elucidation of biological processes. While metabolites are often present in complex mixtures at extremely different concentrations, the dynamic range of available analytical methods to capture this variance is generally limited. Here, we show that gas chromatography coupled to atmospheric pressure chemical ionization mass spectrometry (GC-APCI-MS), a state of the art analytical technology applied in metabolomics analyses, shows an average linear range (LR) of 2.39 orders of magnitude for a set of 62 metabolites from a representative compound mixture. We further developed a computational tool to extend this dynamic range on average by more than 1 order of magnitude, demonstrated with a dilution series of the compound mixture, using robust and automatic reconstruction of intensity values exceeding the detection limit. The tool is freely available as an R package (CorrectOverloadedPeaks) from CRAN (https://cran.r-project.org/) and can be incorporated in a metabolomics data processing pipeline facilitating large screening assays.
This dataset represents the electronic supplementary material (ESM) of the publication entitled "Characterisation of conventional 87Sr/86Sr isotope ratios in cement, limestone and slate reference materials based on an interlaboratory comparison study", which is published in Geostandards and Geoanalytical Research under the DOI 10.1111/GGR.12517. It consists of four files. 'ESM_Data.xlsx' contains all reported data of the participants, a description of the applied analytical procedures, basic calculations, the consensus values, and part of the uncertainty assessment. 'ESM_Figure-S1' displays a schematic of how measurements, sequences and replicates are treated for the uncertainty calculation carried out by PTB. 'ESM_Technical-protocol.pdf' is the technical protocol of the interlaboratory comparison, which was provided to all participants together with the samples and which contains, among other things, the definition of the measurand and guidelines for data assessment and calculations. 'ESM_Reporting-template.xlsx' is the Excel template which was submitted to all participants for reporting their results within the interlaboratory comparison. Excel files with names of the structure 'GeoReM_Material_Sr8786_Date.xlsx' contain the Rcon(87Sr/86Sr) data for a specific reference material downloaded from GeoReM at the specified date, e.g. 'GeoReM_IAPSO_Sr8786_20221115.xlsx' contains all Rcon(87Sr/86Sr) data for the IAPSO seawater standard listed in GeoReM until 15 November 2022.
Here a dataset of XPS, HAXPES and SEM measurements for the physico-chemical characterization of Au nanoparticles is presented. The measurements are part of the H2020 project “NanoSolveIT”.