Chemistry and Process Engineering
Inorganic nanocrystals with linear and nonlinear luminescence in the ultraviolet, visible, near-infrared, and short-wave infrared, such as semiconductor quantum dots and spectrally shifting lanthanide-based nanophosphors, have by now found applications in the life and material sciences, ranging from optical reporters for bioimaging and sensing through security barcodes to solid-state lighting and photovoltaics. These nanomaterials commonly have increasingly sophisticated core/shell particle architectures with shells of different chemical composition and thickness to minimize radiationless deactivation at the particle surface, which is usually the main energy loss mechanism [1]. For lanthanide-based spectral shifters, particularly for very small nanoparticles, surface coatings are also needed that protect near-surface lanthanide ions from luminescence quenching by high-energy vibrational modes such as O-H groups and prevent the disintegration of these nanoparticles under high-dilution conditions [2-4]. The identification of optimum particle structures requires quantitative spectroscopic studies focusing on the key performance parameter, the photoluminescence quantum yield [5,6], ideally flanked by single-particle studies to assess spectroscopic inhomogeneities on a particle-to-particle level for typical preparation methods [7,8]. Moreover, in the case of upconversion nanoparticles with a multi-photonic and hence excitation power density (P)-dependent luminescence, quantitative luminescence studies over a broad P range are required to identify particle architectures that are best suited for applications ranging from fluorescence assays to fluorescence microscopy. Here, we present methods to quantify the photoluminescence of these different types of emitters in the vis/NIR/SWIR and as a function of P, and demonstrate the importance of such measurements for a profound mechanistic understanding of the nonradiative deactivation pathways in semiconductor and upconversion nanocrystals of different size and particle architecture in different environments.
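As a brief aside (a textbook relation for multi-photonic upconversion, not part of the abstract): the P dependence mentioned above is commonly summarized by a power law whose double-logarithmic slope reflects the number of photons involved.

```latex
% Power-law dependence of upconversion luminescence on excitation
% power density P: for an m-photon process, the log-log slope n
% approaches m at low P and tends towards 1 in saturation at high P.
\[ I_{\mathrm{UC}} \propto P^{\,n}, \qquad 1 \le n \le m \]
```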
Inductively coupled plasma mass spectrometry (ICP-MS) has emerged as a powerful technique for trace analysis of soil due to its multi-element capability, high sensitivity, and low sample consumption. However, despite its success and widespread use, ICP-MS has several persistent drawbacks, such as high argon gas consumption, argon-based polyatomic interferences, and the need for complex RF power generators. Unlike argon-based ICP, microwave inductively coupled atmospheric pressure plasma mass spectrometry (MICAP-MS) uses nitrogen as the plasma gas, which eliminates the high operating costs associated with argon consumption as well as the argon-based interferences [1]. In this work, the applicability of MICAP-MS for elemental analysis in different matrices is investigated. For this purpose, reference soil samples and steel samples are digested with aqua regia and used for analysis. Concentrations of selected elements are determined using MICAP-MS and validated against ICP-MS results and certified values. Sensitivities, limits of detection, and gas consumption for both methods are compared and discussed in detail. The performance of MICAP-MS under different nitrogen plasma gas concentrations is investigated and compared. Moreover, the performance of MICAP-MS in alloy matrices is investigated and discussed.
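As context for the comparison of detection limits (a standard convention, not stated in the abstract), limits of detection are typically estimated from the blank scatter and the calibration sensitivity:

```latex
% 3-sigma convention: s_blank is the standard deviation of repeated
% blank measurements, m is the calibration sensitivity (slope).
\[ \mathrm{LOD} = \frac{3\, s_{\mathrm{blank}}}{m} \]
```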
Sediments and soils can act as sinks for inorganic mercury species (Hg2+) while simultaneously being sources of organic species such as monomethylmercury (MMHg). Although the fraction of MMHg in the total Hg of sediments is suggested to be only 0.1–1%, MMHg poses a threat to humans and wildlife due to its toxic properties, high bioaccumulation potential, and ability to pass the blood-brain barrier. One example of a highly Hg-contaminated waterbody is the Finow Canal, the oldest artificial waterway still in operation in Germany. Here, Hg mass fractions of up to 100 µg/g were found in the sediment in previous studies; this contamination is thought to originate from a chemical plant that produced mercury-based seed dressings. Despite this high mass fraction of Hg, no Hg speciation studies have been conducted there to date.
In this study, Hg speciation in Finow Canal sediments sampled upstream and downstream of the known polluted site was carried out using species-specific isotope dilution (SSID) GC-ICP-ToF-MS. Mass fractions of up to 0.41 µg/g MMHg were determined. In addition, waterbodies around the initially polluted site were investigated, and elevated concentrations were also found around 14 km downstream. For MMHg analysis, the performance of ICP-ToF-MS for SSID GC-ICP-MS was compared with ICP-Q-MS and ICP-SF-MS. The isotope ratio precision was similar across the tested instruments. However, the quasi-simultaneous detection of the whole mass spectrum is expected to give ICP-ToF-MS a considerably higher precision when more than one isotope system is used.
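For background (a textbook relation, not taken from the abstract): in single-spike isotope dilution, the amount of the analyte species in the sample follows directly from the isotope ratio measured in the spiked blend.

```latex
% Single-spike isotope dilution: n_s and n_sp are the amounts of the
% analyte species in sample and spike, x_1 and x_2 the amount fractions
% of the reference and spike isotopes in sample (s) and spike (sp), and
% R_b the measured isotope-1/isotope-2 ratio in the blend.
\[
n_{\mathrm{s}} = n_{\mathrm{sp}}\,
\frac{R_{\mathrm{b}}\, x_{2}^{\mathrm{sp}} - x_{1}^{\mathrm{sp}}}
     {x_{1}^{\mathrm{s}} - R_{\mathrm{b}}\, x_{2}^{\mathrm{s}}}
\]
```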
These results are the first evidence of the occurrence of MMHg in this region and show the need for further investigation of the whole regional ecosystem, as well as the consideration of possible remediation measures. SSID GC-ICP-(ToF-)MS is a suitable tool for investigating species-specific (multi-)isotope systems for environmental monitoring.
Since its discovery, graphene has attracted growing attention in industrial and applied research due to its unique properties. However, graphene has not yet been established on the industrial market, in particular due to the difficulty of properly characterizing this challenging material. As with most other nanomaterials, graphene's properties are closely linked to its chemical and structural characteristics, such as the number of layers, flake thickness, degree of functionalisation, and C/O ratio. For commercialization, suitable procedures for the measurement and characterization of the ultrathin flakes, with lateral dimensions in the range from µm to tens of µm, are essential. Surface chemical analysis methods, especially XPS, play an outstanding role in providing chemical information on the composition. One well-known problem for surface analytical methods, however, is the influence of contamination on the measured composition, as in the case of adventitious carbon. Distinguishing carbon originating from contamination from carbon belonging to the graphene sample itself is often not straightforward, which can distort the determined composition. To overcome this problem, hard X-ray photoelectron spectroscopy (HAXPES) offers new possibilities due to its greater information depth. Therefore, XPS measurements obtained with Al Kα radiation (E = 1486.6 eV) were compared with analyses performed with Cr Kα excitation (E = 5414.8 eV) on functionalized graphene samples. Differences are discussed in terms of the influence of potential carbon contamination, but also of oxygen, on the composition of the samples. Measurements are performed on O-, N-, and F-functionalized graphene. Different preparation procedures (powder, pellet, drop cast from liquid suspension) will also be discussed; the correlation of the results with the flake morphology as well as their validation with other independent methods are in progress.
In mass spectrometry-based proteomics, protein homology leads to many shared peptides within and between species, which complicates taxonomic inference. We introduce PepGM, a graphical model for taxonomic profiling of viral proteomes and metaproteomic datasets. Using the graphical model, our approach computes statistically sound scores for taxa based on peptide scores from a previous database search, eliminating the need for commonly used heuristics.
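The abstract gives no implementation details; the toy sketch below is therefore only an illustration of the underlying idea (propagating peptide scores to taxa while discounting shared peptides) and not PepGM's actual algorithm, which runs belief propagation on a factor graph. All names here are hypothetical.

```python
# Toy sketch (NOT PepGM): noisy-OR style aggregation of peptide scores
# into taxon scores over a bipartite peptide-taxon graph.
from collections import defaultdict

def score_taxa(peptide_scores, peptide_to_taxa):
    """peptide_scores: {peptide: P(identification is correct)};
    peptide_to_taxa: {peptide: taxa whose proteins contain it}."""
    p_no_evidence = defaultdict(lambda: 1.0)
    for pep, p in peptide_scores.items():
        taxa = peptide_to_taxa.get(pep, ())
        if not taxa:
            continue
        shared = p / len(taxa)  # shared peptides carry less evidence per taxon
        for t in taxa:
            p_no_evidence[t] *= 1.0 - shared  # noisy-OR accumulation
    return {t: 1.0 - q for t, q in p_no_evidence.items()}

# One peptide unique to taxon B, one shared between A and B:
scores = {"PEPTIDEK": 0.95, "SHAREDPEPK": 0.80}
taxa_map = {"PEPTIDEK": ["B"], "SHAREDPEPK": ["A", "B"]}
print(score_taxa(scores, taxa_map))  # B outranks A
```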
In general, wind turbines transform the kinetic energy of the wind into electric power. In doing so, the wind turbine blades face unsteady loads, which are transferred to the hub to generate the rotation of the turbine's axis. This brief introduction focuses on the aerodynamics of the blades and the corresponding loads. Starting with the basic flow field and loads of an airfoil, terms like stagnation point, boundary layer, Reynolds number, transition, and separation are introduced. For different geometries, lift and drag coefficient curves are discussed. Then, full wings are considered, including their three-dimensional flow field due to wing-tip vortices and crossflows. As a main source of increased loads, unsteady effects such as gusts, tower passing, atmospheric boundary layer crossing, free-stream turbulence, and yaw misalignment are explained in more detail. Finally, extra loads due to an oscillating free stream are introduced.
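For reference, the dimensionless quantities introduced above have the standard definitions (not specific to this talk):

```latex
% L, D: lift and drag force; rho: fluid density; U: free-stream
% velocity; S: reference area; c: chord length; mu: dynamic viscosity.
\[
C_L = \frac{L}{\tfrac{1}{2}\rho U^2 S}, \qquad
C_D = \frac{D}{\tfrac{1}{2}\rho U^2 S}, \qquad
Re = \frac{\rho U c}{\mu}
\]
```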
Applying data-driven AI systems makes it possible to extract patterns from data, generate predictions, and support decision-making. Materials research and testing offers a plethora of AI-based applications, for example the automated search for and synthesis of new materials, the detection of material defects, or the prediction of process and material parameters (inverse problems). However, AI algorithms can often only be as good as the training data from which the corresponding models are learned. It is therefore indispensable to develop measures for the standardization and quality assurance of such data.
For this purpose, we develop and implement methods for transferring data from various sources into a homogeneous data repository with uniform data descriptions. Through this standardization and corresponding machine-readable interfaces, research data can be made usable and reusable for further data analyses. In addition to the technical implementation of integrative platforms, it is crucial that quality-assured research data management is recognized and implemented as an integral part of daily scientific work. Finally, we provide a vision of how the Federal Institute for Materials Research and Testing can benefit from data-driven AI systems, discuss early applications, and take a peek at future research.
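As a purely hypothetical sketch of what transferring heterogeneous source data into a uniform, machine-readable description can look like in practice (all field names, sources, and units are invented for illustration):

```python
# Hypothetical sketch: one adapter per data source maps raw records
# onto a single uniform schema; units are fixed at the boundary.
from dataclasses import dataclass

@dataclass
class MaterialRecord:
    sample_id: str
    property_name: str
    value: float
    unit: str
    source: str

def from_lab_csv(row: dict) -> MaterialRecord:
    # Adapter for one (invented) CSV export format.
    return MaterialRecord(
        sample_id=row["ID"],
        property_name="tensile_strength",
        value=float(row["Rm_MPa"]),
        unit="MPa",
        source="lab_csv",
    )

records = [from_lab_csv({"ID": "S-001", "Rm_MPa": "540.2"})]
print(records[0])
```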
Metaproteomics has grown substantially over the past years and complements other omics approaches by bringing valuable functional information, enabling genotype-phenotype linkages and connections to metabolic outputs. Currently, a wide variety of metaproteomic workflows is available, yet their impact on the results remains to be thoroughly assessed.
Here, we carried out the first community-driven, multi-lab comparison in metaproteomics: the Critical Assessment of MetaProteome Investigation (CAMPI) study. Based on well-established workflows, we evaluated the influence of sample preparation, mass spectrometry acquisition, and bioinformatic analysis using two samples: a simplified, lab-assembled human intestinal model and a human fecal sample.
Although bioinformatic pipelines contributed to variability in peptide identification, wet-lab workflows were the most important source of differences between analyses. Overall, these peptide-level differences largely disappeared at the protein group level. Differences were observed between peptide- and protein-centric approaches for the predicted community composition, but similar functional profiles were found across workflows.
The CAMPI findings demonstrate the robustness of current metaproteomics research and provide a perspective for future benchmarking studies.
Driven by recent technological advances and the need for improved viral diagnostic applications, mass spectrometry-based proteomics comes into play for detecting viral pathogens accurately and efficiently. However, the lack of specific algorithms and software tools presents a major bottleneck for analyzing data from host-virus samples. For example, accurate species- and strain-level classification of a priori unidentified organisms remains a very challenging task in the setting of large search databases. Another prominent issue is that many existing solutions suffer from the protein inference problem, aggravated by the fact that many homologous proteins are present across multiple species. One of the contributing factors is that existing bioinformatic algorithms have been developed mainly for single-species proteomics applications for model organisms or human samples. In addition, a statistically sound framework was lacking for accurately assigning peptide identifications to viral taxa. In this presentation, an overview is given of current bioinformatic developments that aim to overcome the above-mentioned issues using algorithmic and statistical methods. The presented methods and software tools aim to provide tailored solutions for both discovery-driven and targeted proteomics for viral diagnostics and taxonomic sample profiling. Furthermore, an outlook is provided on how the bioinformatic developments might serve as a generic toolbox, which can be transferred to other research questions, such as metaproteomics for profiling microbiomes and identifying bacterial pathogens.
In ultrasonic testing, the time of flight (ToF) of a signal can be used to infer material and structural properties of a test item. In dispersive media, extracting the bulk wave velocity from a received signal is challenging, as the waveform changes along its path of propagation. When using signal features such as the first peak or the envelope maximum, the calculated velocity changes with the propagation distance. This does not occur when picking the signal onset. Borrowing from seismology, researchers have used the Akaike information criterion (AIC) picker to automatically obtain onset times. In addition to depending on arbitrarily set parameters, the AIC picker assumes no prior knowledge of the spectral properties of the signal. This assumption is unnecessarily restrictive in ultrasonic through-transmission testing, where the signal spectrum is known to differ significantly from that of the noise. In this contribution, a novel parameter-free onset picker is proposed that uses a spectral entropy criterion (SEC) to model the signal within the AIC framework. Synthetic and experimental data are used to compare the performance of the SEC and AIC pickers, showing an improved accuracy for densely sampled data.
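As a minimal sketch of the classical AIC picker that the contribution benchmarks against (the SEC picker itself is not reproduced here; the variance-based AIC form below is the commonly used one):

```python
import numpy as np

def aic_onset(x: np.ndarray) -> int:
    """Classical AIC onset picker: the onset is the sample index k that
    minimizes AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
    n = len(x)
    ks = np.arange(1, n - 1)
    aic = np.array([
        k * np.log(np.var(x[:k]) + 1e-20)
        + (n - k - 1) * np.log(np.var(x[k:]) + 1e-20)
        for k in ks
    ])
    return int(ks[np.argmin(aic)])

# Toy example: noise followed by a decaying burst starting at sample 500.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.05, 1000)
sig[500:] += np.sin(0.3 * np.arange(500)) * np.exp(-np.arange(500) / 200)
print(aic_onset(sig))  # close to 500
```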
Climate change and related energy policies, exacerbated by unforeseen geopolitical developments, pose new challenges for gas analytics, such as the use of hydrogen, of hydrogen-containing alternative gaseous fuels (NH3, etc.), and of alternative methane-based energy gases (LNG, LPG, etc.), or decarbonisation via CCUS. In all of these areas, the quality, i.e. the actual chemical composition, of the gases naturally plays a decisive role. BAM is responding to this strategic importance by further developing its hydrogen analytics and the methods used, in order to support the German economy and research landscape with traceability, reference materials, and analytical procedures as quickly as possible.
Mass spectrometry plays an important role in trace analysis in a hydrogen matrix. The presentation shows first experimental results from the application of PTR-TOF-MS (proton transfer reaction time-of-flight mass spectrometry).
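For context (standard PTR-MS chemistry, not specific to the results presented): analytes are ionized by proton transfer from hydronium ions, which works for any analyte A whose proton affinity exceeds that of water and leaves the hydrogen matrix itself un-ionized.

```latex
% PTR ionization step; PA denotes proton affinity.
\[
\mathrm{H_3O^+} + \mathrm{A} \longrightarrow \mathrm{AH^+} + \mathrm{H_2O},
\qquad \mathrm{PA(A)} > \mathrm{PA(H_2O)}
\]
```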
Mycotoxins (toxins formed by fungi) in food have caused problems for mankind since the beginning of time. The group of ergot alkaloids plays a special role in human history: several tens of thousands of deaths during the Middle Ages due to ergotism (the disease caused by continuous intake of ergot-alkaloid-contaminated food) underscore the importance of reliable analytical methods to ensure food safety.
More than 50 compounds belong to the group of ergot alkaloids. The 12 most frequently found structures – the major ergot alkaloids – are typically measured when it comes to ergot alkaloid quantification. High performance liquid chromatography (HPLC) with a fluorescence detector (FLD) is typically used to quantify the ergot alkaloid content. The main disadvantage of this method is the high cost of calibration standards (12 different calibration substances are required), but also the time and effort required for the analysis of 12 peaks and the overlapping signals that occur in complex food samples such as bread. As all ergot alkaloids share the ergoline structure and differ only in the substituents attached to this backbone, measuring all ergot alkaloids as one sum parameter presents a time- and cost-saving alternative. The most important step in the development of such a sum parameter method is the reaction used to convert all ergot alkaloids into one uniform structure. In the talk, two promising reactions, the acidic esterification to lysergic acid methyl ester and hydrazinolysis to lysergic acid hydrazide, are examined for possible use in a routine analysis method. In addition to yield and reaction rate, factors such as the ease of handling of the reaction and the possibility of parallelization play a role. Alongside the current status of the ongoing research project, this talk will discuss current approaches to ergot alkaloid quantitation.
Investigation of degradation of the aluminum current collector in lithium-ion batteries by GD-OES
(2022)
Lithium-ion batteries (LIBs) are one technology to overcome the challenges of the climate and energy crisis. They are widely used in electric vehicles, consumer electronics, or as storage for renewable energy sources. However, despite innovations in battery components like cathode and anode materials, separators, and electrolytes, the aging mechanism related to the degradation of the metallic aluminum current collector causes a significant drop in performance and prevents the durable use of LIBs. Glow-discharge optical emission spectroscopy (GD-OES) is a powerful method for depth profiling of battery electrode materials. This work investigates aging-induced aluminum deposition on the cathodes of commercial lithium cobalt oxide (LCO) batteries. The results illustrate the depth-resolved elemental distribution from the cathode surface to the current collector. An accumulation of aluminum is found on the cathode surface by GD-OES, consistent with results from energy-dispersive X-ray spectroscopy (EDX) combined with focused ion beam (FIB) cutting. In comparison to FIB-EDX, GD-OES allows fast and manageable depth profiling. Results from different positions on an aged cathode indicate an inhomogeneous aluminum film growth on the surface. The conclusions from these experiments can lead to a better understanding of the degradation of the aluminum current collector, and thus to longer lifetimes of LIBs.
In industrialised countries, more than 80% of people's time is spent indoors. Products, such as building materials and furniture, emit volatile organic compounds (VOCs), which are therefore ubiquitous in indoor air. VOCs in combination may, under certain environmental and occupational conditions, result in reported sensory irritation and health complaints. Emission concentrations can become further elevated in new or refurbished buildings, where the rate of air exchange with fresh ambient air may be limited due to improved energy-saving construction. A healthy indoor environment can be achieved by controlling the sources and by eliminating or limiting the release of harmful substances into the air. One way is to use (building) materials proven to be low-emitting. By now, a worldwide network of professional commercial and non-commercial laboratories performing emission tests for the evaluation of products for interior use has been established; therefore, comparability of test results must be ensured. A laboratory's proficiency can be proven by internal and external validation measures, which both include the application of suitable emission reference materials (ERMs). For the emission test chamber procedure according to EN 16516, no artificial ERM is commercially available. The EU-funded EMPIR project MetrIAQ aims to fill this gap by developing new and improved ERMs. The goal is to obtain a material with a reproducible and temporally constant compound release (less than 10% variability over 14 days). Different approaches, such as the impregnation of porous materials, are being tested. The preparation of the most promising materials as well as the corresponding results will be presented.
Guided ultrasonic waves are excellently suited for material characterization, since their propagation behavior depends on the material properties of the specimen under investigation.
To draw conclusions about the material parameters from the experimentally determined propagation behavior of guided ultrasonic waves, various inverse methods are being discussed in current research. Dispersion maps in the frequency-wavenumber domain represent the propagation behavior of guided ultrasonic waves. Machine learning, and in particular convolutional neural networks (CNNs), offers one way to automatically determine the material parameters from the dispersion maps by inversion.
In this contribution, synthetic data are used to show how the propagation behavior of guided ultrasonic waves can be exploited, using CNNs and dispersion maps, to determine the elastic constants of an isotropic plate-like structure. This example illustrates the general procedure for applying machine learning with neural networks: the data used are analyzed, the preprocessing is explained, and a simple CNN architecture is chosen. In the evaluation, particular emphasis is placed on the explainability and reliability of the CNN used, thereby revealing both limits and possibilities.
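As an illustrative sketch only (architecture, input size, and targets are assumptions, not the network from this contribution), a minimal CNN regressing two elastic constants from a single-channel frequency-wavenumber dispersion map could look like this:

```python
# Hypothetical minimal CNN: maps a 1-channel dispersion image to two
# regression targets (e.g., Young's modulus and Poisson's ratio).
import torch
import torch.nn as nn

class DispersionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two elastic constants
        )

    def forward(self, x):
        return self.head(self.features(x))

model = DispersionCNN()
dummy = torch.randn(4, 1, 128, 128)  # batch of dispersion maps
print(model(dummy).shape)  # torch.Size([4, 2])
```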