Chemistry and Process Engineering
Raman spectroscopy is a well-established tool for the analysis of vibrational spectra, which in turn allow for the determination of individual substances in a chemical sample, or of their phase transitions. In time-resolved Raman spectroscopy, the vibrational spectra of a chemical sample are recorded sequentially over a time interval, so that conclusions about intermediate products (transients) within a chemical process can be drawn. The observed data matrix M from Raman spectroscopy can be regarded as a matrix product of two unknown matrices W and H, where the former represents the contributions of the spectra and the latter represents the chemical spectra. One approach for obtaining W and H is non-negative matrix factorization. We propose a novel approach which does not need the commonly used separability assumption. The performance of this approach is shown on a real-world chemical example.
This work addresses the problem of determining the number of components from sequential spectroscopic data analyzed by non-negative matrix factorization without separability assumption (SepFree NMF). These data are stored in a matrix M of dimension "measured times" versus "measured wavenumbers" and can be decomposed to obtain the spectral fingerprints of the states and their evolution over time. SepFree NMF assumes that a memoryless (Markovian) process underlies the dynamics and decomposes M so that M = WH, with W representing the components' fingerprints and H their kinetics. However, the rank of this decomposition (i.e., the number of physical states in the process) has to be guessed from pre-existing knowledge of the observed process. We propose a measure for determining the number of components by computing the minimal memory effect resulting from the decomposition; by quantifying how much the obtained factorization deviates from the Markovian property, we are able to score factorizations with different numbers of components. In this way, we estimate the number of different entities which contribute to the observed system, and we can extract kinetic information without knowing the characteristic spectra of the single components. This manuscript provides the mathematical background as well as an analysis of both computer-generated and experimentally measured sequential Raman spectra.
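As a rough illustration of the factorization step (not the SepFree rank-selection itself), the following minimal Python sketch decomposes a synthetic time-by-wavenumber matrix with scikit-learn's NMF; all data and the rank are made up for the example, whereas the paper's contribution is precisely a memory-effect score for choosing that rank.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic data: 200 time points x 500 wavenumbers, built from three
# hypothetical component spectra (rows of H) and their kinetics (columns of W).
rng = np.random.default_rng(0)
H_true = rng.random((3, 500))                         # component spectra
W_true = rng.random((200, 3))                         # kinetic profiles
M = W_true @ H_true + 0.01 * rng.random((200, 500))   # noisy measurement

# Standard NMF: minimize ||M - WH||_F subject to W, H >= 0.
# n_components (the rank) must be supplied up front; SepFree NMF scores
# candidate ranks via the minimal memory effect, which is not reproduced here.
model = NMF(n_components=3, init="nndsvd", max_iter=1000)
W = model.fit_transform(M)    # contributions over time
H = model.components_         # spectral fingerprints
print("reconstruction error:", np.linalg.norm(M - W @ H))
```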
Performance Assessment for a Guided Wave-Based SHM System Applied to a Stiffened Composite Structure
(2022)
To assess the capability of structural health monitoring (SHM) systems, a variety of prerequisites and contributing factors have to be taken into account. Within this publication, this variety is analyzed for SHM systems based on actively excited guided waves. For these systems, it is not possible to analyze their performance without taking into account the monitored structure and the applied system parameters. Therefore, the interdependencies of performance assessment are displayed in an SHM pyramid based on the structure and its monitoring requirements. Factors influencing the quality, capability and reliability of the monitoring system are given and put into relation with state-of-the-art performance analysis in non-destructive evaluation. While some aspects are similar and can be treated in similar ways, others, such as location, environmental condition and structural dependency, demand novel solutions. Using an open-access data set from the Open Guided Waves platform, a detailed method description and analysis of path-based performance assessment is presented. The adopted approach clearly raises the question of the decision framework, as the threshold affects the reliability of the system. In addition, the findings show the effect of the propagation path according to the damage position. Indeed, the distance to the damage directly affects the system performance. In contrast, the propagation direction does not alter the capability of the detection approach despite the anisotropy of composites. Nonetheless, the finite waveguide makes it necessary to look at the paths as a whole, as singular phenomena associated with reflections may appear. Numerical investigation helps to clarify the centrality of wave mechanics and the necessity of taking the sensor position into account as an influencing factor. Starting from these findings, all the issues are discussed, and potential future steps are outlined.
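Since the abstract stresses that the decision threshold drives the reliability of the system, the following minimal sketch (purely hypothetical damage-index distributions, not data from the Open Guided Waves set) shows how sweeping a threshold trades probability of detection against probability of false alarm:

```python
import numpy as np

# Hypothetical damage-index distributions for illustration only:
# healthy and damaged path states modeled as Gaussians.
rng = np.random.default_rng(1)
healthy = rng.normal(0.10, 0.03, 10_000)   # baseline paths
damaged = rng.normal(0.25, 0.06, 10_000)   # paths affected by damage

for threshold in (0.15, 0.20, 0.25):
    pfa = np.mean(healthy > threshold)     # probability of false alarm
    pod = np.mean(damaged > threshold)     # probability of detection
    print(f"threshold={threshold:.2f}  POD={pod:.3f}  PFA={pfa:.3f}")
```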
The mechanism of mixed-ligand metal–organic framework (MOF) formation, and the possible role of intermediate single-ligand metal complexes during mechanosynthesis, have not been explored yet. For the first time, we report here in situ real-time monitoring of the mechanochemical formation mechanism of mixed-ligand MOFs. Our results show that binary phases can act as intermediates or competing products in one-pot and stepwise synthesis.
One of the challenges of high-temperature polymer electrolyte membrane fuel cells is the poisoning of the Pt catalyst with H3PO4. H3PO4 is imbibed into the routinely used polybenzimidazole-based membranes, which facilitate proton conductivity in the temperature range of 120−200 °C. However, when leached out of the membrane by water produced during operation, H3PO4 adsorbs on the Pt catalyst surface, blocking the active sites and hindering the oxygen reduction reaction (ORR).
The reduction of H3PO4 to H3PO3, which occurs at the anode due to a combination of a low potential and the presence of gaseous H2, has been investigated as an additional important contributing factor to the observed poisoning effect. H3PO3 has an affinity toward adsorption on Pt surfaces even greater than that of H2PO4−. In this work, we investigated the poisoning effect of both H3PO3 and H3PO4 using a half-cell setup with a gas diffusion electrode under ambient conditions. By means of in situ X-ray absorption spectroscopy, it was possible to follow the signature of different species adsorbed on the Pt nanoparticle catalyst (H, O, H2PO4−, and H3PO3) at different potentials under ORR conditions in various electrolytes (HClO4, H3PO4, and H3PO3). It was found that H3PO3 adsorbs in a pyramidal configuration P(OH)3 through a Pt−P bond. The competition between H3PO4 and H3PO3 adsorption was studied, which should allow for a better understanding of the catalyst poisoning mechanism and thus assist in the development of mitigation strategies that minimize H3PO3 generation, for example through improved catalyst design, adapted operating conditions, or changes in the electrolyte composition.
The Boltzmann plot is one of the most widely used methods for determining the temperature in different types of laboratory plasmas. It operates on the logarithm of a dimensional argument, which presumes that the correct physical units are used. In many works using the Boltzmann method, there is no analysis of the dimension of this argument, which may be the cause of a potential error. This technical note offers a brief description of the method and shows how to correctly handle physical units when using transcendental functions like the logarithm.
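For reference, the standard form of the Boltzmann plot (the textbook relation, not quoted from the note itself) is

```latex
\ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k A_{ki}}\right)
  = -\frac{E_k}{k_\mathrm{B} T} + C ,
```

where I_ki is the line intensity, λ_ki the wavelength, g_k the upper-level degeneracy, A_ki the transition probability and E_k the upper-level energy. The argument of the logarithm carries physical units, so it must be divided by a reference quantity of the same dimension (or a ratio of two such arguments must be formed); doing so shifts only the intercept C while leaving the temperature-bearing slope −1/(k_B T) untouched.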
The performance of the Monte Carlo (MC) algorithm for calibration-free LIBS was studied using the example of a simulated spectrum that mimics a metallurgical slag sample. The underlying model is that of a uniform, isothermal, and stationary plasma in local thermodynamic equilibrium.
Based on this model, the algorithm generates hundreds of thousands to several million configurations of plasma parameters in parallel, together with the corresponding spectra. The parameters are temperature, plasma size, and the concentrations of species. They are iterated until a cost function, which quantifies the difference between the synthetic spectra and the simulated slag spectrum, reaches its minimum. After finding the minimum, the concentrations of species are read from the model and compared to the certified values. The algorithm is parallelized on a graphics processing unit (GPU) to reduce computational time. The minimization of the cost function takes several minutes on an NVIDIA Tesla K40 GPU and depends on the number of elements to be iterated. The intrinsic accuracy of the MC calibration-free method is found to be around 1% for the eight elements tested. For a real experimental spectrum, however, the performance may turn out to be worse due to the idealized nature of the model, as well as incorrectly chosen experimental conditions. Factors influencing the performance of the method are discussed.
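A schematic sketch of such a Monte Carlo search loop is given below; the function forward_spectrum is a hypothetical placeholder for the LTE plasma model, and the plain random search stands in for the actual GPU-parallelized iteration scheme, which the abstract does not spell out.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_spectrum(temperature, size, concentrations):
    """Hypothetical placeholder for the LTE plasma forward model; the real
    model computes line intensities from Boltzmann/Saha statistics,
    self-absorption and instrumental broadening."""
    x = np.linspace(0.0, 1.0, 256)
    return np.abs(np.sin(temperature * x) * size + concentrations.sum() * x)

# "Measured" target: here simply the model evaluated at known parameters.
target = forward_spectrum(0.8, 1.2, np.array([0.3, 0.5, 0.2]))

best_cost, best_params = np.inf, None
for _ in range(100_000):   # the paper uses up to several million configurations
    params = (rng.uniform(0.5, 1.5),       # temperature (arbitrary units)
              rng.uniform(0.5, 2.0),       # plasma size
              rng.dirichlet(np.ones(3)))   # species concentrations, summing to 1
    cost = np.sum((forward_spectrum(*params) - target) ** 2)  # cost function
    if cost < best_cost:
        best_cost, best_params = cost, params

print(best_cost, best_params)
```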
Shape memory alloy structures for actuator and vibration damper applications may be manufactured using wire arc additive manufacturing (WAAM), one of the additive manufacturing technologies. Multilayer deposition causes heat accumulation during WAAM, which raises the preheat temperature of the previously created layer. This leads to process instabilities, which result in deviations from the desired dimensions and in changes of the mechanical properties. During WAAM deposition of the wall structure, a systematic study is carried out by adjusting the interlayer delay from 10 to 30 s. When the delay period is increased from 10 to 30 s, the wall width decreases by 45% and the height increases by 33%. Grain refinement occurs when the interlayer delay duration is increased, resulting in improved hardness, phase transformation temperatures, compressive strength, and shape recovery behavior. This study shows how the interlayer delay affects the behavior of WAAM-built nickel-titanium alloy (NiTi) structures in a variety of applications.
A versatile software package in the form of a Python extension, named CDEF (computing Debye’s scattering formula for extraordinary form factors), is proposed to calculate approximate scattering profiles of arbitrarily shaped nanoparticles for small-angle X-ray scattering (SAXS). CDEF generates a quasi-randomly distributed point cloud in the desired particle shape and then applies the open-source software DEBYER for efficient evaluation of Debye’s scattering formula to calculate the SAXS pattern (https://github.com/j-from-b/CDEF). If self-correlation of the scattering signal is not omitted, the quasi-random distribution provides faster convergence compared with a true-random distribution of the scatterers, especially at higher momentum transfer. The usage of the software is demonstrated for the evaluation of scattering data of Au nanocubes with rounded edges, which were measured at the four-crystal monochromator beamline of PTB at the synchrotron radiation facility BESSY II in Berlin. The implementation is fast enough to run on a single desktop computer and perform model fits within minutes. The accuracy of the method was analyzed by comparison with analytically known form factors and verified with another implementation, the SPONGE, based on a similar principle with fewer approximations. Additionally, the SPONGE coupled to McSAS3 allows one to retrieve information on the uncertainty of the size distribution using a Monte Carlo uncertainty estimation algorithm.
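The core of the method, Debye's scattering formula, can be sketched directly in NumPy for a small point cloud; CDEF itself delegates this O(N²) sum to debyer and fills the shape with a quasi-random (low-discrepancy) sequence, while this minimal example assumes identical scatterers and a plain uniform fill:

```python
import numpy as np

def debye_intensity(points, q_values):
    """Debye's scattering formula I(q) = sum_ij sin(q r_ij) / (q r_ij)
    for identical point scatterers (all form factors set to 1)."""
    diff = points[:, None, :] - points[None, :, :]
    r = np.linalg.norm(diff, axis=-1)          # pairwise distance matrix
    intensity = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        s = np.sinc(q * r / np.pi)             # np.sinc(x) = sin(pi x)/(pi x)
        intensity[k] = s.sum()                 # i == j self-terms give sinc(0) = 1
    return intensity

# Uniformly filled cube as a stand-in particle shape (CDEF would use a
# quasi-random point cloud here for faster convergence).
rng = np.random.default_rng(0)
cloud = rng.uniform(-5.0, 5.0, size=(500, 3))  # nm
q = np.linspace(0.1, 3.0, 100)                 # 1/nm
profile = debye_intensity(cloud, q)
```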
This study presents a new approach to distinguishing writing inks that have the same elemental compositions and visual appearances. The approach is based on displaying the intensity of elemental distributions as heat maps that represent data recorded with a scanning μX-ray fluorescence spectrometer. The heat maps present the data so as to facilitate digitally identifying and distinguishing between inks used to produce, correct, and reink two medieval Torah scrolls. As ritual objects, Torah scrolls had to be written in accordance with exacting standards that evolved over time. This requirement led to successive stages of modifications, sometimes over centuries. Both vitriolic and non-vitriolic inks used to modify Torah scrolls can be visually identical to each other. Furthermore, different non-vitriolic inks usually have an identical elemental composition. The solid-material analysis evidence and its presentation as heat maps made it possible to discriminate between original and altered portions of text, which in some cases would otherwise have been impossible. Our interdisciplinary work brought together conservation, material science, paleography, and philology to enable the identification of complex stratigraphy in multiple stages of production, correction, and reinking.
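Rendering such elemental distributions as heat maps is straightforward; the sketch below uses a synthetic intensity array in place of a real scanning μXRF export, with the "reinked" region purely invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a scanning muXRF elemental map
# (counts of one element per scan pixel).
rng = np.random.default_rng(3)
intensity = rng.poisson(lam=20, size=(80, 120)).astype(float)
intensity[30:40, 50:90] += 60    # hypothetical region of reinked text

fig, ax = plt.subplots()
im = ax.imshow(intensity, cmap="inferno", origin="lower")
fig.colorbar(im, ax=ax, label="elemental counts (arb.)")
ax.set_xlabel("scan x (pixel)")
ax.set_ylabel("scan y (pixel)")
fig.savefig("ink_heatmap.png", dpi=150)
```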
Isotope ratio applications are on the increase, and a major part of them are delta measurements, because these are easier to perform than the determination of absolute isotope ratios while offering lower measurement uncertainties. Delta measurements use artefact-based scales, and therefore scale conversions are required due to the lack of the scale-defining standards. Such scale conversions often form the basis for comparing data generated in numerous projects and therefore need to be as accurate as possible. In practice, users are tempted to apply linear approximations, which are not sufficiently exact because delta values are defined by nonlinear relationships. The bias of such approximations is often beyond typical measurement uncertainties, and its extent can hardly be predicted. Therefore, exact calculations are advised. Here, the exact equations and the bias of the approximations are presented, and the calculations are illustrated by real-world examples. Measurement uncertainty is indispensable in this context; therefore, its calculation is described both for determining delta values and for scale conversions. Approaches for a single delta measurement and for repeated measurements are presented. For the latter case, a new approach for calculating the measurement uncertainty is presented, which considers covariances between the isotope ratios.
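The nonlinearity referred to above follows directly from the definition of delta values (standard convention, not copied from the paper): with δ_{A/B} = R_A / R_B − 1,

```latex
\delta_{A/C} = (1+\delta_{A/B})(1+\delta_{B/C}) - 1
             = \delta_{A/B} + \delta_{B/C} + \delta_{A/B}\,\delta_{B/C} ,
```

so the common linear shortcut δ_{A/C} ≈ δ_{A/B} + δ_{B/C} silently drops the cross term δ_{A/B} δ_{B/C}, whose size grows with the magnitudes of the delta values and can exceed typical measurement uncertainties.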
DNA long-term stability and integrity are of importance for applications in DNA-based bio-dosimetry, data storage, pharmaceutical quality control, donor insemination, and DNA-based functional nanomaterials. Standard protocols for these applications involve repeated freeze-thaw cycles of the DNA, which can cause detrimental damage to the nucleobases as well as to the sugar-phosphate backbone, and therefore to the whole molecule. Throughout the literature, three hypotheses can be found about the underlying mechanisms occurring during freeze-thaw cycles. It is hypothesized that DNA single-strand breaks during freezing can be induced by mechanical stress leading to shearing of the DNA molecule, by acidic pH causing damage through depurination and beta elimination, or by the presence of metal ions catalyzing oxidative damage via reactive oxygen species (ROS). Here we test these hypotheses under well-defined conditions with plasmid DNA pUC19 in high-purity buffer (1x PBS) at physiological salt concentration and pH 7.4, at pH 6, and in the presence of metal ions in combination with the radical scavengers DMSO and ectoine. The results show for the 2686 bp plasmid DNA that neither mechanical stress nor pH 6 leads to degradation during repeated freeze-thaw cycles. In contrast, the presence of metal ions (Fe2+) leads to degradation of DNA via the production of radical species.
Measurement and calculation of x-ray production efficiencies for copper, zirconium, and tungsten
(2022)
Electron probe microanalysis (EPMA) is based on physical relations between measured X-ray intensities of characteristic lines and their X-ray production efficiency, which depends on the specimen composition. The quality of the analysis results relies on how realistically the physical relations describe the generation and emission of X-rays. Special experiments are necessary to measure X-ray production efficiencies. A challenge in these experiments is the determination of the detection efficiency of the spectrometer as a function of the photon energy. An energy-dispersive spectrometer was used in this work, for which the efficiency was determined at metrological synchrotron beamlines with an accuracy of ±2%. X-ray production efficiencies for the L series and the Kα series of copper and zirconium and for the M and L series of tungsten were determined at energies up to 30 keV in a scanning electron microscope. These experimental values were compared with calculated X-ray production efficiencies using physical relations and material constants applied in EPMA. The objective of the comparison is the further improvement of EPMA algorithms as well as extending the available database for X-ray production efficiencies. Experimental data for the X-ray production efficiency are also useful for the assessment of spectrum simulation software.
The BAMline at the BESSY II synchrotron X-ray source has enabled research for more than 20 years in research fields as diverse as materials science, biology, cultural heritage and medicine. As a nondestructive characterization method, synchrotron X-ray imaging, especially tomography, plays a particularly important role in structural characterization. A recent upgrade of key equipment of the BAMline widens its imaging capabilities: shorter scan acquisition times are now possible, in situ and operando studies can now be routinely performed, and different energy spectra can easily be set up. In fact, the upgraded double-multilayer monochromator brings full flexibility by yielding different energy spectra to optimize flux and energy resolution as desired. The upgraded detector (based on an sCMOS camera) also allows exploiting the higher flux with reduced readout times. Furthermore, an installed slip ring allows the sample stage to rotate continuously. The latter feature enables tomographic observation of processes occurring on the time scale of a few seconds.
A systematic study has been carried out to investigate the neutron transmission signal as a function of sample temperature. In particular, the experimentally determined wavelength-dependent neutron attenuation spectra for a martensitic steel at temperatures ranging from 21 to 700°C are compared with simulated data. A theoretical description that includes the Debye–Waller factor in order to describe the temperature influence on the neutron cross sections was implemented in the nxsPlotter software and used for the simulations. The analysis of the attenuation coefficients at varying temperatures shows that the missing contributions due to elastic and inelastic scattering can be clearly distinguished: while the elastically scattered intensities decrease with higher temperatures, the inelastically scattered intensities increase, and the two can be separated from each other by analysing unique sharp features in the form of Bragg edges. This study presents the first systematic approach to quantifying this effect and can serve as a basis, for example, for correcting measurements taken during in situ heat treatments, which in many cases is a prerequisite for obtaining quantifiable results.
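The temperature dependence enters through the Debye–Waller factor; in its standard isotropic form (a textbook relation, assumed here rather than quoted from the paper) the coherent elastic contribution of a reflection hkl is damped as

```latex
I_{hkl}(T) \propto |F_{hkl}|^2
  \exp\!\left(-2 B(T)\,\frac{\sin^2\theta}{\lambda^2}\right),
\qquad B(T) = 8\pi^2 \langle u^2(T) \rangle ,
```

so the elastic (Bragg-edge) intensities fall as the mean-square atomic displacement ⟨u²⟩ grows with temperature, while the lost intensity reappears in the inelastic (thermal diffuse) channel, consistent with the separation of the two contributions described above.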
During their life span, concrete structures interact with many kinds of external mechanical loads. Most of these loads are considered in advance and result in reversible deformations. Nevertheless, some of the loads cause irreversible, sometimes unnoticed changes below the macroscopic scale, depending on the type and dimension of the impact. As the functionality of concrete structures is often relevant to safety and society, their condition must be known and, therefore, assessed on a regular basis. Out of the spectrum of non-destructive monitoring methods, Coda Wave Interferometry using embedded ultrasonic sensors is a particularly sensitive technique for evaluating changes to heterogeneous media. However, there are various influences on Coda waves in concrete, and the interpretation of their superimposed effect is ambiguous. In this study, we quantify the effects of uniaxial compression and uniaxial tension on Coda waves propagating in normal concrete. We found that both the correlation of the ultrasonic signals and their velocity variation directly reflect the stress change in concrete structures in a laboratory environment. For the linear elastic range up to 30% of the strength, we calculated a velocity variation of −0.97‰/MPa for compression and 0.33%/MPa for tension using linear regression. In addition, these parameters revealed even weak irreversible changes after removal of the load. Furthermore, we show the time-dependent effects of shrinkage and creep on Coda waves by tracking the development of the signal parameters over half a year, including creep recovery. Our observations showed that time-dependent material changes must be taken into account for any comparison of ultrasonic signals that are far apart in time. The study's results demonstrate how Coda Wave Interferometry is capable of monitoring stress changes and detecting even small microstructural changes. By establishing the stated relations and separating them from further influences, e.g., temperature and moisture, we anticipate that our study will contribute to the qualification of Coda Wave Interferometry for its application as an early-warning system for concrete structures.
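Velocity variations in coda wave interferometry are commonly estimated with the stretching technique; the minimal sketch below (synthetic traces, grid search over stretch factors) illustrates one of several established estimators and is not necessarily the exact processing used in the study:

```python
import numpy as np

def stretching_dv_v(reference, current, t, eps_max=2e-3, n=401):
    """Estimate the relative velocity change by time-stretching the
    reference trace and maximizing correlation with the current trace."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in np.linspace(-eps_max, eps_max, n):
        stretched = np.interp(t * (1 + eps), t, reference)
        cc = np.corrcoef(stretched, current)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc   # sign convention for dv/v varies in the literature

# Synthetic coda: exponentially decaying noise; the "current" trace is the
# reference with a 0.05% change in apparent propagation speed.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 4000)
ref = rng.normal(size=t.size) * np.exp(-3.0 * t)
cur = np.interp(t * (1 - 5e-4), t, ref)

eps, cc = stretching_dv_v(ref, cur, t)
print(f"estimated stretch: {eps:.2e}, correlation: {cc:.3f}")
```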
What is meant by ‘Micro Non-Destructive Testing and Evaluation’? This was the central subject of debate in this Special Issue.
At present, sub-millimeter-size components or even assemblies are pervading the industrial and scientific world. Classic examples are electronic devices and watches (as well as parts thereof), but recent examples encompass additively manufactured lattice structures, stents, or other microparts. Moreover, most assemblies contain micro-components. Testing such components or their miniaturized parts would fit well within the topic of micro non-destructive testing and evaluation.
In all cases, performance and integrity testing, quality control, and dimensional tolerances need to be measured at the sub-millimeter level (ideally with a spatial resolution of about a micron); most of the time, such features and components are embedded in much larger assemblies, which also need to be taken into account. The solution to this dilemma (i.e. measuring large parts with high resolution) depends on the part and on the problem under consideration.
Another possible definition of micro non-destructive testing and evaluation can relate to the characterization of micro-features (e.g., the microstructure) in much larger specimens, such as damage in concrete cores or porosity in additively manufactured components. A further aspect is the use of microscopic probes to evaluate macroscopic properties. This is the case, for instance but not at all exclusively, in the use of diffraction techniques to determine macroscopic stress.
The split between testing and characterization at the micro level (or of micro parts) on one side and the handling of macroscopic assemblies on the other represents a great challenge for many fields of materials characterization. On top of that, including the use of microscopic methods to test integrity would add a further level of complexity.
Imaging, mechanical testing, non-destructive testing, measurement of properties, structural health monitoring, and dimensional metrology all need to be re-defined if we want to cope with the multi-faceted topic of micro non-destructive testing and evaluation.
The challenge has already been accepted by the scientific and engineering communities for a while but is still far from being universally tackled. This Special Issue gives an interesting answer to the questions posed above. It presents the progress made and the different aspects of the challenge, and it indicates paths for the future of NDT&E.
Current trends in materials and life sciences are flanked by the need to push detection limits to single molecules or single cells, enable the characterization of increasingly complex matrices or sophisticated nanostructures, speed up the time of analysis, reduce instrument complexity and costs, and improve the reliability of data. This requires suitable analytical tools such as spectroscopic, separation and imaging techniques, mass spectrometry, and hyphenated techniques as well as sensors, and their adaptation to application-specific challenges in the environmental, food, consumer product, and health sectors as well as in nanotechnology and bioanalysis. Increasing concerns about health-threatening known or emerging pollutants in drinking water, consumer products, and food, and about the safety of nanomaterials, have led to a new awareness of the importance of the analytical sciences. Another important driver in this direction is the increasing demand by legislation, particularly in view of the United Nations' 17 Sustainable Development Goals addressing clean energy, industry and innovation, sustainable cities, clean water, and responsible consumption and production. In this respect, the development of analytical methods that enable the characterization of material flows in production processes and support recycling concepts for precious raw materials also becomes more and more relevant. In the future, this will provide the basis for greener production in the chemical industry utilizing recycled or sustainable starting materials.
This makes analytical chemistry an essential player in the circular economy, helping to increase the sustainability of production processes. In the life sciences sector, products based on proteins, such as therapeutic and diagnostic antibodies, are increasing in importance. These increasingly biotechnologically produced functional biomolecules pose a high level of matrix and structural complexity that can be met only by highly advanced methods for separation, characterization, and detection. In addition, metrological traceability and target definition are still significant challenges for the future, particularly in the life sciences.
However, innovative reference materials as required for the health and food sector and the characterization of advanced materials can only be developed when suitable analytical protocols are available. The so-called reproducibility crisis in sciences underlines the importance of improved measures of quality control for all kinds of measurements and material characterization. This calls for thorough method validation concepts, suitable reference materials, and regular interlaboratory comparisons of measurements as well as better training of scientists in analytical sciences.
The important contribution of analytical sciences to these developments is highlighted by a broad collection of research papers, trend articles, and critical reviews from these different application fields. Special emphasis is dedicated to often-overlooked quality assurance and reference materials.
The acquisition and appropriate processing of relevant information about the considered system remains a major challenge in the assessment of existing structures. Both the values and the validity of computed results such as failure probabilities essentially depend on the quantity and quality of the incorporated knowledge. One source of information is on-site measurements of structural or material characteristics to be modeled as basic variables in reliability assessment. The explicit use of (quantitative) measurement results in assessment requires the quantification of the quality of the measured information, i.e., the uncertainty associated with the information acquisition and processing. This uncertainty can be referred to as measurement uncertainty. Another crucial aspect is to ensure the comparability of the measurement results. This contribution attempts to outline the necessity and the advantages of measurement uncertainty calculations in the modeling of measurement data-based random variables to be included in reliability assessment. It is shown how measured data representing time-invariant characteristics, in this case non-destructively measured inner geometrical dimensions, can be transferred into measurement results that are both comparable and quality-evaluated. The calculations are based on the rules provided in the Guide to the Expression of Uncertainty in Measurement (GUM). The GUM framework is internationally accepted in metrology and can serve as a starting point for the appropriate processing of measured data to be used in assessment. In conclusion, the effects of incorporating the non-destructively measured data into reliability analysis are presented using a prestressed concrete bridge as a case study.
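A minimal sketch of the GUM law of propagation of uncertainty for an illustrative measurand is given below; the model (an inner dimension d = a − b + c_cal, with a calibration correction) and all numbers are invented for the example and are not the bridge assessment model of the paper:

```python
import numpy as np

# GUM law of propagation for uncorrelated inputs:
#   u_c^2(y) = sum_i (df/dx_i)^2 * u^2(x_i)
# Illustrative model: inner dimension d = a - b + c_cal.
a, u_a = 250.0, 0.8        # mm; Type A uncertainty from repeated readings
b, u_b = 62.0, 0.8         # mm
c_cal, u_cal = 0.0, 0.5    # mm; Type B from a calibration certificate

d = a - b + c_cal
sensitivities = np.array([1.0, -1.0, 1.0])        # df/da, df/db, df/dc_cal
u = np.array([u_a, u_b, u_cal])
u_c = np.sqrt(np.sum((sensitivities * u) ** 2))   # combined standard uncertainty
U = 2.0 * u_c                                     # expanded uncertainty, k = 2

print(f"d = {d:.1f} mm, u_c = {u_c:.2f} mm, U (k=2) = {U:.2f} mm")
```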
Designing the shape and size of catalyst particles, and their interfacial charge, at the nanometer scale can radically change their performance. We demonstrate this with ceria nanoparticles. In aqueous media, nanoceria is a functional mimic of haloperoxidases, a group of enzymes that oxidize organic substrates, or of peroxidases that can degrade reactive oxygen species (ROS) such as H2O2 by oxidizing an organic substrate. We show that the chemical activity of CeO2−x nanoparticles in haloperoxidase- and peroxidase-like reactions scales with their active surface area, their surface charge, given by the ζ-potential, and their surface defects (via the Ce3+/Ce4+ ratio). Haloperoxidase-like reactions are controlled through the ζ-potential, as they involve the adsorption of charged halide anions on the CeO2 surface, whereas peroxidase-like reactions without charged substrates are controlled through the specific surface area SBET. Mesoporous CeO2−x particles with large surface areas were prepared via template-free hydrothermal reactions and characterized by small-angle X-ray scattering. Surface area, ζ-potential and the Ce3+/Ce4+ ratio are controlled in a simple and predictable manner by the synthesis time of the hydrothermal reaction, as demonstrated by X-ray photoelectron spectroscopy, sorption and ζ-potential measurements. The surface area increased with synthesis time, whilst the Ce3+/Ce4+ ratio scaled inversely with the ζ-potential. In this way, the catalytic activity of mesoporous CeO2−x particles could be tailored selectively for haloperoxidase- and peroxidase-like reactions. The ease of tuning the surface properties of mesoporous CeO2−x particles by varying the synthesis time makes the synthesis a powerful general tool for the preparation of nanocatalysts according to individual needs.