Analytical Chemistry
XPS of GR2M
(2022)
In recent years, additive manufacturing technologies have gained in importance. Laser powder bed fusion can be used for complex functional components or for the production of workpieces in small quantities. High safety requirements, e.g. in the aerospace sector, demand comprehensive quality control. Therefore, non-destructive offline testing methods such as computed tomography are applied after production. More recently, non-destructive online testing methods such as optical tomography have been developed to improve profitability and practicality. This contribution demonstrates the applicability of eddy current testing with GMR sensors for the online inspection of PBF-LB/M parts. The results of an online eddy current test using GMR sensors and a single-wire excitation are presented. During the production process, an eddy current measurement is performed for every layer. Despite high-resolution arrays with 128 elements, adapted hardware keeps the testing time short, so that the measurement can be carried out during the recoating step without significantly slowing down the manufacturing process. An online eddy current test of a step-shaped test specimen made of Haynes 282, covering 184 layers, shows that the edges can be detected not only in the current layer but also at a depth of 400 µm when an excitation frequency of 1.2 MHz is chosen.
Non-destructive testing of metallic components produced by additive manufacturing (AM) is gaining industrial importance. The reason is the need to establish quality, reproducibility, and thus safety for components manufactured by AM. However, testing is still performed ex situ, so that defects (e.g. pores, cracks, etc.) are only discovered after the process has finished. If the number and/or size of these defects exceed the specified limits, the part is rejected, which is extremely uneconomical in view of the very long build times. One difficulty is that some defects form only with a time delay relative to the actual material deposition, e.g. due to thermal stresses or melt pool activity. Pure monitoring approaches may therefore not be sufficient for their detection.
For this reason, this work investigates an active thermography method for the AM process laser powder bed fusion (L-PBF). Between the individual manufactured layers, and independently of the actual build process, the part is heated with the defocused process laser at low laser power. The resulting heat signature is recorded with spatial and temporal resolution by an infrared camera. This inspection, performed after each layer, also makes defect formation that is delayed with respect to the build process detectable.
In this work, the investigations are carried out as a proof of concept, decoupled from the AM process, on a typical metallic test specimen that contains a groove as a surface defect. The measurements are performed inside the process chamber of a purpose-built L-PBF research machine. This constitutes a novel approach to active thermography for L-PBF that makes a wider range of defect types detectable. The approach is validated, and its accuracy and resolution are examined. An application to the AM process is thus directly pursued, and the relationships required for it are presented.
News from standardisation
(2022)
Due to their unique physico-chemical properties, nanoparticles are well established in research and industrial applications. A reliable characterization of their size, shape, and size distribution is not only mandatory to fully understand and exploit their potential and develop reproducible syntheses, but also to manage environmental and health risks related to their exposure and for regulatory requirements. To validate and standardize methods for accurate and reliable particle size determination, nanoscale reference materials (nanoRMs) are necessary. However, there is only a very small number of nanoRMs for particle size offered by key distributors such as the National Institute of Standards and Technology (NIST) and the Joint Research Centre (JRC) and, moreover, few provide certified values. In addition, these materials are currently restricted to polymers, silica, titanium dioxide, gold and silver, which have a spherical shape except for titania nanorods. To expand this list with other relevant nanomaterials of different shapes and elemental compositions that can be used for more than one sizing technique, we are currently building up a platform of novel nanoRMs relying on iron oxide nanoparticles of different shape, size and surface chemistry. Iron oxide was chosen as a core material because of its relevance for the material and life sciences.
Quaternary ammonium compounds (QACs) are widely used as active agents in disinfectants, antiseptics, and preservatives. Despite being in use since the 1940s, there remain multiple open questions regarding their detailed mode-of-action and the mechanisms, including phenotypic heterogeneity, that can make bacteria less susceptible to QACs. To facilitate studies on resistance mechanisms towards QACs, we synthesized a fluorescent quaternary ammonium compound, namely N-dodecyl-N,N-dimethyl-[2-[(4-nitro-2,1,3-benzoxadiazol-7-yl)amino]ethyl]azanium-iodide (NBD-DDA). NBD-DDA is readily detected by flow cytometry and fluorescence microscopy with standard GFP/FITC-settings, making it suitable for molecular and single-cell studies. As a proof-of-concept, NBD-DDA was then used to investigate resistance mechanisms which can be heterogeneous among individual bacterial cells. Our results reveal that the antimicrobial activity of NBD-DDA against Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa is comparable to that of benzalkonium chloride (BAC), a widely used QAC, and benzyl-dimethyl-dodecylammonium chloride (BAC12), a mono-constituent BAC with alkyl-chain length of 12 and high structural similarity to NBD-DDA. Characteristic time-kill kinetics and increased tolerance of a BAC tolerant E. coli strain against NBD-DDA suggest that the mode of action of NBD-DDA is similar to that of BAC. As revealed by confocal laser scanning microscopy (CLSM), NBD-DDA is preferentially localized to the cell envelope of E. coli, which is a primary target of BAC and other QACs. Leveraging these findings and NBD-DDA‘s fluorescent properties, we show that reduced cellular accumulation is responsible for the evolved BAC tolerance in the BAC tolerant E. coli strain and that NBD-DDA is subject to efflux mediated by TolC. Overall, NBD-DDA’s antimicrobial activity, its fluorescent properties, and its ease of detection render it a powerful tool to study resistance mechanisms of QACs in bacteria and highlight its potential to gain detailed insights into its mode-of-action.
Protein adsorption at the air–water interface is a serious problem in cryogenic electron microscopy (cryoEM) as it restricts particle orientations in the vitrified ice-film and promotes protein denaturation. To address this issue, the preparation of a graphene-based modified support film for coverage of conventional holey carbon transmission electron microscopy (TEM) grids is presented. The chemical modification of graphene sheets enables the universal covalent anchoring of unmodified proteins via inherent surface-exposed lysine or cysteine residues in a one-step reaction. A Langmuir–Blodgett (LB) trough approach is applied for the deposition of functionalized graphene sheets onto commercially available holey carbon TEM grids. The application of the modified TEM grids in single particle analysis (SPA) shows high protein binding to the surface of the graphene-based support film. Suitability for high resolution structure determination is confirmed by SPA of apoferritin. Prevention of protein denaturation at the air–water interface and improvement of particle orientations is shown using human 20S proteasome, demonstrating the potential of the support film for structural biology.
Biopolymers are the building blocks of life. Their properties are exploited for material functionalization on the nanoscale in a flexible manner. An overview of current research activities in the fields of sensing, nanostructuring, radiation damage measurements on DNA and proteins, and microfluidics is given.
Core–shell nanoparticles have attracted much attention in recent years due to their unique properties and their increasing importance in many technological and consumer products. However, the chemistry of nanoparticles is still rarely investigated in comparison to their size and morphology. In this review, the possibilities, limits, and challenges of X-ray photoelectron spectroscopy (XPS) for obtaining more insights into the composition, thickness, and homogeneity of nanoparticle coatings are discussed with four examples: CdSe/CdS quantum dots with a thick coating and a small core; NaYF4-based upconverting nanoparticles with a large Yb-doped core and a thin Er-doped coating; and two types of polymer nanoparticles with a poly(tetrafluoroethylene) core with either a poly(methyl methacrylate) or polystyrene coating. Different approaches for calculating the thickness of the coating are presented, like a simple numerical modelling or a more complex simulation of the photoelectron peaks. Additionally, modelling of the XPS background for the investigation of coating is discussed. Furthermore, the new possibilities to measure with varying excitation energies or with hard-energy X-ray sources (hard-energy X-ray photoelectron spectroscopy) are described. A discussion about the sources of uncertainty for the determination of the thickness of the coating completes this review.
VAMAS-Enabling international standardisation for increasing the take up of Emerging Materials
(2022)
VAMAS (Versailles Project on Advanced Materials and Standards) supports world trade in products dependent on advanced materials technologies by providing the technical basis for harmonized measurements, testing, specifications, reference materials and standards. The major tools for fulfilling this task are interlaboratory comparisons (ILCs). The organisational structure of VAMAS is presented, and it is discussed how a new technical activity can be initiated.
Climate change and related energy policies, exacerbated by unforeseen geopolitical developments, pose new challenges for gas analytics, such as the use of hydrogen, hydrogen-containing alternative gaseous fuels (NH3, etc.), the use of alternative methane-based energy gases (LNG, LPG, etc.) or decarbonisation via CCSU. In all of these topics, the quality, i.e. the actual chemical composition, of the gases naturally plays a decisive role. BAM is responding to this strategic importance by further developing its hydrogen analytics and the methods used, in order to support the German economy and research landscape with traceability, reference materials and analytical procedures as quickly as possible.
Mass spectrometry plays an important role for trace analysis in hydrogen matrix. The presentation shows first experimental results from the application of PTR-TOF-MS (Proton Transfer Reaction Time-of-Flight Mass Spectrometry).
Since its discovery, graphene has received growing attention in industrial and applied research due to its unique properties. However, graphene has not yet been implemented in the industrial market, in particular due to the difficulty of properly characterizing this challenging material. As with most other nanomaterials, graphene's properties are closely linked to its chemical and structural properties, such as the number of layers, flake thickness, degree of functionalisation and C/O ratio. For commercialization, suitable procedures for the measurement and characterization of the ultrathin flakes, with lateral dimensions in the range from µm to tens of µm, are essential. Surface chemical methods, especially XPS, play an outstanding role in providing chemical information on the composition. A well-known problem for surface analytical methods is the influence of contamination on the composition, as in the case of adventitious carbon. The differentiation between carbon originating from the contamination and carbon from the graphene sample itself is often not obvious, which can lead to altered results in the determination of the composition. To overcome this problem, Hard Energy X-ray Photoelectron Spectroscopy (HAXPES) offers new possibilities due to its higher information depth. Therefore, XPS measurements obtained with Al Kα radiation (E = 1486.6 eV) were compared with analyses performed with Cr Kα (E = 5414.8 eV) excitation on functionalized graphene samples. Differences are discussed in terms of the influence of potential carbon contamination, but also of oxygen, on the composition of the samples. Measurements are performed on O-, N- and F-functionalized graphene. Different preparation procedures (powder, pellet, drop cast from liquid suspension) will also be discussed; correlation of the results with the flake morphology as well as their validation with other independent methods are in progress.
Whereas the characterization of nanomaterials using different analytical techniques is often highly automated and standardized, the sample preparation that precedes it causes a bottleneck in nanomaterial analysis as it is performed manually. Usually, this pretreatment depends on the skills and experience of the analysts. Furthermore, adequate reporting of the sample preparation is often missing. In this overview, some solutions for techniques widely used in nano-analytics to overcome this problem are discussed. Two examples of sample preparation optimization by automation are presented, which demonstrate that this approach leads to increased analytical confidence. Our first example is motivated by the need to exclude human bias and focuses on the development of automation in sample introduction. To this end, a robotic system has been developed, which can prepare stable and homogeneous nanomaterial suspensions amenable to a variety of well-established analytical methods, such as dynamic light scattering (DLS), small-angle X-ray scattering (SAXS), field-flow fractionation (FFF) or single-particle inductively coupled plasma mass spectrometry (sp-ICP-MS). Our second example addresses biological samples, such as cells exposed to nanomaterials, which are still challenging for reliable analysis. An air–liquid interface has been developed for the exposure of biological samples to nanomaterial-containing aerosols. The system exposes transmission electron microscopy (TEM) grids under reproducible conditions, whilst also allowing characterization of the aerosol composition with mass spectrometry. Such an approach enables correlative measurements combining biological with physicochemical analysis. These case studies demonstrate that standardization and automation of sample preparation setups, combined with appropriate measurement processes and data reduction, are crucial steps towards more reliable and reproducible data.
Catalysts derived from pyrolysis of metal organic frameworks (MOFs) are promising candidates to replace expensive and scarce platinum-based electrocatalysts commonly used in polymer electrolyte membrane fuel cells. MOFs contain ordered connections between metal centers and organic ligands. They can be pyrolyzed into metal- and nitrogen-doped carbons, which show electrocatalytic activity toward the oxygen reduction reaction (ORR). Furthermore, metal-free heteroatom-doped carbons, such as N-F-Cs, are known for being active as well. Thus, a carbon material with Co-N-F doping could possibly be even more promising as ORR electrocatalyst. Herein, we report the mechanochemical synthesis of two polymorphs of a zeolitic imidazole framework, Co-doped zinc 2-trifluoromethyl-1H-imidazolate (Zn0.9Co0.1(CF3-Im)2). Time-resolved in situ X-ray diffraction studies of the mechanochemical formation revealed a direct conversion of starting materials to the products. Both polymorphs of Zn0.9Co0.1(CF3-Im)2 were pyrolyzed, yielding Co-N-F containing carbons, which are active toward electrochemical ORR.
A facile and efficient methodology is described for the solvothermal synthesis of size-tunable, stable, and uniform NiCu core–shell nanoparticles (NPs) for application in catalysis. The diameter of the NPs is tuned in a range from 6 nm to 30 nm and to adjust the Ni:Cu ratio from 30:1 to 1:1. Furthermore, the influence of different reaction parameters on the final NPs is studied. The NPs are structurally characterized by a method combination of transmission electron microscopy, anomalous small-angle X-ray scattering, X-ray absorption fine structure, and X-ray photoelectron spectroscopy. Using these analytical methods, it is possible to elucidate a core–shell–shell structure of all particles and their chemical composition. In all cases, a depletion from the core to the shell is observed, with the core consisting of NiCu alloy, surrounded by an inner Ni-rich shell and an outer NiO shell. The SiO2-supported NiCu core–shell NPs show pronounced selectivity of >99% for CO in the catalytic reduction of CO2 to CO using hydrogen as reactant (reverse water–gas shift reaction) independent of size and Ni:Cu ratio.
The thickness of thin films can be determined with techniques such as atomic force microscopy (AFM) or X-ray reflectometry. For the additional determination of the thin film composition, techniques like X-ray photoelectron spectroscopy (XPS) or mass spectrometry-based techniques can be used. An alternative non-destructive technique is electron probe microanalysis (EPMA). This method assumes a sample of homogeneous (bulk) chemical composition, so that it cannot usually be applied to thin film samples. However, in combination with the thin film software StrataGEM, the thickness as well as the composition of such films on a substrate can be determined.
This has been demonstrated for FeNi on Si and SiGe on Al2O3 film systems. For both systems, five samples with different elemental compositions and a reference were produced and characterised by the Korean research institute KRISS using inductively coupled plasma mass spectrometry (ICP-MS), Rutherford backscattering spectrometry (RBS), and transmission electron microscopy (TEM). These samples were used for an international round robin test.
In 2021, a new open-source thin film evaluation programme called BadgerFilm was released. It can also be used to determine thin film composition and thickness from intensity ratios of the unknown sample and standards (k-ratios).
In this contribution, we re-evaluated the data acquired for the FeNi and SiGe systems using the BadgerFilm software package and compared the resulting composition and thickness with the results of the established StrataGEM software and other reference methods. With the current evaluation, the BadgerFilm software shows good agreement with the composition and thickness calculated by StrataGEM as well as with the reference values provided by KRISS.
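For context, the k-ratio mentioned above is the standard EPMA quantity relating the measured characteristic X-ray intensity of element i in the unknown sample to that of a known standard,

    \[ k_i = \frac{I_i^{\text{sample}}}{I_i^{\text{standard}}} , \]

and thin-film programs such as StrataGEM or BadgerFilm essentially iterate an assumed layer composition and mass thickness until the k-ratios calculated from a depth-distribution (φ(ρz)) model reproduce the measured ones. This is a general description of the approach, not a statement about the specific implementations compared here.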
MXenes are a new family of two-dimensional (2D) transition metal carbides, carbonitrides, and nitrides discovered in 2011. Among many reported family members, titanium carbide is the most widely studied and explored due to the optimized synthesis conditions and promising characteristics like good mechanical strength, solution processability, and excellent conductivity. Here, we report the development of an electrochemical biosensor involving the amine-functionalized Few-Layered-Titanium Carbide Nanosheets and monoclonal antibodies against the SARS-CoV-2 nucleocapsid protein (anti-SARS-CoV-2 mAb) to design a point-of-care device for detection of the SARS-CoV-2 nucleocapsid protein (SARS-CoV-2 NP) antigen.
The prerequisites for a successful energy transition, for the economic use of hydrogen as a clean, green energy carrier, and for H2 readiness are a rapid market ramp-up and the establishment of the required value chains. Reliable quality and safety standards for innovative technologies are the prerequisite for ensuring supply security, environmental compatibility and sustainable climate protection, and for building trust in these technologies, and thus enable product and process innovations.
With the Competence Centre "H2Safety@BAM", BAM is creating the safety-related prerequisites for the successful implementation of hydrogen technologies at national as well as European level. BAM uses decades of experience in dealing with hydrogen technologies to develop the necessary quality and safety standards.
The presentation will span an arc from the typical basic tasks of BAM in the competence field "Sensors, analytics and certified reference materials", such as the maintenance and dissemination of the national gas composition standards for calorific value determination as Designated Institute for Metrology in Chemistry within the framework of the Metre Convention, to the further development of measurement and sensor technology for these tasks. For the certification of reference materials, a mostly slow and time-consuming but solid reference analysis is common. With hydrogen and its special properties, completely new requirements are added. In addition, fast and simple online analysis is required for process control, for example to register quality changes, e.g., during load changes or refuelling processes.
An alternative method for lithium isotope analysis using high-resolution continuum source atomic absorption spectrometry (HR-CS-AAS) is proposed herein. This method is based on monitoring the isotope shift of approximately 15 pm for the electronic transition 2²P ← 2²S at a wavelength of around 670.8 nm, which can be measured by state-of-the-art HR-CS-AAS. Isotope analysis can be used for (i) the traceable determination of the Li concentration and (ii) isotope amount ratio analysis based on a combination of HR-CS-AAS and spectral data analysis by machine learning (ML).
In the first case, the Li spectra are described as the linear superposition of the contributions of the respective isotopes, each consisting of a spin-orbit doublet, which can be expressed as Gaussian components with constant spectral position and width and different relative intensity, reflecting the isotope ratio in the sample. Precision was further improved by using lanthanum as internal spectral standard. The procedure has been validated using human serum-certified reference materials. The results are metrologically comparable and compatible with the certified values.
In the second case, for isotope amount ratio analysis, a scalable tree boosting ML algorithm (XGBoost) was employed and calibrated using a set of samples with 6Li isotope amount fractions ranging from 0.06 to 0.99 mol mol−1. The trained ML model was validated with certified reference materials. The procedure was applied to the isotope amount ratio determination of a set of stock chemicals and a BAM candidate reference material NMC111 (LiNi1/3Mn1/3Co1/3O2), a Li-battery cathode material. These determinations were compared with those obtained by MC-ICP-MS and found to be metrologically comparable and compatible. The residual bias was −1.8‰, and the precision obtained ranged from 1.9‰ to 6.2‰. This precision was sufficient to resolve naturally occurring variations. The NMC111 cathode candidate reference material was analyzed using high-resolution continuum source atomic absorption spectrometry with and without matrix purification to assess its suitability for technical applications. The results obtained were metrologically compatible with each other.
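To illustrate the spectral model described in the first case, the following minimal Python sketch fits an absorbance spectrum as a superposition of 6Li and 7Li spin-orbit doublets; the line positions, widths and the 2:1 doublet intensity ratio are illustrative placeholders, not the values or the fitting procedure used in the study.

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative doublet centres (nm); the 6Li/7Li isotope shift is about 15 pm.
    LI7_LINES = (670.776, 670.791)
    LI6_LINES = (670.792, 670.807)
    SIGMA = 0.004  # common Gaussian width (nm), placeholder

    def doublet(wl, centres):
        # 2:1 intensity ratio of the two fine-structure components (placeholder)
        return (2.0 * np.exp(-0.5 * ((wl - centres[0]) / SIGMA) ** 2)
                + np.exp(-0.5 * ((wl - centres[1]) / SIGMA) ** 2))

    def model(wl, amplitude, x6):
        # Total absorbance: superposition of the 6Li and 7Li doublets,
        # weighted by the 6Li amount fraction x6.
        return amplitude * (x6 * doublet(wl, LI6_LINES)
                            + (1.0 - x6) * doublet(wl, LI7_LINES))

    # With wl and absorbance given as measured arrays:
    # popt, _ = curve_fit(model, wl, absorbance, p0=(1.0, 0.08), bounds=([0, 0], [np.inf, 1]))
    # x6_fitted = popt[1]  # fitted 6Li isotope amount fraction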
Gold films coated with a graphene sheet are being widely used as sensors for the detection of label-free binding interactions using surface plasmon resonance (SPR). During the preparation of such sensors, it is often essential to subject the sensor chips to a high-temperature treatment in order to ensure a clean graphene surface. However, sensor chips used currently, which often use chromium as an adhesion promoter, cannot be subjected to temperatures above 250 °C, because under such conditions, chromium is found to reorganize and diffuse to the surface, where it is easily oxidized, impairing the quality of SPR spectra. Here we present an optimized preparation strategy involving a three-cycle tempering coupled with chromium (oxide) etching, which allows the graphene-coated SPR chips to be annealed up to 500 °C with little deterioration of the surface morphology. In addition, the treatment delivers a surface that shows a clear enhancement in spectral response together with a good refractive index sensitivity. We demonstrate the applicability of our sensors by studying the kinetics of avidin–biotin binding at different pH repeatedly on the same chip. The possibility to anneal can be exploited to recover the original surface after sensing trials, which allowed us to reuse the sensor for at least six cycles of biomolecule adsorption.
Avoiding the formation of defects such as keyhole pores is a major challenge for the production of metal parts by Laser Powder Bed Fusion (LPBF). The use of in-situ monitoring by thermographic cameras is a promising approach to detect defects; however, the data are hard to analyze with conventional algorithms. Therefore, we investigate the use of Machine Learning (ML) in this study, as it is a suitable tool to model complex processes with many influencing factors. A ML model for defect prediction is created based on features extracted from process thermograms. The porosity information calculated from an X-ray micro computed tomography (µCT) scan is used as reference. Physical characteristics of the keyhole pore formation are incorporated into the model to increase the prediction accuracy. Based on the prediction result, the quality of the input data is inferred and future demands on in-situ monitoring of LPBF processes are derived.
The formation of irregularities such as keyhole porosity poses a major challenge to the manufacturing of metal parts by laser powder bed fusion (PBF-LB/M). In-situ thermography as a process monitoring technique shows promising potential in this regard, since it is able to extract the thermal history of the part, which is closely related to the formation of irregularities. In this study, we investigate the utilization of machine learning algorithms to detect keyhole porosity on the basis of thermographic features. Here, as a reference technique, X-ray micro computed tomography is utilized to determine the part's porosity. An enhanced preprocessing workflow inspired by the physics of the keyhole irregularity formation is presented in combination with a customized model architecture. Furthermore, experiments were performed to clarify the role of important parameters of the preprocessing workflow for the task of defect detection. Based on the results, future demands on irregularity prediction in PBF-LB/M are derived.
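As a rough illustration of how such a thermography-based porosity classifier can be set up, the following Python sketch trains a model on precomputed thermographic features with µCT-derived labels as reference; the feature arrays, file names and the random-forest model are hypothetical stand-ins, not the customized architecture of the study.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    # X: (n_voxels, n_features) thermographic features, e.g. peak temperature,
    #    time above a threshold, cooling rate -- assumed to be precomputed
    # y: (n_voxels,) binary labels from the registered µCT scan (1 = keyhole pore)
    X = np.load("thermal_features.npy")      # hypothetical file names
    y = np.load("ct_porosity_labels.npy")

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                 random_state=0)
    clf.fit(X_train, y_train)
    print("F1 score:", f1_score(y_test, clf.predict(X_test)))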
Each engineering decision is based on a number of more or less accurate pieces of information. In the assessment of existing structures, additional relevant information collected with on-site inspections facilitates better decisions. However, observed data fundamentally represent the physical characteristic of interest with an uncertainty. This uncertainty is a measure of the inspection quality and can be quantified by stating the measurement uncertainty. The internationally accepted rules for calculating measurement uncertainty are well established and can be applied straightforwardly in many practical cases. Nevertheless, the calculations require the occasionally time-consuming development of an individually suitable measurement model. This contribution presents proposals for modelling the non-destructive depth measurement of tendons in concrete using the ultrasonic echo technique. The proposed model can serve as a guideline for determining the quality of the measured information in future comparable inspection scenarios.
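For illustration only, a strongly simplified measurement model of this kind (ignoring the additional influence quantities a full model would contain) relates the tendon depth d to the ultrasonic pulse velocity c_p and the measured two-way travel time t, with the combined standard uncertainty following the usual GUM propagation rule:

    \[
      d = \frac{c_p\, t}{2}, \qquad
      u_c(d) = \frac{1}{2}\sqrt{t^{2}\, u^{2}(c_p) + c_p^{2}\, u^{2}(t)}
    \]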
Existing concrete structures were usually designed for lifetimes of several decades. The current and urgently required efforts to increase sustainability and protect the environment will likely result in extended service lives up to 100 years. To achieve such objectives, it is required to assess structures over their entire lifecycles. Non-destructive testing (NDT) methods can reliably support the assessment of existing structures during the construction, operational, and decommissioning phases. One of the most important and safety-relevant components of a prestressed concrete structure are the tendons. NDT methods such as the ultrasonic echo method are suitable for both the detection and the localization of the tendons, i.e., the measurement of their geometrical position inside the component. The uniqueness of structures, concrete heterogeneity, and varying amounts of secondary components such as the reinforcement represent obstacles in the application of these methods in practice. The aim of this contribution is to demonstrate a practicable procedure, that can be used in the field to determine the parameters required for the measuring data analysis without extensive knowledge about the investigated components. For this purpose, a polyamide reference specimen is used to show which steps are required to obtain reliable imaging information on the position of tendons from the measurement data. The procedure is then demonstrated on a concrete test specimen that covers various relevant and practice-oriented test scenarios, such as varying tendon depths and component thicknesses.
Development of an Accurate and Robust Air-Coupled Ultrasonic Time-of-Flight Measurement Technique
(2022)
Ultrasonic time-of-flight (ToF) measurements enable the non-destructive characterization of material parameters as well as the reconstruction of scatterers inside a specimen. The time-consuming and potentially damaging procedure of applying a liquid couplant between specimen and transducer can be avoided by using air-coupled ultrasound. However, to obtain accurate ToF results, the waveform and travel time of the acoustic signal through the air, which are influenced by the ambient conditions, need to be considered. The placement of microphones as signal receivers is restricted to locations where they do not affect the sound field. This study presents a novel method for in-air ranging and ToF determination that is non-invasive and robust to changing ambient conditions or waveform variations. The in-air travel time was determined by utilizing the azimuthal directivity of a laser Doppler vibrometer operated in refracto-vibrometry (RV) mode. The time of entry of the acoustic signal was determined using the autocorrelation of the RV signal. The same signal was further used as a reference for determining the ToF through the specimen in transmission mode via cross-correlation. The derived signal processing procedure was verified in experiments on a polyamide specimen. Here, a ranging accuracy of <0.1 mm and a transmission ToF accuracy of 0.3μs were achieved. Thus, the proposed method enables fast and accurate non-invasive ToF measurements that do not require knowledge about transducer characteristics or ambient conditions.
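A minimal Python sketch of the cross-correlation step described above is shown below; the synthetic signals and the sampling rate are illustrative, and the published procedure additionally uses the autocorrelation of the refracto-vibrometry signal to fix the in-air reference time.

    import numpy as np

    def tof_cross_correlation(reference, transmitted, fs):
        """Return the delay (s) of `transmitted` relative to `reference`."""
        ref = reference - np.mean(reference)
        trans = transmitted - np.mean(transmitted)
        corr = np.correlate(trans, ref, mode="full")
        lag = np.argmax(corr) - (len(ref) - 1)   # lag in samples
        return lag / fs

    # Example with synthetic data: the transmitted pulse arrives 25 µs later.
    fs = 10e6                                    # 10 MHz sampling rate (illustrative)
    t = np.arange(0, 200e-6, 1 / fs)
    pulse = np.exp(-((t - 50e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 100e3 * t)
    delayed = np.roll(pulse, int(25e-6 * fs))
    print(tof_cross_correlation(pulse, delayed, fs) * 1e6, "µs")  # ~25 µs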
The dataset presented contains ultrasonic data recorded in pulse echo mode. The investigated specimen is made of the isotropic homogeneous material polyamide and has a drill hole of constant diameter running parallel to the surface, which was scanned in a point grid using an automatic scanner system. At each measuring position, a pitch-catch measurement was performed using a sampling rate of 2 MHz. The probes used are arrays consisting of a spatially separated receiving and in-phase transmitting unit. The transmitting and receiving sides each consist of 12 point-shaped single probes. These dry-point contact (DPC) probes operate according to the piezoelectric principle at nominal frequencies of 55 kHz (shear waves) and 100 kHz (longitudinal waves), respectively, and do not require a coupling medium. The measurements are performed with longitudinal (100 kHz) and transverse (55 kHz) waves with different geometric orientations of the probe on the measurement surface. The data presented in the article provide a valid source for evaluating reconstruction algorithms for imaging in the low-frequency ultrasound range.
EXAFS analysis of pure elements, binary and ternary equiatomic refractory alloys within the Nb-Zr-Ti-Hf-Ta system is performed at the Nb and Zr K-edges to analyze the evolution of the chemical local environment and the lattice distortion. A good mixing of the elements is found at the atomic scale. For some compounds, a distribution of distances between the central atom and its neighbors suggests a distortion of the structure. Finally, analysis of the Debye-Waller parameters shows some correlation with the lattice distortion parameter δ² and allows the static disorder in medium entropy alloys to be quantified experimentally.
In this talk, an overview of artificial intelligence/machine learning applications at the BAMline is given. In the first part, the use of neural networks for the quantification of XRF measurements and for the decoding of coded-aperture measurements is shown. Then it is shown how Gaussian processes and Bayesian statistics can be used to achieve an optimal alignment of the set-up and, more generally, for the optimization of measurements.
Gold is one of the seven metals already known in antiquity and was used from time immemorial as a medium of exchange and for the production of jewelry because of its luster and rarity. In addition, it is easy to work and largely resistant to chemical influences. Investigations of gold using synchrotron radiation excited X-ray fluorescence analysis are non-destructive and provide information about the chemical elements present in the sample under investigation. The investigations presented here at the BAMline focus on questions such as the origin, manufacturing process, and association of gold finds. The different questions are explained by a number of examples ranging from the Viking treasure from Hiddensee to the Nebra Sky Disk and finds from Egypt. The find from Bernstorf is discussed in detail. A Bayesian treatment of the authenticity is shown.
News from the BAMline
(2022)
Getting more efficient – The use of Bayesian optimization and Gaussian processes at the BAMline
(2022)
For more than 20 years, BAM has been operating the BAMline at the synchrotron BESSY II in Berlin-Adlershof. During this time, the complexity of the setup and the amount of data generated have multiplied. To increase effectiveness, and in preparation for BESSY III, algorithms from the field of machine learning are increasingly being used.
In this paper, several examples in the areas of beamline alignment and measurement time optimization based on Bayesian optimization (BO) with Gaussian processes (GP) are presented. BO is a method for finding the global optimum of a function using a probabilistic model represented by a GP. The advantage of this method is that it can handle high-dimensional problems, does not depend on the initial estimate, and also provides uncertainty estimates.
After a short introduction to BO and GP, the first example is the automatic alignment of our double multilayer monochromator (DMM). To achieve optimal performance, up to three linear and two angular motor positions have to be optimized. To achieve this with a grid scan, at least 100⁵ measurement points would be required. Even assuming that all positions can be aligned independently, 100 × 5 = 500 points are still necessary. We show that with BO and GP fewer than 100 points are sufficient to achieve equal or better results.
The second example is the optimization of measurement time in XRF scanning. Here we will show the advantage of the BO GP approach over point-by-point scanning. As can be seen in Fig. 1, the number of points required and thus the measurement time can be reduced by a factor of 50, while the loss in image quality is acceptable. The advantages and limitations of this approach will be discussed.
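The following minimal Python sketch illustrates the general BO/GP loop for a single-motor alignment problem; the objective function, kernel choice and parameter values are illustrative assumptions, not the BAMline implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern
    from scipy.stats import norm

    def beam_intensity(x):                        # hypothetical objective (detector readout)
        return np.exp(-0.5 * ((x - 1.3) / 0.4) ** 2) + 0.01 * np.random.randn()

    bounds = (-3.0, 3.0)
    X = np.random.uniform(*bounds, size=(3, 1))   # initial random motor positions
    y = np.array([beam_intensity(x[0]) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(15):
        gp.fit(X, y)
        cand = np.linspace(*bounds, 500).reshape(-1, 1)
        mu, sigma = gp.predict(cand, return_std=True)
        # expected-improvement acquisition (maximization)
        imp = mu - y.max()
        z = imp / np.maximum(sigma, 1e-9)
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, beam_intensity(x_next[0]))

    print("best motor position:", X[np.argmax(y)][0], "intensity:", y.max())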
"The reassessment of bridges continues to take great importance both nationally and internationally. A major challenge is to find computation models reflecting the actual properties of the considered structures sufficiently accurate. Besides regular inspections, the conduction of advanced measurements is suitable to generate reliable information about a structure to be assessed. Prior to incorporating measurement results in reassessment, the relevance, the trueness, and the precision of the measured information needs to be stated. On the one hand, the use of information whose quality has not been assessed can lead to errors with serious consequences. On the other, the measurement of irrelevant information is inefficient. Although the use of measured data in assessment is currently mostly unregulated, their appreciation in reliability analyses is beneficial since the built environment can be assessed more realistically. Utilizing NDT in reassessment has the potential to extend remaining lifetimes of a structure, save resources, and improve infrastructural availabilities. The power of judgment regarding the decision on the reliability of an existing structure can be increased.
In this contribution, an approach is outlined to process non-destructively gathered measurement data in a comparableway in order to include themeasured information in probabilistic reliability assessments of existing structures. An essential part is the calculation of measurement uncertainties. The effect of incorporating evaluated NDT-results is demonstrated by means of a prestressed concrete bridge and GPR measurements conducted on this bridge as a case-study. The bridge is assessed regarding SLS Decompression using the NDT-results."
Corrosion of concrete reinforcement is one of the major damage mechanisms affecting both the load-bearing capacity and the serviceability of reinforced concrete structures significantly. When externally discernible damages are observed during visual inspections on the structure, the extent of the damage inside the concrete is often already significant. Corrosion caused by carbonation often leads to severe discoloration of the surface or even large-area spalling of the concrete cover. In contrast, chloride-induced corrosion is usually difficult to observe visually but can cause much more serious damage in less time. The effect occurs locally and can lead to weakening of the cross-section of the reinforcement. This, in turn, can cause sudden structural collapses without prior notice.
Meanwhile, various non-destructive and minimally invasive testing methods are available to evaluate the resistance to penetration of corrosion-promoting pollutants and to detect active corrosion. In this paper, a bridge crossing the river Regen is used as a case study to demonstrate how the information obtained by applying different testing methods can be combined and evaluated in the context of structural reassessments. Both the results of the permeability testing and the electrical resistance measurement are considered, and active corrosion areas are localized using half-cell potential mapping combined with concrete cover measurement by the eddy current method and ground penetrating radar. The results are evaluated using drill cores, and in addition laser-induced breakdown spectroscopy was applied to obtain information about possible chloride ion transport into the concrete.
Approaches to the development of a (GUM) measurement model for the calculation of measurement uncertainties in the localization of tendons in concrete structures using the ultrasonic echo method were presented, including a demonstration.
Imaging of results in NDT-CE: Strength and limitations in the use of Radar vs. Ultrasonic Echo
(2022)
Presentation on behalf of the co-author specified in BAM-Publica.
Study on capabilities of volume methods (GPR, Ultrasonic Echo) regarding lateral and depth localisation of reinforcement and tendons in concrete components. Varied boundary conditions: spacing and diameter of both the near surface rebars of the mesh, and the reflectors of interest, as well as component thickness and concrete cover. Please find corresponding references on slide 2.
Mechanically stable structures with interconnected hierarchical porosity combine the benefits of both small and large pores, such as high surface area, pore volume, and good mass transport capabilities. Hence, lightweight micro-/meso-/macroporous monoliths are prepared from ordered mesoporous silica COK-12 by means of spark plasma sintering (SPS, S-sintering) and compared to conventionally (C-) sintered monoliths. A multi-scale model is developed to fit the small-angle X-ray scattering data and obtain information on the hexagonal lattice parameters, pore sizes from the macro to the micro range, as well as the dimensions of the silica population. For both sintering techniques, the overall mesoporosity, hexagonal pore ordering, and amorphous character are preserved. The monoliths' porosity (77–49%), mesopore size (6.2–5.2 nm), pore volume (0.50–0.22 cm³ g⁻¹), and specific surface area (451–180 m² g⁻¹) decrease with increasing processing temperature and pressure. While the difference in porosity is enhanced, the structural parameters of the C- and S-sintered monoliths largely converge at 900 °C, except for the mesopore size and lattice parameter, whose dimensions are more extensively preserved in the S-sintered monoliths, however coming along with larger deviations from the theoretical lattice. Their higher mechanical properties (biaxial strength up to 49 MPa, hardness of 724 MPa at HV 9.807 N) at comparable porosities and the ability to withstand ultrasonic treatment and dead-end filtration up to 7 bar allow the S-sintered monoliths to reach a high permeance (2634 L m⁻² h⁻¹ bar⁻¹), permeability (1.25 × 10⁻¹⁴ m²), and ability to reduce the chemical oxygen demand by 90% during filtration of a surfactant-stabilized oil-in-water emulsion, while indicating reasonable resistance towards fouling.
A versatile software package in the form of a Python extension, named CDEF (computing Debye’s scattering formula for extraordinary form factors), is proposed to calculate approximate scattering profiles of arbitrarily shaped nanoparticles for small-angle X-ray scattering (SAXS). CDEF generates a quasi-randomly distributed point cloud in the desired particle shape and then applies the open-source software DEBYER for efficient evaluation of Debye’s scattering formula to calculate the SAXS pattern (https://github.com/j-from-b/CDEF). If self-correlation of the scattering signal is not omitted, the quasi-random distribution provides faster convergence compared with a true-random distribution of the scatterers, especially at higher momentum transfer. The usage of the software is demonstrated for the evaluation of scattering data of Au nanocubes with rounded edges, which were measured at the four-crystal monochromator beamline of PTB at the synchrotron radiation facility BESSY II in Berlin. The implementation is fast enough to run on a single desktop computer and perform model fits within minutes. The accuracy of the method was analyzed by comparison with analytically known form factors and verified with another implementation, the SPONGE, based on a similar principle with fewer approximations. Additionally, the SPONGE coupled to McSAS3 allows one to retrieve information on the uncertainty of the size distribution using a Monte Carlo uncertainty estimation algorithm.
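A minimal Python sketch of the underlying Debye summation for a point-cloud particle model is given below; a plain pseudo-random point set and a spherical shape are used here for brevity, whereas CDEF itself uses quasi-random point clouds and the debyer backend.

    import numpy as np
    from scipy.spatial.distance import pdist

    def debye_intensity(points, q_values):
        # I(q) = N + 2 * sum_{i<j} sin(q r_ij) / (q r_ij), identical scatterers
        r = pdist(points)  # all pairwise distances
        n = len(points)
        return np.array([n + 2.0 * np.sum(np.sinc(q * r / np.pi)) for q in q_values])

    rng = np.random.default_rng(1)
    pts = rng.uniform(-5.0, 5.0, size=(1500, 3))
    pts = pts[np.linalg.norm(pts, axis=1) <= 5.0]  # ~800 points filling a 5 nm sphere

    q = np.linspace(0.05, 3.0, 150)       # nm^-1
    intensity = debye_intensity(pts, q)   # compare with the analytical sphere form factor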
Iron nitride (Fe3N) and iron carbide (Fe3C) nanoparticles can be prepared via sol−gel synthesis. While sol−gel methods are simple, it can be difficult to control the crystalline composition, i.e., to achieve a Rietveld-pure product. In a previous in situ synchrotron study of the sol−gel synthesis of Fe3N/Fe3C, we showed that the reaction proceeds as follows: Fe3O4 → FeOx → Fe3N → Fe3C. There was considerable overlap between the different phases, but we were unable to ascertain whether this was due to the experimental setup (side-on heating of a quartz capillary, which could lead to thermal gradients) or whether individual particle reactions proceed at different rates. In this paper, we use in situ wide- and small-angle X-ray scattering (WAXS and SAXS) to demonstrate that the overlapping phases are indeed due to variable reaction rates. While the initial oxide nanoparticles have a small range of diameters, the size range expands considerably and very rapidly during the oxide−nitride transition. This has implications for the isolation of Rietveld-pure Fe3N, and in an extensive laboratory study, we were indeed unable to isolate phase-pure Fe3N. However, we made the surprising discovery that Rietveld-pure Fe3C nanoparticles can be produced at 500 °C with a sufficient furnace dwell time. This is considerably lower than previous reports of the sol−gel synthesis of Fe3C nanoparticles.
The performance of the Monte Carlo (MC) algorithm for calibration-free LIBS was studied using the example of a simulated spectrum that mimics a metallurgical slag sample. The underlying model is that of a uniform, isothermal, and stationary plasma in local thermodynamic equilibrium.
Based on this model, the algorithm generates, in parallel, from hundreds of thousands to several million configurations of plasma parameters and the corresponding synthetic spectra. The parameters are temperature, plasma size, and the concentrations of species. They are iterated until a cost function, which quantifies the difference between the synthetic and the simulated slag spectrum, reaches its minimum. After finding the minimum, the concentrations of species are read from the model and compared to the certified values. The algorithm is parallelized on a graphics processing unit (GPU) to reduce computational time. The minimization of the cost function takes several minutes on an NVIDIA Tesla K40 GPU and depends on the number of elements to be iterated. The intrinsic accuracy of the MC calibration-free method is found to be around 1% for the eight elements tested. For a real experimental spectrum, however, the performance may turn out to be worse due to the idealized nature of the model, as well as incorrectly chosen experimental conditions. Factors influencing the performance of the method are discussed.
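A toy Python sketch of the Monte Carlo idea follows: draw random plasma parameter sets, synthesize a spectrum from each, and keep the set that minimizes a cost against the target spectrum. The two-line "plasma model", line data and parameter ranges below are crude illustrative stand-ins for the full GPU-parallelized LTE model used in the paper.

    import numpy as np

    wl = np.linspace(390.0, 400.0, 500)                    # nm, illustrative window
    LINES = {"Ca": (393.37, 3.15), "Al": (396.15, 3.14)}   # (wavelength nm, upper-level energy eV)

    def synth_spectrum(T_eV, conc):
        # Crude stand-in for the LTE model: Gaussian lines weighted by a Boltzmann factor.
        s = np.zeros_like(wl)
        for elem, (lam, e_up) in LINES.items():
            s += conc[elem] * np.exp(-e_up / T_eV) * np.exp(-0.5 * ((wl - lam) / 0.05) ** 2)
        return s

    target = synth_spectrum(0.9, {"Ca": 0.6, "Al": 0.4})   # plays the role of the measured spectrum

    rng = np.random.default_rng(0)
    best_cost, best_params = np.inf, None
    for _ in range(20_000):                                # random search over parameter configurations
        T = rng.uniform(0.5, 1.5)                          # plasma temperature, eV
        c_ca = rng.uniform(0.0, 1.0)
        trial = {"Ca": c_ca, "Al": 1.0 - c_ca}
        cost = np.sum((synth_spectrum(T, trial) - target) ** 2)
        if cost < best_cost:
            best_cost, best_params = cost, (T, trial)

    print("recovered parameters:", best_params)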
The deposition of titanium oxides during titanium laser ablation in air has been experimentally and numerically investigated. A titanium sample was irradiated by nanosecond pulses from an Yb-fiber laser, with the beam scanned across the sample surface for its texturing. As a result, a hierarchical structure was observed, consisting of a microrelief formed by the laser ablation and a nanoporous coating formed by reverse deposition from the laser-induced plasma plume. The chemical and phase composition of the nanoporous coating, as well as the morphology and structure of the surface, were studied using scanning electron microscopy, atomic force microscopy, and X-ray microanalysis. It was found that the deposit consists mostly of porous TiO2 with 26% porosity and inclusions of TiO, Ti2O3, and Ti2O3N. Optical emission spectroscopy was used to monitor the plasma composition and estimate the effective temperature of the plasma plume. A chemical-hydrodynamic model of the laser-induced plasma was developed to gain deeper insight into the deposition process. The model predicts that condensed titanium oxides, formed in peripheral plasma zones, gradually accumulate on the surface during the plasma plume evolution. A satisfactory agreement between the experimental and calculated chemical composition of the plasma plume, as well as between the experimental and calculated composition and thickness of the deposited film, was demonstrated. This allows the cautious conclusion that the formation of condensed oxides in the plasma and their subsequent deposition onto the ablation surface are among the key mechanisms of formation of porous surface films.
The Boltzmann plot is one of the most widely used methods for determining the temperature in different types of laboratory plasmas. It operates on the logarithm of a dimensional argument, which implicitly assumes that consistent physical units are used. In many works using the Boltzmann method, there is no analysis of the dimension of this argument, which may be the cause of a potential error. This technical note offers a brief description of the method and shows how to use physical units correctly with transcendental functions such as the logarithm.
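For reference, the Boltzmann plot in its usual form reads

    \[
      \ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k A_{ki}}\right)
        = -\frac{E_k}{k_B T} + C ,
    \]

where I_{ki} is the measured line intensity, \lambda_{ki} the wavelength, g_k and E_k the statistical weight and energy of the upper level, A_{ki} the transition probability, and C an intercept containing the emitter number density and partition function. Because the argument of the logarithm is dimensional, it should in practice be divided by a reference quantity of the same units; since \ln(ax) = \ln a + \ln x, this only shifts the intercept C and leaves the slope, and hence the temperature, unchanged. This remark is the standard textbook argument, not a summary of the technical note's specific treatment.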
Efforts to open up science are increasing, with data being made more transparent and more easily available, including the data reduction and evaluation procedures and code. A strong foundation for this is the F.A.I.R. principle, building on Findability, Accessibility, Interoperability, and Reusability of digital assets, complemented by the letter T for trustworthiness of the data. Here, we have used data which was made available by the Institut Laue-Langevin and can be identified using a DOI, to follow the F.A.I.R.+T. principle in extracting, evaluating and publishing triple-axis data recorded at IN3.
A brief introduction is given to our data collection and organization procedure, and to why we have settled on the HDF5-based NeXus format for describing experimental data.
The link between NeXus and the SciCat data catalog is also presented, showing how the NeXus metadata is automatically added as searchable metadata in the catalog.
This course will provide an introduction to plasma diagnostic techniques. The major focus of the course will be on the discussions of the practical procedures as well as the underlying physical principles for the measurements of plasma fundamental characteristics (e.g., temperatures and electron number density). Particular emphasis will be placed on laser induced plasma–atomic emission spectrometry, but other analytical plasmas will also be used as examples when appropriate. Selected examples on how one can manipulate the operating conditions of the plasma source, based on the results of plasma diagnostic measurements, to improve its performance used for spectrochemical analysis will also be covered. Topics to be covered include thermal equilibrium, line profiles, temperatures, electron densities, excitation processes, temporal and spatial resolution.
Additive manufacturing by laser metal deposition (LMD) requires continuous online monitoring to ensure quality of printed parts. Optical emission spectroscopy (OES) is proposed for the online detection of printing defects by monitoring minute variations in the temperature of a printed spot during laser scan. A two-lens optical system is attached to a moving laser head and focused on a molten pool created on a substrate during LMD. The light emitted by the pool is collected by an ultraviolet–visible (UV–vis) spectrometer and processed.
Two metrics are used to monitor variations in the surface temperature: the spectrally integrated emission intensity and the correlation coefficient. The variations in temperature are introduced by artificial defects, shallow grooves, and holes of various widths and diameters carved into the substrate surface. The metrics show sufficient sensitivity for revealing the surface defects, except for the smallest holes with sub-millimeter diameters. Additionally, numerical simulations are carried out for the detection of emission in the UV–vis and near-infrared (NIR) spectral ranges at various surface temperatures. It is concluded that both metrics perform better in the NIR range. In general, this work demonstrates that spectrally resolved OES is well suited for monitoring surface defects during 3D metal printing.
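For illustration, a minimal sketch of how the two monitoring metrics can be computed from a measured spectrum is given below; the function and variable names are made up, and this is not the processing pipeline used in the paper.

```python
import numpy as np

def oes_metrics(spectrum, reference):
    """Two simple monitoring metrics for a measured emission spectrum.

    spectrum, reference : 1-D intensity arrays on a common wavelength grid;
    'reference' is a spectrum recorded on a defect-free region.
    Returns the spectrally integrated intensity and the Pearson correlation
    coefficient with the reference spectrum.
    """
    integrated = float(np.trapz(spectrum))                # spectrally integrated emission
    corr = float(np.corrcoef(spectrum, reference)[0, 1])  # similarity to the reference
    return integrated, corr

# Usage idea: flag a position along the laser track if either metric deviates
# strongly from its running baseline (thresholds are process-specific).
```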
Laser induced dielectric breakdown (LIDB) on a surface of solid Mo in an H2/BF3 atmosphere at 30-760 Torr and in a gaseous MoF6/H2/BF3 mixture at 760 Torr is tested for the synthesis and deposition of superhard molybdenum borides, which are needed in many areas of industry and technology. The emission spectra of the plasma and the dynamics of the gas discharge near the substrate are investigated. A comparative analysis of the gas mixture before and after exposure to the LIDB plasma is carried out using IR spectroscopy. The conditions for the formation of molybdenum borides are determined. A thermodynamic analysis of the MoF6/H2/BF3 and Mo/H2/BF3 systems is carried out to determine the temperature range for the formation of molybdenum borides and to establish the main chemical reactions responsible for their formation. Deposits containing MoB and MoB2 phases are obtained. For the MoF6/H2/BF3 mixture, the deposit exhibits an amorphous layered structure, which contains 19.15 wt% F, 30.45% O, and 0.8% Si. For the Mo/H2/BF3 system at pressures of 30 and 160 Torr, molybdenum boride nanopowder is produced with a characteristic grain size of 100 nm. At pressures above 160 Torr, Mo nanopowder with a grain size <30 nm is obtained.
Equilibrium model of titanium laser induced plasma in air with reverse deposition of titanium oxides
(2022)
A chemical-hydrodynamic model of laser-induced plasma is developed to study the deposition of titanium oxides from a titanium laser-induced plasma onto the titanium target surface. The model is relevant to the texturing and coating of titanium bone implants, which is done by scanning the ablation laser across implant surfaces. This procedure improves the biocompatibility and durability of the implants. The model considers plasma chemical reactions, the formation of condensed species inside the plasma plume, and the deposition and accumulation of these species on the ablation surface. The chemical part of the model is based on minimization of the Gibbs free energy of the chemical system; it is used to calculate the chemical composition of the plasma. The hydrodynamic part uses 2D fluid-dynamic equations that model a 3D axisymmetric plasma plume and accounts for mass and energy exchange between the plasma and the surface. The initial parameters for the model are inferred from experiment.
The model shows that condensed titanium oxides, mostly TiO2, form in a peripheral plasma zone and gradually adhere to the surface during the plasma plume evolution. The model predicts the major component and thickness of the deposit and can be applied for the optimization of experiments aimed at surface modification.
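In generic terms, the equilibrium composition in such a model follows from constrained minimization of the Gibbs free energy (standard formulation; the specific implementation in this work may differ):

```latex
\min_{\{n_j \ge 0\}} \; G(T,p,\{n_j\}) \;=\; \sum_j n_j\,\mu_j(T,p)
\qquad \text{subject to} \qquad \sum_j a_{ij}\, n_j \;=\; b_i \;\;\forall i ,
```

where n_j are the species amounts, mu_j their chemical potentials, a_ij the number of atoms of element i in species j, and b_i the total elemental abundances in the plume.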
Laser-induced plasmas are widely used in many areas of science and technology; examples include spectrochemical analysis, thin film deposition, and material processing. Several topics will be addressed. First, general phenomenology of laser-induced plasmas will be discussed. Then, a chemical model will be presented based on a coupled solution of Navier-Stokes, state, radiative transfer, material transport, and chemical equations. Results of computer simulations for several chemical systems will be shown and compared to experimental observations obtained by optical imaging, spectroscopy, and tomography. The latter diagnostic tools will also be briefly discussed. Finally, a prospective application of laser-induced plasma and plasma modeling will be illustrated on the example of chemical vapor deposition of molybdenum borides and micro processing and coating of titanium dental implants.
A brief introduction will be given on modeling chemical reactions in laser induced plasmas using stoichiometric and non-stoichiometric approaches. Several applications will be considered, which can benefit from such modeling. Those include plasma enhanced chemical vapor deposition (PECVD), surface modification and surface coating, and molecular analysis by LIBS. Each application will be illustrated by simulations of relevant chemical systems. For PECVD, chemical systems are BCl3/H2/Ar, BF3/H2/Ar, BCl3/BF3, Mo/BF3/H2; for surface modification/coating it is Ti/air; for molecular LIBS they are CaCO3/Ar, Ca(OH)2/Ar, and CaCl2/Ar. Advantages and shortcomings of equilibrium chemical hydrodynamic models of laser induced plasmas will be discussed.
In coastal systems, pollutants such as pharmaceutical drugs exert changes from the molecular to the organism level in marine bivalves. Besides pollutants, coastal systems are prone to changes in environmental parameters, such as altered salinity values as a result of climate change. Together, these stressors (pharmaceutical drugs and salinity changes) can pose different threats than each stressor acting individually; for example, salinity can change the physicochemical properties of the drugs and/or the sensitivity of the organisms to them. However, limited information is available on this subject, with variable results, and for this reason this study aimed to evaluate the impacts of salinity changes (15, 25 and 35) on the effects of the antiepileptic carbamazepine (CBZ, 1 µg/L) and the antihistamine cetirizine (CTZ, 0.6 µg/L), acting individually and combined (CBZ + CTZ), in the edible clam Ruditapes philippinarum. After 28 days of exposure, drug concentrations, bioconcentration factors and biochemical parameters related to the clams' metabolic capacity and oxidative stress were evaluated. The results showed that clams under low salinity underwent more changes in metabolic, antioxidant and biotransformation activities compared with the remaining salinities under study. However, limited impacts were observed when comparing drug effects at low salinity. Indeed, CTZ and CBZ + CTZ under high salinity (salinity 35) appeared to be the worst exposure conditions for the clams, since they caused higher levels of cellular damage. It stands out that salinity changes altered the impact of pharmaceutical drugs on marine bivalves.
In coastal systems, organisms are exposed to a multitude of stressors whose interactions and effects are poorly studied. Pharmaceutical drugs and climate change consequences, such as lowered pH, are examples of stressors affecting marine organisms such as bivalves. Although a vast literature is available on the effects of these stressors when acting individually, very limited information exists on the impacts that the combination of both can have on marine bivalves. For this reason, this study aimed to evaluate the impacts of a simulated ocean acidification scenario (control pH 8.0; lowered pH 7.6) on the effects of the antiepileptic carbamazepine (CBZ, 1 μg/L) and the antihistamine cetirizine (CTZ, 0.6 μg/L), acting individually and combined (CBZ + CTZ), on the edible clam Ruditapes philippinarum. After 28 days of exposure, drug concentrations, bioconcentration factors and biochemical parameters related to the clams' metabolic capacity and oxidative stress were evaluated. The results showed that R. philippinarum clams responded differently to the pharmaceutical drugs depending on the pH tested, which influenced both bioconcentration and biological responses. In general, combined drug treatments showed fewer impacts than the drugs acting alone, and acidification seemed to activate, to a greater extent, elimination processes that were not activated under control pH. Also, lowered pH per se exerted negative impacts (e.g., cellular damage) on R. philippinarum, and the combination with pharmaceutical drugs did not enhance the toxicity.
The outbreak of SARS-CoV-2 in December 2019 led to a worldwide, still ongoing pandemic. Since then, several so-called waves of SARS-CoV-2 infections, periods with a high and rapidly rising number of new infections, have occurred all over the world. Classic surveillance approaches are hardly applicable, and undetected cases cannot be covered by them. Wastewater-based epidemiology (WBE) has proven to be a reliable tool for the prediction of new SARS-CoV-2 infection waves, due to the discharge of virus particles in the fecal shedding of infectious people. Until now, the polymerase chain reaction (PCR) has been used as the analytical tool for monitoring SARS-CoV-2 in wastewater. Even though PCR is a highly sensitive analytical tool, it presents several disadvantages, such as the need for trained personnel and specific technical equipment, as well as a demanding procedure. An analytical tool to which these disadvantages do not apply are immunoassays. In this work, a sandwich enzyme-linked immunosorbent assay (ELISA), with the capture antibodies immobilized on the surface of a microtiter plate (MTP), as well as a sandwich magnetic bead-based assay (MBBA), with the capture antibodies immobilized on the surface of magnetic beads (MBs), targeting the SARS-CoV-2 N-protein, were developed and optimized. Both assay formats were performed with colorimetric and chemiluminescent detection. The developed assay is composed of the two monoclonal antibodies (mAb) AH2 and DE6 - the latter biotinylated in the course of the work - which bind to two different epitopes of the antigen N-protein. As tracer, Neutravidin-HRP was used, which binds to the mAb DE6-biotin through the interaction of the Neutravidin with the biotin. The assay development and optimization procedure included the investigation of the surface saturation with the mAb AH2, the concentration and dilution of the mAb DE6-biotin and Neutravidin-HRP, the choice of MBs, the coating and dilution buffers, and the colorimetric and chemiluminescent substrates. For the fully optimized colorimetric ELISA, a test midpoint x0 of 388 μg/L was obtained, for the chemiluminescent ELISA 371 μg/L, for the colorimetric MBBA 251 μg/L and for the chemiluminescent MBBA 243 μg/L. Validation of the colorimetric MBBA was done by measuring three wastewater samples collected at the wastewater treatment plant (WWTP) Potsdam. While no N-protein could be detected in the samples, spiking the wastewater samples with known concentrations of the N-protein yielded back-calculated concentrations that were 10 to 18 times lower, which can be attributed to matrix effects of the wastewater sample. Besides the matrix effects, several other reasons exist why no N-protein could be determined in the samples. Because of that, further investigation of the handling and measurement of the wastewater samples, as well as improvement of the assay sensitivity through further optimization steps or an exchange of the antibodies, is still necessary.
Nebivolol (NEB), a β-blocker frequently used to treat cardiovascular diseases, has been widely detected in aquatic environments and can be degraded under exposure to UV radiation, leading to the formation of certain transformation products (UV-TPs). Thus, the toxic effects of NEB and its UV-TPs on aquatic organisms are of great importance for aquatic ecosystems. In the present study, the degradation pathway of NEB under UV radiation was investigated. Subsequently, zebrafish embryos/larvae were used to assess the median lethal concentration (LC50) of NEB, and to clarify the sub-lethal effects of NEB and its UV-TPs for the first time. It was found that UV radiation could reduce the toxic effects of NEB on the early development of zebrafish. Transcriptomic analysis identified the top 20 enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways in zebrafish larvae exposed to NEB, most of which were associated with the antioxidant, nervous, and immune systems. The number of differentially expressed genes (DEGs) in these pathways was reduced after UV radiation. Furthermore, the analysis of protein biomarkers, including CAT and GST (antioxidant response), AChE and ACh (neurotoxicity), and CRP and LYS (immune response), revealed that NEB exposure reduced the activity of these biomarkers, whereas UV radiation could alleviate the effects. The present study provides initial insights into the mechanisms underlying the toxic effects of NEB and the detoxification effects of UV radiation on the early development of zebrafish. It highlights the necessity of considering the toxicity of UV-TPs when evaluating the toxicity of emerging pollutants in aquatic systems.
ADAMTS4-specific MR-probe to assess aortic aneurysms in vivo using synthetic peptide libraries
(2022)
The incidence of abdominal aortic aneurysms (AAAs) has substantially increased during the last 20 years, and their rupture remains the third most common cause of sudden death in the cardiovascular field after myocardial infarction and stroke. The only established clinical parameter to assess AAAs is the aneurysm size. Novel biomarkers are needed to improve the assessment of the risk of rupture. ADAMTS4 (A Disintegrin And Metalloproteinase with ThromboSpondin motifs 4) is a proteoglycan-cleaving enzyme strongly upregulated in the unstable course of AAAs. By screening a one-bead-one-compound library against ADAMTS4, a low-molecular-weight cyclic peptide with favorable properties for in vivo molecular magnetic resonance imaging applications is discovered. After identification and characterization, its potential is evaluated in an AAA mouse model. The ADAMTS4-specific probe enables the in vivo imaging-based prediction of aneurysm expansion and rupture.
The 87Sr/86Sr isotope ratio can, in principle, be used for provenancing of cement. However, while commercial cements consist of multiple components, no detailed investigation into their individual 87Sr/86Sr isotope ratios or their influence on the integral 87Sr/86Sr isotope ratio of the resulting cement was conducted previously. Therefore, the present study aimed at determining and comparing the conventional 87Sr/86Sr isotope ratios of a diverse set of Portland cements and their corresponding Portland clinkers, the major component of these cements. Two approaches to remove the additives from the cements, i.e. to measure the conventional 87Sr/86Sr isotopic fingerprint of the clinker only, were tested, namely treatment with a potassium hydroxide/sucrose solution and sieving on an 11-µm sieve. Dissolution in concentrated hydrochloric acid/nitric acid and in diluted nitric acid was employed to determine the 87Sr/86Sr isotope ratios of the cements and the individual clinkers. The aim was to find the most appropriate sample preparation procedure for cement provenancing, and the selection was realised by comparing the 87Sr/86Sr isotope ratios of differently treated cements with those of the corresponding clinkers. None of the methods to separate the clinkers from the cements proved to be satisfactory. However, it was found that the 87Sr/86Sr isotope ratios of clinker and cement generally corresponded, meaning that the latter can be used as a proxy for the clinker 87Sr/86Sr isotope ratio. Finally, the concentrated hydrochloric acid/nitric acid dissolution method was found to be the most suitable sample preparation method for the cements; it is thus recommended for 87Sr/86Sr isotope analyses for cement provenancing.
In this presentation, I will give a brief overview of my personal experience with laser induced plasma (LIP). I will start from my and my colleagues' early works, where we used LIP as an atomic reservoir for laser induced fluorescence (LIF). We applied LIP-LIF to the sensitive detection of trace elements in various materials and demonstrated that under certain conditions the technique can even be used for isotope analysis. Next, I will discuss the application of LIP spectroscopy, i.e., LIBS, to material identification, which nowadays constitutes one of the best applications of this technique. In those early days, we used correlation analysis for spectra processing; it has now been replaced by more powerful chemometric methods. Further, I will dwell on our efforts in modeling LIP, which we first intended for improving the quality of spectroscopic analysis and later extended to non-spectroscopic fields such as chemical vapor deposition and surface structuring. We developed a version of calibration-free LIBS, in which we iterated model-generated spectra until a close match was achieved between experimental and synthetic spectra to determine concentrations. Next, I will briefly overview our recent developments in plasma modeling that include plasma chemistry. This was important in view of the widening application of LIBS as a molecular technique. I will also address several plasma diagnostics, e.g., Radon transform tomography, which we developed to gain more insight into LIP and which was helpful for both analytical spectroscopy and modeling. Finally, I will mention several exotic applications of LIP, such as LIP-based lasers and chemical reactors, to illustrate the truly multifaceted character of laser induced plasma and the usefulness of its study for many fields of science.
Laser metal deposition is a rapidly evolving method for additive manufacturing that combines high performance and a simplified production routine. The quality of production depends on the instrumental design and operational parameters, which require constant control during the process. In this work, the feasibility of using optical spectroscopy as a control method is studied via modeling and experiment. A simplified thermal model is developed based on the time-dependent diffusion-conduction heat equation and the geometrical light collection into the detection optics. Intense light emitted by a laser-heated spot moving across a sample surface is collected and processed to yield the temperature and other temperature-related parameters. In the presence of surface defects, the temperature field is distorted in a specific manner that depends on the shape and size of the defect. Optical signals produced by such distorted temperature fields are simulated and verified experimentally using a 3D metal printer and a sample with artificially carved defects. Three quantities are tested as possible metrics for monitoring the process: temperature, integral intensity, and correlation coefficient. The shapes of the simulated signals qualitatively agree with the experimental signals; this allows for the cautious inference that optical spectroscopy can detect surface defects and, possibly, predict their character, e.g., internal or protruding.
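The governing equation meant here is, in generic form, the heat conduction equation with a moving source term (simplified notation, not the paper's exact model):

```latex
\rho\, c_p\, \frac{\partial T}{\partial t}
  \;=\; \nabla\!\cdot\!\bigl(k\,\nabla T\bigr) \;+\; Q(\mathbf{r} - \mathbf{v}t,\, t),
```

with rho the density, c_p the specific heat capacity, k the thermal conductivity, and Q the heat input of the laser spot moving with velocity v across the surface; a surface defect locally perturbs the effective k and thus the temperature field seen by the detection optics.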
Back Deposition of Titanium Oxides under Laser Ablation of Titanium: Simulation and Experiment
(2022)
Titanium is widely used in medicine for implants and prostheses, thanks to its high biocompatibility, good mechanical properties, and high corrosion resistance. Pure titanium, however, has low wear resistance and may release metallic titanium into surrounding tissues. Structuring and coating its surface with oxide layers are necessary for high wear resistance and improved biocompatibility. In this work, a combination of theoretical and experimental methods was used to study processes responsible for deposition of titanium oxides during ablation of titanium in air.
The deposition process was modeled via the Navier-Stokes equations, which accounted for material removal and accumulation of the deposit on the ablation surface. The chemical part was based on the equilibrium model embedded into the hydrodynamic code. Simulations showed that the most active zone of production of condensed titanium oxides was at the plasma periphery, whereas a zone of strong condensation of metallic titanium was above the molten pool.
In the experiment, a pulsed Yb fiber laser was scanned across a titanium surface. The temperature and composition of the plasma were inferred from plasma emission spectra. The post-ablation surface was analyzed by SEM, TEM, STEM, AFM, and XRD.
The developed model reproduced the main features of the experimental data well. It was concluded that the deposition of condensed metal oxides from the plasma is a principal mechanism of formation of the nanoporous oxide layer on the metal surface. The method of surface structuring and modification by nanosecond laser ablation can be developed into a useful technology that may find applications in medicine, photonics, and other areas.
Concrete structures often show severe damage during their lifetime. One such damage mechanism is pitting corrosion of the steel reinforcement caused by chloride ingress into the porous concrete structure. Laser-induced breakdown spectroscopy (LIBS) is a promising method in civil engineering, used for the detection of chlorine in concrete structures in addition to conventional methods of wet chemistry. To assess LIBS as a trustworthy analytical technique, its accuracy and robustness are carefully tested. The presentation will outline the results of an interlaboratory comparison of chlorine quantification in cement paste samples, which was carried out by 12 laboratories in 10 countries. Two sets of samples, with chloride contents ranging from 0.06 to 1.95 wt.% in the training set and 0.23 to 1.51 wt.% in the test sample set (“unknowns”), with additional variations in the type of cement and chlorine source (salt type), were sent to the laboratories. The overall result demonstrates that LIBS is suitable for the quantification of the investigated sample compositions: the average relative bias was mostly below 15 %. Considering that the laboratories did not receive instructions on how to perform the analysis or how to process the data, the results can be evaluated as a true status quo of the LIBS technique for this type of analysis.
How much do we, the small-angle scatterers, influence the results of an investigation? What uncertainty do we add by our human diversity in thoughts and approaches, and is this significant compared to the uncertainty from the instrumental measurement factors?
After our previous Round Robin on data collection, we know that many laboratories can collect reasonably consistent small-angle scattering data on easy samples [1]. To investigate the next, human component, we compiled four existing datasets from globular (roughly spherical) scatterers, each exhibiting a common complication, and asked the participants to apply their usual methods and toolset to the quantification of the results (https://lookingatnothing.com/index.php/archives/3274).
The datasets were accompanied by a modicum of supporting information to help with the interpretation of the data, similar to what we normally receive from our collaborators. More than 30 participants reported back with volume fractions, mean sizes and size distribution widths of the particle populations in the samples, as well as information on their self-assessed level of experience and years in the field.
While the Round Robin is still underway (until the 25th of April, 2022), the initial results already show a significant spread. Some of this spread is due to the variety in interpretation of the meaning of the requested parameters, as well as simple human errors, both of which are easy to correct for. Nevertheless, even after correcting for these differences in understanding, a significant spread remains. This highlights an urgent challenge to our community: how can we better help ourselves and our colleagues obtain more reliable results; how could we take the human factor out of the equation, so to speak?
In this talk, we will introduce the four datasets, their origins and challenges. Hot off the press, we will summarize the anonymized, quantified results of the Data Analysis Round Robin. (Incidentally, we will also see if a correlation exists between experience and proximity of the result to the median). Lastly, potential avenues for improving our field will be offered based on the findings, ranging from low-effort yet somehow controversial improvements, to high-effort foundational considerations.
Measuring an X-ray scattering pattern is relatively easy, but measuring a steady stream of high-quality, useful patterns requires significant effort and good laboratory organization.
Such laboratory organization can help address the reproducibility crisis in science, and easily multiply the scientific output of a laboratory, while greatly elevating the quality of the measurements. We have demonstrated this for small- and wide-angle X-ray scattering in the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration).
With the MOUSE, we have combined a comprehensive and highly automated laboratory workflow with a heavily modified X-ray scattering instrument. This combination allows us to collect fully traceable scattering data, within a well-documented, FAIR-compliant data flow (akin to what is found at the more automated synchrotron beamlines). With two full-time researchers, our lab collects and interprets thousands of datasets, on hundreds of samples, for dozens of projects per year, supporting many users along the entire process from sample selection and preparation, to the analysis of the resulting data.
This talk will briefly introduce the foundations of X-ray scattering, present the MOUSE project, and highlight the proven utility of the methodology for materials science. Upgrades to the methodology will also be discussed, as well as possible avenues for transferring this holistic methodology to other instruments.
Surface modification of titanium by laser ablation is investigated theoretically and experimentally. The modification consists in texturing the surface and redeposition of chemically transformed material from the ablation plasma. The redeposition is driven by the hydrodynamic flow in the plasma. Such surface modification improves the biocompatibility of titanium implants.
An extremely brief summary of what X-ray scattering can do for you (X-ray scattering encompasses small-angle X-ray scattering (SAXS), and wide-angle X-ray scattering (WAXS/XRD), amongst others). See my other videos for more detailed explanations on sample selection, data correction, data analysis, etc.
Probe diagnostics is used to determine the electron temperature and electron number density in a low-pressure inductively coupled plasma (ICP) ignited in a mixture of SiF4, Ar and H2. Emission spectra of mixtures with different stoichiometry of the components are investigated and the electron density distribution function (EDDF) is estimated. The optimal conditions for high conversion of SiF4 into Si are found by studying the dependence of the silicon yield upon the ratio of the reagents. The maximum silicon yield achieved under the optimal conditions is 85%. Based on the analysis of IR and MS spectra of the exhaust gases, 5% of the initial SiF4 converts into volatile fluorosilanes. The production rate of Si is 0.9 g/h at an energy consumption of 0.56 kWh/g.
Modeling is an important tool for understanding a physical phenomenon. It helps to interpret the results of experiments and to optimize experimental parameters for obtaining a desirable result. Modeling laser-induced plasma is beneficial for many scientific and industrial fields, e.g., analytical chemistry, pulsed laser deposition, plasma-enhanced chemical vapor deposition, laser welding, additive manufacturing, etc. In this presentation, a personal account of the development of a physical model of laser-induced plasma will be given in chronological sequence, starting from the early 2000s up to the present.
Over time, the model evolved from its simple analytical form, which described plasma emission spectra, to its current numerical form, which describes plasma dynamics, chemistry, and interaction with a substrate surface. Several examples will be given of the application of the model to practical problems such as spectroscopic chemical analysis, plasma-enhanced chemical vapor deposition, and surface modification by laser ablation.
Calibration-free methods in laser-induced breakdown spectroscopy, CF LIBS, serve as an alternative to calibration-based LIBS techniques. Their major advantage is the ability to perform fast chemical analysis in situations where matrix-matched standards are not readily available (as, e.g., in the analysis of biological materials and in remote analysis) or where the amount of sample is limited. Their main applications are in industry, geology, biology, archeology, and even space exploration. This chapter overviews the principle of operation and the performance of CF LIBS techniques.
Direct laser writing is growing into an industry-ready manufacturing platform. A stable quality infrastructure is needed for robust processing. However, flaws, errors and impurities are neither described nor communicated within the two-photon polymerization community.
In contrast, additive manufacturing errors in metal manufacturing are described thoroughly.
Wide-range X-ray scattering datasets and analyses for all samples described in the 2020 publication "Gold and silver dichroic nanocomposite in the quest for 3D printing the Lycurgus cup". These datasets are composed by combining multiple small-angle X-ray scattering and wide-angle X-ray scattering curves into a single dataset. They have been analyzed using McSAS to extract polydispersities and volume fractions. They have been collected using the MOUSE project (instrument and methodology).
X-ray fluorescence imaging is a well-established tool in materials characterization. In this work, we present the adaptation of coded aperture imaging to full-field X-ray fluorescence imaging at the synchrotron. Coded aperture imaging has its origins in astrophysics and offers several advantages: coded apertures are relatively easy to fabricate, are achromatic, and allow a high photon throughput and a high angular acceptance. Coded aperture imaging is a two-step process, consisting of the measurement and a reconstruction step. Different programs have been written for the ray tracing/forward projection and for the reconstruction. Experiments with a coded aperture in combination with a Color X-ray Camera and an energy-dispersive area detector have been conducted at the BAMline. Measured samples were successfully reconstructed and gave a 9.1-fold increase in count rate compared to a polycapillary optic.
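To give an idea of the reconstruction step, a minimal sketch of correlation-based decoding for coded-aperture data is shown below; it is a generic textbook-style scheme, not the reconstruction programs developed in this work, and all names are made up.

```python
import numpy as np

def reconstruct(detector_image, mask):
    """Cross-correlation decoding for coded-aperture imaging (illustrative).

    detector_image : 2-D recorded image, modelled as object (*) mask + noise.
    mask           : 2-D binary aperture pattern, same shape as the image.
    Returns an object estimate obtained by circular cross-correlation with a
    balanced decoding pattern (open pixels +1, opaque pixels weighted so the
    decoder sums to zero, which suppresses the flat background).
    """
    open_frac = mask.mean()
    decoder = np.where(mask > 0, 1.0, -open_frac / (1.0 - open_frac))
    # circular cross-correlation evaluated in the Fourier domain
    rec = np.real(np.fft.ifft2(np.fft.fft2(detector_image) *
                               np.conj(np.fft.fft2(decoder))))
    return np.fft.fftshift(rec)
```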
Components used in structural and high-temperature applications generally face significant challenges with respect to oxidation behaviour and metalworking processes. In most cases, harsh environmental conditions cause materials to degrade due to corrosion. To thoroughly investigate the corrosion processes and to determine the oxidation states of metal components within the reaction products, special analytical tools are needed. Grazing exit X-ray fluorescence (GEXRF) offers a non-destructive way to collect this information in the sub-micrometre depth range.
In order to obtain structural information, such as oxidation states or the atomic/molecular geometric arrangement, the GEXRF approach can also be combined with X-ray absorption spectroscopy (XAS). The position- and energy-sensitive detector, with a 264 x 264 pixel detector area, provides information on the signal emitted from the sample as a function of the emission angle and thus allows depth-sensitive analysis. Furthermore, data collected while scanning the incidence energy, which can be controlled with a resolution of 0.5 eV, provide XANES spectra to determine oxidation states.
We address the feasibility of our setup and present a new optimization procedure (Bayesian optimization and Gaussian regression) to decrease the measuring time. The results are based on a conceptual study of a reference sample (a 300 nm Cr-oxide layer on a 500 nm Cr layer on a Si wafer).
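As an indication of the kind of optimization loop meant here, a minimal sketch of Gaussian-process-based Bayesian optimization is given below; it is purely illustrative, all names are made up, and it is not the procedure implemented at the beamline.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    """Expected improvement (for maximization) from GP mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_next(x_obs, y_obs, candidates):
    """Fit a GP to the measurements made so far and propose the next position."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(np.atleast_2d(x_obs).T, y_obs)
    mu, sigma = gp.predict(np.atleast_2d(candidates).T, return_std=True)
    ei = expected_improvement(mu, sigma, np.max(y_obs))
    return candidates[int(np.argmax(ei))]

# Usage idea: start with a few coarse measurements, then let suggest_next pick
# where to measure next until the acquisition value becomes negligible,
# spending counting time only where it is most informative.
```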
Time is the most valuable parameter in synchrotron experiments. Beamtime is costly, and some experiments suffer from low efficiency due to poor counting statistics. With today's high processing power, long experiments can be run in a shorter time and with increased efficiency. With optimization algorithms, the time needed for "counting-hungry" experiments can be reduced by a factor of 10. Our project is to develop a new method to analyze the chemical properties of complex materials non-destructively and efficiently, such as high-entropy materials subjected to corrosion processes. A better understanding of the corrosion process will help to develop corrosion-resistant materials and reduce the cost of corrosion damage, which averages around 2.5 trillion USD annually.
Our aim is to develop a simple and inexpensive method for full-field X-ray fluorescence imaging. We combine an energy-dispersive array detector with a coded aperture to obtain high-resolution images. To obtain the information from the recorded image, a reconstruction step is necessary. The reconstruction methods we have developed were tested on simulated data and then applied to experimental data. The first tests were carried out at the BAMline @BESSY II. This method enables the simultaneous detection of multiple elements, which is important, e.g., in the field of catalysis.
The acquisition and appropriate processing of relevant information about the considered system remains a major challenge in the assessment of existing structures. Both the values and the validity of computed results, such as failure probabilities, essentially depend on the quantity and quality of the incorporated knowledge. One source of information is on-site measurements of structural or material characteristics to be modeled as basic variables in reliability assessment. The explicit use of (quantitative) measurement results in assessment requires the quantification of the quality of the measured information, i.e., the uncertainty associated with the information acquisition and processing. This uncertainty can be referred to as measurement uncertainty. Another crucial aspect is to ensure the comparability of the measurement results. This contribution attempts to outline the necessity and the advantages of measurement uncertainty calculations in the modeling of measurement data-based random variables to be included in reliability assessment. It is shown how measured data representing time-invariant characteristics, in this case non-destructively measured inner geometrical dimensions, can be transferred into measurement results that are both comparable and quality-evaluated. The calculations are based on the rules provided in the Guide to the Expression of Uncertainty in Measurement (GUM). The GUM framework is internationally accepted in metrology and can serve as a starting point for the appropriate processing of measured data to be used in assessment. In conclusion, the effects of incorporating the non-destructively measured data into reliability analysis are presented using a prestressed concrete bridge as a case study.
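For reference, the central GUM relation underlying such calculations is the law of propagation of uncertainty, here stated for uncorrelated input quantities (the general case adds covariance terms):

```latex
u_c^2(y) \;=\; \sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^{\!2} u^2(x_i),
\qquad U \;=\; k\,u_c(y),
```

where y = f(x_1, ..., x_N) is the measurand model, u(x_i) are the standard uncertainties of the input quantities, u_c(y) is the combined standard uncertainty, and U the expanded uncertainty with coverage factor k (typically k = 2).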
A tool for merging and/or rebinning single or multiple datasets to achieve a lower point density with the best possible statistics. Highly scriptable, CLI only, no GUI.
Version 0.1: works, but could do with a clean-up. Weighting by uncertainty is currently always on, but should become optional for use as an azimuthal or radial averager.
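To indicate what such uncertainty-weighted rebinning does, a minimal sketch is given below; the function and argument names are made up and this is not the tool's actual implementation.

```python
import numpy as np

def rebin_weighted(q, intensity, uncertainty, edges):
    """Rebin (q, I, sigma) onto bins defined by 'edges', weighting each point
    by 1/sigma^2 so that the best-measured points dominate the bin average."""
    q_out, i_out, u_out = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (q >= lo) & (q < hi)
        if not np.any(mask):
            continue  # leave empty bins out of the merged dataset
        w = 1.0 / uncertainty[mask] ** 2
        q_out.append(np.sum(w * q[mask]) / np.sum(w))
        i_out.append(np.sum(w * intensity[mask]) / np.sum(w))
        u_out.append(np.sqrt(1.0 / np.sum(w)))  # uncertainty of the weighted mean
    return np.array(q_out), np.array(i_out), np.array(u_out)
```

Making the 1/sigma^2 weighting optional (plain averaging instead) is what would turn such a routine into a simple azimuthal or radial averager.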
This article presents deep unfolding neural networks to handle inverse problems in photothermal radiometry, enabling super-resolution (SR) imaging. The photothermal SR approach is a well-known technique to overcome the spatial resolution limitation in photothermal imaging by extracting high-frequency spatial components based on deconvolution with the thermal point spread function (PSF). However, stable deconvolution can only be achieved by using the sparse structure of defect patterns, which often requires tedious, handcrafted tuning of hyperparameters and results in computationally intensive algorithms. On this account, this article proposes Photothermal-SR-Net, which performs deconvolution by deep unfolding considering the underlying physics. Since defects appear sparsely in materials, our approach includes trained block-sparsity thresholding in each convolutional layer. This makes it possible to super-resolve 2-D thermal images for nondestructive testing (NDT) with a substantially improved convergence rate compared to classic approaches. The performance of the proposed approach is evaluated on various deep unfolding and thresholding approaches. Furthermore, we explored how to increase the reconstruction quality and the computational performance. It was found that the computing time for creating high-resolution images could be significantly reduced without decreasing the reconstruction quality by using pixel binning as a preprocessing step.
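In generic form, the inverse problem behind this approach is a block-sparse regularized deconvolution (simplified notation; the article's actual formulation is more detailed):

```latex
\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert y - \Phi x \rVert_2^2
\;+\; \lambda \sum_{b} \lVert x_b \rVert_2 ,
```

where y collects the measured thermograms, Phi denotes convolution with the thermal PSF, and the sum of block l2-norms promotes block sparsity of the reconstructed defect map x; deep unfolding replaces the hand-tuned lambda and step sizes of the classical iteration with learned, layer-wise parameters.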
Learned block iterative shrinkage thresholding algorithm for photothermal super resolution imaging
(2022)
Block-sparse regularization is already well known in active thermal imaging and is used for multiple-measurement-based inverse problems. The main bottleneck of this method is the choice of regularization parameters, which differs for each experiment. We show the benefits of using a learned block iterative shrinkage thresholding algorithm (LBISTA) that is able to learn the choice of regularization parameters, without the need to select them manually. In addition, LBISTA enables the determination of a suitable weight matrix to solve the underlying inverse problem. Therefore, in this paper we present LBISTA and compare it with state-of-the-art block iterative shrinkage thresholding using synthetically generated and experimental test data from active thermography for defect reconstruction. Our results show that the learned block-sparse optimization approach provides smaller normalized mean square errors for a small fixed number of iterations. This improves the convergence speed, so that only a few iterations are needed to generate accurate defect reconstructions in photothermal super-resolution imaging.
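For context, a minimal sketch of the non-learned baseline, plain block iterative shrinkage thresholding, is shown below; matrices, shapes and the step-size choice are placeholders, and LBISTA would replace the fixed step and threshold with learned, layer-wise parameters.

```python
import numpy as np

def block_soft_threshold(X, tau):
    """Shrink each row (block) of X towards zero by tau in its l2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def bista(Phi, Y, lam, n_iter=200):
    """Plain block-ISTA for min 0.5*||Y - Phi X||_F^2 + lam * sum_i ||X[i, :]||_2.
    Rows of X are the blocks (e.g. one row per pixel, one column per measurement)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # 1/L with L = ||Phi||_2^2
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ X - Y)               # gradient of the data-fidelity term
        X = block_soft_threshold(X - step * grad, step * lam)
    return X
```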
The thermographic detection principle is based on the analysis of transient temperature distributions caused by the interaction of an externally supplied heat flow with the inner geometry of the test object or with inhomogeneities enclosed within it. An equivalent description of this interacting heat flow is the propagation of thermal waves inside the test object. Although thermography is suitable for detecting a wide variety of inhomogeneities and for testing a wide range of materials, its fundamental limitation lies in the diffusive nature of thermal waves and in the need to measure their effect radiometrically only at the surface of the test object. The fundamental disadvantage of diffusive thermal waves compared with propagating waves, such as those used in ultrasonic testing, is the resulting rapid degradation of the spatial resolution with increasing defect depth. This degradation generally limits the applicability of thermography in the search for small, deep-lying defects.
A promising approach for improving the spatial resolution, and thus the detection sensitivity and the reconstruction quality, in thermographic testing lies in the targeted shaping of these diffuse thermal wave fields by means of structured laser thermography, i.e. photothermal excitation. Some examples are:
- Narrow crack-like defects below the surface can be detected with high sensitivity by superimposing several interfering thermal wave fields,
- Closely spaced defects can be separated by several measurements with different heating patterns,
- Defects at different depths can be distinguished by an optimized temporal design of the thermal excitation function,
- Narrow cracks on the surface can be found by robot-assisted scanning with focused laser spots,
- Defects that occur during additive manufacturing can be detected already in the build chamber and with the production laser.
We present the latest results of this technology, obtained with high-power laser systems and modern numerical methods.
In this study, two green synthesis routes were used for the synthesis of Ag/ZnO nanoparticles, using cassava starch as a simple, low-cost and effective fuel and Aloe vera as a reducing and stabilizing agent. The Ag/ZnO nanoparticles were characterized and used for the bacterial disinfection of lake water contaminated with Escherichia coli (E. coli). Characterization indicated the formation of a face-centered cubic structure of metallic silver nanoparticles with no insertion of Ag into the ZnO hexagonal wurtzite structure. Physicochemical and bacteriological analyses described in "Standard Methods for the Examination of Water and Wastewater" were used to evaluate the efficiency of the treatment. In comparison to pure ZnO, the synthesized Ag/ZnO nanoparticles showed high efficiencies against Escherichia coli (E. coli) and general coliforms present in the lake water. These pathogens were absent after treatment with Ag/ZnO nanoparticles. The results indicate that Ag/ZnO nanoparticles synthesized via green chemistry are a promising candidate for the treatment of wastewaters contaminated by bacteria, due to their facile preparation, low-cost synthesis, and disinfection efficiency.
Fire-gilding is a historic technique for the application of golden layers on a number of different base materials utilizing a gold amalgam. This technique leaves a significant amount of Hg in the golden layer, giving archeometrists a reliable indicator to identify fire-gildings.
Recent findings on presumably fire-gilded objects have in several cases shown significantly lower Hg contents than previously studied objects. This prompted a synchrotron-based X-ray fluorescence investigation into the Hg distribution along the material–gilding interface, as well as a series of measurements regarding the development of the Hg content in fire-gilded samples during artificial aging. This work presents findings on laboratory-prepared fire-gildings, indicating an Hg enrichment at the interface of fire-gilded silver samples. Notably, such an enrichment is missing in fire-gilded copper samples. Further, it is confirmed that fire-gilded layers typically do not fall below a bulk Hg content of 5%. In this light, it seems improbable that ancient samples containing <5% Hg are fire-gilded. The results presented in this study might lead to a non-destructive method to identify the Hg enrichment at the interface. This might be achieved by a combination of different non-destructive measurements and might also work unambiguously on samples in which the gold top layer is altered.
X-ray absorption spectroscopy (XAS) provides a unique, atom-specific tool to probe the electronic structure of solids. By surmounting long-held limitations of powder-based XAS using a dynamically averaged powder in a Resonant Acoustic Mixer (RAM), we demonstrate how time-resolved in situ (TRIS) XAS provides unprecedented detail of mechanochemical synthesis. The use of a custom-designed dispersive XAS (DXAS) setup allows us to increase the time resolution over existing fluorescence measurements from ∼15 min to 2 s for a complete absorption spectrum. Hence, we here establish TRIS-XAS as a viable method for studying mechanochemical reactions and sampling reaction kinetics. The generality of our approach is demonstrated through RAM-induced (i) bottom-up Au nanoparticle mechanosynthesis and (ii) the synthesis of a prototypical metal organic framework, ZIF-8. Moreover, we demonstrate that our approach also works with the addition of a stainless steel milling ball, opening the door to using TRIS-DXAS for following conventional ball milling reactions. We expect that our TRIS-DXAS approach will become an essential part of the mechanochemical tool box.
Thermographic photothermal super resolution reconstruction enables the resolution of internal defects/inhomogeneities below the classical limit, which is governed by the diffusion properties of thermal wave propagation. Based on a combination of the application of special sampling strategies and a subsequent numerical optimization step in post-processing, thermographic super resolution has already proven to be superior to standard thermographic methods in the detection of one-dimensional defect/inhomogeneity structures. In our work, we report an extension of the capabilities of the method for efficient detection and resolution of defect cross sections with fully two-dimensional structured laser-based heating. The reconstruction is carried out using one of two different algorithms that are proposed within this work. Both algorithms utilize the combination of several coherent measurements using convex optimization and exploit the sparse nature of defects/inhomogeneities as is typical for most nondestructive testing scenarios. Finally, the performance of each algorithm is rated on reconstruction quality and algorithmic complexity. The presented experimental approach is based on repeated spatially structured heating by a high power laser. As a result, a two-dimensional sparse defect/inhomogeneity map can be obtained. In addition, the obtained results are compared with those of conventional thermographic inspection methods that make use of homogeneous illumination. Due to the sparse nature of the reconstructed defect/inhomogeneity map, this comparison is performed qualitatively.
The use of polycapillary optics in confocal micro-X-ray fluorescence analysis (CMXRF) enables the destruction-free 3D investigation of the elemental composition of samples. The energy-dependent transmission properties, concerning intensity and spatial beam propagation of three polycapillary half lenses, which are vital for the quantitative interpretation of such CMXRF measurements, are investigated in a monochromatic confocal laboratory setup at the Atominstitut of TU Wien, and a synchrotron setup on the BAMline beamline at the BESSY II Synchrotron, Helmholtz-Zentrum-Berlin. The empirically established results, concerning the intensity of the transmitted beam, are compared with theoretical values calculated with the polycap software package and a newly presented analytical model for the transmission function.
The resulting form of the newly modelled energy-dependent transmission function is shown to be in good agreement with Monte Carlo simulated results for the complete energy regime, as well as the empirically established results for the energy regime between 6 keV and 20 keV. An analysis of possible fabrication errors was conducted via pinhole scans showing only minor fabrication errors in two of the investigated polycapillary optics. The energy-dependent focal spot size of the primary polycapillary was investigated in the laboratory via the channel-wise evaluation of knife-edge scans. Experimental results are compared with data given by the manufacturer as well as geometric estimations for the minimal focal spot size. Again, the resulting measurement points show a trend in agreement with geometrically estimated results and manufacturer data.
For active thermography as a non-destructive testing method, the rule of thumb has long been that the resolution of internal defects/inhomogeneities is limited to a defect-depth-to-defect-size ratio of ≤ 1. The reason for this lies in the diffusive nature of heat conduction in solids. So-called super-resolution approaches have recently made it possible to exceed this physical limit many times over. This opens up the attractive possibility of developing thermography from a purely surface-sensitive testing method into one with improved depth range. How far this development can be pushed is the subject of current research. We have already shown that this classical limitation can be overcome for one- and two-dimensional defect geometries by sequentially illuminating the test object with individual laser spots in a structured manner and then computing a defect map from the resulting measurement data by applying photothermal super-resolution reconstruction, which allows a significantly improved separation of closely spaced defects. This method benefits strongly from the combination of sequential, spatially structured illumination and modern numerical optimization methods, but this comes at the considerable cost of experimental complexity. In contrast to established standard thermographic methods with full-field illumination, this leads to long measurement times, large data sets and lengthy numerical evaluation. In this work, we report on the use of full-field, spatially structured two-dimensional illumination patterns, which, by using state-of-the-art laser projector technology in combination with a high-power laser, for the first time makes an efficient implementation of photothermal super-resolution reconstruction feasible even for larger test surfaces.
For active thermography as a non-destructive testing method, the rule of thumb has long been that the resolution of internal defects/inhomogeneities is limited to a defect-depth-to-defect-size ratio of ≤ 1. The reason for this lies in the diffusive nature of heat conduction in solids. So-called super-resolution approaches have recently made it possible to exceed this physical limit many times over. This opens up the attractive possibility of developing thermography from a purely surface-sensitive testing method into one with improved depth range. How far this development can be pushed is the subject of current research.
We have already shown that this classical limit can be overcome for 1D and 2D defect geometries by scanning the test object with individual laser spots and subsequently applying photothermal super-resolution reconstruction. This method uses a combination of sequential, spatially structured illumination and numerical optimization methods. However, this comes at the cost of experimental complexity, which leads to long measurement times, large data sets and lengthy numerical evaluation.
In this work, we report on a new experimental approach in which spatially structured 2D illumination patterns are used in combination with compressed sensing and computational imaging methods to significantly reduce the experimental complexity and to make the method usable for the investigation of larger test surfaces.
The experimental approach is based on repeated (blind) photothermal excitation with spatially structured 2D patterns using modern projector technology and a high-power laser. In the subsequent numerical reconstruction, several measurements are combined by exploiting the joint sparsity of the defects within the test object using nonlinear convex optimization methods. As a result, a 2D sparse defect/inhomogeneity map can be obtained.
Despite its mature scientific and technological foundations, thermography is still a relatively young member of the family of non-destructive testing methods. Thanks to a number of advantages, it is attracting a growing user base. Norms, standards and guidelines play an important role in its further dissemination, particularly in an industrial context. This contribution presents the current state of standardization. We will show which basic and application standards for thermography exist in Germany and internationally, and we venture a look into the future. Moreover, standardization work also thrives on the participation of interested parties. These can be industrial and academic users, equipment manufacturers, research institutions or service providers. You are welcome to bring along your needs regarding standardization projects and/or send them directly to the authors.
Due to the diffusive nature of heat propagation in solids, the detection and resolution of internal defects with active thermography based non-destructive testing is commonly limited to a defect-depth-to-defect-size ratio greater than or equal to one. In the more recent past, we have already demonstrated that this limitation can be overcome by using a spatially modulated illumination source and photothermal super resolution-based reconstruction. Furthermore, by relying on compressed sensing and computational imaging methods we were able to significantly reduce the experimental complexity to make the method viable for investigating larger regions of interest. In this work we share our progress on improving the defect/inhomogeneity characterization using fully 2D spatially structured illumination patterns instead of scanning with a single laser spot. The experimental approach is based on the repeated blind pseudo-random illumination using modern projector technology and a high-power laser. In the subsequent post-processing, several measurements are then combined by taking advantage of the joint sparsity of the defects within the sample applying 2D-photothermal super resolution reconstruction. Here, enhanced nonlinear convex optimization techniques are utilized for solving the underlying ill-determined inverse problem for typical simple defect geometries. As a result, a higher resolution defect/inhomogeneity map can be obtained at a fraction of the measurement time previously needed.
Regular inspections are necessary to ensure the durability of components. For near-surface cracks, the potential of flying-spot investigations has already been demonstrated, in which the measurement field is scanned with a laser spot, e.g. by means of a laser scanner. The measurement can be accelerated by using laser lines, although the detectability of cracks depends, among other things, on their orientation relative to the scanning direction. In addition, for strongly curved surfaces, such as those of turbine blades or machine parts, only part of the surface can be examined for cracks with active thermography using a single stationary measurement setup, because the limited depth of field of the optical systems (laser and camera) requires mechanical repositioning within the depth-of-field range.
Several perspectives are therefore necessary for a complete examination of the surface. The laser thermography applied here generates the relative motion by manipulating the test object with a robot arm, which makes it possible to scan complex surfaces. The entire reachable surface is systematically traversed with a laser line along previously planned paths. Since the robot arm carries the test object, the measurement systems used remain unaffected. The test object can be moved with many degrees of freedom, which allows optimization for the measurement problem. Among other parameters, the scanning speed, laser power, laser spot geometry, laser wavelength, scanning scheme and camera frame rate can be varied. Using the position data of the robot arm, a temperature history can be assigned to each point on the test object in order to generate a spatially resolved temperature evolution. The aim is to detect near-surface defects and to display their position accurately on the surface of the 3D model.
In this talk, results of robot-assisted thermography on a wide variety of test objects are presented. Advantages over conventional methods are explained, and current hardware and software challenges for practical use are discussed.
On behalf of the Federal Ministry for Economic Affairs and Climate Action, DIN and DKE started work on the second edition of the German Standardization Roadmap on Artificial Intelligence (Deutsche Normungsroadmap Künstliche Intelligenz) in January 2022. In a broad participation process involving more than 570 experts from industry, science, the public sector and civil society, the strategic roadmap for AI standardization was further developed. This work was coordinated and accompanied by a high-level coordination group for AI standardization and conformity.
With the standardization roadmap, a measure of the German Federal Government's AI strategy is being implemented, thereby making an essential contribution to „KI – Made in Germany“.
Standardization is part of the AI strategy and a strategic instrument for strengthening the innovative capacity and competitiveness of the German and European economy. Not least for this reason, it plays a special role in the planned European legal framework for AI, the Artificial Intelligence Act.
In this work, we report on our progress in investigating a new experimental approach for the thermographic detection of internal defects by performing 2D photothermal super-resolution reconstruction. We use modern high-power laser projector technology to repeatedly excite the sample surface photothermally with varying spatially structured 2D pixel patterns. In the subsequent (blind) numerical reconstruction, multiple measurements are combined by exploiting the joint-sparse nature of the defects within the specimen using nonlinear convex optimization methods. As a result, a 2D sparse defect/inhomogeneity map can be obtained. Using such spatially structured heating combined with compressed sensing and computational imaging methods makes it possible to significantly reduce the experimental complexity and to study larger test surfaces compared with the one-dimensional approach reported earlier.
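As a purely illustrative sketch of this kind of joint-sparse reconstruction (not the enhanced solver used here): a basic ISTA loop with block soft-thresholding solves an l2,1-regularized least-squares problem, assuming a known thermal point spread function, a chosen regularization weight and the data layout below.

import numpy as np

def fft_conv(img, otf):
    """Circular convolution via FFT (otf = fft2 of the centred PSF)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def block_soft_threshold(X, tau):
    """Proximal operator of tau*||X||_{2,1}: each row of X (one row per pixel,
    one column per measurement) is shrunk jointly -> joint sparsity."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return X * scale

def joint_sparse_ista(thermograms, psf, lam=1e-2, n_iter=200):
    """Minimise sum_m 0.5*||psf * x_m - t_m||^2 + lam*||X||_{2,1} with ISTA.

    thermograms : (M, H, W) stack of background-corrected thermograms
    psf         : (H, W) assumed thermal point spread function (centred)
    returns     : (H, W) joint defect/inhomogeneity map (l2 over measurements)
    """
    M, H, W = thermograms.shape
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    L = np.max(np.abs(otf)) ** 2           # Lipschitz constant of the gradient
    X = np.zeros((H * W, M))
    for _ in range(n_iter):
        grad = np.empty_like(X)
        for m in range(M):
            x_img = X[:, m].reshape(H, W)
            resid = fft_conv(x_img, otf) - thermograms[m]
            grad[:, m] = fft_conv(resid, np.conj(otf)).ravel()  # adjoint = correlation
        X = block_soft_threshold(X - grad / L, lam / L)
    return np.linalg.norm(X, axis=1).reshape(H, W)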
Thermographic non-destructive testing is based on the interaction of thermal waves with inhomogeneities. Because the thermal waves propagate from the heat source to the inhomogeneity and on to the detection surface according to the thermal diffusion equation, two closely spaced defects can incorrectly appear as a single defect in the measured thermogram. To break this spatial resolution limit (super resolution), spatially structured heating can be combined with numerical methods from compressed sensing. The achievable improvement in spatial resolution for defect detection then depends, in the classical sense, directly on the number of measurements. Current practical implementations of this super-resolution detection still suffer from long measurement times: the achievable resolution requires multiple measurements, and, owing to the use of single-spot laser sources or laser arrays with low pixel counts, the scanning process itself is also quite slow. With the application of the most recent high-power digital micromirror device (DMD) based laser projector technology, this issue can now be overcome.
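For illustration only (the pattern statistics, fill factor and forward model here are assumptions, not the actual projector configuration): pseudo-random binary patterns of the kind a DMD-based projector can display, and a toy forward model in which the structured heating of a sparse defect map is blurred by an assumed thermal PSF, could be sketched as follows.

import numpy as np
from scipy.signal import fftconvolve

def random_dmd_patterns(n_patterns, shape, fill=0.3, seed=0):
    """Pseudo-random binary illumination patterns
    (fill = assumed fraction of 'on' mirrors)."""
    rng = np.random.default_rng(seed)
    return (rng.random((n_patterns, *shape)) < fill).astype(float)

def simulate_measurements(defect_map, patterns, psf, noise=0.01, seed=0):
    """Toy forward model: the pattern selects where heat is absorbed,
    and the resulting signature is blurred by the assumed thermal PSF."""
    rng = np.random.default_rng(seed)
    data = []
    for p in patterns:
        heating = p * defect_map                          # structured absorption
        blurred = fftconvolve(heating, psf, mode="same")  # diffusive blur
        data.append(blurred + noise * rng.standard_normal(defect_map.shape))
    return np.asarray(data)

# example: 30 patterns on a 128 x 128 region of interest
patterns = random_dmd_patterns(30, (128, 128))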
For active thermography as a non-destructive testing method, the rule of thumb has long been that the resolution of internal defects/inhomogeneities is limited to a defect-depth-to-defect-size ratio ≤ 1. The reason for this lies in the diffusive nature of heat conduction in solids. So-called super-resolution approaches have recently made it possible to exceed this physical limit many times over. This opens up the attractive prospect of developing thermography from a purely surface-sensitive testing method into one with improved depth range. How far this development can be pushed is the subject of current research.
We have already shown that this classical limit can be overcome for 1D and 2D defect geometries by scanning the test specimen with individual laser spots and subsequently applying photothermal super-resolution reconstruction. This method combines sequential spatially structured illumination with numerical optimization methods. However, this comes at the cost of experimental complexity, leading to long measurement times, large data sets and lengthy numerical evaluation.
In this work, we report on a new experimental approach in which spatially structured 2D illumination patterns are used in combination with compressed sensing and computational imaging methods to significantly reduce the experimental complexity and make the method usable for investigating larger test surfaces.
The experimental approach is based on repeated (blind) photothermal excitation with spatially structured 2D patterns using modern projector technology and a high-power laser. In the subsequent numerical reconstruction, several measurements are combined by exploiting the joint sparsity of the defects within the test specimen using nonlinear convex optimization methods. As a result, a 2D sparse defect/inhomogeneity map can be obtained.