Chemistry and Process Engineering
Temperature Compensation Strategies for Lamb Wave Inspection using Distributed Sensor Networks
(2022)
The application of temperature compensation strategies is crucial in structural health monitoring approaches based on guided waves. Indeed, varying temperature influences the performance of the inspection system, inducing false alarms or missed detections and consequently reducing reliability. This paper quantitatively describes a method to compensate for the temperature effect, namely optimal baseline selection (OBS), and extends its application to distributed sensor networks (DSN). The effect of the temperature separation between baseline time-traces in OBS is investigated for multiple sensor pairs employed in the DSN. A combined strategy that uses both OBS and a frequent value warning is considered. Theoretical results are compared with data from two separate experiments that use different frequency analyses, with predominantly A0 mode data, S0 mode data, or both. Particular attention is paid to the fact that different paths are available in a sensor network, so several combinations of results are possible. By introducing a frequent value warning, it is possible to increase the efficiency of the OBS approach while using fewer signal processing algorithms. The results confirm that the performance of OBS quantitatively agrees with predictions and demonstrate that the use of compensation strategies improves the detectability of damage.
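The OBS principle lends itself to a compact illustration. The following minimal sketch is ours, not the authors' implementation; the residual-based damage index and the path-voting warning are assumptions made for illustration:

```python
import numpy as np

def optimal_baseline_selection(signal, baselines):
    """Pick the stored baseline trace that best matches the current
    signal, i.e. the one recorded at the most similar temperature.
    `baselines` has shape (n_baselines, n_samples)."""
    residuals = np.linalg.norm(baselines - signal, axis=1)
    best = int(np.argmin(residuals))
    damage_index = residuals[best] / np.linalg.norm(baselines[best])
    return best, damage_index

def frequent_value_warning(damage_indices, threshold, min_paths=3):
    """Raise a warning only if enough sensor paths agree that the
    residual damage index exceeds the threshold."""
    return np.sum(np.asarray(damage_indices) > threshold) >= min_paths

# Example: five baselines recorded at different temperatures,
# one current trace close to baseline 2
rng = np.random.default_rng(0)
baselines = rng.normal(size=(5, 1000))
signal = baselines[2] + 0.01 * rng.normal(size=1000)
idx, di = optimal_baseline_selection(signal, baselines)
print(idx, di)
```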
A systematic study has been carried out to investigate the neutron transmission signal as a function of sample temperature. In particular, the experimentally determined wavelength-dependent neutron attenuation spectra for a martensitic steel at temperatures ranging from 21 to 700°C are compared with simulated data. A theoretical description that includes the Debye–Waller factor in order to describe the temperature influence on the neutron cross sections was implemented in the nxsPlotter software and used for the simulations. The analysis of the attenuation coefficients at varying temperatures shows that the missing contributions due to elastic and inelastic scattering can be clearly distinguished: while the elastically scattered intensities decrease with higher temperatures, the inelastically scattered intensities increase, and the two can be separated from each other by analysing unique sharp features in the form of Bragg edges. This study presents the first systematic approach to quantify this effect and can serve as a basis, for example, for correcting measurements taken during in situ heat treatments, which in many cases is a prerequisite for obtaining quantifiable results.
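Schematically, the temperature dependence described above enters through the Debye–Waller factor: the coherent elastic (Bragg) intensity is damped as temperature rises, while the thermal diffuse (inelastic) contribution grows correspondingly. A simplified relation for orientation (the actual nxsPlotter model is more detailed):

```latex
I_{\mathrm{el}}(T) \;\propto\; e^{-2W(T)}, \qquad
I_{\mathrm{inel}}(T) \;\propto\; 1 - e^{-2W(T)}, \qquad
2W(T) = B(T)\,\frac{\sin^{2}\theta}{\lambda^{2}}
```

Since B(T) increases with temperature, the sharp Bragg edges lose intensity to the smooth inelastic background as the sample is heated, which is what allows the two contributions to be separated.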
Tempo-spectral multiplexing in flow cytometry with lifetime detection using QD-encoded polymer beads
(2020)
Semiconductor quantum dots (QDs) embedded into polymer microbeads are known to be very attractive emitters for spectral multiplexing and colour encoding. Their luminescence lifetimes or decay kinetics have, however, rarely been exploited as an encoding parameter, although they cover time ranges which are not easily accessible with other luminophores. We demonstrate here the potential of QDs made from II/VI semiconductors, with luminescence lifetimes of several tens of nanoseconds, to expand the lifetime range of organic encoding luminophores in multiplexing applications using time-resolved flow cytometry (LT-FCM). For this purpose, two different types of QD-loaded beads were prepared and characterized by photoluminescence measurements on the ensemble level and by single-particle confocal laser scanning microscopy. Subsequently, these lifetime-encoded microbeads were combined with dye-encoded microparticles in systematic studies to demonstrate the potential of these QDs to increase the number of lifetime codes for lifetime multiplexing and combined multiplexing in the time and colour domain (tempo-spectral multiplexing). These studies were done with a recently developed luminescence lifetime flow cytometer (LT-FCM setup) operating in the time domain, which presents an alternative to reports on phase-sensitive lifetime detection in flow cytometry.
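In time-domain lifetime encoding, the code carried by each bead is recovered by fitting its decay trace. An illustrative sketch, assuming a simple monoexponential decay (not the actual LT-FCM readout software):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    """Monoexponential luminescence decay model."""
    return amplitude * np.exp(-t / tau) + offset

# Simulated decay of a QD-encoded bead (tau ~ 40 ns), showing how a
# lifetime code could be read out from a time-resolved trace.
t = np.linspace(0, 200, 400)                      # time in ns
rng = np.random.default_rng(1)
counts = decay(t, 1000.0, 40.0, 5.0) + rng.normal(0, 10, t.size)

popt, _ = curve_fit(decay, t, counts, p0=(800.0, 30.0, 0.0))
print(f"recovered lifetime: {popt[1]:.1f} ns")
```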
In civil engineering, laser-induced breakdown spectroscopy (LIBS) has been applied as a fast and reliable method for the quantitative evaluation of concrete cores. Owing to two-dimensional scanning, the heterogeneity of concrete can be evaluated, and elements like Cl, Na, and S can be related to the cement matrix only. This study deals with the temporal evaluation and imaging of laser-induced plasmas on cement-based materials in order to investigate the impact of aggregates with different grain sizes on the spectral response in LIBS.
The quality of research antibodies has been an issue for decades. Although several papers have been published to improve the situation, their impact seems to be limited. This publication attempts to simplify the description of validation criteria so that the occasional antibody user is able to assess the validation level of an immunochemical reagent. A simple one-page checklist is supplied for the practical application of these criteria.
Fluorides are well known as wood preservatives. One of the limitations of fluoride-based wood preservatives is their high leachability. Fluorides of low water solubility are an alternative to the fluoride salts such as NaF currently used in wood protection. However, impregnating low water-soluble fluorides into wood poses a challenge. To address it, low water-soluble fluorides like calcium fluoride (CaF2) and magnesium fluoride (MgF2) were synthesized as nanoparticles via fluorolytic sol−gel synthesis and then impregnated into wood specimens. In this study, the toxicity of nano metal fluorides was assessed by termite mortality, mass loss and visual analysis of treated specimens after eight weeks of exposure to termites (Coptotermes formosanus). Nano metal fluorides with sol concentrations of 0.5 M and higher were found to be effective against termites, resulting in 100% termite mortality and significantly inhibiting termite feeding. Among the formulations tested, the least damage was found for specimens treated with combinations of CaF2 and MgF2, with an average mass loss of less than 1% and a visual rating of “1”. These results demonstrate the efficacy of low water-soluble nano metal fluorides in protecting wood from termite attack.
The process of ensuring the reliability of NDT applications comprises various aspects, such as determining the performance and probability of success, the uncertainty in measurement, the provision of clear and functional procedures, and ensuring the correct application accordingly. Test specimens have become powerful elements in supporting many of these aspects. Within the committee for NDT in Civil Engineering (NDT-CE) of the German Society for Nondestructive Testing (DGZfP), the subcommittee on Quality Assurance (UA-QS) therefore addresses the design and the integration of test specimens in the quality assurance process. Depending on the specific purpose, the requirements for test specimens can vary significantly based on the defined simulated scenario. The most prominent purposes of test specimens might be seen in providing references for inspection systems with regard to function control, calibration and validation. Further aspects can be parametric studies, basic investigation of physical principles related to NDT, or a simplified and therefore comprehensible demonstration of inspection concepts (e.g. for teaching purposes). The specific purpose of a test specimen dictates the requirements regarding its conception, including the exact design, the material, the fabrication accuracy and the conditioning. In the development of a general guideline by the UA-QS for application-specific procedures and their validation, the use of test specimens is addressed and specific concepts for the design of test specimens are proposed. This includes the analysis of the measurement process for any given application, deriving an adequate calibration approach for it, and designing test specimens (calibration specimens) accordingly. Furthermore, it includes the validation of the procedure taking into account all conditions related to the specific application in the field. The validation requires a statistically sufficient number of trials. Thorough evaluation of each trial can only be established if the ground truth is known. Therefore, test specimens providing a realistic but controlled simulation of the inspection problem are valuable and indispensable elements in the validation process. The requirement of being fully realistic can often not be fulfilled due to practical restrictions. Any aspect that cannot be included in the simulation realistically needs to be simulated conservatively. This, again, requires a sufficient understanding of the inspection principle and technique to ensure conservativeness. Among other quality-assurance-related aspects, the UA-QS establishes concepts and guidelines regarding sound and efficient approaches for the specific purposes of test specimens. The subcommittee brings together representatives of different groups along the entire value chain of NDT-CE, including researchers, practitioners, manufacturers and clients, who all work together to establish a common understanding and level of quality assurance in the industry.
One of the objectives of the EU project EC4SafeNano (European Centre for Risk Management and Safe Innovation in Nanomaterials & Nanotechnologies) was to test and benchmark the services in order to check their relevance for addressing identified stakeholder needs, but also to evaluate the governance of the structure delivering the proposed services. The aim is to demonstrate the technical relevance of the services and the overall open structure organisation, including governance rules and operating procedures, by answering relevant identified questions (case studies) selected by a panel of stakeholders. Therefore, a significant part of the project will be devoted to this demonstration of the operational and functional basis of the organised network.
Corrosion of concrete reinforcement is one of the major damage mechanisms affecting both the load-bearing capacity and the serviceability of reinforced concrete structures significantly. The challenge of detecting corrosion is that the corrosion process in its various forms is not immediately visible, especially in the corrosion initiation phase inside the concrete. When externally discernible damages are observed during visual inspections on the structure, the extent of the damage inside the concrete is often already significant. Corrosion caused by carbonation often leads to severe discoloration of the surface or even large-area spalling of the concrete cover. In contrast, chloride-induced corrosion is usually difficult to observe visually, but can cause much more serious damage in less time. The effect occurs locally and can lead to weakening of the cross-section of the reinforcement. This, in turn, can cause sudden structural collapses without prior notice. Therefore, it is important to investigate whether there is protection against corrosion of the reinforcement in the concrete and to detect active corrosion in the structure at an early stage.
Meanwhile, various non-destructive and minimally invasive testing methods are available to evaluate the resistance to penetration of corrosion-promoting pollutants and to detect active corrosion. In this paper, a bridge crossing the river Regen (Germany) is used as a case study to demonstrate how the information obtained by applying different testing methods can be combined and evaluated in the context of structural reassessments. The results of permeability testing (Torrent tester) and electrical resistance measurement (Wenner probe) are considered, and active corrosion areas are localized using half-cell potential mapping combined with concrete cover measurement by the eddy current method and ground penetrating radar (GPR). The results are evaluated using drill cores, and in addition laser-induced breakdown spectroscopy (LIBS) was applied to obtain information about possible ion transport in the concrete.
Although layer-based additive manufacturing methods such as laser powder bed fusion (PBF-LB) offer an immense geometrical freedom in design, they are typically subject to a build-up of internal stress (i.e. thermal stress) during manufacturing. As a consequence, significant residual stress (RS) is retained in the final part as a footprint of these internal stresses. Furthermore, localized melting and solidification inherently induce columnar-type grain growth accompanied by crystallographic texture. Although diffraction-based methods are commonly used to determine the RS distribution in PBF-LB parts, such features pose metrological challenges in their application. In theory, preferred grain orientation invalidates the hypothesis of isotropic material behavior underlying the common methods to determine RS. In this work, more refined methods are employed to determine RS in PBF-LB/M/IN718 prisms, based on crystallographic texture data. In fact, the employment of direction-dependent elastic constants (i.e. stress factors) for the calculation of RS results in insignificant differences from conventional approaches based on the hypothesis of isotropic mechanical properties. It can be concluded that this result is directly linked to the fact that the {311} lattice planes typically used for RS analysis in nickel-based alloys have high multiplicity and less strong texture intensities compared with other lattice planes. It is also found that the length of the laser scan vectors determines the surface RS distribution in prisms prior to their removal from the baseplate. On removal from the baseplate the surface RS considerably relaxes and/or redistributes; a combination of the geometry and the scanning strategy dictates the sub-surface RS distribution.
The 2023 Nobel Prize in Chemistry was awarded to Aleksey I. Ekimov (prize share 1/3), Louis E. Brus (prize share 1/3), and Moungi G. Bawendi (prize share 1/3) for groundbreaking inventions in the field of nanotechnology, i.e., for the discovery and synthesis of semiconductor nanocrystals, also termed quantum dots, that exhibit size-dependent physicochemical properties enabled by quantum size effects. This feature article summarizes the main milestones of the discoveries and developments of quantum dots that paved the road to their versatile applications in solid-state lighting, display technology, energy conversion, medical diagnostics, bioimaging, and image-guided surgery.
Most factors acting on concrete rheology operate at an extremely small scale. Influencing factors in the millimetre or centimetre range are essentially restricted to sand and aggregates. The latter, however, make up 50 to 70% of the total volume of most concretes – a fact often ignored in research on controlling concrete processing properties.
Whereas suitably chosen concrete admixtures and additives can influence rheology in a very targeted manner, sand and aggregates are less suitable for controlling rheology but nonetheless contribute to the rheology of the overall system. The actions of sand and aggregate can impose themselves upon the actions of admixtures and additives and, in unfavourable circumstances, even render them redundant. For this reason, any results concerning the processability of binding agent systems can only be transferred to concrete with great care. It is important to better understand the action of sand and aggregates in order to harmonise them in such a way that they complement the action of superplasticisers positively instead of working against them. This targeted fine-tuning can also save costs.
The rails of modern railways face enormous wear and tear from ever-increasing train speeds and loads. This necessitates diligent non-destructive testing of the entire railway system for defects.
Non-destructive testing of rail tracks is carried out by rail inspection trains equipped with ultrasonic and eddy current test devices. However, the evaluation of the gathered data is mainly done manually with a strong focus on ultrasonic data, and defects are checked on-site using hand-held testing equipment. Maintenance measures are derived based on these on-site findings.
The aim of the AIFRI project (Artificial Intelligence For Rail Inspection) is to
- increase the degree of automation of the inspection process, from the evaluation of the data to the planning of maintenance measures,
- increase the accuracy of defect detection,
- automatically classify detected indications into risk classes.
These aims will be achieved by training a neural network for defect detection and classification. Since the current testing data is unbalanced, insufficiently labeled and largely unverified, we will supplement it with fused, simulated eddy current and ultrasonic testing data in the form of a configurable digital twin.
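One conceivable way to supplement unbalanced real data with digital-twin output is a simple mixing of labeled examples. The sketch below is our illustration of that idea, not the AIFRI pipeline itself:

```python
import numpy as np

def build_training_set(real_x, real_y, sim_x, sim_y, sim_fraction=0.5):
    """Mix verified real inspection signals with simulated (digital-twin)
    eddy current/ultrasonic examples so that rare defect classes are
    better represented; sim_fraction is the target share of simulated data."""
    n_sim = min(int(sim_fraction / (1 - sim_fraction) * len(real_x)), len(sim_x))
    pick = np.random.default_rng(0).choice(len(sim_x), size=n_sim, replace=False)
    x = np.concatenate([real_x, sim_x[pick]])
    y = np.concatenate([real_y, sim_y[pick]])
    order = np.random.default_rng(1).permutation(len(x))
    return x[order], y[order]

# Example with toy 1D signals: 100 real traces, 500 simulated ones
rng = np.random.default_rng(42)
real_x, real_y = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)
sim_x, sim_y = rng.normal(size=(500, 64)), rng.integers(0, 5, 500)
x, y = build_training_set(real_x, real_y, sim_x, sim_y)
print(x.shape, y.shape)   # (200, 64) (200,)
```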
This paper reports the outcome of an interdisciplinary team’s application of multispectral imaging techniques and material analysis to a music fragment from the first decades of the fifteenth century: Atri, Archivio Capitolare, Museo della Basilica Cattedrale, Biblioteca del Capitolo della Cattedrale, Frammento 17. This important parchment leaf has rarely been investigated since its discovery 45 years ago. Thanks to the applied techniques and methods (such as the evaluation of the data using the fingerprint model), it is now possible to discuss new evidence supporting conclusions regarding the fragment’s origin and afterlife.
The BAM Data Store
(2023)
As a partner in several NFDI consortia, the Bundesanstalt für Materialforschung und -prüfung (BAM, German federal institute for materials science and testing) contributes to research data standardization efforts in various domains of materials science and engineering (MSE). To implement a central research data management (RDM) infrastructure that meets the requirements of MSE groups at BAM, we initiated the Data Store pilot project in 2021. The resulting infrastructure should enable researchers to digitally document research processes and store related data in a standardized and interoperable manner. As a software solution, we chose openBIS, an open-source framework that is increasingly being used for RDM in MSE communities.
The pilot project was conducted for one year with five research groups across different organizational units and MSE disciplines. The main results are presented for the use case “nanoPlattform”. The group registered experimental steps and linked associated instruments and chemicals in the Data Store to ensure full traceability of data related to the synthesis of ~400 nanomaterials. The system also supported researchers in implementing RDM practices in their workflows, e.g., by automating data import and documentation and by integrating infrastructure for data analysis.
Based on the promising results of the pilot phase, we will roll out the Data Store as the central RDM infrastructure of BAM starting in 2023. We further aim to develop openBIS plugins, metadata standards, and RDM workflows to contribute to the openBIS community and to foster RDM in MSE.
In view of the increasing digitization of research and the use of data-intensive measurement and analysis methods, research institutions and their staff are faced with the challenge of documenting a constantly growing volume of data in a comprehensible manner, archiving it for the long term, and making it available for discovery and re-use by others in accordance with the FAIR principles. At BAM, we aim to facilitate the integration of research data management (RDM) strategies during the whole research cycle, from the creation and standardized description of materials datasets to their publication in open repositories. To this end, we present the BAM Data Store, a central system for internal RDM that fulfills the heterogeneous demands of materials science and engineering labs. The BAM Data Store is based on openBIS, an open-source software developed by ETH Zurich that was originally created for life science laboratories but has since been deployed in a variety of research domains. The software offers a browser-based user interface for the digital representation of lab inventory entities (e.g., samples, chemicals, instruments, and protocols) and an electronic lab notebook for the standardized documentation of experiments and analyses.
To investigate whether openBIS is a suitable framework for the BAM Data Store, we carried out a pilot phase during which five research groups with employees from 16 different BAM divisions were introduced to the software. The pilot groups were chosen to represent a diverse array of domain use cases and RDM requirements (e.g., small vs. big data volumes, heterogeneous vs. structured data types) as well as varying levels of prior IT knowledge on the users' side.
Overall, the results of the pilot phase are promising: While the creation of custom data structures and metadata schemas can be time-intensive and requires the involvement of domain experts, the system offers specific benefits in the form of a simplified documentation and automation of research processes, as well as constituting a basis for data-driven analysis. In this way, heterogeneous research workflows in various materials science research domains could be implemented, from the synthesis and characterization of nanomaterials to the monitoring of engineering structures. In addition to the technical deployment and the development of domain-specific metadata standards, the pilot phase also highlighted the need for suitable institutional infrastructures, processes, and role models. An institute-wide rollout of the BAM Data Store is currently being planned.
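Interactions with an openBIS instance can also be scripted. A minimal, hypothetical sketch using the pybis client library; the URL, space, object type, and property codes below are placeholders we invented, not BAM's actual configuration:

```python
from pybis import Openbis

# Connect and authenticate (hypothetical server URL and credentials)
o = Openbis("https://openbis.example.org")
o.login("username", "password", save_token=True)

# Register a synthesis step as a new sample with linked metadata
sample = o.new_sample(
    type="NANOMATERIAL_SYNTHESIS",                    # hypothetical type
    space="NANO_PLATTFORM",                           # hypothetical space
    props={"name": "Batch 042", "precursor": "TEOS"}  # hypothetical props
)
sample.save()
o.logout()
```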
The occurrence of small particles consisting of organic polymers, so-called microplastic (MP), in aquatic environments attracts increasing interest from both the public and science. Recent sampling campaigns in surface waters revealed substantial numbers of particles in the size range from a few micrometers to a few millimeters. In order to validate sample preparation, identification and quantification, and to investigate the behavior of MP particles and potential toxic effects on organisms, defined MP model particles are needed. Many studies use spherical compounds that probably behave differently from the irregularly shaped MP found in environmental samples.
However, the preparation and handling of MP particles are challenging tasks and were systematically investigated in the present study. Polystyrene (PS), a commonly found polymer with a density slightly above that of water, was selected as the polymer type for milling and fractionation studies. A cryogenic ball mill proved practical and effective for producing particles in the size range from 1 to 200 μm. The yield of small particles increased with increasing pre-cooling and milling durations. Depending on concentration and size, PS particles do not completely disperse in water, and particles partly creep vertically up glass walls. Stabilized MP suspensions without surfactants that might harm organisms are needed for toxicological studies. Stabilizing PS particle suspensions with ozone treatment reduced the wall effect and increased the number of dispersed PS particles, but increased the dissolved organic carbon concentration and changed the size distribution of the particles.
X-ray photoelectron-spectroscopy (XPS) allows simultaneous irradiation and damage monitoring. Although water radiolysis is essential for radiation damage, all previous XPS studies were performed in vacuum. Here we present near-ambient-pressure XPS experiments to directly measure DNA damage under water atmosphere. They permit in-situ monitoring of the effects of radicals on fully hydrated double-stranded DNA. Our results allow us to distinguish direct damage, by photons and secondary low-energy electrons (LEE), from damage by hydroxyl radicals or hydration-induced modifications of damage pathways. The exposure of dry DNA to x-rays leads to strand-breaks at the sugar-phosphate backbone, while deoxyribose and nucleobases are less affected. In contrast, a strong increase of DNA damage is observed in water, where OH-radicals are produced. In consequence, base damage and base release become predominant, even though the number of strand-breaks increases further. Furthermore, first data on the degradation of single-stranded DNA-binding proteins (G5P/GV5 and hmtSSB) under vacuum and NAP-XPS conditions are presented.
The change of DNA radiation damage upon hydration: In-situ observations by near-ambient-pressure XPS
(2023)
Ionizing radiation damage to DNA plays a fundamental role in cancer therapy. X-ray photoelectron-spectroscopy (XPS) allows simultaneous irradiation and damage monitoring. Although water radiolysis is essential for radiation damage, all previous XPS studies were performed in vacuum. Here we present near-ambient-pressure XPS experiments to directly measure DNA damage under water atmosphere. They permit in-situ monitoring of the effects of radicals on fully hydrated double-stranded DNA. The results allow us to distinguish direct damage, by photons and secondary low-energy electrons (LEE), from damage by hydroxyl radicals or hydration-induced modifications of damage pathways. The exposure of dry DNA to x-rays leads to strand-breaks at the sugar-phosphate backbone, while deoxyribose and nucleobases are less affected. In contrast, a strong increase of DNA damage is observed in water, where OH-radicals are produced. In consequence, base damage and base release become predominant, even though the number of strand-breaks increases further.
The Color X-ray Camera (CXC or SLcam®) is an energy-resolving X-ray camera capable of energy- and space-resolved measurements. It consists of a high-speed CCD detector coupled to a polycapillary optic that conducts the X-ray photons from the probe to distinct pixels on the detector. The camera is capable of fast acquisition of spatially and energy-resolved fluorescence images. Dedicated software enables the acquisition and online processing of the spectral data for all 69696 pixels, leading to a real-time visualization of the element distribution in a sample. It was developed in a joint project of BAM, IFG Berlin and PN Sensors. In this contribution we mainly discuss the use of the CXC at our beamline, the BAMline at BESSY II, and imaging applications of the CXC from different areas, such as biology and archaeometry. Additionally, new developments for the use of the detector without optics, like wavelength-dispersive detection or 1shot-XANES, will be presented.
As an alternative to conventional transmission-based radiography and computed tomography, X-ray refraction techniques are increasingly being used to detect damage in light materials. In fact, their range of application has recently been extended even to metals. The big advantage of X-ray refraction techniques is that they are able to detect nanometric defects whose size lies below the resolution of even state-of-the-art synchrotron-based X-ray computed tomography (SXCT). The superiority of synchrotron X-ray refraction radiography and tomography (SXRR and SXRCT) has been shown in the case of light materials, in particular composites. X-ray refraction techniques also yield a quantification of the amount of damage (the so-called relative internal specific surface) and compare well with damage models. At the same time, it is impossible for SXRR and SXRCT to image single defects. We show that the combination of refraction- and transmission-based imaging techniques yields an impressive amount of additional information about the type and amount of defects in microstructured materials such as additively manufactured metals or metal matrix composites. We also show that the use of data fusion techniques allows the classification of defects in statistically significant representative volume elements.
An international comparison study on the accurate determination of the molar mass M(Si) of silicon artificially enriched in 28Si (x(28Si) > 0.9999 mol mol−1) has been completed. The measurements were part of the high-level CCQM-P160 pilot study assessing the ability of National Metrology Institutes (NMIs) and Designated Institutes (DIs) to make such measurements at the lowest possible levels of measurement uncertainty and to identify possible difficulties when measuring this kind of sample. This study supports the molar mass measurements critical to disseminating the silicon route to realizing the new definitions for the kilogram and the mole. Measurements were also made by one external research institute and an external company. The different institutes were free to choose their experimental (mass spectrometric) set-ups and equipment, thereby enabling also the comparison of different techniques. The investigated material was a chemically pure, polycrystalline silicon material. The subsequent modified single crystalline secondary product of this material was intended for the production of silicon which was used for two additional spheres in the context of the redetermination of the Avogadro constant NA, required for the revision of the International System of Units (SI) via fundamental constants which came into force in May 2019. The CCQM pilot study was organized by Physikalisch-Technische Bundesanstalt (PTB). Aqueous silicon solutions were shipped to all participating institutions. The data analysis as well as the uncertainty modelling and calculation of the results was predefined. The participants were provided with an uncertainty budget as a GUM Workbench® file as well as a free software license for the duration of the comparison. The agreement of the values of the molar mass (M(Si) = 27.976 942 577 g mol−1) was excellent, with ten out of 11 results reported within the range of relative uncertainty of 1 × 10−8 required for the revision of the SI.
A software toolbox is introduced that addresses several needs common to computed tomography (CT). Built for the WIPANO CTSimU project to serve as the reference implementation for its image processing and evaluation tasks, it provides a Python 3 interface that is adaptable to many conceivable applications. Foremost, the toolbox features a pipeline architecture for sequential 2D image processing tasks, such as flat field corrections and image binning, and enables the user to create their own processing modules. Beyond that, it provides means to measure line profiles and image quality assessment algorithms to calculate modulation transfer functions (MTF) or to determine the interpolated basic spatial resolution (iSRb) using a duplex wire image. It can also be used to calculate projection matrices for the reconstruction of scans with arbitrary industrial CT geometries and trajectories. The CTSimU project defined a framework of projection- and volume-based test scenarios for the qualification of radiographic simulation software towards its use in dimensional metrology. The toolbox implements the necessary evaluation routines and generates reports for all projection-based tests.
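The pipeline architecture described can be pictured as chained 2D processing modules. A minimal sketch with a hypothetical interface; the class and method names are our invention and may differ from the toolbox's actual API:

```python
import numpy as np

class FlatFieldCorrection:
    """Normalize a projection by dark and flat (gain) reference images."""
    def __init__(self, dark, flat):
        self.dark, self.flat = dark, flat
    def run(self, img):
        return (img - self.dark) / np.maximum(self.flat - self.dark, 1e-12)

class Binning:
    """2x2 pixel binning by block averaging."""
    def run(self, img):
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

class Pipeline:
    """Apply image-processing modules sequentially."""
    def __init__(self, steps):
        self.steps = steps
    def process(self, img):
        for step in self.steps:
            img = step.run(img)
        return img

# Usage on a synthetic projection:
dark = np.full((512, 512), 100.0)
flat = np.full((512, 512), 40000.0)
proj = np.random.default_rng(2).uniform(5000, 30000, (512, 512))
result = Pipeline([FlatFieldCorrection(dark, flat), Binning()]).process(proj)
print(result.shape)   # (256, 256)
```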
The Dark side of Science
(2021)
We all may have started out as bright-eyed students trying to do science to the best of our abilities, but over time, some of us have gradually drifted to the dark side. The dark side of science has an impressive publication rate in high-ranking journals, good success with funding agencies, and rocks the world with stellar findings. Unfortunately, these findings aren't real, either by accident or on purpose. As the presenter and his colleagues found, trying to correct or even dispute any of these findings in literature is a supremely complex and time-consuming effort.
With no recent reduction in the frequency of such false findings, it is up to us to try to stem the flow. Besides looking at examples, we need to understand the underlying driving forces behind this dark scientific movement. By combining this understanding with a refresher of the core scientific principles, we can then develop the necessary argumentative tools and mechanisms that may prevent our own slide down the slippery slope.
This talk will therefore start out with several entertaining examples of probably accidental, as well as definitely deliberate, false scientific findings in literature (and in particular in the field of materials research). We will then take a brief look at the possible causes for these developments, after which some tools will be presented that can help both the fresh as well as the well-seasoned scientist to rise up against the dark side.
The dark side of science
(2020)
The Dark Side of Science
(2020)
The Dark Side of Science
(2019)
The Joint Summer School of the two Marie Skłodowska-Curie Innovative Training Networks (ITN) “BioCapture” and “GlycoImaging”, funded by the EU within the Horizon 2020 framework programme, which are both devoted to the development of new methods for cancer biomarker and cancer cell detection, will take place at the Adlershof Campus of BAM. 19 early-stage researchers of both projects will convene, discuss their own science and plan future collaborative research. Training in scientific writing (instructor: Luita Spangler, Free University of Berlin), an employability workshop (Antti Kapanen, University of Applied Sciences Berlin) and first contacts with the “dark side of science” (Brian R. Pauw, BAM) will complement the programme of the summer school.
An introductory lecture on the Dark Side of Science; what it is, why it exists, and what can be done to fight it. This lecture illuminates the increasing prevalence of fraudulent scientific work (e.g. faked data, manipulated images, paper mills) with plenty of examples and sources. The second section expands on the driving forces that caused this phenomenon to emerge, largely driven by pressures from management, peers and the researcher themselves. The third section expands on methods and tools that can be used to educate and arm oneself against this phenomenon. The 2023 edition includes new examples of larger fraudulent bodies of work emerging, and the problems posed by the arrival of LLMs.
This presentation highlights ongoing scientific misconduct as found in academic literature, including data and image manipulation and paper mills. Starting with an exposé of examples, it delves deeper into the causes and metrics driving this phenomenon. Finally, a range of possible tools is presented that young researchers can use to prevent themselves from sliding into dark scientific methods.
The increasing request for hair ethyl glucuronide (HEtG) in alcohol consumption monitoring according to cut-off levels set by the Society of Hair Testing (SoHT) has triggered a proficiency testing program based on interlaboratory comparisons (ILC). Here, the outcome of nine consecutive ILC rounds organised by the SoHT on the determination of HEtG between 2011 and 2017 is summarised regarding interlaboratory reproducibility and the influence of procedural variants. Test samples prepared from cut hair (1 mm) with authentic (in-vivo incorporated) and soaked (in-vitro incorporated) HEtG concentrations up to 80 pg/mg were provided to 27–35 participating laboratories. Laboratory results were evaluated according to ISO 5725-5 and provided robust averages and relative reproducibility standard deviations typically between 20 and 35%, in reasonable accordance with the prediction of the Horwitz model. Evaluation of results regarding the analytical techniques revealed no significant differences between gas and liquid chromatographic methods. In contrast, a detailed evaluation of different sample preparations revealed significantly higher average values when pulverised hair is tested compared to cut hair. This observation was reinforced over the different ILC rounds and can be attributed to the increased acceptance and routine of hair pulverisation among laboratories. Further, the reproducibility standard deviations among laboratories performing pulverisation were on average in very good agreement with the prediction of the Horwitz model. The use of sonication showed no effect on the HEtG extraction yield.
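For orientation, the Horwitz model referred to above predicts the between-laboratory reproducibility RSD from the mass fraction alone. A sketch using the common Horwitz function with Thompson's trace-level modification (our illustration, not the study's evaluation code):

```python
import math

def horwitz_rsd_percent(mass_fraction):
    """Between-laboratory reproducibility RSD (%) predicted by the
    Horwitz function, with Thompson's constant 22 % plateau below a
    mass fraction of 1.2e-7 (i.e. at trace levels)."""
    if mass_fraction < 1.2e-7:
        return 22.0
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

# 80 pg/mg corresponds to a mass fraction of 8e-8, which falls on the
# trace-level plateau -- consistent with the reported 20-35 % spread.
print(horwitz_rsd_percent(8e-8))   # 22.0
```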
The DIRECTT (Direct Iterative REconstruction of Computed Tomography Trajectories) algorithm represents a promising alternative to conventional algorithms, such as the Filtered Backprojection (FBP) or the Algebraic Reconstruction Technique (ART), by overcoming restrictions associated with them. Such restrictions include the limited spatial resolution achieved through FBP due to Nyquist's sampling theorem, the inability of FBP to perform a quality reconstruction when projections are missing, and the excessive computing time needed for ART.
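For orientation, the ART baseline that DIRECTT is compared against can be sketched as the classical Kaczmarz update; the loop over every ray in every iteration illustrates the computing-time restriction mentioned above (illustrative only; DIRECTT itself works differently):

```python
import numpy as np

def art_reconstruct(A, p, n_iter=10, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz iteration):
    A is the system matrix (rays x voxels), p the measured projections.
    Each ray equation is enforced in turn with relaxation factor `relax`."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            a = A[i]
            denom = a @ a
            if denom > 0:
                x += relax * (p[i] - a @ x) / denom * a
    return x

# Toy example: three rays through two voxels
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p = np.array([1.0, 2.0, 3.0])
print(art_reconstruct(A, p, n_iter=50))   # approaches [1, 2]
```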
Lanthanides (Ln) are critical raw materials; however, their mining and purification have a considerable negative environmental impact, and sustainable recycling and separation strategies for these elements are needed. In this study, the precipitation and solubility behavior of Ln complexes with pyrroloquinoline quinone (PQQ), the cofactor of recently discovered lanthanide-dependent methanol dehydrogenase (MDH) enzymes, is presented. In this context, the molecular structure of a biorelevant europium PQQ complex was elucidated for the first time outside a protein environment. The complex crystallizes as an inversion-symmetric dimer, Eu2PQQ2, with binding of Eu in the biologically relevant pocket of PQQ. LnPQQ and Ln1Ln2PQQ complexes were characterized using inductively coupled plasma mass spectrometry (ICP-MS), infrared (IR) spectroscopy, 151Eu-Mössbauer spectroscopy, X-ray total scattering, and extended X-ray absorption fine structure (EXAFS). It is shown that a natural enzymatic cofactor is capable of achieving separation by precipitation, to some extent, of the notoriously similar and thus difficult-to-separate lanthanides.
We show that complex physical and chemical interactions between boehmite nanoparticles and epoxy drastically affect matrix properties, which in the future will allow tuning of material properties for further optimization in applications from automotive to aerospace. We utilize intermodulation atomic force microscopy (ImAFM) to probe the local stiffness of both particles and polymer matrix. Stiff particles are expected to increase the total stiffness of nanocomposites, while the stiffness of the polymer should remain unchanged. However, ImAFM revealed that the stiffness of the matrix in the epoxy/boehmite nanocomposite is significantly higher than that of unfilled epoxy. The stiffening effect of boehmite on epoxy also depends on the particle concentration. To understand the mechanism behind the property alteration induced by boehmite nanoparticles, the network architecture was investigated using dynamic mechanical thermal analysis (DMTA). It was revealed that although the glassy-state modulus increases with 15 wt% boehmite nanoparticles, the crosslinking density of the epoxy for this composition is drastically lower.
The prediction of structural parameters and optoelectronic properties of compound semiconductors is very important. However, calculations often neglect chemical variability and structural defects. In chalcopyrite-type semiconductors, one of the major defects is the copper vacancy (VCu). The four cation neighbors of the anion determine its position in the chalcopyrite-type structure, expressed by the Wyckoff position 8d (x, 1/4, 1/8). Intrinsic point defects like VCu and anti-sites may cause variations of the anion position in the middle of the cation tetrahedron, especially in the anion position parameter x. For stoichiometric chalcopyrite-type compounds, a formalism based on the principle of conservation of tetrahedral bonds (CTB) can be applied to calculate the anion position parameter, but it fails in the case of off-stoichiometric chalcopyrites. This case study of chalcopyrite-type CuGaS2 and Mn-substituted CuGaS2 shows that the experimentally determined anion position parameter x deviates from values calculated by the CTB approach. The systematic investigation of off-stoichiometric CuGaS2 and Mn-substituted CuGaS2 demonstrates the effect of copper vacancies on the average radii of the cation sites (Wyckoff positions 4a and 4b) as well as on the anion position parameter x. By applying an elaborated CTB approach implementing copper vacancies, agreement between the experimental and calculated anion position parameter x can be obtained.
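For the stoichiometric case, the CTB relation referred to above is commonly written in the Jaffe–Zunger form (quoted here for orientation; the authors' elaborated, vacancy-aware version is more involved):

```latex
x \;=\; \frac{1}{4} \;+\; \frac{R_{\mathrm{Cu-S}}^{2} - R_{\mathrm{Ga-S}}^{2}}{a^{2}}
```

where R_Cu-S and R_Ga-S are the cation–anion bond lengths and a is the lattice parameter. Replacing the fixed cation radii by vacancy-weighted averages over the 4a and 4b sites is, as described above, what restores agreement for off-stoichiometric compositions.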
The oxygen reduction reaction (ORR) – an important reaction in electrochemical devices such as fuel cells – is characterized by sluggish kinetics and therefore requires catalysis. Industry currently relies on platinum as a catalyst, although it is scarce and expensive, hindering the commercial breakthrough of fuel cells in automotive applications. Platinum-free catalysts based on nitrogen- and metal-doped carbons (NMCs) and fluorinated carbons are promising materials to replace platinum-based catalysts for the ORR. In this work we prepared six metal-organic frameworks (MOFs) by mechanical ball-mill grinding and studied their formation by in-situ powder X-ray diffraction. Furthermore, the samples were carbonized under controlled conditions (900°C, 1 h, N2 atmosphere) to yield carbon materials that were employed in ORR electrocatalysis. The effects of Co-doping and fluorination were systematically studied, and outstanding ORR activity was found for the catalyst prepared from the Co-doped fluorinated ZIF-8.
This paper reports a systematic study into the effect of nitrogen on iron-catalyzed graphitization of biomass. Chitin, chitosan, N-acetylglucosamine, gelatin and glycine were selected to represent nitrogen-rich saccharides and amino-acid/polypeptide biomass precursors. The materials were pyrolyzed with an iron catalyst to produce carbons with a wide range of chemical and structural features such as mesoporosity and nitrogen doping. Many authors have reported the synthesis of nitrogen-doped carbons by pyrolysis, and these have diverse applications. However, this is the first systematic study of how nitrogen affects the pyrolysis of biomass and, importantly, the catalytic graphitization step. Our data demonstrate that nitrogen inhibits graphitization but that some nitrogen survives the catalytic graphitization process and becomes incorporated into various chemical environments in the carbon product.
Slags from the nonferrous metals industry have great potential to be used as feedstocks for the production of alkali-activated materials. Until now, however, only very limited information has been available about the structural characteristics of these materials. In the work presented herein, synthetic slags in the CaO–FeOx–SiO2 system, representing typical compositions of Fe-rich slags, and inorganic polymers (IPs) produced from the synthetic slags by activation with alkali silicate solutions have been studied by means of X-ray absorption near-edge structure (XANES) spectroscopy at the Fe K-edge. The iron in the slags was largely Fe2+, with an average coordination number of approximately 5 for the iron in the amorphous fraction. The increase in average oxidation number after alkali activation was conceptualized as the consequence of slag dissolution and IP precipitation, and employed to calculate the degrees of reaction of the slags. The degree of reaction of the slags increased with increasing amorphous fraction. The iron in the IPs had an average coordination number of approximately 5; thus, IPs produced from the Fe-rich slags studied here are not Fe-analogs of aluminosilicate geopolymers, but differ significantly in terms of structure from the latter.
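The degree-of-reaction calculation from the oxidation-state increase can be illustrated with a simple linear mixing (lever-rule) ansatz. This sketch and its numbers are our illustration, not necessarily the authors' exact model:

```python
def degree_of_reaction(ox_slag, ox_ip, ox_mix):
    """Infer the reacted slag fraction from the average Fe oxidation
    state of the unreacted slag (ox_slag), the fully reacted IP binder
    (ox_ip) and the measured sample (ox_mix), assuming linear mixing."""
    return (ox_mix - ox_slag) / (ox_ip - ox_slag)

# Hypothetical values: slag Fe ~2.05, reacted IP ~2.6, sample at 2.3
print(degree_of_reaction(2.05, 2.6, 2.3))   # ~0.45 -> 45 % reacted
```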
Contamination of the environment with antibiotics is of great concern as it promotes the evolution of antimicrobial resistances. In the case of amoxicillin (AMX) in the aquatic environment, further risk arises from hydrolysis products (HPs), which can cause allergy. To assess these risks, a comprehensive investigation and understanding of the degradation of AMX is necessary. We investigated the hydrolysis rate of AMX in different types of water as well as the influence of temperature and irradiation. The content of the heavy metal ions copper and zinc was found to be crucial for the hydrolysis rate of AMX and the stability of the HPs. Eventually, a new degradation pathway for AMX was elaborated and confirmed by tandem mass spectrometry (LC-MS/MS).
The finite volume method (FVM), like the finite element method (FEM), is a numerical method for determining an approximate solution for partial differential equations. The derivation of the two methods is based on very different considerations, as they have historically evolved from two distinct engineering disciplines, namely solid mechanics and fluid mechanics. This makes FVM difficult to learn for someone familiar with FEM. In this paper we want to show that a slight modification of the FEM procedure leads to an alternative derivation of the FVM. Both numerical methods are starting from the same strong formulation of the problem represented by differential equations, which are only satisfied by their exact solution. For an approximation of the exact solution, the strong formulation must be converted to a so-called weak form. From here on, the two numerical methods differ. By appropriate choice of the trial function and the test function, we can obtain different numerical methods for solving the weak formulation of the problem. While typically in FEM the basis functions of the trial function and test function are identical, in FVM they are chosen differently. In this paper, we show which trial and test function must be chosen to derive the FVM alternatively: The trial function of the FVM is a “shifted” trial function of the FEM, where the nodal points are now located in the middle of an integration interval rather than at the ends. Moreover, the basis functions of the test function are no longer the same as those of the trial function as in the FEM, but are shown to be a constant equal to 1. This is demonstrated by the example of a 1D Poisson equation.
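As a concrete instance of the derivation above, here is a minimal cell-centered finite-volume solver for the 1D Poisson equation -u'' = f on (0,1) with homogeneous Dirichlet boundaries (an illustrative sketch, not the paper's code):

```python
import numpy as np

def fvm_poisson_1d(n, f):
    """Solve -u'' = f on (0,1) with u(0) = u(1) = 0 using n control
    volumes. Nodes sit at cell midpoints (the 'shifted' trial function
    of the paper); the test function is the constant 1 on each cell,
    so integrating the strong form over a cell balances face fluxes."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h                    # cell-centred nodes
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    A[0, 0] = A[-1, -1] = 3.0                       # half-cell to Dirichlet wall
    A /= h
    return x, np.linalg.solve(A, h * f(x))

# Verify against the exact solution u = sin(pi*x) of -u'' = pi^2 sin(pi*x):
x, u = fvm_poisson_1d(50, lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))        # small discretization error
```

The boundary coefficient 3/h arises because the distance from the first cell center to the wall is only h/2, so the wall flux is -(u_0 - 0)/(h/2); this is exactly the flux-balance view that distinguishes the FVM test function from the FEM one.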
BACKGROUND CONTEXT: Targeted delivery of osteoinductive bone morphogenetic proteins (eg, GDF5) in bioresorbable calcium phosphate cement (CPC), potentially suitable for vertebroplasty and kyphoplasty of osteoporotic vertebral fractures, may be required to counteract augmented local bone catabolism and to support complete bone regeneration. The biologically optimized GDF5 mutant BB-1 may represent an attractive drug candidate for this purpose.
PURPOSE: The aim of the current study was to test an injectable, poly (l-lactide-co-glycolide) acid (PLGA) fiber-reinforced, brushite-forming CPC containing low-dose BB-1 in a sheep lumbar osteopenia model.
STUDY DESIGN/SETTING: This is a prospective experimental animal study.
METHODS: Bone defects (diameter 5 mm) were generated in aged, osteopenic female sheep and were filled with fiber-reinforced CPC alone (L4; CPC+fibers) or with CPC containing different dosages of BB-1.
The presentation summarizes the economic research activities at Department S concerning Quality Infrastructure. It introduces the new tool "QI-FoKuS", an annual company survey that seeks to provide data on conformity assessment and accreditation in Germany. The first survey deals with the use and certification of management systems, with a special focus on information security and on the criteria for selecting certification bodies.
Soybean oil accounts for around half of the world's vegetable oil resources and is constantly increasing in importance. Moreover, soybean oil plants have experienced numerous accidents due to the coexistence of soy flour and hexane (used as a solvent) in the extraction process, which creates a hazardous environment. This study aims to find the maximum pressure, the maximum rate of pressure rise, and the minimum ignition energy of soy flour−hexane mixtures through specific experiments, varying the concentration of the fuels in air and the ignition mechanism (chemical igniters or exploding wires). The results show that soy flour alone is hard to ignite, whereas adding hexane even in small amounts increases the hazard and the severity of the explosions considerably. Ultimately, the substitution of hexane with a greener and safer extraction agent should be of utmost focus.
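The maximum rate of pressure rise mentioned above is conventionally compared across test-vessel sizes via the cube-root law. A short sketch for orientation (the numbers are illustrative, not the study's results):

```python
def k_st(dp_dt_max_bar_per_s, volume_m3):
    """Cube-root law: volume-normalized maximum rate of pressure rise
    (bar*m/s), the standard dust-explosion severity index K_St."""
    return dp_dt_max_bar_per_s * volume_m3 ** (1.0 / 3.0)

# e.g. 400 bar/s measured in a standard 20 L sphere:
print(k_st(400, 0.020))   # ~108 bar*m/s
```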
A round-robin study has been carried out to estimate the impact of the human element in small-angle scattering data analysis. Four corrected datasets were provided to participants ready for analysis. All datasets were measured on samples containing spherical scatterers, with two datasets in dilute dispersions and two from powders. Most of the 46 participants correctly identified the number of populations in the dilute dispersions, with half of the population mean entries within 1.5% and half of the population width entries within 40%. Due to the added complexity of the structure factor, far fewer people submitted answers on the powder datasets. For those that did, half of the entries for the means and widths were within 44 and 86%, respectively. This round-robin experiment highlights several causes for the discrepancies, for which solutions are proposed.
How much do we, the small-angle scatterers, influence the results of an investigation? What uncertainty do we add by our human diversity in thoughts and approaches, and is this significant compared to the uncertainty from the instrumental measurement factors?
After our previous Round Robin on data collection, we know that many laboratories can collect reasonably consistent small-angle scattering data on easy samples [1]. To investigate the next, human component, we compiled four existing datasets from globular (roughly spherical) scatterers, each exhibiting a common complication, and asked the participants to apply their usual methods and toolset to the quantification of the results (https://lookingatnothing.com/index.php/archives/3274).
The datasets were accompanied by a modicum of background information to help with their interpretation, similar to what we normally receive from our collaborators. More than 30 participants reported back with volume fractions, mean sizes and size distribution widths of the particle populations in the samples, as well as information on their self-assessed level of experience and years in the field.
While the Round Robin is still underway (until the 25th of April, 2022), the initial results already show significant spread. Some of this is due to the variety in interpretation of the meaning of the requested parameters, as well as simple human errors, both of which are easy to correct for. Nevertheless, even after correcting for these differences in understanding, a significant spread remains. This highlights an urgent challenge to our community: how can we better help ourselves and our colleagues obtain more reliable results, and how could we take the human factor out of the equation, so to speak?
In this talk, we will introduce the four datasets, their origins and challenges. Hot off the press, we will summarize the anonymized, quantified results of the Data Analysis Round Robin. (Incidentally, we will also see if a correlation exists between experience and proximity of the result to the median.) Lastly, potential avenues for improving our field will be offered based on the findings, ranging from low-effort yet somewhat controversial improvements to high-effort foundational considerations.
This is a remote presentation I gave at the 2022 Small-angle Scattering conference in Campinas, Brazil. The video has been obtained from the conference organisers with their explicit permission for use on YouTube. I've tried to spruce up the audio from the remote recording as best I could.
The conference abstract for this talk was:
"How much do we, the small-angle scatterers, influence the results of an investigation? What uncertainty do we add by our human diversity in thoughts and approaches, and is this significant compared to the uncertainty from the instrumental measurement factors?
After our previous Round Robin on data collection, we know that many laboratories can collect reasonably consistent small-angle scattering data on easy samples[1]. To investigate the next, human component, we compiled four existing datasets from globular (roughly spherical) scatterers, each exhibiting a common complication, and asked the participants to apply their usual methods and toolset to the quantification of the results (https://lookingatnothing.com/index.ph....
The datasets were accompanied by a modicum of background information to help with their interpretation, similar to what we normally receive from our collaborators. More than 30 participants reported back with volume fractions, mean sizes and size distribution widths of the particle populations in the samples, as well as information on their self-assessed level of experience and years in the field.
While the Round Robin is still underway (until the 25th of April, 2022), the initial results already show significant spread. Some of this is due to the variety in interpretation of the meaning of the requested parameters, as well as simple human errors, both of which are easy to correct for. Nevertheless, even after correcting for these differences in understanding, a significant spread remains. This highlights an urgent challenge to our community: how can we better help ourselves and our colleagues obtain more reliable results, and how could we take the human factor out of the equation, so to speak?
In this talk, we will introduce the four datasets, their origins and challenges. Hot off the press, we will summarize the anonymized, quantified results of the Data Analysis Round Robin. (Incidentally, we will also see if a correlation exists between experience and proximity of the result to the median.) Lastly, potential avenues for improving our field will be offered based on the findings, ranging from low-effort yet somewhat controversial improvements to high-effort foundational considerations."
The increasing amount and complexity of clinical data require an appropriate way of storing and analyzing those data. Traditional approaches use a tabular structure (relational databases) for storing data and thereby complicate storing and retrieving interlinked data from the clinical domain. Graph databases provide a great solution for this by storing data in a graph as nodes (vertices) that are connected by edges (links). The underlying graph structure can be used for the subsequent data analysis (graph learning). Graph learning consists of two parts: graph representation learning and graph analytics. Graph representation learning aims to reduce high-dimensional input graphs to low-dimensional representations. Then, graph analytics uses the obtained representations for analytical tasks like visualization, classification, link prediction and clustering which can be used to solve domain-specific problems. In this survey, we review current state-of-the-art graph database management systems, graph learning algorithms and a variety of graph applications in the clinical domain. Furthermore, we provide a comprehensive use case for a clearer understanding of complex graph learning algorithms.
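As a rough illustration of the pipeline described in this survey, the sketch below runs representation learning followed by a clustering analytics task on a toy graph. It uses networkx and scikit-learn on stand-in data, not the clinical data or tooling reviewed in the paper.

# A minimal sketch of the graph-learning pipeline described above
# (representation learning followed by an analytics task), on a toy graph.
import networkx as nx
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

G = nx.karate_club_graph()                      # stand-in for a clinical graph
A = nx.to_numpy_array(G)                        # high-dimensional input: adjacency

# Graph representation learning: reduce each node to a low-dimensional vector.
embedding = TruncatedSVD(n_components=4, random_state=0).fit_transform(A)

# Graph analytics: cluster nodes in the learned representation.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(dict(zip(list(G.nodes)[:5], labels[:5])))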
The importance of plasmonic heating for the plasmon-driven photodimerization of 4-nitrothiophenol
(2019)
Metal nanoparticles form potent nanoreactors, driven by the optical generation of energetic electrons and nanoscale heat. The relative influence of these two factors on nanoscale chemistry is strongly debated. This article discusses the temperature dependence of the dimerization of 4-nitrothiophenol (4-NTP) into 4,4′-dimercaptoazobenzene (DMAB) adsorbed on gold nanoflowers, followed by Surface-Enhanced Raman Scattering (SERS). Raman thermometry shows a significant optical heating of the particles. The ratio of the Stokes and anti-Stokes Raman signals moreover demonstrates that the molecular temperature during the reaction rises beyond the average crystal lattice temperature of the plasmonic particles. The product bands have an even higher temperature than the reactant bands, which suggests that the reaction proceeds preferentially at thermal hot spots. In addition, kinetic measurements of the reaction during external heating of the reaction environment yield a considerable rise of the reaction rate with temperature. Despite these significant heating effects, a comparison of SERS spectra recorded after heating the sample with an external heater to spectra recorded after prolonged illumination shows that the reaction is strictly photo-driven. While the temperature increase is comparable in both cases, the dimerization occurs only in the presence of light. Intensity-dependent measurements at fixed temperatures confirm this finding.
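In its simplest textbook form, the Stokes/anti-Stokes evaluation mentioned above follows a Boltzmann relation. The sketch below extracts a temperature from a hypothetical band ratio; it deliberately ignores SERS-specific enhancement asymmetries between the two bands, which a full evaluation such as the authors' would need to correct for.

import numpy as np

# Textbook anti-Stokes/Stokes Raman thermometry; SERS enhancement asymmetry
# is ignored here, so this is a rough sketch, not the paper's evaluation.
h, c, kB = 6.626e-34, 2.998e10, 1.381e-23   # SI units, c in cm/s for cm^-1 input

def molecular_temperature(ratio_as_s, nu_vib_cm, nu_laser_cm):
    """Solve I_aS/I_S = ((nu_L+nu_v)/(nu_L-nu_v))**4 * exp(-h c nu_v / (kB T)) for T."""
    prefactor = ((nu_laser_cm + nu_vib_cm) / (nu_laser_cm - nu_vib_cm)) ** 4
    return h * c * nu_vib_cm / (kB * np.log(prefactor / ratio_as_s))

# Hypothetical numbers: a 1340 cm^-1 band probed with a 633 nm laser.
print(molecular_temperature(ratio_as_s=0.02, nu_vib_cm=1340.0, nu_laser_cm=1.0e7 / 633.0))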
Background. Post-menopausal osteoporosis is a common health problem worldwide, most commonly caused by estrogen deficiency. Most of the information regarding the skeletal effects of this disease relates to trabecular bone, while cortical bone is less studied. The purpose of this study was to evaluate the influence of estrogen deficiency on the structure and mechanical properties of cortical bone.
Methods. Eight ovariectomized (OVH) and eight intact (control) Sprague Dawley rats were used. Structural features of femoral cortical bone were studied by light microscopy, scanning electron microscopy and synchrotron-based micro-computed tomography, and their mechanical properties were determined by nano-indentation.
Results. Cortical bone of both study groups contains two distinct regions: organized circumferential lamellae and disordered bone with highly mineralized cartilaginous islands. Lacunar volume was lower in the OVH group both in the lamellar and disorganized regions (182 ± 75 µm³ vs. 232 ± 106 µm³, P < 0.001 and 195 ± 86 µm³ vs. 247 ± 106 µm³, P < 0.001, respectively). Lacunar density was also lower in both bone regions of the OVH group (40 ± 18 × 10³ lacunae/mm³ vs. 47 ± 9 × 10³ lacunae/mm³ in the lamellar region, P = 0.003, and 63 ± 18 × 10³ lacunae/mm³ vs. 75 ± 13 × 10³ lacunae/mm³ in the disorganized region, P < 0.001). Vascular canal volume was lower in the disorganized region of the bone in the OVH group compared to the same region in the control group (P < 0.001). Indentation moduli did not differ between the study groups in either bone region.
Discussion. Changes to cortical bone associated with estrogen deficiency in rats require high-resolution methods for detection. Caution is required in applying these results to humans due to major structural differences between human and rat bone.
X-ray photoelectron spectroscopy (XPS) is widely used for characterising the chemistry of graphene-related two-dimensional materials (GR2M); however, careful preparation of the sample for analysis is important for obtaining representative quantifications. We report an investigation by three laboratories showing that the preparation method for oxygen-functionalised graphene nanoplatelet (GNP) powders has a significant effect on the homogeneous-equivalent elemental composition measured in XPS. We show that pressing GNP powders onto adhesive tapes, into recesses, or into solid pellets results in inconsistencies in the XPS quantification. The measured oxygen-to-carbon atomic ratio from GNP pellets depends upon the die pressure used to form them and the morphology of the GNPs themselves. We recommend that powder samples of GR2Ms are pelletised prior to XPS analysis to improve the repeatability and reproducibility of measurements.
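To first order, the "homogeneous-equivalent elemental composition" referred to above is computed from peak areas normalised by relative sensitivity factors (RSF); a minimal sketch with illustrative numbers, not data from this study:

# Standard first-order XPS quantification: atomic amounts are proportional
# to peak area divided by the relative sensitivity factor. Values below are
# hypothetical and instrument-specific.
peak_area = {"C 1s": 12000.0, "O 1s": 9000.0}   # integrated counts, hypothetical
rsf = {"C 1s": 1.00, "O 1s": 2.93}              # example RSF set, C 1s = 1

n_c = peak_area["C 1s"] / rsf["C 1s"]
n_o = peak_area["O 1s"] / rsf["O 1s"]
print(f"O/C atomic ratio = {n_o / n_c:.2f}")    # ~0.26 for these numbers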
The application and benefits of Semantic Web Technologies (SWT) for managing, sharing, and (re-)using research data have been demonstrated in implementations in the field of Materials Science and Engineering (MSE). However, a compilation and classification are needed to fully recognize the scattered published works with their unique added values. Here, the primary use of SWT at the interface with MSE is identified using specifically created categories. This overview highlights promising opportunities for the application of SWT to MSE, such as enhancing the quality of experimental processes, enriching data with contextual information in knowledge graphs, or using ontologies to perform specific queries on semantically structured data. While interdisciplinary work between the two fields is still in its early stages, a great need is identified to facilitate access for non-experts and to develop and provide user-friendly tools and workflows. The full potential of SWT can best be achieved in the long term through the broad acceptance and active participation of the MSE community. In perspective, these technological solutions will advance the field of MSE by making data FAIR. Data-driven approaches will benefit from these data structures and their connections to catalyze knowledge generation in MSE.
The LAUS: First applications of a new system for ultrasonic imaging of very thick concrete structures
(2018)
The LAUS (Large Aperture Ultrasonic System) has been developed to image very thick concrete structures, which are not accessible for commercial systems. The device and the corresponding software are the result of joint research by BAM, an ultrasonic instrument manufacturer and the University of Kassel, Germany. It consists of 12 separate arrays of 32 point-contact shear wave transducers each, which can be deployed in flexible configurations. Each array is combined with a battery and transmitter, receiver and wireless communication electronics.
Three case histories are presented. First, the system was deployed on a 5-m-thick, heavily reinforced foundation slab. The reflection from the slab’s bottom was imaged clearly. In addition, a multiple reflection was registered, giving hope that even thicker elements might be imaged by the instrument. Second, the LAUS was used to investigate a massive bridge girder where a heavy rainstorm during concreting had led to imperfections that became visible after the formwork was removed. The LAUS could image tendon ducts at 1.8 m depth and the backwall closely behind them. Some limited areas showed blurred reflections and were checked by drill holes; these areas were affected by diffuse damage, which could be repaired by injections. Third, a large retaining wall was checked for thickness.
Meanwhile, the LAUS has been used in underground waste deposits (nuclear and other) for quality assurance of sealing plugs. A confirmed penetration depth of about 7 m has been reached.
The LAUS: First Applications of a New System for Ultrasonic Imaging of Very Thick Concrete Structures
(2019)
The LAUS (Large Aperture Ultrasonic System) has been developed to image very thick concrete structures, which are not accessible for commercial systems. The device and the corresponding software are the result of joint research by BAM, an ultrasonic instrument manufacturer and the University of Kassel, Germany. It consists of 12 separate arrays of 32 point-contact shear wave transducers each, which can be deployed in flexible configurations. Each array is combined with a battery and transmitter, receiver and wireless communication electronics.
Three case histories are presented. First, the system was deployed on a 5-m-thick, heavily reinforced foundation slab. The reflection from the slab’s bottom was imaged clearly. In addition, a multiple reflection was registered, giving hope that even thicker elements might be imaged by the instrument. Second, the LAUS was used to investigate a massive bridge girder where a heavy rainstorm during concreting had led to imperfections that became visible after the formwork was removed. The LAUS could image tendon ducts at 1.8 m depth and the backwall closely behind them. Some limited areas showed blurred reflections and were checked by drill holes; these areas were affected by diffuse damage, which could be repaired by injections. Third, a large retaining wall was checked for thickness.
Meanwhile, the LAUS has been used in underground waste deposits (nuclear and other) for quality assurance of sealing plugs. A confirmed penetration depth of about 7 m has been reached.
Through connecting genomic and metabolic information, metaproteomics is an essential approach for understanding how microbiomes function in space and time. The international metaproteomics community is delighted to announce the launch of the Metaproteomics Initiative (www.metaproteomics.org), the goal of which is to promote the dissemination of metaproteomics fundamentals, advancements, and applications through collaborative networking in microbiome research. The Initiative aims to be the central information hub and open meeting place where newcomers and experts interact to communicate, standardize, and accelerate experimental and bioinformatic methodologies in this field. We invite the entire microbiome community to join and discuss potential synergies at the interfaces with other disciplines, and to collectively promote innovative approaches to gain deeper insights into microbiome functions and dynamics.
The Meticulous Approach: Fully traceable X-ray scattering data via a comprehensive lab methodology
(2021)
To find out whether experimental findings are real, you need to be able to repeat them. For a long time, however, papers and datasets have not necessarily included sufficient detail to accurately repeat experiments, contributing to a reproducibility crisis. It is here that the MOUSE project (Methodology Optimization for Ultrafine Structure Exploration) tries to effect change, at least for small- and wide-angle X-ray scattering (SAXS/WAXS).
In the MOUSE project, we have combined a) a comprehensive laboratory workflow with b) a heavily modified, highly automated Xenocs Xeuss 2.0 instrument. This combination allows us to collect fully traceable scattering data with a well-documented data flow (akin to what is found at the more automated beamlines). With two full-time researchers, the lab collects and interprets thousands of datasets on hundreds of samples for dozens of projects per year, supporting many users along the entire process, from sample selection and preparation to the analysis of the resulting data.
While these numbers cannot hold a candle to those achieved by our hardworking compatriots at the synchrotron beamlines, the laboratory approach does allow us to continually modify and fine-tune the overall methodology. Over the last three years, we have thus incorporated, for example, FAIR principles, traceability, automated processing and data curation strategies, as well as a host of good scattering practices, into the MOUSE system. We have concomitantly expanded our purview as specialists to include an increased responsibility for the entire scattering aspect of the resulting publications, to ensure full exploitation of the data quality whilst avoiding common pitfalls.
This talk will discuss the MOUSE project [1] as implemented to date, and will introduce foreseeable upgrades and changes. These upgrades include better pre-experiment sample scattering predictions to filter projects on the basis of their suitability, exploitation of the measurement database for detecting long-term changes and automated flagging of datasets, and enhancing MC fitting with sample scattering simulations for better matching of odd-shaped scatterers.
Herein, we provide a "systems architecture"-like overview and detailed discussions of the methodological and instrumental components that, together, comprise the "MOUSE" project (Methodology Optimization for Ultrafine Structure Exploration). The MOUSE project provides scattering information on a wide variety of samples, with traceable dimensions for both the scattering vector (q) and the absolute scattering cross-section (I). The measurable scattering vector range of 0.012 ≤ q (nm⁻¹) ≤ 92 allows information across a hierarchy of structures with dimensions ranging from ca. 0.1 to 400 nm. In addition to the details that comprise the MOUSE project, such as its organisational and traceability aspects, several representative examples are provided to demonstrate its flexibility. These include measurements on alumina membranes, the tobacco mosaic virus, and dual-source information that overcomes fluorescence limitations on ZIF-8 and iron-oxide-containing carbon catalyst materials.
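The quoted q-range maps onto the stated real-space dimensions via the common rule of thumb d ≈ 2π/q; a quick, approximate check with the numbers from the abstract:

import numpy as np

# Rule-of-thumb mapping between the accessible q-range quoted above and
# real-space dimensions, via d = 2*pi/q (a common, approximate convention).
q_min, q_max = 0.012, 92.0          # nm^-1, from the abstract
print(f"d_max ~ {2 * np.pi / q_min:.0f} nm, d_min ~ {2 * np.pi / q_max:.2f} nm")
# -> roughly 0.07 to 520 nm, of the same order as the ca. 0.1-400 nm quoted above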
The European Commission's recommendation on the definition of nanomaterial [2011/696/EU] is broadly applicable across different regulatory sectors and requires the quantitative size determination of constituent particles in samples down to 1 nm. A material is a nanomaterial if 50 % or more of the particles are in the size range 1-100 nm. The implementation of the definition in a regulatory context challenges measurement methods to reliably identify nanomaterials and ideally also non-nanomaterials as substance or product ingredient as well as in various matrices.
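As a quick illustration of this counting rule (with synthetic particle sizes, not project data), the classification and the number-based median size can be computed as follows:

import numpy as np

# Sketch of the counting rule in the EC definition: a material is classified
# as a nanomaterial if at least 50 % of its constituent particles (by number)
# fall within 1-100 nm. The size array below is synthetic, for illustration.
rng = np.random.default_rng(0)
diameters_nm = rng.lognormal(mean=np.log(60.0), sigma=0.5, size=100_000)

fraction_nano = np.mean((diameters_nm >= 1.0) & (diameters_nm <= 100.0))
x50 = np.median(diameters_nm)  # the median size used in proxy approaches
print(f"fraction in 1-100 nm: {fraction_nano:.1%}, x50 = {x50:.0f} nm -> "
      f"{'nanomaterial' if fraction_nano >= 0.5 else 'not a nanomaterial'}")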
The EU FP7 NanoDefine project [www.nanodefine.eu] addressed these challenges by developing a robust, readily implementable and cost-effective measurement strategy to decide for the widest possible range of materials whether it is a nanomaterial or not. It is based on existing and emerging particle measurement techniques evaluated against harmonized, material-dependent performance criteria and by intra- and inter-lab comparisons. Procedures were established to reliably measure the size of particles within 1-100 nm, and beyond, taking into account different shapes, coatings and chemical compositions in industrial materials and consumer products. Case studies prove their applicability for various sectors, including food, pigments and cosmetics.
A main outcome is the establishment of an integrated tiered approach including rapid screening (tier 1) and confirmatory methods (tier 2), a decision support flow scheme and a user manual to guide end-users, such as manufacturers, in selecting appropriate methods. Another main product is the “NanoDefiner” e-Tool which implements the flow scheme in a user-friendly software and guides the user in a semi-automated way through the entire decision procedure. It allows a cost-effective selection of appropriate methods for material classification according to the EC's nanomaterial definition and provides a comprehensive report with extensive explanation of all decision steps to arrive at a transparent identification of nanomaterials as well as non-nanomaterials for regulatory purposes.
The project has received funding from the European Union’s Seventh Programme for research, technological development and demonstration under grant agreement No 604347.
This document is a collection of three JRC Technical Reports that together form the “NanoDefine Methods Manual”, which has been developed within the NanoDefine project ‘Development of an integrated approach based on validated and standardized methods to support the implementation of the EC recommendation for a definition of nanomaterial’, funded by the European Union’s 7th Framework Programme, under grant agreement 604347. The overall goal of the NanoDefine project was to support the implementation of the European Commission Recommendation on the definition of nanomaterial (2011/696/EU). The project has developed an integrated empirical approach, which allows a material to be identified as a nanomaterial or not according to the EC Recommendation. The NanoDefine Methods Manual consists of three parts: Part 1: The NanoDefiner Framework and Tools, which covers the NanoDefiner framework, general information on measurement methods and performance criteria, and tools developed by NanoDefine such as a materials categorisation system, a decision support flow scheme and an e-tool. Part 2: Evaluation of Methods, which discusses the outcome of the evaluation of the nanomaterials characterisation methods for measuring size. Part 3: Standard Operating Procedures (SOPs), which presents the 23 Standard Operating Procedures developed within the NanoDefine project. In this combined document, these three parts are included as stand-alone reports, each having its own abstract, table of contents, page, table and figure numbering, and references.
The present series of reports, the NanoDefine Methods Manual, has been developed within the NanoDefine project 'Development of an integrated approach based on validated and standardized methods to support the implementation of the EC recommendation for a definition of nanomaterial', funded by the European Union's 7th Framework Programme, under grant agreement 604347.
In 2011 the European Commission (EC) published a recommendation for a definition of the term 'nanomaterial', the EC NM Definition, as a reference to determine whether an unknown material can be considered a 'nanomaterial' for regulatory purposes. One challenge is the development of methods that reliably identify, characterize and quantify nanomaterials (NM) both as substances and in various products and matrices.
The overall goal of NanoDefine was to support the implementation of the EC NM Definition. It can also support the implementation of any NM definition based on particle size. The project has developed an integrated approach which allows any material to be identified as a nanomaterial or not according to the EC NM Definition. NanoDefine explicitly addressed the governance challenges associated with the implementation of legislation concerning nanomaterials by:
- addressing the issues on availability of suitable measuring techniques, reference materials, validated methods, acceptable to all stakeholders (authorities, policy makers, commercial firms),
- developing an integrated and interdisciplinary approach and a close international co-operation and networking with academia, commercial firms and standardization bodies.
Thus, the NanoDefine Methods Manual provides guidance on the practical implementation of the EC NM Definition throughout the nanomaterial characterization process, and on the characterization techniques employed as well as their application range and limits. It assists the user in choosing the most appropriate measurement method(s) to identify any substance or mixture for a specific purpose, according to the EC NM Definition of a nanomaterial. The NanoDefine project also explored how to assess a material against the criteria of the definition through proxy solutions, i.e. by applying measurement techniques that indirectly determine the x50. Those findings were developed through empirically based scientific work and are included in Part 1 of this Manual. As they go beyond the text of the EC NM Definition, they may be used as a practical approach to indicate whether a material is a nanomaterial or not, keeping in mind that they should not be taken as a recommendation for the implementation of the EC NM Definition in a regulatory context.
The NanoDefine Methods Manual consists of the following three parts:
Part 1: The NanoDefiner Framework and Tools
Part 2: Evaluation of Methods
Part 3: Standard Operating Procedures (SOPs)
Part 1 covers the NanoDefiner framework, general information on measurement methods and performance criteria and tools developed by NanoDefine such as a materials categorisation system, a decision support flow scheme and an e-tool.
Part 2 discusses the outcome of the evaluation of the nanomaterials characterisation methods for measuring size.
Part 3 presents the 23 Standard Operating Procedures developed within the NanoDefine project.
The current document is part 1.
The present series of reports, the NanoDefine Methods Manual, has been developed within the NanoDefine project 'Development of an integrated approach based on validated and standardized methods to support the implementation of the EC recommendation for a definition of nanomaterial', funded by the European Union's 7th Framework Programme, under grant agreement 604347.
In 2011 the European Commission (EC) published a recommendation for a definition of the term 'nanomaterial', the EC NM Definition, as a reference to determine whether an unknown material can be considered a 'nanomaterial' for regulatory purposes. One challenge is the development of methods that reliably identify, characterize and quantify nanomaterials (NM) both as substances and in various products and matrices.
The overall goal of NanoDefine was to support the implementation of the EC NM Definition. It can also support the implementation of any NM definition based on particle size. The project has developed an integrated approach which allows any material to be identified as a nanomaterial or not according to the EC NM Definition. NanoDefine explicitly addressed the governance challenges associated with the implementation of legislation concerning nanomaterials by:
- addressing the issues on availability of suitable measuring techniques, reference materials, validated methods, acceptable to all stakeholders (authorities, policy makers, commercial firms),
- developing an integrated and interdisciplinary approach and a close international co-operation and networking with academia, commercial firms and standardization bodies.
Thus, the NanoDefine Methods Manual provides guidance on the practical implementation of the EC NM Definition throughout the nanomaterial characterization process, and on the characterization techniques employed as well as their application range and limits. It assists the user in choosing the most appropriate measurement method(s) to identify any substance or mixture for a specific purpose, according to the EC NM Definition of a nanomaterial. The NanoDefine project also explored how to assess a material against the criteria of the definition through proxy solutions, i.e. by applying measurement techniques that indirectly determine the x50. Those findings were developed through empirically based scientific work and are included in Part 1 of this Manual. As they go beyond the text of the EC NM Definition, they may be used as a practical approach to indicate whether a material is a nanomaterial or not, keeping in mind that they should not be taken as a recommendation for the implementation of the EC NM Definition in a regulatory context.
The NanoDefine Methods Manual consists of the following three parts:
Part 1: The NanoDefiner Framework and Tools
Part 2: Evaluation of Methods
Part 3: Standard Operating Procedures (SOPs)
Part 1 covers the NanoDefiner framework, general information on measurement methods and performance criteria and tools developed by NanoDefine such as a materials categorisation system, a decision support flow scheme and an e-tool.
Part 2 discusses the outcome of the evaluation of the nanomaterials characterisation methods for measuring size.
Part 3 presents the 23 Standard Operating Procedures developed within the NanoDefine project.
The current document is part 2.
The present series of reports, the NanoDefine Methods Manual, has been developed within the NanoDefine project 'Development of an integrated approach based on validated and standardized methods to support the implementation of the EC recommendation for a definition of nanomaterial', funded by the European Union's 7th Framework Programme, under grant agreement 604347.
In 2011 the European Commission (EC) published the recommendation (2011/696/EU) for a definition of the term 'nanomaterial', the EC NM Definition, as a reference to determine whether an unknown material can be considered a 'nanomaterial' for regulatory purposes. One challenge is the development of methods that reliably identify, characterize and quantify nanomaterials (NM) both as substances and in various products and matrices.
The overall goal of NanoDefine was to support the implementation of the EC NM Definition. It can also support the implementation of any NM definition based on particle size. The project has developed an integrated approach which allows any material to be identified as a nanomaterial or not according to the EC NM Definition. NanoDefine explicitly addressed the governance challenges associated with the implementation of legislation concerning nanomaterials by:
- addressing the issues on availability of suitable measuring techniques, reference materials, validated methods, acceptable to all stakeholders (authorities, policy makers, commercial firms),
- developing an integrated and interdisciplinary approach and a close international co-operation and networking with academia, commercial firms and standardization bodies.
Thus, the NanoDefine Methods Manual provides guidance on the practical implementation of the EC NM Definition throughout the nanomaterial characterization process, and on the characterization techniques employed as well as their application range and limits. It assists the user in choosing the most appropriate measurement method(s) to identify any substance or mixture for a specific purpose, according to the EC NM Definition of a nanomaterial. The NanoDefine project also explored how to assess a material against the criteria of the definition through proxy solutions, i.e. by applying measurement techniques that indirectly determine the D50. Those findings were developed through empirically based scientific work and are included in Part 1 of this Manual. As they go beyond the text of the EC NM Definition, they may be used as a practical approach to indicate whether a material is a nanomaterial or not, keeping in mind that they should not be taken as a recommendation for the implementation of the EC NM Definition in a regulatory context.
The NanoDefine Methods Manual consists of the following three parts:
Part 1: The NanoDefiner Framework and Tools
Part 2: Evaluation of Methods
Part 3: Standard Operating Procedures (SOPs)
Part 1 covers the NanoDefiner framework, general information on measurement methods and performance criteria and tools developed by NanoDefine such as a materials categorisation system, a decision support flow scheme and an e-tool.
Part 2 discusses the outcome of the evaluation of the nanomaterials characterisation methods for measuring size.
Part 3 presents the 23 Standard Operating Procedures developed within the NanoDefine project. The current document is part 3.
The neutron imaging instrument CONRAD was operated as part of the user program of the research reactor BER‐II at Helmholtz‐Zentrum Berlin (HZB) from 2005 to 2020. The instrument was designed to use the neutron flux from the cold source of the reactor, transported by a curved neutron guide. The pure cold neutron spectrum provided a great advantage for the use of different neutron optical components such as focusing lenses and guides, solid‐state polarizers, monochromators and phase gratings. The flexible setup of the instrument allowed the implementation of new methods including wavelength‐selective, dark‐field and phase‐contrast imaging as well as imaging with polarized neutrons. In summary, these developments helped to attract a large number of scientists and industrial customers, who were introduced to neutron imaging and subsequently contributed to the expansion of the neutron imaging community.
NMR is becoming increasingly popular for the investigation of building materials, as it is a non-invasive technology that neither requires any sample preparation nor causes damage to the material. Depending on the specific application, it can offer insights into properties like porosity, spatial saturation degree and pore structure. Moreover, it enables the determination of moisture transport properties and of the (re-)distribution of internal moisture into different reservoirs or chemical phases upon damage and curing. However, most investigations to date have been carried out using devices originally designed either for geophysical applications or for the analysis of rather homogeneous, small-scale (< 10 mL) samples. This paper describes the capabilities of an NMR tomograph that has been specifically optimized for the investigation of larger, heterogeneous building material samples (diameters of up to 72 mm, lengths of up to 700 mm), with high flexibility due to interchangeable coils allowing for a high SNR and short echo times (50–80 ms).
That human factors (HF) affect the reliability of NDT is not a novelty. Still, when it comes to reliability assessments, the role of people is often neglected. Reliability is typically expressed in terms of POD curves, and the effects of human and organisational factors on the inspection are typically tackled by regulations, procedures and the qualification and training of the inspection personnel. However, studies have shown that even the most experienced personnel can make mistakes and that the reliability in the field is never as high as the reliability measured in POD experiments. Generally, HF are considered too unpredictable and too uncontrollable to model. If that is the case, then what can we do? The engineering perspective on this problem has often been to find ways to automate inspections and, recently, to make use of artificial intelligence tools to decrease the direct effect of people on the inspection results and improve the overall efficiency and reliability. However, despite automation and AI, people remain the key players, though their tasks change. The contemporary approach to HF is not to engineer them out of the system but to design human-machine systems that make the best use of both. In this talk, ways of tackling HF in the design of systems and processes will be presented.
The physico-chemical basis of DNA radiosensitization: Implications for cancer radiation therapy
(2018)
High-energy radiation is used in combination with radiosensitizing therapeutics to treat cancer. The most common radiosensitizers are halogenated nucleosides and cisplatin derivatives, and recently also metal nanoparticles have been suggested as potential radiosensitizing agents. The radiosensitizing action of these compounds can at least partly be ascribed to an enhanced reactivity towards secondary low-energy electrons generated along the radiation track of the high-energy primary radiation, or to an additional emission of secondary reactive electrons close to the tumor tissue. This is referred to as physico-chemical radiosensitization. In this Concept article we present current experimental methods used to study fundamental processes of physico-chemical radiosensitization and discuss the most relevant classes of radiosensitizers. Open questions in the current discussions are identified and future directions outlined, which can lead to optimized treatment protocols or even novel therapeutic concepts.
This paper discusses the feasibility of a novel strategy based on the combination of bioprinting nano-doping technology and laser ablation-inductively coupled plasma time-of-flight mass spectrometry analysis for the preparation and characterization of gelatin-based multi-element calibration standards suitable for quantitative imaging. To achieve this, lanthanide up-conversion nanoparticles were added to a gelatin matrix to produce the bioprinted calibration standards. The features of this bioprinting approach were compared with manual cryosectioning standard preparation, in terms of throughput, between-batch repeatability and elemental signal homogeneity at 5 μm spatial resolution. By using bioprinting, the between-batch variability for three independent standards of the same concentration of ⁸⁹Y (range 0–600 mg/kg) was reduced to 5%, compared to up to 27% for cryosectioning. On this basis, the relative standard deviation (RSD) obtained between three independent calibration slopes measured within 1 day was also reduced from 16% (using cryosectioning) to 5% (using bioprinting), supporting the use of a single standard preparation replicate for each of the concentrations to achieve good calibration performance using bioprinting. This helped reduce the analysis time by approximately 3-fold. With cryosectioning each standard was prepared and sectioned individually, whereas using bioprinting it was possible to have up to six different standards printed simultaneously, reducing the preparation time from approximately 2 h to under 20 min (approximately 6-fold). The bioprinted calibration standards were found stable for a period of 2 months when stored at ambient temperature and in the dark.
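For clarity, the between-slope RSD quoted above is the relative standard deviation of independent calibration slopes; a minimal sketch with invented slope values:

import numpy as np

# Relative standard deviation of independent calibration slopes, as quoted
# above. The slope values are invented for illustration.
slopes = np.array([1520.0, 1480.0, 1395.0])     # counts per (mg/kg), hypothetical
rsd = 100.0 * np.std(slopes, ddof=1) / np.mean(slopes)
print(f"RSD = {rsd:.0f} %")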
Over the last few years, the Federal Institute for Materials Research and Testing (BAM, Berlin) together with the Centre for the Study of Manuscript Cultures (CSMC, University of Hamburg) have initiated a systematic material investigation of black inks produced from Late Antiquity to the Middle Ages (ca. fourth century CE to fourteenth/fifteenth centuries CE), aimed primarily at extending and complementing findings from previous sporadic studies. Part of this systematic investigation has focused on Egyptian Coptic manuscripts, and the present preliminary study is one of its outputs. It centres on a corpus of 45 Coptic manuscripts, 43 papyri and 2 ostraca, preserved in the Palau-Ribes and Roca-Puig collections in Barcelona. The manuscripts come from the Monastery of Apa Apollo at Bawit, one of the largest monastic settlements in Egypt between Late Antiquity and the Early Islamic Period (sixth to eighth centuries CE). The composition of their black inks was investigated in situ using near-infrared reflectography (NIRR) and X-ray fluorescence (XRF). The analyses determined that the manuscripts were written using different types of ink: pure carbon ink; carbon ink containing iron; mixed inks containing carbon, polyphenols and metallic elements; and iron-gall ink. The variety of inks used for the documentary texts seems to reflect the articulate administrative system of the monastery of Bawit. This study reveals that, in contrast to the documents, written mostly with carbon-based inks, literary biblical texts were written with iron-gall ink. The frequent reuse of papyrus for certain categories of documents may suggest that carbon-based inks were used for ephemeral manuscripts, since they were easy to erase by abrasion.
Driven by recent technological advances and the need for improved viral diagnostic applications, mass spectrometry-based proteomics comes into play for detecting viral pathogens accurately and efficiently. However, the lack of specific algorithms and software tools presents a major bottleneck for analyzing data from host-virus samples. For example, accurate species- and strain-level classification of a priori unidentified organisms remains a very challenging task in the setting of large search databases. Another prominent issue is that many existing solutions suffer from the protein inference issue, aggravated because many homologous proteins are present across multiple species. One of the contributing factors is that existing bioinformatic algorithms have been developed mainly for single-species proteomics applications for model organisms or human samples. In addition, a statistically sound framework was lacking to accurately assign peptide identifications to viral taxa. In this presentation, an overview is given on current bioinformatics developments that aim to overcome the above-mentioned issues using algorithmic and statistical methods. The presented methods and software tools aim to provide tailored solutions for both discovery-driven and targeted proteomics for viral diagnostics and taxonomic sample profiling. Furthermore, an outlook is provided on how the bioinformatic developments might serve as a generic toolbox, which can be transferred to other research questions, such as metaproteomics for profiling microbiomes and identifying bacterial pathogens.
The Protocol Gap
(2021)
Although peer review is considered one of the main pillars of modern science, experimental methods and protocols do not seem to be a rigorous subject of this process in many papers. Commercial equipment, test kits, labeling kits, previously published concepts, and standard protocols are often considered not worth a detailed description or validation. Even more disturbing is the extremely biased citation behavior in this context, which sometimes leads to surrogate citations to avoid citing low-impact journals or preprints, or to indicate traditional practices. This article describes some of these surprising habits and suggests some measures to avoid the most unpleasant effects, which in the long term may undermine the credibility of science as a whole.
Amyloid fibrils are polymers formed by proteins under specific conditions, and in many cases they are related to pathogenesis, as in Parkinson’s and Alzheimer’s diseases. Their hallmark is the presence of a β-sheet structure. High-resolution structural data on these systems, as well as information gathered from multiple complementary analytical techniques, is needed from both a fundamental and a pharmaceutical perspective. Here, a previously reported de novo designed, pH-switchable coiled-coil-based peptide that undergoes structural transitions resulting in fibril formation under physiological conditions has been exhaustively characterized by transmission electron microscopy (TEM), cryo-TEM, atomic force microscopy (AFM), wide-angle X-ray scattering (WAXS) and solid-state NMR (ssNMR). Overall, a unique two-dimensional carpet-like assembly composed of large coexisting ribbon-like, tubular and funnel-like structures with a clearly resolved protofilament substructure is observed. Whereas the electron microscopy and scattering data point somewhat more towards a hairpin model of the β-fibrils, ssNMR data obtained from samples with selectively labelled peptides are in agreement with both hairpin structures and linear arrangements.
The quest for the mixed inks
(2018)
In this article, we would like to share our observations concerning the inks produced by intentionally mixing soot or charcoal with tannin extracts or iron-gall ink. Aside from Zerdoun’s mention in her outstanding review of written sources, “Les encres noires au Moyen-Âge”, this ink category has received little if any attention from scholars and scientists. And yet, if analytically attested, the use of such inks could serve as an additional category to classify and distinguish the writing inks on the historical socio-geographic map of the writing inks we are trying to build.
A new concept called “Ring-Opening Polymerization (ROP) combined with simultaneous POlyCondensation” (ROPPOC) is presented and discussed. This synthetic strategy is based on the intermediate formation of chains having two end groups that can react with each other. The ROPPOC syntheses are subdivided into three groups according to the nature of the chain ends: two ionic end groups, one ionic and one covalent chain end, or a combination of two reactive covalent end groups may be involved, depending on the catalyst. The usefulness of this approach for the preparation of cyclic polymers is discussed with a review of numerous previously published examples. These examples concern the following classes of cyclic polymers: polypeptides, polyamides, polyesters including polycarbonates, and cyclic polysiloxanes. It is demonstrated that the results of certain ROPPOC syntheses are in contradiction to the Jacobson-Stockmayer theory. Finally, the usefulness of ROPPOC for the detection of polydisperse catenanes is discussed.
Digital transformation, and especially the dramatic rise of products and services connected to the Internet of Things, raises questions about how to deal with the increasing risks related to privacy and cybersecurity. These risks seem to be insufficiently reflected in the European Union’s current regulative system, being neither part of the traditional definition of a safe product nor part of product-specific vertical or horizontal directives. Certification based on standards as underlying requirements has been identified by policymakers as an instrument to address this issue. The latest European draft regulation aims at increasing security and trust in ICT products and services, and at reducing the current European market fragmentation, with a new Cybersecurity Certification Framework. Based on the results of a qualitative analysis of stakeholder statements on the current proposal for the Cybersecurity Act, this paper discusses elements of the proposed Cybersecurity Certification Framework. As a theoretical background, this paper provides definitions of the terms safety, IT security, and cybersecurity, presents selected cybersecurity-related standards and provides an outlook on future challenges to conformity assessment in the digital transformation.
Interest in the type of skin used in scriptural materials and in the preparation methods increased from the nineteenth into the twentieth century. This was due partly to the number of newly discovered fragments and to the invention of new instruments and scientific procedures to identify animal skins and to produce qualitative means to demonstrate specific preparation reagents and techniques. The invention of various means of analyzing the DNA of organic materials brought about a revolution in archaeology and in conservation. Difficulties in overcoming contamination of archaeological samples resulted in a number of controversies but also produced advances and improvements in the techniques of ancient DNA analysis and the interpretation of results.
Due to their unique physico-chemical properties, nanoparticles are well established in research and industrial applications. A reliable characterization of their size, shape, and size distribution is not only mandatory to fully understand and exploit their potential and to develop reproducible syntheses, but also to manage environmental and health risks related to exposure and to meet regulatory requirements. To validate and standardize methods for accurate and reliable particle size determination, nanoscale reference materials (nanoRMs) are necessary. However, only a very small number of nanoRMs for particle size are offered by key distributors such as the National Institute of Standards and Technology (NIST) and the Joint Research Centre (JRC) and, moreover, few provide certified values. In addition, these materials are currently restricted to polymers, silica, titanium dioxide, gold and silver, all of which are spherical except for the titania nanorods. To expand this list with other relevant nanomaterials of different shapes and elemental compositions that can be used for more than one sizing technique, we are currently building up a platform of novel nanoRMs relying on iron oxide nanoparticles of different shape, size and surface chemistry. Iron oxide was chosen as a core material because of its relevance for the material and life sciences.
ICP-MS has played a key role in inorganic chemical metrology for 25 years, from the 1993 CIPM feasibility study which led to establishment of the CCQM. Since that time, the Inorganic Analysis Working Group of the CCQM has organised 56 international comparisons involving measurements by ICP-MS and, in a recent comparison, 16 different national institutes submitted their results using the technique. Metrological applications of ICP-MS currently address an enormous range of measurements using a wide variety of instrumentation, calibration strategies and methodologies. This review provides an overview of the ICP-MS field with an emphasis on developments which are of particular relevance to chemical metrology.
Examples from CCQM comparisons and the services available from the participants are used to illustrate how the capability and scope of ICP-MS methods have expanded far beyond the expectations of 1993. This is due in part to the research and development programmes of the national institutes which participate in the CCQM. They have played a key role in advancing new instrumentation and applications for elemental analysis, isotope dilution mass spectrometry, the determination of isotopic ratio or composition, and the speciation of organometallic compounds. These developments are continuing today, as demonstrated by work in new fields such as heteroatom quantitation of proteins, characterisation and counting of nanoparticles using spICP-MS, and LA-ICP-MS analysis of solid materials.
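At the heart of the isotope dilution work mentioned above lies a simple ratio relation; the sketch below shows its core form for two isotopes, with illustrative values that are not taken from any comparison.

# Core single isotope-dilution (IDMS) relation: for two isotopes a
# (spike-enriched) and b (reference), with R = n_a/n_b, the measured blend
# ratio R_blend links the sample and spike amounts of isotope b.
def sample_to_spike_ratio(R_sample: float, R_spike: float, R_blend: float) -> float:
    """n_b,sample / n_b,spike = (R_spike - R_blend) / (R_blend - R_sample)."""
    return (R_spike - R_blend) / (R_blend - R_sample)

# Hypothetical ratios for an enriched spike blended with a natural sample:
print(sample_to_spike_ratio(R_sample=0.05, R_spike=20.0, R_blend=1.0))  # -> 20.0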
The SAXS platform at BAM
(2021)
A combination of different complementary methods is employed to investigate the scaling of the molecular dynamics of two different liquid crystals. Each method is sensitive to a different kind of fluctuation and therefore provides a different window onto the molecular dynamics. In detail, broadband dielectric spectroscopy is combined with specific heat spectroscopy and neutron scattering. As systems, the nematic liquid crystal E7 and a discotic liquid crystalline pyrene are considered. First, it was shown that both systems display all the peculiarities characteristic of glassy dynamics and the glassy state. For the nematic liquid crystal E7 in particular, a combination of dielectric and specific heat spectroscopy unambiguously showed that the tumbling mode is the underlying motional process responsible for glassy dynamics. Dielectric investigations on the discotic liquid crystalline pyrene reveal that at the phase transition from the plastic crystalline to the hexagonal columnar liquid crystalline phase, the temperature dependence of the relaxation rates changes from stronger to more fragile. Moreover, a combination of results obtained by specific heat spectroscopy with structural methods allows an estimation of the length scale relevant for the glass transition.
The SPONGE
(2020)
Simulates X-ray and neutron scattering patterns from arbitrary shapes defined by STL files.
Features:
- Uses multithreading to compute a number of independent solutions, then uses the variance of the results to estimate an uncertainty on the output.
- Can be launched from the command line, using an Excel sheet to define settings, or from a Jupyter notebook.
- Outputs scattering patterns in absolute units if the contrast is set.
- A Gaussian size distribution is available, where the relative scaling of the objects can be varied for each repetition. It is recommended to use a limited width (max. 10%) to avoid artefacts.
- Writes results with settings to an archival HDF5 file.
Application examples:
This software has been used in several studies to date. For example, it has been used to simulate a model scattering pattern for a cuboid shape, which was then fed into the McSAS3 analysis program for analyzing scattering patterns of polydisperse cuboids. A second use was the modeling of flattened helices, where scattering pattern features could be matched with particular morphological changes in the structure. Lastly, it has been used to validate an analytical analysis model and to explore the realistic limits of application of that model.
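The following is not the SPONGE's actual interface, but a minimal sketch of the underlying idea: computing an orientation-averaged scattering pattern from points sampled inside a shape via the Debye equation.

import numpy as np
from scipy.spatial.distance import pdist

# Sample points uniformly inside a shape (here a 10 nm cube; for an STL mesh
# one would keep only the points falling inside the mesh), then compute the
# orientation-averaged pattern with the Debye equation:
#   I(q) = N + 2 * sum_{i<j} sin(q r_ij) / (q r_ij)
rng = np.random.default_rng(1)
points = rng.uniform(-5.0, 5.0, size=(400, 3))   # nm
r = pdist(points)                                # all pairwise distances
q = np.linspace(0.05, 3.0, 100)                  # nm^-1
# np.sinc(x) = sin(pi*x)/(pi*x), so passing q*r/pi yields sin(qr)/(qr):
I = np.array([len(points) + 2.0 * np.sinc(qi * r / np.pi).sum() for qi in q])
print(I[0], I[-1])                               # forward vs. high-q intensity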
Dr Martin Seah, NPL, was the initiator, founder, and first chairman of the Surface Analysis Working Group (SAWG) at the Consultative Committee for Amount of Substance, Metrology in Chemistry and Biology (CCQM) at the Bureau International des Poids et Mesures (BIPM), the international organization established by the Metre Convention. This tribute letter summarizes his achievements during his chairmanship and his long-running impact on the successful work of the group after his retirement.
Optical thermometry is popular among researchers because of its non-contact nature, high sensitivity, and fast measurement. In the present experiment, Er3+/Yb3+/K+ co-doped NaYF4 nanoparticles with different K+ concentrations were synthesized by a solvothermal method, and the samples showed bright upconversion green emission under excitation by a 980 nm laser. Powder X-ray diffraction and transmission electron microscopy were used to characterize the crystal structure and surface morphology, respectively. The spectral characteristics of nanoparticles with K+ doping concentrations from 10% to 30% (molar ratio) were investigated by fluorescence spectroscopy; the fluorescence intensity reached its maximum at a K+ concentration of 20% and weakened as the K+ content increased further. The intrinsic mechanism was investigated in detail based on the dependence of the luminescence intensity on laser power density and on the fluorescence lifetime. Temperature-dependent spectra of the samples were recorded in the range 315–495 K, and the maximum absolute sensitivity (Sa) and relative sensitivity (Sr) were 0.0041 K−1 (at 455 K) and 0.9220% K−1 (at 315 K), respectively. The experimental results show that K+/Er3+/Yb3+ triple-doped NaYF4 green fluorescent nanoparticles (GFNs) have good prospects for applications in display devices, temperature sensing, and other fields.
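For readers unfamiliar with these figures of merit: in luminescence thermometry the sensitivities are conventionally defined from a temperature-dependent measurand Δ, typically the fluorescence intensity ratio of two thermally coupled levels. The abstract does not state its definitions, so the following standard forms are an assumption, though they are consistent with the quoted units:

$$ S_a = \left|\frac{\partial \Delta}{\partial T}\right| \;[\mathrm{K}^{-1}], \qquad S_r = \frac{1}{\Delta}\left|\frac{\partial \Delta}{\partial T}\right| \times 100\,\% \;[\%\,\mathrm{K}^{-1}] $$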
The present Table of Standard Atomic Weights (TSAW) of the elements is perhaps one of the most familiar data sets in science. Unlike most parameters in physical science whose values and uncertainties are evaluated using the “Guide to the Expression of Uncertainty in Measurement” (GUM), the majority of standard atomic weight values and their uncertainties are consensus values, not GUM-evaluated values. The Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry (IUPAC) regularly evaluates the literature for new isotopic-abundance measurements that can lead to revised standard atomic-weight values, Ar(E) for element E.
The Commission strives to provide the utmost clarity in the products it disseminates, namely the TSAW and the Table of Isotopic Compositions of the Elements (TICE). In 2016, the Commission recognized that the guideline recommending that uncertainty be expressed in parentheses following the standard atomic-weight value, for example Ar(Se) = 78.971(8), did not agree with the GUM, which reserves this parenthetic notation for standard uncertainty, not the expanded uncertainty used in the TSAW and TICE. In 2017, to eliminate this noncompliance with the GUM, a new format was adopted in which the uncertainty is specified with the "±" symbol, for example Ar(Se) = 78.971 ± 0.008. To clarify the definition of uncertainty, a new footnote has been added to the TSAW, emphasizing that an atomic-weight uncertainty is a consensus (decisional) uncertainty. Not only has the Commission shielded users of the TSAW and TICE from unreliable measurements that appear in the literature with unduly small uncertainties, but the aim of IUPAC has been fulfilled: any scientist, taking any natural sample from commerce or research, can expect the sample's atomic weight to lie within Ar(E) ± its uncertainty almost all of the time.
Airborne ultrasonic testing of lightweight, structured composite materials enables fast, contact-free non-destructive testing in aerospace and avoids material degradation due to contact with a coupling liquid. Established resonant air-coupled transducers consist of piezocomposite materials with several matching layers, or of more advanced materials such as charged cellular polypropylene. The relaxation time and specific frequency of such mechanical ultrasound emitters limit the range of applications of each device. A short pulse length is key for reliable defect detection, and each component under test is best characterized at material- and geometry-specific frequencies. Here we show that focused thermoacoustic transducers are suited for testing lightweight, structured composite plates. Since the ultrasound is generated in air, these transducers show no resonance behavior and emit a broadband acoustic spectrum between 1.2 kHz and 1 MHz. Composite specimens of 3 mm to 9 mm thickness made of polylactide with a honeycomb structure were tested. Flat-bottom holes were introduced to quantify the spatial resolution of the imaging method inside the strongly anisotropic specimens. As no broadband receivers are available yet, cellular polypropylene transducers were used as receivers, which limits the bandwidth of the method to that of the receiver. Nevertheless, we demonstrate that the thermoacoustic transducer is competitive with mechanical emitters at their respective resonance frequencies. Because a thermoacoustic transmitter features a nearly ideal pulse width, a single transmitter can be coupled with receivers of different resonance frequencies. With the development of broadband ultrasound receivers, air-coupled ultrasound spectroscopy will likely become possible in the near future. The analyzed transducer holds the potential to speed up testing during production and maintenance in the aerospace and automotive industries. Its combination with a broadband receiver could also expand the application field of air-coupled ultrasonic testing from qualitative defect detection towards quantitative, spatially resolved analysis of mechanical material properties.
Simulation is becoming increasingly important in modern CT imaging. It is used to optimize techniques for complex applications, to support the preparation of written procedures, and for educational purposes. The radiographic simulator aRTist is a modelling tool which simulates X-ray imaging using a hybrid analytical and Monte Carlo method to model the radiation transport efficiently. In addition to the relevant physical effects such as absorption, scattering and fluorescence, simplified fast models are employed to describe the characteristics of the X-ray source and the detector. aRTist is well equipped to model realistic X-ray imaging setups thanks to its ability to load exported CAD object descriptions. It also contains a simple CT scan module which allows the simulation of standard (circular cone-beam) scanning trajectories.
TomoSynth is a module for aRTist which allows more complex scanning trajectories to be set up by attaching geometrical modification functions to the objects in the radiographic scene. In this way, advanced scanning modes can be realized, for instance helical CT as an overlay of a rotation and a linear motion, or laminography as a motion of the source point. In addition to deterministic motion, random variations can also be introduced. By combining random variations with deterministic motion, non-ideal (realistic) CT scan geometries can be simulated, e.g. focal spot drift and mechanical instability of the axis of rotation; a minimal sketch of this combination follows below. The TomoSynth module allows these scenarios to be constructed conveniently in a graphical interface and provides a preview before the (potentially long-running) batch job is started. Deviations from ideal CT scan trajectories can therefore be adjusted easily, which is a necessary step towards determining uncertainty from simulation.
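As an illustration of overlaying deterministic and random motion, here is a minimal sketch in Python. It is not the TomoSynth interface (TomoSynth is configured graphically inside aRTist); the function and parameter names are invented, and the sketch only generates the per-projection poses of a helical scan with random instability of the rotation axis:

```python
# Minimal sketch (assumed names, not the TomoSynth interface): a helical CT
# trajectory built as deterministic rotation + translation, overlaid with a
# random per-projection instability of the rotation axis.
import numpy as np

def helical_trajectory(n_proj, turns=3.0, pitch_mm=10.0,
                       axis_wobble_mm=0.02, seed=0):
    """Per-projection rotation angle (deg) and object z-position (mm)."""
    rng = np.random.default_rng(seed)
    angles = np.linspace(0.0, 360.0 * turns, n_proj)    # deterministic rotation
    z = pitch_mm * angles / 360.0                       # deterministic linear motion
    z += rng.normal(scale=axis_wobble_mm, size=n_proj)  # random mechanical jitter
    return angles, z

angles, z = helical_trajectory(n_proj=720)              # one pose per projection
```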
The calibration of isotope ratio measurements has been an ongoing challenge ever since instrumental isotope fractionation (IIF) was detected in mass spectrometry (MS). A variety of approaches either bypass IIF, such as delta measurements, or refer to reference materials (RMs), thereby shifting the problem of calibration to somebody else: the RM producer. For certifying isotope RMs with absolute isotope ratios, only a few approaches are available, namely the isotope mixture approach, the double spike approach, the mass bias regression model and total evaporation in TIMS. All of them require either enriched isotopes, isotope RMs of another element, or an RM for correcting residual error. As the enriched isotopes required for the isotope mixture and double spike approaches must be fully characterized beforehand, all of the mentioned calibration approaches require a standard.
Here, a new and standard-free calibration approach for obtaining absolute isotope ratios of multi-isotopic elements has been developed. The underlying principle is that each MS suffers from IIF and thus yields a specific isotope fractionation line in a three-isotope diagram. When a second MS featuring a different ionization mechanism is applied, a second isotope fractionation line with a different slope is obtained in the same three-isotope diagram. In both cases the absolute isotope ratios lie somewhere on the respective fractionation line; consequently, the intersection of the two lines yields the absolute isotope ratios of the measured sample. This theory was tested by measuring Cd and Pb isotope ratios of suitable isotope RMs with a TIMS and an ICP-MS, both equipped with multi-collector arrays. During the measurements the ionization conditions were varied so that different extents of isotope fractionation were achieved. The resulting data set verified the theory described above: the obtained absolute isotope ratios were metrologically compatible with the certified isotope ratios. The remaining average bias of -5 ‰ can be reduced with further improvements. The calibration approach is universal: it can be applied to any multi-isotopic element and is not limited to a particular type of mass spectrometer. A minimal numerical sketch of the intersection principle follows.
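The following sketch makes the intersection principle concrete. It is not the authors' code: the ratios, slopes and function names are invented, and synthetic "measurements" are generated by fractionating a hypothetical true composition along two lines of different slope, as two instruments with different ionization mechanisms would:

```python
# Hedged sketch of the two-instrument intersection principle (illustrative
# values only, not the authors' data or code).
import numpy as np

def fit_line(r1, r2):
    """Least-squares fractionation line r2 = m*r1 + b for one instrument."""
    m, b = np.polyfit(r1, r2, 1)
    return m, b

def intersect(line_a, line_b):
    """Point where the two fractionation lines cross = absolute isotope ratios."""
    (ma, ba), (mb, bb) = line_a, line_b
    r1 = (bb - ba) / (ma - mb)       # slopes must differ (different ionization)
    return r1, ma * r1 + ba

# Hypothetical absolute ratios, fractionated along instrument-specific lines
# under varied ionization conditions:
true_r1, true_r2 = 2.500, 1.200
f = np.linspace(-5e-3, 5e-3, 7)      # varied extent of fractionation
tims  = fit_line(true_r1 * (1 + f), true_r2 * (1 + 0.55 * f))
icpms = fit_line(true_r1 * (1 + f), true_r2 * (1 + 0.80 * f))

print(intersect(tims, icpms))        # recovers (2.500, 1.200)
```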