The detection operating characteristic as a method for assessing the reliability of non-destructive testing
(2021)
Detection operating characteristics have been successfully used for many years in assessing the reliability of non-destructive testing. The main problems for their use, as with any method of reliability assessment, are the lack of information about the actual defect situation and the limited number of defects in the test specimens. An additional problem in constructing a detection operating characteristic is the lack of information on defect sizes and the absence of a universal size measure for different defect types. The article summarizes experience in constructing operating characteristics from the results of ultrasonic testing.
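As background to the concept, a minimal sketch of how such a detection operating characteristic can be constructed from inspection amplitudes by sweeping a decision threshold. All values and distributions here are illustrative assumptions, not data from the article:

```python
import numpy as np

# Illustrative amplitudes (dB) at flawed and flaw-free locations;
# in practice these would come from ultrasonic inspection records.
rng = np.random.default_rng(0)
signal = rng.normal(loc=12.0, scale=3.0, size=200)  # responses from defects
noise = rng.normal(loc=6.0, scale=3.0, size=200)    # responses from sound material

# Sweep the decision threshold and record detection vs. false-alarm rates.
thresholds = np.linspace(0.0, 20.0, 101)
pod = [(signal > t).mean() for t in thresholds]  # probability of detection
pfa = [(noise > t).mean() for t in thresholds]   # probability of false alarm

# Each (pfa, pod) pair is one operating point; together they trace the
# operating characteristic of the detection procedure.
for t, d, f in zip(thresholds[::20], pod[::20], pfa[::20]):
    print(f"threshold {t:5.1f} dB -> POD {d:.2f}, PFA {f:.2f}")
```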
That human factors (HF) affect the reliability of NDT is nothing new. Still, when it comes to reliability assessments, the role of people is often neglected. Reliability is typically expressed in terms of POD curves, and the effects of human and organisational factors on the inspection are typically addressed through regulations, procedures, and the qualification and training of the inspection personnel. However, studies have shown that even the most experienced personnel can make mistakes and that the reliability in the field is never as high as the reliability measured in POD experiments. Generally, HF are considered too unpredictable and too uncontrollable to model. If that is the case, then what can we do? The engineering response to this problem has often been to find ways to automate inspections and, more recently, to make use of artificial intelligence tools to decrease the direct effect of people on the inspection results and improve the overall efficiency and reliability. However, despite automation and AI, people remain the key players, though their tasks change. The contemporary approach to HF is not to engineer them out of the system but to design human-machine systems that make the best use of both. In this talk, ways of tackling HF in the design of systems and processes will be presented.
How much do we, the small-angle scatterers, influence the results of an investigation? What uncertainty do we add through our human diversity in thoughts and approaches, and is this significant compared to the uncertainty from the instrumental measurement factors?
After our previous Round Robin on data collection, we know that many laboratories can collect reasonably consistent small-angle scattering data on easy samples [1]. To investigate the next, human component, we compiled four existing datasets from globular (roughly spherical) scatterers, each exhibiting a common complication, and asked the participants to apply their usual methods and toolset to the quantification of the results (https://lookingatnothing.com/index.php/archives/3274).
Accompanying the datasets was a modicum of background information to help with the interpretation of the data, similar to what we normally receive from our collaborators. More than 30 participants reported back with volume fractions, mean sizes and size distribution widths of the particle populations in the samples, as well as information on their self-assessed level of experience and years in the field.
While the Round Robin is still underway (until the 25th of April, 2022), the initial submissions already show a significant spread in the results. Some of this spread is due to the variety in interpretation of the meaning of the requested parameters, as well as to simple human errors, both of which are easy to correct for. Nevertheless, even after correcting for these differences in understanding, a significant spread remains. This highlights an urgent challenge to our community: how can we better help ourselves and our colleagues obtain more reliable results? How could we take the human factor out of the equation, so to speak?
In this talk, we will introduce the four datasets, their origins and challenges. Hot off the press, we will summarize the anonymized, quantified results of the Data Analysis Round Robin. (Incidentally, we will also see whether a correlation exists between experience and proximity of the result to the median; a sketch of such an analysis follows below.) Lastly, potential avenues for improving our field will be offered based on the findings, ranging from low-effort yet somewhat controversial improvements to high-effort foundational considerations.
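As an illustration of the analysis mentioned in passing, a hedged sketch correlating self-reported experience with each participant's distance from the median result. The participant values are hypothetical, not the actual Round Robin data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical participant reports: years in the field and reported mean size (nm).
years_experience = np.array([2, 5, 8, 12, 3, 20, 7, 15, 1, 10])
reported_size = np.array([10.2, 9.8, 10.1, 9.9, 11.5, 10.0, 10.4, 9.7, 12.0, 10.1])

# Deviation of each report from the consensus (median) value:
# the "proximity to the median" measure.
median_size = np.median(reported_size)
deviation = np.abs(reported_size - median_size)

# Rank correlation between experience and closeness to the consensus value.
rho, p_value = spearmanr(years_experience, deviation)
print(f"median = {median_size:.2f} nm, Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```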
The process of ensuring the reliability of NDT applications contains various aspects, such as determining the performance and probability of success, the uncertainty in measurement, the provision of clear and functional procedures, and ensuring the correct application accordingly. Test specimens have become powerful elements in supporting many of these aspects. Within the committee for NDT in Civil Engineering (NDT-CE) of the German Society for Non-Destructive Testing (DGZfP), the subcommittee on Quality Assurance (UA-QS) therefore addresses the design and the integration of test specimens in the quality assurance process.

Depending on the specific purpose, the requirements on test specimens can vary significantly based on the defined simulated scenario. The most prominent purpose of test specimens might be seen in providing references for inspection systems with regard to function control, calibration and validation. Further aspects can be parametric studies, basic investigation of physical principles related to NDT, or a simplified and therefore comprehensible demonstration of inspection concepts (e.g. for teaching purposes). The specific purpose of a test specimen dictates the requirements regarding its conception, including the exact design, the material, the fabrication accuracy and the conditioning.

In the development of a general guideline by the UA-QS for application-specific procedures and their validation, the use of test specimens is addressed and specific concepts for the design of test specimens are proposed. This includes the analysis of the measurement process for any given application, deriving an adequate calibration approach for it, and designing test specimens (calibration specimens) accordingly. Furthermore, it includes the validation of the procedure taking into account all conditions related to the specific application in the field. The validation requires a statistically sufficient number of trials (see the sketch below). A thorough evaluation of each trial can only be established if the ground truth is known. Therefore, test specimens providing a realistic but controlled simulation of the inspection problem are valuable and indispensable elements in the validation process. The requirement of being fully realistic will often not be possible to fulfil due to practical restrictions. Any aspect that cannot be simulated realistically needs to be simulated conservatively. This, again, requires a sufficient understanding of the inspection principle and technique to ensure conservativeness.

Among other quality-assurance-related aspects, the UA-QS establishes concepts and guidelines for sound and efficient approaches to the specific purposes of test specimens. This subcommittee brings together representatives of different groups along the entire value chain of NDT-CE, including researchers, practitioners, manufacturers and clients. They all work together in establishing a common understanding and level of quality assurance in the industry.
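The "statistically sufficient number of trials" can be made concrete with the classical success-run bound, a standard result in NDT reliability statistics (shown here as an illustration, not as the UA-QS guideline): demonstrating a POD of at least 90% at 95% confidence from trials without a single miss requires the smallest n with 0.9^n ≤ 0.05, i.e. 29 detections in 29 attempts.

```python
import math

# Smallest number of consecutive detections demonstrating POD >= pod_target
# at confidence level conf, via the success-run (zero-miss binomial) bound:
# pod_target**n <= 1 - conf.
def required_trials(pod_target: float, conf: float) -> int:
    return math.ceil(math.log(1.0 - conf) / math.log(pod_target))

print(required_trials(0.90, 0.95))  # -> 29, the well-known "29/29" rule
```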
Issues that prevent Structural Health Monitoring (SHM) based on Guided Waves (GW) from being a part of today’s monitoring solutions in industry are not all obvious to the scientific community. To uncover and overcome these issues, scientists working on SHM and GW problems joined in an expert committee under the patronage of the German Society for Non-Destructive Testing. An online survey among more than 700 experts and users reveals the hurdles hindering the practical application of GW-based SHM. Firstly, methods for proof of reliability of SHM approaches are missing. Secondly, a detailed understanding of phenomenologically described wave-damage interactions is needed. Additionally, there are significant unsolved implementation issues and unsolved problems of signal processing, including the handling of environmental influences.
To enable substantial proof of reliability without unaffordable experimental effort, efficient simulation tools including realistic damage interaction are also needed, enabling the joint use of experimental and simulated data to predict the capabilities of the monitoring system. Considering these issues, the committee focusses on simulation, signal processing, probability of detection and standardization. In the presented work, recent activities of the expert committee, starting with the survey results, are summarized. An open-access database of life-like measurements is presented to allow testing and comparison of signal processing and simulation algorithms. Finally, a strategy for efficient proof of reliability, increasing the acceptance of SHM in industry, and for successful integration of SHM into real-world engineering structures is proposed.
Introduction. Comparing different emitter classes and rationally designing the next generation of molecular and nanoscale probes for bioimaging applications require accurate and quantitative methods for the measurement of the key parameter, the photoluminescence quantum yield Φf [1]. Φf equals the number of emitted photons per number of absorbed photons. This is particularly relevant for the increasingly used fluorescence imaging in the short-wave infrared region (SWIR, ≥ 900 nm), which provides deeper penetration depths, better image resolution, and an improved signal-to-noise or tumor-to-background ratio [2,3]. However, spectroscopic measurements in the SWIR are more challenging and require specific calibrations and standards.
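For context, the definition and the widely used relative determination of the quantum yield against a reference standard can be written compactly (shown as standard background, not necessarily the method of this presentation):

```latex
% Definition of the photoluminescence quantum yield
\Phi_\mathrm{f} = \frac{N_\mathrm{em}}{N_\mathrm{abs}}
% Relative determination against a reference standard (st):
% F = integrated photoluminescence, f = 1 - 10^{-A} the absorption factor,
% n = refractive index of the respective solvent
\Phi_\mathrm{x} = \Phi_\mathrm{st} \cdot \frac{F_\mathrm{x}}{F_\mathrm{st}}
                \cdot \frac{f_\mathrm{st}}{f_\mathrm{x}}
                \cdot \frac{n_\mathrm{x}^{2}}{n_\mathrm{st}^{2}}
```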
This presentation offers a holistic view of the assessment of the reliability of NDT, including the intrinsic reliability (typically expressed in terms of probability of detection (POD) curves) as well as application, human and organisational factors. In addition to POD, advanced methods such as multiparameter POD, volume POD and POD for combined data are presented. Human and organisational factors in NDT require a systematic approach: it is not just the individual that determines how inspections are carried out, but also the interactions of individuals with the technology, the team, the organisation and the extra-organisational environment. Lessons learned from the literature as well as from our own studies are presented.
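As background to the POD concept named above, a minimal sketch of a conventional hit/miss POD fit, assuming a logistic model in log flaw size and invented inspection data (the advanced methods named in the talk go beyond this baseline):

```python
import numpy as np
import statsmodels.api as sm

# Illustrative hit/miss data: flaw sizes (mm) and detection outcomes (1 = hit).
sizes = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 4.0])
hits = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

# Standard hit/miss model: logistic regression on log flaw size.
X = sm.add_constant(np.log(sizes))
fit = sm.Logit(hits, X).fit(disp=0)

# POD curve on a fine grid, and a90: smallest size with POD >= 0.90.
grid = np.linspace(0.5, 5.0, 451)
pod = fit.predict(sm.add_constant(np.log(grid)))
a90 = grid[np.argmax(pod >= 0.90)]
print(f"a90 ~ {a90:.2f} mm")
```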
The non-destructive testing methods available for civil engineering (NDT-CE) enable the measurement of quantitative parameters that realistically describe the characteristics of existing buildings. In the past, methods for quality evaluation and concepts for validation were extended to NDT-CE to improve the objectivity of measured data. Thereby, a metrological foundation was developed to collect statistically sound and structurally relevant information about the inner construction of structures without destructive interventions. More recently, a demand for recalculations of structural safety was identified. This paper summarizes a basic research study on structural analyses of bridges in combination with NDT. The aim is to use measurement data of non-destructive testing methods as stochastic quantities in static calculations. To this end, a methodical interface between the Guide to the Expression of Uncertainty in Measurement (GUM) and probabilistic approximation procedures (e.g. FORM) has proven suitable. The motivation is to base the structural analysis on real information from existing structures rather than on values found in the literature. A case study on the probabilistic bending proof of a reinforced concrete bridge with statistically verified data from ultrasonic measurements shows that the measurement results fulfil the requirements concerning precision, trueness, objectivity and reliability.
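A minimal sketch of the kind of GUM-to-FORM interface described, assuming a linear limit state with independent normal variables and invented values; in the paper's setting, the standard uncertainty of the resistance would come from the uncertainty budget of the ultrasonic measurement:

```python
from math import sqrt
from scipy.stats import norm

# Resistance R informed by NDT: mean from measurement, standard uncertainty per GUM.
mu_R, u_R = 850.0, 40.0   # kNm, e.g. bending resistance from measured geometry
mu_S, u_S = 600.0, 60.0   # kNm, load effect and its uncertainty

# Linear limit state g = R - S with independent normal variables:
# the FORM reliability index reduces to the Cornell index.
beta = (mu_R - mu_S) / sqrt(u_R**2 + u_S**2)
p_f = norm.cdf(-beta)     # probability of failure
print(f"beta = {beta:.2f}, P_f = {p_f:.2e}")
```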
The modern-day light microscope has evolved from a tool devoted to making primarily empirical observations into a sophisticated, quantitative device that is an integral part of both physical and life science research. Nowadays, microscopes are found in nearly every experimental laboratory. However, despite their prevalent use in capturing and quantifying scientific phenomena, neither a thorough understanding of the principles underlying quantitative imaging techniques nor appropriate knowledge of how to calibrate, operate and maintain microscopes can be taken for granted. This is clearly demonstrated by the well-documented and widespread difficulties that are routinely encountered in evaluating acquired data and reproducing scientific experiments. Indeed, studies have shown that more than 70% of researchers have tried and failed to repeat another scientist’s experiments, while more than half have even failed to reproduce their own experiments [1]. One factor behind the reproducibility crisis of experiments published in scientific journals is the frequent underreporting of imaging methods, caused by a lack of awareness and/or a lack of knowledge of the applied technique [2,3]. Whereas quality control procedures for some methods used in biomedical research, such as genomics (e.g., DNA sequencing, RNA-seq) or cytometry, have been introduced (e.g., ENCODE [4]), this issue has not been tackled for optical microscopy instrumentation and images. Although many calibration standards and protocols have been published, there is a lack of awareness and agreement on common standards and guidelines for quality assessment and reproducibility [5].
In April 2020, the QUality Assessment and REProducibility for instruments and images in Light Microscopy (QUAREP-LiMi) initiative [6] was formed. This initiative comprises imaging scientists from academia and industry who share a common interest in achieving a better understanding of the performance and limitations of microscopes and improved quality control (QC) in light microscopy. The goal of the QUAREP-LiMi initiative is to establish a set of common QC standards, guidelines, metadata models [7,8], and tools [9,10], including detailed protocols, with the ultimate aim of improving reproducibility in scientific research.
This White Paper 1) summarizes the major obstacles identified in the field that motivated the launch of the QUAREP-LiMi initiative; 2) identifies the urgent need to address these obstacles in a grassroots manner, through a community of stakeholders including researchers, imaging scientists [11], bioimage analysts, bioimage informatics developers, corporate partners, funding agencies, standards organizations, scientific publishers, and observers; 3) outlines the current actions of the QUAREP-LiMi initiative; and 4) proposes future steps that can be taken to improve the dissemination and acceptance of the proposed guidelines to manage QC.
To summarize, the principal goal of the QUAREP-LiMi initiative is to improve the overall quality and reproducibility of light microscope image data by introducing broadly accepted standard practices and accurately captured image data metrics.