Scientific Articles of BAM
The Infrastar training school provides lectures and hands-on training to Master and PhD students, early-stage researchers and (young) professionals on all aspects of asset management of civil infrastructures with respect to fatigue of materials. It has been organized yearly since 2019, and up to 20 participants are selected to attend the 3.5-day training.
A team of 7 teachers provides insight into multi-disciplinary and intersectoral basic concepts in three core fields, ranging from the design to the dismantling of the structures (bridges and wind turbines):
1. Monitoring and auscultation,
2. Structural and action models,
3. Reliability, risk and decision analyses.
Each year, a recognized expert is invited to deliver a keynote lecture to raise interest in a specific aspect of one of the fields covered. The presentation is recorded and published on the training school website.
Reliability assessments of non-destructive testing (NDT) methods have been applied successfully in various industries for almost 50 years. Nevertheless, many reservations remain, since the basis of assessment is not standardized and concrete procedural instructions are lacking. This contrasts with the recognition the topic enjoys in the NDT community. So far, only few are aware that knowledge about the reliability of an NDT method plays a key role for NDT 4.0.
The presentation demonstrates the need for a standard or a guideline on reliability assessment. It discusses why no such guideline has existed (in Germany) to date, and what users can expect from a guideline on the reliability assessment of NDT methods.
Concrete steps and procedures for assessing the reliability of a method are presented: from the definition of a use case, through the production of suitable test specimens, to the execution and evaluation of inspections, taking human factors into account. The examples cover applications from civil engineering as well as from mechanical and plant engineering.
The presentation summarizes the completed and planned work in the WIPANO project „Normung für die probabilistische Bewertung der Zuverlässigkeit für zerstörungsfreie Prüfverfahren“ (standardization for the probabilistic reliability assessment of non-destructive testing methods).
Human factors are a frequently mentioned topic when we talk about the reliability of non-destructive testing (NDT). However, the probability of detection (POD), the commonly used measure of NDT reliability, considers only the technical capability of an NDT system to detect a flaw. After several decades of research into the influence of human factors on NDT reliability, there is still no generally accepted approach for making human factors visible in reliability assessments. This contribution provides an overview of the various methods available for incorporating human factors into reliability assessment. The topic is an essential part of the ongoing WIPANO project "normPOD", which aims to advance the standardization of reliability assessment in Germany and, in contrast to the established international standards, to place a focus on the treatment of human factors.
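As an illustration of the POD measure discussed in the abstracts above, the following is a minimal sketch of how a POD curve can be estimated from hit/miss inspection data with a log-logistic model fitted by maximum likelihood, one common choice in POD practice. The flaw sizes, outcomes, and the model choice are illustrative assumptions, not data or methodology from the project.

```python
import numpy as np
from scipy.optimize import minimize

# Invented hit/miss data: flaw sizes in mm and detection outcomes (1 = hit).
a = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 4.0])
hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

def neg_log_likelihood(theta):
    """Log-logistic POD model: POD(a) = 1 / (1 + exp(-(ln a - mu) / sigma))."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)          # parametrize so sigma stays positive
    p = 1.0 / (1.0 + np.exp(-(np.log(a) - mu) / sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # numerical safety for log()
    return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu, sigma = res.x[0], np.exp(res.x[1])

# a90: the flaw size detected with 90 % probability under the fitted model.
a90 = np.exp(mu + sigma * np.log(0.9 / 0.1))
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}, a90 = {a90:.2f} mm")
```

Such a curve captures only the technical detection capability; the human-factor contributions discussed above are exactly what this measure leaves out.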
Each engineering decision is based on a number of more or less accurate pieces of information. In the assessment of existing structures, additional relevant information collected through on-site inspections facilitates better decisions. However, observed data fundamentally represents the physical characteristic of interest with an uncertainty. This uncertainty is a measure of the inspection quality and can be quantified by expressing the measurement uncertainty. The internationally accepted rules for calculating measurement uncertainty are well established and can be applied straightforwardly in many practical cases. Nevertheless, the calculations require the occasionally time-consuming development of an individually suitable measurement model. This contribution presents proposals for modelling the non-destructive depth measurement of tendons in concrete using the ultrasonic echo technique. The proposed model can serve as a guideline for determining the quality of the measured information in future comparable inspection scenarios.
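To make the GUM-based calculation concrete, below is a minimal sketch of the law of propagation of uncertainty applied to a simplified pulse-echo depth model d = c·t/2 (depth from wave velocity and time of flight). The model form, values, and uncertainties are illustrative assumptions and not the measurement model proposed in the contribution.

```python
import numpy as np

# Simplified pulse-echo measurement model: d = c * t / 2 (illustrative values).
c, u_c = 2650.0, 50.0   # m/s: assumed wave velocity and its standard uncertainty
t, u_t = 60e-6, 1e-6    # s:   picked time of flight and its standard uncertainty

d = c * t / 2.0         # best estimate of the depth in m

# GUM law of propagation of uncertainty for uncorrelated inputs:
# u(d)^2 = (dd/dc)^2 * u(c)^2 + (dd/dt)^2 * u(t)^2
dd_dc = t / 2.0
dd_dt = c / 2.0
u_d = np.sqrt((dd_dc * u_c) ** 2 + (dd_dt * u_t) ** 2)

print(f"d = {d*1e3:.1f} mm, u(d) = {u_d*1e3:.2f} mm, U(k=2) = {2*u_d*1e3:.2f} mm")
```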
The acquisition and appropriate processing of relevant information about the considered system remains a major challenge in the assessment of existing structures. Both the values and the validity of computed results such as failure probabilities depend essentially on the quantity and quality of the incorporated knowledge. One source of information is on-site measurements of structural or material characteristics, to be modeled as basic variables in reliability assessment. The explicit use of (quantitative) measurement results in assessment requires quantifying the quality of the measured information, i.e., the uncertainty associated with the information acquisition and processing. This uncertainty can be referred to as measurement uncertainty. Another crucial aspect is to ensure the comparability of the measurement results. This contribution outlines the necessity and the advantages of measurement uncertainty calculations when modeling measurement-data-based random variables to be included in reliability assessment. It is shown how measured data representing time-invariant characteristics, in this case non-destructively measured inner geometrical dimensions, can be transferred into measurement results that are both comparable and quality-evaluated. The calculations are based on the rules provided in the Guide to the Expression of Uncertainty in Measurement (GUM). The GUM framework is internationally accepted in metrology and can serve as a starting point for the appropriate processing of measured data to be used in assessment. In conclusion, the effects of incorporating the non-destructively measured data into reliability analysis are presented using a prestressed concrete bridge as a case study.
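A minimal sketch of the final step described above, under invented numbers: the quality-evaluated measurement result (best estimate plus standard uncertainty) is modeled as a basic random variable, and the failure probability of a simple hypothetical limit state is estimated by Monte Carlo simulation. The real case study uses a prestressed concrete bridge and a far richer model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000_000

# Measured geometrical dimension modeled as a normal random variable whose
# standard deviation equals the GUM standard uncertainty (illustrative values,
# e.g. the depth result from the sketch above).
d = rng.normal(loc=79.5, scale=2.0, size=n)   # mm

# Hypothetical limit state: failure if the dimension falls below a required value.
d_req = 72.0                                  # mm, invented requirement
p_f = np.mean(d < d_req)
print(f"estimated P_f = {p_f:.2e}")           # ~8.8e-5 for these numbers
```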
One of the most important goals in civil engineering is to guarantee the safety of the construction. Standards prescribe a required failure probability in the order of 10⁻⁴ to 10⁻⁶. Generally, it is not possible to compute the failure probability analytically. Therefore, many approximation methods have been developed to estimate it. Nevertheless, these methods still require a large number of evaluations of the investigated structure, usually finite element (FE) simulations, making full probabilistic design studies infeasible for relevant applications. The aim of this paper is to increase the efficiency of structural reliability analysis by means of reduced-order models. The developed method paves the way for using full probabilistic approaches in industrial applications. In the proposed PGD (proper generalized decomposition) reliability analysis, the solution of the structural computation is obtained directly by evaluating the PGD solution for a specific parameter set, without computing a full FE simulation. Additionally, an adaptive importance sampling scheme is used to minimize the total number of required samples. The accuracy of the failure probability depends on the accuracy of the PGD model (mainly influenced by mesh discretization and mode truncation) as well as on the number of samples in the sampling algorithm. Therefore, a general iterative PGD reliability procedure is developed to automatically verify the accuracy of the computed failure probability. It is based on a goal-oriented refinement of the PGD model around the adaptively approximated design point. The methodology is applied and evaluated for 1D and 2D examples. The computational savings compared to the FE-based method are shown, and the influence of the accuracy of the PGD model on the failure probability is studied.
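The coupling of a cheap model evaluation with adaptive importance sampling can be sketched as follows. Here a simple analytic function stands in for the PGD solution (the actual PGD model is not reproduced), the design point is approximated by penalty optimization, and the sampling density is then centred on it; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def g(u):
    """Stand-in limit-state function in standard normal space (g <= 0: failure).
    In the paper this evaluation would come from the PGD surrogate, not an FE run."""
    return 3.0 - u[..., 0] - 0.3 * u[..., 1] ** 2

# Step 1: approximate the design point (most likely failure point) as the
# minimum-norm point on g(u) = 0, via a simple penalty formulation.
obj = lambda u: np.sum(u**2) + 1e4 * g(u[None, :])[0] ** 2
u_star = minimize(obj, x0=np.array([1.0, 1.0]), method="Nelder-Mead",
                  options={"maxiter": 2000}).x

# Step 2: importance sampling with a unit-variance Gaussian centred there;
# each sample is reweighted by the ratio of nominal to sampling density.
n = 100_000
samples = rng.normal(loc=u_star, scale=1.0, size=(n, 2))
w = (multivariate_normal(mean=np.zeros(2)).pdf(samples)
     / multivariate_normal(mean=u_star).pdf(samples))
p_f = np.mean((g(samples) <= 0.0) * w)
print(f"design point ~ {np.round(u_star, 3)}, estimated P_f = {p_f:.2e}")
```

In the iterative procedure described above, the surrogate would additionally be refined around the approximated design point until the estimated failure probability stabilizes; that refinement loop is omitted here.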
Owners or operators of offshore wind farms perform inspections to collect information on the condition of the wind turbine support structures and perform repairs if required. These activities are costly and should be optimized. Risk-based methods can be applied to identify inspection and repair strategies that ensure an optimal balance between the expected total service life cost of inspection and repair, and the achieved risk reduction. Such an optimization requires explicit modeling of repairs. In this paper, the impact of different repair models on the results of a risk-based optimization of inspection and repair strategies is quantified in a numerical example considering a jacket-type steel frame subject to high-cycle fatigue. The example showed that, in this specific application, there is no need for detailed modeling of the behavior of repaired welded connections.
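A crude numeric sketch of the trade-off being optimized: the expected service-life cost as a function of the number of inspection campaigns, with invented cost figures and a deliberately simplistic risk-reduction model (nothing like the paper's fatigue and repair models).

```python
# All figures invented for illustration only.
C_INSP = 20_000.0       # cost per inspection campaign (incl. repair if needed)
C_FAIL = 20_000_000.0   # consequence cost of a structural failure
P_F_0 = 5e-3            # lifetime failure probability without any inspection

for n_insp in range(6):
    # Crude assumption: each inspection campaign halves the remaining risk.
    p_f = P_F_0 * 0.5 ** n_insp
    expected_cost = n_insp * C_INSP + p_f * C_FAIL
    print(f"{n_insp} inspections: P_f = {p_f:.1e}, E[cost] = {expected_cost:>9,.0f}")
```

Even this toy model reproduces the qualitative result: expected cost first drops with added inspections and then rises again once inspection costs dominate, so an intermediate strategy is optimal.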
The reliability of NDT is affected by human factors, which have thus far received the least attention in reliability assessments. With the increased use of automation in the form of mechanised testing (automation-assisted inspection and the corresponding evaluation of data), higher reliability standards are believed to have been achieved. However, human inspectors, and thus human factors, still play an important role throughout this process, and the risks involved in this application are unknown. The aim of this study was to explore for the first time the risks associated with mechanised NDT and to find ways of mitigating their effects on inspection performance. Hence, the objectives were to identify and analyse potential risks in mechanised NDT and to devise measures against them. To address these objectives, a risk assessment in the form of a Failure Modes and Effects Analysis (FMEA) was conducted. This analysis revealed potential for failure during both the acquisition and the evaluation of NDT data that could be assigned to the human, the technology, and the organisation. Since the existing preventive measures were judged insufficient to defend the system against the identified failures, new preventive measures were suggested.
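A minimal sketch of the FMEA scoring step mentioned above: ranking failure modes by their risk priority number, RPN = severity × occurrence × detection. The listed failure modes and ratings are invented for illustration and are not the modes identified in the study.

```python
# Hypothetical failure modes of a mechanised NDT process with FMEA ratings
# on 1-10 scales: severity (S), occurrence (O), detectability (D).
failure_modes = [
    ("probe couplant loss during the scan",           8, 4, 3),
    ("wrong gain setting in the acquisition setup",   7, 3, 5),
    ("evaluator overlooks an indication in the data", 9, 3, 7),
    ("scanner positioning drift",                     6, 5, 4),
]

# Risk priority number: RPN = S * O * D; higher RPN = higher priority.
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s * o * d:3d}  (S={s}, O={o}, D={d})  {name}")
```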
Classical film radiography is a well-established NDT technique, most commonly used for testing weld seams and corroded pipes, e.g. in the oil and gas industry or in nuclear power plants. In the course of digitization, digital detector arrays (DDA) are finding their way into industrial applications and are replacing film radiography step by step. This study deals with the latest generation of DDAs, photon counting and energy-resolving detectors (PCD), and their characteristics compared to charge-integrating detectors (CID). No matter which technology is used, radiography still faces a general issue: a three-dimensional object is projected onto a two-dimensional image. Advanced computed tomography (CT) algorithms have of course existed for many years, but if the object under investigation is too large to fit into the manipulation system, or if its shape is unsuitable, applying CT is not feasible or sensible. To overcome this limitation, numerous laminographic algorithms have been developed in the past. In this study, photon counting detectors are used in combination with co-planar translational laminography to obtain reconstructed three-dimensional volumes. Both laminographic testing and PCDs require thorough knowledge of the many parameters that can influence the image quality of the resulting datasets, e.g. detector efficiency and calibration procedure, the setting of energy thresholds, exposure data, the number of projections, beam-length correction, and spatial resolution. The use of PCDs introduces more variables to consider than CIDs. The most important parameters in laminographic testing and in the use of PCDs are described in this study, and limits are discussed.
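To illustrate the laminographic principle in its simplest form, below is a shift-and-average ("digital tomosynthesis") sketch on synthetic 1D projections: under a co-planar translation, features shift in the projection by an amount that depends on their depth, so shifting the projections back for a chosen depth brings that plane into focus while others blur out. The geometry is idealized and invented; real reconstructions and the detector corrections named above are far more involved.

```python
import numpy as np

# Synthetic object: two point features at different depths. A feature's shift in
# the projection is translation * z_factor, with z_factor encoding its depth
# (idealized geometry, not a calibrated laminographic setup).
n_pix, n_proj = 200, 21
translations = np.linspace(-10, 10, n_proj)    # in detector-pixel units
features = [(80, 0.5), (120, 1.0)]             # (pixel position, z_factor)

projections = np.zeros((n_proj, n_pix))
for i, t in enumerate(translations):
    for pos, zf in features:
        idx = int(round(pos + t * zf))
        if 0 <= idx < n_pix:
            projections[i, idx] += 1.0

def reconstruct_plane(z_factor):
    """Shift each projection back by the plane-specific amount and average:
    features on that plane align and add up, all others smear out."""
    plane = np.zeros(n_pix)
    for i, t in enumerate(translations):
        plane += np.roll(projections[i], -int(round(t * z_factor)))
    return plane / n_proj

for zf in (0.5, 1.0):
    print(f"z_factor {zf}: sharpest response at pixel "
          f"{int(np.argmax(reconstruct_plane(zf)))}")
```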
It is well known that the reliability of NDT plays a very important role in the assessment of safety-relevant systems. According to the modular reliability model for non-destructive testing processes, reliability depends on the intrinsic (physical-technical) capability of the testing system, the application parameters, the human factors, and the organizational context. Since its introduction at the first European-American Workshop on Reliability of NDE, the model has been continuously developed, discussed, and adapted to the state of the art over five further workshops. The aim of this publication is to present the current state of the discussion and the latest questions facing the international reliability community. Specifically, the development of advanced models for assessing the probability of detection (POD), assessment approaches in the field of structural health monitoring, and questions concerning human factors are presented. Particular emphasis is placed on both the development of these approaches and their application in industrial practice.