Germany is investing heavily in quantum technologies and aims to defend its current pioneering role in this market. Single-photon sources, as key components of many quantum technologies, will gain major importance in the future. As a market and technology study, this contribution provides an overview of this young high-tech industry.
The usability of machine learning approaches for the development of in-situ process monitoring, automated anomaly detection and quality assurance for the selective laser melting (SLM) process is currently receiving increasing attention. For a given set of real machine data we compare two established methods, principal component analysis (PCA) and the β-variational autoencoder (β-VAE), for their applicability in exploratory data analysis and anomaly detection. We introduce a PCA-based unsupervised feature extraction algorithm which allows for root cause analysis of process anomalies. The β-VAE enables a slightly more compact dimensionality reduction; we consider it an option for automated process monitoring systems.
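To make the comparison concrete, the sketch below illustrates the reconstruction-error idea underlying PCA-based anomaly detection. It is a generic illustration of the technique, not the paper's algorithm; all data shapes, the component count and the threshold are assumptions.

```python
# Minimal sketch of PCA anomaly scoring via reconstruction error.
# Data, n_components and the percentile threshold are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 32))      # stand-in for normal process data
X_test = rng.normal(size=(20, 32))
X_test[:5] += 4.0                         # injected anomalies

pca = PCA(n_components=8).fit(X_train)    # compact latent representation
recon = pca.inverse_transform(pca.transform(X_test))
score = np.linalg.norm(X_test - recon, axis=1)   # error per sample

train_recon = pca.inverse_transform(pca.transform(X_train))
threshold = np.percentile(np.linalg.norm(X_train - train_recon, axis=1), 99)
print("anomalous:", score > threshold)
```

A β-VAE would replace the linear projection with a learned nonlinear encoder/decoder, but the scoring logic stays the same.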
Heart disease, also known as cardiovascular disease, encompasses a variety of heart conditions that can result in sudden death for many people. Examples include high blood pressure, ischaemia, irregular heartbeats and pericardial effusion. Electrocardiogram (ECG) signal analysis is frequently used to diagnose heart diseases, providing crucial information on how the heart functions. Quantile graphs (QGs) provide a method for analysing ECG signals by mapping a time series into a network based on the fluctuation properties of the series. Here, we demonstrate that the QG methodology can differentiate between younger and older patients. Furthermore, we construct networks with the QG method and use machine-learning algorithms to perform automatic diagnosis, obtaining high accuracy. Indeed, we verify that this method can automatically detect changes in the ECG of elderly and young subjects, with the highest classification performance achieved for the adjacency matrix, with a mean area under the receiver operating characteristic curve close to one. The findings reported here confirm the QG method's utility in deciphering intricate, nonlinear signals like those produced by patient ECGs. Furthermore, we find that the networks derived from ECG data of the elderly are larger and more connected, with a lower distribution of information, than those of younger subjects. Finally, this methodology can be applied to ECG data related to other diseases, such as ischaemia.
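The core of the QG construction is simple to state: each of Q quantile bins of the series becomes a node, and every pair of consecutive samples adds a directed transition edge. The sketch below illustrates this general mapping on synthetic data; the number of quantiles and the toy series are assumptions, not choices from the study.

```python
# Compact sketch of the quantile-graph (QG) mapping: quantile bins are
# nodes, consecutive samples add transition edges. Q is illustrative.
import numpy as np

def quantile_graph(x, Q=10):
    # Interior bin edges at the empirical quantiles of the series.
    edges = np.quantile(x, np.linspace(0, 1, Q + 1)[1:-1])
    nodes = np.digitize(x, edges)            # quantile index per sample
    A = np.zeros((Q, Q))
    for i, j in zip(nodes[:-1], nodes[1:]):  # count transitions t -> t+1
        A[i, j] += 1
    row = A.sum(axis=1, keepdims=True)       # row-normalize to weights
    return np.divide(A, row, out=np.zeros_like(A), where=row > 0)

rng = np.random.default_rng(1)
A = quantile_graph(np.cumsum(rng.normal(size=2000)))  # toy ECG stand-in
print(A.shape)  # (10, 10) weighted adjacency matrix, usable as ML features
```

The resulting adjacency matrix is exactly the kind of feature on which the classifiers above achieved their best performance.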
Design of experiments (DOE) is an established method for allocating resources for efficient parameter space exploration. Model-based active learning (AL) data sampling strategies have shown potential for further optimization. This paper introduces a workflow for conducting DOE comparative studies using automated machine learning. Based on a practical definition of model complexity in the context of machine learning, the interplay of systematic data generation and model performance is examined under various sources of uncertainty, including uncertainties caused by stochastic sampling strategies, imprecise data, suboptimal modeling, and model evaluation. Results obtained from electrical circuit models of varying complexity show that not all AL sampling strategies outperform conventional DOE strategies; the outcome depends on the available data volume, the complexity of the dataset, and data uncertainties. Trade-offs in resource allocation strategies, in particular between identical replication of data points for statistical noise reduction and broad sampling for maximum parameter space exploration, and their impact on subsequent machine learning analysis are systematically investigated. Results indicate that replication-oriented strategies should not be dismissed but may prove advantageous in cases with non-negligible noise impact and intermediate resource availability. The provided workflow can be used to simulate practical experimental conditions for DOE testing and selection.
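The following sketch contrasts a small fixed seed design with one model-based AL strategy, uncertainty sampling with a Gaussian process: after each query, the point where the model is least certain is measured next. The black-box response, budget and model settings are illustrative assumptions and do not reproduce the paper's workflow.

```python
# Sketch of uncertainty-based active learning vs. a fixed seed design.
# The noisy response function and the budget of 10 queries are assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)

def f(x):
    # Noisy black-box response standing in for a circuit measurement.
    return np.sin(3 * x).ravel() + 0.1 * rng.normal(size=len(x))

pool = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)  # candidate points
X = pool[[0, 99, 199]]                                # small seed design
y = f(X)

for _ in range(10):                                   # AL query loop
    gp = GaussianProcessRegressor().fit(X, y)
    _, std = gp.predict(pool, return_std=True)
    nxt = pool[[np.argmax(std)]]                      # most uncertain point
    X = np.vstack([X, nxt])
    y = np.append(y, f(nxt))

print(f"final design size: {len(X)}")
```

A replication-oriented variant would instead re-query existing design points to average out the noise term, which is the trade-off the study quantifies.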
Lithium-ion battery cells are produced in a multistep production process which suffers from a notable scrap rate. Machine learning (ML) based process monitoring provides solutions to mitigate the impact of substantial scrap rates through repeated multifactorial quality predictions (virtual quality gates) along the process line. This enables an early rejection of battery cells that are unlikely to reach the required specifications, avoids further waste of resources at later process steps and simplifies recycling of rejected cells. A hierarchical architecture is used to apply ML algorithms first for process-adapted feature extraction, guided by a priori knowledge of typical production anomalies. In a second step, these features are correlated with end-of-line quality control data using explainable ML methods. The resulting predictions may lead to a pass or fail of a battery cell or, in the context of flexible production, may also trigger adjustments of later process steps to compensate for detected deficiencies. An example of an ML-based quality control concept is illustrated for a pilot battery cell production line.
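A virtual quality gate of this kind can be pictured as per-process-step features feeding an explainable classifier whose feature importances support root cause review. The sketch below illustrates that second, correlation step with hypothetical feature names and synthetic data; none of it is taken from the pilot line.

```python
# Illustrative sketch of a "virtual quality gate": per-step features
# predict end-of-line pass/fail. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 400
features = {
    "coating_thickness_var": rng.normal(1.0, 0.2, n),  # step 1 feature
    "drying_temp_peak": rng.normal(120, 5, n),         # step 2 feature
    "electrolyte_fill_mass": rng.normal(4.5, 0.1, n),  # step 3 feature
}
X = np.column_stack(list(features.values()))
fail = (features["coating_thickness_var"] > 1.25).astype(int)  # toy label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, fail)
for name, imp in zip(features, clf.feature_importances_):
    print(f"{name}: {imp:.2f}")  # importances support root-cause review
```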
Limited process control can cause metallurgical defect formation and inhomogeneous relative density in laser powder bed fusion manufactured parts. This study shows that process monitoring based on optical melt-pool signal analysis is capable of tracing relative density variations: unsupervised machine learning, applied to cluster multiple-slice monitoring data, reveals characteristic patterns in this noisy time-series signal, which can be co-registered with geometrical positions in the build part. For cylindrical 15–5 PH stainless steel specimens, manufactured under constant process parameters and post-analyzed by µ-computed tomography, correlations between such patterns and an increased local relative density at the edge have been observed. Finite element method (FEM) modeling of thermal histories at exemplary positions close to the edge suggests pre-heating effects caused by neighboring laser scan trajectories as a possible reason for the increased melt-pool intensity at the edge.
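The clustering step can be illustrated on a synthetic melt-pool intensity signal: windowed summary statistics are grouped with k-means so that cluster labels can be mapped back to positions in the build part. All parameters below are assumptions for illustration, not the study's configuration.

```python
# Minimal sketch of unsupervised clustering of a noisy melt-pool signal.
# Window length, features and cluster count are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
signal = rng.normal(1.0, 0.1, 10_000)       # toy melt-pool intensity trace
signal[8_000:] += 0.3                       # hotter "edge" region at the end

win = 100
windows = signal[: len(signal) // win * win].reshape(-1, win)
feats = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(np.bincount(labels))                  # cluster sizes; each label maps
                                            # back to a geometric position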
Machine learning algorithms make predictions by fitting highly parameterized nonlinear functions to massive amounts of data. Yet those models are not necessarily consistent with physical laws and offer limited interpretability. Extending machine learning models by introducing scientific knowledge into the optimization problem is known as physics-based and data-driven modelling. A promising development is physics-informed neural networks (PINNs), which ensure consistency with both physical laws and measured data. The aim of this research is to model the time-dependent temperature profile in bulk materials following the passage of a moving laser focus using a PINN. The results from the PINN essentially agree with finite element simulations, proving the suitability of the approach. New perspectives for applications in laser material processing arise when PINNs are integrated into monitoring systems or used for model predictive control.
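A minimal PyTorch sketch of the PINN principle for this setting is shown below: a small network u(x, t) is trained so that the residual of a 1D heat equation with a moving Gaussian source vanishes at random collocation points. The geometry, source term, constants and loss weighting are illustrative assumptions, not the configuration used in this research.

```python
# Hedged PINN sketch: enforce u_t - alpha * u_xx - q(x, t) = 0 for a
# moving Gaussian heat source q. All constants are illustrative.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
alpha, v, w = 0.1, 1.0, 0.05        # diffusivity, scan speed, spot size

def pde_residual(x, t):
    u = net(torch.cat([x, t], dim=1))
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    q = torch.exp(-((x - v * t) ** 2) / w**2)   # moving Gaussian source
    return u_t - alpha * u_xx - q

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)  # collocation points (space)
    t = torch.rand(256, 1, requires_grad=True)  # collocation points (time)
    x0, t0 = torch.rand(64, 1), torch.zeros(64, 1)
    loss = pde_residual(x, t).pow(2).mean() \
         + net(torch.cat([x0, t0], dim=1)).pow(2).mean()  # u(x, 0) = 0
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.4f}")
```

Measured temperature data would enter the same loss as an additional data-misfit term, which is what ties the network to both physics and measurements.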
The substitution of expensive non-destructive material testing by data-based process monitoring is being intensively explored in quality assurance for additively manufactured components. Machine learning methods show promising results for defect detection but require conceptual adaptation to the layer-wise manufacturing and line-scanning patterns in laser powder bed fusion. A multi-layer approach for co-registering µ-computed tomography measurements with process monitoring data is developed, and a workflow for automatic dataset generation is implemented. The objective of this research is to benchmark the volumetric multi-layer approach and specifically selected deep learning methods for defect detection. The volumetric approach shows superior results compared to single-slice monitoring. All investigated structured neural network topologies deliver similar performance.
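One way to realize the volumetric multi-layer idea is a small 3D convolutional network that classifies a stack of co-registered monitoring layers around a voxel. The sketch below is a generic example of such a topology; all tensor sizes and layer choices are assumed rather than taken from the benchmark.

```python
# Generic 3D-CNN sketch for volumetric multi-layer defect detection.
# Input: a stack of monitoring layers per patch. Sizes are illustrative.
import torch

class VolumetricDefectNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv3d(1, 8, kernel_size=3, padding=1), torch.nn.ReLU(),
            torch.nn.MaxPool3d(2),
            torch.nn.Conv3d(8, 16, kernel_size=3, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool3d(1),
        )
        self.head = torch.nn.Linear(16, 2)   # defect vs. no defect

    def forward(self, x):                    # x: (batch, 1, layers, H, W)
        return self.head(self.features(x).flatten(1))

x = torch.randn(4, 1, 8, 32, 32)             # 8 co-registered layers per patch
print(VolumetricDefectNet()(x).shape)        # torch.Size([4, 2])
```

A single-slice baseline would replace the Conv3d stack with 2D convolutions over one layer, which is the comparison the benchmark draws.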
Bridging the gap between physics-based modelling and data-driven machine learning promises to reduce the amount of training data required and to improve explainability in predictive maintenance applications. For a small fleet of industrial forklift trucks, we develop a physically inspired framework for predicting the remaining useful life (RUL) of selected components by integrating physically motivated feature extraction, degradation modelling and machine learning. The discussed approach is promising in situations of limited data availability or large data heterogeneity, which often occur in fleets of customized vehicles optimized for particular tasks.
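As an illustration of the degradation-modelling ingredient of such a framework, the sketch below fits a simple exponential degradation model to a synthetic health indicator and extrapolates it to a failure threshold. The indicator, model form and threshold are assumptions, not the components or models studied here.

```python
# Illustrative RUL sketch: fit an exponential degradation model to a
# health indicator and extrapolate to a failure threshold (all assumed).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
t = np.arange(200.0)                                  # operating hours
health = np.exp(-t / 300) + 0.01 * rng.normal(size=t.size)  # toy indicator

model = lambda t, a, b: a * np.exp(-t / b)            # physically motivated form
(a, b), _ = curve_fit(model, t, health, p0=(1.0, 250.0))

threshold = 0.4                                       # assumed failure criterion
t_fail = -b * np.log(threshold / a)                   # invert the fitted model
print(f"estimated RUL: {t_fail - t[-1]:.0f} h")
```

In the framework described above, a machine learning model would refine such physics-based extrapolations using fleet data, rather than replace them.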