Artificial Intelligence and Data Science
Major depressive disorder (MDD) is a multifaceted condition that affects millions of people worldwide and is a leading cause of disability. Due to the limitations of traditional diagnostic approaches, there is an urgent need for an automated and objective method to detect MDD. In this paper, we propose a methodology based on machine learning and deep learning to classify patients with MDD and identify altered functional connectivity patterns from EEG data. We compare several connectivity metrics and machine learning algorithms. Complex network measures are used to identify structural brain abnormalities in MDD. Using Spearman correlation for network construction and an SVM classifier, we verify that MDD patients can be identified with high accuracy, exceeding results reported in the literature. The SHAP (SHapley Additive exPlanations) summary plot highlights the importance of C4-F8 connections and also reveals dysfunction in certain brain areas and hyperconnectivity in others. Despite the lower performance of the complex network measures on the classification problem, assortativity was found to be a promising biomarker. Our findings suggest that machine learning methods and complex networks may aid in understanding and diagnosing MDD.
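As a rough illustration of the pipeline described in this abstract, the sketch below builds Spearman-correlation connectivity features from multichannel EEG and classifies them with an SVM. The channel count, data and labels are synthetic placeholders, not the study's data, and the SHAP analysis is omitted.

```python
# Minimal sketch (not the authors' code): functional-connectivity features
# from multichannel EEG via Spearman correlation, classified with an SVM.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def connectivity_features(eeg):
    """eeg: (n_channels, n_samples) -> upper-triangle Spearman correlations."""
    rho, _ = spearmanr(eeg, axis=1)        # (n_channels, n_channels) matrix
    iu = np.triu_indices_from(rho, k=1)    # one feature per channel pair, e.g. C4-F8
    return rho[iu]

rng = np.random.default_rng(0)
n_subjects, n_channels, n_samples = 40, 19, 1000   # hypothetical sizes
X = np.stack([connectivity_features(rng.standard_normal((n_channels, n_samples)))
              for _ in range(n_subjects)])
y = rng.integers(0, 2, n_subjects)                 # 1 = MDD, 0 = control (toy labels)

clf = SVC(kernel="rbf")                            # classifier family used in the paper
print(cross_val_score(clf, X, y, cv=5).mean())
```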
Integrating artificial intelligence (AI) into decision-making processes is key to improving organizational performance. However, as with other information systems, trust in AI-based decision support systems (DSSs) is important for successful integration. A disruptive phenomenon, “algorithm aversion”, can impede trust in AI and, thus, acceptance. Although AI recommendations outperform human recommendations in various decision-making fields, individuals underweight recommendations from AI-based DSSs compared to those from human decision-makers due to a lack of trust in AI. We conducted a lab experiment to investigate the role of AI recommendations in workplace-related tasks, focusing first on the mediating effect of AI trust, the negative impact of algorithm aversion on decision-making performance, and the moderating effect of technical competence, and second on the ability of gamification to reduce this phenomenon. We provide evidence on how to enhance decision-making performance when AI recommendations are deployed and identify countermeasures against algorithm aversion that facilitate the adoption of AI-based DSSs.
In this contribution, we deal with the problem of producing “reasonable” data from recorded energy consumption data that are incomplete and/or erroneous in certain sections. This task is important when energy providers employ prediction models for expected energy consumption that are based on past recorded consumption data, which should of course be reliable and valid. In a related contribution, Yilmaz (2022) investigated GAN-based methods for producing such “artificial data”. Here, we describe an alternative and complementary method based on signal inpainting, which has been successfully applied to audio processing by Lieb and Stark (2018). After a short overview of the theory of proximity-based convex optimization, we describe and adapt an iterative inpainting scheme to our problem. The usefulness of this approach is demonstrated by analyzing real-world data provided by a German energy supplier.
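A minimal sketch of one plausible proximity-based inpainting iteration follows, assuming a sparsity prior in the DCT domain; the paper's exact functional and iteration scheme may differ, and the load curve and outage are synthetic.

```python
# Illustrative sketch (assumed details, not the authors' scheme): iterative
# signal inpainting alternating a sparsity-promoting proximal step
# (soft-thresholding of DCT coefficients) with projection onto the known
# samples, in the spirit of proximity-based convex optimization.
import numpy as np
from scipy.fft import dct, idct

def inpaint(signal, known_mask, lam=0.1, n_iter=200):
    x = np.where(known_mask, signal, 0.0)
    for _ in range(n_iter):
        c = dct(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # prox of the l1 norm
        x = idct(c, norm="ortho")
        x[known_mask] = signal[known_mask]                 # enforce recorded data
    return x

# Toy example: a daily load-like curve with a missing section
t = np.linspace(0, 1, 288)                 # e.g. 5-minute resolution
load = 1.0 + 0.5 * np.sin(2 * np.pi * t)  # synthetic consumption profile
mask = np.ones_like(t, dtype=bool)
mask[100:140] = False                      # simulated sensor outage
reconstructed = inpaint(load, mask)
```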
Machine learning algorithms make predictions by fitting highly parameterized nonlinear functions to massive amounts of data. Yet these models are not necessarily consistent with physical laws and offer limited interpretability. Extending machine learning models by introducing scientific knowledge into the optimization problem is known as physics-based, data-driven modelling. A promising development is physics-informed neural networks (PINNs), which ensure consistency with both physical laws and measured data. The aim of this research is to model the time-dependent temperature profile in bulk materials following the passage of a moving laser focus with a PINN. The results from the PINN agree well with finite element simulations, demonstrating the suitability of the approach. New perspectives for applications in laser material processing arise when PINNs are integrated into monitoring systems or used for model predictive control.
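To make the PINN idea concrete, here is a minimal sketch for a 1-D analogue, assuming a heat equation with a Gaussian moving source. The constants, geometry and the omission of data, initial and boundary terms are simplifications for illustration, not the paper's setup.

```python
# Minimal PINN sketch (illustrative, not the authors' model): a small
# network u(x, t) trained so that the 1-D heat equation with a moving
# heat source, u_t - alpha * u_xx = q(x, t), holds at collocation points.
# All constants (alpha, source speed/width) are made-up values.
import torch

torch.manual_seed(0)
alpha, v, width = 0.1, 1.0, 0.05           # diffusivity, laser speed, spot width

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def source(x, t):
    # Gaussian heat input centered at the moving laser focus x = v * t
    return torch.exp(-((x - v * t) ** 2) / (2 * width ** 2))

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx - source(x, t)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, 1); t = torch.rand(256, 1)      # collocation points
    loss = pde_residual(x, t).pow(2).mean()             # physics loss only;
    opt.zero_grad(); loss.backward(); opt.step()        # data/IC/BC terms omitted
```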
Bridging the gap between physics-based modeling and data-driven machine learning promises to reduce the amount of training data required and to improve explainability in predictive maintenance applications. For a small fleet of industrial forklift trucks, we develop a physically inspired framework for predicting the remaining useful life (RUL) of selected components by integrating physically motivated feature extraction, degradation modeling and machine learning. The discussed approach is promising in situations of limited data availability or large data heterogeneity, which often occur in fleets of customized vehicles optimized for particular tasks.
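As an illustration of how degradation modeling and RUL estimation can be combined, consider the sketch below, which assumes a simple exponential degradation law and a made-up failure threshold; the paper's physically motivated features and model are not reproduced here.

```python
# Illustrative sketch (assumed degradation law, not the paper's model):
# fit an exponential degradation curve h(t) = a * exp(b * t) to a health
# indicator and extrapolate to a failure threshold to estimate RUL.
import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b):
    return a * np.exp(b * t)

t = np.arange(0, 50)                          # operating hours (toy data)
h = degradation(t, 1.0, 0.04) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

(a, b), _ = curve_fit(degradation, t, h, p0=(1.0, 0.01))
threshold = 10.0                              # hypothetical failure level
t_fail = np.log(threshold / a) / b            # solve a * exp(b * t) = threshold
rul = t_fail - t[-1]
print(f"estimated RUL: {rul:.1f} h")
```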
The substitution of expensive non-destructive material testing by data-based process monitoring is being intensively explored in quality assurance for additively manufactured components. Machine learning methods show promising results for defect detection but require conceptual adaptation to layer-wise manufacturing and line-scanning patterns in laser powder bed fusion. A multi-layer approach to co-register µ-computed tomography measurements with process monitoring data is developed, and a workflow for automatic data set generation is implemented. The objective of this research is to benchmark the volumetric multi-layer approach and specifically selected deep learning methods for defect detection. The volumetric approach shows superior results compared to single-slice monitoring. All investigated structured neural network topologies deliver similar performance.
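A minimal sketch of how a volumetric multi-layer patch could be classified with a small 3-D CNN follows; the architecture and patch shape are illustrative assumptions, not the benchmarked topologies.

```python
# Hypothetical architecture (not the benchmarked one): a small 3-D CNN
# classifying volumetric patches stacked from several consecutive layers
# of co-registered monitoring data as defective / defect-free.
import torch

model = torch.nn.Sequential(
    torch.nn.Conv3d(1, 8, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool3d(2),
    torch.nn.Conv3d(8, 16, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool3d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 2),                    # defect vs. no defect
)

# One batch of patches: (batch, channel, layers, height, width),
# e.g. 8 patches spanning 16 build layers of 32x32 px monitoring data
patches = torch.randn(8, 1, 16, 32, 32)
logits = model(patches)
print(logits.shape)                            # torch.Size([8, 2])
```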
Limited process control can cause metallurgical defect formation and inhomogeneous relative density in parts manufactured by laser powder bed fusion. This study shows that process monitoring based on optical melt-pool signal analysis is capable of tracing relative density variations: unsupervised machine learning, applied to cluster multiple-slice monitoring data, reveals characteristic patterns in this noisy time-series signal, which can be co-registered with geometrical positions in the build part. For cylindrical 15–5 PH stainless steel specimens, manufactured under constant process parameters and post-analyzed by µ-computed tomography, correlations between such patterns and an increased local relative density at the edge have been observed. Finite element method (FEM) modeling of thermal histories at exemplary positions close to the edge suggests pre-heating effects caused by neighboring laser scan trajectories as a possible reason for the increased melt-pool intensity at the edge.
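The sketch below illustrates the unsupervised step on synthetic data, assuming simple per-position melt-pool intensity statistics and k-means clustering; the study's actual features and clustering method may differ.

```python
# Illustrative sketch (assumed features, not the study's pipeline):
# cluster per-position melt-pool intensity statistics with k-means and
# map cluster labels back to (x, y) build positions to reveal spatial
# patterns such as elevated intensity near part edges.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, size=(500, 2))             # hypothetical scan positions
r = np.linalg.norm(xy, axis=1)
intensity_mean = 1.0 + 0.3 * (r > 0.8) + 0.05 * rng.standard_normal(500)
intensity_std = 0.1 + 0.02 * rng.standard_normal(500)
features = np.column_stack([intensity_mean, intensity_std])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
# Co-register: labels[i] belongs to position xy[i]; plotting the labels
# over xy would expose the edge-correlated cluster.
```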
Heart disease, also known as cardiovascular disease, encompasses a variety of heart conditions that can result in sudden death. Examples include high blood pressure, ischaemia, irregular heartbeats and pericardial effusion. Electrocardiogram (ECG) signal analysis is frequently used to diagnose heart diseases, providing crucial information on how the heart functions. Quantile graphs (QGs) are a method for analysing ECG signals that maps a time series onto a network based on the fluctuation properties of the time series. Here, we demonstrate that the QG methodology can differentiate younger and older patients. Furthermore, we construct networks with the QG method and use machine-learning algorithms to perform automatic diagnosis, obtaining high accuracy. Indeed, we verify that this method can automatically detect changes in the ECGs of elderly and young subjects, with the highest classification performance obtained for the adjacency matrix, with a mean area under the receiver operating characteristic curve close to one. The findings reported here confirm the QG method's utility in deciphering intricate, nonlinear signals like those produced by patient ECGs. Furthermore, we find that the networks derived from ECG data of the elderly are larger and more connected, with a lower distribution of information, than those of younger subjects. Finally, this methodology can be applied to ECG data related to other diseases, such as ischaemia.
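A minimal sketch of the quantile-graph construction follows, assuming a simple transition-count definition; the paper's exact QG variant and network measures are not reproduced, and the ECG trace is synthetic.

```python
# Minimal sketch of the quantile-graph (QG) idea (illustrative, not the
# paper's implementation): assign each ECG sample to one of Q quantile
# bins, treat the bins as nodes and the transitions between successive
# samples as weighted directed edges; the resulting adjacency matrix can
# then be fed to a classifier.
import numpy as np

def quantile_graph(series, q=10):
    edges = np.quantile(series, np.linspace(0, 1, q + 1))
    bins = np.clip(np.searchsorted(edges, series, side="right") - 1, 0, q - 1)
    adj = np.zeros((q, q))
    for a, b in zip(bins[:-1], bins[1:]):       # count node-to-node transitions
        adj[a, b] += 1
    return adj / adj.sum()                      # normalized transition weights

rng = np.random.default_rng(3)
ecg = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.1 * rng.standard_normal(4000)
A = quantile_graph(ecg)                         # (10, 10) adjacency matrix
features = A.flatten()                          # e.g. input to an SVM or random forest
```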
Lithium-ion battery (LIB) manufacturing requires a pilot stage in which the process and cell characteristics are optimized. However, this process is costly and time-consuming. One way to overcome this is to use a set of computational models that act as a digital twin of the pilot line, exchanging information in real time that can be compared with measurements to correct parameters. Here we discuss the parameters involved in each step of LIB manufacturing, present available computational modeling approaches, and discuss details of practical implementation in terms of software. Then, we analyze these parameters with regard to their criticality for model set-up and validation, measurement accuracy, and rapidity. Presenting this in an understandable format allows missing aspects, remaining challenges, and opportunities for the emergence of pilot lines integrating digital twins to be identified. Finally, we present the challenges of managing the data produced by these models. As a snapshot of the state of the art, this work is an initial step towards digitalizing battery manufacturing pilot lines, paving the way toward autonomous optimization.