Promotionszentrum Angewandte Informatik
Manual assembly remains a crucial aspect of industrial production, especially in non-standardized pre-assembly lines where human adaptability to new processes plays a significant role. Despite the human ability to adjust quickly to new tasks, errors frequently occur, particularly at the beginning and end of a task or during transitions between workstations, often due to unfamiliarity or fatigue. This thesis presents the development of an assistance system designed to enhance the accuracy of manual assembly tasks by recognizing and classifying fine-grained human actions from video data. The system guides workers and alerts them to potential errors, including those that arise when components are positioned correctly but not properly secured, such as a screw inserted without being tightened. Given the lack of industrial-standard training data, a new dataset, the Industrial Hand Assembly Dataset V1 (IHADV1), is introduced, containing eleven assembly classes. The use of skeleton-based methods for feature extraction supports efficient and cost-effective training of a spatiotemporal Transformer network. Initial models achieved 87% accuracy with a significant reduction in trainable parameters, highlighting the dataset's efficacy for capturing detailed assembly movements and the value of spatiotemporal analysis of skeletal data. Further methodological advancements, including the use of cross-attention mechanisms at the encoder level, resulted in an accuracy of 99%. The thesis also explores self-supervised learning techniques, such as random masking on non-industrial data, leading to a model that achieves over 90% accuracy on the fine-tuned classes with a significant reduction in labeled data. Additionally, a curriculum-based self-learning approach was developed to enable the model to adapt to evolving industrial environments and integrate new assembly classes, ensuring continuous improvement during operational deployment. The findings suggest promising applications for the assistance system in industrial settings, with potential for scalable and self-sustaining advancements in assembly line efficiency.
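As a rough illustration of the skeleton-based approach, the sketch below embeds per-frame hand keypoints as tokens and classifies a clip with a small spatiotemporal Transformer. All dimensions, the joint count and the layer sizes are assumptions for illustration, not the thesis architecture; only the eleven-class output reflects IHADV1.

```python
# Minimal sketch of a skeleton-based spatiotemporal Transformer classifier.
# Hypothetical dimensions; not the architecture from the thesis.
import torch
import torch.nn as nn

class SkeletonTransformer(nn.Module):
    def __init__(self, num_joints=21, coords=3, d_model=64,
                 num_classes=11, num_frames=32):
        super().__init__()
        # Embed each per-frame skeleton (joints x coords) into one token.
        self.embed = nn.Linear(num_joints * coords, d_model)
        self.pos = nn.Parameter(torch.zeros(1, num_frames, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                      # x: (batch, frames, joints, coords)
        b, t, j, c = x.shape
        tokens = self.embed(x.reshape(b, t, j * c)) + self.pos[:, :t]
        encoded = self.encoder(tokens)         # self-attention over the frame tokens
        return self.head(encoded.mean(dim=1))  # temporal average pooling -> logits

model = SkeletonTransformer()
logits = model(torch.randn(2, 32, 21, 3))      # two clips, 32 frames, 21 hand joints
```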
Numerical Evaluation and Optimization of the Mechanical Properties of Particle Reinforced Composites
(2025)
In today's world, composite materials form the basis of many products, ranging from simple packaging material to the parts of a wind turbine. Even though computational methods have entered the modern product development cycle, material development has largely remained unchanged, relying on personnel- and energy-intensive trial-and-error approaches. This traditional method necessitates the manufacturing of multiple specimens for physical testing, resulting in long development times while also generating considerable waste in the process. Numerical methods such as Finite Element Analysis (FEA), in combination with optimization methods, can act as an alternative pathway to shrink the development time of a novel composite and reduce the waste of materials, time and money.
The goal of this work was to develop a numerical method for particle reinforced composites that generates microstructures with targeted effective material properties for a specific application. The requirements stated during the development process, together with a set of constraints, are used by the optimization algorithm to generate an optimized microstructure, drawing upon a database of the individual material phases.
Digital twins of particles encountered during the research are obtained by means of analytical functions such as spherical harmonics, superellipsoids or other equations; these are referred to in the following as 'exact' particles. Numerical studies based on FEA were conducted on representative volume elements (RVEs) of particle reinforced composites to obtain effective elastic properties. The results of the FEA calculations for spherical particles were then compared with results obtained with micromechanical models such as the Mori-Tanaka scheme and the dilute inclusion approximation. The effect of the particle distribution on the elastic properties of the composite was studied by comparing a homogeneous particle distribution with two distinctly different particle cluster distributions. A numerical surrogate model was developed to approximate the effective elastic properties of the composite with 'exact' particles. This method is intended to reduce the computational effort, such as calculation time and RAM requirements, of evaluating the composite's effective elastic material properties in comparison to RVEs containing the surrogates' 'exact' particle counterparts. This simplification makes it viable to explore different combinations of particle shapes for different matrix materials. Heuristic optimization methods such as Simulated Annealing, Genetic Algorithms and Particle Swarm Optimization (PSO) were explored for finding an optimal material combination that achieves the targeted effective material properties of the composite. For this purpose, a function was derived to obtain the optimal material mixture of the composite that achieves the targeted effective material properties of the compound. The heuristic methods were compared with regard to their numerical stability during optimization, and the PSO method was chosen. Numerical methods to generate and evaluate conductive particle reinforced polymer matrix composites were explored, which can utilize a material mixture of two particle shapes, such as disc-shaped and line-shaped, in a polymer matrix. Lastly, the application of machine learning methods such as feed-forward neural networks was explored to enable a swift quantification of all possible solutions with regard to different particle and matrix materials and to provide a material mixture that achieves the targeted effective elastic properties.
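To make the optimization step concrete, the following is a minimal sketch of how a PSO could search for a particle volume fraction whose predicted effective stiffness matches a target value. The rule-of-mixtures surrogate, all material constants and the PSO hyperparameters are illustrative assumptions, not values from the thesis.

```python
# Compact particle swarm optimisation (PSO) sketch: find a volume fraction
# whose predicted effective stiffness matches a target. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
E_matrix, E_particle, E_target = 3.0, 70.0, 10.0   # GPa, hypothetical values

def effective_E(vf):          # placeholder surrogate: Voigt rule of mixtures
    return vf * E_particle + (1.0 - vf) * E_matrix

def cost(vf):                 # squared deviation from the targeted property
    return (effective_E(vf) - E_target) ** 2

n, iters = 20, 100
x = rng.uniform(0.0, 0.5, n)                  # candidate volume fractions
v = np.zeros(n)
pbest, pcost = x.copy(), cost(x)
g = pbest[np.argmin(pcost)]                   # global best

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, 0.0, 0.5)              # respect a manufacturable range
    c = cost(x)
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[np.argmin(pcost)]

print(f"optimal volume fraction ~ {g:.3f}, E_eff ~ {effective_E(g):.2f} GPa")
```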
Fingerprints, i.e. the ridge and valley patterns on the tip of a human finger, are one of the most important biometric characteristics due to their known uniqueness and persistence properties. Large-scale fingerprint recognition systems are not only used worldwide by law enforcement and forensic agencies, they are also deployed in the mobile market and in nationwide applications. In recent years, contactless fingerprint recognition has become a viable alternative to established contact-based methods. The contactless capturing process avoids distinct problems, e.g. low-contrast signals caused by dirt or humidity and latent fingerprints left on the capture surface. Moreover, contactless schemes provide a faster, more hygienic and more convenient capturing process and hence enjoy higher user acceptance. However, contactless fingerprint recognition introduces new challenges: environmental influences such as an uncontrolled background, varying illumination and unconstrained finger positioning are especially challenging for mobile recognition schemes.
This thesis contributes to an efficient and secure mobile contactless fingerprint recognition process. The work addresses various vital aspects along the contactless fingerprint recognition pipeline. The mobile, automatic capturing, segmentation and pre-processing of contactless fingerprint samples represents a central focus of this thesis. Furthermore, contributions are made to the topics of quality assessment, feature extraction and presentation attack detection. To enable new research directions, such as training deep learning-based algorithms, a generator for synthetic mobile contactless fingerprint samples is also proposed. The results presented in this thesis show improvements to several components of the recognition method, which contribute to an increased level of biometric performance, security and comfort. Moreover, challenges and limitations are discussed.
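As an illustration of the segmentation and pre-processing stage, the sketch below performs a coarse colour-based finger segmentation followed by local contrast enhancement. The colour thresholds, parameters and input path are assumptions; this is a generic pipeline illustration, not the thesis algorithm.

```python
# Simplified pre-processing sketch for a contactless fingerprint sample:
# coarse finger segmentation on colour, then local contrast enhancement.
import cv2
import numpy as np

def preprocess(bgr_image):
    # Coarse segmentation: skin-like pixels in the YCrCb colour space
    # (illustrative thresholds; real systems learn the segmentation).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

    # Ridge/valley contrast enhancement on the segmented grey-scale finger.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    return cv2.bitwise_and(enhanced, enhanced, mask=mask)

sample = preprocess(cv2.imread("finger.jpg"))  # hypothetical input path
```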
The use of computational models to study heterogeneous materials opens a wide range of possibilities, from quantifying their structure at different scales, to learning the different physical processes that define the macroscopic behavior, to finally designing them at different scales to suit a particular application. This dissertation presents a numerical methodology that starts from the digital characterization of the microstructure, proceeds to its artificial reconstruction, and finally predicts the mechanical properties of a heterogeneous material called ceramic foam. Its low density and ceramic base material give the foam low weight, high specific strength, corrosion resistance, thermal stability and a host of other properties suited to applications such as lightweight construction, molten metal filtration, catalytic converters and biomedical implants. In this work, the random microstructure of this material is characterized through statistical correlation functions. On the basis of these functions, methods are suggested to select the appropriate size of the volume element and to reduce the ensemble size for the estimation of effective material properties. A microstructure reconstruction algorithm is presented that recreates statistically equivalent microstructure volume elements with user-defined edge length and porosity. The algorithm converges to the optimal solution very quickly by utilizing the information learned from the characterization studies. Next, the recreated virtual microstructures are utilized to study the uniaxial compression failure behavior of this material as a function of porosity through finite element simulations. The change in the macroscopic failure mode of the material with increasing porosity is studied through the evolution of damaged regions within the microstructure. To our knowledge, this is the first attempt to compare the experimental observations and the theoretical predictions of this phenomenon with detailed computational studies. However, the effective compression stress-strain curve of the studied material differs considerably from the one assumed in theoretical models. To understand the reasons behind this difference, an image segmentation algorithm is developed that isolates the struts in the microstructure, which play a crucial part in the macroscopic stress-strain behavior. It is observed that the struts fail in three different modes which cause the macroscopic failure. This contradicts the single failure mode commonly observed in such materials. Lastly, a neural network based surrogate modelling strategy is devised to model the biaxial compression failure of ceramic foam. The objective of this study is to show that the response of a smaller volume element can be used to model the response of a larger volume element through a transfer learning-based strategy. The developed surrogate model is able to predict the response of a larger volume element without much prior information about it, because the model has learned the material behavior from the smaller volume element. This thesis develops a numerical pathway for characterization, reconstruction and property estimation of the studied material. The methods developed to achieve this can be utilized in many other material systems with similar microstructures.
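As an illustration of the statistical characterization step, the following sketch computes a two-point correlation function of a binary (solid/pore) voxel volume via FFT autocorrelation. The toy volume, its size and the assumption of periodicity are illustrative only.

```python
# Sketch of a two-point correlation function S2(r) for a binary microstructure,
# computed via FFT autocorrelation (Wiener-Khinchin). Assumes a periodic
# volume element with 1 = solid phase. Not the thesis implementation.
import numpy as np

def two_point_correlation(volume):
    phase = (volume == 1).astype(float)
    f = np.fft.fftn(phase)
    # Autocorrelation of the indicator function, normalised per voxel.
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase.size
    return np.fft.fftshift(s2)     # zero lag moved to the array centre

rng = np.random.default_rng(1)
vol = (rng.random((64, 64, 64)) < 0.8).astype(int)  # toy foam, ~80% solid
s2 = two_point_correlation(vol)
print(s2.max())   # S2 at zero lag equals the solid volume fraction (~0.8)
```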
Biometric systems, particularly face recognition (FR) systems, have become ubiquitous and are used in many real-world authentication scenarios, ranging from unlocking smartphone devices to automated border control systems at airports. FR systems offer a seamless and convenient authentication method, as faces can be captured at a distance. Furthermore, advancements in deep learning-based methods and accessibility to large training datasets have meant that state-ofthe-art FR systems achieve near-perfect biometric performance on several challenging benchmarks. However, the high generalisability of FR systems has also meant that these systems are vulnerable to different digital and physical face manipulations, which can impair their security. For instance, alterations to a face might be performed by a malicious actor in the physical domain (e.g., using makeup or silicone masks) or in the digital domain (e.g., using morphing or face swapping techniques) with the intention to impersonate another target identity and, as such, gain fraudulent access to a system. To mitigate this security risk, methods for detecting various digital and physical face manipulations have been proposed. However, most of these approaches consider only a few well-studied types of digital face manipulations or physical face manipulations and struggle to generalise beyond the data they were trained on. Motivated by the need to enhance the security of FR systems towards digital and physical face manipulations, this thesis investigates in more detail the impact of different digital and physical face manipulations on FR systems. Moreover, this thesis proposes novel algorithms for detecting digital and physical face manipulations by both developing methods focusing on detecting specific types of digital face manipulations or physical face manipulations and by developing highly generalisable methods for the unified detection of attacks on FR systems across both the digital and physical domains. Furthermore, methods for adapting existing pre-trained FR systems to be more resilient towards digital and physical face manipulations are proposed, and it is investigated how algorithms might be used in collaboration with human examiners to detect digital face manipulations. Comprehensive and realistic experimental evaluations demonstrate the capabilities of the proposed algorithms for detecting different digital and physical face manipulations and for making FR systems more resistant towards such manipulations.
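As a simplified illustration of one family of detection approaches (not necessarily the thesis' methods), the sketch below outlines differential morphing-attack detection: the deep-embedding similarity between a suspected reference image and a trusted live probe is thresholded. `extract_embedding` is a hypothetical stand-in for a real FR feature extractor, and the threshold is purely illustrative.

```python
# Toy differential morphing-attack detection sketch. A morphed reference
# only partially matches each contributing identity, so its similarity to a
# single bona fide probe tends to fall into a mid range.
import numpy as np

rng = np.random.default_rng(4)
_projection = rng.standard_normal((512, 64 * 64))   # toy stand-in "backbone"

def extract_embedding(image):
    # Hypothetical stand-in for an FR feature extractor: fixed random
    # projection of a 64x64 grey-scale face crop to a 512-d embedding.
    return _projection @ image.reshape(-1)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_suspected_morph(reference_img, live_probe_img, threshold=0.8):
    sim = cosine_similarity(extract_embedding(reference_img),
                            extract_embedding(live_probe_img))
    return sim < threshold                 # illustrative decision threshold

probe = rng.standard_normal((64, 64))
accomplice = rng.standard_normal((64, 64))
morphed = 0.5 * probe + 0.5 * accomplice   # toy "morphed" reference
print(is_suspected_morph(morphed, probe))  # True: mid-range similarity
print(is_suspected_morph(probe, probe))    # False: bona fide reference
```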
Conventional communication systems like the cellular mobile system rely on a preinstalled and permanently available infrastructure. This dependency makes them vulnerable to natural catastrophes as well as terrorist and cyber attacks. If the infrastructure is damaged, communication can come to a standstill. In contrast, mobile ad-hoc networks (MANETs) do not require any infrastructure and allow every node to exchange messages directly with all remaining nodes. Because of this, MANETs are widely considered to be disaster-resistant and highly flexible.
If traditional multiple-access schemes are employed, MANETs scale poorly with respect to the total number of nodes. However, cooperative communication on the physical layer can enable a linear scaling behaviour. To this end, transmitting nodes support each other by sending simultaneously in the same interference domain. Until now, cooperative communication approaches have mostly been considered theoretically; in particular, implementation strategies for practical systems have not been discussed. In this regard, many challenges arise, the greatest difficulty being that multiple imperfections occur simultaneously.
In this thesis, communication systems are presented that enable distributed cooperative communication (DisCoCom) for practical MANETs. Besides a single-carrier and a multi-carrier system tailored for standalone single-antenna nodes, a single-carrier communication system for standalone multi-antenna nodes is proposed. All systems are extremely robust against typical imperfections such as multiple carrier frequency and timing offsets. Furthermore, a clustering strategy is exemplified that significantly reduces the network energy consumption required to realize DisCoCom. Overall, these systems enable distinctly more robust, effective and efficient broadcasts in MANETs, which can pave the way towards a linear scaling behaviour. The applicability of the introduced systems is shown not only by appropriate simulations, but also with an SDR-based MANET demonstration system that has been implemented specifically for this purpose.
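To give an impression of the core challenge, the toy simulation below superimposes the same symbol stream sent by several cooperating nodes, each with its own carrier frequency offset, timing offset and channel gain. All parameters are illustrative assumptions and the sketch does not reflect any of the proposed system designs.

```python
# Toy simulation of the central DisCoCom impairment: several nodes transmit
# the same QPSK stream simultaneously, each with its own carrier frequency
# offset (CFO), timing offset and complex channel gain.
import numpy as np

rng = np.random.default_rng(2)
num_nodes, num_samples, fs = 4, 1000, 1e6          # fs: sample rate in Hz
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2),
                     num_samples)

t = np.arange(num_samples) / fs
received = np.zeros(num_samples, dtype=complex)
for _ in range(num_nodes):
    cfo = rng.uniform(-500.0, 500.0)               # Hz, per-node oscillator error
    delay = rng.integers(0, 8)                     # samples, per-node timing offset
    gain = rng.rayleigh(0.5) * np.exp(2j * np.pi * rng.random())
    received += gain * np.roll(symbols, delay) * np.exp(2j * np.pi * cfo * t)

received += 0.05 * (rng.standard_normal(num_samples)
                    + 1j * rng.standard_normal(num_samples))
# A practical receiver must detect the packet and equalise this superposition
# without knowing the individual CFOs, delays or channel gains.
```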
The development of large-scale biometric identification systems that provide privacy protection for the enrolled subjects is an ongoing concern. Most importantly, biometric technologies demand interoperability and deployments that assure maximum usability, including multi-modal biometric solutions. In the context of privacy protection, several Biometric Template Protection (BTP) schemes have been proposed in the past. However, these schemes appear to be unsuitable for indexing (Workload Reduction (WR)) in biometric identification systems. As a consequence, they have been utilised in biometric identification systems performing exhaustive searches (i.e. one-to-many search), which represents a time-consuming task and hence a high computational workload dominated by the number of comparisons.
Additionally, novel privacy protection schemes have recently been developed in the literature. These approaches appear promising, but have not yet been evaluated in detail, especially in terms of their privacy protection capabilities. Motivated by the acceleration of large-scale protected biometric database searches and the investigation of privacy enhancement, this thesis examines in more detail indexing schemes operating on protected templates for different biometric characteristics, as well as some limitations in privacy protection. Extensive experimental evaluations demonstrate that novel BTP-agnostic and biometric characteristic (BC)-agnostic indexing schemes can successfully reduce the computational workload of a biometric system while preserving biometric security and performance. Novel attacks have also been proposed in the context of privacy protection.
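As a rough illustration of the indexing idea (not the thesis' BTP-agnostic schemes), the sketch below buckets fixed-length templates by random-hyperplane hashing so that a lookup only compares against one bucket instead of the whole database. The dimensions, bit count and noise level are assumptions.

```python
# Workload-reduction sketch: random-hyperplane LSH indexing over stand-in
# protected templates, replacing the exhaustive 1:N search with one bucket.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
dim, num_bits, db_size = 128, 12, 100_000
planes = rng.standard_normal((num_bits, dim))      # random hyperplanes

def bucket_key(template):
    # Sign pattern of the projections forms a short binary bucket key.
    bits = (planes @ template) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

database = rng.standard_normal((db_size, dim))     # stand-in protected templates
index = defaultdict(list)
for subject_id, template in enumerate(database):
    index[bucket_key(template)].append(subject_id)

probe = database[42] + 0.1 * rng.standard_normal(dim)  # noisy sample of subject 42
candidates = index[bucket_key(probe)]
print(f"compare against {len(candidates)} of {db_size} templates")
# In practice, multi-probing neighbouring buckets compensates for key-bit
# flips caused by biometric variance.
```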
This work investigates concepts for the parallel use of different transmission technologies for direct communication between road users. In V2X communication, messages can be exchanged between road users, e.g. to warn of hazardous locations or impending conflicts. A hybrid use of different transmission technologies is intended to increase the quality and reliability of the information exchange. The goal is to develop concepts that combine the properties of the respective technologies in order to improve the availability of information, particularly in time- and safety-critical traffic situations. The focus lies on the requirements regarding communication range, data transmission latency, and the reduction or avoidance of overload situations on individual transmission channels. The dynamic and situation-dependent organization of communication using multiple transmission paths is at the center of consideration. In addition, aspects such as the inclusion of vulnerable road users, such as pedestrians or cyclists, are considered. The developed concepts are examined on the basis of concrete scenarios with regard to their potential to support safety-critical use cases. For this purpose, a simulation environment is used that implements both the traffic spaces and their surroundings as well as the transmission technologies and radio channels used. Stochastic transmission models, derived from measurements in real traffic environments, are also employed.
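To illustrate the kind of situation-dependent decision such a hybrid scheme has to make, the sketch below selects transmission technologies per message based on a latency budget and channel load, with redundant parallel transmission for safety-critical messages. The technology names, thresholds and load model are assumptions for the sketch, not concepts from the thesis.

```python
# Illustrative decision logic for hybrid V2X message dispatch.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    typical_latency_ms: float
    load: float            # 0.0 .. 1.0 channel busy ratio

def select_channels(latency_budget_ms, safety_critical, channels):
    # Keep every channel that can meet the latency budget and is not congested.
    usable = [c for c in channels
              if c.typical_latency_ms <= latency_budget_ms and c.load < 0.6]
    if safety_critical and len(usable) > 1:
        return usable                  # redundant parallel transmission
    # Otherwise one channel suffices; fall back to the fastest if none qualify.
    return usable[:1] or [min(channels, key=lambda c: c.typical_latency_ms)]

channels = [Channel("ITS-G5", 10.0, 0.3), Channel("LTE-V2X PC5", 20.0, 0.7),
            Channel("cellular Uu", 50.0, 0.2)]
print([c.name for c in select_channels(100.0, True, channels)])
# -> ['ITS-G5', 'cellular Uu']: the congested channel is avoided
```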
The manufacturing industry is undergoing a transformation marked by the emergence of the Industry 4.0 and Industry 5.0 paradigms, which are characterized by the integration and automation of machinery. Thereby, the machinery evolves into Cyber-Physical Systems (CPSs). These CPSs consist of software and hardware modules implementing complex manufacturing processes. The ongoing integration of machinery and external technologies, e.g., the Industrial Internet of Things (IIoT), has led to an evolving Smart Manufacturing (SM) environment. At the same time, legacy machinery, so-called brownfield machinery, exists side by side with modern CPSs. Brownfield machinery might be integrated into the modern manufacturing process by retrofitting. Therefore, the evolution of the SM domain driven by the Industry 4.0 and Industry 5.0 paradigms leads to a more complex SM environment. Moreover, the integration and ongoing adaptation of technologies and processes introduce novel relationships and dependencies between the employed machinery and systems. Fault Diagnosis (FD) in such a complex SM environment becomes more time-consuming and laborious. A side effect of this ongoing evolution is the advancing capability of the machinery to produce data. As a result, not only complex data but also vast quantities of it have to be analyzed during FD. The search for the origin of a fault is challenging, and technical challenges in the SM environment further hinder a thorough FD. For instance, the available bandwidth for data transmission does not match the capability of the machinery to produce vast data quantities. The practical challenge is therefore to focus on specific areas of the SM environment while choosing a reasonable granularity in data surveillance that covers the fault traces without losing too much information. In addition, any FD depends heavily on the domain knowledge of the professionals entrusted with the FD task. On top of that, there is economic pressure, which increases the strain on the professionals involved, as unexpected downtime and lost production quantity translate directly into economic loss.
This thesis introduces context-aware FD to mitigate the increased complexity of the SM environment and support the professionals in their work. By supporting the professionals, the time for FD can be reduced, which results in faster fault correction and reduced cost-intensive production downtimes. The Context-Aware Diagnosis in Smart Manufacturing (TAOISM) Visual Analytics (VA) model underpins the context-aware FD. The TAOISM VA model is the theoretical foundation for context-aware FD and defines the data layer, the models layer, the visualization layer and the knowledge layer for SM. The VA model enables the definition of context, context models and context hierarchies for integration into the respective layers. The main idea behind context-aware FD is to use the narrowing character of the context definition to slice vast amounts of data into manageable, context-separated data groups. The context model thereby acts as a virtual boundary across machinery and systems, which encloses the physical domain (hardware) and the immaterial domain (software) equally. Further, the thesis focuses on contextual faults, which arise from context model violations, and proposes approaches for collecting contextual data. The automated building of context models and the extraction and transformation of contextual data are also part of the thesis. Employing the context models impacts each layer of the proposed TAOISM VA model. For each layer, various approaches show the impact of the context models and their employment in three different application scenarios for FD in SM. The performed research is tested and verified in the scenarios of Robotics Application Development (RAD), Maintenance of Industrial Inspection Machines (MIIM) and Abnormal Event Management in Production Lines (AEMPL). Along with employing context models, data augmentation with context models is proposed. Among other benefits, the presented data augmentation technique is able to balance undersampled datasets, which would enable a reduction of data recordings for any context-aware FD in the future. The data augmentation technique also addresses the existing inaccuracies in an SM environment, which impact the quality of any employed Artificial Intelligence (AI). Another approach targets the unsupervised selection of production-relevant variables in order to focus FD-related data recordings and surveillance automatically on areas of the SM environment that are active during production, without any domain knowledge involved. The underlying hypothesis, which was proven right, was that faults, especially contextual faults, occur more often on active software and hardware modules. A further challenge arising from the vast amount of data is that labeling data for AI becomes uneconomical, even for small fault cases. As a result, evaluating any AI model in SM becomes challenging, as standard measures, e.g., accuracy, precision, recall and F1-score, cannot be applied. For this case, the thesis proposes novel AI performance metrics that decouple comparability and correctness to enable the evaluation of AI models in an SM environment. All these contributions have led to the development of two distinct Proofs of Concept (PoCs). The PoCs are the reference implementations of the context-aware FD and reflect a knowledge-based FD Expert System (ES) and an unsupervised data-driven FD system. The latter was part of the thorough evaluation of the context-aware FD by two groups of domain experts and junior professionals.
The successful qualitative evaluation not only points towards a working context-aware FD but also unveils future research directions and a future vision for SM. Additional domain expert interviews expose views on the future relevance of context-aware FD in SM. In general, the evaluation indicates that context-aware FD has versatile applicability, usability and suitability for FD in SM.
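To make the central slicing idea tangible, the toy sketch below groups a machine data stream by an active context and flags values that violate an illustrative contextual rule. The field names, contexts and rule are assumptions, not the TAOISM implementation.

```python
# Minimal sketch of context-separated data slicing for fault diagnosis:
# the context acts as a virtual boundary, so only the data of the context
# in which a violation occurred has to be inspected.
import pandas as pd

records = pd.DataFrame({
    "timestamp": range(6),
    "module":    ["conveyor", "conveyor", "gripper", "gripper", "press", "press"],
    "context":   ["setup", "production", "production", "production",
                  "setup", "production"],
    "value":     [0.1, 0.9, 0.4, 7.5, 0.2, 0.3],
})

# Slice the stream into manageable, context-separated groups instead of
# analysing the whole recording at once.
for (ctx, module), group in records.groupby(["context", "module"]):
    if (group["value"] > 5.0).any():       # illustrative contextual-fault rule
        print(f"contextual fault candidate: {module} during '{ctx}'")
```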
The overall objective of this dissertation is to enable a more efficient and effective point cloud and mesh partition for artists and 3D application developers. In this dissertation, 3D scans are assumed to be the source material of the 3D application development, reducing the manual and time-consuming modelling of virtual objects. Furthermore, the scanned data is assumed to be processed into a point cloud and reconstructed into a polygon mesh. The mesh has to be partitioned into the objects of interest in order to design specific interactions within a game engine. Interviews revealed that the partition is conducted manually on a mesh with 3D manipulation software, which is time-consuming. The partition creation should be automated to increase efficiency and effectiveness. Freely available point cloud and mesh partition algorithms require an expert with appropriate programming skills and field knowledge, which makes them difficult to use. More precisely, the algorithms cannot be used in existing workflows as they are not implemented in common graphical 3D manipulation software. Besides these problems, the partition automation should work on real-world data and have a low runtime to raise efficiency. Different sub-research objectives were formulated from these problems and requirements, leading to novel approaches in the domains of: (a) sequential partition creation with deep reinforcement and imitation learning, (b) episodic partition creation with graph neural networks, (c) match-based reward calculation and (d) synthetic scene generation. One sub-research objective is the replacement of a human expert with an agent. In this context, a novel deep reinforcement learning (DRL) partition framework is presented. Experiments were conducted using this framework combined with the region growing algorithm and synthetic scenes created by a self-developed scene generator. The maximum reward could almost be achieved with a fine-tuned PointNet and by evaluating the wall and non-wall objects separately. This approach is not applicable to real-world scenes, which is necessary to achieve the efficiency and effectiveness objective. Therefore, another DRL partition approach is introduced, where an agent unifies superpoints in the so-called superpoint growing environment. The point cloud is divided into superpoints, which are unified into the objects of interest by an agent. The experimental results show that this approach can be applied to real-world scenes. Besides the application of DRL, an imitation learning approach was developed, increasing the agent's performance in the superpoint growing environment. The runtime in the sequential superpoint growing environment is poor, as each union decision requires a neural network call. Hence, a further sub-research objective is to improve the runtime. An episodic environment was developed as a solution, requiring only one graph neural network call. Similarities between superpoints are estimated in this environment and passed to a union algorithm. The differences between two graph neural network architectures and two union algorithms were experimentally investigated. According to the results, calculating the superpoint similarities with a correlation of the embedded node features is more robust than similarity estimation with a sigmoid activation function. The reward function used in the DRL partition approaches was realised by a matching procedure.
As this function influences the partition quality, another sub-research objective is to investigate the differences between various match types. Matching functions from the literature were compared, and an additional match type was introduced. The usage of different match types in the learning process was experimentally evaluated. Although an agent gets more feedback with all match types, the best results (visually and in terms of partition size) were achieved by using only first-order matches in the reward function. The synthetic scenes of the region growing approach lack realism, as the lighting information is ignored, which can be important for training networks for the partition task. Therefore, a further sub-research objective is to develop a scene generator in which the lighting is taken into account. After its development, the generated scenes were experimentally evaluated in a pre-training task. It turned out that the lighting information is important for pre-training, as higher accuracies were achieved. Furthermore, faster convergence can be achieved with the pre-trained network than by training a network on a target dataset from scratch.
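A simplified version of the match-based reward discussed above could look like the sketch below: each predicted segment is matched only to the ground-truth object with which it overlaps most (its first-order match), and the IoU of these matches is averaged. This is an illustrative reconstruction under stated assumptions, not the thesis' matching procedure.

```python
# First-order match reward sketch over per-point labels.
import numpy as np

def first_order_match_reward(pred_labels, gt_labels):
    reward = 0.0
    segments = np.unique(pred_labels)
    for seg in segments:
        mask = pred_labels == seg
        # Ground-truth object that dominates this predicted segment.
        objs, counts = np.unique(gt_labels[mask], return_counts=True)
        best = counts.max()
        union = mask.sum() + (gt_labels == objs[counts.argmax()]).sum() - best
        reward += best / union        # IoU of the first-order match only
    return reward / len(segments)

pred = np.array([0, 0, 0, 1, 1, 2, 2, 2])
gt   = np.array([0, 0, 1, 1, 1, 2, 2, 2])
print(first_order_match_reward(pred, gt))   # ~0.78: segment 2 matches perfectly
```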
Another sub-research objective targets the development of a usable partition interface. In this context, the Blender add-on OpenXtract was developed, containing five open-source point cloud partition algorithms. The partition algorithms were extended to approximate geodesic distances so that the edges of meshes are used. An experiment has shown that the extended algorithms produce higher accuracies, which is considered an increase in effectiveness. Moreover, unstructured interviews revealed that OpenXtract can improve the effectiveness and efficiency of the partition creation.
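One common way to approximate geodesic distances along mesh edges, sketched below, is to run Dijkstra's algorithm over the edge graph with Euclidean edge lengths, so that distances follow the surface instead of jumping across free space. This illustrates the general technique; the thesis' concrete extension of the partition algorithms may differ.

```python
# Geodesic-distance approximation via Dijkstra over the mesh edge graph.
import heapq
import numpy as np

def geodesic_distances(vertices, edges, source):
    # vertices: (n, 3) array; edges: iterable of (i, j) vertex index pairs.
    adjacency = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = float(np.linalg.norm(vertices[i] - vertices[j]))
        adjacency[i].append((j, w))
        adjacency[j].append((i, w))

    dist = {i: float("inf") for i in adjacency}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale queue entry
        for v, w in adjacency[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
# Along the edge path 0-1-2-3 the distance to vertex 3 is 3.0, although the
# straight-line (Euclidean) distance would only be 1.0.
print(geodesic_distances(verts, [(0, 1), (1, 2), (2, 3)], source=0)[3])
```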