Computer Science
In order to develop a machine learning algorithm (MLA) that can support ophthalmologists in diagnosing glaucoma, a carefully selected dataset is required that is based on clinically confirmed glaucoma patients as well as borderline cases (e.g., patients with suspected glaucoma). Clinical annotation of datasets usually comes at the expense of data volume, which results in poorer algorithm performance. This study aimed to evaluate the application of an MLA for the automated classification of physiological optic discs (PODs), glaucomatous optic discs (GODs), and glaucoma-suspected optic discs (GSODs). Annotation of the data into the three groups was based on the diagnosis made in clinical practice by a glaucoma specialist. Color fundus photographs and 14 types of metadata (including visual field testing, retinal nerve fiber layer thickness, and cup–disc ratio) of 1168 eyes from 584 patients (POD = 321, GOD = 336, GSOD = 310) were used for the study. Machine learning (ML) was performed in the first step with the color fundus photographs only and in the second step with the images and metadata combined. Sensitivity, specificity, and accuracy of the classification of GSOD vs. GOD and POD vs. GOD were evaluated. Classification of GOD vs. GSOD and GOD vs. POD performed in the first step yielded AUCs of 0.84 and 0.88, respectively. By combining the images and metadata, the AUCs increased to 0.92 and 0.99, respectively. With this combination, excellent performance of the MLA can be achieved despite the small amount of data, thus supporting ophthalmologists with glaucoma diagnosis.
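A minimal sketch of how image-derived features and tabular metadata might be combined in such a two-step comparison is given below; the arrays, feature dimensions, and classifier are illustrative assumptions and not the authors' actual pipeline.

# Illustrative sketch: classification from images alone vs. images plus metadata.
# 'image_features' and 'metadata' are hypothetical arrays; labels encode POD/GSOD/GOD.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1168
image_features = rng.normal(size=(n, 512))   # e.g., CNN embeddings of the fundus photographs
metadata = rng.normal(size=(n, 14))          # e.g., visual field, RNFL thickness, cup-disc ratio
labels = rng.integers(0, 3, size=n)          # 0 = POD, 1 = GSOD, 2 = GOD

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Step 1: images only
auc_images = cross_val_score(clf, image_features, labels, cv=5, scoring="roc_auc_ovr").mean()

# Step 2: images combined with metadata
combined = np.hstack([image_features, metadata])
auc_combined = cross_val_score(clf, combined, labels, cv=5, scoring="roc_auc_ovr").mean()

print(auc_images, auc_combined)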
Biometric fingerprint identification hinges on the reliability of its sensors; however, calibrating and standardizing these sensors poses significant challenges, particularly with regard to repeatability and data diversity. To tackle these issues, we propose methodologies for fabricating synthetic 3D fingerprint targets, or phantoms, that closely emulate real human fingerprints. These phantoms enable the precise evaluation and validation of fingerprint sensors under controlled and repeatable conditions. Our research employs laser engraving, 3D printing, and CNC machining techniques, utilizing different materials. We assess the phantoms’ fidelity to synthetic fingerprint patterns, intra-class variability, and interoperability across different manufacturing methods. The findings demonstrate that a combination of laser engraving or CNC machining with silicone casting produces finger-like phantoms with high accuracy and consistency for rolled fingerprint recordings. For slap recordings, direct laser engraving of flat silicone targets excels, and in the contactless fingerprint sensor setting, 3D printing and silicone filling provide the most favorable attributes. Our work enables a comprehensive, method-independent comparison of various fabrication methodologies, offering a unique perspective on the strengths and weaknesses of each approach. This facilitates a broader understanding of fingerprint recognition system validation and performance assessment.
We address the need for a large-scale database of children’s faces by using generative adversarial networks (GANs) and face-age progression (FAP) models to synthesize a realistic dataset referred to as “HDA-SynChildFaces”. To this end, we propose a processing pipeline that initially utilizes StyleGAN3 to sample adult subjects, which are subsequently progressed to children of varying ages using InterFaceGAN. Intra-subject variations, such as facial expression and pose, are created by further manipulating the subjects in their latent space. Additionally, the pipeline evenly distributes the subjects across races, allowing the generation of a dataset that is balanced and fair with respect to race. The resulting HDA-SynChildFaces consists of 1,652 subjects and 188,328 images, each subject being present at various ages and with many different intra-subject variations. We then evaluated the performance of various facial recognition systems on the generated database and compared the results of adults and children at different ages. The study reveals that children consistently perform worse than adults on all tested systems and that the degradation in performance is proportional to age. Additionally, our study uncovers biases in the recognition systems, with Asian and Black subjects and females performing worse than White and Latino-Hispanic subjects and males.
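The latent-space editing step described above can be pictured with the following schematic sketch; the latent code, direction vectors, and strengths are hypothetical placeholders and do not reproduce the actual StyleGAN3/InterFaceGAN models or learned boundaries.

# Illustrative sketch of latent-space editing for age progression and intra-subject variation.
import numpy as np

rng = np.random.default_rng(42)
latent_dim = 512
w_adult = rng.normal(size=latent_dim)          # stands for a sampled adult subject's latent code
age_direction = rng.normal(size=latent_dim)    # placeholder for a learned age boundary
pose_direction = rng.normal(size=latent_dim)   # placeholder for a learned pose boundary

def edit(latent, direction, strength):
    # Move a latent code along a semantic direction by a given strength.
    return latent + strength * direction / np.linalg.norm(direction)

# Progress the adult towards younger ages, then add intra-subject variations such as pose.
child_versions = [edit(w_adult, age_direction, s) for s in (-1.0, -2.0, -3.0)]
variations = [edit(w, pose_direction, rng.uniform(-0.5, 0.5)) for w in child_versions]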
Solar phase scintillation and solar amplitude scintillation are fundamentally important in deep space mission operations for designing a communication system capable of transmitting signals when the signal path is close to the Sun. In a previous paper, ESA’s BepiColombo measurement data, recorded when X-band and Ka-band signals propagated close to the Sun at a small Sun-Earth-Probe (SEP) angle during the superior solar conjunction campaign in March 2021 of the cruise phase to Mercury, were analyzed in terms of the power spectral density of the solar phase scintillation and compared with Woo’s solar phase scintillation theory. In this paper, the solar amplitude scintillation is analyzed by calculating both the power spectral density and the scintillation index. The scintillation indices derived from these measurement data fit NASA JPL’s scintillation index model.
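For reference, the amplitude scintillation index used in such analyses is commonly defined as the normalized standard deviation of the received signal intensity; a standard definition (not quoted from the paper) is

m = \sqrt{\frac{\langle I^{2}\rangle - \langle I\rangle^{2}}{\langle I\rangle^{2}}},

where I is the received signal intensity and the angle brackets denote time averaging over the observation interval.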
The relevance of Machine Intelligence, a.k.a. Artificial Intelligence (AI), is undisputed at the present time. This is due not only to AI successes in research but, more prominently, to its use in day-to-day practice. In 2014, we started a series of annual workshops at the Leibniz Zentrum für Informatik, Schloss Dagstuhl, Germany, initially focusing on Corporate Semantic Web and later widening the scope to Applied Machine Intelligence. This article presents a number of AI applications from various application domains, including medicine, industrial manufacturing, and the insurance sector. Best practices, current trends, possibilities, and limitations of new AI approaches for developing AI applications are also presented. The focus is on the areas of natural language processing, ontologies, and machine learning. The article concludes with a summary and outlook.
Signals and images with discontinuities appear in many problems in such diverse areas as biology, medicine, mechanics, and electrical engineering. The concrete data are often discrete, indirect, and noisy measurements of some quantities describing the signal under consideration. A frequent task is to find the segments of the signal or image, which corresponds to finding the discontinuities or jumps in the data. Methods based on minimizing the piecewise constant Mumford–Shah functional—whose discretized version is known as the Potts energy—are advantageous in this scenario, in particular in connection with segmentation. However, due to their non-convexity, the minimization of such energies is challenging. In this paper, we propose a new iterative minimization strategy for the multivariate Potts energy dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments.
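For orientation, a commonly used formulation of the Potts problem for indirect, noisy measurements reads as follows (the notation is chosen here for illustration and is not quoted from the paper):

\min_{u}\; \gamma\,\|\nabla u\|_{0} + \|Au - f\|_{2}^{2},

where u is the piecewise constant candidate, \|\nabla u\|_{0} counts its jumps, A models the indirect measurement process, f denotes the noisy data, and \gamma > 0 balances the number of jumps against data fidelity.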
Machine intelligence, a.k.a. artificial intelligence (AI), is one of the most prominent and relevant technologies today. It is in everyday use in the form of AI applications and has a strong impact on society. This article presents selected results of the 2020 Dagstuhl workshop on applied machine intelligence. Selected AI applications in various domains, namely culture, education, and industrial manufacturing, are presented. Current trends, best practices, and recommendations regarding AI methodology and technology are explained. The focus is on ontologies (knowledge-based AI) and machine learning.
The awareness of emerging trends is essential for strategic decision making because technological trends can affect a firm’s competitiveness and market position. The rise of artificial intelligence methods allows gathering new insights and may support these decision-making processes. However, it is essential to keep the human in the loop of these complex analytical tasks, which often lack an appropriate interaction design. Including special interactive designs for technology and innovation management is therefore essential for successfully analyzing emerging trends and using this information for strategic decision making. A combination of information visualization, trend mining, and interaction design can support human users in exploring, detecting, and identifying such trends. This paper enhances and extends a previously published first approach for integrating, enriching, mining, analyzing, identifying, and visualizing emerging trends for technology and innovation management. We introduce a novel interaction design by investigating the main ideas from technology and innovation management, enabling a more appropriate interaction approach for technology foresight and innovation detection.
When NoSQL database systems are used in an agile software development setting, data model changes occur frequently, and thus data is routinely stored in different versions. The management of versioned data creates an overhead that can impede software development. Several data migration strategies exist that handle legacy data differently during data accesses, each with certain advantages and disadvantages. Depending on the requirements of the software application, we evaluate and compare the different migration strategies through metrics like migration costs and latency as well as precision and recall. Ideally, exactly that strategy should be selected whose characteristics fulfill the service-level agreements and match the migration scenario, which depends on the query workload and on the data model changes that imply an evolution of the database schema. In this paper, we present a methodology for self-adapting data migration, which automatically adjusts migration strategies and their parameters with respect to the migration scenario and the service-level agreements, thereby contributing to the self-management of database systems and supporting agile development.
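A simplified sketch of such a self-adapting selection step is shown below; the strategy names, cost estimates, and SLA thresholds are illustrative placeholders and not the system's actual interface or cost model.

# Illustrative sketch: pick the migration strategy whose estimated metrics satisfy the SLA.
from dataclasses import dataclass

@dataclass
class Estimate:
    migration_cost: float   # e.g., expected number of entity migrations
    read_latency_ms: float  # expected latency of reads that hit legacy data

def estimate(strategy, workload):
    # Hypothetical cost model; in practice derived from the observed migration scenario.
    models = {
        "eager":       Estimate(migration_cost=1.0 * workload["entities"], read_latency_ms=2.0),
        "lazy":        Estimate(migration_cost=0.2 * workload["entities"], read_latency_ms=9.0),
        "incremental": Estimate(migration_cost=0.6 * workload["entities"], read_latency_ms=4.0),
    }
    return models[strategy]

def select_strategy(workload, sla):
    candidates = []
    for strategy in ("eager", "lazy", "incremental"):
        est = estimate(strategy, workload)
        if est.read_latency_ms <= sla["max_read_latency_ms"]:
            candidates.append((est.migration_cost, strategy))
    # Among SLA-compliant strategies, choose the cheapest one; fall back to lazy migration.
    return min(candidates)[1] if candidates else "lazy"

print(select_strategy({"entities": 1_000_000}, sla={"max_read_latency_ms": 5.0}))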
Smart factories are complex, and the growing complexity of the employed cyber-physical systems increases it further. Cyber-physical systems produce large amounts of data that are hard to capture and challenging to analyze. Real-time recording of all data is not possible due to limited network capabilities; these limited capabilities are also the reason why active surveillance during fault diagnosis can introduce a chain of faults. Such introduced faults may slow down production or lead to an outage of the production line. Here, we present a novel approach that automatically selects production-relevant shop floor parameters to decrease the number of surveyed variables while maintaining fault-diagnosis quality without overloading the network. We were able to achieve higher throughput, mitigate communication losses, and prevent the disruption of factory instructions. Our approach uses an autoencoder ensemble with minority voting to differentiate between normal (always-on) variables and production variables that may yield a higher entropy. The approach has been tested in a production-equivalent smart factory and was cross-validated by a domain expert.
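A condensed sketch of the variable-selection idea follows: an ensemble of small autoencoders scores each shop floor variable by its reconstruction error, and a minority vote flags candidate production variables. The data, model sizes, thresholds, and voting rule are illustrative assumptions, not those of the deployed system.

# Illustrative sketch: autoencoder ensemble with minority voting over per-variable
# reconstruction errors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 40))                  # shop floor recordings: 2000 samples, 40 variables
X[:, :5] += np.sin(np.arange(2000))[:, None]     # a few variables carry production dynamics

votes = np.zeros(X.shape[1])
for seed in range(5):                            # ensemble of five autoencoders
    ae = MLPRegressor(hidden_layer_sizes=(16, 8, 16), max_iter=500, random_state=seed)
    ae.fit(X, X)                                 # train each autoencoder to reconstruct the input
    errors = np.mean((ae.predict(X) - X) ** 2, axis=0)   # per-variable reconstruction error
    votes += errors > np.median(errors)                  # vote for hard-to-reconstruct variables

# Minority voting (one possible reading): a variable is treated as a production-variable
# candidate as soon as a small minority of ensemble members flags it.
production_relevant = np.where(votes >= 1)[0]
print(production_relevant)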