FG Kommunikationstechnik
Artificial intelligence is currently making its way into all areas of society and life. But what is its current standing in the field of non-destructive testing (NDT)? What can AI accomplish? Which challenges must be successfully mastered? Is there one AI method that is suitable for NDT in principle? In component and material testing during and immediately after production, in the monitoring of wear parts in machines and plants, or in damage detection on parts and components, NDT methods deliver data that must be evaluated. Although very powerful toolkits are now available, the optimal use of AI for an NDT method often requires more. Most customers do not just want a solution to their problem; they want to understand why the AI decided one way and not another, why the classifier assigned the component to a particular class (e.g., good/bad or as-new/worn/defective). Depending on the classification task as well as the type and amount of available data, a suitable method can be determined. Machine learning methods are used to build models, which form the basis of the AI classification procedures.
This contribution provides an overview of AI methods and their applications in non-destructive testing. Numerous examples and results are presented to demonstrate the diversity of their use in NDT and the range of existing possibilities.
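To illustrate the kind of interpretable classification the abstract alludes to, the following minimal sketch (our own illustration, not taken from the paper; the feature names and data are hypothetical) trains a shallow decision tree on synthetic NDT-style features and prints the decision rules, addressing the "why did the classifier decide this way?" question:

```python
# Minimal sketch: an interpretable good/bad classifier for NDT-style data.
# Features and data are hypothetical; the paper does not prescribe this setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical features extracted from NDT measurements:
# echo amplitude, time of flight, signal energy.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # 1 = "good", 0 = "bad" (synthetic rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the decision path human-readable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print(export_text(clf, feature_names=["amplitude", "time_of_flight", "energy"]))
```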
In this paper we give instructions on how to write a minimalist grammar (MG). In order to present the instructions as an algorithm, we use a variant of context-free grammars (CFGs) as an input format. We can exclude overgeneration if the CFG has no recursion, i.e., no non-terminal can (indirectly) derive a right-hand side containing itself. The constructed MGs utilize licensors/licensees as a special way of exception handling. A CFG format for a derivation A_eats_B ↦* peter_eats_apples, where A and B generate noun phrases, normally leads to overgeneration, e.g., i_eats_apples. In order to avoid overgeneration, a CFG would need many non-terminal symbols and rules that mainly produce the same word, just to handle exceptions. In our MGs, however, we can summarize CFG rules that produce the same word in one item and handle exceptions by a proper distribution of licensees/licensors. The difficulty with this technique is that in most generations the majority of licensees/licensors is not needed but still has to be triggered somehow. We solve this problem with ε-items called adapters.
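The precondition on the input CFG (no recursion) can be checked mechanically. The sketch below is our own illustration, not the paper's algorithm; the grammar encoding (a dict from non-terminals to lists of right-hand-side token tuples) is an assumption made for the example:

```python
# Sketch: detect recursion in a CFG, i.e. whether some non-terminal
# can (indirectly) derive a right-hand side containing itself.
# Grammar format is our own choice: non-terminal -> list of RHS token tuples.

def is_recursive(grammar):
    # Edge A -> B whenever non-terminal B occurs on some right-hand side of A.
    edges = {a: {tok for rhs in rules for tok in rhs if tok in grammar}
             for a, rules in grammar.items()}

    def reaches_itself(start):
        seen, stack = set(), [start]
        while stack:
            for nxt in edges[stack.pop()]:
                if nxt == start:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return any(reaches_itself(a) for a in grammar)

# The example derivation from the abstract: A eats B  ↦*  peter eats apples.
cfg = {
    "S": [("A", "eats", "B")],
    "A": [("peter",), ("i",)],
    "B": [("apples",)],
}
assert not is_recursive(cfg)  # no non-terminal derives itself
```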
Artificial intelligence has experienced a technological breakthrough in science, industry, and everyday life over the recent decades. The advancements can be credited to the ever-increasing availability and miniaturization of computational resources, which has resulted in exponential data growth. However, because of the insufficient amount of data in some cases, employing machine learning to solve complex tasks is not straightforward or even possible. As a result, machine learning with small data is gaining importance in data science and in applications across several fields. The authors focus on interpreting the general term "small data" and its role in engineering and industrial applications. They give a brief overview of the most important industrial applications of machine learning and small data. Small data is defined in terms of various characteristics in comparison with big data, and a machine learning formalism is introduced. Five critical challenges of machine learning with small data in industrial applications are presented: unlabeled data, imbalanced data, missing data, insufficient data, and rare events. Based on these definitions, an overview of the considerations in domain representation and data acquisition is given, along with a taxonomy of machine learning approaches in the context of small data.
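One of the five challenges, imbalanced data, has a standard mitigation that is easy to show in code. The sketch below is a generic illustration under assumed data, not a method from the paper: classes are reweighted inversely to their frequency so that errors on the rare class cost proportionally more.

```python
# Sketch: counter an imbalanced small dataset with inverse-frequency
# class weights (generic illustration; not the paper's method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical small, imbalanced dataset: 190 "normal" vs. 10 "rare event".
X = np.vstack([rng.normal(0, 1, (190, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 190 + [1] * 10)

# class_weight="balanced" scales each class by n_samples / (n_classes * n_c),
# so the minority class is not ignored during fitting.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(clf.score(X, y))
```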
The renaissance of artificial intelligence (AI) in the last decade can be credited to several factors, but chief among these is the ever-increasing availability and miniaturization of computational resources. This process has contributed to the rise of ubiquitous computing by popularizing smart devices and the Internet of Things in everyday life. In turn, this has resulted in the generation of increasingly enormous amounts of data. The tech giants are harvesting and storing data on their clients' behavior and, at the same time, raising concerns about data privacy and protection. Suddenly, such an abundance of data and computing power, which was unimaginable a few decades ago, has caused a revival of old machine learning paradigms and the invention of new ones, like deep learning. Artificial intelligence has undergone a technological breakthrough in various fields, achieving better-than-human performance in many areas (such as vision, board games, etc.). More complex tasks require more sophisticated algorithms that need more and more data. It has often been said that data is becoming a resource that is "more valuable than oil"; however, not all data is equally available and obtainable. Big data can be described by the "four Vs": data with immense velocity, volume, and variety, and low veracity. In contrast, small data do not possess any of those qualities; they are limited in size and nature and are observed or produced in a controlled manner. Big data, along with powerful computing and storage resources, allows "black box" AI algorithms to tackle various problems previously deemed unsolvable. One could create AI applications even without the underlying expert knowledge, assuming there are enough data and the right tools available (e.g., end-to-end speech recognition and generation, image and object recognition). There are numerous fields in science, industry, and everyday life where AI has vast potential. However, due to the lack of big data, its application is not straightforward or even possible. A good example is AI in medicine, where an AI system is intended to assist physicians in diagnosing and treating rare or previously never observed conditions, but there is no data, or an insufficient amount, for reliable AI deployment. Both the big and the small data concept have limitations and prospects in different fields of application. This paper tries to identify and present them by giving real-world examples from various AI fields.
Machine Semiotics
(2023)
Recognizing a basic difference between the semiotics of humans and machines presents a possibility to overcome the shortcomings of current speech assistive devices. For the machine, the meaning of a (human) utterance is defined by its own scope of actions. Machines, thus, do not need to understand the conventional meaning of an utterance. Rather, they draw conversational implicatures in the sense of (neo-)Gricean pragmatics. For speech assistive devices, learning machine-specific meanings of human utterances, i.e. the fossilization of conversational implicatures into conventionalized ones by trial and error through lexicalization, appears to be sufficient. Using the quite trivial example of a cognitive heating device, we show that, based on dynamic semantics, this process can be formalized as the reinforcement learning of utterance-meaning pairs (UMPs).
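A minimal sketch of how such reinforcement of utterance-meaning pairs might look is given below. This is our own tabular illustration; the paper's dynamic-semantics formalization is richer. Each utterance keeps a score per device action, user feedback updates the scores by trial and error, and "fossilization" corresponds to one action coming to dominate:

```python
# Sketch: reinforcement learning of utterance-meaning pairs (UMPs)
# for a toy heating device. Our own illustration; the paper gives a
# dynamic-semantics formalization rather than this tabular scheme.
from collections import defaultdict
import random

ACTIONS = ["heat_up", "cool_down", "do_nothing"]  # the machine's scope of actions
ALPHA = 0.3  # learning rate

scores = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def respond(utterance):
    # Explore occasionally, otherwise pick the best-scored meaning so far.
    table = scores[utterance]
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(table, key=table.get)

def reinforce(utterance, action, reward):
    # Trial and error: user feedback (+1 / -1) shifts the UMP score.
    table = scores[utterance]
    table[action] += ALPHA * (reward - table[action])

# "Fossilization": after repeated positive feedback, "it is cold in here"
# conventionally means heat_up *for this machine*.
for _ in range(20):
    a = respond("it is cold in here")
    reinforce("it is cold in here", a, +1.0 if a == "heat_up" else -1.0)

print(respond("it is cold in here"))  # almost surely "heat_up"
```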
The main goal of this textbook is to develop the mathematical tools for modeling cognitive structures and processes on the basis of classical logic and quantum logic. The mathematical material required for this is presented and motivated with examples in a way that is accessible to prospective engineers and computer scientists.
- Compact logical presentation of the mathematical tools for modeling cognitive structures and processes
- Examples illustrate the application in engineering
The target audience consists primarily of students of engineering and computer science, but students of mathematics or physics can also broaden their horizons through the application-oriented perspective.
Ultrasonic Testing (UT) has seen increasing application of machine learning (ML) in recent years, promoting higher-level automation and decision-making in flaw detection and classification. Building a generalized training dataset to apply ML in non-destructive evaluation (NDE), and thus UT, is exceptionally difficult, since data on pristine and representative flawed specimens are needed. Yet, in most UT test cases flawed specimen data are inherently rare, making data coverage the leading problem when applying ML. Common data augmentation (DA) strategies offer limited solutions, as they do not increase the variance of the dataset, which can lead to overfitting of the training data. The virtual defect method and the recent application of generative adversarial networks (GANs) in UT are sophisticated DA methods that aim to solve this problem. On the other hand, well-established research in modeling ultrasonic wave propagation allows for the generation of synthetic UT training data. In this context, we present a first thematic review summarizing the progress of the last decades on synthetic and augmented UT training data in NDE. Additionally, an overview of methods for synthetic UT data generation and augmentation is presented. Among numerical methods such as finite element, finite difference, and elastodynamic finite integration methods, semi-analytical methods such as general point source synthesis, superposition of Gaussian beams, and the pencil method, as well as other UT modeling software, are presented and discussed. Likewise, existing DA methods for one- and multidimensional UT data, feature-space augmentation, and GANs for augmentation are presented and discussed. The paper closes with a detailed discussion of the advantages and limitations of existing methods for both synthetic UT training data generation and DA of UT data, to aid the reader's decision-making for application to specific test cases.
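To make the "common DA strategies" concrete, the following sketch applies three standard one-dimensional augmentations (random time shift, amplitude scaling, additive noise) to a synthetic A-scan. It is a generic illustration under assumed parameters, not one of the reviewed methods, and the toy signal is not real UT data:

```python
# Sketch: common one-dimensional augmentations for an ultrasonic A-scan
# (generic illustration of the DA strategies the review discusses;
# the synthetic signal below is not real UT data).
import numpy as np

rng = np.random.default_rng(42)

def synthetic_ascan(n=1024, echo_pos=400):
    # Toy A-scan: a Gaussian-windowed sine burst standing in for a flaw echo.
    t = np.arange(n)
    return np.exp(-((t - echo_pos) / 30.0) ** 2) * np.sin(0.3 * t)

def augment(signal):
    # Random circular time shift (varying flaw depth).
    shifted = np.roll(signal, rng.integers(-50, 51))
    # Random amplitude scaling (varying coupling/gain).
    scaled = shifted * rng.uniform(0.8, 1.2)
    # Additive Gaussian noise (varying SNR).
    return scaled + rng.normal(0, 0.02, size=signal.shape)

ascan = synthetic_ascan()
batch = np.stack([augment(ascan) for _ in range(8)])  # 8 augmented variants
print(batch.shape)  # (8, 1024)
```

As the review notes, such transformations only re-express the existing examples rather than increasing dataset variance, which is why the paper contrasts them with virtual defects, GAN-based augmentation, and physics-based simulation.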