TY - THES
A1 - Niklaus, Christina
T1 - From Complex Sentences to a Formal Semantic Representation using Syntactic Text Simplification and Open Information Extraction
N2 - Sentences that present a complex linguistic structure act as a major stumbling block for Natural Language Processing (NLP) applications, whose predictive quality deteriorates with sentence length and complexity. The task of Text Simplification (TS) may remedy this situation. It aims to modify sentences in order to make them easier to process, using a set of rewriting operations such as reordering, deletion, or splitting. These transformations are executed with the objective of converting the input into a simplified output, while preserving its main idea and keeping it grammatically sound. State-of-the-art syntactic TS approaches suffer from two major drawbacks: first, they follow a very conservative approach in that they tend to retain the input rather than transforming it, and second, they ignore the cohesive nature of texts, where context spread across clauses or sentences is needed to infer the true meaning of a statement. To address these problems, we present a discourse-aware TS framework that is able to split and rephrase complex English sentences within the semantic context in which they occur. We tackle the first issue by generating a fine-grained output with a simple canonical structure that is easy to analyze by downstream applications. For this purpose, we decompose a source sentence into smaller units using a linguistically grounded transformation stage. The result is a set of self-contained propositions, each of them presenting a minimal semantic unit. To address the second concern, we suggest not only splitting the input into isolated sentences, but also incorporating the semantic context in the form of hierarchical structures and semantic relationships between the split propositions. In that way, we generate a semantic hierarchy of minimal propositions that benefits downstream Open Information Extraction (IE) tasks. To function well, the TS approach that we propose requires syntactically well-formed input sentences. It targets general-purpose texts in English, such as newswire or Wikipedia articles, which commonly contain a high proportion of complex assertions. In a second step, we present a method that allows state-of-the-art Open IE systems to leverage the semantic hierarchy of simplified sentences created by our discourse-aware TS approach in constructing a lightweight semantic representation of complex assertions in the form of semantically typed predicate-argument structures. In that way, important contextual information of the extracted relations is preserved, allowing for a proper interpretation of the output. Thus, we address the problem of extracting incomplete, uninformative, or incoherent relational tuples that is commonly observed in existing Open IE approaches. Moreover, assuming that shorter sentences with a more regular structure are easier to process, the extraction of relational tuples is facilitated, leading to a higher coverage and accuracy of the extracted relations when operating on the simplified sentences. Aside from taking advantage of the semantic hierarchy of minimal propositions in existing Open IE approaches, we also develop an Open IE reference system, Graphene, which applies a relation extraction pattern to the simplified sentences. The framework we propose is evaluated within our reference TS implementation DisSim. In a comparative analysis, we demonstrate that our approach outperforms the state of the art in structural TS in both an automatic and a manual analysis. It obtains the highest score on three simplification datasets from two different domains with regard to SAMSA (0.67, 0.57, 0.54), a recently proposed metric targeted at automatically measuring the syntactic complexity of sentences that highly correlates with human judgments on structural simplicity and grammaticality. These findings are supported by the ratings from the human evaluation, which indicate that our baseline implementation DisSim returns fine-grained simplified sentences that achieve a high level of syntactic correctness and largely preserve the meaning of the input. Furthermore, a comparative analysis with the annotations contained in the RST Discourse Treebank (RST-DT) reveals that we are able to capture the contextual hierarchy between the split sentences with a precision of approximately 90% and reach an average precision of almost 70% for the classification of the rhetorical relations that hold between them. Finally, an extrinsic evaluation shows that, when applying our TS framework as a pre-processing step, the performance of state-of-the-art Open IE systems can be improved by up to 32% in precision and 30% in recall of the extracted relational tuples. Accordingly, we can conclude that our proposed discourse-aware TS approach succeeds in transforming sentences that present a complex linguistic structure into a sequence of simplified sentences that are to a large extent grammatically correct, represent atomic semantic units, and preserve the meaning of the input. Moreover, the evaluation provides sufficient evidence that our framework is able to establish a semantic hierarchy between the split sentences, generating a fine-grained representation of complex assertions in the form of hierarchically ordered and semantically interconnected propositions. Finally, we demonstrate that state-of-the-art Open IE systems benefit from using our TS approach as a pre-processing step, increasing both the accuracy and coverage of the extracted relational tuples for the majority of the Open IE approaches under consideration. In addition, we outline that the semantic hierarchy of simplified sentences can be leveraged to enrich the output of existing Open IE systems with additional meta information, thus transforming the shallow semantic representation of state-of-the-art approaches into a canonical context-preserving representation of relational tuples.
KW - Text Simplification
KW - Syntactic Simplification
KW - Open Information Extraction
KW - Semantic Representation
KW - Complex Sentences
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10540
ER -
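To make the shape of such output concrete, the following minimal Python sketch mimics the kind of semantic hierarchy the abstract above describes: minimal, self-contained propositions linked by subordinating rhetorical relations. The class, relation labels, and example are illustrative assumptions for this bibliography, not the actual DisSim or Graphene API.

from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str                                     # self-contained minimal semantic unit
    context: list = field(default_factory=list)   # (relation, Proposition) pairs, subordinate context

def show(prop, depth=0):
    # Print the core proposition, then its contextual propositions indented below it.
    indent = "  " * depth
    print(f"{indent}{prop.text}")
    for relation, ctx in prop.context:
        print(f"{indent}  --{relation}-->")
        show(ctx, depth + 2)

# Input: "The house, which was built in 1911, was sold because the owner retired."
core = Proposition("The house was sold.")
core.context.append(("ELABORATION", Proposition("The house was built in 1911.")))
core.context.append(("CAUSE", Proposition("The owner retired.")))
show(core)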
TY - THES
A1 - Mandarawi, Waseem
T1 - Multi-objective Network Virtualization and its Applicability to Industrial Networks
N2 - Network virtualization provides high flexibility for deploying communication services in dense and heterogeneous environments. Two main approaches (dimensions) that are usually combined exist: Network Function Virtualization (NFV) technologies for functionality virtualization and Virtual Network Embedding (VNE) algorithms for resource virtualization. These approaches can be applied to different network levels, such as the factory and enterprise levels of industrial networks. Several objectives and constraints, which might be conflicting, must be considered when network virtualization is applied, particularly in complex topologies. This thesis proposes a network virtualization model that considers both virtualization dimensions, two network levels, and different objectives and constraints. The network levels considered are two primary levels in industrial networks. However, this consideration does not restrict the model to a particular environment or certain levels. The considered objectives/constraints are topology, reliability, security, performance, and resource usage. Based on this model, we first build an overall combined solution for autonomic and composite virtual networking. This solution considers both virtualization dimensions, two network levels, and the target objectives. Furthermore, this solution combines three novel virtualization sub-approaches that consider performance, reliability, and security. However, the sub-approaches apply to different combinations of levels and dimensions, and the reliability approach additionally considers the resource usage objective. After presenting all solutions, we map them to the defined model. Regarding applicability to industrial networks, the combined approach is applied to an enterprise-level Industrial Internet of Things (IIoT) use case inspired by the smart factory concept in Industry 4.0, while the sub-approaches are applied to more specific use cases. The performance and reliability solutions are integrated with relevant components of the Time-Sensitive Networking (TSN) standard as a modern technology for industrial networks. The goal is to enrich the reliability and performance capabilities of TSN with the flexibility of network virtualization. In the combined approach, we compose and embed an environment-aware Extended Virtual Network (EVN) that represents the physical devices, virtual application functions, and required Service Function Chains (SFCs). We use the graph transformation method to transform abstract application requirements (represented by an Application Request (AR)) into an EVN. Both the EVN composition and embedding methods consider the Substrate Network (SN) topology and different security, reliability, performance, and resource usage policies. These policies are applied with a certain priority and depend on the properties of the communicating entities, such as location and type. The EVN is embedded using property-based node mapping, reliability-aware branching, and a greedy chain embedding heuristic. The chain embedding heuristic is evaluated using a random topology that represents the use case. The performance sub-approach is NFV-based and is applied to a specific use case with Time-critical Traffic (TCT) flows. We develop and evaluate a complete framework for virtualizing the Time-aware Shaper (TAS) using high-performance NFV. The reliability sub-approach is VNE-based and is applied to a specific factory-level use case. We develop minimal and maximal branching heuristics based on a reliability-aware k-shortest path algorithm and compare them using a typical factory topology. We then integrate these algorithms with a Frame Replication and Elimination for Reliability (FRER) simulator to realize reliability policies through the autonomic and efficient configuration of a supporting technology. The security sub-approaches are related to both virtualization dimensions and are applied to generic enterprise-level use cases. However, the applicability of the security aspect to industrial networks is only shown in the combined (EVN) approach and its use case. We research autonomic security management in a Network Function Virtualization Infrastructure (NFVI) with the main goal of reacting early to threats through SFC reconfiguration by means of Virtual Network Function (VNF) live migration. This goal is approached by supporting the security measurements with a decision-making architecture that considers, on the one hand, the threats and events in the environment and, on the other hand, the Service Level Agreement (SLA) between the NFVI provider and user. For this purpose, we classify the VNF-specific attacks and define possible early-detectable behavior patterns. Finally, we develop a security-aware VNE heuristic that considers the security requirements of the Virtual Network (VN) and the security capabilities of the SN. This approach is modified in the combined approach to consider deploying virtualized security VNFs.
KW - Network Virtualization
KW - Industrial Networks
KW - Virtual Network Embedding
KW - Network Function Virtualization
KW - Time Sensitive Networking
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10606
ER -
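The reliability-aware branching idea mentioned in the abstract above can be illustrated with a small sketch: candidate paths are ranked by a reliability-derived weight, and paths are added until a replication target is met, in the spirit of FRER-style frame replication. The topology, the reliability values, and the independence assumption for parallel paths are illustrative simplifications, not the thesis' exact minimal/maximal branching heuristics.

import math
from itertools import islice
import networkx as nx

def path_reliability(G, path):
    # Reliability of a single path = product of its link reliabilities.
    return math.prod(G[u][v]["rel"] for u, v in zip(path, path[1:]))

def select_branches(G, src, dst, target_rel, k=5):
    # Greedily add candidate paths until the combined reliability
    # 1 - prod(1 - r_i) reaches the target. The product formula assumes
    # independent (ideally disjoint) paths.
    for u, v, data in G.edges(data=True):
        data["cost"] = -math.log(data["rel"])   # short path == reliable path
    candidates = islice(nx.shortest_simple_paths(G, src, dst, weight="cost"), k)
    chosen, fail_prob = [], 1.0
    for path in candidates:
        chosen.append(path)
        fail_prob *= 1.0 - path_reliability(G, path)
        if 1.0 - fail_prob >= target_rel:
            break
    return chosen, 1.0 - fail_prob

G = nx.Graph()
G.add_edge("talker", "sw1", rel=0.999)
G.add_edge("sw1", "listener", rel=0.999)
G.add_edge("talker", "sw2", rel=0.995)
G.add_edge("sw2", "listener", rel=0.995)
paths, rel = select_branches(G, "talker", "listener", target_rel=0.9999)
print(paths, rel)   # one path is not enough; replication over both meets the target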
TY - THES
A1 - Lang, Thomas
T1 - AI-Supported Interactive Segmentation of 3D Volumes
N2 - The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal of extracting information about these regions, is a difficult problem and has been discussed in medical imaging for decades. Due to the ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) and magnetic resonance imaging, segmentation in industrial applications is also gaining interest. Especially in industrial applications, the generated datasets increase in size. Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they apply image segmentation procedures on each slice separately and track the progress along the axis of the volume along which the slices are stacked. This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. However, in the industrial context this assumption might prove wrong, since industrial parts might change their appearance significantly over the course of even a few slices. Moreover, artifacts can further distort the content of the slices. Therefore, three-dimensional processing of voxel volumes is to be preferred, which imposes constraints on the segmentation procedures. For example, they must not rely on global information, as computing it efficiently is usually not feasible in big scans. Yet another frequent problem is that applications focus on individual parts only and algorithms are tailored to that case. The most prominent medical segmentation procedures do so by applying methods to specifically find, for example, the liver and only the liver of a patient. The implication is that the same method cannot then be applied to find other parts of the scan, and such methods have to be designed individually for any object to be segmented. Flexible segmentation methods are also needed, specifically when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists. Classical examples include the use case of cultural heritage, where not only the objects themselves are unique but the scan parameters are also optimized to obtain the best image quality possible for that specific scan. This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features. The latter are computed from local environments around each voxel and extract information in similar ways as humans do, namely by observing their similarity to geometric or textural primitives. These features serve as the foundation for learning the proposed voxelwise classifiers and for discriminating between segmented and unsegmented voxels. On the one hand, they perform fully automated clustering of volumes, for which a representative random sample is extracted first. On the other hand, a set of segmenting classifiers can be trained from a few seed voxels, i.e., volume elements for which a domain expert marked whether they belong to the components that shall be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary, and hence that unique scans of objects can be segmented for which no comparable scans exist. Overall, it will be shown that all proposed segmentation methods are effectively of linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume. Finally, the segmentation performance is evaluated on selected datasets, which shows that the introduced methods can achieve good results on scans from a broad variety of domains for both small and big voxel volumes.
N2 - The segmentation of volume data, i.e., the partitioning of the data into disjoint sub-volumes for further information extraction, is a problem that has been addressed in medical image processing for decades. Owing to constantly improving image acquisition methods, especially in the field of X-ray computed tomography (CT) or magnetic resonance imaging, the segmentation of industrial volume data is also gaining importance. In the industrial context in particular, however, the size of the data to be segmented grows rapidly, so that most segmentation applications restrict themselves to the 2+1-dimensional case, i.e., they process images and track the results across several images. However, this ignores, for example, geometric information about neighboring slices, which can change significantly precisely in the industrial domain. For this reason, three-dimensional image processing is preferable here. This gives rise to new restrictions; for example, no global information can be used for segmentation, since such information typically cannot be computed efficiently. Furthermore, three-dimensional methods from the medical domain mostly focus on specific components of the data, such as individual organs. This significantly limits the generality of these methods, and separate procedures are therefore necessary for every object to be segmented. Flexible methods are moreover required when dealing with unique scans. A unique scan is a voxel volume for which no comparable datum exists. Classical examples are digitized cultural heritage objects, since there not only the objects are unique, but the acquisition parameters were also optimized specifically for that one scan. This dissertation introduces novel methods for the voxelwise three-dimensional segmentation of volume data on the basis of local geometric information. The assessment of this information imitates human object perception by comparing local regions with geometric or structural primitives. With the help of these assessments, voxelwise classifiers are trained to distinguish between desired and undesired voxels. Some of these classifiers perform a fully automatic clustering analysis after a representative, randomly chosen subset of voxels of fixed size has been selected. The remaining segmentation algorithms receive training data in the form of seed voxels, i.e., a few volume elements marked by a domain expert. This interactive approach makes it possible to incorporate expert knowledge without the need for fully annotated training volumes, which also allows unique scans to be segmented. For all procedures it is shown that the introduced algorithms have asymptotically linear runtime in the number of voxels in the volume. Thus, voxel data without size restrictions can be processed in an efficient linear pass. Finally, the performance of the presented methods is evaluated on selected data, showing that the same few methods can achieve good results across many different domains and on both small and large volumes.
KW - Segmentation
KW - Computed Tomography
KW - Artificial Intelligence
KW - Active Learning
KW - Interactive
KW - Machine learning
KW - Image processing
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9221
ER -
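The interactive, seed-based workflow described in the abstract above can be sketched in a few lines: per-voxel features are computed in linear passes over the volume, a classifier is trained from a handful of expert-marked seed voxels, and every voxel is then classified. The mean-filter and gradient features and the random forest used here are generic stand-ins, not the thesis' actual geometric features or classifiers.

import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    # Per-voxel feature vectors from local neighborhoods (linear runtime).
    mean = uniform_filter(volume, size=3)     # local average intensity
    gx, gy, gz = np.gradient(volume)          # local gradient components
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return np.stack([volume, mean, grad_mag], axis=-1).reshape(-1, 3)

rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32)).astype(np.float32)
volume[8:24, 8:24, 8:24] += 1.0               # synthetic bright "component"

feats = voxel_features(volume)
# Seed voxels a domain expert might mark: two inside, two outside the object.
seeds = np.array([[16, 16, 16], [10, 12, 14], [2, 2, 2], [30, 5, 28]])
labels = np.array([1, 1, 0, 0])
seed_idx = np.ravel_multi_index(seeds.T, volume.shape)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats[seed_idx], labels)
segmentation = clf.predict(feats).reshape(volume.shape)
print("segmented voxels:", int(segmentation.sum()))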
TY - JOUR
A1 - Basmadjian, Robert
T1 - Flexibility-Based Energy and Demand Management in Data Centers
BT - A Case Study for Cloud Computing
JF - Energies
N2 - The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the increased communication and computation needs of IT services. Leveraging demand and energy management within data centers is a necessity. Thanks to the automated ICT infrastructure empowered by the IoT technology, such types of management are becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of the overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership for data centers. We exhaustively reviewed the potential mechanisms in data centers that provide flexibility, together with flexible contracts such as green service level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the two perspectives mentioned above. We validated our results by conducting experiments on a lab-grade-scale cloud computing data center at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service level agreements in Green IT: 33% of overall energy savings and 50% of power demand reduction during demand-response periods in the case of data center federation.
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9251
VL - 2019
IS - 12
SP - 1
EP - 22
PB - MDPI
CY - Basel
ER -
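As a back-of-envelope illustration of the demand-response perspective reviewed above, the sketch below aggregates the power reductions of flexibility mechanisms that can be sustained for the whole demand-response window. The mechanism names, kW figures, and durations are invented for illustration; the paper derives such figures from measurements on a real cloud data center.

baseline_kw = 120.0   # hypothetical power demand before the demand-response event

# (mechanism, power reduction in kW, sustainable duration in minutes)
flexibilities = [
    ("postpone batch workload",     25.0, 60),
    ("CPU frequency scaling",       15.0, 120),
    ("migrate VMs to federated DC", 20.0, 90),
]

dr_window_min = 60
# Only mechanisms that last the whole window count toward the peak reduction.
usable = [f for f in flexibilities if f[2] >= dr_window_min]
reduction_kw = sum(kw for _, kw, _ in usable)
print(f"peak reduction: {reduction_kw} kW "
      f"({100 * reduction_kw / baseline_kw:.0f}% of baseline)")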
TY - THES
A1 - Schmid, Matthias
T1 - Towards Storing 3D Model Graphs in Relational Databases
N2 - The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have been developed, the storage of graph data in a relational database management system, and therefore the seamless integration into existing information systems, remains an open challenge. Motivated by the use case of integrating Building Information Modeling (BIM) data into the MonArch system, we propose a solution that transforms the BIM data into a property graph and stores this graph in the database system. We present a novel approach to efficiently store property graph data in a relational database management system using JSON functionality and redundant storage of edges in adjacency lists, and we show how to import huge data sets into this schema. Applying this approach, we import data sets of up to nearly 1 TB of disk space into the relational database while having only 96 GB of main memory available. We also present a new approach for retrieving data from this database schema, translating queries written in the popular property graph query language Cypher into SQL. Hence, we provide an intuitive way to write semantically complex queries. We also demonstrate the efficiency of our approach using the standardized Linked Data Benchmark Council Social Network Benchmark (LDBC-SNB) framework. Our approach increases the throughput for this benchmark by up to 85 times compared to existing approaches for RDBMSs. In addition, we propose a new method to transform BIM data into the property graph model and show how to apply the aforementioned property graph storage to this data. We can import IFC models of up to 300 MB within five minutes. We show the suitability of our approach using our own use-case-specific benchmark, which we integrated into the previously mentioned Social Network Benchmark. For our interactive use-case-specific queries, we achieve response times faster than 5 ms in 99% of all executions. Finally, we present how the aforementioned approach to store BIM data in a relational database management system is integrated into the existing MonArch system by splitting the different functionalities of our approach into a microservice architecture.
N2 - The growing relevance of huge amounts of graph data reinforces the need for adequate graph data management. While several graph databases have already been developed, storing graph data in relational databases, and the seamless integration into already existing information systems that comes with it, remains an unsolved challenge. Motivated by our own use case of integrating Building Information Modeling (BIM) data into the MonArch information system, we propose an approach that transforms BIM data into a property graph and stores it in the database. To achieve this, we present a novel approach for storing property graphs in a relational database system by combining functionalities such as JSON and the redundant storage of edges in adjacency lists, and we show how large amounts of this data can be imported into the schema. By applying our approach, we can import data sets of up to 1 TB into the database system while having only 96 GB of main memory available. We also present a new approach for querying data from the aforementioned schema by translating the popular graph query language Cypher into SQL. In this way, we achieve an intuitive way of writing semantically complex queries. In addition, we demonstrate the efficiency of our approach by using the standardized evaluation framework Social Network Benchmark of the Linked Data Benchmark Council (LDBC-SNB). Compared to existing approaches for relational database systems, our approach increases the throughput of this benchmark by a factor of up to 85. Furthermore, we propose a new method for transferring BIM data into the property graph model and show how the previously presented storage model can be used to store this data. With this, we can import IFC models of up to 300 MB into our system in under five minutes. Finally, we show the suitability of our approach by using our own benchmark specific to our use case, which we integrated into the previously mentioned Social Network Benchmark. For our use-case-specific queries, we achieve response times of under 5 ms in 99% of all executions.
KW - Graph-based database models
KW - Relational database model
KW - Property graph
KW - Industry Foundation Classes
KW - IFC Store
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10353
ER -
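The Cypher-to-SQL idea described above can be made concrete with a toy example: nodes stored as relational rows with JSON properties plus a redundant adjacency list, and a fixed Cypher pattern rewritten into SQL over that schema. The table and column names and the PostgreSQL-flavored schema are assumptions for illustration; the thesis' actual schema and translator are more general.

# Hypothetical schema (PostgreSQL-flavored):
#   nodes(id BIGINT, label TEXT, props JSONB, out_knows BIGINT[])
# where out_knows is the redundant adjacency list for outgoing KNOWS edges.
cypher = "MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN b.name"

sql = """
SELECT b.props ->> 'name' AS name
FROM nodes a
CROSS JOIN LATERAL unnest(a.out_knows) AS adj(target_id)
JOIN nodes b ON b.id = adj.target_id
WHERE a.label = 'Person' AND b.label = 'Person';
"""
print(f"-- Cypher: {cypher}{sql}")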
TY - RPRT
A1 - Eckhardt, Dennis
A1 - Freiling, Felix
A1 - Herrmann, Dominik
A1 - Katzenbeisser, Stefan
A1 - Pöhls, Henrich C.
T1 - Sicherheit in der Digitalisierung des Alltags: Definition eines ethnografisch-informatischen Forschungsfeldes für die Lösung alltäglicher Sicherheitsprobleme
N2 - Over the past decades, there have been numerous undeniable advances in IT security research, for instance in the areas of systems security and cryptography. It is, however, equally undeniable that IT security problems persist in people's everyday lives. Presumably, this is due to the complexity of everyday situations, in which security mechanisms and device functionality, as well as their heterogeneity, interact with human understanding and everyday use in ways that are difficult to anticipate. In order to align scientific research more closely with people and their IT security needs, we must therefore gain a better understanding of people's everyday lives. The understanding of everyday life, however, is still underdeveloped in computer science. This report aims to define the research field of "security in the digitalization of everyday life" in order to give researchers the opportunity to bundle their efforts in this area. On the one hand, we make proposals for delimiting the subject matter of computer science research in this field. On the other hand, by incorporating research methods from ethnography, which draws its insights from the decidedly subjective observation of the "everyday life" of many individual persons, we want to contribute to the methodological advancement of interdisciplinary research in this field. IT security research can then specifically optimize existing solutions for true everyday usability and develop new fundamental security functionalities for the concrete challenges of everyday life.
KW - Alltagsdigitalisierung
KW - Ethnografie
KW - IT-Sicherheit
KW - Interdisziplinarität
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13721
ER -
TY - JOUR
A1 - Herbold, Steffen
A1 - Hautli-Janisz, Annette
A1 - Heuer, Ute
A1 - Kikteva, Zlata
A1 - Trautsch, Alexander
T1 - A large-scale comparison of human-written versus ChatGPT-generated essays
JF - Scientific Reports
N2 - ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models, both of which lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13961
VL - 13
PB - Springer Nature
ER -
TY - JOUR
A1 - Hassen, Wiem Fekih
A1 - Ben Ahmed, Mariem
T1 - Optimization of a Redox-Flow Battery Simulation Model Based on a Deep Reinforcement Learning Approach
JF - Batteries
N2 - Vanadium redox-flow batteries (VRFBs) have played a significant role in hybrid energy storage systems (HESSs) over the last few decades owing to their unique characteristics and advantages. Hence, the accurate estimation of the VRFB model holds significant importance in large-scale storage applications, as such models are indispensable for incorporating the distinctive features of energy storage systems and control algorithms within embedded energy architectures. In this work, we propose a novel approach that combines model-based and data-driven techniques to predict battery state variables, i.e., the state of charge (SoC), voltage, and current. Our proposal leverages enhanced deep reinforcement learning techniques, specifically deep Q-learning (DQN), which combines Q-learning with neural networks, to optimize the VRFB-specific parameters, ensuring a robust fit between the real and simulated data. Our proposed method outperforms the existing approach in voltage prediction. Subsequently, we enhance the proposed approach by incorporating a second deep RL algorithm, dueling DQN, an improvement of DQN that yields a 10% improvement in the results, especially in terms of voltage prediction. The proposed approach results in an accurate VRFB model that can be generalized to several types of redox-flow batteries.
KW - energy storage
KW - redox-flow battery
KW - battery modeling
KW - battery state variables
KW - parameter optimization
KW - accurate estimation
KW - voltage prediction
KW - deep reinforcement learning
KW - deep q-learning
KW - dueling deep q-networks
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13994
VL - 10
PB - MDPI
CY - Basel
ER -
TY - JOUR
A1 - Hassen, Wiem Fekih
A1 - Azzouz, Imen
T1 - Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach
JF - Energies
N2 - The worldwide adoption of Electric Vehicles (EVs) has brought promising advancements toward a sustainable transportation system. However, the effective charging scheduling of EVs is not a trivial task due to the increase in the load demand at the Charging Stations (CSs) and the fluctuation of electricity prices. Moreover, other issues that raise concern among EV drivers are the long waiting time and the inability to charge the battery to the desired State of Charge (SOC). In order to alleviate the range anxiety of users, we propose a Deep Reinforcement Learning (DRL) approach that provides the optimal charging time slots for an EV based on photovoltaic power prices, the current EV SOC, the charging connector type, and the history of load demand profiles collected in different locations. Our implemented approach maximizes the EV profit while giving the EV drivers a margin of liberty to select the preferred CS and the best charging time (i.e., morning, afternoon, evening, or night). The analysis of the results proves the effectiveness of the DRL model in minimizing the charging costs of the EV by up to 60%, providing a full charging experience to the EV with a waiting time of less than or equal to 30 min.
KW - smart EV charging
KW - day-ahead planning
KW - deep Q-Network
KW - data-driven approach
KW - waiting time
KW - cost minimization
KW - real dataset
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13985
VL - 16
PB - MDPI
CY - Basel
ER -
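To illustrate the slot-selection problem behind the paper above, the following tabular Q-learning toy stands in for the deep Q-network: the agent picks charging time slots so that a target SOC is reached at minimal cost. The prices, the episode structure, and the reward shaping are invented for illustration; the paper uses real load and price data and a neural Q-function.

import random

PRICES = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]   # hypothetical price per slot
SLOTS, SOC_STEPS, TARGET = len(PRICES), 5, 4    # SOC in 5 buckets, target is 4
ACTIONS = (0, 1)                                # 0 = idle, 1 = charge one step

Q = {}
def q(s, a): return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.1, 0.95, 0.2
for _ in range(20000):                          # epsilon-greedy training episodes
    soc = 0
    for t in range(SLOTS):
        s = (t, soc)
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: q(s, x))
        soc2 = min(soc + a, SOC_STEPS - 1)
        r = -PRICES[t] * a                      # pay for the energy drawn
        if t == SLOTS - 1:                      # terminal: did we reach the target SOC?
            r += 1.0 if soc2 >= TARGET else -1.0
        best_next = 0.0 if t == SLOTS - 1 \
            else max(q((t + 1, soc2), x) for x in ACTIONS)
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
        soc = soc2

soc, plan = 0, []
for t in range(SLOTS):                          # greedy rollout of the learned policy
    a = max(ACTIONS, key=lambda x: q((t, soc), x))
    plan.append(a)
    soc = min(soc + a, SOC_STEPS - 1)
print("charge in slots:", [t for t, a in enumerate(plan) if a])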
TY - THES
A1 - Püllen, Dominik
T1 - Holistic Security Engineering for Software-Defined Vehicles
N2 - With the increasing use of digital technologies in the automotive sector, the traditional automobile is undergoing a structural transformation, requiring new technologies and enabling innovative mobility concepts. In particular, the ability to drive automatically or even fully autonomously, to update control software, and to remain connected to the environment allows attackers, in the absence of adequate protection, to infiltrate highly critical vehicle systems and take control. Once not only individual vehicles but entire fleets are dominated by software, cyberattacks could disrupt a significant portion of the infrastructure and expose passengers to substantial risks. This work follows a holistic approach to protecting highly automated software-defined vehicles (SDVs) from cyberattacks by designing and implementing security concepts in the main phases of a vehicle's lifecycle. We use SAE Level 4 prototype vehicles to evaluate our proposed techniques. We start with a systematic security requirement analysis using the ISA-62443 standard series, demonstrating how threats can be identified in a collaborative, hierarchical process and how the resulting security risks impact the software and hardware architecture of a self-driving vehicle. We show how this analysis process results in concrete requirements whose consideration reduces the overall security risk to a tolerable level. Subsequently, we develop technical solutions for selected requirements. We begin by securing the CAN and FlexRay legacy protocols, which we foresee being used in specific areas of SDVs in a transitional period despite technological changes. To enable vehicle-wide security management, we address the management and distribution of cryptographic keys within such networks, focusing mainly on resource-constrained devices. We propose using lightweight implicit certificates for deriving cryptographic group keys that can be used in CAN networks. Additionally, we demonstrate how the slot-based frame structure of the FlexRay protocol allows for efficient "multi-slot" authentication, for which we calculate cryptographic keys using hash-based key chains. SDVs use Ethernet-based communication protocols and custom middleware stacks to transmit large amounts of data in real time. We develop a three-stage security process for the novel ASOA, which enables the development and central orchestration of system-agnostic functional software components on embedded systems and HPC platforms. After the central specification of the security architecture at the data-flow level, security tokens are automatically calculated and distributed for runtime protection of the service-oriented, DDS-based data transmission. Our process ensures the strict separation of function and system knowledge, allowing for cost-effective and adaptable management of the security architecture. The evaluation in four self-driving, software-defined vehicles demonstrates an average runtime overhead of approximately 5.71%. As the initial risk analysis and actual cyberattacks have shown, protective measures against the compromise of control units must be taken alongside communication security. To address this, we develop a method for verifying and validating the software integrity of control units. A governmental third party confirms a measurement through a digital certificate, proving the examined vehicle's trustworthiness and suitability for participation in automated traffic. In the final step of this work, we present an assessment scheme that allows software-defined vehicles to evaluate security incidents during operation in terms of their maximum expected damage and to initiate appropriate countermeasures. We follow the ISO/SAE 21434 standard and model attack paths using a graph that represents dependencies among internal vehicle assets in order to account for the propagation effects of cyberattacks. The assessment of a security incident considers not only the probability of individual attack paths but also the vehicle context. Our practical evaluation demonstrates that we can detect, report, and assess security incidents in less than the human reaction time in the aforementioned prototype vehicles.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14497
ER -
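A condensed sketch of authenticating slot-based frames with a hash-based key chain, in the spirit of the "multi-slot" authentication described in the abstract above: the sender commits to the end of a hash chain, uses the chain keys in reverse order to MAC its frames, and discloses each key after use so receivers can verify it against the commitment. The chain length, tag truncation, and disclosure schedule are illustrative choices, not the thesis' exact protocol.

import hashlib, hmac, os

N = 8                                     # number of communication rounds
seed = os.urandom(32)

# Build the chain backwards: keys[i] = H(keys[i+1]); keys[0] is the public
# commitment, and later keys are used (and then disclosed) in reverse order.
keys = [None] * (N + 1)
keys[N] = seed
for i in range(N - 1, -1, -1):
    keys[i] = hashlib.sha256(keys[i + 1]).digest()
commitment = keys[0]

def mac(key, frame):
    # Truncated HMAC tag, as bandwidth on in-vehicle buses is scarce.
    return hmac.new(key, frame, hashlib.sha256).digest()[:8]

def verify_round(i, disclosed_key, frame, tag):
    # Hash the disclosed key i times back to the commitment, then check the tag.
    check = disclosed_key
    for _ in range(i):
        check = hashlib.sha256(check).digest()
    return check == commitment and hmac.compare_digest(mac(disclosed_key, frame), tag)

frame = b"slot-7 payload"
tag = mac(keys[3], frame)                 # sender authenticates a frame in round 3
print(verify_round(3, keys[3], frame, tag))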