TY - THES A1 - Klaus, Tina T1 - Complexity Analysis of Quantizations of Multidimensional Stochastic Differential Equations N2 - The dissertation is located in the field of quantizations of certain stochastic processes, namely a solution X of a multidimensional stochastic differential equation (SDE). The quantization problem for X consists in approximating X by a random element which takes only finitely many values. Our main interest lies in the investigation of the asymptotic behavior of the Nth minimal quantization error of X as N tends to infinity, which incorporates the determination of both the sharp rate of convergence and explicit asymptotic constants. In particular, explicit asymptotic constants have so far been unknown in the context of multidimensional SDEs. Furthermore, as part of our analysis, we provide a method which yields a strongly asymptotically optimal sequence of N-quantizations of X. In certain special cases, our method is fully constructive and the algorithm is easy to implement. KW - Stochastische Differentialgleichung KW - Komplexität KW - Quantifizierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7665 ER - TY - THES A1 - Awwad, Tarek T1 - Context-Aware Worker Selection For Efficient Quality Control In Crowdsourcing N2 - In the last decade, crowdsourcing has proved its ability to address large-scale data collection tasks, such as labeling large data sets, at a low cost and in a short time. However, the variability in performance and behavior between workers, as well as the variability in task designs and contents, induces an unevenness in the quality of the produced contributions and, thus, in the final output quality. In order to maintain the effectiveness of crowdsourcing, it is crucial to control the quality of the contributions. Furthermore, maintaining the efficiency of crowdsourcing requires the time and cost overhead related to quality control to be kept as low as possible. While effective, current quality control techniques such as contribution aggregation, worker selection, context-specific reputation systems, and multi-step workflows suffer from fairly high time and budget overheads and from their dependency on prior knowledge about individual workers. In this thesis, we address this challenge by leveraging the similarity between completed and incoming tasks as well as the correlation between the workers' declarative profiles and their performance in previous tasks in order to perform an efficient task-aware worker selection. To this end, we propose the CAWS (Context-Aware Worker Selection) method, which operates in two phases: in an offline phase, completed tasks are clustered into homogeneous groups, for each of which the correlation with the workers' declarative profiles is learned. Then, in the online phase, incoming tasks are matched to one of the existing clusters and the corresponding, previously inferred profile model is used to select the most reliable online workers for the given task. Using declarative profiles helps eliminate any probing process, which reduces time and budget while maintaining the crowdsourcing quality. Furthermore, the set of completed tasks, when compared to a probing task split, provides a larger corpus from which a more precise profile model can be learned. This translates to a better selection quality, especially for harder tasks.
In order to evaluate CAWS, we introduce CrowdED (Crowdsourcing Evaluation Dataset), a rich dataset to evaluate quality control methods and quality-driven task vectorization and clustering. The generation of CrowdED relies on a constrained sampling approach that allows producing a task corpus which respects both the budget and type constraints. Besides helping to evaluate CAWS, CrowdED, through its generality and richness, helps to close the benchmarking gap present in the crowdsourcing quality control community. Using CrowdED, we evaluate the performance of CAWS in terms of the quality of the worker selection and in terms of the achieved time and budget reduction. The results show the following: first, automatic grouping is able to achieve a learning quality similar to job-based grouping; second, CAWS is able to outperform the state-of-the-art profile-based worker selection when it comes to quality. This is especially true when strong budget and time constraints are present on the requester side. Finally, we complement our work with a software contribution consisting of an open-source framework called CREX (CReate Enrich eXtend). CREX allows the creation, extension, and enrichment of crowdsourcing datasets. It provides the tools to vectorize, cluster, and sample a task corpus to produce constrained task sets and to automatically generate custom crowdsourcing campaign sites. N2 - In the last decade, crowdsourcing has proven its ability to handle large data collection tasks, such as the labeling of large data sets, at low cost and in a short time. However, the variability in performance and behavior between workers, as well as the variability in task designs and contents, leads to an unevenness in the quality of the acquired contributions and thus in the final output quality. To preserve the effectiveness of crowdsourcing, it is crucial to control the quality of the individual contributions. Moreover, maintaining the efficiency of crowdsourcing requires that the time and cost overhead of quality control be kept to a minimum. Effective current quality control techniques, such as the aggregation of contributions, the targeted selection of workers, context-specific reputation systems, and multi-step workflows, suffer from fairly high time and budget overheads and from their dependency on prior knowledge about the individual workers. In this thesis, we address these challenges by exploiting the similarity between completed and incoming tasks as well as the correlation between the profiles declared by workers and their performance in previous tasks in order to perform an efficient task-aware worker selection. To this end, we propose a two-phase method: CAWS (Context-Aware Worker Selection). In an offline phase, previously completed tasks are grouped into homogeneous clusters, for each of which the correlation with the workers' declared profiles is learned. In the online phase, incoming tasks are then assigned to one of the existing clusters, and the corresponding, previously inferred profile model is used to select the most reliable online workers for the given task. Using declarative profiles helps eliminate any probing process, which reduces time and cost while maintaining crowdsourcing quality.
Furthermore, the set of completed tasks, compared to a task split obtained through probing, provides a larger corpus from which a more precise profile model can be learned. This leads to a better selection quality, especially for harder tasks. To evaluate CAWS, we introduce CrowdED (Crowdsourcing Evaluation Dataset), a comprehensive dataset for evaluating quality control methods and quality-driven task vectorization and clustering. The generation of CrowdED is based on a constrained sampling approach which makes it possible to produce a task corpus that respects both the budget and the type constraints. Through its generality and richness, CrowdED not only helps in evaluating CAWS, but also helps to close the benchmarking gap in the crowdsourcing quality control community. Using CrowdED, we evaluate the performance of CAWS with respect to the quality of the worker selection and the achieved time and cost reduction. The results show the following: first, automatic grouping can achieve a learning quality similar to that of job-based grouping; second, CAWS is able to outperform current profile-based selection methods in terms of quality. This holds in particular when strong budget and time constraints exist on the requester side. Finally, we complement our work with a software contribution consisting of an open-source framework called CREX (CReate Enrich eXtend). CREX enables the creation, extension, and enrichment of crowdsourcing datasets. It provides the tools needed to vectorize, cluster, and sample a task corpus, to produce constrained task sets, and to automatically generate custom crowdsourcing campaign sites. KW - Crowdsourcing KW - Quality control KW - Machine learning KW - Qualitätssicherung KW - Open Innovation Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7409 ER - TY - THES A1 - Wahl, Florian T1 - Methods for monitoring the human circadian rhythm in free-living N2 - Our internal clock, the circadian clock, determines the times at which we have our best cognitive abilities, are physically strongest, and feel tired. Circadian clock phase is influenced primarily by exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially when sustained over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim-light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living.
To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with a ROC AUC of 0.9 for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, and thus in living a healthier lifestyle. KW - context recognition KW - human circadian rhythm KW - machine learning KW - sleep timing KW - smart eyeglasses KW - Tagesrhythmus Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7607 ER - TY - THES A1 - Kronawitter, Stefan T1 - Automatic Performance Optimization of Stencil Codes N2 - Stencil codes are a widely used class of codes. Their general structure is very simple: data points in a large grid are repeatedly recomputed from neighboring values. This predefined neighborhood is the so-called stencil. Despite their very simple structure, stencil codes are hard to optimize, since only a few computations are performed while a comparatively large number of values have to be accessed, i.e., stencil codes usually have a very low computational intensity. Moreover, the set of optimizations and their parameters also depend on the hardware on which the code is executed. In short, current production compilers are not able to fully optimize this class of codes, and optimizing each application by hand is not practical. As a remedy, we propose a set of optimizations and describe how they can be applied automatically by a code generator for the domain of stencil codes. A combination of space and time tiling increases data locality, which significantly reduces the memory-bandwidth requirements: a standard three-dimensional 7-point Jacobi stencil can be accelerated by a factor of 3. This optimization can target basically any stencil code, while others are more specialized. For example, support for arbitrary linear data layout transformations is especially beneficial for colored kernels, such as a Red-Black Gauss-Seidel smoother.
On the one hand, an optimized data layout for such kernels reduces the bandwidth requirements while, on the other hand, it simplifies an explicit vectorization. Other notable optimizations described in detail are redundancy elimination techniques to eliminate common subexpressions both in a sequence of statements and across loop boundaries, arithmetic simplifications and normalizations, and the vectorization mentioned previously. In combination, these optimizations are able to increase the performance not only of the model problem given by Poisson's equation, but also of real-world applications: an optical flow simulation and the simulation of a non-isothermal and non-Newtonian fluid flow. KW - Optimierung KW - Codegenerierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7618 ER - TY - CHAP A1 - Berger, Christian A1 - Reiser, Hans P. A1 - Sousa, João A1 - Bessani, Alysson T1 - Resilient Wide-Area Byzantine Consensus Using Adaptive Weighted Replication T2 - 38th IEEE International Symposium on Reliable Distributed Systems (SRDS 2019) N2 - In geo-replicated systems, the heterogeneous latencies of connections between replicas limit the system's ability to achieve fast consensus. State machine replication (SMR) protocols can be refined for their deployment in wide-area networks by using a weighting scheme for active replication that employs additional replicas and assigns higher voting power to faster replicas. Utilizing more variability in quorum formation allows replicas to proceed more swiftly to subsequent protocol stages, thus decreasing consensus latency. However, if network conditions vary during the system's lifespan or faults occur, the system needs a solution to autonomously adjust to new conditions. We incorporate the idea of self-optimization into geographically distributed, weighted replication by introducing AWARE, an automated and dynamic voting weight tuning and leader positioning scheme. AWARE measures replica-replica latencies and uses a prediction model, striving to minimize the system's consensus latency. In experiments using different Amazon EC2 regions, AWARE dynamically optimizes consensus latency by self-reliantly finding a fast weight configuration, yielding latency gains observed by clients located across the globe. KW - adaptiveness, weighted replication, consensus, geo-replication, Byzantine fault tolerance, self-optimization Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7537 PB - IEEE Xplore ER - TY - CHAP A1 - Berger, Christian A1 - Reiser, Hans P. T1 - Scaling Byzantine Consensus: A Broad Analysis T2 - SERIAL'18 Proceedings of the 2nd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers N2 - Blockchains and distributed ledger technology (DLT) that rely on Proof-of-Work (PoW) typically show limited performance. Several recent approaches incorporate Byzantine fault-tolerant (BFT) consensus protocols in their DLT design, as Byzantine consensus allows for increased performance and energy efficiency and offers proven liveness and safety properties. While there has been a broad variety of research on BFT consensus protocols over the last decades, those protocols were originally not intended to scale to a large number of nodes. Thus, the quest for scalable BFT consensus was initiated with the emerging research interest in DLT.
In this paper, we first provide a broad analysis of various optimization techniques and approaches used in recent protocols to scale Byzantine consensus to large environments such as BFT blockchain infrastructures. We then present an overview of the efforts and assumptions made by existing protocols and compare their solutions. KW - Distributed Ledgers KW - Blockchain KW - Byzantine Fault-Tolerant Consensus Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7526 SN - 978-1-4503-6110-1 PB - ACM CY - New York, NY, USA ER - TY - THES A1 - Lachat, Paul T1 - Detecting Inference Attacks Involving Sensor Data N2 - The collection of personal information by organizations has become increasingly essential for social interactions. Nevertheless, according to the GDPR (General Data Protection Regulation), organizations have to protect the collected data. Access Control (AC) mechanisms are traditionally used to secure information systems against unauthorized access to sensitive data. The increased availability of personal sensor data, thanks to IoT-oriented applications, motivates new services to offer insights about individuals. Consequently, data mining algorithms have been proposed to infer personal insights from collected sensor data. Although they can be used for genuine purposes, attackers can leverage those outcomes, combining them with other types of data, to further breach individuals' privacy. Thus, bypassing AC mechanisms by means of such insights is a concrete problem. We propose an inference detection system based on the analysis of queries issued on a sensor database. The knowledge obtained through these queries, and the inference channels corresponding to the use of data mining algorithms on sensor data to infer individual information, are described using the Raw sensor data based Inference ChannEl Model (RICE-M). The detection is carried out by the RICE-M based inference detection system (RICE-Sy). RICE-Sy considers, at the time of a query, the knowledge that a user obtains via the new query and has obtained via their query history, and determines whether this is sufficient to allow that user to operate an inference channel. Thus, privacy protection systems can take advantage of the inferences detected by RICE-Sy, taking into account the information about individuals that attackers obtain via a sensor database, to further protect these individuals. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14149 ER - TY - THES A1 - Pöhls, Henrich C. T1 - Increasing the Legal Probative Value of Cryptographically Private Malleable Signatures N2 - This thesis deals with the development of technical requirements and their implementation in cryptographically secure constructions of privacy-friendly malleable digital signature schemes (private malleable signature schemes, or MSS) in order to achieve the highest possible legal evidentiary value. In law, certain cryptographic algorithms, key lengths, and their correct organizational application for creating electronically signed documents are classified as legally secure. This can ease the burden of proof with the help of signed documents. According to Regulation (EU) No. 910/2014 (eIDAS), qualified signed electronic documents are either regarded as prima facie evidence of authenticity or are even granted a legal presumption of authenticity.
Legally recognized technical procedures that achieve such an increased probative value essentially fulfil two properties with the help of cryptography: integrity protection (integrity), i.e. the detection of the absence of undesired modifications, and the attributability of the unmodified document to its signatory (accountability). In contrast, the greatest advantage of malleable digital signature schemes (MSS) is the property called "privacy": an authorized modification hides the previous content. Furthermore, the signature remains valid as long as only authorized modifications are carried out. If this property is fulfilled in a cryptographically provably secure manner, the scheme is called a private malleable signature scheme. The thesis examines two widespread forms in depth: the so-called redactable signature schemes (RSS) and the sanitizable signature schemes (SSS). These allow a wide range of applications, for example an authorized subsequent modification to protect trade secrets or personal data: via a private redactable signature scheme, for instance, the signatory delegates only the subsequent blacking out (redaction). This restricts the malleability to the removal of information, but effectively enables data protection or the protection of (trade) secrets, since this information is irreversibly removed from the reach of attackers. The cryptographic privacy property states that it is no longer efficiently possible to gain knowledge about the redacted information from the redacted document, not even, and especially not, for the signature verifier. At its core, the thesis investigates the question of whether an MSS can fulfil the cryptographic property "privacy" and, at the same time, the properties "integrity" and "accountability" at sufficiently high security levels. The goal is for an MSS to simultaneously achieve a sufficiently high degree of security such that (1) the authorized subsequent modifications can be used to protect trade secrets or personal data, and (2) the document signed with this special signature scheme can be attributed an increased probative value. Regarding the latter, the thesis presents both the technical requirements that apply to qualified electronic signatures (according to Regulation (EU) No. 910/2014) with respect to subsequent modifiability, and concrete cryptographic properties and schemes that achieve these requirements in a cryptographically provable manner. In particular, malleable signatures (MSS) provide a different kind of integrity protection than traditional digital signatures: a signed message may subsequently be modified by a defined third party in a defined manner. This so-called authorized modification can be carried out without knowledge of the signatory's secret signing key. When the signature verifier verifies the digital signature, the original signatory and their consent to the authorized modification remain cryptographically verifiable, even if authorized modifications have been carried out. The thesis covers the following areas: 1.
An analysis of the legal requirements to determine the legally relevant technical requirements regarding the required integrity protection and regarding the protection of personal data and (trade) secrets (privacy protection); 2. the definition of a suitable notion of integrity to describe the protection provided by existing malleable signatures and by already legally recognized signature schemes; 3. the harmonization and analysis of the cryptographic properties of existing malleable signature schemes with respect to the legal requirements; 4. the development of new and provably secure cryptographic schemes; 5. a concluding assessment of the legal probative value and the level of data protection on the basis of the technical implementation of the legal requirements. The thesis concludes that, first of all, any modification (authorized as well as unauthorized) must also be detectable by a cryptographically secure malleable signature scheme (MSS) in order to achieve conformity with Regulation (EU) No. 910/2014 (eIDAS). Such a modification detection, by which the signature verifier, without the help of further parties or secrets, detects the absence of authorized and unauthorized modifications, was developed as part of this thesis (non-interactive public accountability (PUB)). This new cryptographic property has been published and has already been taken up by the work of others. Furthermore, new cryptographic properties as well as redactable signature and sanitizable signature schemes are presented which, in addition to this modification detection, enable strong protection against the disclosure of the original. If suitable properties are fulfilled, a level of technical protection comparable to that of classical signatures is achieved for certain cases. The core question can thus be answered in the affirmative: private MSS can reach a level of integrity protection that technically corresponds to that of legally recognized digital signatures, while still being able to authorize subsequent modifications that offer strong protection against the recovery of the original. N2 - This thesis distills technical requirements for an increased probative value and data protection compliance, and maps them onto cryptographic properties for which it constructs provably secure and especially private malleable signature schemes (MSS). MSS are specialised digital signature schemes that allow the signatory to authorize certain subsequent modifications, which will not negatively affect the signature verification result. Legally, regulations such as European Regulation 910/2014 (eIDAS), the 'follow-up' to the longstanding Directive 1999/93/EC, describe the requirements in technology-neutral language. eIDAS states that, when a digital signature meets the full requirements, it becomes a qualified electronic signature and then it "[...] shall have the equivalent legal effect of a handwritten signature [...]" [Art. 25 Regulation 910/2014]. The question of what legal effect this has with regard to the probative value that is assigned is actually not determined in EU Regulation 910/2014 but in European member state law. In its analysis, this thesis concentrates on the German Code of Civil Procedure (ZPO), which is detailed in this respect. Under the ZPO, a signature awards the signed document at least the high probative value of prima facie evidence.
For signed documents of official authority, the ZPO's statutory rules even award a legal presumption of authenticity. This increased probative value is also awarded to electronic documents bearing electronic signatures when those conform to the eIDAS requirements. The requirements centre around the technical security goals of integrity and accountability. Technical mechanisms use cryptographic means to detect the absence of unauthorized modifications (integrity) and to allow authentication of the signed document's signatory (accountability). However, the main advantage of the specialised malleable signature schemes is a cryptographic property termed privacy: an authorized subsequent modification will protect the confidentiality of the modified original. Moreover, the MSS will retain a verifiable signature if only authorized modifications were carried out. If these properties are reached with provable security, the schemes are called private malleable signature schemes. This thesis analyses two forms of MSS discussed in the existing literature: redactable signature schemes (RSS), which allow subsequent deletions, and sanitizable signature schemes (SSS), which allow subsequent edits. These two forms have many application scenarios: a signatory can delegate that a later redaction may take place while retaining the integrity and authenticity protection for the remaining parts. The verification of a signature on a redacted or sanitized document still enables the verifying entity to corroborate the signatory's identity with the help of flanking technical and organisational mechanisms, e.g. a trusted public key infrastructure. The valid signature further corroborates the absence of unauthorized changes, because the MSS is still cryptographically protecting the signed document from undetected unauthorized changes inflicted by adversaries. Due to the confidentiality protection for the overwritten parts of the document, which follows from cryptographic privacy, sanitization and redaction can be used to safeguard personal data to comply with data protection regulation or to withhold trade secrets. The research question is: can a malleable signature scheme be private, so as to comply with EU data protection regulation, and at the same time fulfil the integrity protection legally required in the EU to achieve a high probative value for the signed data? Answering this requires understanding the protection requirements with respect to accountability and integrity rooted in Regulation 910/2014 and related legal texts. This thesis has analysed the previous Directive 1999/93/EC as well as the German SigG and SigVO and UK and US laws. Besides that, legal texts, laws, and regulations for the protection requirements of personal data (or PII) have been analysed to distill the confidentiality requirements, e.g. the German BDSG or EU Regulation 2016/679 (GDPR). Moreover, an answer to the research question entails understanding the relevant difference between regular digital signature schemes, like RSASSA-PSS from PKCS-v2.2 [422], which are legally accepted mechanisms for generating qualified electronic signatures, and MSS, whose legal status was completely unknown before this thesis. This is especially relevant as MSS allow the authorized entity to adapt the signature, such that it is valid after the authorized modification, without the knowledge or use of the signatory's signature generation key.
On verification of an MSS, the verifying entity still sees a valid signature technically appointing the legal signatory as the origin of a document, which might, however, have undergone authorized modifications after the signature was applied. The thesis documents the results achieved in several domains: 1. Analysis of legal requirements towards integrity protection for an increased probative value and towards confidentiality protection for use as a privacy-enhancing technique to comply with data protection regulation. 2. Definition of a suitable terminology for integrity protection to capture (a) the differences between classical and malleable signature schemes, (b) the subtleties among existing MSS, as well as (c) the legal requirements. 3. Harmonisation of existing MSS and their cryptographic properties and the analysis of their shortcomings with respect to the legal requirements. 4. Design of new cryptographic properties and their provably secure cryptographic instantiations, i.e., the thesis proposes nine new cryptographic constructions accompanied by rigorous proofs of their security with respect to the formally defined cryptographic properties. 5. Final evaluation of the increased probative value and data-protection level achievable through the eight proposed cryptographic malleable signature schemes. The thesis concludes that the detection of any subsequent modification (authorized and unauthorized) is of paramount legal importance in order to meet EU Regulation 910/2014. Further, this thesis formally defined a public form of the legally requested integrity verification which allows the verifying entity to corroborate the absence of any unauthorized modifications with a valid signature verification while simultaneously detecting the presence of an authorized modification, if at least one such authorized modification has subsequently occurred. This property, called non-interactive public accountability (PUB), has been formally defined in this thesis, was published, and has already been adopted by the academic community. It was carefully conceived not to negatively impact a baseline level of privacy protection, as non-interactive public accountability had to destroy an existing strong privacy notion of transparency, which was identified as a hindrance to legal equivalence arguments. With RSS and SSS constructions that meet these properties, the thesis can give a positive answer to the research question: private MSS can reach a level of integrity protection and guarantee a level of accountability comparable to that of technical mechanisms that are legally accepted to generate qualified electronic signatures, giving an increased probative value to the signed document, while at the same time protecting the overwritten contents' confidentiality. KW - Integrity KW - Privacy KW - Redactable Signature Scheme (RSS) KW - Sanitizable Signature Scheme (SSS) KW - eIDAS KW - Integrität KW - Elektronische Unterschrift KW - Beweiswürdigung KW - Datenschutz KW - Vertraulichkeit Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5823 ER - TY - JOUR A1 - Herbold, Steffen A1 - Hautli-Janisz, Annette A1 - Heuer, Ute A1 - Kikteva, Zlata A1 - Trautsch, Alexander T1 - A large-scale comparison of human-written versus ChatGPT-generated essays JF - Scientific Reports N2 - ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse.
Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models; both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher in terms of quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13961 VL - 13 PB - Springer Nature ER - TY - JOUR A1 - Becher, Stefan A1 - Gerl, Armin ED - Sarne, Giuseppe Maria Luigi ED - Ma, Jianhua ED - Rosaci, Domenico ED - Srivastava, Gautam T1 - ConTra Preference Language: Privacy Preference Unification via Privacy Interfaces JF - Sensors N2 - After the enactment of the GDPR in 2018, many companies were forced to rethink their privacy management in order to comply with the new legal framework. These changes mostly affect the Controller, who must achieve GDPR-compliant privacy policies and management. However, measures to give users a better understanding of privacy, which is essential to generate legitimate interest in the Controller, are often skipped. We recommend addressing this issue by using privacy preference languages, whereby users define rules regarding their preferences for privacy handling. In the literature, preference languages only work with their corresponding privacy language, which limits their applicability. In this paper, we propose the ConTra preference language, which we envision will support users during privacy policy negotiation while meeting current technical and legal requirements. To this end, ConTra preferences are defined, showing the language's expressiveness, extensibility, and applicability in resource-limited IoT scenarios. In addition, we introduce a generic approach which provides privacy language compatibility for unified preference matching. KW - privacy KW - preference language KW - legal factors KW - GDPR KW - usability Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11218 SN - 1424-8220 VL - 22 IS - 14 PB - MDPI CY - Basel, Switzerland ER -