Fakultät für Informatik und Mathematik
Code injection attacks, like the one used in the high-profile 2017 Equifax breach, have become increasingly common, ranking at the top of OWASP's list of critical web application vulnerabilities. Injection attacks can also target embedded applications running on processors like ARM and Xtensa, exploiting memory bugs to maliciously alter a program's behavior or even take full control of a system. In particular, ARM's low power consumption without sacrificing performance is leading the industry to shift towards ARM processors, which in turn draws the attention of attackers.
In this thesis, we consider web applications and embedded applications (running on ARM and Xtensa processors) as the targets of injection attacks. To detect injection attacks in web applications, taint analysis is the most commonly proposed technique, but the precision, scalability, and runtime overhead of the detection depend on the type of analysis (e.g., static vs. dynamic, sound vs. unsound). Moreover, among existing dynamic taint tracking approaches for Java-based applications, even the most performant impose a slowdown of at least 10–20%, and often far more. For embedded applications, on the other hand, while some initial research has tried to detect injection attacks (i.e., return-oriented programming (ROP) and jump-oriented programming (JOP)) on ARM, these approaches suffer from high performance or storage overhead. Moreover, the Xtensa platform has been neglected, even though it is used in many firmware-based embedded WiFi home automation devices.
This thesis aims to provide novel approaches to precisely detect injection attacks on both web and embedded applications. To that end, we evaluate JavaScript static analysis frameworks in order to assess the security of a hybrid app (JavaScript & native) from an industrial partner, present Rivulet, a tool that precisely detects injection attacks in real-world Java-based applications, and investigate injection attack detection on ARM and Xtensa platforms using hardware performance counters (HPCs) and machine learning (ML) techniques.
To evaluate the security of the hybrid application, we first compare the precision, scalability, and code coverage of two widely used static analysis frameworks, WALA and SAFE. Our comparison shows that SAFE provides higher precision and better code coverage at the cost of somewhat lower scalability. Based on these results, we analyze the data flows of the hybrid app by extending SAFE's taint analysis and detect potential injection vulnerabilities in the hybrid application.
Similarly, to detect injection attacks in Java-based applications, we present Rivulet, which monitors the execution of developer-written functional tests using dynamic taint tracking. Rivulet uses a white-box test generation technique to re-purpose those functional tests to check whether any vulnerable flow can actually be exploited. We compared Rivulet to the state-of-the-art static vulnerability detector Julia on benchmarks; Rivulet outperformed Julia with respect to both false positives and false negatives. We also used Rivulet to detect new vulnerabilities.
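To make the underlying mechanism concrete, the following minimal Python sketch illustrates dynamic taint tracking in general: data from an untrusted source carries a taint mark that propagates through string operations and raises an alarm when it reaches a security-sensitive sink. This is not Rivulet's implementation (Rivulet instruments Java applications); all names below are hypothetical.

```python
# Minimal sketch of dynamic taint tracking (illustrative only; not Rivulet's API).

class Tainted(str):
    """A string that remembers it originated from an untrusted source."""

class TaintError(Exception):
    pass

def source(value: str) -> Tainted:
    # Mark data arriving from an untrusted source, e.g. an HTTP parameter.
    return Tainted(value)

def concat(a: str, b: str) -> str:
    # Propagate taint: the result is tainted if either operand is.
    result = str(a) + str(b)
    return Tainted(result) if isinstance(a, Tainted) or isinstance(b, Tainted) else result

def sql_sink(query: str) -> None:
    # A sink (e.g. query execution) must never receive unsanitized tainted data.
    if isinstance(query, Tainted):
        raise TaintError("tainted data reached a SQL sink: " + query)
    print("executing:", query)

user_input = source("' OR '1'='1")  # attacker-controlled value
query = concat("SELECT * FROM users WHERE name = '", concat(user_input, "'"))
try:
    sql_sink(query)  # raises TaintError: the vulnerable flow is detected
except TaintError as e:
    print("blocked:", e)
```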
Moreover, for applications running on ARM and Xtensa platforms, we investigate ROP attack detection by combining HPCs and ML techniques. We collect training data for the ML models by exploiting real-world vulnerable applications and small benchmarks. For ROP attack detection on ARM, we also implement an online monitor that labels a program's execution as benign or under attack and stops the execution as soon as an attack is detected. Evaluating our ROP attack detection approach on ARM yields a detection accuracy of 92% for the offline training and 75% for the online monitoring. Similarly, our ROP attack detection on the firmware-only Xtensa processor achieves an overall average detection accuracy of 79%.
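As an illustration of the offline training stage, the Python sketch below trains a classifier on windows of HPC readings labeled as benign or under attack. The feature names and the synthetic data are placeholders, not the counters or datasets actually used in the thesis.

```python
# Hedged sketch: classifying HPC sampling windows as benign vs. under attack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Each row is one sampling window of (hypothetical) counter readings:
# [instructions, branch_misses, returns_retired, cache_misses]
rng = np.random.default_rng(0)
benign = rng.normal(loc=[1000, 20, 50, 30], scale=10, size=(500, 4))
attack = rng.normal(loc=[1000, 60, 120, 45], scale=10, size=(500, 4))

X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = under attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# An online monitor would repeatedly read the counters, classify the current
# window, and halt the program once a window is labeled "under attack".
```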
Last but not least, this thesis shows how effective taint analysis is for precisely detecting injection attacks on web applications, and demonstrates the power of HPCs combined with machine learning for detecting control-flow injection attacks on ARM and Xtensa platforms.
Over the past decades, there has been unmistakable progress in IT security research, for example in the areas of system security and cryptography. It is, however, just as unmistakable that IT security problems persist in people's everyday lives. Presumably, this is due to the complexity of everyday situations, in which security mechanisms and device functionality, as well as their heterogeneity, interact with human understanding and everyday use in ways that are difficult to anticipate. To align scientific research better with people and their IT security needs, we therefore need to understand people's everyday lives better. The understanding of everyday life, however, is still underdeveloped in computer science. This contribution aims to define the research field of "security in the digitalization of everyday life" in order to give researchers the opportunity to pool their efforts in this area. On the one hand, we make proposals for delimiting the scope of computer science research in this field. On the other hand, by incorporating research methods from ethnography, which draws its insights from the admittedly subjective observation of the "everyday life" of many individual people, we want to contribute to the methodological advancement of interdisciplinary research in this field. IT security research can then specifically optimize existing solutions for genuine everyday usability and develop new fundamental security functionality for the concrete challenges of everyday life.
A Comprehensive Comparison of Fuzzy Extractor Schemes Employing Different Error Correction Codes
(2023)
This thesis deals with fuzzy extractors, security primitives often used in conjunction with Physical Unclonable Functions (PUFs). A fuzzy extractor works in two stages: The generation phase and the reproduction phase. In the generation phase, an Error Correction Code (ECC) is used to compute redundant bits for a given PUF response, which are then stored as helper data, and a key is extracted from the response. Then, in the reproduction phase, another (possibly noisy) PUF response can be used in conjunction with this helper data to extract the original key.
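To make the two phases concrete, here is a toy code-offset fuzzy extractor in Python built on a 5-bit repetition code. Real designs use stronger ECCs (e.g., BCH codes) and a proper randomness extractor; this sketch, including the hash-based key derivation, is illustrative only.

```python
# Toy code-offset fuzzy extractor with a repetition code (illustrative only).
import hashlib
import secrets

R = 5  # repetition factor: majority voting corrects up to 2 flips per block

def encode(bits):
    # Repetition-code encoder: repeat every message bit R times.
    return [b for b in bits for _ in range(R)]

def decode(bits):
    # Majority vote per R-bit block.
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def generate(response):
    """Generation phase: derive a key and public helper data from a PUF response."""
    message = [secrets.randbelow(2) for _ in range(len(response) // R)]
    helper = [r ^ c for r, c in zip(response, encode(message))]  # code offset
    key = hashlib.sha256(bytes(message)).hexdigest()
    return key, helper

def reproduce(noisy_response, helper):
    """Reproduction phase: recover the key from a noisy response and helper data."""
    message = decode([r ^ h for r, h in zip(noisy_response, helper)])
    return hashlib.sha256(bytes(message)).hexdigest()

response = [secrets.randbelow(2) for _ in range(40)]  # stand-in PUF response
key, helper = generate(response)
noisy = list(response)
noisy[3] ^= 1; noisy[17] ^= 1  # two bit flips in different blocks
assert reproduce(noisy, helper) == key  # the original key is recovered
```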
It is clear that the performance of the fuzzy extractor is strongly dependent on the underlying ECC. Therefore, a comparison of ECCs in the context of fuzzy extractors is essential in order to make them as suitable as possible for a given situation. It is important to note that due to the plethora of various PUFs with different characteristics, it is very unrealistic to propose a single metric by which the suitability of a given ECC can be measured.
First, we give a brief introduction to the topic, followed by a detailed description of the background of the ECCs and fuzzy extractors studied. Then, we summarise related work and describe an implementation of the ECCs under consideration. Finally, we carry out the actual comparison of the ECCs and the thesis concludes with a summary of the results and suggestions for future work.
Understanding financial data has always been a point of interest for market participants seeking to make better-informed decisions. Recently, several cutting-edge technologies have been addressed in the Financial Technology (FinTech) domain, including numeracy understanding, opinion mining, and financial document processing.
In this thesis, we are interested in analyzing the arguments of financial experts with the goal of supporting investment decisions. Although various business studies confirm the crucial role of argumentation in financial communications, no work has addressed this problem as a computational argumentation task, i.e., the automatic analysis of arguments. In this regard, this thesis presents contributions along the three essential axes of theory, data, and evaluation to fill the gap between argument mining and financial text.
First, we propose a method for determining the structure of the arguments stated by company representatives during the public announcement of their quarterly results and future estimations through earnings conference calls. The proposed scheme is derived from argumentation theory at the micro-structure level of discourse. We further conducted the corresponding annotation study and published the first financial dataset annotated with arguments: FinArg.
Moreover, we investigate the question of evaluating the quality of arguments in this financial genre of text. To tackle this challenge, we suggest using two levels of quality metrics, considering both the Natural Language Processing (NLP) literature on argument quality assessment and the peculiarities of the financial domain.
Hence, we have also enriched the FinArg data with our quality dimensions to produce the FinArgQuality dataset.
In terms of evaluation, we validate the principle of ensemble learning on the argument identification and argument unit classification tasks. We show that combining a traditional machine learning model with a deep learning one, via an integration model (stacking), improves the overall performance, especially in small-dataset settings.
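A minimal sketch of such a stacking setup, using scikit-learn stand-ins (an SVM as the traditional model, a small MLP in place of the deep model, and a logistic regression as the integration model); the actual models and data in the thesis differ.

```python
# Hedged sketch of stacking a traditional model with a neural one.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),         # traditional ML
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),  # neural stand-in
    ],
    final_estimator=LogisticRegression(),  # integration (meta) model
    cv=5,
)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=3).mean())
```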
In addition, although argument mining is mainly a domain-dependent task, the number of studies that tackle the generalization of argument mining models is, to date, still relatively small. Therefore, using our stacking approach, and in comparison to the transfer learning model DistilBERT, we address and analyze three real-world scenarios concerning model robustness on completely unseen domains and unseen topics.
Furthermore, with the aim of automatically assessing argument strength, we have investigated and compared different (refined) versions of BERT-based models that incorporate external knowledge in the decision layer. Our method outperforms the baseline model by 13 ± 2% in terms of F1-score by integrating BERT with encoded categorical features.
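A hedged sketch of this kind of decision layer: the BERT [CLS] embedding is concatenated with an embedded categorical feature before the final classifier. The encoder output is passed in as a plain tensor so the sketch stays self-contained; all dimensions and names are illustrative.

```python
# Hedged sketch: injecting encoded categorical features into the decision layer.
import torch
import torch.nn as nn

class ArgStrengthHead(nn.Module):
    def __init__(self, bert_dim=768, n_categories=10, cat_dim=32, n_classes=2):
        super().__init__()
        self.cat_embed = nn.Embedding(n_categories, cat_dim)
        self.classifier = nn.Linear(bert_dim + cat_dim, n_classes)

    def forward(self, cls_embedding, category_ids):
        # Concatenate the sentence embedding with the encoded categorical
        # feature, then apply the final decision layer.
        cat = self.cat_embed(category_ids)
        return self.classifier(torch.cat([cls_embedding, cat], dim=-1))

head = ArgStrengthHead()
cls = torch.randn(4, 768)          # batch of BERT [CLS] embeddings
cats = torch.tensor([0, 3, 7, 1])  # e.g. hypothetical sector or speaker codes
print(head(cls, cats).shape)       # torch.Size([4, 2])
```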
Beyond our theoretical and methodological proposals, our model of argument quality assessment, annotated corpora, and evaluation approaches are publicly available, and can serve as strong baselines for future work in both FinNLP and computational argumentation domains.
Hence, directly building on this thesis, we proposed a new task/challenge related to the analysis of financial arguments to the community: FinArg-1, within the framework of the NTCIR-17 conference.
We also used our proposals to participate in the Touché challenge at the CLEF 2021 conference. Our contribution was selected among the «Best of Labs».
Visibility problems such as the following are among the fundamental problems of computational geometry: given a simple polygon, the so-called channel, and a point contained in it, compute the set of points visible from that point. Here, a point is visible from another point if the line segment connecting them does not leave the channel. In this thesis we study circular visibility: to connect two points, not only line segments but also circular arcs are permitted. Moreover, as the origin of these so-called visibility arcs and segments, we consider an edge of the channel instead of a single point. Concretely, this thesis contributes to the numerically robust computation of the circular visibility set of an edge of the channel.
To this end, the thesis presents an algorithm that decides, for a given point, whether it is visible from the starting edge. If the point is visible, a visibility arc with two channel contacts is computed. Hence, with a suitable choice of the query point, which acts as a third channel contact, the algorithm can be used directly to compute the so-called boundary arcs of the visibility set. These define the boundary of the circular visibility set and are characterized by touching the channel three times, alternately from the left and from the right.
The algorithm is based on the examination of those arcs that connect the starting edge with the query point but do not necessarily lie entirely within the channel. In particular, it examines the regions where such an arc leaves the channel, the so-called violations. Since the "severity" of a violation can be quantified, an iterative procedure becomes possible: the arc is modified step by step, keeping its endpoint fixed, so that it leaves the channel "less and less". If the endpoint, and thus the query point, is not visible, the algorithm detects in the course of its execution that no such improvement is possible. The presented algorithm is numerically robust, easy to implement, and its running time is linear in the number of channel vertices.
After the enactment of the GDPR in 2018, many companies were forced to rethink their privacy management in order to comply with the new legal framework. These changes mostly affect the Controller, who must achieve GDPR-compliant privacy policies and management. However, measures to give users a better understanding of privacy, which is essential to generate legitimate interest in the Controller, are often skipped. We recommend addressing this issue through privacy preference languages, in which users define rules regarding their preferences for privacy handling. In the literature, preference languages work only with their corresponding privacy language, which limits their applicability. In this paper, we propose the ConTra preference language, which we envision supporting users during privacy policy negotiation while meeting current technical and legal requirements. We define ConTra preferences and show the language's expressiveness, extensibility, and applicability in resource-limited IoT scenarios. In addition, we introduce a generic approach which provides privacy language compatibility for unified preference matching.
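To illustrate the general idea of preference matching (the rule format below is hypothetical, not ConTra's actual syntax, which the paper defines), here is a small Python sketch in which user-defined rules are checked against a policy:

```python
# Illustrative preference matching; the rule format is hypothetical.
preferences = [
    {"purpose": "advertising", "allow": False},
    {"purpose": "service-provision", "allow": True, "max_retention_days": 90},
]

policy = {"purpose": "service-provision", "retention_days": 30}

def matches(policy, preferences):
    for rule in preferences:
        if rule["purpose"] != policy["purpose"]:
            continue
        if not rule["allow"]:
            return False
        limit = rule.get("max_retention_days")
        return limit is None or policy["retention_days"] <= limit
    return False  # no applicable rule: deny by default

print(matches(policy, preferences))  # True: purpose allowed, retention within limit
```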
This thesis investigates the quality of randomly collected data by employing a framework built on information-based complexity, a field related to the numerical analysis of abstract problems. The quality, or power, of gathered information is measured by its radius, which is the uniform error obtainable by the best possible algorithm using it. The main aim is to present progress towards understanding the power of random information for approximation and integration problems.
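For orientation, the radius of an information mapping N for a solution operator S on a problem class F is the worst-case error of the best algorithm that uses only the information N(f), a standard definition in information-based complexity:

```latex
\operatorname{rad}(N) \;=\; \inf_{\varphi}\; \sup_{f \in F}\;
  \bigl\| S(f) - \varphi\bigl(N(f)\bigr) \bigr\|,
```

where the infimum ranges over all algorithms \varphi mapping information values to elements of the target space.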
In the first problem considered, information given by linear functionals is used to recover vectors, in particular from generalized ellipsoids. This is related to the approximation of diagonal operators which are important objects of study in the theory of function spaces. We obtain upper bounds on the radius of random information both in a convex and a quasi-normed setting, which extend and, in some cases, improve existing results. We conjecture and partially establish that the power of random information is subject to a dichotomy determined by the decay of the length of the semiaxes of the generalized ellipsoid.
Second, we study multivariate approximation and integration using information given by function values at sampling point sets. We obtain an asymptotic characterization of the radius of information in terms of a geometric measure of equidistribution, the distortion, which is well known in the theory of quantization of measures. This holds for isotropic Sobolev as well as Hölder and Triebel-Lizorkin spaces on bounded convex domains. We obtain that for these spaces, depending on the parameters involved, typical point sets are either asymptotically optimal or worse by a logarithmic factor, again extending and improving existing results.
Further, we study isotropic discrepancy which is related to numerical integration using linear algorithms with equal weights. In particular, we analyze the quality of lattice point sets with respect to this criterion and obtain that they are suboptimal compared to uniform random points. This is in contrast to the approximation of Sobolev functions and resolves an open question raised in the context of a possible low discrepancy construction on the two-dimensional sphere.
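For reference, the isotropic discrepancy of a point set measures equidistribution with respect to convex test sets; a standard form for points x_1, ..., x_n in the unit cube is

```latex
D^{\mathrm{iso}}(x_1,\dots,x_n) \;=\;
  \sup_{C \subseteq [0,1]^d \ \mathrm{convex}}
  \left| \frac{\#\{\, i : x_i \in C \,\}}{n} - \lambda_d(C) \right|,
```

with \lambda_d the d-dimensional Lebesgue measure.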
The generalization of univariate splines to higher dimensions is not straightforward. There are different approaches, each with its own advantages and drawbacks. A promising approach using Delaunay configurations and simplex splines is due to Neamtu.
After recalling fundamentals of univariate splines, simplex splines, and the well-known multivariate DMS-splines, we address Neamtu's DCB-splines. He defined two variants, which we refer to as the nonpooled and the pooled approach, respectively. Regarding these spline spaces, we contribute the following results.
We prove that, under suitable assumptions on the knot set, both variants exhibit the local finiteness property, i.e., these spline spaces are locally finite-dimensional and at each point only a finite number of basis candidate functions have a nonzero value. Additionally, we establish a criterion guaranteeing these properties within a compact region under mitigated assumptions.
Moreover, we show that the knot insertion process known from univariate splines does not work for DCB-splines and explain why this behavior is inherent to these spline spaces. Furthermore, we provide a necessary criterion for the knot insertion property to hold for a specific inserted knot. This criterion is also sufficient for bivariate nonpooled DCB-splines of degrees zero and one. Numerical experiments suggest that the sufficiency also holds for arbitrary spline degrees.
Univariate functions can be approximated in terms of splines using the Schoenberg operator, where the approximation error decreases quadratically as the maximum distance between consecutive knots is reduced. We show that the Schoenberg operator can be defined analogously for both variants of DCB-splines with a similar error bound.
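For orientation, the classical univariate Schoenberg operator and its quadratic error bound read as follows (standard facts, stated informally; the thesis adapts the operator to both DCB-spline variants):

```latex
% Schoenberg operator for splines of degree k with knots (t_j) and B-splines B_j:
(Vf)(x) \;=\; \sum_j f\bigl(\xi_j^{*}\bigr)\, B_j(x),
\qquad
\xi_j^{*} \;=\; \frac{t_{j+1} + \dots + t_{j+k}}{k}
\quad \text{(Greville abscissae)},
```

and for f \in C^2 the error decreases quadratically in the maximal knot distance h:

```latex
\| f - Vf \|_{\infty} \;\le\; C\, h^{2}\, \| f'' \|_{\infty}.
```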
Additionally, we provide a counterexample showing that the basis candidate functions of nonpooled DCB-splines are not necessarily linearly independent, contrary to earlier statements in the literature. In particular, this implies that the corresponding functions are not a basis for the space of nonpooled DCB-splines.
Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. Connectivity is especially useful for automated vehicles, in order to connect them with other road users or cloud services. In particular for the latter, it is beneficial to establish a mobile network connection, as mobile networks are already widely deployed and no additional infrastructure is needed. The use of network communication, however, comes with certain requirements.
One of them is the reliability of the connection: certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver in which control is returned to the driver. Since such a handover takes time, a prediction that forecasts the network quality for the next few seconds is necessary. Predicting QoS parameters, especially Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are subject to fluctuation. In this thesis, a new approach for predicting Network Quality Parameters (NQPs) at the Transmission Control Protocol (TCP) level is presented. It combines knowledge of the environment with the low-level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models, including both Location-Smoothing (LS) grid maps and Learning-Based (LB) regression models. Moreover, the location independence of the models as well as their suitability for automated driving is evaluated.
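As an illustration of the learning-based (LB) regression side, the following Python sketch fits a throughput predictor on combined environment and radio features. Feature names and the synthetic data are placeholders, not the thesis's measurement setup.

```python
# Hedged sketch: learning-based throughput prediction from combined features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Hypothetical features: [RSRP_dBm, SINR_dB, speed_kmh, cell_load, dist_to_cell_m]
X = np.column_stack([
    rng.uniform(-120, -70, n), rng.uniform(-5, 25, n),
    rng.uniform(0, 130, n), rng.uniform(0, 1, n), rng.uniform(50, 2000, n),
])
# Synthetic throughput target, loosely driven by signal quality and cell load
y = 2.0 * (X[:, 1] + 5) * (1 - 0.5 * X[:, 3]) + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE [Mbit/s]:", mean_absolute_error(y_te, model.predict(X_te)))
```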