Fakultät für Informatik und Mathematik (Faculty of Computer Science and Mathematics)
Abstract for the dissertation fragment "Spotzuordnung und Wellenfront-Rekonstruktion für Shack-Hartmann-Sensoren" (Spot Assignment and Wavefront Reconstruction for Shack-Hartmann Sensors) by Sascha Groening (20.04.1972 - 16.11.2001)
After completing his studies in computer science at the University of Passau, Sascha Groening was, from 1 October 1998 to 16 November 2001, a research assistant in the research group Decision Support Systems within the research network Wissensbasierte Systeme (Knowledge-Based Systems), which in 2005 became part of the Institute for Software Systems in Technical Applications of Computer Science at the University of Passau.
Tragically, he died completely unexpectedly on 16 November 2001, at the age of only 28, shortly before the completion of his dissertation. In his dissertation he worked on problems from the subproject "Development of a measurement method with a high dynamic range for the quality assurance of optical aspheres and for the in-situ measurement of wavefronts" ("Wellensensor" for short) of the research network FORMIKROSYS II, which was funded by the Bayerische Forschungsstiftung (Bavarian Research Foundation).
Since he wrote his thesis on a private PC, only a rather incomplete electronic version of his dissertation "Spotzuordnung und Wellenfrontrekonstruktion für Shack-Hartmann-Sensoren" could be found in the institute's backups.
In 2013, however, his mother Monika Groening found the printout of his written thesis provided below, which corresponds to a state about two weeks before the planned submission and which is hereby made available to the public as a scan.
After the conclusion of the Wellensensor project, research in this area was not pursued any further at the institute, since the problem addressed by the project had been solved to the satisfaction of the partners who applied Sascha Groening's solution.
Within the project, the performance limits of a measurement device for optical aspheres (e.g. progressive spectacle lenses) and general wavefronts (e.g. the beam profile of a laser or the wavefront behind an optical subsystem) based on the Shack-Hartmann sensor were explored. The principle rests on the geometric-optical determination of local wavefront slopes using an array of microlenses and a CCD camera in the focal plane of the microlenses. Sascha Groening developed completely newly conceived evaluation algorithms that assign focal spots to microlenses quickly and reliably and thus make a highly accurate measurement of aspheres possible even for strongly aberrated wavefronts. An incident wavefront produces a characteristic spot pattern in the focal plane of the microlenses. By analysing the local deflections of the spots from their ideal positions, i.e. the positions that would arise for an incident plane wavefront, the local slope behaviour of the incident wavefront can be inferred. The larger the desired dynamic range of the instrument, the harder the problem of assigning spots to microlenses becomes. Precisely this challenge was solved quickly and elegantly by Sascha Groening with an iterative spline-fitting algorithm. Once the spot assignment is complete, the wavefront can be reconstructed from the local deflections of the spots.
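For reference, the basic geometric relation behind this measurement principle can be stated as follows (a standard Shack-Hartmann relation quoted here for orientation, not taken from the fragment itself):

    % Standard Shack-Hartmann relation: a microlens with focal length f maps the
    % local wavefront slope at its aperture to a lateral spot displacement in its
    % focal plane.
    \[
      \Delta x \approx f \, \frac{\partial W}{\partial x}(x_0, y_0),
      \qquad
      \Delta y \approx f \, \frac{\partial W}{\partial y}(x_0, y_0),
    \]
    % where W(x, y) is the incident wavefront, (x_0, y_0) the centre of the
    % microlens, and (\Delta x, \Delta y) the deflection of its focal spot from
    % the position produced by a plane wavefront.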
The fragment of the dissertation is complete except for Chapter 1 (Introduction) and Chapter 7 (Conclusion). In addition, Chapter 4 is still missing the descriptions of some of the investigated spot-detection methods, and the subsections of Chapters 5 and 6 are missing the methods for spot assignment and wavefront reconstruction for wavefronts that are not continuously differentiable or are discontinuous. Apart from that, the thesis offers a good overview of the state of the art at the time and explains the fundamental methods researched and developed by Sascha Groening. Chapter 2 explains in detail the working principle of the Shack-Hartmann sensor and the wavefront measurement, with particular attention to the spot-to-microlens assignment problem and to the global wavefront reconstruction from a field of partial derivatives.
Chapter 3 lays the mathematical foundations, such as tensor-product splines and the solution methods for linear least-squares problems with and without linear constraints. A first focus is Chapter 4, which deals with spot detection. After an explanation of the optical fundamentals, the formation of spots is modelled mathematically in order to derive suitable detection methods.
The central Chapter 5 of the dissertation is devoted to spot assignment. After a presentation of known methods, the newly developed spot assignment by iterative function fitting is described. Starting from an initial assignment of a few central spots, which is easily possible using side knowledge, the procedure then proceeds iteratively with spline fits: given an existing assignment, the computed spline fit is extrapolated to extend the search region to further, not yet assigned spots, which are then in turn assigned correctly. These steps are repeated iteratively until all spots are assigned. The strategy of extrapolating from known into unknown territory works admirably here. Sascha Groening also shows that success depends on the chosen function space, here tensor-product splines. Finally, the methods for wavefront reconstruction, which are likewise based on spline fits, round off the thesis in Chapter 6.
By making this fragment of his dissertation available, we want to honour the work Sascha Groening carried out at the University of Passau, and we hope that his results will find their place in the scientific community.
May 2015, Dr. Erich Fuchs
A group satisfies the Tits alternative if it either contains a non-abelian free subgroup of rank 2 or is virtually solvable, i.e. contains a solvable subgroup of finite index. This property goes back to J. Tits, who proved it for finitely generated linear groups. We study a relevant class of finitely presented groups with respect to the Tits alternative. The groups under consideration generalize Pride groups and the groups defined by periodically paired relations on three generators studied by Vinberg. In addition, these groups arise as fundamental groups of hyperbolic orbifolds. We prove the Tits alternative under certain assumptions on the presentations of the groups considered. Several methods are used for this proof: on the one hand, homomorphic images of the groups are considered; on the other hand, the existence of essential representations into a linear group is established. Based on these representations, the existence of non-abelian free subgroups can be shown in many cases. In addition, to prove finiteness, and hence the Tits alternative, for some of the groups, a method based on computations of Gröbner bases in non-commutative polynomial rings is applied: the dimensions of the group rings, regarded as vector spaces, are computed. For the class of groups considered, the Tits alternative is proved completely for relations of block length 1. As a corollary, we obtain a classification of the finite groups among them.
This thesis addresses a problem related to usage analysis in information retrieval systems. We exploit the history of search queries as the basis of analysis to extract a profile model. The objective is to characterize the users and the data sources that interact in a system in order to allow different types of comparison (user-to-user, source-to-source, user-to-source). From the study we conducted of previous work on profile models, we concluded that the large majority of the contributions are strongly tied to the applications within which they were proposed. As a result, the proposed profile models are not reusable and suffer from several weaknesses: for instance, they do not consider the data source, they lack semantic mechanisms, and they do not address scalability (in terms of complexity). Therefore, we propose a generic model of user and data source profiles with the following characteristics. First, it is generic, being able to represent both the user and the data source. Second, it allows profiles to be constructed implicitly from histories of search queries. Third, it defines a profile as a set of topics of interest, each topic corresponding to a semantic cluster of keywords extracted by a specific clustering algorithm. Finally, the profile is represented according to the vector space model. The model is composed of several components organized in the form of a framework, in which we assessed the complexity of each component.
The main components of the framework are:
• a method for disambiguating keyword queries;
• a method for semantically representing search query logs in the form of a taxonomy;
• a clustering algorithm that allows fast and efficient identification of topics of interest as semantic clusters of keywords;
• a method to identify user and data source profiles according to the generic model (a sketch of the resulting profile representation follows below).
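To make the vector-space representation concrete, the following is a minimal illustrative sketch (not the thesis' implementation): a profile is a set of topics, each topic a weighted keyword vector, and user-to-user or user-to-source comparison reduces to cosine similarity between best-matching topics. Keyword weights and topic pairing are hypothetical choices made for the example.

    import math

    def cosine(u: dict[str, float], v: dict[str, float]) -> float:
        """Cosine similarity between two sparse keyword-weight vectors."""
        dot = sum(w * v.get(k, 0.0) for k, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def profile_similarity(p: list[dict[str, float]], q: list[dict[str, float]]) -> float:
        """Compare two profiles (sets of topic vectors) by best-matching topics.

        Illustrative only: the thesis' actual comparison operators may differ.
        """
        if not p or not q:
            return 0.0
        return sum(max(cosine(t, s) for s in q) for t in p) / len(p)

    # Hypothetical example: a user profile and a data-source profile, each a set
    # of topics of interest represented in the vector space model.
    user = [{"jaguar": 0.8, "car": 0.6}, {"python": 0.9, "tutorial": 0.4}]
    source = [{"car": 0.7, "engine": 0.7}, {"python": 0.5, "snake": 0.8}]
    print(profile_similarity(user, source))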
This framework makes it possible, in particular, to perform various tasks related to the usage-based structuring of a distributed environment. As example applications, the framework is used for the discovery of user communities and for the categorization of data sources. To validate the proposed framework, we conduct a series of experiments on real logs from the AOL search engine, which demonstrate the efficiency of the disambiguation method on short queries and show the relation between quality-based clustering and structure-based clustering.
Quantum computing is an emerging technology that has the potential to change the perspectives and applications of computing in general. It enables a wide range of applications: from faster algorithmic solutions of classically still difficult problems to theoretically more secure communication protocols. A quantum computer uses the quantum mechanical effects of particles or particle-like systems, and a major similarity between quantum and classical computers is that both are abstracted as information-processing machines. Whereas a classical computer operates on classical digital information, a quantum computer processes quantum information, which shares similarities with analog signals. One of the central differences between the two types of information is that classical information is more fault-tolerant than its quantum counterpart.
Faults are the result of the quantum systems being interfered with by external noise, but during the last decades quantum error correction codes (QECC) have been proposed as methods to reduce the effect of noise. Reliable quantum circuits are obtained by designing circuits that operate directly on encoded quantum information, but a circuit's reliability can be increased further by supplemental redundancies, such as sub-circuit repetitions.
Reliable quantum circuits have not been widely used, and one of the major obstacles is their vast associated resource overhead, but recent quantum computing architectures show promising scalability. Consequently, the number of particles used for computing can be increased more easily, and the classical control hardware (inherent to quantum computation) is becoming more reliable. Reliable quantum circuits have been investigated for almost as long as general quantum computing, but their limited adoption (until recently) has not generated enough interest in their systematic design.
The continuously increasing practical relevance of reliability motivates the present thesis, which investigates some of the first answers to questions about the background and the methods that form a reliable quantum circuit design stack.
The specifics of quantum circuits are analysed from two perspectives: their probabilistic behaviour and their topological properties when a particular class of QECCs is used. Quantum phenomena, such as entanglement and superposition, are the computational resources used for designing quantum circuits. The discrete nature of classical information is missing for quantum information: an arbitrary quantum system can be in an infinite number of states, which are linear combinations of an exponential number of basis states. Any nontrivial linear combination of more than one basis state is called a state superposition. The effect of superpositions becomes evident when the state of the system is inferred (measured), as measurements are probabilistic with respect to their output: a nontrivial state superposition collapses to one of the component basis states, and the measurement result is known exactly only after the measurement.
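As a minimal illustration of this probabilistic behaviour (a generic textbook example, not taken from the thesis), the following sketch samples measurement outcomes of a two-qubit state vector: outcome probabilities are the squared amplitude magnitudes, and the state collapses to the observed basis state.

    from collections import Counter
    import numpy as np

    def measure(state: np.ndarray, rng=np.random.default_rng()):
        """Measure an n-qubit state vector in the computational basis.

        Returns the sampled basis-state index and the collapsed state.
        """
        probs = np.abs(state) ** 2          # Born rule: |amplitude|^2
        outcome = rng.choice(len(state), p=probs / probs.sum())
        collapsed = np.zeros_like(state)
        collapsed[outcome] = 1.0            # collapse to the observed basis state
        return outcome, collapsed

    # Example: an equal superposition of |00> and |11> (a Bell state).
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    counts = Counter(measure(bell)[0] for _ in range(1000))
    print(counts)  # roughly 500/500 between index 0 (|00>) and index 3 (|11>)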
A quantum system is, in general, composed of identical subsystems, meaning that a quantum computer (the complete system) operates on multiple similar particles (subsystems). Entanglement expresses the impossibility of separating the state of the subsystems from the state of the complete system: the nontrivial interactions between the subsystems result in a single indivisible state. Entanglement is an additional source of probabilistic behaviour: by measuring the state of one subsystem, the states of the unmeasured subsystems collapse probabilistically to states from a well-defined set of possible states. Superposition and entanglement are the building blocks of quantum information teleportation protocols, which in turn are used in state-of-the-art fault-tolerant quantum computing architectures. Information teleportation means that the state of a subsystem is moved to a second subsystem without copying any information during the process.
The probabilistic approach towards the design of quantum circuits is initiated by extending classical test and diagnosis methods. Quantum circuits are modelled similarly to classical circuits by defining gate-lists, and missing quantum gates are modelled by the single-missing-gate fault. The probabilistic treatment of quantum circuits is facilitated by comparing them to stochastic circuits, a particular type of classical digital circuit. Stochastic circuits can be considered an emulation of analogue computing using digital components.
A first proposed design method, based on this direct comparison, is the simulation of quantum circuits using stochastic circuits, by mapping each quantum gate to a stochastic computing sub-circuit. The resulting stochastic circuit is compiled and simulated on FPGAs. The obtained results are encouraging and illustrate the capabilities of the proposed simulation technique. However, the exponential number of possible quantum basis states translates into an exponential number of stochastic computing elements.
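For readers unfamiliar with stochastic computing, the following toy sketch shows its core idea (generic background, not the thesis' FPGA mapping): a value in [0, 1] is encoded as the probability of a 1 in a random bitstream, and multiplication then reduces to a bitwise AND of two independent streams.

    import numpy as np

    rng = np.random.default_rng(0)

    def encode(p: float, n: int) -> np.ndarray:
        """Encode a probability p as a length-n random bitstream with P(bit = 1) = p."""
        return (rng.random(n) < p).astype(np.uint8)

    def decode(stream: np.ndarray) -> float:
        """Decode a bitstream back to a value: the fraction of 1s."""
        return float(stream.mean())

    n = 100_000
    a, b = 0.3, 0.6
    # Bitwise AND of two independent streams multiplies the encoded values.
    product = decode(encode(a, n) & encode(b, n))
    print(product)  # approximately 0.18 = 0.3 * 0.6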
A second contribution of the thesis is the proposal of test and diagnosis methods for both stochastic and quantum circuits. Existing verification (tomographic) methods for quantum circuits target the reconstruction of the gate-lists: the quantum circuit is executed repeatedly, each execution followed by a different but specific measurement at the circuit outputs. The similarities between stochastic and quantum circuits motivated the proposal of test and diagnosis methods that use a restricted set of measurement types, which minimises the number of circuit executions. The obtained simulation results show that the proposed validation methods improve the feasibility of quantum circuit tomography for small and medium-size circuits.
A third contribution of the thesis is the algorithmic formalisation of a problem encountered in teleportation-based quantum computing architectures. The teleportation results are probabilistic and require corrections represented as quantum gates from a particular set. However, there are known commutation properties of these gates with the gates used in the circuit. The corrections are not applied as dynamic gate insertions (during the circuit’s execution) into the gate-lists, but their effect is tracked through the circuit, and the corrections are applied only at circuit outputs. The simulation results show that the algorithmic solution is applicable for very large quantum circuits.
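A minimal sketch of this kind of correction tracking (a generic Pauli-frame tracker using the standard commutation rules for H, S and CNOT; it is not the thesis' algorithm, and the gate set and data layout are assumptions of the example):

    # Pauli-frame tracking: instead of inserting correction gates into the circuit,
    # pending X/Z corrections per qubit are pushed through the Clifford gates and
    # applied only at the outputs.

    def track(num_qubits, circuit, initial_corrections):
        """circuit: list of ('H', q), ('S', q) or ('CNOT', control, target).
        initial_corrections: dict qubit -> set of pending Paulis, e.g. {0: {'X'}}.
        Returns the pending (x, z) correction flags per qubit at the outputs."""
        x = [False] * num_qubits
        z = [False] * num_qubits
        for q, paulis in initial_corrections.items():
            x[q] = 'X' in paulis
            z[q] = 'Z' in paulis
        for gate in circuit:
            if gate[0] == 'H':              # H exchanges X and Z
                q = gate[1]
                x[q], z[q] = z[q], x[q]
            elif gate[0] == 'S':            # S maps X to Y (= X and Z), leaves Z fixed
                q = gate[1]
                z[q] ^= x[q]
            elif gate[0] == 'CNOT':         # X copies control -> target, Z copies target -> control
                c, t = gate[1], gate[2]
                x[t] ^= x[c]
                z[c] ^= z[t]
        return list(zip(x, z))

    # Hypothetical example: a pending X correction on qubit 0 from a teleportation,
    # pushed through H(0); CNOT(0,1) -- it becomes a Z on qubit 0, nothing on qubit 1.
    print(track(2, [('H', 0), ('CNOT', 0, 1)], {0: {'X'}}))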
Topological quantum computing (TQC) is based on a class of fault-tolerant quantum circuits that use the surface code as the underlying QECC. Quantum information is encoded in lattice-like structures and error protection is enabled by the topological properties of the lattice. The 3D structure of the lattice allows TQC computations to be visualised similarly to knot diagrams. Logical information is abstracted as strands and strand interactions (braids) represent logical quantum gates. Therefore, TQC circuits are abstracted using a geometrical description, which allows circuit input-output transformations (correlations) to be represented as geometric sub-structures.
TQC design methods were not investigated prior to this work, and the thesis introduces the topological computational model by first analysing the necessary concepts. The proposed TQC design stack follows a top-down approach: an arbitrary quantum circuit is decomposed into the TQC supported gate set; the resulting circuit is mapped to a lattice of appropriate dimensions; relevant resulting topological properties are extracted and expressed using graphs and Boolean formulas. Both circuit representations are novel and applicable to TQC circuit synthesis and validation. Moreover, the Boolean formalism is broadened into a formal mechanism for proving circuit correctness.
The thesis introduces TQC circuit synthesis, which is based on a novel logical gate geometric description, whose formal correctness is demonstrated. Two synthesis methods are designed, and both use a general planar representation of the circuit. Initial simulation results demonstrate the practicality and performance of the methods.
An additional group of proposed design methods solves the problem of automatic correlation construction. The methods use validity criteria which were introduced and analysed beforehand in the thesis. Input-output correlations existing in the circuit are inferred using both the graph and the Boolean representation.
The thesis extends the TQC state-of-the-art by recognising the importance of correlations in the validation process: correlation construction is used as a sub-routine for TQC circuit validation. The presented cross-layer validation procedure is useful when investigating both the QECC and the circuit, while a second proposed method is QECC-independent. Both methods are scalable and applicable even to very large circuits.
The thesis concludes with the analysis of TQC circuit identities, where the developed Boolean formalism is used. The proofs of previously known circuit identities were either missing or complex, and the presented approach reduces the length of the proofs and represents a first step towards standardising them. A new identity is developed and detailed in the process of illustrating the known circuit identities.
Reliable quantum circuits are a necessity for quantum computing to become reality, and specialised design methods are required to support the quest for scalable quantum computers. This thesis used a twofold approach towards this target: firstly by focusing on the probabilistic behaviour of quantum circuits, and secondly by considering the requirements of a promising quantum computing architecture, namely TQC. Both approaches resulted in a set of design methods enabling the investigation of reliable quantum circuits.
The thesis contributes with the proposal of a new quantum simulation technique, novel and practical test and diagnosis methods for general quantum circuits, the proposal of the TQC design stack and the set of design methods that form the stack. The mapping, synthesis and validation of TQC circuits were developed and evaluated based on a novel and promising formalism that enabled checking circuit correctness.
Future work will focus on improving the understanding of TQC circuit identities as it is hoped that these are the key for circuit compaction and optimisation. Improvements to the stochastic circuit simulation technique have the potential of spawning new insights about quantum circuits in general.
When, in the first half of the 19th century, more and more eminent mathematicians occupied themselves with the search for invariants, nobody could of course foresee that, with the beginning of the computer age, invariant theory would find an extremely fruitful field of application in image processing and computer vision. This thesis presents a new way of applying invariant theory in image processing. To this end, local image features are considered: the coordinates of a polynomial function, with respect to a suitable orthonormal basis of P_n(R^2,R), that best approximates the time-integrated sensor input function on local pixel windows. These image features are used in many applications to recognize and localize objects in images; examples are the detection of workpieces on a conveyor belt or the tracking of lane markings in driver assistance systems. The search for a pattern in a search image can be modelled as a pair of stereo images on which the affine-linear group AGL(R) acts locally. To decide whether two local pixel windows are approximately images of a certain three-dimensional surface patch, one therefore has to decide whether the image patches can be approximately transformed into one another by an operation of the group AGL(R). Depending on the application, it already suffices to consider suitable subgroups G of AGL(R). Thanks to the local approximation by polynomial functions, the action of a subgroup G induces an action on the real vector space P_n(R^2,R). The correspondence problem thus reduces to the question whether there is a transformation T in G such that p approximately equals the composition of q and T, for the associated approximation polynomials p, q in P_n(R^2,R). In other words, one has to decide whether p and q lie approximately in one G-orbit, a typical question of invariant theory. Since only local image patches are considered, it further suffices to consider subgroups G of GL_2(R); the answer for the semidirect product of R^2 with G then follows immediately. Particularly interesting for applications is the special orthogonal group G = SO_2(R), and hence altogether the proper Euclidean group. For this group and special pixel windows the correspondence problem has already been solved. In this thesis the problem is solved in exactly this setting as well, but in an elegant way using methods of invariant theory. The approach presented here is, however, not limited to this group and these special pixel windows, but can easily be extended to further cases. In particular, this requires clarifying how so-called fundamental invariants of local image features, i.e. ultimately invariants of polynomial functions, can be computed, that is, generating systems of the corresponding invariant rings. With their help, the membership of a polynomial function in the orbit of another function can be checked in a simple way.
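To illustrate how fundamental invariants decide orbit membership, consider a deliberately simple example that is not taken from the thesis: the rotation group acting on the plane.

    % Toy example (not from the thesis): G = SO_2(R) acting on R^2 by rotations.
    % Its invariant ring R[x, y]^G is generated by the single fundamental invariant
    \[
      I(x, y) = x^2 + y^2 ,
    \]
    % so two points p, q in R^2 lie in the same G-orbit if and only if I(p) = I(q),
    % i.e. they have the same distance from the origin. In the same spirit, a
    % generating system I_1, ..., I_m of the invariant ring of the induced action
    % on P_n(R^2, R) reduces the (approximate) orbit-membership test for two
    % approximation polynomials p and q to comparing the invariant values I_j(p)
    % and I_j(q).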
Besides presenting the correspondence-finding method and the theory required for it, this thesis studies generating systems of invariant rings that have particularly "nice" properties. These nice generating systems of subalgebras are called SAGBI bases ("Subalgebra Analogs to Gröbner Bases for Ideals"), in analogy to Gröbner bases as generating systems of ideals. SAGBI bases are treated here mainly from an algorithmic point of view, i.e. the focus is on the computation of SAGBI bases. To this end, several algorithms are developed, proved correct, and implemented. The result is a software package for SAGBI bases for the computer algebra system ApCoCoA whose functionality, in this scope, is not to be found in any other computer algebra system. In the course of implementing the individual algorithms, the theory of SAGBI bases could moreover be extended at numerous points.
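A classical textbook example of a SAGBI basis, quoted as general background rather than as a result of the thesis:

    % Classical example (not specific to the thesis): let S be the subalgebra of
    % symmetric polynomials in K[x, y]. With respect to a term ordering with x > y,
    % the elementary symmetric polynomials
    \[
      e_1 = x + y , \qquad e_2 = xy
    \]
    % form a SAGBI basis of S: their leading terms x and xy generate the initial
    % algebra of S, so every symmetric polynomial can be "subduced" to zero by
    % repeatedly subtracting suitable products c * e_1^a * e_2^b that cancel its
    % leading term, in analogy to division with remainder by a Gröbner basis.
    % In contrast to the ideal case, a finitely generated subalgebra need not
    % possess a finite SAGBI basis for a given term ordering.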
This doctoral thesis is dedicated to the analysis and the design of
symmetric cryptographic algorithms.
In the first part of the dissertation, we deal with fault-based attacks
on cryptographic circuits, which belong to the field of active implementation
attacks and aim to retrieve secret keys stored on such chips. Our main focus
lies on the cryptanalytic aspects of those attacks. In particular, we target
block ciphers with a lightweight and (often) non-bijective key schedule where
the derived subkeys are (almost) independent from each other. An attacker who is
able to reconstruct one of the subkeys is thus not necessarily able to directly
retrieve other subkeys or even the secret master key by simply reversing the key
schedule. We introduce a framework based on differential fault analysis that makes it possible to attack block ciphers that use an arbitrary number of independent subkeys and that rely on a substitution-permutation network. These methods are then
applied to the lightweight block ciphers LED and PRINCE and we show in both
cases how to recover the secret master key requiring only a small number of
fault injections. Moreover, we investigate approaches that utilize algebraic
instead of differential techniques for the fault analysis and discuss advantages
and drawbacks. At the end of the first part of the dissertation, we explore
fault-based attacks on the block cipher Bel-T, which also has a lightweight key schedule but is based on the so-called Lai-Massey scheme rather than on a substitution-permutation network. The framework mentioned above is therefore not applicable to Bel-T. Nevertheless, we also present techniques for the case of
Bel-T that enable full recovery of the secret key in a very efficient way using
differential fault analysis.
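As a rough illustration of the differential-fault-analysis principle (a generic toy sketch with a made-up 4-bit S-box and a single-nibble last round; it is not the actual attack on LED, PRINCE or Bel-T):

    import random

    # Toy 4-bit S-box (an arbitrary permutation chosen for illustration only).
    SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
    INV_SBOX = [SBOX.index(i) for i in range(16)]

    def last_round(state: int, key: int) -> int:
        """Toy last round of an SPN: S-box, then key addition."""
        return SBOX[state] ^ key

    def dfa_candidates(c: int, c_faulty: int, fault_diffs=frozenset({1, 2, 4, 8})) -> set[int]:
        """Keys consistent with one faulty encryption under a single-bit fault model.

        The fault flips one bit of the S-box input; for each key guess we invert the
        last round for both ciphertexts and keep the guess if the recovered input
        difference matches the fault model.
        """
        return {k for k in range(16)
                if INV_SBOX[c ^ k] ^ INV_SBOX[c_faulty ^ k] in fault_diffs}

    # Hypothetical demo: intersecting candidate sets over several fault injections
    # narrows down the subkey; the true key always remains a candidate.
    secret_key, state = 0xB, 0x3
    c = last_round(state, secret_key)
    candidates = set(range(16))
    for _ in range(4):
        fault = 1 << random.randrange(4)               # single-bit fault on the S-box input
        c_faulty = last_round(state ^ fault, secret_key)
        candidates &= dfa_candidates(c, c_faulty)
    print(secret_key in candidates, candidates)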
In the second part of the thesis, we focus on authenticated encryption
schemes. While regular ciphers only protect privacy of processed data,
authenticated encryption schemes also secure its authenticity and integrity.
Many of these ciphers are additionally able to protect authenticity and
integrity of so-called associated data. This type of data is transmitted
unencrypted but nevertheless must be protected from being tampered with during
transmission. Authenticated encryption is nowadays the standard technique to
protect in-transit data. However, most of the currently deployed schemes have
deficits and there are many leverage points for improvements. With NORX we
introduce a novel authenticated encryption scheme supporting associated data.
This algorithm was designed with high security, efficiency in both hardware and
software, simplicity, and robustness against side-channel attacks in mind. Alongside its specification, we present its special features, security goals, implementation details, and extensive performance measurements, and we discuss its advantages over currently deployed standards. Finally, we describe our
preliminary security analysis where we investigate differential and rotational
properties of NORX. Particularly noteworthy are the newly developed
techniques for differential cryptanalysis of NORX which exploit the power of
SAT- and SMT-solvers and have the potential to be easily adaptable to other
encryption schemes as well.
This doctoral thesis is devoted to generalizing border bases to the module setting and to applying them in various ways.
First, we generalize the theory of border bases to finitely generated modules over a polynomial ring. We characterize these generalized border bases and show that we can compute them. As an application, we are able to characterize subideal border bases in various new ways and give a new algorithm for their computation. Moreover, we prove Schreyer's Theorem for border bases of submodules of free modules of finite rank over a polynomial ring.
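For orientation, a small classical example of a border basis in the ideal case (general background, not an example from the thesis):

    % Classical example (ideal case, not from the thesis): consider the order ideal
    % O = {1, x, y} in K[x, y] with border dO = {x^2, xy, y^2}. The vanishing ideal
    % of the three points (0,0), (1,0), (0,1) has the O-border basis
    \[
      \{\, x^2 - x, \; xy, \; y^2 - y \,\},
    \]
    % i.e. each border term is rewritten, modulo the ideal, as a linear combination
    % of the terms in O, and the residue classes of O form a vector-space basis of
    % the quotient ring. The thesis generalizes this notion from ideals to finitely
    % generated modules over the polynomial ring.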
In the second part of this thesis, we study the effect of homogenization on border bases of zero-dimensional ideals. This yields the new concept of projective border bases of homogeneous one-dimensional ideals. We show that there is a one-to-one correspondence between projective border bases and zero-dimensional closed subschemes of weighted projective spaces that have no point on the hyperplane at infinity. Applying that correspondence, we can characterize in various ways uniform zero-dimensional closed subschemes of weighted projective spaces that have rational support over the base field. Finally, we introduce projective border basis schemes as specific subschemes of border basis schemes. We show that these projective border basis schemes parametrize all zero-dimensional closed subschemes of a weighted projective space whose defining ideals possess a projective border basis. Assuming that the base field is algebraically closed, we are able to prove that the set of all closed points of a projective border basis scheme that correspond to a uniform subscheme is a constructible set with respect to the Zariski topology.
Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use simplifies a developer's everyday work of maintaining and evolving software systems significantly and, hence, accounts for much of a developer's programming efficiency and productivity. This is also beneficial from a financial point of view, as programming errors are detected early and avoided in the development process; thus the use of static analysis tools reduces the overall software-development costs considerably.
In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques, such as preprocessors with #ifdef directives. Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection or deselection serves as an input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants.
Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code and, hence, may process it incorrectly, resulting in incomplete and often broken tool support.
We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for configurable-system development. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel.
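A deliberately simplified sketch of the underlying idea (a hypothetical representation, not the actual tool): instead of analyzing every variant separately, code fragments carry presence conditions, each fragment is analyzed once, and findings are reported together with their presence condition.

    from itertools import product

    # Hypothetical representation of #ifdef-annotated code: each fragment carries a
    # presence condition, i.e. the set of options that must be enabled for it to be
    # compiled (a real tool would use arbitrary boolean formulas and a SAT solver).
    fragments = [
        ({"A"},      "int x = 0;"),
        ({"A", "B"}, "x = x / 0;"),     # a defect, present only when A and B are enabled
        (set(),      "return x;"),
    ]
    options = {"A", "B"}

    def has_defect(code: str) -> bool:
        """Stand-in for a real static analysis of a single code fragment."""
        return "/ 0" in code

    def brute_force():
        """Analyze every variant separately: the analysis runs 2^|options| times."""
        findings = []
        for variant in product([False, True], repeat=len(options)):
            enabled = {o for o, on in zip(sorted(options), variant) if on}
            for condition, code in fragments:
                if condition <= enabled and has_defect(code):
                    findings.append((frozenset(enabled), code))
        return findings

    def variability_aware():
        """Analyze each fragment once and report findings with their presence condition."""
        return [(frozenset(condition), code)
                for condition, code in fragments if has_defect(code)]

    print("brute force findings:", brute_force())
    print("variability-aware findings:", variability_aware())
    # Both report the division-by-zero defect under condition {A, B}, but the brute
    # force approach analyzed every fragment in every variant, whereas the
    # variability-aware analysis visited each fragment exactly once.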
On top of variability-aware analysis, we develop a transformation engine for C, which respects variability induced by the preprocessor. The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes shortcomings (completeness, use of heuristics, and scalability issues) of existing engines, while still being semantics-preserving with respect to all variants and being fast, providing an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show in real-world case studies the effectiveness and scalability of our engine.
In the end, our analysis and transformation techniques show that configurable systems can efficiently be analyzed and transformed (even for large-scale systems), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors.
The World Wide Web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy of those days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not meet them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they cannot fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meeting the identified requirements.
We first present impersonation attack vectors that begin before the actual user authentication, i.e. when secure web interaction and authentication seem to be unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the used web application framework. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy.
Then, we explain how the authentication credentials that are transmitted for the user login, i.e. the password, and for session tracking, i.e. the session cookie, can be complemented by browser-stored and user-based secrets, respectively. This way, an attacker cannot hijack user accounts by phishing the user's password alone, because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated because a secret known only to the user must be provided in order to perform critical actions.
In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on both the client side and the server side in order to provide maximum protection. Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks.
The approaches described so far increase the security level of all web communication at all times. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication, including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she could be impersonated because all requests carry her identity proof. Uncritical communication, however, is released from restrictions to enable all intended web features.
Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: the first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc based on the observed web traffic, assuming that regular users only click on links and buttons and fill in forms but do not craft requests to protected resources.
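A minimal sketch of the kind of control-flow policy check the first approach performs (the policy format and page names are hypothetical, not the actual framework extension):

    # Hypothetical control-flow policy: for each page of the web application, the
    # set of requests a regular user can legitimately issue next (links, buttons, forms).
    POLICY = {
        "/cart":     {"/cart", "/checkout", "/catalog"},
        "/checkout": {"/checkout/confirm", "/cart"},
        "/catalog":  {"/catalog", "/cart"},
    }

    def conforms(previous_page: str, requested_page: str) -> bool:
        """Accept a request only if the policy allows it from the user's current page."""
        return requested_page in POLICY.get(previous_page, set())

    # A user on /catalog who jumps straight to /checkout/confirm bypasses the
    # intended control flow and would be rejected.
    print(conforms("/cart", "/checkout"))             # True: intended transition
    print(conforms("/catalog", "/checkout/confirm"))  # False: skipped the checkout step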
Top-k Semantic Caching
(2015)
The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Practical use cases are slow database connections of mobile devices and connections to databases that have been offshored.
A semantic cache is a query-based cache that stores query results and maintains their semantic description. It reuses partial matches of previous query results. Each query that is processed by the semantic cache is split into two disjoint parts: one that can be completely answered with tuples from the cache (probe query), and one that requires tuples to be transferred from the server (remainder query).
Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements -- like creation, difference, intersection, and union -- and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache.
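To make the probe/remainder split concrete, here is a toy sketch for one-dimensional range predicates (an illustration of the general semantic-caching idea, not the IQCache implementation; IQCache additionally handles ordering and limits):

    # Toy semantic cache over a single numeric attribute: a cached query result is
    # described by the interval of the attribute it covers. A new range query is
    # split into a probe query (answered from the cache) and remainder queries
    # (sent to the server).

    def split(query: tuple[float, float], cached: tuple[float, float]):
        """Split a query interval [qlo, qhi) against a cached interval [clo, chi).

        Returns (probe, remainders): the overlap answered from the cache (or None)
        and a list of uncovered pieces that must be fetched from the server."""
        qlo, qhi = query
        clo, chi = cached
        lo, hi = max(qlo, clo), min(qhi, chi)
        if lo >= hi:                       # no overlap: everything goes to the server
            return None, [query]
        remainders = []
        if qlo < lo:
            remainders.append((qlo, lo))
        if hi < qhi:
            remainders.append((hi, qhi))
        return (lo, hi), remainders

    # Cached: tuples with 10 <= price < 50.  New query: 30 <= price < 80.
    print(split((30, 80), (10, 50)))  # probe (30, 50) from the cache, remainder (50, 80) from the server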
In addition, we have developed a new algorithm that can estimate the lower bounds of query results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial query results of top-k queries. Thereby, query execution performance can be significantly increased.
We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well.