Visual navigation of hierarchically structured graphs is a technique for interactively exploring large graphs that possess an additional hierarchical structure. This structure is expressed in the form of a recursive clustering of the nodes: in call graphs of telephone networks, for instance, the nodes are identified with phone numbers; they are clustered recursively through the implicit structure of the numbers, e.g., nodes with the same area code belong to a cluster. In order to reduce the complexity and the size of the graph, only those subgraphs that are currently needed are shown in detail, while the others are collapsed, i.e., represented by meta nodes. In such a graph view the subgraphs in the areas of interest are expanded furthest, whereas those on the periphery are abstracted. As the areas of interest change over time, clusters in a view need to be expanded or contracted. First and foremost, this calls for an efficient data structure for the graph view maintenance problem. Depending on the admissible modifications of the graph and its hierarchical clustering, three variants have been discussed in the literature: in the static case, everything is fixed; in the dynamic graph variant, only edges of the graph can be inserted and deleted; finally, in the dynamic graph and tree variant, the graph is additionally subject to node insertions and deletions, and the clustering may change through splitting and merging of clusters. We introduce a new variant, dynamic leaves, which is based on the dynamic graph variant but additionally allows insertion and deletion of graph nodes, i.e., leaves of the hierarchy. So far, efficient data structures were known only for the static and the dynamic graph variants, i.e., neither the nodes of the graph nor the clustering could be modified. As this is unsatisfactory for an interactive editor of hierarchically structured graphs, we first generalize the approach of Buchsbaum et al. (Proc. 8th ESA, vol. 1879 of LNCS, pp. 120–131, 2000), in which graph view maintenance is formulated as a special case of range searching over tree cross products, to the new dynamic leaves variant. This generalization builds on a novel technique of superimposing a search tree over an ordered list maintenance structure. At the cost of an additional factor of roughly O(log n / log log n), this is the first data structure for graph view maintenance with a dynamic node set. Visualizing the expansions and contractions appropriately is the second challenge. We propose a local update scheme for the algorithm of Sugiyama and Misue (IEEE Trans. on Systems, Man, and Cybernetics 21 (1991) 876–892) for drawing compound digraphs. The layered drawings it produces have many applications, ranging from biochemical pathways to UML diagrams. By modifying the intermediate results of every step of the original algorithm locally, the update scheme is more efficient than re-applying the entire algorithm after each expansion or contraction. As our experimental results on randomly generated graphs show, the average time for updating the drawing is around 50% of the time for redrawing for dense graphs and below 20% for sparse graphs. Moreover, the performance gain does not come at the expense of quality: the area of the drawing increases only insignificantly, and the number of crossings is even reduced.
At the same time, the locality of the updates preserves the user's mental map of the graph: nodes that are not affected stay on the same level in the same relative order, expanded edges take the same course as the corresponding contracted edge, and expansion and contraction are visually inverse operations. Finally, our new data structure and the update scheme are combined into an interactive editor and viewer for compound (di-)graphs. A flexible and extensible software architecture is introduced that lays the ground for future research. It employs the well-known Model-View-Controller (MVC) paradigm to separate the abstract data from its presentation. As a consequence, the purely combinatorial parts, i.e., the compound (di-)graph and its views, are reusable without the editor front-end. A proof-of-concept implementation based on the proposed architecture demonstrates its feasibility and suitability.
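To make the view model concrete, here is a minimal sketch of a graph view over a hierarchically clustered graph. It recomputes the induced view edges naively from scratch, rather than with the efficient tree-cross-product data structure described above; the class names and the toy hierarchy are illustrative only.

```python
# Minimal sketch of a graph view over a hierarchically clustered graph.
# Naive set-based recomputation; NOT the efficient tree-cross-product
# structure of the thesis. All names are illustrative.

class ClusteredGraph:
    def __init__(self, edges, parent):
        self.edges = set(edges)    # edges between leaves, e.g. {("a1", "b1")}
        self.parent = parent       # child -> parent in the cluster tree

    def children(self, node):
        return [c for c, p in self.parent.items() if p == node]

    def leaves_below(self, node):
        """All leaves contained in cluster `node` (a leaf is below itself)."""
        kids = self.children(node)
        if not kids:
            return {node}
        return set().union(*(self.leaves_below(c) for c in kids))

class GraphView:
    """A view is a cut through the cluster tree: expanded clusters are
    replaced by their children, contracted ones appear as meta nodes."""

    def __init__(self, graph, root):
        self.graph = graph
        self.visible = {root}                  # start fully contracted

    def expand(self, cluster):
        kids = self.graph.children(cluster)
        if cluster in self.visible and kids:
            self.visible.remove(cluster)
            self.visible.update(kids)

    def contract(self, cluster):
        kids = set(self.graph.children(cluster))
        if kids and kids <= self.visible:      # visually inverse to expand
            self.visible -= kids
            self.visible.add(cluster)

    def view_edges(self):
        """(u, v) is a view edge iff some leaf edge connects a leaf
        below u with a leaf below v."""
        below = {v: self.graph.leaves_below(v) for v in self.visible}
        result = set()
        for a, b in self.graph.edges:
            u = next(v for v in self.visible if a in below[v])
            w = next(v for v in self.visible if b in below[v])
            if u != w:
                result.add((u, w))
        return result

parent = {"a": "root", "b": "root", "a1": "a", "a2": "a", "b1": "b"}
g = ClusteredGraph({("a1", "b1"), ("a2", "b1")}, parent)
view = GraphView(g, "root")
view.expand("root")
print(view.view_edges())   # {('a', 'b')}: two leaf edges collapse into one meta edge
```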
Refactoring is a well-known technique for improving various aspects of an object-oriented program. It has become very popular in recent years, as it makes it possible to overcome deficits present in many programs. Refactoring by hand is almost impossible due to the size and complexity of modern software systems. Automated tools support the application of refactorings but give no hints as to which refactorings to apply and why. The Snelting/Tip analysis is a program analysis that creates a refactoring proposal for a class hierarchy by analyzing how class members are used inside a program. KABA is an adaptation and extension of the Snelting/Tip analysis for Java. It has been implemented and expanded into a semantics-preserving, interactive refactoring system. Case studies of real-world programs show the usefulness of the system and its practical value.
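The flavour of the underlying member-usage analysis can be conveyed by a toy example (hypothetical data, not the KABA implementation): grouping the client variables of a class by the members they actually access hints at how the class interface could be split.

```python
# Toy illustration of the idea behind usage-based refactoring proposals:
# members accessed by exactly the same client variables can stay together,
# while fragments with unrelated clients suggest a class split.
# The access sets below are made up for illustration.

from collections import defaultdict

accesses = {            # client variable -> members of class C it uses
    "v1": {"draw", "resize"},
    "v2": {"draw"},
    "v3": {"serialize"},
}

needed_by = defaultdict(set)            # member -> variables that need it
for var, members in accesses.items():
    for m in members:
        needed_by[m].add(var)

fragments = defaultdict(set)            # same client set -> same fragment
for m, clients in needed_by.items():
    fragments[frozenset(clients)].add(m)

for clients, members in fragments.items():
    print(sorted(members), "used by", sorted(clients))
# ['draw'] used by ['v1', 'v2']
# ['resize'] used by ['v1']
# ['serialize'] used by ['v3']
```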
Scheduling methodologies for real-time applications have been of keen interest to diverse research communities for several decades. Depending on the application area, algorithms have been developed that are tailored to specific requirements with respect to both the individual components of which an application is made up and the computational platform on which it is to be executed. Many real-time scheduling algorithms base their decisions solely or partly on timing constraints expressed as deadlines, which must be met even under worst-case conditions. The increasing complexity of computing hardware means that worst-case execution time analysis becomes increasingly pessimistic. Scheduling hard real-time computations according to their worst-case execution times (which is common practice) will thus result, on average, in an increasing amount of spare capacity. The main goal of flexible real-time scheduling is to exploit this otherwise wasted capacity. Flexible scheduling schemes have been proposed to increase the ability of a real-time system to adapt to changing requirements and nondeterminism in the application behaviour. These models can be categorised as those whose source of flexibility is the quality of computations and those which are flexible regarding their timing constraints. This work describes a novel model that allows both flexible timing constraints and quality profiles to be specified for an application. Furthermore, it demonstrates the applicability of this specification method to real-world examples and suggests a set of feasible scheduling algorithms for the proposed problem class.
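To illustrate one way such spare capacity can be exploited, here is a sketch under a deliberately simplified task model (one-shot jobs, common release time; the job attributes and the greedy policy are assumptions for illustration, not the specification method of this work): mandatory parts are scheduled by earliest deadline first, and the remaining slack is spent on optional parts ordered by quality gain per unit of execution time.

```python
# Sketch: EDF for mandatory parts, greedy use of slack for optional parts.
# Hypothetical task model for illustration only.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    mandatory: int    # worst-case execution time of the mandatory part
    optional: int     # execution time of the optional part
    deadline: int
    gain: float       # quality gained if the optional part completes

def schedule(jobs, horizon):
    t, order = 0, []
    for j in sorted(jobs, key=lambda j: j.deadline):   # EDF, release time 0
        t += j.mandatory
        assert t <= j.deadline, f"{j.name} misses its deadline"
        order.append((j.name, "mandatory"))
    slack = horizon - t
    # Spend the otherwise wasted capacity, best gain density first.
    for j in sorted(jobs, key=lambda j: j.gain / j.optional, reverse=True):
        if j.optional <= slack:
            slack -= j.optional
            order.append((j.name, "optional"))
    return order

jobs = [Job("A", 2, 3, 4, 5.0), Job("B", 3, 2, 8, 4.0)]
print(schedule(jobs, horizon=8))
# [('A', 'mandatory'), ('B', 'mandatory'), ('B', 'optional')]
```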
This collection of texts on D. H. Lawrence (1885-1930) deals with the English writer's first journey abroad, which led the young and receptive teacher, already deeply influenced by German philosophy, to Bavaria and the Tyrol. Vividly depicted in his novel "Mr Noon", which remained unpublished during his lifetime, the stay in Germany and Bavaria in the years 1912 and 1913 and the people he met there were to shape the plots of Lawrence's main works. In Munich, Lawrence and his later German wife Frieda von Richthofen (1879-1956) were part of the so-called Schwabing Bohème. In these circles of artists, poets and social reformers, as well as of heroines of free love, anarchists and early fascists, the author received his ideas about sex and eroticism, which found expression in his famous novel "Lady Chatterley's Lover" of 1927/1928. Especially remarkable is the impact on Lawrence's work of the Austrian physician Otto Gross (1877-1920), a former lover of Frieda Lawrence, who tried to connect Friedrich Nietzsche's "Will to Power" with Sigmund Freud's psychoanalysis. The studies also follow Lawrence's tracks into the Tyrol, and his and Frieda's wandering across the Alps to Northern Italy (1912-1913), an adventure that provided the real setting of his novel "Women in Love" (1920) and is described in his essays "Twilight in Italy" (1916).
Clustered graphs are an enhanced graph model with a recursive clustering of the vertices according to a given nesting relation. This is a prime technique for expressing the coherence of certain parts of the graph and is used in many applications, such as biochemical pathways and UML class diagrams. Directed clustered graphs are usually visualized by level drawings, leading to clustered level graphs. In this thesis we analyze the interrelation of clusters and levels and their influence on edge crossings and cluster/edge crossings.
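As a small illustration of one of the quantities involved, the following helper counts the edge crossings between two consecutive levels for given left-to-right vertex orderings; it is an illustrative baseline, not the cluster-aware analysis developed in the thesis.

```python
# Crossing count between two consecutive levels of a level drawing:
# two edges cross iff the orders of their endpoints differ on the two levels.
# Positions and edges below are made up for illustration.

def crossings(edges, pos_upper, pos_lower):
    count = 0
    for i, (u1, v1) in enumerate(edges):
        for u2, v2 in edges[i + 1:]:
            if (pos_upper[u1] - pos_upper[u2]) * (pos_lower[v1] - pos_lower[v2]) < 0:
                count += 1
    return count

pos_upper = {"a": 0, "b": 1}           # ranks on the upper level
pos_lower = {"x": 0, "y": 1}           # ranks on the lower level
print(crossings([("a", "y"), ("b", "x")], pos_upper, pos_lower))   # 1
```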
A parallelising compilation consists of many translation and optimisation stages. The programmer may steer the compiler through these stages by supplying directives with the source code or by setting compiler switches. However, this approach is not well suited to evaluating the effects of individual stages, their selection and their best order. To solve this problem, we propose the following method. The compilation is cast as a sequence of program transformations. Each intermediate program runs on an Abstract Parallel Machine (APM), while the program generated by the final transformation runs on the target architecture. Our intermediate programs are all in the same language, Haskell. Thus, each program is executable and still abstract enough to be legible, which enables the evaluation of the transformation that generated it. This evaluation is supported by a cost model, which predicts the performance of the abstract program on a real machine. Our project, PolyAPM, provides an acyclic directed graph -- usually a tree -- of APMs whose traversal specifies different combinations and orders of transformations. From one source program, several target programs can be constructed, and their run-time characteristics can be evaluated and compared. The goal of PolyAPM is not to support the one-off construction of parallel application programs; for the method's overhead to pay off, the project aims instead at supporting the construction and comparison of many similar variations of a parallel program and the comparative evaluation of parallelisation techniques. With the automation of transformations, PolyAPM can also be used to construct semi-automatic compilation systems.
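The pipeline idea can be caricatured in a few lines (a hypothetical toy in which lists of strings stand in for the Haskell intermediate programs): compilation is a tree of transformations, and every root-to-leaf traversal yields one program variant whose cost could then be predicted and compared.

```python
# Toy caricature of the PolyAPM tree: each node applies a transformation,
# each root-to-leaf path produces one target-program variant.
# The transformations here only record their own application.

def tile(prog):       return prog + ["tile loops"]
def fuse(prog):       return prog + ["fuse loops"]
def distribute(prog): return prog + ["distribute data"]

# Node: (transformation, children). Branches explore alternative orders.
tree = (tile, [(fuse, []), (distribute, [(fuse, [])])])

def variants(node, prog):
    """Yield the program produced along every root-to-leaf path."""
    trafo, children = node
    prog = trafo(prog)
    if not children:
        yield prog
    for child in children:
        yield from variants(child, prog)

for v in variants(tree, ["source program"]):
    print(" -> ".join(v))
# source program -> tile loops -> fuse loops
# source program -> tile loops -> distribute data -> fuse loops
```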
Program slicing is a technique for identifying statements that may influence the computations in other statements. Despite almost 25 years of ongoing research, program slicing still has problems that prevent its widespread use: sometimes slices are too big to understand, and too expensive and complicated to compute for real-life programs. This thesis presents solutions to these problems: it contains various approaches that help the user understand a slice more easily by making it more focused on the user's problem. All of these approaches have been implemented in the VALSOFT system, and thorough evaluations of the proposed algorithms are presented. The underlying data structures used for slicing are program dependence graphs. They can also be used for different purposes: a new approach to clone detection based on identifying similar subgraphs in program dependence graphs is presented; it is able to detect modified clones better than other tools. In the theoretical part, this thesis presents a high-precision approach to slicing concurrent procedural programs, even though optimal slicing is known to be undecidable. It is the first approach to slicing concurrent programs that does not rely on inlining of called procedures.
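At its core, slicing on a program dependence graph is reverse reachability; the following minimal sketch shows that baseline (the dependence map is a made-up toy, and the interprocedural and concurrent extensions addressed in this thesis require considerably more machinery).

```python
# Backward slice on a program dependence graph: collect every statement
# from which the slicing criterion is reachable via dependence edges.
# `deps` maps a statement to the statements it depends on (data or control).

def backward_slice(deps, criterion):
    slice_, work = set(), [criterion]
    while work:
        s = work.pop()
        if s in slice_:
            continue
        slice_.add(s)
        work.extend(deps.get(s, ()))
    return slice_

# 1: x = input(); 2: y = 0; 3: if x > 0: 4: y = x; 5: print(y)
deps = {5: [2, 4], 4: [1, 3], 3: [1]}
print(sorted(backward_slice(deps, 5)))   # [1, 2, 3, 4, 5]
```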
In this work we present novel query evaluation techniques for data integration systems in different environments, ranging from a central data-warehouse approach via distributed virtual marketplaces to peer-to-peer (P2P) systems. Based on a new distributed evaluation technique, so-called HyperQueries, we present a reference architecture for distributed virtual marketplaces. HyperQueries enable us to construct query evaluation plans dynamically by referencing sub-plans in the Internet; furthermore, they structure the process of data integration. Subsequently, we investigate P2P data integration systems without central instances. We introduce so-called Super-Peers which structure a P2P network. Using this Super-Peer based network we "unroll" queries, which allows us to execute even user-defined operators near the data sources. Finally, we propose novel, efficient join algorithms for decision support queries in central data-warehouse systems. The proposed order-preserving hash joins and generalized hash teams are based on early sorting and early partitioning of the inputs and can speed up query evaluation by up to orders of magnitude.
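For reference, here is the classic in-memory hash join in build/probe form; the order-preserving hash joins and generalized hash teams proposed in this work refine this pattern through early sorting and early partitioning of the inputs. The relations and key functions below are illustrative.

```python
# Baseline hash join: hash the (smaller) build input, stream the probe input.

from collections import defaultdict

def hash_join(build, probe, build_key, probe_key):
    table = defaultdict(list)
    for row in build:                          # build phase
        table[build_key(row)].append(row)
    for row in probe:                          # probe phase
        for match in table.get(probe_key(row), ()):
            yield match, row

orders   = [(1, "order-a"), (2, "order-b")]
lineitem = [(1, "item-x"), (1, "item-y"), (2, "item-z")]
for o, l in hash_join(orders, lineitem, lambda r: r[0], lambda r: r[0]):
    print(o, l)   # one line per matching pair, e.g. (1, 'order-a') (1, 'item-x')
```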
In this dissertation we generalise the notion of level planar graphs in two directions: track planarity and radial planarity. Our main results are linear-time algorithms both for the planarity tests and for the computation of an embedding, and thus a drawing. Our algorithms use and generalise PQ-trees, a data structure for efficient planarity tests.
This work presents techniques for the construction of a global data integration system. Similar to distributed databases, this system allows declarative queries to express user-specific information needs. Scalability towards global data integration and openness were major design goals for the architecture and techniques developed in this work. It is shown how service composition, extensibility and quality of service can be supported in an open system of providers of data, of functionality for query processing operations, and of computing power.
Deduction-based software component retrieval is a software reuse technique that uses formal specifications as component descriptors and as search keys; matching components are identified using an automated theorem prover. This dissertation contains a detailed theoretical investigation of the concept as well as the first substantial experimental evaluation of its technical feasibility.
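The matching step can be sketched with an off-the-shelf prover (a toy with hypothetical one-variable specifications, not the component descriptors of the dissertation): a component matches a query if its precondition is no stronger than the query's, and its postcondition, under the query's precondition, entails the query's postcondition.

```python
# Specification matching via validity checks with the Z3 theorem prover.
# The specifications below are invented for illustration.

from z3 import Ints, Implies, And, Not, Solver, unsat

def valid(formula):
    s = Solver()
    s.add(Not(formula))        # valid iff the negation is unsatisfiable
    return s.check() == unsat

x, y = Ints("x y")

query_pre, query_post = x >= 1, y > 0          # the search key
comp_pre, comp_post = x >= 0, y == x + 1       # a candidate component

matches = valid(Implies(query_pre, comp_pre)) and \
          valid(Implies(And(query_pre, comp_post), query_post))
print(matches)   # True: the component can serve the query
```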
The well-founded semantics has been accepted as the most relevant semantics for logic-based information systems. In this dissertation, a framework based on a set of program transformations is presented that generalizes all major computation approaches for the well-founded semantics, using a common data structure and providing a common language to describe their evaluation strategies. This rewriting system gives the formal background to analyze and combine different evaluation strategies in a common framework, and to design new algorithms and prove the correctness of their implementations at a high level simply by changing the order of program transformations.
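As a point of reference, here is a compact implementation of one classic computation approach that such a framework covers, the alternating fixpoint of Van Gelder; it handles only toy ground programs, and the example rules are illustrative.

```python
# Well-founded semantics via the alternating fixpoint.
# A rule is (head, positive_body, negative_body) over ground atoms.

def gamma(rules, assumed_true):
    """Least model of the reduct w.r.t. `assumed_true`: a rule fires once its
    positive body is derived and no negated atom is in `assumed_true`."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and set(pos) <= derived \
                    and not set(neg) & assumed_true:
                derived.add(head)
                changed = True
    return derived

def well_founded(rules):
    true_atoms = set()
    while True:
        possible = gamma(rules, true_atoms)   # overestimate: not-false atoms
        new_true = gamma(rules, possible)     # underestimate: surely true
        if new_true == true_atoms:
            return true_atoms, possible
        true_atoms = new_true

# p :- not q.   q :- not p.   s.
rules = [("p", [], ["q"]), ("q", [], ["p"]), ("s", [], [])]
t, u = well_founded(rules)
print("true:", t, "undefined:", u - t)   # true: {'s'}  undefined: {'p', 'q'}
```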
One of the most important algorithms for real quantifier elimination is the quantifier elimination by virtual substitution introduced by Weispfenning in 1988. In this thesis we present numerous algorithmic approaches for optimizing this quantifier elimination algorithm. The optimization goals are the actual running time of the implementation and the size of the output formula. Strategies for reaching these goals include the simplification of first-order formulas, the reduction of the size of the computed elimination set, and condensing, a new replacement for the virtual substitution. Local quantifier elimination computes formulas that are equivalent to the input formula only near a given point; we can exploit this restriction to optimize the quantifier elimination by virtual substitution further. Finally, we discuss how to solve a large class of scheduling problems by real quantifier elimination. To optimize our algorithm for solving scheduling problems, we make use of the special form of the input formula and of additional information given by the description of the scheduling problem.
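For concreteness, a standard textbook instance of the method (not an example from the thesis): the existential quantifier in a conjunction of two linear constraints is eliminated by virtually substituting the test point a + ε obtained from the lower bound a, while the test point x = −∞ falsifies a < x and contributes nothing.

```latex
% Standard textbook instance of quantifier elimination by virtual
% substitution; the elimination set is {-\infty, a + \varepsilon}.
\exists x\,(a < x \wedge x < b)
  \;\Longleftrightarrow\; (a < x \wedge x < b)\bigl[x /\!/ a + \varepsilon\bigr]
  \;\Longleftrightarrow\; \mathit{true} \;\wedge\; a < b
  \;\Longleftrightarrow\; a < b
```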