FG Theoretische Informatik
An infinite sequence X is said to have trivial (prefix-free) initial segment complexity if the prefix-free Kolmogorov complexity of each initial segment of X is the same as the complexity of the sequence of 0s of the same length, up to a constant. We study the gap between the minimum complexity $K(0^{n})$ and the initial segment complexity of a nontrivial sequence, and in particular the nondecreasing unbounded functions f such that
$$K(X\restriction n)\;\leq\; K(0^{n})+f(n)+O(1) \qquad (\star)$$
for a nontrivial sequence X, where K denotes the prefix-free complexity. Our first result is that there exists a $\varDelta^{0}_{3}$ unbounded nondecreasing function f which does not have this property. It is known that such functions cannot be $\varDelta^{0}_{2}$; hence this is an optimal bound on their arithmetical complexity. Moreover, it improves the bound $\varDelta^{0}_{4}$ that was known from Csima and Montalbán (Proc. Amer. Math. Soc. 134(5):1499–1502, 2006).
Our second result is that if f is $\varDelta^{0}_{2}$, then there exists a non-empty $\varPi^{0}_{1}$ class of reals X with nontrivial prefix-free complexity which satisfy (⋆). This implies that in this case there are uncountably many nontrivial reals X satisfying (⋆) in various well-known classes from computability theory and algorithmic randomness, for example the low for Ω, the non-low for Ω, and the computably dominated reals. A special case of this result was independently obtained by Bienvenu, Merkle and Nies (STACS, pp. 452–463, 2011).
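For orientation, the triviality notion from the opening sentence can be written out formally; this is the standard rendering, assuming the usual notation $X\restriction n$ for the length-$n$ initial segment of $X$:
$$X \text{ is } K\text{-trivial} \iff \exists c\,\forall n\; K(X\restriction n)\leq K(0^{n})+c.$$
Condition (⋆) thus asks how slowly the constant $c$ may be allowed to grow, as a function $f(n)$, before nontrivial sequences start to satisfy the bound.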
The PCP theorem has recently been shown to hold in the real number model of Blum, Shub, and Smale as well (Baartse and Meer, 2015). The proof given there closely follows the structure of the proof of the original PCP theorem by Dinur (2007). In this paper we show that the theorem can also be derived using algebraic techniques similar to those employed by Arora et al. (Arora et al., 1998; Arora and Safra, 1998) in the first proof of the PCP theorem. This requires considerable additional effort. Due to severe problems when using low-degree algebraic polynomials over the reals as codewords for one of the verifiers to be constructed, we work with certain trigonometric polynomials. This entails the necessity to design new segmentation procedures in order to obtain well-structured real verifiers appropriate for applying the classical technique of verifier composition.
We believe that designing an algebraic proof of the real PCP theorem as well, on the one hand, leads to interesting questions in real number complexity theory and, on the other, sheds light on which ingredients are necessary to prove a result as important as the PCP theorem in different computational models.
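For reference, the real PCP theorem in question can be stated in the same form as the classical one; the following is a sketch of that statement, with $\mathrm{PCP}_{\mathbb{R}}$ denoting probabilistically checkable proofs in the Blum-Shub-Smale model:
$$\mathrm{NP}_{\mathbb{R}} = \mathrm{PCP}_{\mathbb{R}}(O(\log n),\, O(1)),$$
i.e., membership proofs for $\mathrm{NP}_{\mathbb{R}}$ problems can be verified by inspecting only constantly many (real) proof components, chosen using logarithmically many random bits.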
Degrees of d.c.e. Reals
(2004)
Das Simplexverfahren [The Simplex Method]
(2002)
Ellipsoidmethoden [Ellipsoid Methods]
(2001)
Innere-Punkte Methoden [Interior-Point Methods]
(2001)
Optimierung [Optimization]
(2003)
Optimization Theory
(2004)
On a refined analysis of some problems in interval arithmetic using real number complexity theory
(2004)
Fragestellungen aus Mathematik und theoretischer Informatik bei der Konstruktion von Getrieben [Questions from mathematics and theoretical computer science in the design of mechanisms]
(2008)
We propose the study of query languages for databases involving real numbers as data (called real number databases in the sequel). As its main new aspect, our approach is based on real number complexity theory as introduced in [8] and on descriptive complexity for the latter as developed in [17]. Using this formal framework, a uniform treatment of query languages for such databases is obtained. Precise results about both the data-complexity and the expression-complexity of several such query languages are proved. More explicitly, relying on descriptive complexity theory over ℝ makes it possible to derive a hierarchy of complete languages for most of the important real number complexity classes. A clear correspondence between different logics and such complexity classes is established. In particular, it is possible to formalize, in a uniform manner, queries involving real spaces of different dimensions. This can be done in such a way that the logical description exactly reflects the computational complexity of a query. The latter might circumvent a problem appearing in some of the former approaches dealing with semi-algebraic databases (see [20], [18]), where the use of first-order logic over real-closed fields can imply inefficiency as soon as the dimension of the underlying real space is not fixed, no matter whether the query under consideration is easy to compute or not.
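As a hypothetical illustration of the kind of query such languages capture (the relation name S and the formula are invented for this illustration, not taken from the paper): for a binary relation $S\subseteq\mathbb{R}^{2}$ storing points of the plane, the first-order formula over the reals
$$\varphi(x,y)\;\equiv\;\exists u\,\exists v\,\bigl(S(u,v)\wedge (x-u)^{2}+(y-v)^{2}<1\bigr)$$
defines the set of points at distance less than 1 from some database point; the result is again a semi-algebraic set, and the dimension 2 plays no special role in the formalism.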
Considering real number models of computation may be a helpful way to gain deeper insight into the classical theory over Z. It therefore seems useful to study the complexity of classical problems in the real model. In this connection, the problem of deciding the existence of a nonnegative zero for certain polynomials plays an important part, because many NP-problems over Z can be polynomially reduced to it.
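Formally, the decision problem alluded to can be sketched as follows (a plausible rendering inferred from the abstract, not a verbatim definition from the paper): given a polynomial $f\in\mathbb{R}[x_{1},\ldots,x_{n}]$, decide whether
$$\exists x\in\mathbb{R}^{n}\colon\; x_{1}\geq 0,\ldots,x_{n}\geq 0\ \text{ and }\ f(x)=0.$$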
The computational model of Blum, Shub, and Smale (1989, Bull. Amer. Math. Soc. 21, 1-46) yields numerous different algorithm classes if the allowed set of instructions is varied. For two such classes we consider the "P = NP?" problem as well as the question of whether the NP-problems are decidable.
When comparing complexity theory over the ring Z (Turing machine) on the one hand and over the ring R (Blum-Shub-Smale machine) on the other, it is important to study how methods and ideas from the first can be transferred to the second. In this sense the present paper is concerned with the relation between a characterization of the P = NP? question for the Z-case (given by Krentel) and a special class of quadratic programming problems which are important in the real model.
The complexity of linearly constrained (nonconvex) quadratic programming is analyzed within the framework of real number models, namely the one of Blum, Shub, and Smale and its modification recently introduced by Koiran (“weak BSS-model”). In particular we show that this problem is not NP-complete in the Koiran setting. Applications to the (full) BSS-model are discussed.
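In standard form, the problem under consideration reads as follows (a textbook formulation, assumed here to match the paper's setting):
$$\min_{x\in\mathbb{R}^{n}}\; x^{T}Qx + c^{T}x \quad\text{subject to}\quad Ax\leq b,$$
where the symmetric matrix $Q$ need not be positive semidefinite; this possible indefiniteness is what makes the problem nonconvex.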
Relations between discrete and continuous complexity models are considered. The present paper is devoted to combining both models. In particular, we analyze the 3-Satisfiability problem. The existence of fast decision procedures for this problem over the reals is examined, based on certain conditions in the discrete setting. Moreover, we study the behaviour of exponential time computations over the reals depending on the real complexity of 3-Satisfiability. This is done using tools from complexity theory over the integers.
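One standard way to view 3-Satisfiability as a decision problem over the reals (a common textbook encoding, given here for illustration) is to arithmetize the clauses: a clause such as $x_{i}\vee\neg x_{j}\vee x_{k}$ is satisfied by a 0/1-assignment exactly when
$$(1-x_{i})\,x_{j}\,(1-x_{k})=0,$$
and Booleanness is enforced by the equations $x_{\ell}(x_{\ell}-1)=0$; the formula is satisfiable iff the resulting polynomial system has a common real zero.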
This paper is devoted to the study of lower bounds on the inherent number of additions and subtractions necessary to solve some natural matrix computational tasks such as computing the nullspace, some band transformation, and some triangulation of a given m×m matrix. The additive complexities of such tasks are shown to grow asymptotically like that of m×m matrix multiplication. The paper is a continuation of an earlier paper by the authors, and also of [4], where multiplicative complexity has been considered. We also propose a formalization of semialgebraic computational tasks.
Shub (in "From Topology to Computation: Proceedings of the Smalefest" (M. Hirsch, J. Marsden, and M. Shub, Eds.), pp. 281-301, Springer-Verlag, New York/Berlin, 1993) proposes to attack the complex P_C versus NP_C problem by focussing on lower bounds on testing the resultant of quadratic forms for zero. Taking up this question, we show in the present paper a lower bound of order n^3 for the resultant.
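Recall the defining property of the resultant involved (a standard fact about the multivariate resultant): for quadratic forms $q_{1},\ldots,q_{n}$ in $n$ variables,
$$\mathrm{RES}(q_{1},\ldots,q_{n})=0 \iff \exists x\in\mathbb{C}^{n}\setminus\{0\}\colon\; q_{1}(x)=\cdots=q_{n}(x)=0,$$
so testing the resultant for zero amounts to deciding whether the forms have a common nontrivial zero.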
On the structure of NP_C
(1999)
In a recent work, Gandhi, Khoussainov, and Liu [7] introduced and studied a generalized model of finite automata able to work over arbitrary structures. As one relevant area of research for this model the authors identify studying such automata over particular structures such as real and algebraically closed fields.
In this paper we start investigations in this direction. We prove several structural results about the sets accepted by such automata, and analyze the decidability as well as the complexity of several classical questions about automata in the new framework. Our results show quite a diverse picture when compared to the well-known results for finite automata over finite alphabets.
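To convey the flavor of such machines, the following toy sketch simulates a finite automaton reading a word of real numbers over the structure (ℝ, +, ≤); it is illustrative only and does not reproduce the precise definitions of Gandhi, Khoussainov, and Liu:

# Toy "automaton over (R, +, <=)": finite control plus one real register;
# transition guards are sign tests, register updates use addition.
# Illustrative sketch only, not the exact model of [7].

def runs_nonnegative(word):
    """Accept iff every prefix sum of the real-valued input word is >= 0."""
    state = "ok"            # finite control: "ok" or "dead"
    register = 0.0          # single register over the reals
    for letter in word:     # letters are arbitrary real numbers
        register += letter          # update via the operation '+'
        if register < 0:            # guard via the relation '<='
            state = "dead"
    return state == "ok"

print(runs_nonnegative([1.5, -1.0, 2.25, -2.5]))  # True
print(runs_nonnegative([1.0, -2.0, 5.0]))         # False

The point of the example is that the accepted set, viewed as a subset of finite words over ℝ, is cut out by sign conditions on linear forms in the letters, which hints at the kind of structural results about accepted sets mentioned above.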
The kinematic synthesis of Stephenson mechanisms for motion generation is often based on specifying arbitrary values for certain kinematic dimensions. If no appropriate values are known, an inadequate choice can make it impossible to find a good mechanism.
In the present paper a synthesis method is described where kinematic dimensions can be chosen a priori (to meet side conditions, e.g., a limited area for the locations of the fixed pivots) but do not have to be chosen. A circle-point search in combination with homotopy methods will be applied. Concerning the homotopy methods, two approaches (via the Bézout number and via the BKK bound) will be compared.
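The two root-count bounds being compared can be summarized as follows (standard facts about polynomial homotopy continuation): for a square system $f_{1}=\cdots=f_{n}=0$ with $\deg f_{i}=d_{i}$, the Bézout number is
$$\prod_{i=1}^{n} d_{i},$$
while the BKK bound is the mixed volume of the Newton polytopes of $f_{1},\ldots,f_{n}$. The BKK bound is never larger, and for the sparse systems typical of kinematic synthesis it is often much smaller, reducing the number of homotopy paths that have to be tracked.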
Some aspects of studying an optimization or decision problem in different computational models
(2002)
In this paper we want to discuss some of the features coming up when analyzing a problem in different complexity theoretic frameworks. The focus will be on two problems. The first is related to mathematical optimization. We consider the quadratic programming problem of minimizing a quadratic polynomial on a polyhedron. We discuss how the complexity of this problem might change if we consider real data together with an algebraic model of computation (the Blum–Shub–Smale model) instead of rational inputs together with the Turing machine model. The results obtained will lead us to the second problem; it deals with the intrinsic structure of complexity classes in models over real- or algebraically closed fields. A classical theorem by Ladner for the Turing model is examined in these different frameworks. Both examples serve well for working out to what extent different approaches to the same problem might shed light upon each other. In some cases this leads to quite diverse results with respect to the different models. On the other hand, for some problems the more general approach can also give a unifying idea of why results hold true in several frameworks.
The paper is of tutorial character in that it collects some previously obtained results in the above direction.
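For context, the classical result referred to is Ladner's theorem, stated here in its standard Turing-model form: if $\mathrm{P}\neq\mathrm{NP}$, then there exists a problem in $\mathrm{NP}$ that is neither in $\mathrm{P}$ nor $\mathrm{NP}$-complete. The question examined is whether such intermediate problems also exist in the models over the reals and over algebraically closed fields.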
Real Computational Universality: The word problem for a class of groups with infinite presentation
(2007)
Real Computational Universality: The word problem for a class of groups with infinite presentation
(2009)
Modularity is a widely used quality measure for graph clusterings. Its exact maximization is prohibitively expensive for large graphs. Popular heuristics progressively merge clusters starting from singletons (coarsening), and optionally improve the resulting clustering by moving vertices between clusters (refinement). This paper experimentally compares existing and new heuristics of this type with respect to their effectiveness (achieved modularity) and runtime. For coarsening, it turns out that the most widely used criterion for merging clusters (modularity increase) is outperformed by other simple criteria, and that a recent multi-step algorithm is no improvement over simple single-step coarsening for these criteria. For refinement, a new multi-level algorithm produces significantly better clusterings than conventional single-level algorithms. A comparison with published benchmark results and algorithm implementations shows that combinations of coarsening and multi-level refinement are competitive with the best algorithms in the literature.
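For reference, the quality measure being optimized is defined as follows (the standard definition of modularity): for a graph with $m$ edges and a clustering $\mathcal{C}$,
$$Q=\sum_{C\in\mathcal{C}}\left(\frac{e_{C}}{m}-\left(\frac{\mathrm{vol}(C)}{2m}\right)^{2}\right),$$
where $e_{C}$ is the number of intra-cluster edges of $C$ and $\mathrm{vol}(C)$ the sum of the degrees of its vertices. The "modularity increase" merge criterion mentioned above greedily joins the pair of clusters $C,D$ maximizing
$$\Delta Q=\frac{e_{CD}}{m}-\frac{2\,\mathrm{vol}(C)\,\mathrm{vol}(D)}{(2m)^{2}},$$
with $e_{CD}$ the number of edges between $C$ and $D$.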