FG Theoretische Informatik
The PCP theorem has recently been shown to hold as well in the real number model of Blum, Shub, and Smale (Baartse and Meer, 2015). The proof given there closely follows the structure of Dinur's proof of the original PCP theorem (Dinur, 2007). In this paper we show that the theorem can also be derived using algebraic techniques similar to those employed by Arora et al. (Arora et al., 1998; Arora and Safra, 1998) in the first proof of the PCP theorem. This requires considerable additional effort. Because of severe problems that arise when low-degree algebraic polynomials over the reals are used as codewords for one of the verifiers to be constructed, we work with certain trigonometric polynomials instead. This makes it necessary to design new segmentation procedures in order to obtain well-structured real verifiers suitable for the classical technique of verifier composition.
We believe that designing an algebraic proof of the real PCP theorem as well leads to interesting questions in real number complexity theory on the one hand and, on the other, sheds light on which ingredients are necessary to prove a result as important as the PCP theorem in different computational models.
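The abstract does not spell out the codeword construction, but the distance property that makes low-degree (trigonometric) polynomials usable as codewords is easy to illustrate: a nonzero trigonometric polynomial of degree d has at most 2d zeros per period, so two distinct codewords disagree at almost every point. The following sketch, with invented dimensions and random coefficients, spot-checks this; it illustrates the general principle only, not the verifiers constructed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def trig_poly(a, b, x):
    """Evaluate p(x) = a[0] + sum_{k=1..d} (a[k] cos(kx) + b[k-1] sin(kx))."""
    d = len(b)
    k = np.arange(1, d + 1)[:, None]      # frequencies 1..d, as a column
    return a[0] + a[1:] @ np.cos(k * x) + b @ np.sin(k * x)

d = 5
a1, b1 = rng.normal(size=d + 1), rng.normal(size=d)   # codeword 1
a2, b2 = rng.normal(size=d + 1), rng.normal(size=d)   # codeword 2

# Two distinct degree-d trigonometric polynomials agree on at most 2d points
# per period, so a handful of random spot checks separates them almost surely.
x = rng.uniform(0.0, 2.0 * np.pi, size=20)
print(np.allclose(trig_poly(a1, b1, x), trig_poly(a2, b2, x)))   # False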
Mathematical Logic Quarterly
(2017)
Journal of Complexity
(2017)
Journal of Complexity
(2016)
Mathematical Logic Quarterly
(2016)
In a recent work, Gandhi, Khoussainov, and Liu [7] introduced and studied a generalized model of finite automata able to work over arbitrary structures. As one relevant area of research for this model, the authors identify the study of such automata over particular structures such as real and algebraically closed fields.
In this paper we begin investigations in this direction. We prove several structural results about the sets accepted by such automata, and analyze the decidability and complexity of several classical automata-theoretic questions in the new framework. Our results show quite a diverse picture compared to the well-known results for finite automata over finite alphabets.
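As a toy illustration of the flavor of such automata (an illustrative simplification only; the actual Gandhi-Khoussainov-Liu model additionally equips automata with registers updated through the structure's operations), one can guard the transitions of an ordinary finite automaton by atomic predicates over the ordered field of reals:

from typing import Callable

State = str
Guard = Callable[[float], bool]

# Transition table: for each state, a list of (guard, target) pairs tried
# in order. Guards are atomic predicates over the reals.
TRANSITIONS: dict[State, list[tuple[Guard, State]]] = {
    "q0": [(lambda r: r >= 0.0, "q0"), (lambda r: r < 0.0, "reject")],
}
ACCEPTING = {"q0"}

def run(word: list[float]) -> bool:
    """Accept exactly the words consisting of nonnegative reals."""
    state = "q0"
    for letter in word:
        for guard, target in TRANSITIONS.get(state, []):
            if guard(letter):
                state = target
                break
        else:
            return False   # no transition applies
    return state in ACCEPTING

print(run([0.5, 3.0, 2.25]))   # True
print(run([0.5, -1.0, 2.25]))  # False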
This book constitutes the refereed proceedings of the 10th Conference on Computability in Europe, CiE 2014, held in Budapest, Hungary, in June 2014. The 42 revised papers presented were carefully reviewed and selected from 78 submissions and are included in these proceedings together with 15 invited papers. The conference had six special sessions: computational linguistics, bio-inspired computation, history and philosophy of computing, computability theory, online algorithms, and complexity in automata theory.
An infinite sequence X is said to have trivial (prefix-free) initial segment complexity if the prefix-free Kolmogorov complexity of each initial segment of X is the same as the complexity of the sequence of 0s of the same length, up to a constant. We study the gap between the minimum complexity $K(0^n)$ and the initial segment complexity of a nontrivial sequence, and in particular the nondecreasing unbounded functions f such that
(⋆) $K(X\upharpoonright n) \leq K(0^n) + f(n) + O(1)$
for a nontrivial sequence X, where K denotes the prefix-free complexity. Our first result is that there exists a $\varDelta^{0}_{3}$ unbounded nondecreasing function f which does not have this property. It is known that such functions cannot be $\varDelta^{0}_{2}$, hence this is an optimal bound on their arithmetical complexity. Moreover, it improves the bound $\varDelta^{0}_{4}$ that was known from Csima and Montalbán (Proc. Amer. Math. Soc. 134(5):1499–1502, 2006).
Our second result is that if f is $\varDelta^{0}_{2}$ then there exists a non-empty $\varPi^{0}_{1}$ class of reals X with nontrivial prefix-free complexity which satisfy (⋆). This implies that in this case there are uncountably many nontrivial reals X satisfying (⋆) in various well-known classes from computability theory and algorithmic randomness, for example the low for Ω, non-low for Ω, and computably dominated reals. A special case of this result was independently obtained by Bienvenu, Merkle and Nies (STACS, pp. 452–463, 2011).
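K is uncomputable, so (⋆) cannot be tested directly; purely as an informal illustration of the gap being measured, the following sketch substitutes zlib's compressed length for K and compares initial segments of an arbitrarily chosen digit sequence with the all-zeros string. Neither the compressor nor the sequence comes from the paper.

import zlib

def c(s: bytes) -> int:
    """Compressed length in bytes: a crude, computable stand-in for the
    uncomputable prefix-free complexity K (illustration only)."""
    return len(zlib.compress(s, 9))

for n in (100, 1_000, 10_000):
    zeros = b"0" * n                                              # analogue of 0^n
    digits = "".join(str(i * i % 10) for i in range(n)).encode()  # some sequence X
    print(n, c(digits) - c(zeros))   # informal analogue of the gap in (⋆)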
Many-core processors combine fast on-chip communication with access to large amounts of shared memory. This makes it possible to exploit the benefits of distributed as well as shared memory programming models within single parallel algorithms. While large amounts of data can be shared in the memory and caches, coordinating the activities of hundreds of cores relies on cross-core communication mechanisms with ultra-low latency for very small messages. In this paper we discuss two communication protocols for the Intel SCC and compare them to the MPI implementation for the SCC. Our micro-benchmark results underline that special-purpose protocols for small messages make much finer levels of parallelism possible than general-purpose message passing systems.
Index Terms—many-core, message passing, shared memory
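The SCC-specific protocols are not reproduced here, but the core idea of a special-purpose small-message protocol, a payload slot published by a single polled flag word with no general-purpose buffering layer in between, can be sketched in ordinary shared memory. The slot layout and message below are invented for illustration; the real protocols use the SCC's on-die message passing buffers.

from multiprocessing import Process, shared_memory

SLOT = 64          # one cache-line-sized slot: 1 flag byte + payload
FLAG, PAYLOAD = 0, 1

def sender(name: str, msg: bytes) -> None:
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[PAYLOAD:PAYLOAD + len(msg)] = msg
    shm.buf[FLAG] = 1                      # publish: the flag write comes last
    shm.close()

def receiver(name: str, n: int) -> None:
    shm = shared_memory.SharedMemory(name=name)
    while shm.buf[FLAG] == 0:              # spin-wait on the flag, no syscalls
        pass
    print(bytes(shm.buf[PAYLOAD:PAYLOAD + n]))
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=SLOT)
    shm.buf[FLAG] = 0
    msg = b"hello"
    p = Process(target=receiver, args=(shm.name, len(msg)))
    p.start()
    sender(shm.name, msg)
    p.join()
    shm.close()
    shm.unlink()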
Today's multi-cores and future many-cores are NUMA architectures with complex cache hierarchies and multiple memory channels. Depending on the topology of these memory networks we find everything from true data sharing via shared caches to distributed memory architectures that merely pretend to be physically shared memory systems. In fact, most many-cores are hybrid systems that exhibit the characteristics of both distributed systems and SMPs. In this paper we argue in favor of middleware platforms for many-cores. We discuss the required functionality in contrast to common distributed system middleware and present micro-benchmarks on several architectures to substantiate our claims.
Modularity is a widely used quality measure for graph clusterings. Its exact maximization is prohibitively expensive for large graphs. Popular heuristics progressively merge clusters starting from singletons (coarsening), and optionally improve the resulting clustering by moving vertices between clusters (refinement). This paper experimentally compares existing and new heuristics of this type with respect to their effectiveness (achieved modularity) and runtime. For coarsening, it turns out that the most widely used criterion for merging clusters (modularity increase) is outperformed by other simple criteria, and that a recent multi-step algorithm is no improvement over simple single-step coarsening for these criteria. For refinement, a new multi-level algorithm produces significantly better clusterings than conventional single-level algorithms. A comparison with published benchmark results and algorithm implementations shows that combinations of coarsening and multi-level refinement are competitive with the best algorithms in the literature.
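As a point of reference for the coarsening step compared above, here is a deliberately compact sketch of singleton-based greedy coarsening with the modularity-increase merge criterion (the baseline criterion the paper finds outperformed). The example graph and the slow, readability-first implementation are for illustration only.

from itertools import combinations

def greedy_modularity(edges):
    m = len(edges)
    nodes = {v for e in edges for v in e}
    clusters = {v: {v} for v in nodes}                 # start from singletons
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1

    def delta_q(a, b):
        # Modularity gain of merging clusters a and b: e_ab/m - d_a*d_b/(2m^2),
        # where e_ab counts the edges running between a and b.
        e_ab = sum(1 for u, v in edges
                   if (u in clusters[a] and v in clusters[b])
                   or (u in clusters[b] and v in clusters[a]))
        d_a = sum(deg[v] for v in clusters[a])
        d_b = sum(deg[v] for v in clusters[b])
        return e_ab / m - d_a * d_b / (2 * m * m)

    while True:
        pairs = combinations(list(clusters), 2)
        best = max(pairs, key=lambda p: delta_q(*p), default=None)
        if best is None or delta_q(*best) <= 0:        # no improving merge left
            return list(clusters.values())
        a, b = best
        clusters[a] |= clusters.pop(b)                 # merge b into a

# Two triangles joined by a bridge: the expected result is the two triangles.
print(greedy_modularity([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]))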
Real Computational Universality: The word problem for a class of groups with infinite presentation
(2009)
Real Computational Universality: The word problem for a class of groups with infinite presentation
(2007)
On a refined analysis of some problems in interval arithmetic using real number complexity theory
(2004)
Degrees of d.c.e. Reals
(2004)
Optimization Theory
(2004)
The kinematic synthesis of Stephenson mechanisms for motion generation is often based on specifying arbitrary values for certain kinematic dimensions. If no appropriate values are known, an inadequate choice can make it impossible to find a good mechanism.
In the present paper a synthesis method is described where kinematic dimensions can be chosen a priori (to meet side conditions, e.g., a limited area for the locations of the fixed pivots) but do not have to be. A circlepoint search in combination with homotopy methods is applied. Concerning the homotopy methods, two approaches, via the Bézout number and via the BKK bound, are compared.
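For readers unfamiliar with the two root counts being compared: the Bézout number of a square polynomial system is just the product of the total degrees and bounds the number of solution paths a total-degree homotopy must track, while the BKK bound (the mixed volume of the Newton polytopes) is typically much smaller for sparse systems. A minimal sketch with placeholder degrees, not the actual Stephenson synthesis equations:

from math import prod

def bezout_number(degrees: list[int]) -> int:
    """Product of the total degrees d_1 * ... * d_n of a square system."""
    return prod(degrees)

# e.g. four quartic equations in four unknowns -> 256 homotopy paths to track
print(bezout_number([4, 4, 4, 4]))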
Some aspects of studying an optimization or decision problem in different computational models
(2002)
In this paper we want to discuss some of the features that come up when analyzing a problem in different complexity-theoretic frameworks. The focus is on two problems. The first is related to mathematical optimization: we consider the quadratic programming problem of minimizing a quadratic polynomial over a polyhedron. We discuss how the complexity of this problem might change if we consider real data together with an algebraic model of computation (the Blum–Shub–Smale model) instead of rational inputs together with the Turing machine model. The results obtained lead us to the second problem, which deals with the intrinsic structure of complexity classes in models over real or algebraically closed fields. A classical theorem of Ladner for the Turing model is examined in these different frameworks. Both examples illustrate well to what extent different approaches to the same problem can shed light on each other. In some cases this leads to quite different results in the different models; on the other hand, for some problems the more general approach can also give a unifying explanation of why results hold true in several frameworks.
The paper has a tutorial character in that it collects previously obtained results in this direction.
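For concreteness, a toy instance of the quadratic programming problem mentioned above, minimizing a quadratic polynomial over a polyhedron, can be stated and solved numerically. The problem data are invented, and scipy's SLSQP routine merely stands in for whatever algorithm one would actually analyze in the Turing or BSS model.

import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric matrix of the quadratic part
c = np.array([-1.0, -1.0])               # linear part
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])            # polyhedron: x1 + x2 <= 1, x >= 0

# minimize x^T Q x + c^T x subject to A x <= b  (ineq means fun(x) >= 0)
res = minimize(lambda x: x @ Q @ x + c @ x,
               x0=np.zeros(2),
               method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}])
print(res.x, res.fun)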