Digitalisation
Finding the optimal join order (JO) is one of the most important problems in query optimisation, and has been extensively considered in research and practice. As it involves huge search spaces, approximation approaches and heuristics are commonly used, which explore a reduced solution space at the cost of solution quality. To explore even larger JO search spaces, we may consider special-purpose software, such as mixed-integer linear programming (MILP) solvers, which have successfully solved JO problems. However, even mature solvers cannot overcome the limitations of conventional hardware prompted by the end of Moore's law. We consider quantum-inspired digital annealing hardware, which takes inspiration from quantum processing units (QPUs). Unlike QPUs, which will likely remain limited in size and reliability in the near- and mid-term future, the digital annealer (DA) can solve large instances of mathematically encoded optimisation problems today. We derive a novel, native encoding for the JO problem tailored to this class of machines that substantially improves over known MILP and quantum-based encodings, and reduces encoding size over the state of the art. By augmenting the computation with a novel readout method, we derive valid join orders for each solution obtained by the (probabilistically operating) DA. Most importantly, and despite an extremely large solution space, our approach scales to practically relevant dimensions of around 50 relations and improves result quality over conventionally employed approaches, adding a novel alternative to solving the long-standing JO problem.
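The paper's native DA encoding is not reproduced here, but the general pattern of preparing a problem for annealing hardware can be sketched: decisions become binary variables, constraints become quadratic penalty terms, and the machine minimises the resulting QUBO energy. The toy below (hypothetical pairwise weights standing in for real join cost estimates, brute force standing in for the annealer) illustrates only that pattern, not the paper's encoding.

```python
import itertools
import numpy as np

# Toy QUBO for ordering n relations in a left-deep join tree.
# x[i, t] = 1  <=>  relation i is joined at position t.
# NOTE: the pairwise "cost" matrix below is a made-up placeholder; the
# paper's native DA encoding models actual intermediate result sizes.

n = 3                       # number of relations
P = 10.0                    # penalty weight enforcing a valid permutation
cost = np.array([[0, 1, 4],
                 [1, 0, 2],
                 [4, 2, 0]], dtype=float)   # hypothetical pairwise weights

def energy(bits):
    x = np.asarray(bits).reshape(n, n)
    e = 0.0
    # One-hot penalties: each relation takes exactly one position and
    # each position holds exactly one relation.
    e += P * ((x.sum(axis=0) - 1.0) ** 2).sum()
    e += P * ((x.sum(axis=1) - 1.0) ** 2).sum()
    # Stand-in objective: weight between relations at adjacent positions.
    for t in range(n - 1):
        e += x[:, t] @ cost @ x[:, t + 1]
    return e

# Exhaustive search stands in for the annealer on this tiny instance.
best = min(itertools.product([0, 1], repeat=n * n), key=energy)
x = np.asarray(best).reshape(n, n)
order = [int(np.argmax(x[:, t])) for t in range(n)]
print("join order:", order, "energy:", energy(best))
```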
Inverse problems are inherently ill-posed and therefore require regularization techniques to achieve a stable solution. While traditional variational methods have well-established theoretical foundations, recent advances in machine-learning-based approaches have shown remarkable practical performance. However, the theoretical foundations of learning-based methods in the context of regularization are still underexplored. In this paper, we propose a general framework that addresses the current gap between learning-based methods and regularization strategies. In particular, our approach emphasizes the crucial role of data consistency in the solution of inverse problems and introduces the concept of data-proximal null-space networks as a key component for their solution. We provide a complete convergence analysis by extending the concept of regularizing null-space networks with data proximity in the visual part. We present numerical results for limited-view computed tomography to illustrate the validity of our framework.
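The data-proximal construction itself is not spelled out in the abstract, but the null-space architecture it extends has a compact form: the learned component only modifies the part of the reconstruction that the forward operator cannot see, so data consistency holds by construction. A minimal numpy sketch, with a fixed stand-in map in place of a trained network:

```python
import numpy as np

# Minimal sketch of a null-space network for A x = y.  Assumptions: A is a
# small dense matrix with full row rank, and "net" is a fixed stand-in for
# a trained network.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))            # underdetermined forward operator
A_pinv = np.linalg.pinv(A)
W = 0.1 * rng.standard_normal((8, 8))      # stand-in network weights

def net(z):
    # Placeholder for the learned component.
    return np.tanh(W @ z)

def null_space_network(y):
    x0 = A_pinv @ y                        # minimum-norm, data-consistent part
    P_null = np.eye(8) - A_pinv @ A        # projector onto ker(A)
    # The network only updates the null-space component, so A x = y is
    # preserved exactly (up to floating point).
    return x0 + P_null @ net(x0)

y = rng.standard_normal(5)
x = null_space_network(y)
print(np.allclose(A @ x, y))               # True: data consistency by design
```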
In a variety of tomographic applications, data cannot be fully acquired, leading to severely underdetermined image reconstruction. Conventional methods result in reconstructions with significant artifacts. In order to remove these artifacts, regularization methods have to be applied that incorporate additional information. An important example is TV reconstruction, which is well known to efficiently compensate for missing data and to reduce reconstruction artifacts. At the same time, however, tomographic data is also contaminated by noise, which poses an additional challenge. The use of a single regularizer within a variational regularization framework must therefore account for both the missing data and the noise. However, a single regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over different scales, in which case ℓ1 curvelet regularization methods work well. To address this issue, in this paper we introduce a novel variational regularization framework that combines the advantages of two different regularizers. The basic idea of our framework is to perform reconstruction in two stages, where the first stage mainly aims at accurate reconstruction in the presence of noise, and the second stage aims at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet-TV approach. We define and implement a curvelet transform adapted to the limited-view problem and demonstrate the advantages of our approach in a series of numerical experiments in this context.
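As a rough illustration of the two-stage structure (not the paper's curvelet-TV implementation), the sketch below runs two proximal-gradient stages with stand-in regularizers and couples them by fitting the second stage against the data produced by the first, a simple form of the data proximity condition:

```python
import numpy as np

# Schematic two-stage reconstruction.  Assumptions: A is a generic linear
# forward operator; the two prox maps are crude stand-ins for the curvelet
# and TV regularizers used in the paper.

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad(y, A, prox, x0, step, iters=300):
    """Proximal gradient for  min_x 0.5 * ||A x - y||^2 + reg(x)."""
    x = x0.copy()
    for _ in range(iters):
        x = prox(x - step * A.T @ (A @ x - y))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))            # underdetermined operator
y = A @ rng.standard_normal(40) + 0.05 * rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe step size

# Stage 1: noise-oriented regularizer (sparsity stand-in for curvelets).
x1 = prox_grad(y, A, lambda z: soft_threshold(z, 0.01), np.zeros(40), step)

# Stage 2: artifact-oriented regularizer (crude smoothing stand-in for TV),
# fitted against A @ x1 instead of y -- the data proximity condition that
# couples the two stages.
smooth = lambda z: np.convolve(z, np.ones(3) / 3, mode="same")
x2 = prox_grad(A @ x1, A, smooth, x1, step)
```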
Mankind has always been confronted with limited knowledge about Nature, and many people have devoted their lives and intellects to the very question: how do we gain knowledge, confirm ideas and theories, and understand the world we live in? Religion was one fundamental basis for explanations and existential questions; philosophy and science were the other. Nowadays, science and technology dominate our daily lives, religion and metaphysics have lost their former dominance, and our thoughts and factual knowledge are increasingly governed by the digitised versions of conversation and discussion, filtered and guided by the algorithms of social networks. Today, "artificial intelligence" is a phenomenon in information technology which takes over everyday routines. It is a new ideal and a new promise, and at the same time it is used as a stratagem by political and economic systems to guide and manipulate our thinking, values, and behaviour. But nothing is so new that one could not find its precursors in earlier times. Science and mathematics have kindled many brilliant ideas. Can we describe the world by numbers, and find truth and explanation in mathematical structures? This essay is devoted to early ideas of the medieval philosopher Ramón Llull, the great rationalist G.W. Leibniz, and early approaches to combinations, number theory, and information processing. Llull's thinking machine and subsequent endeavours to find methods of mechanical reasoning and inference paved the way to modern-day concepts of artificial intelligence.
Finite state machines (FSMs) are an appealing mechanism for simple practical computations: They lend themselves to very efficient and deterministic implementation, are easy to understand, and allow for formally proving many properties of interest. Unfortunately, their computational power is deemed insufficient for many tasks, and their usefulness has been further hampered by the state space explosion problem and other issues when naïvely trying to scale them to sizes large enough for many real-life applications.
This paper expounds on the theory and implementation of multiple coupled finite state machines (McFSMs), a novel mechanism that combines the benefits of FSMs with near Turing-complete, practical computing power, and that was designed from the ground up to support static analysis and reasoning. We develop an elaborate category-theoretical foundation based on non-deterministic Mealy machines, which gives a suitable algebraic description for novel ways of blending different computing models. Our experience is based on a domain-specific language and an integrated development environment that can compile McFSM models to multiple target languages, applying it to use cases based on industrial scenarios. We discuss the properties and advantages of McFSMs, and explain how the mechanism can interact with real-world systems and existing code without sacrificing provability, determinism or performance.
We discuss how McFSMs can be used to replace and improve on commonly employed programming patterns, and show how their efficient handling of large state spaces enables them to be used as core building blocks for distributed, safety-critical, and real-time systems of industrial complexity, which contributes to the long-desired goal of providing executable specifications.
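The McFSM formalism and DSL are considerably richer than anything that fits here, but the underlying intuition of coupling Mealy machines can be sketched in a few lines: each machine maps (state, input) pairs to (state, output) pairs, and coupling feeds one machine's output symbols into another machine's input alphabet. A minimal deterministic Python sketch, with made-up states and symbols:

```python
from dataclasses import dataclass

# Intuition only: two deterministic Mealy machines, hard-wired so that the
# first machine's outputs drive the second machine's inputs.  The McFSMs of
# the paper are far richer (non-deterministic machines, category-theoretical
# composition, a dedicated DSL); names and transitions here are made up.

@dataclass
class Mealy:
    state: str
    delta: dict      # (state, input symbol) -> (next state, output symbol)

    def step(self, symbol):
        self.state, out = self.delta[(self.state, symbol)]
        return out

producer = Mealy("idle", {
    ("idle", "tick"): ("busy", "start"),
    ("busy", "tick"): ("idle", "done"),
})
consumer = Mealy("waiting", {
    ("waiting", "start"): ("running", "ack"),
    ("running", "done"): ("waiting", "ack"),
})

# Coupling: the joint system walks a product state space, but only along
# transitions permitted by both machines.
for _ in range(4):
    signal = producer.step("tick")
    print(signal, "->", consumer.step(signal))
```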
We present a paradigm for characterization of artifacts in limited data tomography problems. In particular, we use this paradigm to characterize artifacts that are generated in reconstructions from limited angle data with generalized Radon transforms and general filtered backprojection type operators. In order to find when visible singularities are imaged, we calculate the symbol of our reconstruction operator as a pseudodifferential operator.
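For the classical parallel-beam setting, the visibility principle that such symbol computations make precise can be stated as follows (a standard formulation from the microlocal analysis of limited angle CT, not quoted from the paper):

```latex
% Visibility of singularities in limited angle (parallel-beam) CT:
% a singularity of f at (x, \xi) is stably recovered if and only if the
% line through x with normal direction \xi is among the measured lines.
\[
  (x,\xi) \in \operatorname{WF}(f) \ \text{is visible}
  \iff
  \xi \parallel (\cos\varphi, \sin\varphi)
  \ \text{for some measured angle } \varphi .
\]
```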
This article provides a mathematical classification of artifacts from arbitrary incomplete X-ray tomography data when using the classical filtered backprojection algorithm. Using microlocal analysis, we prove that all artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of the data set boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. The object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend all along lines. This is a generalization of the streak artifacts observed in limited angle CT. The article also characterizes two new phenomena: the object-independent artifacts are caused only by the geometry of the data set boundary; they occur along lines if the boundary of the data set is not smooth and along curves if the boundary of the data set is smooth. In addition to the geometric description of artifacts, the article also provides characterizations of their strength in the Sobolev scale in certain cases. Moreover, numerical reconstructions from simulated and real data are presented illustrating our theorems. This work is motivated by a reconstruction we present from a synchrotron data set in which artifacts along lines appeared that were independent of the object. The results of this article apply to a wide range of well-known incomplete data problems, including limited angle CT and region of interest tomography, as well as to unconventional X-ray CT imaging setups. Some of those problems are explicitly addressed in this article, theoretically and numerically.
On embedded processors that are increasingly equipped with multiple CPU cores, static hardware partitioning is an established means of consolidating and isolating workloads onto single chips. This architectural pattern is suitable for mixed-criticality workloads that need to satisfy both real-time and safety requirements, given suitable hardware properties. In this work, we focus on exploiting contemporary virtualisation mechanisms to achieve freedom from interference, that is, isolation between workloads. Possibilities to achieve temporal and spatial isolation while maintaining real-time capabilities include statically partitioning resources, avoiding the sharing of devices, and ascertaining zero interventions of superordinate control structures. This eliminates overhead due to hardware partitioning, but implies certain hardware capabilities that are not yet fully implemented in contemporary standard systems. To address such hardware limitations, the customisable and configurable RISC-V instruction set architecture offers the possibility of swift, unrestricted modifications. We present findings on the current RISC-V specification and its implementations that necessitate interventions of superordinate control structures. We identify numerous issues adverse to our goal of achieving zero interventions and zero overhead, both at the design level and especially with regard to interrupt handling. Based on micro-benchmark measurements, we discuss the implications of our findings, and argue how they can provide a basis for future extensions and improvements of the RISC-V architecture.
The characteristic feature of inverse problems is their instability with respect to data perturbations. In order to stabilize the inversion process, regularization methods have to be developed and applied. In this work we introduce and analyze the concept of the filtered diagonal frame decomposition, which extends the standard filtered singular value decomposition to the frame case. Frames, as generalized singular systems, allow better adaptation to a given class of potential solutions. In this paper, we show that the filtered diagonal frame decomposition yields a convergent regularization method. Moreover, we derive convergence rates under source-type conditions and prove order optimality under the assumption that the considered frame is a Riesz basis.
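One common way to write the filtered diagonal frame decomposition (notation may differ from the paper): given frames that diagonalise the forward operator and filters that dampen small quasi-singular values, the regularized reconstruction takes the same form as filtered SVD regularization:

```latex
% Filtered diagonal frame decomposition (one common formulation; the
% paper's notation may differ).  Suppose (u_\lambda, v_\lambda,
% \kappa_\lambda) diagonalises the forward operator A in the sense
%   A^* v_\lambda = \kappa_\lambda u_\lambda ,
% and let g_\alpha be filter functions with g_\alpha(\kappa) \approx
% 1/\kappa away from zero.  The regularized reconstruction from data y is
\[
  x_\alpha
  = \sum_{\lambda \in \Lambda}
      g_\alpha(\kappa_\lambda)\,
      \langle y, v_\lambda \rangle\,
      \bar{u}_\lambda ,
\]
% where (\bar{u}_\lambda) is a dual frame of (u_\lambda).  With the
% singular value decomposition in place of the frames, this reduces to
% classical filtered SVD regularization.
```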
The main issues in many image processing applications are object recognition and object detection, which answer the questions of whether an object is present and, if it is, where it is located. Popular object detection algorithms like YOLO use a regression formulation for the whole problem, in particular for the bounding box parameters. In production industry the setting is usually different: one typically knows the object type and rather wants to know with high precision where the object is. We study a prototype application in this area in which we identify the rotation of an object in a plane. To solve this problem, we use a regression approach with a CNN architecture as a function approximator. We compare our results to standard image processing algorithms, which do not use neural networks, and present quantitative results on the accuracy. CNNs appear to be at least competitive with classical image processing.
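The paper's architecture and parameterization are not given in the abstract; a common way to set up such a rotation regression is to predict the point (cos t, sin t) on the unit circle rather than the raw angle, which avoids the wrap-around discontinuity at ±π. A minimal PyTorch sketch under that assumption:

```python
import torch
import torch.nn as nn

# Minimal PyTorch sketch of rotation estimation as regression; the paper's
# actual architecture and parameterization are not reproduced here.
# Predicting (cos t, sin t) instead of the raw angle t avoids the
# wrap-around discontinuity at +/- pi.

class RotationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)                 # -> (cos t, sin t)

    def forward(self, x):
        out = self.head(self.features(x).flatten(1))
        return out / (out.norm(dim=1, keepdim=True) + 1e-8)  # unit circle

net = RotationNet()
imgs = torch.randn(8, 1, 64, 64)                     # dummy grey-level batch
angles = torch.rand(8) * 2 * torch.pi
target = torch.stack([angles.cos(), angles.sin()], dim=1)

loss = nn.functional.mse_loss(net(imgs), target)     # one training step
loss.backward()

pred = net(imgs)
pred_angle = torch.atan2(pred[:, 1], pred[:, 0])     # recover the angle
```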