Digitalisation
Refine
Year of publication
Document Type
- Conference Proceeding (Article) (35)
- Article (21)
- Conference Proceeding (Presentation, Abstract) (16)
- Preprint (4)
- Book (3)
- Moving Images (2)
- Part of a Book (1)
- Conference Talk (1)
- Doctoral Thesis (1)
Is part of the Bibliography
- no (84)
Keywords
- Computer Science (3)
- Linux (3)
- Quantum Physics (3)
- Benchmarks (2)
- Hardware (2)
- Kernel <Informatik> (2)
- Quantum Annealing (2)
- Quantum Computing (2)
- Software Architecture (2)
- Software Engineering (2)
Institute
- Labor für Digitalisierung (LFD) (84)
Review status
- peer-reviewed (54)
Finding the optimal join order (JO) is one of the most important problems in query optimisation, and has been extensively considered in research and practice. As it involves huge search spaces, approximation approaches and heuristics are commonly used, which explore a reduced solution space at the cost of solution quality. To explore even large JO search spaces, we may consider special-purpose software, such as mixed-integer linear programming (MILP) solvers, which have successfully solved JO problems. However, even mature solvers cannot overcome the limitations of conventional hardware prompted by the end of Moore’s law. We consider quantum-inspired digital annealing hardware, which takes inspiration from quantum processing units (QPUs). Unlike QPUs, which likely remain limited in size and reliability in the near and mid-term future, the digital annealer (DA) can solve large instances of mathematically encoded optimisation problems today. We derive a novel, native encoding for the JO problem tailored to this class of machines that substantially improves over known MILP and quantum-based encodings, and reduces encoding size over the state-of-the-art. By augmenting the computation with a novel readout method, we derive valid join orders for each solution obtained by the (probabilistically operating) DA. Most importantly and despite an extremely large solution space, our approach scales to practically relevant dimensions of around 50 relations and improves result quality over conventionally employed approaches, adding a novel alternative to solving the long-standing JO problem.
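The abstract does not reproduce the encoding itself; as a rough, generic illustration of the QUBO form that (digital) annealers consume, the following Python sketch expresses a hypothetical one-of-n choice with made-up costs as a quadratic matrix and solves it by exhaustive enumeration. All names and values are illustrative and unrelated to the paper's native JO encoding.

```python
# Generic illustration of the QUBO form x^T Q x consumed by (digital)
# annealers; NOT the paper's join-order encoding. A one-of-n choice with
# made-up costs is expressed via a quadratic penalty and solved by
# exhaustive enumeration.
import itertools
import numpy as np

costs = [3.0, 1.0, 4.0, 2.0]          # hypothetical per-option costs
n = len(costs)
P = 10.0                              # penalty weight for the constraint

Q = np.zeros((n, n))
for i, c in enumerate(costs):
    Q[i, i] += c

# Constraint "exactly one option selected", encoded as P * (sum_i x_i - 1)^2.
# Expanding and using x_i^2 = x_i gives -P on the diagonal and 2P for each
# pair of variables (the constant offset P does not affect the minimiser).
for i in range(n):
    Q[i, i] -= P
    for j in range(i + 1, n):
        Q[i, j] += 2 * P

def energy(x):
    x = np.asarray(x)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=n), key=energy)
print("best assignment:", best, "energy:", energy(best))  # picks the cheapest option
```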
Preface QDSM
(2023)
The first international workshop on Quantum Data Science and Management (QDSM), co-located with VLDB 2023, is centered on addressing the possibilities of quantum computing for data science and data management. Quantum computing is a relatively new and emerging field that is believed to have huge computational potential in the future. In the QDSM workshop, we want to provide a venue for discussing and publishing novel results of applying quantum computing to hard data science and data management problems. These problems include join order optimization, designing efficient quantum feature maps, studying possibilities of solving linear programs with quantum algorithms, and divergent index tuning with quantum machine learning. Besides, we include a short and visionary survey on quantum computing for databases. The workshop provides a platform for active discussion on these and related topics.
Recent advances in the manufacture of quantum computers attract much attention over a wide range of fields, as early-stage quantum processing units (QPU) have become accessible. While contemporary quantum machines are very limited in size and capabilities, mature QPUs are speculated to eventually excel at optimisation problems. This makes them an attractive technology for database problems, many of which are based on complex optimisation problems with large solution spaces. Yet, the use of quantum approaches on database problems remains largely unexplored. In this paper, we address the long-standing join ordering problem, one of the most extensively researched database problems. Rather than running arbitrary code, QPUs require specific mathematical problem encodings. An encoding for the join ordering problem was recently proposed, allowing first small-scale queries to be optimised on quantum hardware. However, it is based on a faithful transformation of a mixed integer linear programming (MILP) formulation for JO, and inherits all limitations of the MILP method. Most strikingly, the existing encoding only considers a solution space with left-deep join trees, which tend to yield larger costs than general, bushy join trees. We propose a novel QUBO encoding for the join ordering problem. Rather than transforming existing formulations, we construct a native encoding tailored to quantum systems, which allows us to process general bushy join trees. This makes the full potential of QPUs available for solving join order optimisation problems.
The use of quantum processing units (QPUs) promises speed-ups for solving computational problems. Yet, current devices are limited by the number of qubits and suffer from significant imperfections, which prevents achieving quantum advantage. To step towards practical utility, one approach is to apply hardware-software co-design methods. This can involve tailoring problem formulations and algorithms to the quantum execution environment, but also entails the possibility of adapting physical properties of the QPU to specific applications. In this work, we follow the latter path, and investigate how key figures (circuit depth and gate count) required to solve four cornerstone NP-complete problems vary with tailored hardware properties. Our results reveal that achieving near-optimal performance and properties does not necessarily require optimal quantum hardware, but can be satisfied with much simpler structures that can potentially be realised for many hardware approaches. Using statistical analysis techniques, we additionally identify an underlying general model that applies to all subject problems. This suggests that our results may be universally applicable to other algorithms and problem domains, and tailored QPUs can find utility outside their initially envisaged problem domains. The substantial possible improvements nonetheless highlight the importance of QPU tailoring to progress towards practical deployment and scalability of quantum software.
Quantum computing is a relatively new paradigm that has raised considerable interest in physics and computer science in general but has so far received little attention in software engineering and architecture. Hybrid applications that consist of both quantum and classical components require the development of appropriate quantum software architectures. However, given that quantum software engineering (QSE) in general is a new research area, quantum software architecture, a sub-research area of QSE, is also understudied. The goal of this chapter is to provide a list of research challenges and opportunities for such architectures. In addition, to make the content understandable to a broader computer science audience, we provide a brief overview of quantum computing and explain the essential technical foundations.
Quantum computers promise considerable speedups over classical approaches, which has raised interest from many disciplines. Since currently available implementations suffer from noise and imperfections, achieving concrete speedups for meaningful problem sizes remains a major challenge. Yet, imperfections and noise may remain present in quantum computing for a long while. Such limitations play no role in classical computing, and software engineers are typically not well accustomed to considering such imperfections, although they substantially influence core properties of software and systems. In this paper, we show how to model imperfections with an approach tailored to (quantum) software engineers. We intuitively illustrate, using numerical simulations, how imperfections influence core properties of quantum algorithms on NISQ systems, and show possible options for tailoring future NISQ machines to improve system performance in a co-design approach. Our results are obtained from a software framework that we provide in the form of an easy-to-use reproduction package. It does not require computer scientists to acquire deep physical knowledge on noise, yet provides tangible and intuitively accessible means of interpreting the influence of noise on common software quality and performance indicators.
We present an industrial end-user perspective on the current state of quantum computing hardware for one specific technological approach, the neutral atom platform. Our aim is to assist developers in understanding the impact of the specific properties of these devices on the effectiveness of algorithm execution. Based on discussions with different vendors and recent literature, we discuss the performance data of the neutral atom platform. Specifically, we focus on the physical qubit architecture, which affects state preparation, qubit-to-qubit connectivity, gate fidelities, native gate instruction set, and individual qubit stability. These factors determine both the quantum-part execution time and the end-to-end wall clock time relevant for end-users, but also the ability to perform fault-tolerant quantum computation in the future. We end with an overview of which applications have been shown to be well suited for the peculiar properties of neutral atom-based quantum computers.
Quantum Machine Learning: Foundation, New Techniques, and Opportunities for Database Research
(2023)
In the last few years, the field of quantum computing has experienced remarkable progress. The prototypes of quantum computers already exist and have been made available to users through cloud services (e.g., IBM Q experience, Google quantum AI, or Xanadu quantum cloud). While fault-tolerant and large-scale quantum computers are not available yet (and may not be for a long time, if ever), the potential of this new technology is undeniable. Quantum algorithms have the proven ability to either outperform classical approaches for several tasks, or are impossible to efficiently simulate by classical means under reasonable complexity-theoretic assumptions. Even imperfect current-day technology is speculated to exhibit computational advantages over classical systems. Recent research is using quantum computers to solve machine learning tasks. Meanwhile, the database community already successfully applied various machine learning algorithms for data management tasks, so combining the fields seems to be a promising endeavour. However, quantum machine learning is a new research field for most database researchers. In this tutorial, we provide a fundamental introduction to quantum computing and quantum machine learning and show the potential benefits and applications for database research. In addition, we demonstrate how to apply quantum machine learning to the join order optimization problem for databases.
Ascertaining reproducibility of scientific experiments is receiving increased attention across disciplines. We argue that the necessary skills are important beyond pure scientific utility, and that they should be taught as part of software engineering (SWE) education. They serve a dual purpose: Apart from acquiring the coveted badges assigned to reproducible research, reproducibility engineering is a lifetime skill for a professional industrial career in computer science.
SWE curricula seem an ideal fit for conveying such capabilities, yet they require some extensions, especially given that even at flagship conferences like ICSE, only slightly more than one-third of the technical papers (at the 2021 edition) receive recognition for artefact reusability. Knowledge and capabilities in setting up engineering environments that allow for reproducing artefacts and results over decades (a standard requirement in many traditional engineering disciplines), writing semi-literate commit messages that document crucial steps of a decision-making process and that are tightly coupled with code, or sustainably taming dynamic, quickly changing software dependencies, to name a few: They all contribute to solving the scientific reproducibility crisis, and enable software engineers to build sustainable, long-term maintainable, software-intensive, industrial systems. We propose to teach these skills at the undergraduate level, on par with traditional SWE topics.
Finite state machines (FSMs) are an appealing mechanism for simple practical computations: They lend themselves to very efficient and deterministic implementation, are easy to understand, and allow for formally proving many properties of interest. Unfortunately, their computational power is deemed insufficient for many tasks, and their usefulness has been further hampered by the state space explosion problem and other issues when naïvely trying to scale them to sizes large enough for many real-life applications.
This paper expounds on theory and implementation of multiple coupled finite state machines (McFSMs), a novel mechanism that combines benefits of FSMs with near Turing-complete, practical computing power, and that was designed from the ground up to support static analysis and reasoning. We develop an elaborate category-theoretical foundation based on non-deterministic Mealy machines, which gives a suitable algebraic description for novel ways of blending different computing models. Our experience is based on a domain-specific language and an integrated development environment that can compile McFSM models to multiple target languages, applying it to use cases based on industrial scenarios. We discuss properties and advantages of McFSMs, explain how the mechanism can interact with real-world systems and existing code without sacrificing provability, determinism or performance.
We discuss how McFSMs can be used to replace and improve on commonly employed programming patterns, and show how their efficient handling of large state spaces enables them to be used as core building blocks for distributed, safety-critical, and real-time systems of industrial complexity, which contributes to the long-desired goal of providing executable specifications.
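The McFSM formalism and its DSL are not spelled out in the abstract; purely as an intuition aid, the following Python sketch implements a plain Mealy machine and couples two instances by feeding one machine's outputs into the other. The states, symbols and behaviour are invented for illustration and do not reflect the actual McFSM model.

```python
# Minimal sketch of a Mealy machine and of coupling two machines by
# feeding the output of one into the input of the other. Illustration
# only; it does not reproduce the McFSM formalism or its DSL.
class Mealy:
    def __init__(self, transitions, start):
        # transitions: {(state, input_symbol): (next_state, output_symbol)}
        self.transitions = transitions
        self.state = start

    def step(self, symbol):
        self.state, out = self.transitions[(self.state, symbol)]
        return out

# Machine A toggles between two states and labels incoming ticks.
A = Mealy({("s0", "tick"): ("s1", "odd"),
           ("s1", "tick"): ("s0", "even")}, start="s0")

# Machine B reacts to A's outputs, emitting an action only on "even".
B = Mealy({("idle", "odd"): ("idle", "skip"),
           ("idle", "even"): ("idle", "emit")}, start="idle")

for _ in range(4):
    print(B.step(A.step("tick")))   # -> skip, emit, skip, emit
```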
Computer-based automation in industrial appliances led to a growing number of logically dependent, but physically separated embedded control units per appliance. Many of those components are safety-critical systems, and require adherence to safety standards, which is at odds with the relentless demand for features in those appliances. Features lead to a growing number of control units per appliance, and to an increasing complexity of the overall software stack, which is unfavourable for safety certifications. Modern CPUs provide means to revise traditional separation-of-concerns design primitives: the consolidation of systems, which yields new engineering challenges that concern the entire software and system stack.
Multi-core CPUs favour economic consolidation of formerly separated systems on a single, efficient hardware unit. Nonetheless, the system architecture must provide means to guarantee freedom from interference between domains of different criticality. System consolidation demands architectural and engineering strategies to fulfil requirements (e.g., real-time or certifiability criteria) in safety-critical environments.
In parallel, there is an ongoing trend to substitute ordinary proprietary base platform software components with mature OSS variants for economic and engineering reasons. There are fundamental differences in the processual properties of OSS and proprietary software development. Using OSS in safety-critical systems requires development process assessment techniques that build an evidence-based foundation for certification efforts upon empirical software engineering methods.
In this thesis, I approach the problem from both the software and the system engineering perspective. In the first part of this thesis, I focus on the assessment of OSS components: I develop software engineering techniques that allow quantifying characteristics of distributed OSS development processes. I show that ex-post analyses of software development processes can serve as a foundation for certification efforts, as required for safety-critical systems.
In the second part of this thesis, I present a system architecture based on OSS components that allows for consolidation of mixed-criticality systems on a single platform. To this end, I exploit virtualisation extensions of modern CPUs to strictly isolate domains of different criticality. The proposed architecture shall eradicate any remaining hypervisor activity in order to preserve real-time capabilities of the hardware by design, while guaranteeing strict isolation across domains.
The prospect of achieving computational speedups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for the known quantum algorithms. We present the first quantum implementation of join ordering, which is one of the most investigated and fundamental query optimisation problems, based on a reformulation as quadratic unconstrained binary optimisation (QUBO) problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speed-ups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that limits of early-stage technology are quickly reached.
Current QPUs are classified as noisy intermediate-scale quantum (NISQ) computers, and are restricted by a variety of limitations that reduce their capabilities as compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade between achievable solution quality and possible problem size.
In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs, and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs.
We evaluate the applicability of quantum computing on two fundamental query optimization problems, join order optimization and multi query optimization (MQO). We analyze the problem dimensions that can be solved on current gate-based quantum systems and quantum annealers, the two currently commercially available architectures.
First, we evaluate the use of gate-based systems on MQO, previously solved with quantum annealing. We show that, contrary to classical computing, a different architecture requires involved adaptations. We moreover propose a multi-step reformulation for join ordering problems to make them solvable on current quantum systems. Finally, we systematically evaluate our contributions for gate-based quantum systems and quantum annealers. Doing so, we identify the scope of current limitations, as well as the future potential of quantum computing technologies for database systems.
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Quantum key distribution is among the foremost applications of quantum mechanics, both in terms of fundamental physics and as a technology on the brink of commercial deployment. Starting from principal schemes and initial proofs of unconditional security for perfect systems, much effort has gone into providing secure schemes which can cope with numerous experimental imperfections unavoidable in real world implementations. In this paper, we provide a comparison of various schemes and protocols. We analyse their efficiency and performance when implemented with imperfect physical components. We consider how experimental faults are accounted for using effective parameters. We compare various recent protocols and provide guidelines as to which components offer the greatest advances when improved.
We experimentally analyze the complete photon number statistics of parametric down-conversion and ascertain the influence of multimode effects. Our results clearly reveal a difference between single-mode theoretical description and the measured distributions. Further investigations assure the applicability of loss-tolerant photon number reconstruction and prove strict photon number correlation between signal and idler modes.
Every security analysis of quantum-key distribution (QKD) relies on a faithful modeling of the employed quantum states. Many photon sources, such as for instance a parametric down-conversion (PDC) source, require a multimode description but are usually only considered in a single-mode representation. In general, the important claim in decoy-based QKD protocols for indistinguishability between signal and decoy states does not hold for all sources. We derive bounds on the single-photon transmission probability and error rate for multimode states and apply these bounds to the output state of a PDC source. We observe two opposing effects on the secure key rate. First, the multimode structure of the state gives rise to a new attack that decreases the key rate. Second, more contributing modes change the photon number distribution from a thermal toward a Poissonian distribution, which increases the key rate.
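The shift from a thermal towards a Poissonian photon-number distribution with a growing number of modes can be illustrated numerically: assuming, purely for illustration, K independent thermal modes of equal mean photon number, their total photon number follows a negative binomial distribution that approaches a Poissonian as K grows. The sketch below is not the paper's PDC model; all values are made up.

```python
# Numerical sketch (not the paper's PDC model): the total photon number of
# K independent thermal modes with equal mean nbar/K follows a negative
# binomial distribution and approaches a Poissonian as K grows.
from scipy.stats import nbinom, poisson

nbar = 4.0                      # total mean photon number (made-up value)
n = 4                           # photon number at which distributions are compared

for K in (1, 2, 10, 100):
    p = K / (K + nbar)          # success probability in scipy's parametrisation
    print(f"K = {K:3d}: P(n={n}) = {nbinom.pmf(n, K, p):.4f}")

print(f"Poisson limit: P(n={n}) = {poisson.pmf(n, nbar):.4f}")
```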
Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics [4-11]. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique [12-15]. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source and its verifiably unique randomness are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
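As a toy illustration only, the sketch below mimics the basic idea: homodyne measurements of the vacuum quadrature yield Gaussian-distributed outcomes, from which bits can be derived by simple binning. Real devices additionally require calibration, entropy estimation and randomness extraction, none of which is modelled here, and the simulated Gaussian samples merely stand in for the physical measurement.

```python
# Toy sketch: Gaussian samples stand in for vacuum quadrature outcomes of a
# homodyne detector; one bit per sample is derived from the sign. This omits
# calibration, entropy estimation and randomness extraction entirely.
import numpy as np

rng = np.random.default_rng()                       # stand-in for the physical noise source
quadratures = rng.normal(0.0, 1.0, size=10_000)     # simulated quadrature outcomes

bits = (quadratures > 0).astype(int)                # simplest possible binning
print("first 16 bits:", bits[:16])
print("bias:", abs(bits.mean() - 0.5))
```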
Anyone who wants to visualise their data with attractive and informative graphs usually needs a great deal of patience. The R extension ggplot2 brings a systematic approach to graphics, expresses itself in concise source code, and breathes fresh air into everyday data visualisation.
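For readers working in Python rather than R, the same grammar-of-graphics style can be sketched with plotnine, a Python port of ggplot2; the data frame and column names below are invented for illustration.

```python
# Grammar-of-graphics sketch in Python using plotnine, a port of ggplot2.
# The data frame and column names are invented for illustration.
import pandas as pd
from plotnine import ggplot, aes, geom_point, geom_smooth, labs

df = pd.DataFrame({
    "weight": [1.5, 2.0, 2.5, 3.0, 3.5, 4.0],
    "consumption": [5.1, 6.0, 6.8, 7.9, 9.2, 10.5],
})

plot = (
    ggplot(df, aes(x="weight", y="consumption"))
    + geom_point()                         # one layer per geometric object
    + geom_smooth(method="lm")             # add a linear trend layer
    + labs(x="Vehicle weight (t)", y="Fuel consumption (l/100 km)")
)
plot.save("consumption.png")               # or display the plot in a notebook
```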
The use of quantum processing units (QPUs) promises speed-ups for solving computational problems, but the quantum devices currently available possess only a very limited number of qubits and suffer from considerable imperfections. One possibility to progress towards practical utility is to use a co-design approach: Problem formulation and algorithm, but also the physical QPU properties are tailored to the specific application. Since QPUs will likely be used as accelerators for classical computers, details of systemic integration into existing architectures are another lever to influence and improve the practical utility of QPUs.
In this work, we investigate the influence of different parameters on the runtime of quantum programs on tailored hybrid CPU-QPU-systems. We study the influence of communication times between CPU and QPU, how adapting QPU designs influences quantum and overall execution performance, and how these factors interact. Using a simple model that allows for estimating which design choices should be subjected to optimisation for a given task, we provide an intuition to the HPC community on potentials and limitations of co-design approaches. We also discuss physical limitations for implementing the proposed changes on real quantum hardware devices.
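The abstract does not state the runtime model itself; the following sketch is a deliberately simple stand-in that illustrates how communication latency, shot count and per-shot execution time could enter the wall-clock time of a hybrid loop. All parameter values are made up.

```python
# Toy wall-clock model (not the paper's model): a variational-style hybrid
# loop where each iteration pays a CPU-QPU round trip plus the quantum
# execution time of all measurement shots and some classical processing.
def wall_clock(iterations, shots, t_roundtrip, t_shot, t_classical):
    """All times in seconds; every parameter value here is a made-up example."""
    per_iteration = t_roundtrip + shots * t_shot + t_classical
    return iterations * per_iteration

baseline = wall_clock(iterations=200, shots=1000,
                      t_roundtrip=0.05, t_shot=1e-4, t_classical=0.01)
faster_link = wall_clock(iterations=200, shots=1000,
                         t_roundtrip=0.005, t_shot=1e-4, t_classical=0.01)
print(f"baseline:    {baseline:.1f} s")
print(f"faster link: {faster_link:.1f} s")
```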
On embedded processors that are increasingly equipped with multiple CPU cores, static hardware partitioning is an established means of consolidating and isolating workloads onto single chips. This architectural pattern is suitable for mixed-criticality workloads that need to satisfy both real-time and safety requirements, given suitable hardware properties. In this work, we focus on exploiting contemporary virtualisation mechanisms to achieve freedom from interference, that is, isolation between workloads. Possibilities to achieve temporal and spatial isolation, while maintaining real-time capabilities, include statically partitioning resources, avoiding the sharing of devices, and ascertaining zero interventions of superordinate control structures. This eliminates overhead due to hardware partitioning, but implies certain hardware capabilities that are not yet fully implemented in contemporary standard systems. To address such hardware limitations, the customisable and configurable RISC-V instruction set architecture offers the possibility of swift, unrestricted modifications. We present findings on the current RISC-V specification and its implementations that necessitate interventions of superordinate control structures. We identify numerous issues that run counter to our goal of achieving zero interventions and thus zero overhead, on the design level and especially with regard to handling interrupts. Based on micro-benchmark measurements, we discuss the implications of our findings, and argue how they can provide a basis for future extensions and improvements of the RISC-V architecture.
We discuss the spectral structure and decomposition of multi-photon states. Ordinarily 'multi-photon states' and 'Fock states' are regarded as synonymous. However, when the spectral degrees of freedom are included this is not the case, and the class of 'multi-photon' states is much broader than the class of 'Fock' states. We discuss the criteria for a state to be considered a Fock state. We then address the decomposition of general multi-photon states into bases of orthogonal eigenmodes, building on existing multi-mode theory, and introduce an occupation number representation that provides an elegant description of such states. This representation allows us to work in bases imposed by experimental constraints, simplifying calculations in many situations. Finally we apply this technique to several example situations, which are highly relevant for state of the art experiments. These include Hong–Ou–Mandel interference, spectral filtering, finite bandwidth photo-detection, homodyne detection and the conditional preparation of Schrödinger kitten and Fock states. Our techniques allow for very simple descriptions of each of these examples.
Every open source project needs to decide on an open source license. This decision is of high economic relevance: Just which license is the best one to help the project grow and attract a community? The most common question is: Should the project choose a restrictive (reciprocal) license or a more permissive one? As an important step towards answering this question, this paper analyses actual license choice and correlated project growth from ten years of open source projects. It provides closed analytical models and finds that around 2001 a reversal in license choice occurred from restrictive towards permissive licenses.
Multi-core CPUs are a standard component in many modern embedded systems. Their virtualisation extensions enable the isolation of services, and gain popularity to implement mixed-criticality or otherwise split systems. We present Jailhouse, a Linux-based, OS-agnostic partitioning hypervisor that uses novel architectural approaches to combine Linux, a powerful general-purpose system, with strictly isolated special-purpose components. Our design goals favour simplicity over features, establish a minimal code base, and minimise hypervisor activity. Direct assignment of hardware to guests, together with a deferred initialisation scheme, offloads any complex hardware handling and bootstrapping issues from the hypervisor to the general purpose OS. The hypervisor establishes isolated domains that directly access physical resources without the need for emulation or paravirtualisation. This retains, with negligible system overhead, Linux's feature-richness in uncritical parts, while frugal safety and real-time critical workloads execute in isolated, safe domains.
Professional Linux Kernel Architecture offers an introduction to the architecture, concepts and algorithms of the Linux kernel, and serves as a guide to the kernel sources and the large number of connections among subsystems. It introduces the relevant structures and functions exported by the kernel to userland, covers the theoretical and conceptual aspects of the Linux kernel and Unix derivatives, and helps readers gain a deeper understanding of the kernel. Readers learn how to reduce the vast amount of information contained in the kernel sources and obtain the skills necessary to understand them.
Parametric down-conversion (PDC) is a technique of ubiquitous experimental significance in the production of nonclassical, photon-number-correlated twin beams. Standard theory of PDC as a two-mode squeezing process predicts and homodyne measurements observe a thermal photon number distribution per beam. Recent experiments have obtained conflicting distributions. In this article, we explain the observation by an a priori theoretical model solely based on directly accessible physical quantities. We compare our predictions with experimental data and find excellent agreement.
Measurement is the only part of a general quantum system that has yet to be characterised experimentally in a complete manner. Detector tomography provides a procedure for doing just this; an arbitrary measurement device can be fully characterised, and thus calibrated, in a systematic way without access to its components or its design. The result is a reconstructed POVM containing the measurement operators associated with each measurement outcome. We consider two detectors, a single-photon detector and a photon-number counter, and propose an easily realised experimental apparatus to perform detector tomography on them. We also present a method of visualising the resulting measurement operators.
Software evolution is a fundamental process that transcends the realm of technical artifacts and permeates the entire organizational structure of a software project. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. By applying a network-analytic approach, we found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which core developers are hierarchically arranged and peripheral developers are not. Our results suggest that the organizational structure of large projects is constrained to evolve towards a state that balances the costs and benefits of developer coordination, and the mechanisms used to achieve this state depend on the project’s scale.
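As a small illustration of the network-analytic viewpoint, and not of the study's actual mining pipeline, the following sketch derives a developer coordination network from hypothetical commit data (two developers are linked if they touched a common file) and inspects its degree distribution, which for scale-free networks is heavily right-skewed.

```python
# Illustrative sketch (not the study's pipeline): build a developer
# coordination network from file co-changes and inspect its degree
# distribution. All commit data below is hypothetical.
from collections import Counter
from itertools import combinations
import networkx as nx

commits = [
    {"author": "alice", "files": {"core.c", "util.c"}},
    {"author": "bob",   "files": {"core.c"}},
    {"author": "carol", "files": {"util.c", "docs.md"}},
    {"author": "dave",  "files": {"core.c", "docs.md"}},
]

# Record which developers touched which file.
touched = {}
for c in commits:
    for f in c["files"]:
        touched.setdefault(f, set()).add(c["author"])

# Two developers are linked if they modified at least one common file.
G = nx.Graph()
for devs in touched.values():
    G.add_edges_from(combinations(sorted(devs), 2))

print("degree distribution:", Counter(dict(G.degree()).values()))
```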
Modifications to open-source software (OSS) are often provided in the form of "patch stacks" -- sets of changes (patches) that modify a given body of source code. Maintaining patch stacks over extended periods of time is problematic when the underlying base project changes frequently. This necessitates a continuous and engineering-intensive adaptation of the stack. Nonetheless, long-term maintenance is an important problem for changes that are not integrated into projects, for instance when they are controversial or only of value to a limited group of users.
We present and implement a methodology to systematically examine the temporal evolution of patch stacks, track non-functional properties like integrability and maintainability, and estimate the eventual economic and engineering effort required to successfully develop and maintain patch stacks. Our results provide a basis for quantitative research on patch stacks, including statistical analyses and other methods that lead to actionable advice on the construction and long-term maintenance of custom extensions to OSS.
The partitioning hypervisor Jailhouse allows us to run safety-critical and uncritical applications in parallel on a single SoC. We present our experiences when porting an existing safety- and real-time-critical application as a Jailhouse guest. This shows a novel and promising approach for implementing mixed-criticality applications with real-time requirements while not losing the benefits of Linux. This is done by static partitioning of hardware resources; guests do not interfere. We present a multicopter platform running the real-time-critical flight stack in an isolated Jailhouse guest. This proves the practicability of Jailhouse as well as its suitability for real-time, safety-critical systems by porting an existing application to a Jailhouse cell. We stress its concept, point out current hardware limitations such as undesired behaviour, and present possible workarounds and solutions.
Effective software engineering demands a coordinated effort. Unfortunately, a comprehensive view on developer coordination is rarely available to support software-engineering decisions, despite the significant implications on software quality, software architecture, and developer productivity. We present a fine-grained, verifiable, and fully automated approach to capture a view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. Compared to previous work, our approach is fine-grained, and identifies statistically significant communities using order-statistics and a community-verification technique based on graph conductance. To demonstrate the scalability and generality of our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the authenticity of inferred community structure with respect to reality. Our results indicate that developers of open-source projects form statistically significant community structures and this particular view on collaboration largely coincides with developers' perceptions of real-world collaboration.
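The conductance criterion mentioned above can be illustrated on a hand-made graph: a candidate community is well separated when few edges cross its boundary relative to its internal volume. The sketch below uses invented developer names and is not the paper's verification procedure.

```python
# Sketch of the conductance measure used to judge whether a candidate
# developer community is well separated; illustrative only, with a
# hand-made graph instead of mined version-control data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),   # dense group
    ("dave", "erin"),                                         # second group
    ("carol", "dave"),                                        # single bridge edge
])

def conductance(graph, community):
    community = set(community)
    cut = sum(1 for u, v in graph.edges()
              if (u in community) != (v in community))
    vol_in = sum(d for n, d in graph.degree() if n in community)
    vol_out = sum(d for n, d in graph.degree() if n not in community)
    return cut / min(vol_in, vol_out)

# A low value indicates few edges leave the community relative to its volume.
print(conductance(G, {"alice", "bob", "carol"}))   # 1 cut edge / min(7, 3)
```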
Parametric downconversion (PDC) is a popular technique to produce twin beams of photons that are entangled in multiple degrees of freedom. The generated states form the basis for numerous applications that require entanglement. An exact quantification of this resource is therefore essential, for instance for quantum cryptography that relies on a complete knowledge of the correlation contained in the state. While the determination of an entanglement monotone for the PDC process is only possible analytically in special cases, an exact calculation must usually be performed numerically. Recent work by Mikhailova et al. [2] analyses a certain class of PDC states for which the concurrence entanglement measure can be obtained by an analytical approximation. In this contribution, we analyse the validity of the approximation by comparison with exact numerical methods.
Software development projects leave a large amount of data in repositories of Application Lifecycle Management (ALM) tools. These data contain detailed histories of their respective projects, their results and decisions made along the way. Analysis of such data helps uncover various interesting facts about projects, e.g. their socio-technical structures and the actual (vs. purported) roles of team members. Based on experience with tools supporting our research, we are convinced that it is feasible to consolidate data from different ALM tools, addressing a situation that is common in real-life projects. In this paper we report on our work towards a shared common data model and tool integration aimed at improved project analysis. We discuss how this can help in the identification of architects in the project organizational structures, their activity patterns and collaboration with other team roles.