Digitalisierung
In modern vehicles, system complexity and technical capabilities are constantly growing. As a result, manufacturers and regulators are both increasingly challenged to ensure the reliability, safety, and intended behavior of these systems. With current methodologies, it is difficult to address the various interactions between vehicle components and environmental factors. However, model-based engineering offers a solution by abstracting reality and enhancing communication among engineers and stakeholders. Applying this method requires a model format that is machine-processable, human-understandable, and mathematically sound. In addition, the model format needs to support probabilistic reasoning to account for incomplete data and knowledge about a problem domain. We propose structural causal models as a suitable framework for addressing these demands. In this article, we show how to combine data from different sources into an inferable causal model for an advanced driver-assistance system. We then consider the developed causal model for scenario-based testing to illustrate how a model-based approach can improve industrial system development processes. We conclude this paper by discussing the ongoing challenges to our approach and providing pointers for future work.
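To make the idea of a structural causal model more concrete, here is a minimal sketch in Python. It is not the model from the article; the variable names (friction, speed, braking_distance) and the structural equations are invented for illustration only. It shows the two ingredients the abstract refers to: structural equations with independent noise terms, and probabilistic reasoning via forward simulation of observational and interventional queries.

```python
# Minimal structural causal model (SCM) sketch: each variable is a
# function of its parents plus an independent noise term.
# All names and equations are illustrative, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

def sample(speed_intervention=None):
    # Exogenous noise terms
    u_friction = rng.uniform(0.4, 1.0, N)
    u_speed = rng.normal(0, 5, N)
    u_dist = rng.normal(0, 2, N)

    # Structural equations
    friction = u_friction
    speed = (80 + 20 * (friction > 0.7) + u_speed
             if speed_intervention is None
             else np.full(N, speed_intervention))   # do(speed = value)
    braking_distance = speed ** 2 / (250 * friction) + u_dist
    return friction, speed, braking_distance

# Observational query: P(braking_distance > 50)
_, _, d_obs = sample()
print("observational:", np.mean(d_obs > 50))

# Interventional query: P(braking_distance > 50 | do(speed = 100))
_, _, d_do = sample(speed_intervention=100)
print("interventional:", np.mean(d_do > 50))
```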
In the field of software engineering, graph-based models are used for a variety of applications. Usually, the layout of those graphs is determined at the discretion of the user. This article empirically investigates whether different layouts affect the comprehensibility or popularity of a graph and whether one can predict the perception of certain aspects in the graph using basic graphical laws from psychology (i.e., Gestalt principles). Data on three distinct layouts of one causal graph is collected from 29 subjects using eye tracking and a print questionnaire. The evaluation of the collected data suggests that the layout of a graph does matter and that the Gestalt principles are a valuable tool for assessing partial aspects of a layout.
Small and medium-sized enterprises (SMEs) also increasingly need effective information technology (IT) management to remain competitive. Compared to large companies, however, SMEs often lack the resources, the employer attractiveness, or the need to employ a full-time Chief Information Officer (CIO). To close this gap, a growing number of experts worldwide have begun to offer CIO services on a part-time basis. In this way, SMEs gain access to experienced and competent IT executives at a fraction of the cost and without long-term commitments. While these so-called "fractional CIOs" already create value in practice, there is as yet hardly any academic research on this new phenomenon. In a larger research project with a total of 62 fractional CIOs from 10 countries, a definition, types of engagement, and success factors were therefore derived. The present study summarizes these results and relates them to the German market by interviewing three fractional CIOs/CTOs from Germany. It shows that the following four engagement types of fractional CIOs are beneficial for SMEs in different situations: strategic IT management, restructuring, scaling, and hands-on support. Furthermore, the study shows that trust, the support of the top management team, and the integrity of the fractional CIO are key factors for the success of fractional CIO engagements. For the German market, the results are largely confirmed by the three interviewed fractional CIOs/CTOs. While they cannot give precise reasons for the low adoption of the role, they emphasize its value potential for the German market.
Aim of the study:
The aim of the study is to measure the state of digitalization in rehabilitation facilities and the opportunities and challenges associated with a connection to the telematics infrastructure.
Methods:
Semi-standardized online survey among operators of rehabilitation facilities in Bavaria (n=33). The questionnaire with 36 questions includes a slightly modified scale based on the "Electronic Medical Record Adoption Model (EMRAM)".
Results:
The degree of digitalization was reported as level 0 in 70 percent of the rehabilitation facilities (staged model up to level 7). The transmission of patient-related data (incoming and outgoing) is frequently analog, whereas processing within the facility is in many cases already predominantly digital. Regarding the connection to the telematics infrastructure, considerable effort is anticipated for the installation, but also for staff training and the adaptation of the work organization.
Conclusion:
Changes in the legal and financial situation in Germany open up new opportunities for increased digitalization in rehabilitation facilities. Obstacles are related to IT security requirements, staff training, and the likewise low level of digitalization among hospitals, physicians, and patients, which hampers digital data transmission.
Business process improvement (BPI) is of high priority for practitioners. Yet the most value-adding phase in a BPI project, the actual "act of improvement", is insufficiently supported despite the many existing methods and techniques. Until now, it has been largely unclear to what degree existing BPI techniques support each other and are interrelated. Thus, the purpose of this paper is to investigate the functional interdependencies between BPI techniques to gain a better understanding of the beneficial synergies between them and to provide a basis for purposefully combining them within projects. Based on the functional interdependencies, a graphical "Functional Interdependency Map" is developed and its usability is demonstrated in an experiment. The paper is valuable for academics and practitioners alike because the impact of BPI on organizational performance is high.
Computing a sample mean of time series under dynamic time warping is NP-hard. Consequently, there is an ongoing research effort to devise efficient heuristics. The majority of heuristics have been developed for the constrained sample mean problem that assumes a solution of predefined length. In contrast, research on the unconstrained sample mean problem is underdeveloped. In this article, we propose a generic average-compress (AC) algorithm to address the unconstrained problem. The algorithm alternates between averaging (A-step) and compression (C-step). The A-step takes an initial guess as input and returns an approximation of a sample mean. Then the C-step reduces the length of the approximate solution. The compressed approximation serves as initial guess of the A-step in the next iteration. The purpose of the C-step is to direct the algorithm to more promising solutions of shorter length. The proposed algorithm is generic in the sense that any averaging and any compression method can be used. Experimental results show that the AC algorithm substantially outperforms current state-of-the-art algorithms for time series averaging.
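The alternating structure of the AC algorithm can be sketched as follows. This is only a schematic illustration with deliberately crude stand-ins: a resample-and-average A-step and a pairwise-merging C-step. The article's generic formulation allows any DTW-based averaging heuristic and any compression method to be plugged in instead, and a real implementation would keep the guess with the smallest sum of DTW distances and stop once compression no longer improves it.

```python
import numpy as np

def a_step(series, init):
    """Crude stand-in for an averaging step: resample every series to the
    length of the current guess and take the pointwise mean. A real A-step
    would use a DTW-based mean heuristic instead."""
    m = len(init)
    resampled = [np.interp(np.linspace(0, 1, m),
                           np.linspace(0, 1, len(s)), s) for s in series]
    return np.mean(resampled, axis=0)

def c_step(mean):
    """Stand-in compression step: merge adjacent pairs of samples."""
    n = len(mean) - len(mean) % 2
    return mean[:n].reshape(-1, 2).mean(axis=1)

def average_compress(series, n_iter=3):
    guess = np.asarray(series[0], dtype=float)   # initial guess
    for _ in range(n_iter):
        guess = a_step(series, guess)            # A-step: approximate mean
        compressed = c_step(guess)               # C-step: shorten the solution
        if len(compressed) < 2:
            break
        guess = compressed                       # initial guess for next A-step
    return guess

data = [np.sin(np.linspace(0, 6, n)) for n in (40, 55, 70)]
print(average_compress(data))
```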
Design propositions for nudging in healthcare: Adoption of national electronic health record systems
(2023)
Objectives: Electronic health records (EHRs) are considered important for improving efficiency and reducing costs of a healthcare system. However, the adoption of EHR systems differs among countries and so does the way the decision to participate in EHRs is presented. Nudging is a concept that deals with influencing human behaviour within the research stream of behavioural economics. In this paper, we focus on the effects of the choice architecture on the decision for the adoption of national EHRs. Our study aims to link influences on human behaviour through nudging with the adoption of EHRs to investigate how choice architects can facilitate the adoption of national information systems.
Methods: We employ a qualitative explorative research design, namely the case study method. Using theoretical sampling, we selected four cases (i.e., countries) for our study: Estonia, Austria, the Netherlands, and Germany. We collected and analyzed data from various primary and secondary sources: ethnographic observation, interviews, scientific papers, homepages, press releases, newspaper articles, technical specifications, publications from governmental bodies, and formal studies.
Results: The findings from our European case studies show that designing for EHR adoption should encompass choice architecture elements (i.e., defaults), technical elements (i.e., choice granularity and access transparency), and institutional elements (i.e., regulations for data protection, information campaigns, and financial incentives) in combination.
Conclusions: Our findings provide insights on the design of the adoption environments of large-scale, national EHR systems. Future research could estimate the magnitude of effects of the determinants.
We present an industrial end-user perspective on the current state of quantum computing hardware for one specific technological approach, the neutral atom platform. Our aim is to assist developers in understanding the impact of the specific properties of these devices on the effectiveness of algorithm execution. Based on discussions with different vendors and recent literature, we discuss the performance data of the neutral atom platform. Specifically, we focus on the physical qubit architecture, which affects state preparation, qubit-to-qubit connectivity, gate fidelities, native gate instruction set, and individual qubit stability. These factors determine both the quantum-part execution time and the end-to-end wall clock time relevant for end-users, but also the ability to perform fault-tolerant quantum computation in the future. We end with an overview of which applications have been shown to be well suited for the peculiar properties of neutral atom-based quantum computers.
The transfer of knowledge from client to service provider poses major challenges in information systems (IS) offshoring projects. Knowledge transfer directly affects IS offshoring success. Therefore, the associated challenges must be overcome. Our study examines the determinants of success and failure of knowledge transfer in IS offshoring projects based on a ranking-type Delphi study. To seek a consensus, we questioned 32 experts from Germany, each with more than ten years of experience in near- or offshore initiatives. We identified 19 success and 20 failure determinants. These determinants are ranked in order of importance using best-worst scaling. Aspects of closer cooperation are critical for effective knowledge transfer. This includes regular collaboration, willingness to help and support, and mutual trust. In contrast, critical determinants of failure are concerned with fears and fluctuation of human resources. Hidden ambiguities or knowledge gaps, an unwillingness and inability to share knowledge, and high fluctuation of human resources negatively impact knowledge transfer.
Organizations are under increasing pressure to develop applications within budget and time at high quality. Therefore, multiple organizations adopt Low Code Development Platforms (LCDP) to develop applications faster and cheaper compared to traditional application development. However, current research on LCDP adoption lacks empirical grounding as well as a deeper understanding of the importance of adoption drivers and inhibitors. We conducted semi-structured interviews and a Delphi study with seventeen experts to address these gaps. As a result, we identified twelve drivers and nineteen inhibitors for adopting LCDPs. We show that the experts have a consensus on the most and the least important drivers and inhibitors for LCDP adoption. Yet, the ranking of the drivers and inhibitors between the most and least important is highly context dependent. For some drivers and inhibitors, the experts’ ranking is similar to academic literature, whereas, for others, it differs. In conclusion, the study at hand empirically validates drivers and inhibitors for LCDP adoption, adds six new drivers and six new inhibitors to the body of knowledge, and analyses the importance of these factors.
Conformance analysis is a static code analysis (SCA) technique for software quality assurance. Its core problem is that tools do not automatically learn from errors that have already occurred. To address this, this work evaluated machine learning (ML) by applying a scientifically grounded and practically proven unsupervised learning approach and analyzing the result. It was found that, in order to apply the approach to different programming languages, only a language-specific API mining tool is required. Such a tool scans lines of code in parallel and normalizes them for machine learning processes. This system was implemented for the programming language C#, as many industrial projects are developed in this language. For functional validation, a case study showed that rules with a positive effect on software quality were learned. Specifically, the maintenance effort of a code smell in an example project was reduced by a factor of 30 by extracting a learned association into a shared method. The runtime of the algorithm was evaluated empirically on eight open-source repositories. Through parallelization, an average runtime improvement of 45.16% can be expected. However, the application also revealed limitations: many associations are useless, the rule evaluation depends on a subjective factor, and the economic benefit of the tool is therefore not transparent. Nevertheless, this work demonstrates that an ML-based SCA tool is feasible as a complementary quality assurance measure in software engineering.
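As a rough illustration of the underlying idea, learning association rules over co-occurring API calls, the following sketch mines simple support/confidence rules from normalized call sets. It is not the tool described above; the example call names and thresholds are invented.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" is the set of API calls observed in one method body
# (invented example data, as produced by a hypothetical API mining tool).
transactions = [
    {"File.Open", "File.Read", "File.Close"},
    {"File.Open", "File.Read", "File.Close"},
    {"File.Open", "File.Read"},                 # missing Close -> candidate violation
    {"Socket.Connect", "Socket.Send", "Socket.Close"},
]

min_support, min_confidence = 0.4, 0.6
item_counts = Counter(i for t in transactions for i in t)
pair_counts = Counter(p for t in transactions for p in combinations(sorted(t), 2))

n = len(transactions)
for (a, b), c in pair_counts.items():
    support = c / n
    for lhs, rhs in ((a, b), (b, a)):
        confidence = c / item_counts[lhs]
        if support >= min_support and confidence >= min_confidence:
            # A learned rule such as "File.Open => File.Close" can then be
            # checked against new code during conformance analysis.
            print(f"{lhs} => {rhs} (support={support:.2f}, confidence={confidence:.2f})")
```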
Digital transformation in real time: tomorrow's goals based on yesterday's data model
(2022)
Digital transformation challenges companies of all kinds. Ironically, it is often the existing IT systems with their rigid structures that slow companies down in their digital transformation. Even though software vendors have long since reacted and offer new, more flexible versions of their products, a major software migration is still a challenge for companies and a step that needs to be well considered and planned. This work therefore presents an approach that uses in-memory technology and virtualization to generate at least the most important results of the transformation in real time on the existing data models. This buys enough time to carry out the actual transformation of the IT landscape in a planned manner and with the necessary care.
The prospect of achieving computational speedups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for the known quantum algorithms. We present the first quantum implementation of join ordering, which is one of the most investigated and fundamental query optimisation problems, based on a reformulation to quadratic unconstrained binary optimisation problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speed-ups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that limits of early-stage technology are quickly reached.
Current QPUs are classified as noisy, intermediate scale quantum computers (NISQ), and are restricted by a variety of limitations that reduce their capabilities as compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade between achievable solution quality and possible problem size.
In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs, and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs.
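For readers unfamiliar with the target format, the sketch below shows what a quadratic unconstrained binary optimisation (QUBO) instance looks like and how a tiny one can be solved classically by brute force. The matrix is a toy example; the actual join-ordering encoding used in the work above is considerably more involved and is not reproduced here.

```python
import itertools
import numpy as np

# A QUBO asks for a binary vector x minimising x^T Q x.
# Toy matrix (not a join-ordering encoding): diagonal entries reward single
# bits, off-diagonal entries penalise selecting two conflicting options.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best = min((np.array(bits) @ Q @ np.array(bits), bits)
           for bits in itertools.product((0, 1), repeat=3))
print("minimum energy:", best[0], "at x =", best[1])
```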
We evaluate the applicability of quantum computing on two fundamental query optimization problems, join order optimization and multi query optimization (MQO). We analyze the problem dimensions that can be solved on current gate-based quantum systems and quantum annealers, the two currently commercially available architectures.
First, we evaluate the use of gate-based systems on MQO, previously solved with quantum annealing. We show that, contrary to classical computing, a different architecture requires involved adaptations. We moreover propose a multi-step reformulation for join ordering problems to make them solvable on current quantum systems. Finally, we systematically evaluate our contributions for gate-based quantum systems and quantum annealers. Doing so, we identify the scope of current limitations, as well as the future potential of quantum computing technologies for database systems.
EMDLAB: A toolbox for analysis of single-trial EEG dynamics using empirical mode decomposition
(2015)
Background:
Empirical mode decomposition (EMD) is an empirical, data-driven decomposition technique. Recently, there has been growing interest in applying EMD in the biomedical field.
New method:
EMDLAB is an extensible plug-in for the EEGLAB toolbox, which is an open software environment for electrophysiological data analysis.
Results:
EMDLAB can be used to perform, easily and effectively, four common variants of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD) and multivariate EMD (MEMD). In addition, EMDLAB is a user-friendly toolbox that is closely integrated into the EEGLAB toolbox.
Comparison with existing methods:
EMDLAB gains an advantage over other open-source toolboxes by exploiting the advantageous visualization capabilities of EEGLAB for extracted intrinsic mode functions (IMFs) and Event-Related Modes (ERMs) of the signal.
Conclusions:
EMDLAB is a reliable, efficient, and automated solution for extracting and visualizing the IMFs and ERMs obtained by EMD algorithms in EEG studies.
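For readers unfamiliar with EMD, the following minimal Python sifting sketch illustrates the core idea of plain EMD: envelopes interpolated through local extrema and iterative subtraction of the mean envelope. EMDLAB itself is a MATLAB/EEGLAB plug-in whose implementation differs in many details (stopping criteria, boundary handling, the EEMD/wSEMD/MEMD variants).

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift(x, n_sift=10):
    """Extract one IMF via a fixed number of sifting iterations (simplified)."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        peaks, _ = find_peaks(h)
        troughs, _ = find_peaks(-h)
        if len(peaks) < 3 or len(troughs) < 3:
            break
        upper = CubicSpline(peaks, h[peaks])(t)      # upper envelope
        lower = CubicSpline(troughs, h[troughs])(t)  # lower envelope
        h = h - (upper + lower) / 2                  # subtract mean envelope
    return h

def emd(x, n_imfs=3):
    """Decompose x into a few IMFs plus a residue (plain EMD, simplified)."""
    imfs, residue = [], x.astype(float)
    for _ in range(n_imfs):
        imf = sift(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, res = emd(signal)
print([np.round(np.std(c), 3) for c in imfs])
```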
Today, ubiquitous mobile devices have not only arrived but have entered the safety-critical domain, where systems are controlled whose failure puts human health or even human life at risk. For example, in automation systems, first ideas are surfacing to control parts of the system via a commercial off-the-shelf (COTS) smartphone. Another example is the idea to control the autonomous parking function of a car via a COTS smartphone as well. As beneficial and convenient as these ideas appear at first thought, on second thought the dangers of such approaches become obvious. Especially in the case of failures, the system's safety has to be maintained. The open question is how to achieve this mandatory requirement with COTS components, e.g. smartphones, that are not developed following the development process necessary for safety-critical systems. This paper presents a concept to reliably detect human interaction while activating safety-critical functions via COTS mobile devices. Thus, a means is provided to detect erroneous activation requests for the safety-critical function.
PURPOSE
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography.
METHODS
In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization.
RESULTS
Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection.
CONCLUSIONS
The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
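As a generic illustration of sparse regularization, the following sketch minimises ||Ax - b||^2 + lambda*||x||_1 with plain iterative soft-thresholding (ISTA). It uses an invented toy matrix, the canonical basis instead of a curvelet frame, and ISTA instead of the ADMM solver used by the authors; it is meant only to show the shrinkage mechanism that underlies sparse regularization.

```python
import numpy as np

def ista(A, b, lam, n_iter=1000):
    """Iterative soft-thresholding for min_x ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # ||A||^2; gradient Lipschitz constant is 2L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # half of the gradient of the data term
        z = x - grad / L                     # gradient step with step size 1/(2L)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))               # underdetermined toy "measurement" operator
x_true = np.zeros(100); x_true[[5, 42, 77]] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=40)

x_hat = ista(A, b, lam=0.5)
# Indices of the three largest recovered coefficients
print(sorted(np.argsort(np.abs(x_hat))[-3:]))
```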
We consider the reconstruction problem for limited angle tomography using filtered backprojection (FBP) and lambda tomography. We use microlocal analysis to explain why the well-known streak artifacts are present at the end of the limited angular range. We explain how to mitigate the streaks and prove that our modified FBP and lambda operators are standard pseudodifferential operators, and so they do not add artifacts. We provide reconstructions to illustrate our mathematical results.
We investigate the reconstruction problem of limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, electron microscopy, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, e.g. filtered backprojection (FBP), do not perform well in such situations.
To stabilize the reconstruction procedure additional prior knowledge about the unknown object has to be integrated into the reconstruction process. In this work, we propose the use of the sparse regularization technique in combination with curvelets. We argue that this technique gives rise to an edge-preserving reconstruction. Moreover, we show that the dimension of the problem can be significantly reduced in the curvelet domain. To this end, we give a characterization of the kernel of the limited angle Radon transform in terms of curvelets and derive a characterization of solutions obtained through curvelet sparse regularization. In numerical experiments, we will show that the theoretical results directly translate into practice and that the proposed method outperforms classical reconstructions.
Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar
(2015)
We develop a paradigm using microlocal analysis that allows one to characterize the visible and added singularities in a broad range of incomplete data tomography problems. We give precise characterizations for photoacoustic and thermoacoustic tomography and sonar, and provide artifact reduction strategies. In particular, our theorems show that it is better to arrange sonar detectors so that the boundary of the set of detectors does not have corners and is smooth. To illustrate our results, we provide reconstructions from synthetic spherical mean data as well as from experimental photoacoustic data.
We investigate the reconstruction problem for limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, such as filtered backprojection (FBP), do not perform well in such situations. To stabilize the inversion we propose the use of a sparse regularization technique in combination with curvelets. We argue that this technique has the ability to preserve edges. As our main result, we present a characterization of the kernel of the limited angle Radon transform in terms of curvelets. Moreover, we characterize reconstructions which are obtained via curvelet sparse regularizations at a limited angular range. As a result, we show that the dimension of the limited angle problem can be significantly reduced in the curvelet domain.
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called piecewise-constant Mumford–Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge on the gray levels nor on the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited data situations. For instance, our method is able to recover all segments of the Shepp–Logan phantom from seven angular views only. We illustrate the practical applicability on a real positron emission tomography dataset. As further applications, we consider spherical Radon data as well as blurred data.
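To make the Potts model more tangible, here is the classical dynamic program for the one-dimensional Potts problem, which penalises the number of jumps plus a squared data-fidelity term. This is only the textbook 1D special case; the article's method for full inverse imaging problems such as Radon data relies on a more elaborate splitting that is not reproduced here.

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact 1D Potts: minimise gamma * (#jumps) + sum_i (x_i - y_i)^2
    over piecewise constant x, by dynamic programming in O(n^2)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def dev(l, r):   # squared deviation of y[l..r] from its mean (0-based, inclusive)
        s, s2, m = csum[r + 1] - csum[l], csum2[r + 1] - csum2[l], r - l + 1
        return s2 - s * s / m

    best = np.full(n + 1, np.inf)
    best[0] = -gamma                 # so the first segment carries no jump penalty
    last = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            cost = best[l - 1] + gamma + dev(l - 1, r - 1)
            if cost < best[r]:
                best[r], last[r] = cost, l
    # Backtrack the segmentation and fill each segment with its mean
    x, r = np.empty(n), n
    while r > 0:
        l = last[r]
        x[l - 1:r] = y[l - 1:r].mean()
        r = l - 1
    return x

noisy = (np.concatenate([np.full(30, 1.0), np.full(30, 4.0)])
         + 0.3 * np.random.default_rng(0).normal(size=60))
print(np.unique(np.round(potts_1d(noisy, gamma=2.0), 2)))
```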
Simultaneous EEG-fMRI provides an increasingly attractive research tool to investigate cognitive processes with high temporal and spatial resolution. However, artifacts in EEG data introduced by the MR scanner still remain a major obstacle. This study, employing commonly used artifact correction steps, shows that head motion, one overlooked major source of artifacts in EEG-fMRI data, can cause plausible EEG effects and EEG–BOLD correlations. Specifically, low-frequency EEG (< 20 Hz) is strongly correlated with in-scanner movement. Accordingly, minor head motion (< 0.2 mm) induces spurious effects in a twofold manner: Small differences in task-correlated motion elicit spurious low-frequency effects, and, as motion concurrently influences fMRI data, EEG–BOLD correlations closely match motion-fMRI correlations. We demonstrate these effects in a memory encoding experiment showing that obtained theta power (~ 3–7 Hz) effects and channel-level theta–BOLD correlations reflect motion in the scanner. These findings highlight an important caveat that needs to be addressed by future EEG-fMRI studies.
Background
The purpose of this study was to evaluate the impact of Cone Beam CT (CBCT) based setup correction on total dose distributions in fractionated frameless stereotactic radiation therapy of intracranial lesions.
Methods
Ten patients with intracranial lesions treated with 30 Gy in 6 fractions were included in this study. Treatment planning was performed with Oncentra® for a SynergyS® (Elekta Ltd, Crawley, UK) linear accelerator with XVI® Cone Beam CT, and HexaPOD™ couch top. Patients were immobilized by thermoplastic masks (BrainLab, Reuther). After initial patient setup with respect to lasers, a CBCT study was acquired and registered to the planning CT (PL-CT) study. Patient positioning was corrected according to the correction values (translational, rotational) calculated by the XVI® system. Afterwards a second CBCT study was acquired and registered to the PL-CT to confirm the accuracy of the corrections. An in-house developed software was used for rigid transformation of the PL-CT to the CBCT geometry, and dose calculations for each fraction were performed on the transformed CT. The total dose distribution was achieved by back-transformation and summation of the dose distributions of each fraction. Dose distributions based on PL-CT, CBCT (laser set-up), and final CBCT were compared to assess the influence of setup inaccuracies.
Results
The mean displacement vector, calculated over all treatments, was reduced from (4.3 ± 1.3) mm for laser based setup to (0.5 ± 0.2) mm if CBCT corrections were applied. The mean rotational errors around the medial-lateral, superior-inferior, anterior-posterior axis were reduced from (−0.1 ± 1.4)°, (0.1 ± 1.2)° and (−0.2 ± 1.0)°, to (0.04 ± 0.4)°, (0.01 ± 0.4)° and (0.02 ± 0.3)°. As a consequence the mean deviation between planned and delivered dose in the planning target volume (PTV) could be reduced from 12.3% to 0.4% for D95 and from 5.9% to 0.1% for Dav. Maximum deviation was reduced from 31.8% to 0.8% for D95, and from 20.4% to 0.1% for Dav.
Conclusion
Real dose distributions differ substantially from planned dose distributions if setup is performed according to lasers only. Thermoplastic masks combined with a daily CBCT enabled sufficient accuracy in dose distribution.
Re-irradiation of spinal column metastases by IMRT: Impact of setup errors on the dose distribution
(2013)
Background
This study investigates the impact of an automated image guided patient setup correction on the dose distribution for ten patients with in-field IMRT re-irradiation of vertebral metastases.
Methods
10 patients with spinal column metastases who had previously been treated with 3D-conformal radiotherapy (3D-CRT) were simulated to have an in-field recurrence. IMRT plans were generated for treatment of the vertebrae sparing the spinal cord. The dose distributions were compared for a patient setup based on skin marks only and a Cone Beam CT (CBCT) based setup with translational and rotational couch corrections using an automatic robotic image guided couch top (Elekta - HexaPOD™ IGuide® - system). The biological equivalent dose (BED) was calculated to evaluate and rank the effects of the automatic setup correction for the dose distribution of CTV and spinal cord.
Results
The mean absolute value (± standard deviation) over all patients and fractions is 6.1 mm (±4 mm) for the translational error and 2.7° (±1.1°) for the rotational error. The dose coverage of the 95% isodose for the CTV is considerably decreased for the uncorrected table setup. This is associated with an increase of the spinal cord dose above the tolerance dose.
Conclusions
An automatic image guided table correction ensures the delivery of accurate dose distribution and reduces the risk of radiation induced myelopathy.
This paper introduces a novel chaotic flower pollination algorithm (CFPA) to solve a tardiness-constrained flow-shop scheduling problem with simultaneously loaded stations. This industrial manufacturing problem is modeled from a filter basket production line in Germany and has generally been solved using standard deterministic algorithms. This research develops a metaheuristic approach based on the highly efficient flower pollination algorithm coupled with different chaos maps for stochasticity. The objective function targets the tardiness constraint of the due dates. Fifteen different experiments with thirty scenarios are generated to mimic industrial conditions. The results are compared with a genetic algorithm (GA) and with four standard benchmark priority rule-based deterministic algorithms: First In First Out, Raghu and Rajendran, Shortest Processing Time and Slack. From the obtained results and the analysis of the relative difference, percentage relative difference and t tests, CFPA was found to perform significantly better than the deterministic heuristics and the GA.
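The "chaotic" part of such a metaheuristic usually means replacing uniform pseudo-random draws with the iterates of a chaotic map. Below is a minimal sketch of a logistic map used as a stochasticity source; the full flower pollination algorithm with Lévy flights and the scheduling model itself are beyond the scope of this example.

```python
def logistic_map(x0=0.7, r=4.0):
    """Yield chaotic values in (0, 1) from the logistic map x_{k+1} = r*x*(1-x)."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

chaos = logistic_map()
# Use the chaotic sequence wherever the metaheuristic needs a random number,
# e.g. to decide between global and local pollination or to perturb a schedule.
draws = [next(chaos) for _ in range(5)]
print(draws)
```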
The digital twin (DT) is an important component of Industry 4.0 and enables applications such as predictive maintenance, virtual prototyping, and the control of production and logistics processes. Challenges in developing a digital twin arise from a lack of structure and standards. This contribution presents a process model for creating a digital twin in the field of production and logistics. The process model helps to assess for which use cases a digital twin can be developed and which steps are required for an implementation, and it provides an overview of the prerequisites and complexity of the development. Its central element is the goal-oriented preparation and analysis of the underlying data using CRISP-DM, a process model well established in industry.
Radiative transfer modelling of high resolution infrared (or microwave) spectra still represents a major challenge for the processing of atmospheric remote sensing data, despite significant advances in the numerical techniques utilized in line-by-line modelling, e.g., optimized Voigt function algorithms or multigrid approaches. Special purpose computing hardware such as Field Programmable Gate Arrays (FPGAs) can be used to cope with the dramatic increase in data quality and quantity. Utilizing a highly optimized implementation of a uniform rational function approximation of the Voigt function, the molecular absorption cross section computation, which represents the most compute-intensive part of radiative transfer codes, has been realized on an FPGA. The design and implementation of the FPGA coprocessor are presented along with first performance tests and an outlook on the ongoing further development.
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
Multiple hop routing in mobile ad hoc networks can minimize energy consumption and increase data throughput. Yet, the problem of radio interference remains. However, if the routes are restricted to a basic network based on local neighborhoods, these interferences can be reduced such that standard routing algorithms can be applied.
We compare different network topologies for these basic networks, i.e. the Yao-graph (aka. Θ-graph) and some known related models, which we call the SymmY-graph (aka. YS-graph), the SparsY-graph (aka. YY-graph) and the BoundY-graph. Further, we present a promising network topology called the HL-graph (based on Hierarchical Layers).
We compare these topologies regarding degree, spanner properties, and communication features. We investigate how these network topologies bound the number of (uni- and bidirectional) interferences and whether these basic networks provide energy-optimal or congestion-minimal routing. Then, we compare the ability of these topologies to handle dynamic changes of the network when radio stations appear and disappear. For this, we measure the number of involved radio stations and present distributed algorithms for repairing the network structure.
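A minimal sketch of how a Yao-/Θ-style basic network can be constructed locally is shown below: every station partitions its surrounding directions into k cones and keeps only the edge to the nearest neighbour inside each cone. This is the plain Yao construction with invented example coordinates; the SymmY-, SparsY-, BoundY- and HL-variants add further pruning rules not shown here.

```python
import math

def yao_graph(points, k=6):
    """Directed Yao graph: for each node, connect to the nearest neighbour
    within each of k equally sized angular cones."""
    edges = set()
    for i, (xi, yi) in enumerate(points):
        nearest = {}                       # cone index -> (distance, neighbour)
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(yj - yi, xj - xi) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k))
            dist = math.hypot(xj - xi, yj - yi)
            if cone not in nearest or dist < nearest[cone][0]:
                nearest[cone] = (dist, j)
        edges.update((i, j) for _, j in nearest.values())
    return edges

stations = [(0, 0), (2, 1), (1, 3), (-2, 2), (-1, -2), (3, -1)]
print(sorted(yao_graph(stations)))
```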
Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued special functions. For the complex error function w(z) = K(x,y) + iL(x,y), whose real part is the Voigt function K(x,y), the rational approximation developed by Hui, Armstrong, and Wray [Rapid computation of the Voigt and complex error functions, J. Quant. Spectrosc. Radiat. Transfer 19 (1978) 509–516] is investigated. Various optimizations for the algorithm are discussed. In many applications, where these functions have to be calculated for a large x grid with constant y, an implementation using real arithmetic and factorization of invariant terms is especially efficient.
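A straightforward (unoptimized) evaluation of this rational approximation might look as follows. The coefficient values below are the p = 6 values as commonly tabulated in the literature for the Hui-Armstrong-Wray approximant; they should be verified against the original 1978 paper before any production use, and the real-arithmetic factorization discussed in the abstract is not shown here.

```python
import numpy as np

# Hui, Armstrong & Wray rational approximation of the complex error function,
# w(z) = K(x,y) + i*L(x,y), evaluated as a ratio of polynomials in t = y - i*x.
# Coefficients as commonly tabulated; verify against the original publication.
A = [122.607931777104326, 214.382388694706425, 181.928533092181549,
     93.155580458138441, 30.180142196210589, 5.912626209773153,
     0.564189583562615]
B = [122.607931773875350, 352.730625110963558, 457.334478783897737,
     348.703917719495792, 170.354001821091472, 53.992906912940207,
     10.479857114260399, 1.0]

def w_hui(x, y):
    """Approximate complex error function on an x grid at constant y."""
    t = y - 1j * np.asarray(x, dtype=float)
    num = np.polyval(A[::-1], t)      # a0 + a1*t + ... + a6*t^6
    den = np.polyval(B[::-1], t)      # b0 + ... + b6*t^6 + t^7
    return num / den

x = np.linspace(-10, 10, 5)
K = w_hui(x, y=1.0).real              # real part = Voigt function K(x, y)
print(np.round(K, 6))
```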
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design
(2015)
Discrete ill-posed problems are often encountered in engineering applications. Still, their sound analysis is not yet common practice, and difficulties arising in the determination of uncertain parameters are typically not properly attributed. This contribution provides a tutorial review of methods for identifiability analysis, regularization techniques, and optimal experimental design. A guideline for the analysis and classification of nonlinear ill-posed problems to detect practical identifiability problems is given. Techniques for the regularization of experimental design problems resulting from ill-posed parameter estimations are discussed. Applications are presented for three different case studies of increasing complexity.
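A small numerical illustration of the kind of analysis discussed here (an ill-conditioned least-squares parameter estimation, its condition number as a rough identifiability indicator, and Tikhonov regularization as a stabilising remedy) might look as follows. The sensitivity matrix is invented toy data, not one of the case studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensitivity (Jacobian) matrix with two nearly collinear columns,
# i.e. two practically non-identifiable parameters.
J = np.column_stack([np.linspace(0, 1, 50),
                     np.linspace(0, 1, 50) + 1e-4 * rng.normal(size=50)])
theta_true = np.array([1.0, 2.0])
y = J @ theta_true + 0.01 * rng.normal(size=50)

print("condition number:", np.linalg.cond(J))       # large => practically ill-posed

# Ordinary least squares: unstable parameter estimates
print("OLS:", np.linalg.lstsq(J, y, rcond=None)[0])

# Tikhonov (ridge) regularization: theta = (J^T J + alpha*I)^(-1) J^T y
alpha = 1e-2
theta_reg = np.linalg.solve(J.T @ J + alpha * np.eye(2), J.T @ y)
print("Tikhonov:", theta_reg)
```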
The method of loci is one, if not the most, efficient mnemonic encoding strategy. This spatial mnemonic combines the core cognitive processes commonly linked to medial temporal lobe (MTL) activity: spatial and associative memory processes. During such processes, fMRI studies consistently demonstrate MTL activity, while electrophysiological studies have emphasized the important role of theta oscillations (3–8 Hz) in the MTL. However, it is still unknown whether increases or decreases in theta power co-occur with increased BOLD signal in the MTL during memory encoding. To investigate this question, we recorded EEG and fMRI separately, while human participants used the spatial method of loci or the pegword method, a similarly associative but nonspatial mnemonic. The more effective spatial mnemonic induced a pronounced theta power decrease source localized to the left MTL compared with the nonspatial associative mnemonic strategy. This effect was mirrored by BOLD signal increases in the MTL. Successful encoding, irrespective of the strategy used, elicited decreases in left temporal theta power and increases in MTL BOLD activity. This pattern of results suggests a negative relationship between theta power and BOLD signal changes in the MTL during memory encoding and spatial processing. The findings extend the well known negative relation of alpha/beta oscillations and BOLD signals in the cortex to theta oscillations in the MTL.
NoSQL data stores have become very popular over the last years, and good reasons justify their application: One attractive feature of many systems is their schema flexibility, which may be preferable in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also handle highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards in query languages make it difficult to build data-store-independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges as well as solution approaches from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i.e., key-value data management systems as well as document and column family data management systems.
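A small illustration of the kind of application-side schema management this implies (lazy, read-time migration of legacy documents when the data store itself does not enforce a schema) could look like this; the field names and document structure are invented.

```python
# Documents from a schema-flexible (aggregate-oriented) store may come in
# several historical versions; the application has to reconcile them on read.
legacy_docs = [
    {"_id": 1, "name": "Alice Example", "city": "Regensburg"},            # version 1
    {"_id": 2, "first_name": "Bob", "last_name": "Example",               # version 2
     "address": {"city": "Munich"}},
]

def migrate(doc):
    """Lazy read-time migration of a customer document to the newest version."""
    if "name" in doc:                       # v1 -> v2: split the name field
        first, _, last = doc.pop("name").partition(" ")
        doc["first_name"], doc["last_name"] = first, last
    if "city" in doc:                       # v1 -> v2: nest the address
        doc["address"] = {"city": doc.pop("city")}
    return doc

print([migrate(dict(d)) for d in legacy_docs])
```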
Inverse problems are at the heart of many practical problems such as image reconstruction or nondestructive testing. A characteristic feature is their instability with respect to data perturbations. To stabilize the inversion process, regularization methods must be developed and applied. In this paper, we introduce the concept of filtered diagonal frame decomposition, which extends the classical filtered SVD to the case of frames. The use of frames as generalized singular systems allows a better match to a given class of potential solutions and is also beneficial for problems where the SVD is not analytically available. We show that filtered diagonal frame decompositions yield convergent regularization methods, derive convergence rates under source conditions and prove order optimality. Our analysis applies to bounded and unbounded forward operators. As a practical application of our tools, we study filtered diagonal frame decompositions for inverting the Radon transform as an unbounded operator on L2(R2).
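The classical special case mentioned above, a filtered SVD, can be written in a few lines. Replacing the singular system by a diagonal frame is the generalisation studied in the paper and is not shown here; the matrix below is an invented toy operator with a decaying spectrum.

```python
import numpy as np

def filtered_svd_solve(A, b, alpha):
    """Tikhonov-filtered SVD: x = sum_k  s_k/(s_k^2 + alpha) * <u_k, b> * v_k."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filter_factors = s / (s ** 2 + alpha)       # damps contributions of small singular values
    return Vt.T @ (filter_factors * (U.T @ b))

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 30)) @ np.diag(0.9 ** np.arange(30))   # decaying spectrum
x_true = rng.normal(size=30)
b = A @ x_true + 1e-3 * rng.normal(size=30)                     # noisy data

for alpha in (0.0, 1e-4, 1e-2):
    err = np.linalg.norm(filtered_svd_solve(A, b, alpha) - x_true)
    print(f"alpha={alpha:g}  error={err:.3f}")
```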
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Quantum key distribution is among the foremost applications of quantum mechanics, both in terms of fundamental physics and as a technology on the brink of commercial deployment. Starting from principal schemes and initial proofs of unconditional security for perfect systems, much effort has gone into providing secure schemes which can cope with the numerous experimental imperfections unavoidable in real world implementations. In this paper, we provide a comparison of various schemes and protocols. We analyse their efficiency and performance when implemented with imperfect physical components. We consider how experimental faults are accounted for using effective parameters. We compare various recent protocols and provide guidelines as to which components offer the greatest advances when improved.
We experimentally analyze the complete photon number statistics of parametric down-conversion and ascertain the influence of multimode effects. Our results clearly reveal a difference between single-mode theoretical description and the measured distributions. Further investigations assure the applicability of loss-tolerant photon number reconstruction and prove strict photon number correlation between signal and idler modes.
Every security analysis of quantum-key distribution (QKD) relies on a faithful modeling of the employed quantum states. Many photon sources, such as for instance a parametric down-conversion (PDC) source, require a multimode description but are usually only considered in a single-mode representation. In general, the important claim in decoy-based QKD protocols for indistinguishability between signal and decoy states does not hold for all sources. We derive bounds on the single-photon transmission probability and error rate for multimode states and apply these bounds to the output state of a PDC source. We observe two opposing effects on the secure key rate. First, the multimode structure of the state gives rise to a new attack that decreases the key rate. Second, more contributing modes change the photon number distribution from a thermal toward a Poissonian distribution, which increases the key rate.
Random numbers are a valuable component in diverse applications ranging from simulations and gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, is an important attribute for achieving high-reliability, high-speed and low-cost quantum random number generators.
Anyone who wants to visualize their data with attractive and informative graphs usually needs a lot of patience. The R extension ggplot2 brings system to graphics, expresses itself in concise source code, and breathes fresh air into everyday data visualization.
As use of digital fabrication increases in architecture, engineering and construction, the industry seeks appropriate management and processes to enable the adoption during the design/planning phase. Many enablers have been identified across various studies; however, a comprehensive synthesis defining the enablers of design for digital fabrication does not yet exist. This work conducts a systematic literature review of 59 journal articles published in the past decade and identifies 140 enablers under eight categories: actors, resources, conditions, attributes, processes, artefacts, values and risks. The enablers’ frequency network is illustrated using an adjacency matrix. Through the lens of actor-network theory, the work creates a relational ontology to demonstrate the linkages between different enablers. Three examples are presented using onion diagrams: circular construction focus, business model focus and digital twin in industrialisation focus. Finally, this work discusses the intersection of relational ontology with process modelling to design future digital fabrication work routines.
Special issue ISARC 2021
(2022)
The research field of construction robotics is broadening steadily in terms of complexity, approaches, technologies used, active stakeholders, and application areas. Worldwide labour and resource shortages, the need to increase circularity and resource efficiency, new materials, and the increasing use of digital construction tools in the planning and construction industry massively spur the uptake of robotic solutions for on-site construction.
The initial boom of construction robots happened in the 1970s, driven by the Japanese construction industry. In the 1980s, a combination with parallel developments was supposed to achieve complete, integrated robotic on-site factories. From the mid-1980s onwards, the global interest in construction robots decreased gradually. Bulky and expensive systems, complex on-site navigation and logistics approaches, a narrow scope of tasks, inflexibility, incompatibility with on-site work organisation and professional qualification, low usability and insufficient inter-robot coordination capabilities revealed the immaturity of the systems. Only a few organisations predominantly situated in Asia such as Takenaka, Obayashi, Kajima Corporation, Nihon Bisho Co., Samsung, and Hitachi maintained development activities.
However, since the mid-2010s, development activities have been gaining traction again. On the application side, this is mainly driven by trends such as the need to upgrade the energy performance of buildings in Europe, a global necessity to remove asbestos from existing structures, and a demand for enormous quantities of high-rise buildings all over East Asia. On the system side, the renewed interest stems from major advances in physical-mechanical robot technology in other automation-driven industries such as the automotive industry. Robots became lighter and more flexible, their parts modular and interchangeable, more user-friendly as well as significantly cheaper. On the digital side, the BIM-to-Robot pipeline has been the subject of intensive research and development. More and more methods and tools help to increase the usability of robots and facilitate the simulation and optimisation of robot-driven construction processes.
In the last 4–5 years, the worldwide growing need for and interest in construction robotics has become highly evident. More than 200 robot systems are being pushed to the market by start-ups and spin-offs and their investors. This is backed by an enormous number of activities and projects carried out in academia, pushing the boundaries of what is technologically possible.
Major associations and their conferences, such as ISARC (International Association for Automation and Robotics in Construction), EC3 (European Council on Computing in Construction), and Robots in Architecture, are growing significantly in popularity. Competency in digital construction, automation and robotics is becoming key for all stakeholders in the construction industry, and many universities worldwide are launching dedicated interdisciplinary programs. Powerful governments (China) and major funding programs such as Horizon Europe (Europe) massively request and fund the development of robotic solutions for construction such as drones, mobile robots, 3D-printing solutions, cable-driven robots, and exoskeletons. Regulators and standardisation organisations are starting to develop the first certification and standardisation schemes for construction robots, and large software companies are attempting to make it possible to simulate and program robotic construction processes efficiently and robustly based on digital building and construction data.
To showcase the diversity of cutting-edge research in the area, this special issue invited eight extended versions of selected papers from the ISARC 2021 conference. As such, this issue covers digital approaches to embed fabrication and robot information in BIM and IFC and to program robots directly from digital building models. New robot systems spur novel robotic production processes, and machine learning enables novel logistics approaches for building components that may ultimately lead to robotic cranes and other robotic on-site logistics and handling solutions (including autonomous construction machines). In parallel, systematic evaluation and robot development methods are being developed that shed light on their performance in the construction process.
Dynamic Time Warping (DTW) is a well-known similarity measure for time series. The standard dynamic programming approach to compute the DTW distance of two length-n time series, however, requires O(n2) time, which is often too slow for real-world applications. Therefore, many heuristics have been proposed to speed up the DTW computation. These are often based on lower bounding techniques, approximating the DTW distance, or considering special input data such as binary or piecewise constant time series. In this paper, we present a first exact algorithm to compute the DTW distance of two run-length encoded time series whose running time only depends on the encoding lengths of the inputs. The worst-case running time is cubic in the encoding length. In experiments we show that our algorithm is indeed fast for time series with short encoding lengths.
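For reference, the standard O(n²) dynamic program that the paper improves upon for run-length encoded inputs looks as follows. This is a plain implementation with squared point costs, not the run-length-based algorithm of the paper.

```python
import numpy as np

def dtw(a, b):
    """Standard dynamic-programming DTW distance with squared point costs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

x = np.array([1.0, 1.0, 2.0, 3.0, 3.0, 3.0])   # run-length friendly input
y = np.array([1.0, 2.0, 2.0, 3.0])
print(dtw(x, y))   # 0.0: the two series can be warped onto each other
```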
The DevOps paradigm is widely used in industry to develop software faster and deploy frequent, high-quality feature releases by integrating and harmonizing development and IT operations activities.
Industries are taking strategic decisions to remove the barriers that existed between development and operations teams by encouraging collaboration among these teams throughout the system development life cycle (SDLC). These strategic decisions to implement the DevOps paradigm have resulted in the development and emergence of large arrays of toolchains to support, monitor, and automate the activities of the various SDLC stages. In this paper, the authors give practical insights into how using DevOps can speed up the management, development, and deployment of a simple web application. A widely used DevOps model consisting of eight stages is used to implement the example application, and a toolchain of state-of-the-art tools is used at the various DevOps stages. A detailed explanation of each tool, including details of its implementation, and a short evaluation conclude the study. The results revealed that using DevOps accelerates the development process of web applications, as most steps during the build and testing process can be automated. In particular, outsourcing operational overhead to an external cloud provider can lead to economic advantages, which will impact the future of software development.
Independent component analysis (ICA), being a data-driven method, has been shown to be a powerful tool for functional magnetic resonance imaging (fMRI) data analysis. One drawback of this multivariate approach is that it is not, in general, compatible with the analysis of group data. Various techniques have been proposed to overcome this limitation of ICA. In this paper, a novel ICA-based workflow for extracting resting-state networks from fMRI group studies is proposed. An empirical mode decomposition (EMD) is used, in a data-driven manner, to generate reference signals that can be incorporated into a constrained version of ICA (cICA), thereby eliminating the inherent ambiguities of ICA. The results of the proposed workflow are then compared to those obtained by a widely used group ICA approach for fMRI analysis. In this study, we demonstrate that intrinsic modes, extracted by EMD, are suitable to serve as references for cICA. This approach yields typical resting-state patterns that are consistent over subjects. By introducing these reference signals into the ICA, our processing pipeline yields comparable activity patterns across subjects in a mathematically transparent manner. Our approach provides a user-friendly tool to adjust the trade-off between a high similarity across subjects and preserving individual subject features of the independent components.
Background and objective
The study follows the proposal of decomposing a given data matrix into a product of independent spatial and temporal component matrices. A multi-variate decomposition approach is presented, based on an approximate diagonalization of a set of matrices computed using a latent space representation.
Methods
The proposed methodology follows an algebraic approach, which is common to spatial, temporal, or spatiotemporal blind source separation algorithms. More specifically, the algebraic approach relies on singular value decomposition techniques, which avoid computationally costly and numerically unstable matrix inversions. The method is equally applicable to correlation matrices determined from second-order correlations or from fourth-order correlations.
Results
The resulting algorithms are applied to fMRI data sets either to extract the underlying fMRI components or to extract connectivity maps from resting state fMRI data collected for a dynamic functional connectivity analysis. Intriguingly, our algorithm shows increased spatial specificity compared to common approaches, while temporal precision stays similar.
Conclusion
The study presents a novel spatiotemporal blind source separation algorithm which is robust and avoids parameters that are difficult to fine-tune. Applied to experimental data sets, the new method yields highly confined and focused areas with the least spatial extent in the retinotopy case, and results similar to other blind source separation algorithms in the dynamic functional connectivity analyses. We therefore conclude that our novel algorithm is highly competitive and yields results that are superior or at least similar to existing approaches.
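To illustrate the numerical point made under Methods, here is a minimal numpy sketch of replacing an explicit matrix inversion by an SVD-based whitening of a correlation matrix; the matrix names and the truncation threshold are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def whiten_via_svd(C, tol=1e-10):
    """Compute a whitening transform of a symmetric correlation matrix C
    from its SVD instead of an explicit (potentially unstable) inversion."""
    U, s, Vt = np.linalg.svd(C)
    keep = s > tol * s.max()              # drop numerically negligible directions
    W = (U[:, keep] / np.sqrt(s[keep])).T  # W @ C @ W.T is (approximately) identity
    return W

# usage: W = whiten_via_svd(np.cov(X))   # X: variables x samples
```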
Investigating temporal variability of functional connectivity is an emerging field in connectomics. Entering dynamic functional connectivity by applying sliding window techniques on resting-state fMRI (rs-fMRI) time courses emerged from this topic. We introduce frequency-resolved dynamic functional connectivity (frdFC) by means of multivariate empirical mode decomposition (MEMD) followed up by filter-bank investigations. In general, we find that MEMD is capable of generating time courses to perform frdFC and we discover that the structure of connectivity-states is robust over frequency scales and even becomes more evident with decreasing frequency. This scale-stability varies with the number of extracted clusters when applying k-means. We find a scale-stability drop-off from k = 4 to k = 5 extracted connectivity-states, which is corroborated by null-models, simulations, theoretical considerations, filter-banks, and scale-adjusted windows. Our filter-bank studies show that filter design is more delicate in the rs-fMRI than in the simulated case. Besides offering a baseline for further frdFC research, we suggest and demonstrate the use of scale-stability as a possible quality criterion for connectivity-state and model selection. We present first evidence showing that connectivity-states are both a multivariate, and a multiscale phenomenon. A data repository of our frequency-resolved time-series is provided.
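As a rough illustration of the sliding-window idea this line of work builds on (not of the MEMD-based frdFC pipeline itself), the following numpy/scikit-learn sketch computes windowed correlation matrices and clusters them into connectivity states with k-means; window length, stride, and the number of states k are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def sliding_window_states(ts, win=30, stride=5, k=4):
    """ts: array of shape (timepoints, regions).
    Returns one k-means 'connectivity state' label per sliding window."""
    t, r = ts.shape
    windows = []
    for start in range(0, t - win + 1, stride):
        c = np.corrcoef(ts[start:start + win].T)    # regions x regions correlation
        windows.append(c[np.triu_indices(r, k=1)])  # vectorize the upper triangle
    windows = np.asarray(windows)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(windows)
    return labels
```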
A Deep Learning System to Transform Cross-Section Spectra to Varying Environmental Conditions
(2022)
Absorption cross-sections provide a basis for many gas sensing applications. Therefore, any error in molecular cross-sections caused by varying environmental conditions propagates to spectroscopic applications. Original molecular cross-sections in varying environmental conditions can only be simulated for some molecules, whereas for most multi-atom molecules, one must rely on high-precision measurements at certain environmental configurations. In this study, a deep learning system trained with simulated absorption cross-sections for predicting cross-sections at a different pressure configuration is presented. The system’s capability to transfer to measured, multi-atom cross-sections is demonstrated. Thus, it provides an alternative to (pseudo-) line lists whenever the required information for simulation is unavailable. The predictive performance of the system was evaluated on validation data via simulation, and its transfer learning capabilities were demonstrated on actual measured chlorine nitrate data. From the comparison between the system and line lists, the system shows slightly worse performance than pseudo-line lists, but its predictive quality is still deemed acceptable, with a relative integral change of less than 5% and a highly localized error around the peak center. This opens a promising way for further research to use deep learning to simulate the effect of varying environmental conditions on absorption cross-sections.
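The abstract does not specify the network architecture, so the following PyTorch sketch only illustrates the general setup of learning a mapping from a cross-section at a reference pressure (plus the source and target pressure values) to the cross-section at the target pressure; layer sizes, the input encoding, and the loss are assumptions for illustration, not the authors' system.

```python
import torch
import torch.nn as nn

class CrossSectionTransformer(nn.Module):
    """Maps a sampled cross-section plus (source, target) pressures to the
    cross-section at the target pressure. Purely illustrative architecture."""
    def __init__(self, n_points):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_points),
        )

    def forward(self, sigma, p_src, p_tgt):
        x = torch.cat([sigma, p_src.unsqueeze(-1), p_tgt.unsqueeze(-1)], dim=-1)
        return self.net(x)

# training sketch: minimize the mean squared error against simulated spectra
model = CrossSectionTransformer(n_points=1024)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```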
Lately, parallel task models have received much attention in the development of real-time multiprocessor systems, as they allow highly compute-intensive tasks to have shorter deadlines, which is very much required in modern reactive systems. However, missing modularity and portability can make parallel programming a cumbersome endeavor. As a consequence, compute-intensive sectors in the desktop and server segment have relied on parallelism frameworks such as Intel Threading Building Blocks, Cilk, and OpenMP. These parallelism frameworks, however, are optimized for decent average-case performance and consequently do not meet the strict requirements imposed by real-time systems.
In this paper, we present a proof-of-concept parallelism framework that was implemented specifically for soft real-time systems, with the tight timing and safety requirements of such critical systems in mind. The proposed runtime system implements static memory allocation in a work-stealing environment that conforms to the strict space and tight probabilistic time bounds of work-stealing schedulers. Furthermore, we evaluate the performance of this framework by conducting multiprogrammed benchmarks on a real-time embedded multicore architecture.
This short survey reviews the recent literature on the relationship between the brain structure and its functional dynamics. Imaging techniques such as diffusion tensor imaging (DTI) make it possible to reconstruct axonal fiber tracks and describe the structural connectivity (SC) between brain regions. By measuring fluctuations in neuronal activity, functional magnetic resonance imaging (fMRI) provides insights into the dynamics within this structural network. One key for a better understanding of brain mechanisms is to investigate how these fast dynamics emerge on a relatively stable structural backbone. So far, computational simulations and methods from graph theory have been mainly used for modeling this relationship. Machine learning techniques have already been established in neuroimaging for identifying functionally independent brain networks and classifying pathological brain states. This survey focuses on methods from machine learning, which contribute to our understanding of functional interactions between brain regions and their relation to the underlying anatomical substrate.
Semantic alignment of application software components’ ontologies is of great interest in vehicle application domains that manipulate heterogeneous, overlapping knowledge application frameworks. In the past few years, with the growth of novel vehicle service requirements such as autonomous driving, V2X (vehicle-to-everything) communication, and many others, automotive application software component models are becoming increasingly collaborative with other qualified cross-enterprise industrial partners to accomplish these complex service requirements. The most daunting impediment to this cross-enterprise collaboration is semantic interoperability. For efficient service collaboration through cross-enterprise semantic interoperability between the vehicle application frameworks’ software components, the major focus of this paper is aligning the interface ontologies of these components by identifying the depth of semantic alignment relationships between the concepts of the interface ontologies. In contrast to several existing ontology structural metrics, this work defines, evaluates, and validates ontology metrics to measure the depth of semantic alignment between the vehicle domain software component frameworks’ interface ontological models. To emphasize the substantial role of semantic alignment of software component frameworks’ interface ontologies in semantic interoperability, a typical vehicle domain case study involving vehicle applications is considered for demonstration.
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered.
Practice projects carried out with external clients are becoming increasingly popular in project management teaching: learning by doing. Digital teaching opens up opportunities here: it enables collaboration over larger distances, for example, and supports the transfer of ideas and experience from online teaching into the business world. The specific experiences are presented from the perspective of OTH Regensburg as well as from that of the client, the Vienna-based consultancy fifty1. Students developed ideas for digital consulting products, which are being further developed into marketable offerings by the client.
Whether at school, at university, or in self-study: the possibilities and techniques for learning Java have evolved decisively in recent years. This article gives an insight into innovative ways of learning Java as well as an outlook showing that not only programming languages evolve, but also the way we learn and teach them.
Bierdeckelsalto
(2015)
A popular game consists of flicking a beer mat lying on the edge of a table upwards from below with outstretched fingers and then catching it between finger and thumb after one or more somersaults. In physical terms, an impulsive force is exerted on the beer mat. Applying the linear and angular momentum theorems leads to simple estimates of the mechanics of the beer mat somersault. The experiment can be reproduced with physics simulation programs, and high-speed videos complement theory and simulation.
We discuss the spectral structure and decomposition of multi-photon states. Ordinarily 'multi-photon states' and 'Fock states' are regarded as synonymous. However, when the spectral degrees of freedom are included this is not the case, and the class of 'multi-photon' states is much broader than the class of 'Fock' states. We discuss the criteria for a state to be considered a Fock state. We then address the decomposition of general multi-photon states into bases of orthogonal eigenmodes, building on existing multi-mode theory, and introduce an occupation number representation that provides an elegant description of such states. This representation allows us to work in bases imposed by experimental constraints, simplifying calculations in many situations. Finally we apply this technique to several example situations, which are highly relevant for state of the art experiments. These include Hong–Ou–Mandel interference, spectral filtering, finite bandwidth photo-detection, homodyne detection and the conditional preparation of Schrödinger kitten and Fock states. Our techniques allow for very simple descriptions of each of these examples.
Social network analysis is extremely well supported by the R community and is routinely used for studying the relationships between people engaged in collaborative activities. While there has been rapid development of new approaches and metrics in this field, the challenging question of validity (how well insights derived from social networks agree with reality) is often difficult to address. We propose the use of several R packages to generate interactive surveys that are specifically well suited for validating social network analyses. Using our web-based survey application, we were able to validate the results of applying community-detection algorithms to infer the organizational structure of software developers contributing to open-source projects.
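The abstract mentions validating community-detection results against developer self-reports; purely as an illustration of the detection side (the survey tooling described above is R-based and not shown here), the following Python sketch applies modularity-based community detection to a small collaboration graph. The graph construction and developer names are illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms import community

# toy collaboration graph: an edge means two developers worked on the same artifact
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # one sub-team
    ("dave", "erin"), ("erin", "frank"), ("dave", "frank"),   # another sub-team
    ("carol", "dave"),                                        # weak bridge
])

# modularity-based community detection (one of several possible algorithms)
communities = community.greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```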
We present a ready-to-compute trace formula for Hecke operators on vector-valued modular forms of integral weight for SL₂(ℤ) transforming under the Weil representation. As a corollary, we obtain a ready-to-compute dimension formula for the corresponding space of vector-valued cusp forms, which is more general than the dimension formulae previously published in the vector-valued setting.
Parametric down-conversion (PDC) is a technique of ubiquitous experimental significance in the production of nonclassical, photon-number-correlated twin beams. Standard theory of PDC as a two-mode squeezing process predicts and homodyne measurements observe a thermal photon number distribution per beam. Recent experiments have obtained conflicting distributions. In this article, we explain the observation by an a priori theoretical model solely based on directly accessible physical quantities. We compare our predictions with experimental data and find excellent agreement.
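For reference, the per-beam thermal photon-number distribution that the standard two-mode squeezing description predicts (and that the article contrasts with the observed distributions) is, for mean photon number $\bar{n}$,
$$P(n) = \frac{\bar{n}^{\,n}}{(1+\bar{n})^{\,n+1}}, \qquad n = 0, 1, 2, \dots$$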
Measurement is the only part of a general quantum system that has yet to be characterised experimentally in a complete manner. Detector tomography provides a procedure for doing just this; an arbitrary measurement device can be fully characterised, and thus calibrated, in a systematic way without access to its components or its design. The result is a reconstructed POVM containing the measurement operators associated with each measurement outcome. We consider two detectors, a single-photon detector and a photon-number counter, and propose an easily realised experimental apparatus to perform detector tomography on them. We also present a method of visualising the resulting measurement operators.
Software evolution is a fundamental process that transcends the realm of technical artifacts and permeates the entire organizational structure of a software project. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. By applying a network-analytic approach, we found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which core developers are hierarchically arranged and peripheral developers are not. Our results suggest that the organizational structure of large projects is constrained to evolve towards a state that balances the costs and benefits of developer coordination, and the mechanisms used to achieve this state depend on the project’s scale.
This article revisits an analysis of (in)accuracies of time series averaging under dynamic time warping (dtw) conducted by Niennattrakul and Ratanamahatana [16]. They proposed a correctness-criterion for dtw-averages and postulated that dtw-averages can drift out of the cluster of time series to be averaged. They claimed that dtw-averages are inaccurate if they violate the correctness-criterion or suffer from the drift-out phenomenon. Furthermore, they conjectured that such inaccuracies are caused by the lack of a triangle inequality. In this article, we show that a rectified version of the correctness-criterion is unsatisfiable and that the concept of drift-out is geometrically and operationally inconclusive. Satisfying the triangle inequality is insufficient to achieve correctness and unnecessary to overcome the drift-out phenomenon. We place the concept of drift-out on a principled basis and show that Fréchet means never drift out. The adjusted drift-out is a way to test to what extent an approximated dtw-average is coherent. Empirical results show that approximations obtained by state-of-the-art averaging methods are incoherent in over a third of all cases.
The literature postulates that the dynamic time warping (dtw) distance can cope with temporal variations but stores and processes time series in a form as if the dtw-distance cannot cope with such variations. To address this inconsistency, we first show that the dtw-distance is not warping-invariant—despite its name and contrary to its characterization in some publications. The lack of warping-invariance contributes to the inconsistency mentioned above and to a strange behavior. To eliminate these peculiarities, we convert the dtw-distance to a warping-invariant semi-metric, called time-warp-invariant (twi) distance. Empirical results suggest that the error rates of the twi and dtw nearest-neighbor classifier are practically equivalent in a Bayesian sense. However, the twi-distance requires less storage and computation time than the dtw-distance for a broad range of problems. These results challenge the current practice of applying the dtw-distance in nearest-neighbor classification and suggest the proposed twi-distance as a more efficient and consistent option.
Within many real-world networks, the links between pairs of nodes change over time. Thus, there has been a recent boom in studying temporal graphs. Recognizing patterns in temporal graphs requires a proximity measure to compare different temporal graphs. To this end, we propose to study dynamic time warping on temporal graphs. We define the dynamic temporal graph warping (dtgw) distance to determine the dissimilarity of two temporal graphs. Our novel measure is flexible and can be applied in various application domains. We show that computing the dtgw-distance is a challenging (in general) NP-hard optimization problem and identify some polynomial-time solvable special cases. Moreover, we develop a quadratic programming formulation and an efficient heuristic. In experiments on real-world data, we show that the heuristic performs very well and that our dtgw-distance performs favorably in de-anonymizing networks compared to other approaches.
The scientific reproducibility crisis has intensified the focus on digital research tools. Even though the additional effort required for reproducibility and accessibility is increasingly acknowledged, deficits remain in practice when it comes to making the underlying data and research tools available.
Many industrially relevant problems can in principle be solved deterministically by computers, but are intractable in practice, as the seminal P/NP dichotomy of complexity theory and Cobham’s thesis testify. For the many NP-complete problems, industry needs to resort to heuristics or approximation algorithms. For approximation algorithms, there is a more refined classification into complexity classes that goes beyond the simple P/NP dichotomy. As is well known, approximation classes form a hierarchy, that is, FPTAS ⊆ PTAS ⊆ APX ⊆ NPO. This classification gives a more realistic notion of complexity but, unless unexpected breakthroughs happen for fundamental problems like P = NP or related questions, there is no known efficient algorithm that can solve such problems exactly on a realistic computer. Therefore, new ways of computation are sought. Recently, considerable hope has been placed on the possible computational power of quantum computers, and of quantum annealing (QA) in particular. However, the precise benefits of such a drastic shift in hardware are still largely uncharted territory. Firstly, the exact relations between classical and quantum complexity classes pose many open questions, and secondly, technical details of formulating and implementing quantum algorithms play a crucial role in real-world applications. Guided by the hierarchy of classical optimisation complexity classes, we discuss how to map problems of each class to a quantum annealer: the Minimum Multiprocessor Scheduling (MMS) problem, the Minimum Vertex Cover (MVC) problem, and the Maximum Independent Set (MIS) problem. We experimentally investigate if and how the degree of approximability influences implementation and run-time performance. Our experiments indicate a discrepancy between classical approximation complexity and QA behaviour: MIS and MVC, members of APX and PTAS respectively, exhibit better solution quality on a QA than MMS, which is in FPTAS, even despite the use of preprocessing for the latter. This leads to the hypothesis that traditional classifications do not immediately extend to the quantum annealing domain, at least when the properties of real-world devices are taken into account. A structural reason why the FPTAS problem does not show good solution quality might be the use of inequalities in its problem description. Formulating such inequalities on quantum hardware (mostly done by formulating a Quadratic Unconstrained Binary Optimisation (QUBO) problem in the form of a matrix) requires a lot of hardware space, which makes finding an optimal solution more difficult. Reducing the density of a QUBO is possible by appropriately pruning QUBO matrices. For the problems considered in our evaluation, we find that the achievable solution quality on a real-world machine is unexpectedly robust against pruning, often up to ratios as high as 50% or more. Since quantum annealers are probabilistic machines by design, the loss in solution quality is only of subordinate relevance, especially considering that the pruning of QUBO matrices allows for solving larger problem instances on hardware of a given capacity. We quantitatively discuss the interplay between these factors.
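To make the QUBO formulation concrete, here is a minimal sketch (not the authors' code) of building a QUBO matrix for the Maximum Independent Set problem and pruning small off-diagonal couplings, in the spirit of the density reduction described above; the penalty weight and the pruning rule are illustrative assumptions.

```python
import numpy as np

def mis_qubo(n, edges, penalty=2.0):
    """QUBO for Maximum Independent Set on n vertices.
    Minimising x^T Q x rewards selecting vertices (negative diagonal)
    and penalises selecting both endpoints of an edge."""
    Q = np.zeros((n, n))
    for v in range(n):
        Q[v, v] = -1.0                      # reward: include vertex v
    for u, v in edges:
        Q[min(u, v), max(u, v)] += penalty  # penalty: u and v both selected
    return Q

def prune(Q, ratio=0.5):
    """Zero out the smallest off-diagonal couplings (roughly `ratio` of them)."""
    Qp = Q.copy()
    iu = np.triu_indices_from(Qp, k=1)
    nonzero = np.abs(Qp[iu])[np.abs(Qp[iu]) > 0]
    if nonzero.size:
        cutoff = np.quantile(nonzero, ratio)
        mask = np.abs(Qp) <= cutoff
        mask[np.diag_indices_from(Qp)] = False   # never touch the linear terms
        Qp[mask] = 0.0
    return Qp

# usage: Q = mis_qubo(4, [(0, 1), (1, 2), (2, 3)]); Q_sparse = prune(Q, ratio=0.5)
```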
Software engineering in open source projects faces similar challenges as traditional software development (coordination of and cooperation between contributors, change and release management, quality assurance, ...), but often uses different means of solving them. This leads to some salient distinctions between both worlds, especially with respect to communication and how technical issues are addressed. The variations within open source software (OSS) communities are considerable, and many different approaches are currently in use, ranging from informal conventions to highly systematic, formally specified and rigidly applied processes. We discuss the archetypal best practices in the field, illustrate them by presenting example projects, and provide a comparison to traditional approaches.
Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on-the-fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a-priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.
In this paper, we introduce a novel technique for pre-filtering multi-layer shadow maps. The occluders in the scene are stored as variable-length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre-filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre-filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real-time even for complex scenes and difficult setups. Our results demonstrate that our pre-filtered maps are general and particularly scalable.
Tightening quality requirements for industrial products involving manual assembly have led to the development of assisting workbenches with integrated functions to support workers performing these manual tasks. This contribution discusses a new approach to learning the transitions of a finite state automaton representing the sequence of work tasks based on the video stream of a 3D depth camera. Preprocessed video data is fed into a three-stage classification scheme based on support vector machines. The results of the classification are then related to the state automaton to trigger state transitions indicating the completion of a specific work task and the start of the next one. The proposed approach has been evaluated on an industrial assembly process of moderate complexity and shows very robust results with respect to disturbances caused by inaccurate object classification.
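The abstract does not detail the three classification stages, so the following scikit-learn sketch only illustrates the general pattern of chaining SVM classifiers on preprocessed depth-camera features, with the last stage gating a transition of the work-step automaton; the feature vectors, labels, and stage semantics are illustrative assumptions, not the authors' system.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: preprocessed per-frame feature vectors, e.g. depth statistics of regions of interest
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y_stage1 = rng.integers(0, 2, size=200)   # stage 1: is a hand present? (illustrative labels)
y_stage2 = rng.integers(0, 4, size=200)   # stage 2: which object/part is being handled?

stage1 = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y_stage1)
stage2 = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y_stage2)

frame = X[:1]
if stage1.predict(frame)[0] == 1:          # only run stage 2 when stage 1 fires
    detected_object = stage2.predict(frame)[0]
    # the detected object would then be matched against the expected work step
    # of the finite state automaton to decide whether to trigger a transition
```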
In recent years, substantial progress has been made in the field of reverberant speech signal processing, including both single- and multichannel dereverberation techniques and automatic speech recognition (ASR) techniques that are robust to reverberation. In this paper, we describe the REVERB challenge, an evaluation campaign designed to evaluate such speech enhancement (SE) and ASR techniques, to reveal the state-of-the-art techniques and obtain new insights regarding potential future research directions. Even though most existing benchmark tasks and challenges for distant speech processing focus on the noise robustness issue, and sometimes only on a single-channel scenario, a particular novelty of the REVERB challenge is that it is carefully designed to test robustness against reverberation, based on both real single-channel and multichannel recordings. This challenge attracted 27 papers, which represent 25 systems specifically designed for SE purposes and 49 systems specifically designed for ASR purposes. This paper describes the problems dealt with in the challenge, provides an overview of the submitted systems, and scrutinizes them to clarify which current processing strategies appear effective in reverberant speech processing.
Rendering in real time for virtual reality headsets with high user immersion is challenging due to strict framerate constraints as well as due to a low tolerance for artefacts. Eye tracking-based foveated rendering presents an opportunity to strongly increase performance without loss of perceived visual quality. To this end, we propose a novel foveated rendering method for virtual reality headsets with integrated eye tracking hardware. Our method comprises recycling pixels in the periphery by spatio-temporally reprojecting them from previous frames. Artefacts and disocclusions caused by this reprojection are detected and re-evaluated according to a confidence value that is determined by a newly introduced formalized perception-based metric, referred to as confidence function. The foveal region, as well as areas with low confidence values, are redrawn efficiently, as the confidence value allows for the delicate regulation of hierarchical geometry and pixel culling. Hence, the average primitive processing and shading costs are lowered dramatically. Evaluated against regular rendering as well as established foveated rendering methods, our approach shows increased performance in both cases. Furthermore, our method is not restricted to static scenes and provides an acceleration structure for post-processing passes.
Deep Reinforcement Learning (RL) has advanced considerably over the past decade. At the same time, state-of-the-art RL algorithms require a large computational budget in terms of training time to converge. Recent work has started to approach this problem through the lens of quantum computing, which promises theoretical speed-ups for several traditionally hard tasks. In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN). We show that VQ-DQN approaches are subject to instabilities that cause the learned policy to diverge, study the extent to which this affects the reproducibility of established results based on classical simulation, and perform systematic experiments to identify potential explanations for the observed instabilities. Additionally, and in contrast to most existing work on quantum reinforcement learning, we execute RL algorithms on an actual quantum processing unit (an IBM Quantum device) and investigate differences in behaviour between simulated and physical quantum systems that suffer from implementation deficiencies. Our experiments show that, contrary to claims in the literature, it cannot be conclusively decided whether known quantum approaches, even if simulated without physical imperfections, can provide an advantage over classical approaches. Finally, we provide a robust, universal, and well-tested implementation of VQ-DQN as a reproducible testbed for future experiments.
Quantum computing promises to overcome computational limitations with better and faster solutions for optimization, simulation, and machine learning problems. Europe and Germany are in the process of successfully establishing research and funding programs with the objective to advance the technology’s ecosystem and industrialization, thereby ensuring digital sovereignty, security, and competitiveness. Such an ecosystem comprises hardware/software solution providers, system integrators, and users from research institutions, start-ups, and industry. The vision of the Quantum Technology and Application Consortium (QUTAC) is to establish and advance the quantum computing ecosystem, supporting the ambitious goals of the German government and various research programs. QUTAC comprises ten members representing different industries, in particular automotive manufacturing, chemical and pharmaceutical production, insurance, and technology. In this paper, we survey the current state of quantum computing in these sectors as well as the aerospace industry and identify the contributions of QUTAC to the ecosystem. We propose an application-centric approach for the industrialization of the technology based on proven business impact. This paper identifies 24 different use cases. By formalizing high-value use cases into well-described reference problems and benchmarks, we will guide technological progress and eventually commercialization. Our results will be beneficial to all ecosystem participants, including suppliers, system integrators, software developers, users, policymakers, funding program managers, and investors.
We have a platform, but nobody builds on it – what influences Platform-as-a-Service post-adoption?
(2022)
When the higher-level management of a company has strategically decided to adopt Platform-as-a-Service (PaaS) as a Cloud Computing (CC) delivery model, decision-makers at lower hierarchy levels still need to decide whether they want to post-adopt PaaS for building or running an information system (IS), a decision that numerous companies are currently facing. This research analyzes the influential factors of this managerial post-adoption decision at the IS level. A survey of 168 business and Information Technology (IT) professionals investigated the influential factors of this PaaS post-adoption decision. The results show that decision-makers’ perceptions of risks inhibit post-adoption, and that vendor trust and trialability reduce these perceived risks. While competitive pressure increases perceived benefits, it does not significantly influence PaaS post-adoption. In contrast to findings on company-level adoption, security and privacy, cost savings, and top management support do not influence post-adoption. Subsamples constructed by the form of post-adoptive use (migration of IS, enhancement of IS, new IS development) exhibit better goodness-of-fit measures than the full sample. Future research should explore this interrelation between the form of post-adoptive use and the post-adoption influence factors.
Drones are now used in many and very different contexts. From the perspective of technology assessment (TA), it therefore seems sensible to examine the extent of current and future drone use and the implications that result from it, and to take stock of the situation. In addition, the likely paths of further technological development, relevant actors and their interests, as well as future application potentials and fields of use are analysed.
Rising quality requirements for partly manually manufactured products are leading to manual workplaces being equipped with assistance systems that support the employees working there. This contribution describes a new approach that uses machine learning methods to learn both the object recognition and the transitions of a finite state automaton representing the work process of such a system. To this end, after preprocessing, data from a depth camera is classified in three stages by support vector machines (SVM) and the result is linked to the state automaton. The concept is evaluated on an industrial assembly process of manageable complexity and shows good results with respect to robustness against object classification errors.
Background and objective
Parkinson’s disease (PD) is considered a degenerative disorder that affects the motor system and may cause tremors, micrography, and freezing of gait. Although PD is related to a lack of dopamine, the process that triggers its development is not yet fully understood.
Methods
In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual’s assessment. Additionally, we make available a dataset composed of images and signal-based data to foster the research related to computer-aided PD diagnosis.
Results
The proposed approach was compared against raw data and texture-based descriptors, showing suitable results, mainly in the context of early-stage detection, with accuracies close to 95%.
Conclusions
The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson’s disease identification and can outperform handcrafted features.
This article provides a mathematical analysis of singular (nonsmooth) artifacts added to reconstructions by filtered backprojection (FBP) type algorithms for X-ray computed tomography (CT) with arbitrary incomplete data. We prove that these singular artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of this boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. Object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend along lines. They generalize the streak artifacts observed in limited-angle tomography. Object-independent artifacts, on the other hand, are essentially independent of the object and take one of two forms: streaks on lines if the boundary of the data set is not smooth at a point, and curved artifacts if the boundary is smooth locally. We prove that these streak and curve artifacts are the only singular artifacts that can occur for FBP in the continuous case. In addition to the geometric description of artifacts, the article provides characterizations of their strength in the Sobolev scale in certain cases. The results of this article apply to the well-known incomplete data problems, including limited-angle and region-of-interest tomography, as well as to unconventional X-ray CT imaging setups that arise in new practical applications. Reconstructions from simulated and real data are analyzed to illustrate our theorems, including the reconstruction that motivated this work: a synchrotron data set in which artifacts appear on lines that have no relation to the object.
Introduction: Improving energy efficiency and reducing energy wastage is an important topic of our time. However, it is quite difficult to figure out how much of our total electricity bill can be mapped to which device, or at what time the device consumed the energy. We believe the energy efficiency of normal households can be improved if this kind of transparency were available. In this article, we present a system for energy measurement at mains sockets to gain a transparent view of the energy consumption of each device in a household. It consists of several smart energy measuring devices (SEMDs) that use a low-power radio protocol to dynamically build and connect to a radio network and transfer power usage data to a server. At the server, the data is stored and can be accessed via a web interface.
Results: Our primary goal was to build a back-end system for an energy metering platform with very low energy consumption. This platform can provide data for a variety of services that enable users (the consumers) to understand and improve their energy consumption behavior and increase the overall energy efficiency of their households.
Background and Objective: Even today, identifying an examination that can diagnose a patient with Parkinson's disease (PD) accurately enough is not an easy task. Although a number of techniques have been used in search of a more precise method, detecting this illness and measuring its level of severity early enough to postpone its side effects are not straightforward. In this work, after reviewing a considerable number of works, we conclude that only a few techniques address the problem of PD recognition by means of micrography using computer vision techniques. Therefore, we consider the problem of aiding automatic PD diagnosis by means of spirals and meanders filled out in forms, which are then compared with the template for feature extraction.
Methods: In our work, both the template and the drawings are identified and separated automatically using image processing techniques, thus needing no user intervention. Since we have no registered images, the idea is to obtain a suitable representation of both template and drawings using the very same approach for all images in a fast and accurate manner.
Results: The results have shown that we can obtain very reasonable recognition rates (around 67%), with the most accurate class being the one represented by the patients, who outnumbered the control individuals in the proposed dataset.
Conclusions: The proposed approach appears suitable for aiding automatic PD diagnosis by means of computer vision and machine learning techniques. Meander images play an important role, leading to higher accuracies than spiral images. We also observed that the main difficulty in detecting PD lies with patients in the early stages, who can draw near-perfect objects that are very similar to the ones made by control patients.
We present a novel derivative-based parameter identification method to improve the precision at the tool center point of an industrial manipulator. The tool center point is directly considered in the optimization as part of the problem formulation as a key performance indicator. Additionally, our proposed method takes collision avoidance as special nonlinear constraints into account and is therefore suitable for industrial use. The performed numerical experiments show that the optimum experimental designs considering key performance indicators during optimization achieve a significant improvement in comparison to other methods. An improvement in terms of precision at the tool center point of 40% to 44% was achieved in experiments with three KUKA robots and 90 notional manipulator models compared to the heuristic experimental designs chosen by an experimenter as well as 10% to 19% compared to an existing state-of-the-art method.
This project addresses questions concerning the application of artificial intelligence (AI) in the field of technical cleanliness. Literature research following the procedure of Fettke (2006) was used to identify AI applications that are already employed in connection with technical cleanliness; twelve literature sources were found.
This analysis showed that 91 percent of them use computer vision (CV) for particle detection. For this reason, a CV model for particle detection was subsequently implemented; its confusion matrix showed an accuracy of 82 percent, indicating that particle classification is feasible. Finally, a further literature search on text mining applications was carried out, since the task also required the area of quality analysis in monitoring to be included. This search yielded no positive result, as it specifically looked for ready-made applications that can categorise analysis texts in the field of technical cleanliness.
Lecture slides created with PowerPoint or LaTeX Beamer are mostly static and mainly serve to present teaching content. As an alternative, three extensions for the HTML- and JavaScript-based presentation framework reveal.js are presented that aim to bring more interaction into database teaching: (1) live execution of SQL queries with the result displayed directly on the slide, including the option to modify the query during the presentation, (2) a JSON-based description of ER diagrams to be rendered graphically on the slides, and (3) embedded smartphone polls for asking quiz questions in between, without a context switch.
Metadata management constitutes a key prerequisite for enterprises as they engage in data analytics and governance. Today, however, the context of data is often only manually documented by subject matter experts, and lacks completeness and reliability due to the complex nature of data pipelines. Thus, collecting data lineage—describing the origin, structure, and dependencies of data—in an automated fashion increases quality of provided metadata and reduces manual effort, making it critical for the development and operation of data pipelines. In our practice report, we propose an end-to-end solution that digests lineage via (Py‑)Spark execution plans. We build upon the open-source component Spline, allowing us to reliably consume lineage metadata and identify interdependencies. We map the digested data into an expandable data model, enabling us to extract graph structures for both coarse- and fine-grained data lineage. Lastly, our solution visualizes the extracted data lineage via a modern web app, and integrates with BMW Group’s soon-to-be open-sourced Cloud Data Hub.
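As a rough illustration of the coarse-grained lineage graph described here (not of the Spline integration or the internal data model), the following Python sketch represents datasets and the jobs that read and write them as a directed graph and extracts all upstream dependencies of a dataset; the node names and graph layout are illustrative assumptions.

```python
import networkx as nx

# directed lineage graph: an edge (a, b) means b depends on a
lineage = nx.DiGraph()
lineage.add_edge("raw/orders", "job:clean_orders")
lineage.add_edge("job:clean_orders", "curated/orders")
lineage.add_edge("curated/orders", "job:daily_report")
lineage.add_edge("raw/customers", "job:daily_report")
lineage.add_edge("job:daily_report", "reports/daily_kpis")

def upstream(graph, node):
    """All datasets and jobs that the given node (transitively) depends on."""
    return nx.ancestors(graph, node)

print(sorted(upstream(lineage, "reports/daily_kpis")))
# -> ['curated/orders', 'job:clean_orders', 'job:daily_report', 'raw/customers', 'raw/orders']
```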
A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer
(2019)
The Capacitated Vehicle Routing Problem (CVRP) is an NP-optimization problem (NPO) that has been of great interest for decades for both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The problem is the combinatorial explosion of possible solutions, which increases superexponentially with the number of customers. Classical solutions provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation time compared to classic computers. The difficulty in solving the CVRP on the quantum annealer lies in the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be translated into smaller subproblems, thus enabling a sequential solution of the partitioned problem. This work presents a quantum-classic hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods with regard to computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
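The abstract mentions decomposing the CVRP into smaller subproblems; purely as an illustration of such a "cluster first, route second" decomposition (not of the authors' quantum-classical pipeline, and ignoring capacities), the following sketch partitions customers into one cluster per vehicle and builds a nearest-neighbour tour within each cluster; all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_first_route_second(depot, customers, n_vehicles, seed=0):
    """Toy decomposition of a CVRP instance: customers are partitioned into one
    cluster per vehicle, then each cluster is turned into a tour with a
    nearest-neighbour heuristic starting at the depot (capacities ignored)."""
    labels = KMeans(n_clusters=n_vehicles, n_init=10, random_state=seed).fit_predict(customers)
    tours = []
    for v in range(n_vehicles):
        remaining = [tuple(c) for c in customers[labels == v]]
        pos, tour = depot, []
        while remaining:
            nxt = min(remaining, key=lambda c: np.hypot(c[0] - pos[0], c[1] - pos[1]))
            tour.append(nxt)
            remaining.remove(nxt)
            pos = nxt
        tours.append(tour)
    return tours

# usage: tours = cluster_first_route_second((0.0, 0.0), np.random.rand(20, 2), n_vehicles=3)
```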
For its third installment, the Data Science Challenge of the 19th symposium “Database Systems for Business, Technology and Web” (BTW) of the Gesellschaft für Informatik (GI) tackled the problem of predictive energy management in large production facilities. For the first time, this year’s challenge was organized as a cooperation between Technische Universität Dresden, GlobalFoundries, and ScaDS.AI Dresden/Leipzig. The Challenge’s participants were given real-world production and energy data from the semiconductor manufacturer GlobalFoundries and had to solve the problem of predicting the energy consumption for production equipment. The usage of real-world data gave the participants a hands-on experience of challenges in Big Data integration and analysis. After a leaderboard-based preselection round, the accepted participants presented their approach to an expert jury and audience in a hybrid format. In this article, we give an overview of the main points of the Data Science Challenge, like organization and problem description. Additionally, the winning team presents its solution.
Over the past few years, ontology merging and ontology semantic alignment have gained significant interest as research topics in the automotive application domain for finding solutions to semantic data heterogeneity. To accomplish complex and novel vehicle service requirements such as autonomous driving, V2X (vehicle-to-everything) communication, etc., automotive applications involve collaborations of platform-specific data from heterogeneous enterprise component frameworks, and consequently there has been an increase in data interoperability issues. At the application component level, data interoperability relies on the semantic alignment or mapping between the various component framework interface data models represented as XML schemas (XSD). Although XML schemas are the preferred standard for exchanging interface descriptions between most automotive application domain components, data interoperability between the semantically equivalent but structurally different data constructs of multiple heterogeneous XSDs remains a challenge in the absence of an ontology-based approach. To confront this crucial requirement for data interoperability, and to increase the reuse of existing components through their interfaces, we propose an approach to semantically map the various component framework interface data models when expressed as ontology schemas, based on the exploration of semantic synergies. The transformation between XSD and RDF (Resource Description Framework) schema representations and the use of queries over the ontology schemas for semantic mapping are demonstrated, including a real-world case study.
Internet of Things (IoT) devices are critical to operate and maintain because of their number and high connectivity. Many security issues concern IoT devices and the networks they are integrated into. To help gain an overview of an IoT network, its devices, and their security, we propose a scoring system that gives a good impression of IT security. This system generates individual scores for each device, using features such as encryption and update behavior. Furthermore, a summarized score for the whole network is calculated to show the status of the network security to the administrator in an easy way. To enable the scoring system, a precise list of the existing devices and their operating status is necessary. To achieve this, we present an open standard for IoT Device IdentificAtion and RecoGnition (IoTAG), which requires that devices report, for example, their name, a unique ID, the firmware version, and the supported encryption. The proposed standard is described in detail and an implementation guideline is given in this paper, along with information on how to realize serialization, integrity, and communication with IoTAG.
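The standard itself defines the exact fields and serialization; the following Python sketch is only a hypothetical illustration of a device self-description record with the attributes named above and of a per-device score aggregated into a network score. Field names, weights, and the scoring rule are assumptions, not part of IoTAG.

```python
from dataclasses import dataclass

@dataclass
class DeviceReport:
    """Hypothetical self-description of a device, loosely following the
    attributes named in the abstract (name, unique ID, firmware, encryption)."""
    name: str
    device_id: str
    firmware_version: str
    supported_encryption: bool
    auto_updates: bool

def device_score(d: DeviceReport) -> float:
    """Toy score in [0, 1]: rewards encryption support and automatic updates."""
    return 0.6 * float(d.supported_encryption) + 0.4 * float(d.auto_updates)

def network_score(devices) -> float:
    """Summarized network score: here simply the mean of the device scores."""
    return sum(device_score(d) for d in devices) / len(devices) if devices else 0.0

devices = [
    DeviceReport("thermostat", "dev-001", "1.4.2", supported_encryption=True, auto_updates=True),
    DeviceReport("ip-camera", "dev-002", "0.9.0", supported_encryption=False, auto_updates=False),
]
print(device_score(devices[0]), network_score(devices))  # 1.0 0.5
```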
The adoption of artificial intelligence (AI) in medicine seems inevitable given its potential benefits. Because AI-based systems act with a degree of agency, they raise partly novel normative demands and challenges. For the highly sensitive application domain of medicine, it therefore appears necessary to frame the use of AI with ethical guidelines. This raises the question of which base of experience an ethical foundation for the use of AI-based technology could rest on. This does not mean inferring an "ought" from an "is", but rather taking into account normative debates that have already been conducted. One way to approach the normative landscape of AI is to engage with the history of AI development and the associated debates about ethical and social aspects. With this explorative approach, relevant problem areas can be identified, preliminary recommendations for the design and use of AI systems in practice can be formulated, and proposals for their embedding in existing organizational structures can be generated.