Digitalisierung
Document Type
- conference proceeding (article) (344)
- Article (181)
- conference proceeding (presentation, abstract) (45)
- Part of a Book (36)
- Book (13)
- Preprint (11)
- Working Paper (5)
- conference proceeding (volume) (4)
- Report (4)
- conference talk (3)
Is part of the Bibliography
- no (657)
Keywords
- Offshoring (13)
- Betriebliches Informationssystem (12)
- Informationstechnik (11)
- Datenschutz (10)
- Digitalisierung (10)
- Datensicherung (8)
- Elektronische Gesundheitskarte (8)
- Information systems (8)
- Internet of Things (8)
- Literaturbericht (8)
Institute
- Fakultät Informatik und Mathematik (370)
- Fakultät Elektro- und Informationstechnik (221)
- Laboratory for Safe and Secure Systems (LAS3) (206)
- Labor für Digitalisierung (LFD) (84)
- Regensburg Strategic IT Management (ReSITM) (54)
- Labor eHealth (eH) (36)
- Fakultät Maschinenbau (32)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (25)
- Labor Parallele und Verteilte Systeme (23)
- Fakultät Angewandte Sozial- und Gesundheitswissenschaften (18)
Review status
- peer-reviewed (263)
- reviewed (7)
Digitale Transformation in Echtzeit: Die Ziele von morgen basierend auf dem Datenmodell von gestern
(2022)
Digital transformation challenges companies of every stripe. Ironically, it is often precisely the IT systems used so far, with their rigid structures, that slow companies down in their digital transformation. Even though software vendors have long since responded and offer new, more flexible versions of their products, a major software migration is still a challenge for companies and a step that needs to be well considered and planned. This paper therefore presents an approach that uses in-memory technology and virtualization to generate at least the most important results of the transformation in real time on the existing data models. This buys enough time to carry out the actual transformation of the IT landscape in a planned manner and with the necessary care.
The prospect of achieving computational speedups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for the known quantum algorithms. We present the first quantum implementation of join ordering, which is one of the most investigated and fundamental query optimisation problems, based on a reformulation to quadratic unconstrained binary optimisation (QUBO) problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speed-ups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that limits of early-stage technology are quickly reached.
Current QPUs are classified as noisy intermediate-scale quantum (NISQ) computers, and are restricted by a variety of limitations that reduce their capabilities as compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade between achievable solution quality and possible problem size.
In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs, and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs.
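To make the QUBO form mentioned above concrete, the following minimal Python sketch evaluates a toy quadratic unconstrained binary optimisation problem by brute force. The matrix Q and its coefficients are purely illustrative and do not reproduce the authors' join-ordering encoding.

```python
# Minimal sketch of the QUBO form: minimise x^T Q x over binary vectors x.
import itertools
import numpy as np

def solve_qubo_bruteforce(Q: np.ndarray):
    """Enumerate all binary assignments and return the lowest-energy one."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x                  # QUBO energy x^T Q x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy 3-variable QUBO (illustrative coefficients only).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])
print(solve_qubo_bruteforce(Q))
```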
We evaluate the applicability of quantum computing on two fundamental query optimization problems, join order optimization and multi query optimization (MQO). We analyze the problem dimensions that can be solved on current gate-based quantum systems and quantum annealers, the two currently commercially available architectures.
First, we evaluate the use of gate-based systems on MQO, previously solved with quantum annealing. We show that, contrary to classical computing, a different architecture requires involved adaptations. We moreover propose a multi-step reformulation for join ordering problems to make them solvable on current quantum systems. Finally, we systematically evaluate our contributions for gate-based quantum systems and quantum annealers. Doing so, we identify the scope of current limitations, as well as the future potential of quantum computing technologies for database systems.
This paper addresses the problem of properly placing a given task in the manipulator workspace by a heuristic and numeric approach. Thus, the task is placed relative to the manipulator for each element of the discretized workspace and the required joint torques are determined. The results are evaluated by a torque-based optimization criterion. The modularity of this approach ensures general applicability to various systems and tasks, while the high computational effort is treated by GPU parallelization. The method is presented for a given 6DOF manipulator and a highly dynamic trajectory. The resulting interactive map of the manipulator workspace gives an overview of the task-dependent dynamic performance; a detailed evaluation of certain solutions shows the dexterity of the proposed approach.
The design of the NoSQL schema has a direct impact on the scalability of web applications. Especially for developers with little experience in NoSQL stores, the risks inherent in poor schema design can be incalculable. Worse yet, the issues will only manifest once the application has been deployed, and the growing user base causes highly concurrent writes. In this paper, we present a model checking approach to reveal scalability bottlenecks in NoSQL schemas. Our approach draws on formal methods from tree automata theory to perform a conservative static analysis on both the schema and the expected write-behavior of users. We demonstrate the impact of schema-inherent bottlenecks for a popular NoSQL store, and show how concurrent writes can ultimately lead to a considerable share of failed transactions.
The modular addition is a popular building block when designing lightweight ciphers. While algorithms mainly based on the addition can reach very high performance, masking their implementations results in a huge penalty. Since efficient protection against side-channel attacks is a requirement in lots of use cases, we focus on optimizing the Boolean masking of the modular addition. Contrary to recent related work, we target evolving a masked full adder instead of parts of a parallel prefix adder. We study how techniques typically found in neural network evolution and genetic algorithms can be adapted in order to help in evolving an efficiently masked adder. We customize a well-known neuroevolution algorithm, develop an optimized masked adder with our new approach and implement the ChaCha20 cipher on an ARM Cortex-M3 controller. We compare the performance of the protected neuroevolved implementation to solutions found by traditional search methods. Moreover, the leakage of our new solution is validated by a t-test conducted with a leakage simulator. We present under which circumstances our masked implementation outperforms related work and prove the feasibility of successfully using neuroevolution when searching for complex Boolean networks.
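As background to the masking problem discussed above, the sketch below illustrates first-order Boolean masking in Python: XOR-linear operations can be applied share-wise, whereas modular addition mixes the shares through its carries, which is why dedicated masked adders are needed. This is a generic illustration of the concept, not the authors' evolved adder or their ChaCha20 implementation.

```python
import secrets

MASK32 = 0xFFFFFFFF

def mask(v: int):
    m = secrets.randbits(32)
    return m, (v ^ m) & MASK32          # two Boolean shares of v

def unmask(s0: int, s1: int) -> int:
    return (s0 ^ s1) & MASK32

a0, a1 = mask(0x12345678)
b0, b1 = mask(0x9ABCDEF0)

# XOR is computed share-wise without ever recombining the secrets:
x0, x1 = a0 ^ b0, a1 ^ b1
assert unmask(x0, x1) == (0x12345678 ^ 0x9ABCDEF0)

# Modular addition of the shares, however, does not yield shares of the sum,
# because the carries mix both shares; the two values below generally differ:
s0, s1 = (a0 + b0) & MASK32, (a1 + b1) & MASK32
print(hex(unmask(s0, s1)), "vs", hex((0x12345678 + 0x9ABCDEF0) & MASK32))
```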
In this work, we present our benchmarking results for the ten finalist ciphers of the Lightweight Cryptography (LWC) project initiated by the National Institute of Standards and Technology (NIST). We evaluate the speed and code size of various software implementations on five different platforms featuring four different architectures. Moreover, we benchmark the dynamic memory utilization of the remaining NIST LWC algorithms on one 32-bit ARM controller. We describe our test cases and methodology and provide some information regarding the design and properties of the finalists before showing and discussing our results. Altogether, we evaluated almost 300 implementations of the 3rd round candidates and pick the most appropriate and best (primary) implementation of each cipher for our comparisons. We include a variant of AES-GCM in our benchmarking in order to be able to compare the state of the art to the novel LWC ciphers. Our research gives an overview of the performance of the latest software implementations of the NIST LWC finalists and shows under which circumstances which candidate performs best in our individual test cases. Additionally, we make all benchmarking results, the code for our test framework and every tested implementation available to the public to ensure a transparent testing process.
EMDLAB: A toolbox for analysis of single-trial EEG dynamics using empirical mode decomposition
(2015)
Background:
Empirical mode decomposition (EMD) is an empirical data decomposition technique. Recently, there has been growing interest in applying EMD in the biomedical field.
New method:
EMDLAB is an extensible plug-in for the EEGLAB toolbox, which is an open software environment for electrophysiological data analysis.
Results:
EMDLAB can be used to perform, easily and effectively, four common types of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD) and multivariate EMD (MEMD). In addition, EMDLAB is a user-friendly toolbox that is closely integrated into the EEGLAB toolbox.
Comparison with existing methods:
EMDLAB gains an advantage over other open-source toolboxes by exploiting the advantageous visualization capabilities of EEGLAB for extracted intrinsic mode functions (IMFs) and Event-Related Modes (ERMs) of the signal.
Conclusions:
EMDLAB is a reliable, efficient, and automated solution for extracting and visualizing the IMFs and ERMs obtained by EMD algorithms in EEG studies.
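For readers unfamiliar with EMD, the following rough Python sketch shows a single sifting step of plain EMD (extrema detection, envelope interpolation, mean subtraction). It is a conceptual illustration only; EMDLAB itself is an EEGLAB plug-in and additionally provides EEMD, wSEMD and MEMD as well as the boundary handling that this sketch omits.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting iteration: subtract the mean of the upper/lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None                          # residual is (nearly) monotone, stop
    upper = CubicSpline(t[maxima], x[maxima])(t)   # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)   # lower envelope
    return x - (upper + lower) / 2.0               # candidate intrinsic mode function

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf_candidate = sift_once(t, signal)
```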
Virtualization has come a long way since its beginnings in the 1960s. Nowadays, Virtual Machine Monitor (VMM) - or hypervisor-based virtualization of servers is the de facto standard in data centers and a building block of the cloud hype. In recent years, virtualization has also been adopted to embedded devices such as avionics systems and mobile phones. The first mass deployment of embedded virtualization can probably be seen in video game consoles, though. However, it is still not employed by automotive electronics. This is despite the fact that with the upcoming domain controller architecture, virtualization can yield benefits beyond a mere consolidation of a multitude of Electronic Control Units (ECUs) into a few Domain Controller Units (DCUs). This paper presents merits of automotive virtualization, especially as a foundation for DCUs.
Today, ubiquitous mobile devices have not only arrived but entered the safety-critical domain. There, systems are about to be controlled where human health or even human life is put at risk. For example, in automation systems first ideas surface to control parts of the system via a COTS smartphone. Another example is the idea to control the autonomous parking function of a car via a COTS smartphone, too. As beneficial and convenient as these ideas are at first thought, on second thought the dangers of these approaches become obvious. Especially in case of failures the system’s safety has to be maintained. The open question is how to achieve this mandatory requirement with COTS components, e.g. smartphones, that are not developed following the development process necessary for safety-critical systems. This paper presents a concept to reliably detect human interaction while activating safety-critical functions via COTS mobile devices. Thus a means is provided to detect erroneous activation requests for the safety-critical function.
We present two methods that combine image reconstruction and edge detection in computed tomography (CT) scans. Our first method is an extension of the prominent filtered backprojection algorithm. In our second method we employ ℓ1-regularization for stable calculation of the gradient. As opposed to the first method, we show that this approach is able to compensate for undersampled CT data.
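As a point of reference for the first method, the snippet below runs plain filtered backprojection with scikit-image on the Shepp–Logan phantom. It is a baseline sketch only; the paper's combined reconstruction/edge-detection operators and the ℓ1-regularized gradient computation are not reproduced here.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # smaller test image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # full angular range
sinogram = radon(image, theta=theta)                    # simulate CT data
reconstruction = iradon(sinogram, theta=theta)          # FBP with ramp filter
print("RMS error:", np.sqrt(np.mean((reconstruction - image) ** 2)))
```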
We present a paradigm for characterization of artifacts in limited data tomography problems. In particular, we use this paradigm to characterize artifacts that are generated in reconstructions from limited angle data with generalized Radon transforms and general filtered backprojection type operators. In order to find when visible singularities are imaged, we calculate the symbol of our reconstruction operator as a pseudodifferential operator.
The performance of cognitive models often depends on the settings of specific model parameters, such as the rate of memory decay or the speed of motor responses. The systematic exploration of a model’s parameter space can yield relevant insights into model behavior and can also be used to improve the fit of a model to human data. However, exhaustive parameter space searches quickly run into a combinatorial explosion as the number of parameters investigated increases. Taking an established instance-based learning task as example, we show how simulation using parallel computing and derivative-free optimization methods can be applied to investigate the effects of different parameter settings. We find that both global optimization methods involving genetic algorithms as well as local methods yield satisfactory results in this case. Furthermore, we show how a model implemented in a specific cognitive architecture (ACT-R) can be mathematically reformulated to prepare the application of derivative-based optimization methods which promise further efficiency gains for quantitative analysis.
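A minimal sketch of such a derivative-free parameter search, using SciPy in place of the actual ACT-R simulation; the objective function and the parameter names `decay` and `noise` are placeholders for running the cognitive model against human data.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def model_misfit(params):
    decay, noise = params
    # Stand-in objective: distance of simulated behaviour from human data.
    return (decay - 0.5) ** 2 + (noise - 0.25) ** 2

bounds = [(0.0, 1.0), (0.0, 1.0)]

# Global, population-based search (related in spirit to genetic algorithms).
global_result = differential_evolution(model_misfit, bounds, seed=0)

# Local derivative-free refinement starting from the global optimum.
local_result = minimize(model_misfit, global_result.x, method="Nelder-Mead")
print(global_result.x, local_result.x)
```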
PURPOSE
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography.
METHODS
In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization.
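For orientation, the following is a generic sketch of the ADMM iteration for ℓ1 curvelet sparse regularization in synthesis/analysis splitting form; the notation is assumed here and the authors' exact splitting and parameter choices may differ.

```latex
% Curvelet sparse regularization (generic form): A is the forward projector,
% y the measured sinogram, \Psi the curvelet analysis operator.
\min_{x,\,z}\ \tfrac{1}{2}\,\|A x - y\|_2^2 + \lambda\,\|z\|_1
\quad\text{s.t.}\quad z = \Psi x .

% Scaled-form ADMM iterations with penalty parameter \rho and soft-thresholding
% operator \mathcal{S}_{\tau}(t) = \operatorname{sign}(t)\max(|t|-\tau,\,0):
x^{k+1} = \arg\min_{x}\ \tfrac{1}{2}\|A x - y\|_2^2
          + \tfrac{\rho}{2}\,\|\Psi x - z^{k} + u^{k}\|_2^2 ,\qquad
z^{k+1} = \mathcal{S}_{\lambda/\rho}\!\big(\Psi x^{k+1} + u^{k}\big),\qquad
u^{k+1} = u^{k} + \Psi x^{k+1} - z^{k+1}.
```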
RESULTS
Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection.
CONCLUSIONS
The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Differential phase contrast imaging (DPCI) enables the visualization of soft tissue contrast using X-rays. In this work we introduce a reconstruction framework based on curvelet expansion and sparse regularization for DPCI. We will show that curvelets provide a suitable data representation for DPCI reconstruction that allows preservation of edges as well as an exact analytic representation of the system matrix. As a first evaluation, we show results using simulated phantom data.
This thesis is devoted to the problem of tomographic reconstruction at limited angular range. In the first part, we prove a characterization of filtered backprojection reconstructions from limited angle data. Moreover, we develop a strategy for artifact reduction and stabilization. In the second part, we introduce a new edge-preserving reconstruction algorithm for limited angle tomography and analyze this algorithm mathematically. Some numerical experiments are also presented.
We propose a new framework for limited angle tomographic reconstruction. Our approach is based on the observation that for a given acquisition geometry only a few (visible) structures of the object can be reconstructed reliably using a limited angle data set. By formulating this problem in the curvelet domain, we can characterize those curvelet coefficients which correspond to visible structures in the image domain. The integration of this information into the formulation of the reconstruction problem leads to a considerable dimensionality reduction and yields a speedup of the corresponding reconstruction algorithms.
In order to better understand the mechanisms of gas transport during High Frequency Oscillatory Ventilation (HFOV), Magnetic Resonance Imaging (MRI) with contrast gases and numerical flow simulations based on Computational Fluid Dynamics (CFD) methods are performed. Validation of these new techniques is conducted by comparing the results obtained with simplified models of the trachea and a first lung bifurcation as well as in a cast model of the upper central airways with results achieved from conventional fluid mechanical measurement techniques such as Laser Doppler Anemometry (LDA). Further, it is demonstrated that MRI of experimental HFOV is feasible and that hyperpolarized 3He allows for imaging the gas redistribution inside the lung. Finally, numerical results of oscillatory flow in a 3rd generation model of the lung as well as the impact of endotracheal tubes on the flow regime development in a trachea model are presented.
A Step Towards the Automated Diagnosis of Parkinson's Disease: Analyzing Handwriting Movements
(2015)
Parkinson’s disease (PD) has affected millions of people worldwide, its major problem being the loss of movement and, consequently, of the ability to work and to move about. Although several works attempt to deal with this problem, most of them make use of datasets composed of only a few subjects. In this work, we present results toward the automated diagnosis of PD by means of computer vision-based techniques on a dataset composed of dozens of patients, which is one of the main contributions of this work. The dataset is part of a joint research project that aims at extracting both visual and signal-based information from healthy subjects and PD patients in order to advance the early diagnosis of PD. The dataset is composed of handwriting clinical exams that are analyzed by means of image processing and machine learning techniques, with preliminary results that are encouraging and promising. Additionally, a new quantitative feature to measure the amount of tremor in an individual’s handwritten trace, called Mean Relative Tremor, is also presented.
We consider the reconstruction problem for limited angle tomography using filtered backprojection (FBP) and lambda tomography. We use microlocal analysis to explain why the well-known streak artifacts are present at the end of the limited angular range. We explain how to mitigate the streaks and prove that our modified FBP and lambda operators are standard pseudodifferential operators, and so they do not add artifacts. We provide reconstructions to illustrate our mathematical results.
We investigate the reconstruction problem of limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, electron microscopy, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, e.g. filtered backprojection (FBP), do not perform well in such situations.
To stabilize the reconstruction procedure additional prior knowledge about the unknown object has to be integrated into the reconstruction process. In this work, we propose the use of the sparse regularization technique in combination with curvelets. We argue that this technique gives rise to an edge-preserving reconstruction. Moreover, we show that the dimension of the problem can be significantly reduced in the curvelet domain. To this end, we give a characterization of the kernel of the limited angle Radon transform in terms of curvelets and derive a characterization of solutions obtained through curvelet sparse regularization. In numerical experiments, we will show that the theoretical results directly translate into practice and that the proposed method outperforms classical reconstructions.
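The resulting reconstruction problem and the dimension reduction can be sketched as follows; the notation is assumed here, and the precise visibility condition is the one derived in the paper.

```latex
% Limited angle Radon transform R_\Phi with angular range \Phi and a curvelet
% frame \{\psi_\gamma\}: curvelet sparse regularization solves
\hat{c} \in \arg\min_{c}\ \tfrac{1}{2}\,
  \Big\| R_\Phi \Big(\sum_{\gamma} c_\gamma \psi_\gamma\Big) - y \Big\|_2^2
  + \lambda\,\|c\|_1 .
% Curvelets whose orientation lies outside the measured angular wedge are
% annihilated by R_\Phi (they belong to its kernel), so the corresponding
% coefficients can be fixed to zero and the minimisation restricted to the
% visible index set, which yields the dimensionality reduction.
```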
Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar
(2015)
We develop a paradigm using microlocal analysis that allows one to characterize the visible and added singularities in a broad range of incomplete data tomography problems. We give precise characterizations for photoacoustic and thermoacoustic tomography and sonar, and provide artifact reduction strategies. In particular, our theorems show that it is better to arrange sonar detectors so that the boundary of the set of detectors does not have corners and is smooth. To illustrate our results, we provide reconstructions from synthetic spherical mean data as well as from experimental photoacoustic data.
We investigate the reconstruction problem for limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, such as filtered backprojection (FBP), do not perform well in such situations. To stabilize the inversion we propose the use of a sparse regularization technique in combination with curvelets. We argue that this technique has the ability to preserve edges. As our main result, we present a characterization of the kernel of the limited angle Radon transform in terms of curvelets. Moreover, we characterize reconstructions which are obtained via curvelet sparse regularizations at a limited angular range. As a result, we show that the dimension of the limited angle problem can be significantly reduced in the curvelet domain.
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called piecewise-constant Mumford–Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge on the gray levels nor on the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited data situations. For instance, our method is able to recover all segments of the Shepp–Logan phantom from seven angular views only. We illustrate the practical applicability on a real positron emission tomography dataset. As further applications, we consider spherical Radon data as well as blurred data.
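The underlying variational problem, stated here with assumed notation, is the inverse-problem version of the Potts (piecewise-constant Mumford–Shah) model:

```latex
% A: measurement operator (e.g. Radon or spherical Radon transform, possibly
% with blur), f: data, \gamma > 0: jump penalty, \|\nabla u\|_0: total length
% of the jump set of the piecewise-constant reconstruction u.
\min_{u}\ \gamma\,\|\nabla u\|_0 \;+\; \tfrac{1}{2}\,\|A u - f\|_2^2 .
```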
Simultaneous EEG-fMRI provides an increasingly attractive research tool to investigate cognitive processes with high temporal and spatial resolution. However, artifacts in EEG data introduced by the MR scanner still remain a major obstacle. This study, employing commonly used artifact correction steps, shows that head motion, one overlooked major source of artifacts in EEG-fMRI data, can cause plausible EEG effects and EEG–BOLD correlations. Specifically, low-frequency EEG (< 20 Hz) is strongly correlated with in-scanner movement. Accordingly, minor head motion (< 0.2 mm) induces spurious effects in a twofold manner: Small differences in task-correlated motion elicit spurious low-frequency effects, and, as motion concurrently influences fMRI data, EEG–BOLD correlations closely match motion-fMRI correlations. We demonstrate these effects in a memory encoding experiment showing that obtained theta power (~ 3–7 Hz) effects and channel-level theta–BOLD correlations reflect motion in the scanner. These findings highlight an important caveat that needs to be addressed by future EEG-fMRI studies.
Background
The purpose of this study was to evaluate the impact of Cone Beam CT (CBCT) based setup correction on total dose distributions in fractionated frameless stereotactic radiation therapy of intracranial lesions.
Methods
Ten patients with intracranial lesions treated with 30 Gy in 6 fractions were included in this study. Treatment planning was performed with Oncentra® for a SynergyS® (Elekta Ltd, Crawley, UK) linear accelerator with XVI® Cone Beam CT, and HexaPOD™ couch top. Patients were immobilized by thermoplastic masks (BrainLab, Reuther). After initial patient setup with respect to lasers, a CBCT study was acquired and registered to the planning CT (PL-CT) study. Patient positioning was corrected according to the correction values (translational, rotational) calculated by the XVI® system. Afterwards a second CBCT study was acquired and registered to the PL-CT to confirm the accuracy of the corrections. In-house developed software was used for rigid transformation of the PL-CT to the CBCT geometry, and dose calculations for each fraction were performed on the transformed CT. The total dose distribution was achieved by back-transformation and summation of the dose distributions of each fraction. Dose distributions based on PL-CT, CBCT (laser set-up), and final CBCT were compared to assess the influence of setup inaccuracies.
Results
The mean displacement vector, calculated over all treatments, was reduced from (4.3 ± 1.3) mm for laser based setup to (0.5 ± 0.2) mm if CBCT corrections were applied. The mean rotational errors around the medial-lateral, superior-inferior, anterior-posterior axis were reduced from (−0.1 ± 1.4)°, (0.1 ± 1.2)° and (−0.2 ± 1.0)°, to (0.04 ± 0.4)°, (0.01 ± 0.4)° and (0.02 ± 0.3)°. As a consequence the mean deviation between planned and delivered dose in the planning target volume (PTV) could be reduced from 12.3% to 0.4% for D95 and from 5.9% to 0.1% for Dav. Maximum deviation was reduced from 31.8% to 0.8% for D95, and from 20.4% to 0.1% for Dav.
Conclusion
Real dose distributions differ substantially from planned dose distributions if setup is performed according to lasers only. Thermoplastic masks combined with a daily CBCT enabled sufficient accuracy of the dose distribution.
Re-irradiation of spinal column metastases by IMRT: Impact of setup errors on the dose distribution
(2013)
Background
This study investigates the impact of an automated image guided patient setup correction on the dose distribution for ten patients with in-field IMRT re-irradiation of vertebral metastases.
Methods
10 patients with spinal column metastases who had previously been treated with 3D-conformal radiotherapy (3D-CRT) were simulated to have an in-field recurrence. IMRT plans were generated for treatment of the vertebrae sparing the spinal cord. The dose distributions were compared for a patient setup based on skin marks only and a Cone Beam CT (CBCT) based setup with translational and rotational couch corrections using an automatic robotic image guided couch top (Elekta - HexaPOD™ IGuide® - system). The biological equivalent dose (BED) was calculated to evaluate and rank the effects of the automatic setup correction for the dose distribution of CTV and spinal cord.
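For reference, the biologically effective dose is usually computed from the linear-quadratic model as below; this is the standard form and assumed here, since the abstract does not state the exact formula used.

```latex
% n fractions of dose d per fraction; \alpha/\beta is the tissue-specific ratio
% (e.g. a low value for late-reacting spinal cord, a higher one for tumor tissue).
\mathrm{BED} \;=\; n\,d\left(1 + \frac{d}{\alpha/\beta}\right).
```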
Results
The mean absolute value (± standard deviation) over all patients and fractions is 6.1 mm (±4 mm) for the translational error and 2.7° (±1.1°) for the rotational error. The dose coverage of the 95% isodose for the CTV is considerably decreased for the uncorrected table setup. This is associated with an increase of the spinal cord dose above the tolerance dose.
Conclusions
An automatic image guided table correction ensures the delivery of accurate dose distribution and reduces the risk of radiation induced myelopathy.
This paper introduces a novel chaotic flower pollination algorithm (CFPA) to solve a tardiness-constrained flow-shop scheduling problem with simultaneously loaded stations. This industrial manufacturing problem is modeled from a filter basket production line in Germany and has generally been solved using standard deterministic algorithms. This research develops a metaheuristic approach based on the highly efficient flower pollination algorithm coupled with different chaos maps for stochasticity. The objective function targeted is the tardiness constraint of the due dates. Fifteen different experiments with thirty scenarios are generated to mimic industrial conditions. The results are compared with the genetic algorithm and with the four standard benchmark priority rule-based deterministic algorithms of First In First Out, Raghu and Rajendran, Shortest Processing Time and Slack. From the obtained results and the analysis of the relative difference, percentage relative difference and t-tests, CFPA was found to perform significantly better than the deterministic heuristics and the genetic algorithm.
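As an illustration of the algorithmic idea, the sketch below implements a generic flower pollination loop in which a logistic chaos map drives the switch between global (Lévy-flight) and local pollination. The objective, bounds and parameter values are placeholders; the paper's scheduling model, chaos maps and tuning are not reproduced.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta, size, rng):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def chaotic_fpa(objective, dim, n_pop=20, iters=200, p_switch=0.8, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (n_pop, dim))
    fitness = np.array([objective(x) for x in pop])
    best_i = int(fitness.argmin())
    best, best_f = pop[best_i].copy(), fitness[best_i]
    chaos = 0.7                                   # logistic-map state in (0, 1)
    for _ in range(iters):
        for i in range(n_pop):
            chaos = 4.0 * chaos * (1.0 - chaos)   # logistic map x <- 4 x (1 - x)
            if chaos < p_switch:                  # global pollination via Levy flight
                cand = pop[i] + levy_step(1.5, dim, rng) * (best - pop[i])
            else:                                 # local pollination
                j, k = rng.choice(n_pop, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            f = objective(cand)
            if f < fitness[i]:                    # greedy replacement
                pop[i], fitness[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

# Toy continuous objective standing in for the tardiness measure.
print(chaotic_fpa(lambda x: float(np.sum(x ** 2)), dim=5))
```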
This paper briefly presents the challenges for order control and release of multi-zone order picking systems. On the one hand, the order control must ensure that all orders are processed on time, and on the other hand, the space requirements (buffer) and the utilisation of the zones must be considered.
Within the framework of a case study, different strategies for order release were developed. The paper briefly describes the ideas behind the strategies and presents results of a case-based simulative evaluation of the strategies. The findings of the simulation study are the basis for the development of a digital twin for the operational control of multi-zone picking systems.
The Digital Twin (DT) is an important building block of Industry 4.0 and enables applications such as predictive maintenance, virtual prototyping, or the control of production and logistics processes. Challenges in developing a Digital Twin arise from a lack of structure and standards. This contribution presents a process model for creating a Digital Twin in the area of production and logistics. The process model helps to determine for which use cases a Digital Twin can be developed and which steps an implementation must follow, and it gives an overview of the prerequisites and the complexity of the development. Its central element is the goal-oriented preparation and analysis of the underlying data using CRISP-DM, a process model well established in industry.
Radiative transfer modelling of high resolution infrared (or microwave) spectra still represents a major challenge for the processing of atmospheric remote sensing data, despite significant advances in the numerical techniques utilized in line-by-line modelling, e.g., optimized Voigt function algorithms or multigrid approaches. Special purpose computing hardware such as Field Programmable Gate Arrays (FPGAs) can be used to cope with the dramatic increase of data quality and quantity. Utilizing a highly optimized implementation of a uniform rational function approximation of the Voigt function, the molecular absorption cross section computation, which represents the most compute-intensive part of radiative transfer codes, has been realized on an FPGA. Design and implementation of the FPGA coprocessor are presented along with first performance tests and an outlook on the ongoing further development.
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
Multiple hop routing in mobile ad hoc networks can minimize energy consumption and increase data throughput. Yet, the problem of radio interferences remains. However, if the routes are restricted to a basic network based on local neighborhoods, these interferences can be reduced such that standard routing algorithms can be applied.
We compare different network topologies for these basic networks, i.e. the Yao-graph (aka. Θ-graph) and some related, previously known models, which will be called the SymmY-graph (aka. YS-graph), the SparsY-graph (aka. YY-graph) and the BoundY-graph. Further, we present a promising network topology called the HL-graph (based on Hierarchical Layers).
We compare these topologies regarding degree, spanner properties, and communication features. We investigate how these network topologies bound the number of (uni- and bidirectional) interferences and whether these basic networks provide energy-optimal or congestion-minimal routing. Then, we compare the ability of these topologies to handle dynamic changes of the network when radio stations appear and disappear. For this we measure the number of involved radio stations and present distributed algorithms for repairing the network structure.
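A minimal construction sketch of the basic Yao (Θ-type) graph underlying these topologies is given below; it is purely illustrative, and the SymmY, SparsY, BoundY and HL variants impose additional rules that are not modelled here.

```python
import numpy as np

def yao_graph(points: np.ndarray, k: int = 6):
    """Directed edges (i, j): j is i's nearest neighbour in each of k cones."""
    n = len(points)
    edges = []
    for i in range(n):
        nearest = [None] * k                      # (distance, node) per cone
        for j in range(n):
            if i == j:
                continue
            d = points[j] - points[i]
            angle = np.arctan2(d[1], d[0]) % (2 * np.pi)
            cone = int(angle / (2 * np.pi / k))   # which angular cone j falls into
            dist = np.linalg.norm(d)
            if nearest[cone] is None or dist < nearest[cone][0]:
                nearest[cone] = (dist, j)
        edges.extend((i, item[1]) for item in nearest if item is not None)
    return edges

rng = np.random.default_rng(1)
print(yao_graph(rng.random((10, 2)), k=6))
```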
Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued special functions. For the complex error function, whose real part is the Voigt function, the rational approximation developed by Hui, Armstrong, and Wray [Rapid computation of the Voigt and complex error functions, J. Quant. Spectrosc. Radiat. Transfer 19 (1978) 509–516] is investigated. Various optimizations for the algorithm are discussed. In many applications, where these functions have to be calculated for a large x grid with constant y, an implementation using real arithmetic and factorization of invariant terms is especially efficient.
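A short sketch of evaluating the Voigt function on a large x grid at constant y, here using SciPy's Faddeeva routine as a stand-in for the Hui, Armstrong, and Wray rational approximation discussed in the paper:

```python
import numpy as np
from scipy.special import wofz

def voigt(x: np.ndarray, y: float) -> np.ndarray:
    return wofz(x + 1j * y).real      # K(x, y) = Re w(x + iy)

x = np.linspace(-25.0, 25.0, 100_000)  # large wavenumber-offset grid
profile = voigt(x, 0.5)                # constant Lorentz/Doppler ratio y
```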
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
The goal of this paper is to increase the computation speed of MapReduce jobs by reducing the accuracy of the result. Often, timely processing is more important than the precision of the result. Hadoop has no built-in functionality for such an approximation technique, so the user has to implement sampling techniques manually.
We introduce an automatic system for computing arithmetic approximations. The sampling is based on techniques from statistics and the extrapolation is done generically. This system is also extended by an incremental component which enables the reuse of already computed results to enlarge the sampling size. This can be used iteratively to further increase the sampling size and also the precision of the approximation. We present a transparent incremental sampling approach, so the developed components can be integrated in the Hadoop framework in a non-invasive manner.
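The sampling-and-extrapolation idea can be sketched as follows: estimate an aggregate from a uniform sample, scale it up, and attach a normal-approximation confidence interval; growing the sample tightens the interval. This is a conceptual stand-in only, not the Hadoop integration described above.

```python
import random
import statistics

def estimate_sum(data, sample_size, seed=0):
    """Scale a sample mean up to the full population and give a 95% interval."""
    random.seed(seed)
    sample = random.sample(data, sample_size)
    n, mean = len(data), statistics.mean(sample)
    stderr = statistics.stdev(sample) / sample_size ** 0.5
    return n * mean, (n * (mean - 1.96 * stderr), n * (mean + 1.96 * stderr))

data = list(range(1_000_000))
for size in (1_000, 10_000, 100_000):       # incrementally larger samples
    est, ci = estimate_sum(data, size)
    print(size, est, ci)
```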
Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design
(2015)
Discrete ill-posed problems are often encountered in engineering applications. Still, their sound analysis is not yet common practice, and difficulties arising in the determination of uncertain parameters are typically not properly attributed. This contribution provides a tutorial review of methods for identifiability analysis, regularization techniques and optimal experimental design. A guideline for the analysis and classification of nonlinear ill-posed problems to detect practical identifiability problems is given. Techniques for the regularization of experimental design problems resulting from ill-posed parameter estimations are discussed. Applications are presented for three different case studies of increasing complexity.
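One of the basic identifiability checks reviewed in such tutorials can be sketched as follows: form the Fisher information matrix from a parameter sensitivity matrix and inspect its eigenvalue spread. The sensitivity data below are synthetic placeholders, not one of the paper's case studies.

```python
import numpy as np

def identifiability_report(S: np.ndarray):
    """S[i, j] = d(model output i)/d(parameter j), e.g. from finite differences."""
    fim = S.T @ S                                  # Fisher information (unit weights)
    eigvals = np.linalg.eigvalsh(fim)
    cond = eigvals.max() / max(eigvals.min(), np.finfo(float).tiny)
    return {"eigenvalues": eigvals, "condition_number": cond}

rng = np.random.default_rng(0)
S = rng.normal(size=(50, 4))
S[:, 3] = 2.0 * S[:, 2]                            # perfectly correlated parameter pair
print(identifiability_report(S))                    # huge condition number -> not identifiable
```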
The method of loci is one, if not the most, efficient mnemonic encoding strategy. This spatial mnemonic combines the core cognitive processes commonly linked to medial temporal lobe (MTL) activity: spatial and associative memory processes. During such processes, fMRI studies consistently demonstrate MTL activity, while electrophysiological studies have emphasized the important role of theta oscillations (3–8 Hz) in the MTL. However, it is still unknown whether increases or decreases in theta power co-occur with increased BOLD signal in the MTL during memory encoding. To investigate this question, we recorded EEG and fMRI separately, while human participants used the spatial method of loci or the pegword method, a similarly associative but nonspatial mnemonic. The more effective spatial mnemonic induced a pronounced theta power decrease source localized to the left MTL compared with the nonspatial associative mnemonic strategy. This effect was mirrored by BOLD signal increases in the MTL. Successful encoding, irrespective of the strategy used, elicited decreases in left temporal theta power and increases in MTL BOLD activity. This pattern of results suggests a negative relationship between theta power and BOLD signal changes in the MTL during memory encoding and spatial processing. The findings extend the well known negative relation of alpha/beta oscillations and BOLD signals in the cortex to theta oscillations in the MTL.
Electric and electronic functionalities increase exponentially in every mobility domain. The automotive industry is confronted with rising system complexity and several restricting requirements and standards (like AUTOSAR), in particular when designing embedded software for electronic control units. To stand against rampant functionalities, software units could be restructured according to their affiliation and should not be attached to a certain place. This can be effected by integration on single controllers. On the one hand, the system-wide number of hardware controllers could thus be limited. On the other hand, the workload for integration CPUs will rise. To support this paradigm, multi-core systems can provide enough processing power in an efficient way. This paper shows a first approach to combining automotive functionality on such a single controller.
The generic REMOS (REverberation MOdeling for robust Speech recognition) concept is extended in this contribution to cope with additional noise components. REMOS originally embeds an explicit reverberation model into a hidden Markov model (HMM), leading to a relaxed conditional independence assumption for the observed feature vectors. During recognition, a nonlinear optimization problem is to be solved in order to adapt the HMMs' output probability density functions to the current reverberation conditions. The extension for additional noise components necessitates a modified numerical solver for the nonlinear optimization problem. We propose an approximation scheme based on continuous piecewise linear regression. Connected-digit recognition experiments demonstrate the potential of REMOS in reverberant and noisy environments. They furthermore reveal that the benefit of an explicit reverberation model, overcoming the conditional independence assumption, increases with increasing signal-to-noise ratios.
NoSQL-Datenbanksysteme sind in den letzten Jahren sehr populär geworden, gute Gründe sprechen für ihren Einsatz: Eine attraktive Eigenschaft vieler Systeme ist ihre Schema-Flexibilität, die insbesondere in der agilen Anwendungsentwicklung Vorteile bietet. Durch horizontale Skalierbarkeit ermöglichen NoSQL-Datenbanksysteme eine effiziente Verarbeitung großer Datenmengen. Einige Systeme, die für die Datenhaltung interaktiver Anwendungen konzipiert sind, können zudem hochfrequente Nutzeranfragen bedienen. Diesen Vorteilen stehen eine Reihe von Nachteilen gegenüber, aus denen sich neue Herausforderungen für die Anwendungsentwicklung ergeben: Fehlende Standards bei den Anfragesprachen erschweren die Entwicklung datenbanksystemunabhängiger Anwendungen. Schema-Flexibilität im Datenbankmanagementsystem führt dazu, dass die Verantwortung für das Schema-Management in die Anwendung verlagert wird. Im vorliegenden Beitrag werden wesentliche Herausforderungen identifiziert und Lösungsansätze aus Forschung und Praxis vorgestellt. Dabei liegt der Fokus auf schema-flexiblen NoSQL-Datenbanksystemen, mit einem aggregat-orientierten Datenmodell, d. h. Key-Value Datenbanksysteme, dokumentenorientierten Datenbanksystemen und Column-Family Datenbanksystemen.
NoSQL data stores have become very popular over the last years, as good reasons are justifying their application: One attractive feature of many systems is their schema flexibility, which may be preferable in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also manage highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards in query languages make it difficult to build data store independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges as well as solution statements from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i. e., key-value data management systems, as well as document and column family data management systems.
Inverse problems are at the heart of many practical problems such as image reconstruction or nondestructive testing. A characteristic feature is their instability with respect to data perturbations. To stabilize the inversion process, regularization methods must be developed and applied. In this paper, we introduce the concept of filtered diagonal frame decomposition, which extends the classical filtered SVD to the case of frames. The use of frames as generalized singular systems allows a better match to a given class of potential solutions and is also beneficial for problems where the SVD is not analytically available. We show that filtered diagonal frame decompositions yield convergent regularization methods, derive convergence rates under source conditions and prove order optimality. Our analysis applies to bounded and unbounded forward operators. As a practical application of our tools, we study filtered diagonal frame decompositions for inverting the Radon transform as an unbounded operator on L2(R2).
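Schematically, and with notation assumed here by analogy to the filtered SVD, a filtered diagonal frame decomposition reconstructs via the expression below; the precise filter conditions and convergence rates follow the paper.

```latex
% Frames (u_\lambda), (v_\lambda) and quasi-singular values \kappa_\lambda with
% A^{*} u_\lambda = \kappa_\lambda\, v_\lambda define the regularized reconstruction
\mathcal{B}_\alpha\, y \;=\; \sum_{\lambda}
   f_\alpha(\kappa_\lambda)\,\langle y,\, u_\lambda\rangle\, \bar v_\lambda ,
% where (\bar v_\lambda) is a dual frame of (v_\lambda) and f_\alpha is a
% regularizing filter, e.g. the Tikhonov-type choice
f_\alpha(\kappa) \;=\; \frac{\kappa}{\kappa^{2} + \alpha}.
```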
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present-day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
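For context, decoy-state analyses of this kind typically bound the secret-key rate with a GLLP-style formula such as the one below; this standard form is stated here as an assumption and may differ in detail from the bounds derived in the paper.

```latex
% Gain Q_\mu and QBER E_\mu of the signal states, single-photon gain Q_1 and
% error rate e_1 estimated from the decoy states, error-correction inefficiency
% f(\cdot), binary entropy H_2, and protocol factor q:
R \;\geq\; q\left\{ -\,Q_\mu\, f(E_\mu)\, H_2(E_\mu)
      \;+\; Q_1\big[\,1 - H_2(e_1)\,\big] \right\}.
```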
Quantum key distribution is among the foremost applications of quantum mechanics, both in terms of fundamental physics and as a technology on the brink of commercial deployment. Starting from principal schemes and initial proofs of unconditional security for perfect systems, much effort has gone into providing secure schemes which can cope with numerous experimental imperfections unavoidable in real world implementations. In this paper, we provide a comparison of various schemes and protocols. We analyse their efficiency and performance when implemented with imperfect physical components. We consider how experimental faults are accounted for using effective parameters. We compare various recent protocols and provide guidelines as to which components offer the greatest advances when improved.