Digitalisierung
The prospect of achieving computational speedups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for the known quantum algorithms. We present the first quantum implementation of join ordering, which is one of the most investigated and fundamental query optimisation problems, based on a reformulation to quadratic unconstrained binary optimisation problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speed-ups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that limits of early-stage technology are quickly reached.
Current QPUs are classified as noisy intermediate-scale quantum (NISQ) computers, and are restricted by a variety of limitations that reduce their capabilities compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade between achievable solution quality and possible problem size.
In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs, and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs.
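To make the "reformulation to quadratic unconstrained binary optimisation (QUBO)" concrete, the sketch below shows the general QUBO form that both quantum annealers and gate-based variational approaches accept, solved by brute force for a toy instance. The matrix values and variable count are illustrative assumptions; this is not the authors' actual join-ordering encoding.

```python
# Illustrative only: a tiny QUBO instance solved by brute force.
# This is NOT the paper's join-ordering encoding; the toy matrix below is a
# hypothetical example, chosen only to show the QUBO form E(x) = x^T Q x
# that quantum annealers and QAOA-style circuits accept as input.
import itertools
import numpy as np

# Hypothetical 4-variable QUBO matrix (upper-triangular convention).
Q = np.array([
    [-1.0,  2.0,  0.0,  0.0],
    [ 0.0, -1.0,  2.0,  0.0],
    [ 0.0,  0.0, -1.0,  2.0],
    [ 0.0,  0.0,  0.0, -1.0],
])

def qubo_energy(x: np.ndarray) -> float:
    """Energy of a binary assignment x under the QUBO matrix Q."""
    return float(x @ Q @ x)

# Exhaustive search is feasible only for toy sizes; realistic join-ordering
# instances are handed to a quantum annealer or a classical heuristic instead.
best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda bits: qubo_energy(np.array(bits)))
print("minimum-energy assignment:", best, "energy:", qubo_energy(np.array(best)))
```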
We evaluate the applicability of quantum computing on two fundamental query optimization problems, join order optimization and multi query optimization (MQO). We analyze the problem dimensions that can be solved on current gate-based quantum systems and quantum annealers, the two currently commercially available architectures.
First, we evaluate the use of gate-based systems on MQO, previously solved with quantum annealing. We show that, contrary to classical computing, a different architecture requires involved adaptations. We moreover propose a multi-step reformulation for join ordering problems to make them solvable on current quantum systems. Finally, we systematically evaluate our contributions for gate-based quantum systems and quantum annealers. Doing so, we identify the scope of current limitations, as well as the future potential of quantum computing technologies for database systems.
This paper addresses the problem of properly placing a given task in the manipulator workspace by a heuristic and numeric approach. The task is placed relative to the manipulator for each element of the discretized workspace and the required joint torques are determined. The results are evaluated by a torque-based optimization criterion. The modularity of this approach ensures general applicability to various systems and tasks, while the high computational effort is handled by GPU parallelization. The method is presented for a given 6-DOF manipulator and a highly dynamic trajectory. The resulting interactive map of the manipulator workspace gives an overview of the task-dependent dynamic performance, and a detailed evaluation of selected solutions shows the dexterity of the proposed approach.
The design of the NoSQL schema has a direct impact on the scalability of web applications. Especially for developers with little experience in NoSQL stores, the risks inherent in poor schema design can be incalculable. Worse yet, the issues will only manifest once the application has been deployed, and the growing user base causes highly concurrent writes. In this paper, we present a model checking approach to reveal scalability bottlenecks in NoSQL schemas. Our approach draws on formal methods from tree automata theory to perform a conservative static analysis on both the schema and the expected write-behavior of users. We demonstrate the impact of schema-inherent bottlenecks for a popular NoSQL store, and show how concurrent writes can ultimately lead to a considerable share of failed transactions.
The modular addition is a popular building block when designing lightweight ciphers. While algorithms mainly based on the addition can reach very high performance, masking their implementations results in a huge penalty. Since efficient protection against side-channel attacks is a requirement in lots of use cases, we focus on optimizing the Boolean masking of the modular addition. Contrary to recent related work, we target evolving a masked full adder instead of parts of a parallel prefix adder. We study how techniques typically found in neural network evolution and genetic algorithms can be adapted in order to help in evolving an efficiently masked adder. We customize a well-known neuroevolution algorithm, develop an optimized masked adder with our new approach and implement the ChaCha20 cipher on an ARM Cortex-M3 controller. We compare the performance of the protected neuroevolved implementation to solutions found by traditional search methods. Moreover, the leakage of our new solution is validated by a t-test conducted with a leakage simulator. We present under which circumstances our masked implementation outperforms related work and prove the feasibility of successfully using neuroevolution when searching for complex Boolean networks.
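As background for the Boolean masking discussed above, the sketch below shows a textbook first-order masked AND gadget (ISW-style), the kind of building block a masked adder is composed of. It is a generic illustration under our own assumptions, not the neuroevolved adder from the paper.

```python
# Textbook first-order Boolean masking (ISW-style masked AND), shown only to
# illustrate the kind of gadget a masked adder is built from; this is NOT the
# neuroevolved adder developed in the paper.
import secrets

MASK32 = 0xFFFFFFFF

def share(x: int) -> tuple[int, int]:
    """Split a 32-bit secret into two Boolean shares with x = x0 ^ x1."""
    x0 = secrets.randbits(32)
    return x0, (x ^ x0) & MASK32

def masked_and(a0, a1, b0, b1):
    """First-order masked AND: returns shares of (a & b) using one fresh random r."""
    r = secrets.randbits(32)
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ r ^ (a0 & b1) ^ (a1 & b0)
    return c0 & MASK32, c1 & MASK32

# Sanity check: recombining the output shares gives the plain AND.
a0, a1 = share(0xDEADBEEF)
b0, b1 = share(0x01234567)
c0, c1 = masked_and(a0, a1, b0, b1)
assert (c0 ^ c1) == (0xDEADBEEF & 0x01234567)
```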
In this work, we present our benchmarking results for the ten finalist ciphers of the Lightweight Cryptography (LWC) project initiated by the National Institute of Standards and Technology (NIST). We evaluate the speed and code size of various software implementations on five different platforms featuring four different architectures. Moreover, we benchmark the dynamic memory utilization of the remaining NIST LWC algorithms on one 32-bit ARM controller. We describe our test cases and methodology and provide some information regarding the design and properties of the finalists before showing and discussing our results. Altogether, we evaluated almost 300 implementations of the 3rd round candidates and pick the most appropriate and best (primary) implementation of each cipher for our comparisons. We include a variant of AES-GCM in our benchmarking in order to be able to compare the state-of-the-art to the novel LWC ciphers. Our research gives an overview of the performance of the latest software implementations of the NIST LWC finalists and shows under which circumstances which candidate performs best in our individual test cases. Additionally, we make all benchmarking results, the code for our test framework and every tested implementation available to the public to ensure a transparent testing process.
EMDLAB: A toolbox for analysis of single-trial EEG dynamics using empirical mode decomposition
(2015)
Background:
Empirical mode decomposition (EMD) is an empirical data decomposition technique. Recently, there has been growing interest in applying EMD in the biomedical field.
New method:
EMDLAB is an extensible plug-in for the EEGLAB toolbox, which is an open software environment for electrophysiological data analysis.
Results:
EMDLAB can be used to perform, easily and effectively, four common types of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD) and multivariate EMD (MEMD). In addition, EMDLAB is a user-friendly toolbox that is closely integrated into the EEGLAB toolbox.
Comparison with existing methods:
EMDLAB gains an advantage over other open-source toolboxes by exploiting the advantageous visualization capabilities of EEGLAB for extracted intrinsic mode functions (IMFs) and Event-Related Modes (ERMs) of the signal.
Conclusions:
EMDLAB is a reliable, efficient, and automated solution for extracting and visualizing the extracted IMFs and ERMs obtained by EMD algorithms in EEG studies.
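EMDLAB itself is a MATLAB/EEGLAB plug-in; purely as an illustration of the decomposition types it performs, the following Python sketch runs plain EMD and ensemble EMD on a toy signal, assuming the third-party PyEMD package ("EMD-signal" on PyPI) is installed.

```python
# Illustration of plain EMD and ensemble EMD (EEMD) on a toy "EEG-like"
# signal, assuming the PyEMD package is available; this is not EMDLAB code.
import numpy as np
from PyEMD import EMD, EEMD

t = np.linspace(0, 1, 1000)
# Two oscillations plus noise as a stand-in for an EEG channel.
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.5 * np.sin(2 * np.pi * 40 * t)
          + 0.1 * np.random.randn(t.size))

imfs = EMD().emd(signal)      # plain EMD -> intrinsic mode functions (IMFs)
eimfs = EEMD().eemd(signal)   # ensemble EMD (noise-assisted variant)

print("plain EMD produced", imfs.shape[0], "IMFs")
print("ensemble EMD produced", eimfs.shape[0], "IMFs")
```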
Virtualization has come a long way since its beginnings in the 1960s. Nowadays, Virtual Machine Monitor (VMM) - or hypervisor-based virtualization of servers is the de facto standard in data centers and a building block of the cloud hype. In recent years, virtualization has also been adopted to embedded devices such as avionics systems and mobile phones. The first mass deployment of embedded virtualization can probably be seen in video game consoles, though. However, it is still not employed by automotive electronics. This is despite the fact that with the upcoming domain controller architecture, virtualization can yield benefits beyond a mere consolidation of a multitude of Electronic Control Units (ECUs) into a few Domain Controller Units (DCUs). This paper presents merits of automotive virtualization, especially as a foundation for DCUs.
Today, ubiquitous mobile devices have not only arrived but have entered the safety-critical domain. There, systems are about to be controlled where human health or even human life is put at risk. For example, in automation systems, first ideas surface to control parts of the system via a COTS smartphone. Another example is the idea to control the autonomous parking function of a car via a COTS smartphone, too. As beneficial and convenient as these ideas appear at first thought, on second thought the dangers of these approaches become obvious. Especially in case of failures, the system's safety has to be maintained. The open question is how to achieve this mandatory requirement with COTS components, e.g. smartphones, that are not developed following the development process necessary for safety-critical systems. This paper presents a concept to reliably detect human interaction while activating safety-critical functions via COTS mobile devices. Thus, a means is provided to detect erroneous activation requests for the safety-critical function.
We present two methods that combine image reconstruction and edge detection in computed tomography (CT) scans. Our first method is an extension of the prominent filtered backprojection algorithm. In our second method we employ ℓ1-regularization for stable calculation of the gradient. As opposed to the first method, we show that this approach is able to compensate for undersampled CT data.
We present a paradigm for characterization of artifacts in limited data tomography problems. In particular, we use this paradigm to characterize artifacts that are generated in reconstructions from limited angle data with generalized Radon transforms and general filtered backprojection type operators. In order to find when visible singularities are imaged, we calculate the symbol of our reconstruction operator as a pseudodifferential operator.
The performance of cognitive models often depends on the settings of specific model parameters, such as the rate of memory decay or the speed of motor responses. The systematic exploration of a model's parameter space can yield relevant insights into model behavior and can also be used to improve the fit of a model to human data. However, exhaustive parameter space searches quickly run into a combinatorial explosion as the number of parameters investigated increases. Taking an established instance-based learning task as an example, we show how simulation using parallel computing and derivative-free optimization methods can be applied to investigate the effects of different parameter settings. We find that both global optimization methods involving genetic algorithms and local methods yield satisfactory results in this case. Furthermore, we show how a model implemented in a specific cognitive architecture (ACT-R) can be mathematically reformulated to prepare the application of derivative-based optimization methods, which promise further efficiency gains for quantitative analysis.
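A minimal sketch of the derivative-free parameter search described above, under the assumption of a toy stand-in objective instead of the actual ACT-R model: two hypothetical parameters are fitted to hypothetical human data with a local (Nelder-Mead) and a global, evolution-style optimizer from SciPy.

```python
# Hedged sketch: derivative-free parameter fitting on a toy objective that
# stands in for a cognitive-model simulation (the model, parameters, and
# data below are illustrative assumptions, not the paper's ACT-R model).
import numpy as np
from scipy.optimize import minimize, differential_evolution

human_data = np.array([0.62, 0.70, 0.75, 0.78, 0.80])  # hypothetical scores

def model_prediction(decay, noise, trials=np.arange(1, 6)):
    # Stand-in for running the cognitive model with given parameters.
    return 1.0 - np.exp(-trials / (1.0 + 5.0 * decay)) * (1.0 - noise)

def loss(params):
    decay, noise = params
    return float(np.sum((model_prediction(decay, noise) - human_data) ** 2))

local = minimize(loss, x0=[0.5, 0.2], method="Nelder-Mead")
glob = differential_evolution(loss, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
print("local optimum:", local.x, "global optimum:", glob.x)
```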
PURPOSE
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography.
METHODS
In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization.
RESULTS
Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection.
CONCLUSIONS
The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Differential phase contrast imaging (DPCI) enables the visualization of soft tissue contrast using X-rays. In this work we introduce a reconstruction framework based on curvelet expansion and sparse regularization for DPCI. We show that curvelets provide a suitable data representation for DPCI reconstruction that allows preservation of edges as well as an exact analytic representation of the system matrix. As a first evaluation, we show results using simulated phantom data.
This thesis is devoted to the problem of tomographic reconstruction at limited angular range. In the first part, we prove a characterization of filtered backprojection reconstructions from limited angle data. Moreover, we develop a strategy for artifact reduction and stabilization. In the second part, we introduce a new edge-preserving reconstruction algorithm for limited angle tomography and analyze this algorithm mathematically. Some numerical experiments are also presented.
We propose a new framework for limited angle tomographic reconstruction. Our approach is based on the observation that for a given acquisition geometry only a few (visible) structures of the object can be reconstructed reliably using a limited angle data set. By formulating this problem in the curvelet domain, we can characterize those curvelet coefficients which correspond to visible structures in the image domain. The integration of this information into the formulation of the reconstruction problem leads to a considerable dimensionality reduction and yields a speedup of the corresponding reconstruction algorithms.
In order to better understand the mechanisms of gas transport during High Frequency Oscillatory Ventilation (HFOV), Magnetic Resonance Imaging (MRI) with contrast gases and numerical flow simulations based on Computational Fluid Dynamics (CFD) methods are performed. Validation of these new techniques is conducted by comparing the results obtained with simplified models of the trachea and a first lung bifurcation, as well as in a cast model of the upper central airways, with results achieved from conventional fluid mechanical measurement techniques such as Laser Doppler Anemometry (LDA). Further, it is demonstrated that MRI of experimental HFOV is feasible and that hyperpolarized 3He allows for imaging the gas re-distribution inside the lung. Finally, numerical results of oscillatory flow in a 3rd-generation model of the lung as well as the impact of endotracheal tubes on the flow regime development in a trachea model are presented.
A Step Towards the Automated Diagnosis of Parkinson's Disease: Analyzing Handwriting Movements
(2015)
Parkinson’s disease (PD) has affected millions of people worldwide, its major problem being the loss of movement and, consequently, of the ability to work and to move around. Although several works attempt to deal with this problem, most of them make use of datasets composed of only a few subjects. In this work, we present some results toward the automated diagnosis of PD by means of computer vision-based techniques on a dataset composed of dozens of patients, which is one of the main contributions of this work. The dataset is part of a joint research project that aims at extracting both visual and signal-based information from healthy subjects and PD patients in order to advance the early diagnosis of PD. The dataset is composed of handwriting clinical exams that are analyzed by means of image processing and machine learning techniques, with preliminary results that are encouraging and promising. Additionally, a new quantitative feature to measure the amount of tremor in an individual's handwritten trace, called Mean Relative Tremor, is also presented.
We consider the reconstruction problem for limited angle tomography using filtered backprojection (FBP) and lambda tomography. We use microlocal analysis to explain why the well-known streak artifacts are present at the end of the limited angular range. We explain how to mitigate the streaks and prove that our modified FBP and lambda operators are standard pseudodifferential operators, and so they do not add artifacts. We provide reconstructions to illustrate our mathematical results.
We investigate the reconstruction problem of limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, electron microscopy, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, e.g. filtered backprojection (FBP), do not perform well in such situations.
To stabilize the reconstruction procedure additional prior knowledge about the unknown object has to be integrated into the reconstruction process. In this work, we propose the use of the sparse regularization technique in combination with curvelets. We argue that this technique gives rise to an edge-preserving reconstruction. Moreover, we show that the dimension of the problem can be significantly reduced in the curvelet domain. To this end, we give a characterization of the kernel of the limited angle Radon transform in terms of curvelets and derive a characterization of solutions obtained through curvelet sparse regularization. In numerical experiments, we will show that the theoretical results directly translate into practice and that the proposed method outperforms classical reconstructions.
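To make the limited-angle effect described above tangible, the sketch below reconstructs a phantom with the classical filtered backprojection from a full and from a limited angular range. It uses scikit-image as an assumed dependency and only illustrates why plain FBP degrades; it is not the curvelet sparse regularization proposed in the paper.

```python
# Illustration of the limited-angle problem with classical FBP (not the
# proposed curvelet sparse regularization), assuming scikit-image is available.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)

full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
limited_angles = np.linspace(0.0, 120.0, 120, endpoint=False)  # 60 degrees missing

sino_full = radon(image, theta=full_angles)
sino_limited = radon(image, theta=limited_angles)

fbp_full = iradon(sino_full, theta=full_angles, filter_name="ramp")
fbp_limited = iradon(sino_limited, theta=limited_angles, filter_name="ramp")
# fbp_limited exhibits the typical blurring/streaking along the invisible
# directions, which the edge-preserving curvelet-based method targets.
```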
Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar
(2015)
We develop a paradigm using microlocal analysis that allows one to characterize the visible and added singularities in a broad range of incomplete data tomography problems. We give precise characterizations for photoacoustic and thermoacoustic tomography and sonar, and provide artifact reduction strategies. In particular, our theorems show that it is better to arrange sonar detectors so that the boundary of the set of detectors does not have corners and is smooth. To illustrate our results, we provide reconstructions from synthetic spherical mean data as well as from experimental photoacoustic data.
We investigate the reconstruction problem for limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, such as filtered backprojection (FBP), do not perform well in such situations. To stabilize the inversion we propose the use of a sparse regularization technique in combination with curvelets. We argue that this technique has the ability to preserve edges. As our main result, we present a characterization of the kernel of the limited angle Radon transform in terms of curvelets. Moreover, we characterize reconstructions which are obtained via curvelet sparse regularizations at a limited angular range. As a result, we show that the dimension of the limited angle problem can be significantly reduced in the curvelet domain.
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called piecewise-constant Mumford–Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge on the gray levels nor on the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited data situations. For instance, our method is able to recover all segments of the Shepp–Logan phantom from seven angular views only. We illustrate the practical applicability on a real positron emission tomography dataset. As further applications, we consider spherical Radon data as well as blurred data.
Simultaneous EEG-fMRI provides an increasingly attractive research tool to investigate cognitive processes with high temporal and spatial resolution. However, artifacts in EEG data introduced by the MR scanner still remain a major obstacle. This study, employing commonly used artifact correction steps, shows that head motion, one overlooked major source of artifacts in EEG-fMRI data, can cause plausible EEG effects and EEG–BOLD correlations. Specifically, low-frequency EEG (< 20 Hz) is strongly correlated with in-scanner movement. Accordingly, minor head motion (< 0.2 mm) induces spurious effects in a twofold manner: Small differences in task-correlated motion elicit spurious low-frequency effects, and, as motion concurrently influences fMRI data, EEG–BOLD correlations closely match motion-fMRI correlations. We demonstrate these effects in a memory encoding experiment showing that obtained theta power (~ 3–7 Hz) effects and channel-level theta–BOLD correlations reflect motion in the scanner. These findings highlight an important caveat that needs to be addressed by future EEG-fMRI studies.
Background
The purpose of this study was to evaluate the impact of Cone Beam CT (CBCT) based setup correction on total dose distributions in fractionated frameless stereotactic radiation therapy of intracranial lesions.
Methods
Ten patients with intracranial lesions treated with 30 Gy in 6 fractions were included in this study. Treatment planning was performed with Oncentra® for a SynergyS® (Elekta Ltd, Crawley, UK) linear accelerator with XVI® Cone Beam CT, and HexaPOD™ couch top. Patients were immobilized by thermoplastic masks (BrainLab, Reuther). After initial patient setup with respect to lasers, a CBCT study was acquired and registered to the planning CT (PL-CT) study. Patient positioning was corrected according to the correction values (translational, rotational) calculated by the XVI® system. Afterwards a second CBCT study was acquired and registered to the PL-CT to confirm the accuracy of the corrections. An in-house developed software was used for rigid transformation of the PL-CT to the CBCT geometry, and dose calculations for each fraction were performed on the transformed CT. The total dose distribution was achieved by back-transformation and summation of the dose distributions of each fraction. Dose distributions based on PL-CT, CBCT (laser set-up), and final CBCT were compared to assess the influence of setup inaccuracies.
Results
The mean displacement vector, calculated over all treatments, was reduced from (4.3 ± 1.3) mm for laser based setup to (0.5 ± 0.2) mm if CBCT corrections were applied. The mean rotational errors around the medial-lateral, superior-inferior, anterior-posterior axis were reduced from (−0.1 ± 1.4)°, (0.1 ± 1.2)° and (−0.2 ± 1.0)°, to (0.04 ± 0.4)°, (0.01 ± 0.4)° and (0.02 ± 0.3)°. As a consequence the mean deviation between planned and delivered dose in the planning target volume (PTV) could be reduced from 12.3% to 0.4% for D95 and from 5.9% to 0.1% for Dav. Maximum deviation was reduced from 31.8% to 0.8% for D95, and from 20.4% to 0.1% for Dav.
Conclusion
Real dose distributions differ substantially from planned dose distributions if setup is performed according to lasers only. Thermoplastic masks combined with a daily CBCT enabled sufficient accuracy of the dose distribution.
Re-irradiation of spinal column metastases by IMRT: Impact of setup errors on the dose distribution
(2013)
Background
This study investigates the impact of an automated image guided patient setup correction on the dose distribution for ten patients with in-field IMRT re-irradiation of vertebral metastases.
Methods
10 patients with spinal column metastases who had previously been treated with 3D-conformal radiotherapy (3D-CRT) were simulated to have an in-field recurrence. IMRT plans were generated for treatment of the vertebrae sparing the spinal cord. The dose distributions were compared for a patient setup based on skin marks only and a Cone Beam CT (CBCT) based setup with translational and rotational couch corrections using an automatic robotic image guided couch top (Elekta - HexaPOD™ IGuide® - system). The biological equivalent dose (BED) was calculated to evaluate and rank the effects of the automatic setup correction for the dose distribution of CTV and spinal cord.
Results
The mean absolute value (± standard deviation) of the translational error over all patients and fractions is 6.1 mm (± 4 mm), and that of the rotational error is 2.7° (± 1.1°). The dose coverage of the 95% isodose for the CTV is considerably decreased for the uncorrected table setup. This is associated with an increase of the spinal cord dose above the tolerance dose.
Conclusions
An automatic image guided table correction ensures the delivery of an accurate dose distribution and reduces the risk of radiation-induced myelopathy.
This paper introduces a novel chaotic flower pollination algorithm (CFPA) to solve a tardiness-constrained flow-shop scheduling problem with simultaneously loaded stations. This industrial manufacturing problem is modeled from a filter basket production line in Germany and has generally been solved using standard deterministic algorithms. This research develops a metaheuristic approach based on the highly efficient flower pollination algorithm coupled with different chaos maps for stochasticity. The objective function targeted is the tardiness constraint of the due dates. Fifteen different experiments with thirty scenarios are generated to mimic industrial conditions. The results are compared with the genetic algorithm and with the four standard benchmark priority rule-based deterministic algorithms of First In First Out, Raghu and Rajendran, Shortest Processing Time and Slack. From the obtained results and analysis of the relative difference, percentage relative difference and t-tests, CFPA was found to perform significantly better than the deterministic heuristics and the genetic algorithm.
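As a hedged illustration of what "coupled with different chaos maps" means in such metaheuristics, the sketch below generates a stream of values from the logistic map, one of the maps commonly used for this purpose; the specific map and parameters are illustrative assumptions, not necessarily those used in the paper.

```python
# Minimal sketch of the idea behind "chaotic" metaheuristics: a chaos map
# (here the logistic map) replaces the uniform random numbers that drive the
# search. Map choice and parameters are illustrative, not the paper's.

def logistic_map_sequence(x0: float, n: int, r: float = 4.0):
    """Generate n values in (0, 1) from the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

# These values would be consumed wherever the metaheuristic (e.g. flower
# pollination) needs a random draw, e.g. to choose between global and local
# pollination or to perturb a candidate schedule.
chaos_stream = logistic_map_sequence(x0=0.7, n=5)
print(chaos_stream)
```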
This paper briefly presents the challenges for order control and release of multi-zone order picking systems. On the one hand, the order control must ensure that all orders are processed on time, and on the other hand, the space requirements (buffer) and the utilisation of the zones must be considered.
Within the framework of a case study, different strategies for order release were developed. The paper briefly describes the ideas behind the strategies and presents results of a case-based simulative evaluation of the strategies. The findings of the simulation study are the basis for the development of a digital twin for the operational control of multi-zone picking systems.
The Digital Twin is an important component of Industry 4.0 and enables applications such as predictive maintenance, virtual prototyping, or the control of production and logistics processes. Challenges in developing a Digital Twin arise from the lack of structure and standards. This contribution presents a process model for creating a Digital Twin in the area of production and logistics. The process model helps to assess for which use cases a Digital Twin can be developed and which steps have to be taken in an implementation, and it gives an overview of the prerequisites and the complexity of the development. The central element is the targeted preparation and analysis of the underlying data by means of the CRISP-DM process model established in industry.
Radiative transfer modelling of high resolution infrared (or microwave) spectra still represents a major challenge for the processing of atmospheric remote sensing data despite significant advances in the numerical techniques utilized in line-by-line modelling by, e.g., optimized Voigt function algorithms or multigrid approaches. Special purpose computing hardware such as Field Programmable Gate Arrays (FPGAs) can be used to cope with the dramatic increase of data quality and quantity. Utilizing a highly optimized implementation of a uniform rational function approximation of the Voigt function, the molecular absorption cross section computation, which represents the most compute-intensive part of radiative transfer codes, has been realized on an FPGA. Design and implementation of the FPGA coprocessor are presented along with first performance tests and an outlook on the ongoing further development.
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
Multiple-hop routing in mobile ad hoc networks can minimize energy consumption and increase data throughput. Yet, the problem of radio interferences remains. However, if the routes are restricted to a basic network based on local neighborhoods, these interferences can be reduced such that standard routing algorithms can be applied.
We compare different network topologies for these basic networks, i.e. the Yao-graph (aka. Θ-graph) and some related, previously known models, which we call the SymmY-graph (aka. YS-graph), the SparsY-graph (aka. YY-graph) and the BoundY-graph. Further, we present a promising network topology called the HL-graph (based on Hierarchical Layers).
We compare these topologies regarding degree, spanner properties, and communication features. We investigate how these network topologies bound the number of (uni- and bidirectional) interferences and whether these basic networks provide energy-optimal or congestion-minimal routing. Then, we compare the ability of these topologies to handle dynamic changes of the network when radio stations appear and disappear. For this, we measure the number of involved radio stations and present distributed algorithms for repairing the network structure.
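For readers unfamiliar with the Yao-graph mentioned above, the following sketch shows its basic construction: every station partitions the directions around it into k cones and keeps only the edge to its nearest neighbour per cone. Coordinates and the cone count are illustrative assumptions; this is not the paper's implementation.

```python
# Minimal Yao-graph construction sketch (one of the basic topologies
# compared above): per node, keep only the nearest neighbour in each of
# k angular cones. Station coordinates and k are illustrative.
import math

def yao_graph(points, k=6):
    """Return the directed edges (i, j) of the Yao graph with k cones per node."""
    edges = []
    for i, (xi, yi) in enumerate(points):
        best = {}  # cone index -> (distance, neighbour index)
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(yj - yi, xj - xi) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k))
            dist = math.hypot(xj - xi, yj - yi)
            if cone not in best or dist < best[cone][0]:
                best[cone] = (dist, j)
        edges.extend((i, j) for _, j in best.values())
    return edges

stations = [(0, 0), (1, 0.2), (0.4, 1.1), (-0.8, 0.6), (0.2, -1.0)]
print(yao_graph(stations, k=6))
```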
Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued special functions. For the complex error function w(z), whose real part is the Voigt function K(x,y), the rational approximation developed by Hui, Armstrong, and Wray [Rapid computation of the Voigt and complex error functions, J. Quant. Spectrosc. Radiat. Transfer 19 (1978) 509–516] is investigated. Various optimizations for the algorithm are discussed. In many applications, where these functions have to be calculated for a large x grid with constant y, an implementation using real arithmetic and factorization of invariant terms is especially efficient.
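The relation between the Voigt function and the complex error function used above, K(x, y) = Re[w(x + iy)], can be illustrated with SciPy's Faddeeva function. This is only a reference evaluation for the typical "large x grid, constant y" use case, not the Hui, Armstrong, and Wray rational approximation itself.

```python
# Reference evaluation of the Voigt function via the Faddeeva function wofz;
# this illustrates K(x, y) = Re[w(x + iy)] and the typical line-by-line use
# case (large x grid, constant y), not the rational approximation itself.
import numpy as np
from scipy.special import wofz

def voigt_function(x: np.ndarray, y: float) -> np.ndarray:
    """Voigt function K(x, y) as the real part of the complex error function."""
    return wofz(x + 1j * y).real

# One large x grid with constant y per spectral line: exactly the situation
# where factoring out y-dependent (invariant) terms pays off.
x_grid = np.linspace(-10.0, 10.0, 100_001)
k_values = voigt_function(x_grid, y=0.5)
```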
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
The goal of this paper is to increase the computation speed of MapReduce jobs by reducing the accuracy of the result. Often, timely processing is more important than the precision of the result. Hadoop has no built-in functionality for such an approximation technique, so the user has to implement sampling techniques manually.
We introduce an automatic system for computing arithmetic approximations. The sampling is based on techniques from statistics and the extrapolation is done generically. This system is also extended by an incremental component which enables the reuse of already computed results to enlarge the sampling size. This can be used iteratively to further increase the sampling size and also the precision of the approximation. We present a transparent incremental sampling approach, so the developed components can be integrated in the Hadoop framework in a non-invasive manner.
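The sample-and-extrapolate idea with incremental enlargement can be illustrated generically as below: an aggregate is estimated from a random sample and scaled up, and each subsequent round only processes the newly added records while reusing earlier partial sums. The dataset and fractions are illustrative assumptions, and this is plain Python rather than the Hadoop components described above.

```python
# Generic illustration of incremental sampling and extrapolation (not the
# Hadoop components from the paper): estimate a sum from a growing random
# sample while reusing already computed partial sums.
import random

data = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]  # stand-in dataset

def incremental_estimate(data, fractions=(0.01, 0.05, 0.10)):
    """Yield (fraction, extrapolated sum) for growing samples of the data."""
    shuffled = random.sample(data, len(data))  # random order, processed once
    partial_sum, taken = 0.0, 0
    for frac in fractions:
        target = int(frac * len(data))
        # Only the newly added records are processed in this round.
        partial_sum += sum(shuffled[taken:target])
        taken = target
        yield frac, partial_sum / taken * len(data)

for frac, estimate in incremental_estimate(data):
    print(f"sample {frac:4.0%}: estimated sum ~ {estimate:,.0f}")
```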
Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design
(2015)
Discrete ill-posed problems are often encountered in engineering applications. Still, their sound analysis is not yet common practice and difficulties arising in the determination of uncertain parameters are typically not assigned properly. This contribution provides a tutorial review on methods for identifiability analysis, regularization techniques and optimal experimental design. A guideline for the analysis and classification of nonlinear ill-posed problems to detect practical identifiability problems is given. Techniques for the regularization of experimental design problems resulting from ill-posed parameter estimations are discussed. Applications are presented for three different case studies of increasing complexity.
The method of loci is one, if not the most, efficient mnemonic encoding strategy. This spatial mnemonic combines the core cognitive processes commonly linked to medial temporal lobe (MTL) activity: spatial and associative memory processes. During such processes, fMRI studies consistently demonstrate MTL activity, while electrophysiological studies have emphasized the important role of theta oscillations (3–8 Hz) in the MTL. However, it is still unknown whether increases or decreases in theta power co-occur with increased BOLD signal in the MTL during memory encoding. To investigate this question, we recorded EEG and fMRI separately, while human participants used the spatial method of loci or the pegword method, a similarly associative but nonspatial mnemonic. The more effective spatial mnemonic induced a pronounced theta power decrease source localized to the left MTL compared with the nonspatial associative mnemonic strategy. This effect was mirrored by BOLD signal increases in the MTL. Successful encoding, irrespective of the strategy used, elicited decreases in left temporal theta power and increases in MTL BOLD activity. This pattern of results suggests a negative relationship between theta power and BOLD signal changes in the MTL during memory encoding and spatial processing. The findings extend the well known negative relation of alpha/beta oscillations and BOLD signals in the cortex to theta oscillations in the MTL.
Electric and electronic functionalities increase exponentially in every mobility domain. The automotive industry is confronted with rising system complexity and several restricting requirements and standards (like AUTOSAR), in particular when designing embedded software for electronic control units. To counter this rampant growth of functionality, software units could be restructured according to their affiliation rather than being attached to a certain place. This can be effected by integration on single controllers. On the one hand, the system-wide number of hardware controllers can thus be limited. On the other hand, the workload of the integration CPUs will rise. To support this paradigm, multi-core systems can provide enough processing power in an efficient way. This paper shows a first approach to combining automotive functionality on such a single controller.
The generic REMOS (REverberation MOdeling for robust Speech recognition) concept is extended in this contribution to cope with additional noise components. REMOS originally embeds an explicit reverberation model into a hidden Markov model (HMM) leading to a relaxed conditional independence assumption for the observed feature vectors. During recognition, a nonlinear optimization problem is to be solved in order to adapt the HMMs' output probability density functions to the current reverberation conditions. The extension for additional noise components necessitates a modified numerical solver for the nonlinear optimization problem. We propose an approximation scheme based on continuous piecewise linear regression. Connected-digit recognition experiments demonstrate the potential of REMOS in reverberant and noisy environments. They furthermore reveal that the benefit of an explicit reverberation model, overcoming the conditional independence assumption, increases with increasing signal-to-noise ratios.
NoSQL data stores have become very popular over the last years, as good reasons are justifying their application: One attractive feature of many systems is their schema flexibility, which may be preferable in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also manage highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards in query languages make it difficult to build data store independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges as well as solution statements from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i. e., key-value data management systems, as well as document and column family data management systems.
Inverse problems are at the heart of many practical problems such as image reconstruction or nondestructive testing. A characteristic feature is their instability with respect to data perturbations. To stabilize the inversion process, regularization methods must be developed and applied. In this paper, we introduce the concept of filtered diagonal frame decomposition, which extends the classical filtered SVD to the case of frames. The use of frames as generalized singular systems allows a better match to a given class of potential solutions and is also beneficial for problems where the SVD is not analytically available. We show that filtered diagonal frame decompositions yield convergent regularization methods, derive convergence rates under source conditions and prove order optimality. Our analysis applies to bounded and unbounded forward operators. As a practical application of our tools, we study filtered diagonal frame decompositions for inverting the Radon transform as an unbounded operator on L2(R2).
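One common way to write the construction described above is sketched below in LaTeX; the notation is ours and may differ from the paper, but it shows how the filtered diagonal frame decomposition generalizes the filtered SVD.

```latex
% Hedged sketch (our notation): a diagonal frame decomposition
% (u_\lambda, v_\lambda, \kappa_\lambda) of the forward operator A satisfies
%     A u_\lambda = \kappa_\lambda v_\lambda ,
% and the filtered reconstruction generalizes the filtered SVD as
\[
  x_\alpha \;=\; \sum_{\lambda \in \Lambda}
     g_\alpha(\kappa_\lambda)\,\langle y, v_\lambda\rangle\, \bar u_\lambda ,
\]
% where (\bar u_\lambda) is a dual frame of (u_\lambda) and g_\alpha is a
% regularizing filter, for instance the Tikhonov-type filter
\[
  g_\alpha(\kappa) \;=\; \frac{\kappa}{\kappa^2 + \alpha}.
\]
```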
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Quantum key distribution is among the foremost applications of quantum mechanics, both in terms of fundamental physics and as a technology on the brink of commercial deployment. Starting from principal schemes and initial proofs of unconditional security for perfect systems, much effort has gone into providing secure schemes which can cope with the numerous experimental imperfections unavoidable in real-world implementations. In this paper, we provide a comparison of various schemes and protocols. We analyse their efficiency and performance when implemented with imperfect physical components. We consider how experimental faults are accounted for using effective parameters. We compare various recent protocols and provide guidelines as to which components promise the greatest advances when improved.
We experimentally analyze the complete photon number statistics of parametric down-conversion and ascertain the influence of multimode effects. Our results clearly reveal a difference between single-mode theoretical description and the measured distributions. Further investigations assure the applicability of loss-tolerant photon number reconstruction and prove strict photon number correlation between signal and idler modes.
Every security analysis of quantum-key distribution (QKD) relies on a faithful modeling of the employed quantum states. Many photon sources, such as for instance a parametric down-conversion (PDC) source, require a multimode description but are usually only considered in a single-mode representation. In general, the important claim in decoy-based QKD protocols for indistinguishability between signal and decoy states does not hold for all sources. We derive bounds on the single-photon transmission probability and error rate for multimode states and apply these bounds to the output state of a PDC source. We observe two opposing effects on the secure key rate. First, the multimode structure of the state gives rise to a new attack that decreases the key rate. Second, more contributing modes change the photon number distribution from a thermal toward a Poissonian distribution, which increases the key rate.
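To show why bounds on the single-photon gain and error rate matter, the widely used decoy-state (GLLP-type) lower bound on the secure key rate is quoted below in standard notation; the multimode analysis in the paper may use a different but analogous expression.

```latex
% Standard decoy-state (GLLP-type) key-rate lower bound, quoted as context;
% notation follows the common literature, not necessarily this paper.
\[
  R \;\ge\; q \Bigl\{\, Q_1 \bigl[\,1 - H_2(e_1)\bigr]
        \;-\; Q_\mu\, f(E_\mu)\, H_2(E_\mu) \,\Bigr\},
\]
% where Q_\mu, E_\mu are the overall gain and error rate of the signal states,
% Q_1, e_1 the (bounded) single-photon gain and error rate, f(E_\mu) the
% error-correction inefficiency, H_2 the binary entropy, and q a
% protocol-dependent sifting factor.
```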
Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics [4-11]. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique [12-15]. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
Anyone who wants to visualize their data with attractive and informative graphs usually needs a lot of patience. The R extension Ggplot2 brings system to graphics, expresses itself in concise source code, and breathes fresh air into the everyday routine of data visualization.
The discipline of causal inference uses so-called causal graphs to model cause and effect relations of random variables. As those graphs only encode a relation structure there is no hard rule concerning their alignment. The present paper presents a study with the aim of working out the optimal alignment of causal graphs with respect to comprehensibility and interestingness. In addition, the study examines whether the central gestalt principles of psychology apply for causal graphs. Data from 29 participants is acquired by triangulating eye tracking with a questionnaire. The results of the study suggest that causal graphs should be aligned downwards. Moreover, the gestalt principles proximity, similarity and closure are shown to hold true for causal graphs.
Telenursing bei Schlaganfall
(2022)
The project DeinHaus 4.0 Oberpfalz – TePUS (telepresence robots for the care and support of stroke patients) is a longitudinal study with a mixed-methods design investigating telepresence- and app-based services from the areas of nursing, speech therapy, and physiotherapy.
As use of digital fabrication increases in architecture, engineering and construction, the industry seeks appropriate management and processes to enable the adoption during the design/planning phase. Many enablers have been identified across various studies; however, a comprehensive synthesis defining the enablers of design for digital fabrication does not yet exist. This work conducts a systematic literature review of 59 journal articles published in the past decade and identifies 140 enablers under eight categories: actors, resources, conditions, attributes, processes, artefacts, values and risks. The enablers’ frequency network is illustrated using an adjacency matrix. Through the lens of actor-network theory, the work creates a relational ontology to demonstrate the linkages between different enablers. Three examples are presented using onion diagrams: circular construction focus, business model focus and digital twin in industrialisation focus. Finally, this work discusses the intersection of relational ontology with process modelling to design future digital fabrication work routines.
Special issue ISARC 2021
(2022)
The research field of construction robotics is broadening in terms of complexity, approaches, technologies used, active stakeholders, and application areas. Worldwide labour and resource shortages, the need to increase circularity and resource efficiency, new materials, and the increasing utilisation of digital construction tools in the planning and construction industry massively spur the uptake of robotic solutions for on-site construction.
The initial boom of construction robots happened in the 1970s, driven by the Japanese construction industry. In the 1980s, a combination with parallel developments was supposed to achieve complete, integrated robotic on-site factories. From the mid-1980s onwards, the global interest in construction robots decreased gradually. Bulky and expensive systems, complex on-site navigation and logistics approaches, a narrow scope of tasks, inflexibility, incompatibility with on-site work organisation and professional qualification, low usability and insufficient inter-robot coordination capabilities revealed the immaturity of the systems. Only a few organisations predominantly situated in Asia such as Takenaka, Obayashi, Kajima Corporation, Nihon Bisho Co., Samsung, and Hitachi maintained development activities.
However, since the mid-2010s, development activities have been gaining traction again. On the application side, this is mainly driven by trends such as the need to upgrade the energy performance of buildings in Europe, a global necessity to remove asbestos from existing structures, and a demand for enormous quantities of high-rise buildings all over East Asia. On the system side, the renewed interest stems from major advances in physical–mechanical robot technology in other automation-driven industries such as the automotive industry. Robots became lighter and more flexible, their parts modular and interchangeable, and the systems more user-friendly as well as significantly cheaper. On the digital side, the BIM-to-Robot pipeline has been the subject of intensive research and development. More and more methods and tools help to increase the usability of robots and facilitate the simulation and optimisation of robot-driven construction processes.
In the last 4–5 years, the worldwide growing need and interest in construction robotics has become highly evident. More than 200 robot systems are being pushed to the market by start-ups, spin-offs, and their investors. This is backed by an enormous number of activities and projects carried out in the academic area, pushing the boundaries of what is technologically possible.
Major associations and their conferences, such as ISARC (International Association for Automation and Robotics in Construction), EC3 (European Council on Computing in Construction), and Robots in Architecture, are increasing significantly in popularity. Competency in digital construction, automation and robotics is becoming key for all stakeholders in the construction industry, and many universities worldwide are launching dedicated interdisciplinary programs. Powerful governments (China) and major funding programs such as Horizon Europe (Europe) massively request and fund the development of robotic solutions for construction such as drones, mobile robots, 3D-printing solutions, cable-driven robots, and exoskeletons. Regulators and standardisation organisations are starting to develop the first certification and standardisation schemes for construction robots, and large software companies are attempting to make it possible to simulate and program robotic construction processes efficiently and robustly based on digital building and construction data.
To showcase the diversity of cutting-edge research in the area, this special issue invited eight extended versions of selected papers from the ISARC 2021 conference. As such, this issue covers digital approaches to embed fabrication and robot information in BIM and IFC and to program robots directly from digital building models. New robot systems spur novel robotic production processes, and machine learning enables novel logistics approaches for building components that may ultimately lead to robotic cranes and other robotic on-site logistics and handling solutions (including autonomous construction machines). In parallel, systematic evaluation and robot development methods are being developed that shed light on their performance in the construction process.
Dynamic Time Warping (DTW) is a well-known similarity measure for time series. The standard dynamic programming approach to compute the DTW distance of two length-n time series, however, requires O(n²) time, which is often too slow for real-world applications. Therefore, many heuristics have been proposed to speed up the DTW computation. These are often based on lower bounding techniques, approximating the DTW distance, or considering special input data such as binary or piecewise constant time series. In this paper, we present a first exact algorithm to compute the DTW distance of two run-length encoded time series whose running time only depends on the encoding lengths of the inputs. The worst-case running time is cubic in the encoding length. In experiments we show that our algorithm is indeed fast for time series with short encoding lengths.
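For reference, the standard O(n·m) dynamic program mentioned above is sketched below; it is the baseline that the run-length-encoded algorithm improves upon for compressible series, not the paper's algorithm itself, and the squared pointwise cost is an illustrative choice.

```python
# Standard O(n*m) dynamic program for the DTW distance (the baseline the
# run-length-encoded algorithm above improves upon); squared pointwise cost
# is an illustrative assumption.
import math

def dtw_distance(a, b):
    """DTW distance between two numeric sequences."""
    n, m = len(a), len(b)
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            dp[i][j] = cost + min(dp[i - 1][j],      # step in a only
                                  dp[i][j - 1],      # step in b only
                                  dp[i - 1][j - 1])  # step in both
    return dp[n][m]

print(dtw_distance([1, 2, 2, 3], [1, 2, 3, 3]))
```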
The DevOps paradigm is widely used in industry to develop software faster and to deploy high-quality, frequent feature releases by integrating and harmonizing Development and IT Operations activities.
Industries are taking strategic decisions to remove the barriers that existed between Development and Operations teams by encouraging collaboration among these teams throughout the System Development Life Cycle (SDLC). These strategic decisions to implement the DevOps paradigm have resulted in the development and emergence of large arrays of tool chains to support, monitor, and automate activities of the various SDLC stages. In this paper, the authors attempt to give practical insights into how the use of DevOps can speed up the management, development and deployment process of a simple web application. A widely used DevOps model consisting of eight stages is used to implement the example application. A toolchain consisting of state-of-the-art tools is used at the various DevOps stages. A detailed explanation of each tool, including details of their implementation, and a short evaluation conclude the study. The results revealed that the use of DevOps accelerates the development process of web applications, as most steps during the build and testing process can be automated. Especially the outsourcing of operational overhead to an external cloud provider can lead to economic advantages, which will impact the future of software development.
The modular addition is used as a non-linear operation in ARX ciphers because it achieves the requirement of introducing non-linearity in a cryptographic primitive while only taking one clock cycle to execute on most modern architectures. This makes ARX ciphers especially fast in software implementations, but comes at the cost of making it harder to protect against side-channel information leakages using Boolean masking: the best known 2-share masked adder for ARM Thumb micro-controllers takes 83 instructions to add two 32-bit numbers together. Our approach is to operate in bitsliced mode, performing 32 additions in parallel on a 32-bit microcontroller. We show that, even after taking into account the cost of bitslicing before and after the encryption, it is possible to achieve a higher throughput on the tested ciphers (CRAX and ChaCha20) when operating in bitsliced mode. Furthermore, we prove that no first-order information leakage is happening in either simulated power traces or power traces acquired from real hardware, after sufficient countermeasures are put into place to guard against pipeline leakages.
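The bitsliced representation mentioned above can be illustrated generically: 32 independent 32-bit values are transposed into 32 bit-plane words, so that one logical instruction then acts on one bit of all 32 values at once. The sketch below is a plain-Python illustration of that transform, not the paper's ARM implementation.

```python
# Generic illustration of bitslicing (not the paper's ARM code): transpose
# 32 independent 32-bit values into 32 bit-plane words and back.

def bitslice(values):
    """Transpose 32 32-bit integers into 32 bit-plane words."""
    assert len(values) == 32
    planes = []
    for bit in range(32):
        word = 0
        for lane, v in enumerate(values):
            word |= ((v >> bit) & 1) << lane
        planes.append(word)
    return planes

def unbitslice(planes):
    """Inverse transform: recover the original 32 values."""
    values = []
    for lane in range(32):
        v = 0
        for bit, word in enumerate(planes):
            v |= ((word >> lane) & 1) << bit
        values.append(v)
    return values

data = list(range(32))
assert unbitslice(bitslice(data)) == data  # round-trip sanity check
```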
Following an introduction to the fundamentals of informed consent, recommendations for its implementation are summarized and particular considerations for designing the consent process for people with aphasia are elaborated. Finally, the aphasia-friendly approach taken in the project DeinHaus 4.0 Oberpfalz is briefly outlined.
This article provides a mathematical classification of artifacts from arbitrary incomplete X-ray tomography data when using the classical filtered backprojection algorithm. Using microlocal analysis, we prove that all artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of the data set boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. The object-dependent artifacts are generated by singularities of the object being scanned and these artifacts can extend all along lines. This is a generalization of the streak artifacts observed in limited angle CT. The article also characterizes two new phenomena: the object-independent artifacts are caused only by the geometry of the data set boundary; they occur along lines if the boundary of the data set is not smooth and along curves if the boundary of the data set is smooth. In addition to the geometric description of artifacts, the article also provides characterizations of their strength in Sobolev scale in certain cases. Moreover, numerical reconstructions from simulated and real data are presented illustrating our theorems. This work is motivated by a reconstruction we present from a synchrotron data set in which artifacts along lines appeared that were independent of the object. The results of this article apply to a wide range of well-known incomplete data problems, including limited angle CT and region of interest tomography, as well as to unconventional X-ray CT imaging setups. Some of those problems are explicitly addressed in this article, theoretically and numerically.
The use of quantum processing units (QPUs) promises speed-ups for solving computational problems, but the quantum devices currently available possess only a very limited number of qubits and suffer from considerable imperfections. One possibility to progress towards practical utility is to use a co-design approach: Problem formulation and algorithm, but also the physical QPU properties are tailored to the specific application. Since QPUs will likely be used as accelerators for classical computers, details of systemic integration into existing architectures are another lever to influence and improve the practical utility of QPUs.
In this work, we investigate the influence of different parameters on the runtime of quantum programs on tailored hybrid CPU-QPU-systems. We study the influence of communication times between CPU and QPU, how adapting QPU designs influences quantum and overall execution performance, and how these factors interact. Using a simple model that allows for estimating which design choices should be subjected to optimisation for a given task, we provide an intuition to the HPC community on potentials and limitations of co-design approaches. We also discuss physical limitations for implementing the proposed changes on real quantum hardware devices.
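As a rough illustration of such a model (a toy sketch with made-up parameter values, not the authors' model), the wall time of a hybrid variational loop can be decomposed into CPU-QPU communication and QPU execution per iteration, which makes it easy to compare which design lever pays off more:

```python
def hybrid_runtime(iterations, t_comm, circuit_depth, gate_time, shots):
    """Toy estimate of the wall time of a hybrid CPU-QPU loop: per iteration,
    one CPU<->QPU round trip plus `shots` executions of the circuit."""
    t_qpu = circuit_depth * gate_time * shots
    return iterations * (t_comm + t_qpu)

# Which (hypothetical) improvement matters more for this configuration?
base        = hybrid_runtime(iterations=200, t_comm=5e-3, circuit_depth=400,
                             gate_time=50e-9, shots=1000)
faster_link = hybrid_runtime(200, 1e-3, 400, 50e-9, 1000)   # reduce communication time
shallower   = hybrid_runtime(200, 5e-3, 100, 50e-9, 1000)   # co-designed, shallower circuit
print(f"baseline {base:.2f}s, faster link {faster_link:.2f}s, shallower circuit {shallower:.2f}s")
```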
In this paper, we present a new Hybrid Genetic Search (HGS) algorithm for solving the Capacitated Vehicle Routing Problem with Pickup and Delivery (CVRPPD) as it is required for public transport in rural areas. One of the biggest peculiarities here is that a large area has to be covered with as few vehicles as possible. The basic idea of the algorithm is based on a more general version of HGS, which we adapted to solve the CVRPPD in rural areas. It also implements improvements that accelerate the algorithm and thereby generate the fastest route more quickly. We tested the algorithm on real road data from Roding, a rural district in Bavaria, Germany. Moreover, we designed an API for converting data from the Openrouteservice, so that our algorithm can also be applied to real-world examples.
Independent component analysis (ICA), being a data-driven method, has been shown to be a powerful tool for functional magnetic resonance imaging (fMRI) data analysis. One drawback of this multivariate approach is that it is not, in general, compatible with the analysis of group data. Various techniques have been proposed to overcome this limitation of ICA. In this paper, a novel ICA-based workflow for extracting resting-state networks from fMRI group studies is proposed. An empirical mode decomposition (EMD) is used, in a data-driven manner, to generate reference signals that can be incorporated into a constrained version of ICA (cICA), thereby eliminating the inherent ambiguities of ICA. The results of the proposed workflow are then compared to those obtained by a widely used group ICA approach for fMRI analysis. In this study, we demonstrate that intrinsic modes, extracted by EMD, are suitable to serve as references for cICA. This approach yields typical resting-state patterns that are consistent over subjects. By introducing these reference signals into the ICA, our processing pipeline yields comparable activity patterns across subjects in a mathematically transparent manner. Our approach provides a user-friendly tool to adjust the trade-off between a high similarity across subjects and preserving individual subject features of the independent components.
Independent component analysis (ICA), as a data-driven method, has been shown to be a powerful tool for functional magnetic resonance imaging (fMRI) data analysis. One drawback of this multivariate approach is that it is not naturally suited to the analysis of group studies. Therefore, various techniques have been proposed to overcome this limitation of ICA. In this paper, a novel ICA-based workflow for extracting resting-state networks from fMRI group studies is proposed. An empirical mode decomposition (EMD) is used to generate reference signals in a data-driven manner, which can be incorporated into a constrained version of ICA (cICA), helping to overcome its inherent ambiguities. The results of the proposed workflow are then compared to those obtained by a widely used group ICA approach. It is demonstrated that intrinsic modes, extracted by EMD, are suitable to serve as references for cICA to obtain typical resting-state patterns that are consistent across subjects. The novel processing pipeline makes it transparent to the user how comparable activity patterns across subjects emerge, and the trade-off between similarity across subjects and preserving individual features can be adjusted and adapted to different requirements.
Background and objective
The study follows the proposal of decomposing a given data matrix into a product of independent spatial and temporal component matrices. A multi-variate decomposition approach is presented, based on an approximate diagonalization of a set of matrices computed using a latent space representation.
Methods
The proposed methodology follows an algebraic approach, which is common to spatial, temporal, or spatiotemporal blind source separation algorithms. More specifically, the algebraic approach relies on singular value decomposition techniques, thereby avoiding computationally costly and numerically unstable matrix inversion. The method is equally applicable to correlation matrices determined from second-order or fourth-order correlations.
Results
The resulting algorithms are applied to fMRI data sets either to extract the underlying fMRI components or to extract connectivity maps from resting state fMRI data collected for a dynamic functional connectivity analysis. Intriguingly, our algorithm shows increased spatial specificity compared to common approaches, while temporal precision stays similar.
Conclusion
The study presents a novel spatiotemporal blind source separation algorithm, which is both robust and avoids parameters that are difficult to fine-tune. Applied to experimental data sets, the new method yields highly confined and focused areas with the least spatial extent in the retinotopy case, and results similar to other blind source separation algorithms in the dynamic functional connectivity analyses. Therefore, we conclude that our novel algorithm is highly competitive and yields results that are superior or at least comparable to those of existing approaches.
Investigating the temporal variability of functional connectivity is an emerging field in connectomics. Dynamic functional connectivity, obtained by applying sliding-window techniques to resting-state fMRI (rs-fMRI) time courses, emerged from this topic. We introduce frequency-resolved dynamic functional connectivity (frdFC) by means of multivariate empirical mode decomposition (MEMD) followed by filter-bank investigations. In general, we find that MEMD is capable of generating time courses to perform frdFC, and we discover that the structure of connectivity-states is robust over frequency scales and even becomes more evident with decreasing frequency. This scale-stability varies with the number of extracted clusters when applying k-means. We find a scale-stability drop-off from k = 4 to k = 5 extracted connectivity-states, which is corroborated by null-models, simulations, theoretical considerations, filter-banks, and scale-adjusted windows. Our filter-bank studies show that filter design is more delicate in the rs-fMRI case than in the simulated case. Besides offering a baseline for further frdFC research, we suggest and demonstrate the use of scale-stability as a possible quality criterion for connectivity-state and model selection. We present first evidence that connectivity-states are both a multivariate and a multiscale phenomenon. A data repository of our frequency-resolved time series is provided.
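For orientation, the generic sliding-window dFC pipeline that frdFC extends can be sketched as follows; this toy example (random data, illustrative region count, window length, and k, and no MEMD step) only shows windowed correlation matrices being clustered into connectivity-states.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_timepoints, n_regions = 300, 10
ts = rng.standard_normal((n_timepoints, n_regions))  # stand-in for rs-fMRI time courses

window, step = 40, 5
iu = np.triu_indices(n_regions, k=1)                 # upper triangle of the FC matrix
fc_vectors = []
for start in range(0, n_timepoints - window + 1, step):
    fc = np.corrcoef(ts[start:start + window].T)     # windowed functional connectivity
    fc_vectors.append(fc[iu])                        # vectorise for clustering
fc_vectors = np.array(fc_vectors)

# Cluster the windowed FC patterns into k connectivity-states.
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(fc_vectors)
print(states)
```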
On embedded processors that are increasingly equipped with multiple CPU cores, static hardware partitioning is an established means of consolidating and isolating workloads onto single chips. This architectural pattern is suitable for mixed-criticality workloads that need to satisfy both real-time and safety requirements, given suitable hardware properties. In this work, we focus on exploiting contemporary virtualisation mechanisms to achieve freedom from interference, that is, isolation between workloads. Possibilities to achieve temporal and spatial isolation while maintaining real-time capabilities include statically partitioning resources, avoiding the sharing of devices, and ascertaining zero interventions of superordinate control structures. This eliminates overhead due to hardware partitioning, but implies certain hardware capabilities that are not yet fully implemented in contemporary standard systems. To address such hardware limitations, the customisable and configurable RISC-V instruction set architecture offers the possibility of swift, unrestricted modifications. We present findings on the current RISC-V specification and its implementations that necessitate interventions of superordinate control structures. We identify numerous issues adverse to implementing our goal of achieving zero interventions and thus zero overhead, both on the design level and especially with regard to interrupt handling. Based on micro-benchmark measurements, we discuss the implications of our findings and argue how they can provide a basis for future extensions and improvements of the RISC-V architecture.
Over the last years, configurational research has become increasingly popular in the Information Systems (IS) discipline. Researchers value configurational methods like Qualitative Comparative Analysis (QCA) as their application contributes to a better understanding of complex phenomena. QCA helps to uncover interrelations of conditions that lead to an outcome, building on the principles of equifinality, conjunctural causation, and asymmetry. More recently, IS researchers have started to analyze qualitative data, like case study data, with QCA. However, there is a lack of methodological guidance on how to calibrate qualitative data into set membership values for QCA. Therefore, this paper structures methodological steps and the associated options to calibrate qualitative data from an interdisciplinary perspective and critically reviews the observed methodological choices in IS research. This paper also gives recommendations for calibrating qualitative data to support informed methodological choices for future research.
A Deep Learning System to Transform Cross-Section Spectra to Varying Environmental Conditions
(2022)
Absorption cross-sections provide a basis for many gas sensing applications. Therefore, any error in molecular cross-sections caused by varying environmental conditions propagates to spectroscopic applications. Original molecular cross-sections under varying environmental conditions can only be simulated for some molecules, whereas for most multi-atom molecules, one must rely on high-precision measurements at certain environmental configurations. In this study, a deep learning system trained with simulated absorption cross-sections for predicting cross-sections at a different pressure configuration is presented. The system’s capability to transfer to measured, multi-atom cross-sections is demonstrated. Thus, it provides an alternative to (pseudo-) line lists whenever the information required for simulation is unavailable. The predictive performance of the system was evaluated on simulated validation data, and its transfer learning capabilities were demonstrated on measured chlorine nitrate data. In comparison with line lists, the system performs slightly worse than pseudo-line lists, but its predictive quality is still deemed acceptable, with less than 5% relative integral change and a highly localized error around the peak center. This opens a promising way for further research into using deep learning to simulate the effect of varying environmental conditions on absorption cross-sections.
When done right, the use of low-code development promises a significant competitive advantage in the software development process for organizations. Thus, multiple vendors have created low-code development platforms to ease the use of low-code development. However, current research on low-code development platforms mainly focuses on the technological aspects of the platforms, not on their adoption. Hence, it remains unclear what drives and inhibits the adoption of low-code development platforms. We conducted a literature review and identified thirteen factors that inhibit the adoption and seven factors that drive it. We structure these factors along the diffusion of innovations framework, which helps to disentangle drivers and inhibitors. As a result, we provide an initial explanation of the adoption of low-code development platforms. Nevertheless, we conclude that existing research on the adoption of low-code development platforms is not specific enough to understand the phenomenon substantially. Further, for some factors (e.g., cost), there is disagreement in the academic literature on whether they are drivers or inhibitors. Hence, we identify gaps and derive avenues for future research.
We introduce the Merkel Podcast Corpus, an audio-visual-text corpus in German collected from 16 years of (almost) weekly Internet podcasts of former German chancellor Angela Merkel. To the best of our knowledge, this is the first single-speaker corpus in the German language consisting of audio, visual, and text modalities of comparable size and temporal extent. We describe the methods with which we collected and edited the data, which involve downloading the videos, transcripts, and other metadata, forced alignment, active speaker recognition, and face detection to finally curate the single-speaker dataset consisting of utterances spoken by Angela Merkel. The proposed pipeline is general and can be used to curate other datasets of a similar nature, such as talk show contents. Through various statistical analyses and applications of the dataset in talking face generation and TTS, we show the utility of the dataset. We argue that it is a valuable contribution to the research community, in particular due to its realistic and challenging material at the boundary between prepared and spontaneous speech.
Accepted at LREC 2022
Increasing user participation or changing behavior are key goals when applying gamification. Existing studies in domains such as education, health, and enterprise show that gamification can have a positive impact on meeting these goals. However, there is still a lack of detailed insights into how certain game design elements affect user behavior and motivation. To gain further insight, this paper presents a user study in the field with 20,000 participants of a mobile e-commerce application over a one-month time period to analyze the impact of gamification in the e-commerce domain and to compare the effectiveness of tangible versus intangible rewards. Results show that gamification has a positive impact in the e-commerce domain. The study also reveals that tangible rewards increase the user activity substantially more than intangible rewards. We further show how tangible rewards affect certain user types and provide a first discussion on the lastingness of these rewards.
Lately, parallel task models have received much attention in the development of real-time multiprocessor systems, as they allow highly compute-intensive tasks to have shorter deadlines which is very much required in modern reactive systems. However, missing modularity and portability can make parallel programming a cumbersome endeavor. As a consequence, compute-intensive sectors in the desktop and server segment have relied on parallelism frameworks such as Intel Threading Building Blocks, Cilk and OpenMP. These parallelism frameworks, however, are optimized for decent average case performance and consequently, do not meet the strict requirements imposed by real-time systems.
In this paper, we present a proof-of-concept parallelism framework which was implemented specifically for soft real-time systems, with the tight timing and safety requirements of such critical systems in mind. The proposed runtime system implements static memory allocation in a work-stealing environment that conforms to the strict space and tight probabilistic time bounds of work-stealing schedulers. Furthermore, we evaluate the performance of this framework by conducting multiprogrammed benchmarks on a real-time embedded multicore architecture.
Background:
Brain lesions in language-related cortical areas remain a challenge in the clinical routine. In recent years the resting-state fMRI (rs-fMRI) was shown to be a feasible method for preoperative language assessment. The aim of this study was to examine whether language-related resting-state components, which have been obtained using a data-driven independent-component-based identification algorithm, can be supportive in determining language dominance in the left or right hemisphere.
Methods:
Twenty patients suffering from brain lesions close to presumed language-relevant cortical areas were included. Rs-fMRI and task-based fMRI (tb-fMRI) were performed for the purpose of preoperative language assessment. Tb-fMRI included a verb generation task with an appropriate control condition (a syllable switching task) to decompose language-critical and language-supportive processes. Subsequently, the best-fitting ICA component for the resting-state language network (RSLN) with reference to general linear models (GLMs) of the tb-fMRI (including models with and without linguistic control conditions) was identified using an algorithm based on the Dice index.
Results:
The RSLNs associated with GLMs using a linguistic control condition led to significantly higher laterality indices (LIs) than GLM baseline contrasts. LIs derived from GLM contrasts with and without control conditions alone did not differ significantly.
Conclusion:
In general, the results suggest that determining language dominance in the human brain is feasible both with tb-fMRI and rs-fMRI, and in particular, the combination of both approaches yields a higher specificity in preoperative language assessment. Moreover, we can conclude that the choice of the language mapping paradigm is crucial for the mentioned benefits.
This short survey reviews the recent literature on the relationship between the brain structure and its functional dynamics. Imaging techniques such as diffusion tensor imaging (DTI) make it possible to reconstruct axonal fiber tracks and describe the structural connectivity (SC) between brain regions. By measuring fluctuations in neuronal activity, functional magnetic resonance imaging (fMRI) provides insights into the dynamics within this structural network. One key for a better understanding of brain mechanisms is to investigate how these fast dynamics emerge on a relatively stable structural backbone. So far, computational simulations and methods from graph theory have been mainly used for modeling this relationship. Machine learning techniques have already been established in neuroimaging for identifying functionally independent brain networks and classifying pathological brain states. This survey focuses on methods from machine learning, which contribute to our understanding of functional interactions between brain regions and their relation to the underlying anatomical substrate.
Semantic alignment of application software components’ ontologies is of great interest in vehicle application domains that manipulate heterogeneous, overlapping knowledge application frameworks. In the past few years, with the growth of novel vehicle service requirements such as autonomous driving, V2X (vehicle-to-everything) communication, and many others, automotive application software component models have become increasingly collaborative with other qualified cross-enterprise industrial partners to accomplish these complex service requirements. The most daunting impediment to this cross-enterprise collaboration is semantic interoperability. For efficient service collaboration through cross-enterprise semantic interoperability between the vehicle application frameworks’ software components, aligning the interface ontologies of these components by identifying the depth of semantic alignment relationships between the concepts of the interface ontologies is the major focus of this paper. In contrast to several existing ontology structural metrics, this work defines, evaluates, and validates ontology metrics to measure the depth of semantic alignment between the vehicle domain software component frameworks’ interface ontological models. To emphasize the substantial role of semantic alignment of software component frameworks’ interface ontologies in semantic interoperability, a typical vehicle domain case study involving vehicle applications is considered for demonstration.
When an incremental release of a web application is deployed, the structure of data already persisted in the production database may no longer match what the application code expects. Traditionally, eager schema migration is called for, where all legacy data is migrated in one go. With the growing popularity of schema-flexible NoSQL data stores, lazy forms of data migration have emerged: legacy entities are migrated on the fly, one at a time, when they are loaded by the application. In this demo, we present Datalution, a tool demonstrating the merits of lazy data migration. Datalution can apply chains of pending schema changes, due to its Datalog-based internal representation. The Datalution approach thus ensures that schema evolution, as part of continuous deployment, is carried out correctly.
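The general idea of lazy migration with a chain of pending schema changes can be sketched without Datalution's Datalog machinery; the following hypothetical example migrates a legacy entity on load, applying all pending changes one version at a time.

```python
def v1_to_v2(e):
    """Pending change 1: rename the field 'name' to 'full_name'."""
    e = dict(e)
    e["full_name"] = e.pop("name")
    e["version"] = 2
    return e

def v2_to_v3(e):
    """Pending change 2: add an 'email' field with a default value."""
    e = dict(e)
    e.setdefault("email", "unknown")
    e["version"] = 3
    return e

MIGRATIONS = {1: v1_to_v2, 2: v2_to_v3}

def load_entity(entity, target_version=3):
    """Apply the chain of pending schema changes lazily, when the entity is loaded."""
    while entity.get("version", 1) < target_version:
        entity = MIGRATIONS[entity["version"]](entity)
    return entity

legacy = {"version": 1, "name": "Ada Lovelace"}   # persisted under an old schema
print(load_entity(legacy))
# {'version': 3, 'full_name': 'Ada Lovelace', 'email': 'unknown'}
```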
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered.
Practice projects in cooperation with external clients are becoming increasingly popular in project management teaching: learning by doing. Digital teaching opens up opportunities here: it enables, for example, collaboration over larger distances and serves the transfer of ideas and experience from online teaching into the business world. The specific experiences from the perspective of OTH Regensburg and of the client, the management consultancy fifty1 from Vienna, are presented. Students developed ideas for digital consulting products that are being further developed into marketable offerings by the client.
- Plan and control projects with Excel
- With a practical example, built up step by step
- Deadlines, costs, and resources under control
- Useful VBA macros for project managers
- Business intelligence reports with Power Query and Power BI Desktop
Planning, monitoring, and controlling projects: this can also be done with Excel in Microsoft 365. Ignatz Schels and Prof. Dr. Uwe M. Seidel are experienced project managers and project controllers. They show you how to use Microsoft's spreadsheet program for efficient project management. You practice on a real project: you create checklists, project structures, and cost plans, monitor deadlines and budgets, and document with infographics and charts. With the two authors, you get to know the best functions and the most important analysis tools of Excel and program your first macros in the macro language VBA. Project management with Excel: try it out, it works! The new edition includes practical examples for the BI tools Power Query, Power Pivot, and Power BI as well as tips on the latest Excel functions and tools such as dynamic arrays. All examples, tools, and VBA macros are available for download at plus.hanser-fachbuch.de.
Currently, Parkinson’s Disease (PD) has no cure or accurate diagnosis, with approximately 60,000 new cases yearly worldwide, occurring more often in the elderly population. Its main symptoms cannot easily be distinguished from those of other illnesses, making the disease much more difficult to identify at early stages. As such, computer-aided tools have recently been used to assist in this task, but the challenge of automatically identifying Parkinson’s Disease still persists. To cope with this problem, we propose to employ Restricted Boltzmann Machines (RBMs) to learn features in an unsupervised fashion by analyzing images from handwriting exams, which aim at assessing the writing skills of potential individuals. Impaired handwriting is one of the main symptoms of PD, since this ability ends up being severely affected. We show that RBMs can learn proper features that help supervised classifiers in the task of automatic identification of PD patients, and that a more compact representation of the exam can be obtained for the sake of storage and computational load.
We present jHound, a tool for profiling large collections of JSON data, and apply it to thousands of data sets holding open government data. jHound reports key characteristics of JSON documents, such as their nesting depth. As we show, jHound can help detect structural outliers, and most importantly, badly encoded documents: jHound can pinpoint certain cases of documents that use string-typed values where other native JSON datatypes would have been a better match. Moreover, we can detect certain cases of maladaptively structured JSON documents, which obviously do not comply with good data modeling practices. By interactively exploring particular example documents, we hope to inspire discussions in the community about what makes a good JSON encoding.
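One of the reported characteristics, the nesting depth of a JSON document, is straightforward to compute; the following is an illustrative sketch, not jHound's implementation.

```python
import json

def nesting_depth(value):
    """Maximum nesting depth of a parsed JSON value (scalars have depth 0)."""
    if isinstance(value, dict):
        return 1 + max((nesting_depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((nesting_depth(v) for v in value), default=0)
    return 0

doc = json.loads('{"dataset": {"title": "budget", "resources": [{"format": "CSV"}]}}')
print(nesting_depth(doc))  # 4: object -> object -> array -> object
```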
SQL-on-Hadoop processing engines have become state-of-the-art in data lake analysis. However, the skills required to tune such systems are rare. This has inspired automated tuning advisors which profile the query workload and produce tuning setups for the low-level MapReduce jobs. Yet with highly dynamic query workloads, repeated re-tuning costs time and money in IaaS environments. In this paper, we focus on reducing the costs for up-front tuning. At the heart of our approach is the observation that a SQL query is compiled into a query plan of MapReduce jobs. While the plans differ from query to query, single jobs tend to be similar between queries. We introduce the notion of the code signature of a MapReduce job and, based on this, our concept of job similarity. We show that we can effectively recycle tuning setups from similar MapReduce jobs already profiled. In doing so, we can leverage any third-party tuning advisor for MapReduce engines. We are able to show that by recycling tuning setups, we can reduce the time spent on profiling by 50% in the TPC-H benchmark.
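The recycling idea can be sketched as a cache keyed by a job's code signature; everything below (the signature inputs, configuration keys, and the advisor stub) is hypothetical and only illustrates how an already-profiled tuning setup could be reused for a similar job.

```python
import hashlib

def code_signature(mapper_src: str, reducer_src: str, input_format: str) -> str:
    """Hypothetical code signature of a MapReduce job derived from its code artefacts."""
    h = hashlib.sha256()
    for part in (mapper_src, reducer_src, input_format):
        h.update(part.encode("utf-8"))
    return h.hexdigest()

tuning_cache = {}  # signature -> tuning setup produced by a third-party tuning advisor

def tuning_for(job, profile_and_tune):
    """Recycle a cached tuning setup for similar jobs instead of re-profiling."""
    sig = code_signature(job["mapper"], job["reducer"], job["format"])
    if sig not in tuning_cache:
        tuning_cache[sig] = profile_and_tune(job)   # expensive: run the advisor once
    return tuning_cache[sig]

# Stub advisor returning a fixed setup; a real advisor would profile the job.
advisor = lambda job: {"mapreduce.task.io.sort.mb": 256,
                       "mapreduce.reduce.shuffle.parallelcopies": 10}
job_a = {"mapper": "select_filter.java", "reducer": "sum.java", "format": "parquet"}
job_b = dict(job_a)                                  # a similar job from another query plan
print(tuning_for(job_a, advisor) is tuning_for(job_b, advisor))  # True: setup was recycled
```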
This paper presents the structure-from-motion (SFM) toolchain Bundler for generating point clouds from uncalibrated camera images, together with options for optimizing it using recent methods for interest point extraction. Extracting salient points from images is one of the most time-consuming and challenging tasks in the three-dimensional reconstruction process. Especially with regard to integrating structure-from-motion methods on embedded systems, computation time and memory consumption are of high interest. For three-dimensional reconstruction algorithms, finding accurate point correspondences is a high priority, so the requirements on point detector, descriptor, and matcher are correspondingly high. The following interest point extraction approaches were examined: the scale-invariant feature transform (SIFT) used by the Bundler SFM toolchain, as well as Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), and Fast Retina Keypoint (FREAK). It is shown that replacing SIFT in the toolchain, especially with the binary descriptors BRISK and FREAK, can lead to substantially shorter runtimes. This is particularly relevant for embedded systems with limited computing power. Regarding the accuracy of the found point correspondences, both FREAK and BRISK outperform the non-binary approaches in the application profile. For describing interest points, the binary approaches are a sensible optimization with respect to runtime and accuracy, while the detector contained in BRISK does not reach the performance of the non-binary alternatives. Overall, the best results (runtime and accuracy) were achieved with the combinations SURF/FREAK and SURF/BRISK. The evaluated algorithms (SIFT, SURF, BRISK, FREAK) are to be tested and evaluated in the environment of a complete 3D reconstruction toolchain in the future; the dependence on different matching methods also has to be considered. In addition, further optimization options for applications based on bundle adjustment will be investigated.
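As a small illustration of interest point matching with a binary descriptor (not the Bundler toolchain itself), the following sketch assumes opencv-python is installed and that two overlapping images img1.png and img2.png exist; binary descriptors such as BRISK are matched with the Hamming distance, which is one reason for their speed advantage over SIFT/SURF.

```python
import cv2

# Load two overlapping views in grayscale (hypothetical file names).
img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary BRISK descriptors.
brisk = cv2.BRISK_create()
kp1, des1 = brisk.detectAndCompute(img1, None)
kp2, des2 = brisk.detectAndCompute(img2, None)

# Binary descriptors are compared with the Hamming distance; cross-checking
# keeps only mutually best matches as candidate point correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"BRISK: {len(matches)} cross-checked correspondences")
```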
AUTOSAR specifies a static real-time operating system (AUTOSAR OS) tailored to the needs of the automotive industry. The AUTOSAR OS specification is essentially an evolution of OSEK OS. The main differences from OSEK OS are an extended programming interface for counters, schedule tables for mapping complex time-triggered sequences, and stack monitoring. The scheduler is priority-based and supports preemptible and non-preemptible tasks. Resource management with a priority ceiling protocol prevents deadlocks and priority inversion. Tasks are configured either as "basic tasks" or "extended tasks". After a brief introduction to the OS specification, an operating system architecture based on it is derived. The concrete implementation is split into a configuration-dependent part and a static source code part. The configuration-dependent part is produced by a source code generator. The static part is built as a layered architecture in order to separate platform-dependent from platform-independent source code, since the AUTOSAR method intended for this purpose is not sufficient. Porting the OS to a new hardware platform shows that only the platform-dependent source code layer of the operating system has to be changed. During validation of the operating system, in addition to functional tests, the performance of characteristic operations such as a context switch was measured. These values are compared with a commercial, highly optimized AUTOSAR OS.
Based on a systematic and extensive analysis of practitioner articles on shadow IT and an interview study with 16 IT executives, this article describes governance aspects of this phenomenon, thereby complementing previous academic studies. It becomes apparent that practitioners predominantly perceive IT departments as being under increasing pressure to respond more quickly to changing requirements from the business units. If IT departments cannot meet these expectations, business units and users procure solutions themselves in the form of shadow IT. As a possible response, the IT department can organize itself in a more agile manner and modernize the company's IT architecture. Another option is to harness the innovative potential of shadow IT and to actively support its implementation through organizational and technical measures. IT security management and technical protection mechanisms can help to secure the resulting solutions and minimize the risks. As a consequence of all these measures, the IT department could evolve into a user-oriented internal service provider and strategic partner for the business units.
Newly introduced functions in the automotive industry, such as autonomous driving, require the use of powerful multi-core processors and complex (POSIX-compatible) operating systems. In addition, the industry-specific price pressure leads to a consolidation of electronic control units. At the same time, automotive use requires a high level of functional safety (ASIL levels), which presupposes, among other things, robust real-time properties of the hardware and software used. As a consequence, hypervisors are used to separate hard and soft real-time systems on the same hardware. This paper examines the latency impact of various software configurations on next-generation hardware by means of a presented test setup and its results.
This contribution presents the concept comprising JiTT units, classical lectures, exercises, and the lab course. In particular, the structure of the teaching texts and the associated questions is described. Subsequently, challenges in implementing the concept are presented, including the decision on the number of teaching texts, their creation by the author, and the implementation of the entire concept on an online platform. Finally, the students' reactions and feedback on the new teaching concept are presented, based on the evaluation of the answers to the questions in the JiTT units and on student surveys using various questionnaires.
Recent advances in the development of smart homes have led to the availability of a wide variety of devices providing a high level of convenience via gesture and speech control or fully automated operation. Many smart home appliances also address the aspects of safety and electricity savings by automatically powering themselves off after not being used for a while. However, many devices remain in a typical household that are not themselves "smart", or are not primarily electric (such as heating systems). We address the savings aspect by identifying processes involving the use of multiple devices in the electrical flow data, as captured by a smart meter in a modern household, rather than focusing on a single appliance. Therefore, we introduce a novel approach to usage pattern analysis based on the idea that a pattern of device usages resulting from a resident's 'routine' (such as making breakfast) can be interpreted similarly to a natural language 'sentence'; Natural Language Processing (NLP) algorithms can then be used for interpreting the residents' behavior. We introduce the notion of bag-of-devices (BoD), derived from the bag-of-words model used in document classification. In an experiment, we show how we use this model to infer predictions about the inhabitants from device usage, such as whether the resident is leaving for the day or just fetching the newspaper.
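The bag-of-devices idea can be illustrated with a toy example: each routine yields a 'sentence' of device activations, which is vectorised by counting devices and fed to a simple classifier. Device names, routines, and data below are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Each "sentence" is the sequence of devices switched on during one routine.
sentences = [
    "kettle toaster fridge radio",                     # making breakfast
    "coffee_machine toaster fridge",                   # making breakfast
    "hall_light front_door",                           # fetching the newspaper
    "front_door hall_light",                           # fetching the newspaper
    "hall_light front_door garage_door car_charger",   # leaving for the day
]
labels = ["breakfast", "breakfast", "newspaper", "newspaper", "leaving"]

vectoriser = CountVectorizer()              # bag-of-devices instead of bag-of-words
X = vectoriser.fit_transform(sentences)
model = MultinomialNB().fit(X, labels)

new_day = ["fridge kettle radio"]
print(model.predict(vectoriser.transform(new_day)))  # likely ['breakfast']
```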
Automotive original equipment manufacturers (OEMs) and suppliers have recently started shifting their focus towards the security of their connected electronic programmable products, since cars used to be mainly mechanical products. However, this has changed due to the rising digitalization of vehicles. Security and functional safety have grown together and need to be addressed as a single issue, referred to as automotive security in the following article. One way to accomplish security is automotive security education. The scientific contribution of this paper is to establish an Automotive Penetration Testing Education Platform (APTEP). It consists of three layers representing different attack points of a vehicle: the outer, inner, and core layers. Each of those contains multiple interfaces, such as Wireless Local Area Network (WLAN) or electric vehicle charging interfaces in the outer layer, message bus systems in the inner layer, and debug or diagnostic interfaces in the core layer. One implementation of APTEP is a hardware case and a virtual platform, referred to as the Automotive Network Security Case (ANSKo). The hardware case contains emulated control units and different communication protocols. The virtual platform uses Docker containers to provide a similar experience over the internet. Both offer two kinds of challenges: the first introduces users to a specific interface, while the second combines multiple interfaces into a complex and realistic challenge. This concept is based on modern didactic theory, such as constructivism and problem-based learning. Computer science students from the Ostbayerische Technische Hochschule (OTH) Regensburg experienced the challenges as part of a special topic course and provided positive feedback.
The ongoing digitization and digitalization entail an increasing risk of privacy breaches through cyber attacks. Internet of Things (IoT) environments often contain devices monitoring sensitive data such as vital signs, movement, or surveillance data. Unfortunately, many of these devices provide limited security features. The purpose of this paper is to investigate how artificial intelligence and static analysis can be implemented in practice-oriented intelligent Intrusion Detection Systems to monitor IoT networks. In addition, the question of how static and dynamic methods can be developed and combined to improve network attack detection is discussed. The implementation concept is based on a layer-based architecture with a modular deployment of classical security analysis and modern artificial intelligence methods. To extract important features from the IoT network data, a time-based approach has been developed. Combined with network metadata, these features enhance the performance of the artificial-intelligence-driven anomaly detection and attack classification. The paper demonstrates that artificial intelligence and static analysis methods can be combined in an intelligent Intrusion Detection System to improve the security of IoT environments.
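The time-based feature extraction can be sketched in simplified form: flow records are aggregated per source and time window into counts and simple statistics that a downstream anomaly detector or classifier can consume. Field names, window size, and data below are illustrative assumptions.

```python
from collections import defaultdict

# (timestamp [s], source, destination, bytes) flow records from an IoT network capture
flows = [
    (0.2, "cam-1", "cloud", 900), (1.1, "cam-1", "cloud", 950),
    (2.7, "cam-1", "attacker", 40), (3.0, "cam-1", "attacker", 40),
    (3.1, "cam-1", "attacker", 40), (62.0, "cam-1", "cloud", 910),
]

def window_features(records, window_s=60):
    """Aggregate flows per (source, time window) into simple statistical features."""
    buckets = defaultdict(list)
    for ts, src, dst, size in records:
        buckets[(src, int(ts // window_s))].append((dst, size))
    features = {}
    for key, items in buckets.items():
        sizes = [s for _, s in items]
        features[key] = {
            "flow_count": len(items),
            "unique_destinations": len({d for d, _ in items}),
            "mean_bytes": sum(sizes) / len(sizes),
        }
    return features

for key, feats in window_features(flows).items():
    print(key, feats)
```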
In computer science, game-based learning is an exciting and entertaining way to learn a programming language or coding fundamentals. MonstER Park is a game which applies this concept to entity-relationship models (ERM) and teaches them in an easy, fun, and effective way. The plot of the game revolves around a theme park named MonstER Park that is opening soon but is not yet ready. The player has to talk to little monsters and create an ER diagram step by step. The player gets instant feedback, and the game continues after a task is solved correctly. On completion of the game, the player knows the following fundamentals of ER diagrams and can download a certificate: entity types, (recursive) relationships, (complex, multi-valued) attributes, (compound) primary keys, and generalization. The game is free and available at https://www.monst-er.de without any registration.