Digitalisierung
Keywords
- Betriebliches Informationssystem (10)
- Offshoring (7)
- Business-managed IT (5)
- Shadow IT (5)
- Datensicherung (4)
- IT governance (4)
- Information systems (4)
- Informationstechnik (4)
- Lean Management (4)
- Literaturbericht (4)
Institute
- Fakultät Informatik und Mathematik (114)
- Fakultät Elektro- und Informationstechnik (34)
- Laboratory for Safe and Secure Systems (LAS3) (29)
- Regensburg Strategic IT Management (ReSITM) (24)
- Labor für Digitalisierung (LFD) (21)
- Fakultät Maschinenbau (12)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (11)
- Labor eHealth (eH) (8)
- Institut für Sozialforschung und Technikfolgenabschätzung (IST) (7)
- Labor Industrielle Elektronik (5)
Review status
- peer-reviewed (103)
The estimation of illuminant color is mandatory for many applications in the field of color image quantification. However, it is an unresolved problem if no additional heuristics or restrictive assumptions apply. Assuming uniformly colored and roundly shaped objects, Lee has presented a theory and a method for computing the scene-illuminant chromaticity from specular highlights [H. C. Lee, J. Opt. Soc. Am. A 3, 1694 (1986)]. However, Lee’s method, called image path search, is less robust to noise and is limited in the handling of microtextured surfaces. We introduce a novel approach to estimate the color of a single illuminant for noisy and microtextured images, which frequently occur in real-world scenes. Using dichromatic regions of different colored surfaces, our approach, named color line search, reverses Lee’s strategy of image path search. Reliable color lines are determined directly in the domain of the color diagrams by three steps. First, regions of interest are automatically detected around specular highlights, and local color diagrams are computed. Second, color lines are determined according to the dichromatic reflection model by Hough transform of the color diagrams. Third, a consistency check is applied by a corresponding path search in the image domain. Our method is evaluated on 40 natural images of fruit and vegetables. In comparison with those of Lee’s method, accuracy and stability are substantially improved. In addition, the color line search approach can easily be extended to scenes of objects with macrotextured surfaces.
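To make the pipeline concrete, here is a minimal Python sketch (not the authors' implementation) of the second step: detecting candidate color lines by a Hough transform of a local r-g chromaticity diagram. The highlight region patch, the bin count, and the vote threshold are illustrative assumptions; the illuminant chromaticity would subsequently be estimated from the intersection of the returned lines, following the dichromatic reflection model.

    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    def color_lines(patch, bins=128, count_threshold=2):
        """Dominant lines in the r-g chromaticity diagram of an image patch."""
        rgb = patch.reshape(-1, 3).astype(float)
        s = rgb.sum(axis=1)
        keep = s > 1e-6
        r, g = rgb[keep, 0] / s[keep], rgb[keep, 1] / s[keep]
        # the 2D histogram serves as the local color diagram
        diagram, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
        # vote for straight lines through the populated histogram cells
        hspace, angles, dists = hough_line(diagram > count_threshold)
        _, best_angles, best_dists = hough_line_peaks(hspace, angles, dists, num_peaks=4)
        return best_angles, best_dists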
Integrative co-occurrence matrices are introduced as novel features for color texture classification. The extended co-occurrence notation allows a comparison between integrative and parallel color texture concepts. The information gain of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color spaces, we study combined color and intensity textures and demonstrate the existence of intensity-independent pure color patterns. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results by up to 20% and 32% over the first and second baseline, respectively.
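As a rough illustration of the integrative concept, the following sketch computes a normalized co-occurrence matrix between two different color channels for a fixed displacement. The quantization to 16 levels, the non-negative offset, and the channel pair are assumptions for this example; texture features such as contrast or energy would then be derived from the matrix as in gray-level co-occurrence analysis.

    import numpy as np

    def cross_cooccurrence(ch_a, ch_b, levels=16, offset=(0, 1)):
        """Co-occurrence of channel A at (y, x) with channel B at (y+dy, x+dx)."""
        qa = (ch_a.astype(int) * levels) // 256       # quantize 8-bit values
        qb = (ch_b.astype(int) * levels) // 256
        dy, dx = offset                               # non-negative displacement assumed
        src = qa[: qa.shape[0] - dy, : qa.shape[1] - dx]
        dst = qb[dy:, dx:]
        m = np.zeros((levels, levels))
        np.add.at(m, (src.ravel(), dst.ravel()), 1)   # count co-occurring level pairs
        return m / m.sum()

    # integrative: pair different channels; parallel: pair a channel with itself
    # m_rg = cross_cooccurrence(img[:, :, 0], img[:, :, 1])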
Multi-hop routing in mobile ad hoc networks can minimize energy consumption and increase data throughput, yet the problem of radio interference remains. However, if the routes are restricted to a basic network based on local neighborhoods, these interferences can be reduced such that standard routing algorithms can be applied.
We compare different network topologies for these basic networks, i.e. the Yao graph (aka Θ-graph) and some related, previously known models, which we call the SymmY-graph (aka YS-graph), the SparsY-graph (aka YY-graph) and the BoundY-graph. Further, we present a promising network topology called the HL-graph (based on Hierarchical Layers).
We compare these topologies regarding degree, spanner-properties, and communication features. We investigate how these network topologies bound the number of (uni- and bidirectional) interferences and whether these basic networks provide energy-optimal or congestion-minimal routing. Then, we compare the ability of these topologies to handle
dynamic changes of the network when radio stations appear and disappear. For this, we measure the number of involved radio stations and present distributed algorithms for repairing the network structure.
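The Yao construction that these variants build on is simple enough to sketch. The following Python fragment creates a directed Yao (Θ-like) graph by connecting every station to its nearest neighbour in each of k angular cones; the cone count and the random station positions are illustrative assumptions, and SymmY, SparsY, and BoundY would additionally symmetrize or bound the resulting edge set.

    import numpy as np

    def yao_graph(points, k=6):
        """Directed Yao graph: nearest neighbour per angular cone around each node."""
        n = len(points)
        edges = set()
        for i in range(n):
            d = points - points[i]
            dist = np.hypot(d[:, 0], d[:, 1])
            angle = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
            cone = (angle / (2 * np.pi / k)).astype(int)    # cone index 0..k-1
            for c in range(k):
                cand = [j for j in range(n) if j != i and cone[j] == c]
                if cand:
                    edges.add((i, min(cand, key=lambda j: dist[j])))
        return edges

    pts = np.random.rand(50, 2)            # 50 radio stations in the unit square
    print(len(yao_graph(pts)), "directed edges")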
We discuss the spectral structure and decomposition of multi-photon states. Ordinarily 'multi-photon states' and 'Fock states' are regarded as synonymous. However, when the spectral degrees of freedom are included this is not the case, and the class of 'multi-photon' states is much broader than the class of 'Fock' states. We discuss the criteria for a state to be considered a Fock state. We then address the decomposition of general multi-photon states into bases of orthogonal eigenmodes, building on existing multi-mode theory, and introduce an occupation number representation that provides an elegant description of such states. This representation allows us to work in bases imposed by experimental constraints, simplifying calculations in many situations. Finally we apply this technique to several example situations, which are highly relevant for state of the art experiments. These include Hong–Ou–Mandel interference, spectral filtering, finite bandwidth photo-detection, homodyne detection and the conditional preparation of Schrödinger kitten and Fock states. Our techniques allow for very simple descriptions of each of these examples.
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued special functions. For the complex error function w(z) = K(x, y) + iL(x, y) with z = x + iy, whose real part K is the Voigt function, the rational approximation developed by Hui, Armstrong, and Wray [Rapid computation of the Voigt and complex error functions, J. Quant. Spectrosc. Radiat. Transfer 19 (1978) 509–516] is investigated. Various optimizations of the algorithm are discussed. In many applications, where these functions have to be calculated for a large x grid with constant y, an implementation using real arithmetic and factorization of invariant terms is especially efficient.
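For orientation, the sketch below evaluates the Voigt function on a large x grid with constant y using SciPy's reference implementation of the complex error function (the Faddeeva function wofz). It is not the Hui-Armstrong-Wray rational approximation itself, whose tabulated coefficients are omitted here, but it shows the typical evaluation pattern in which all y-dependent terms are invariant across the grid and can be factored out.

    import numpy as np
    from scipy.special import wofz        # Faddeeva function w(z) = K + iL

    # Voigt function K(x, y) = Re w(x + i y) on a large x grid with constant y,
    # as typical for line-by-line radiative transfer calculations.
    x = np.linspace(-50.0, 50.0, 200_001)
    y = 0.5                               # constant damping (Lorentz/Doppler) parameter
    K = wofz(x + 1j * y).real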
We consider the development of software systems that integrate collaborative real-time modeling and distributed computing. Our main goal is user-orientation: we need a collaborative workspace for geographically dispersed users with a seamless access of every user to high-performance servers. This paper presents a particular prototype, Clayworks, that allows modeling of virtual clay objects and running computation-intensive deformation simulations for objects crashing into each other. In order to integrate heterogeneous computational resources, we adopt modern Grid middleware and provide the users with an intuitive graphical interface. Simulations are parallelized using a higher-order component (HOC) which abstracts over the web service resource framework (WSRF) used to interconnect our worksuite to the computation server. Clayworks is a representative of a large class of demanding systems which combine collaborative, user-oriented modeling with performance-critical computations, e.g., crash-tests or simulations of biological population evolution.
Quantum key distribution is among the foremost applications of quantum mechanics, both in terms of fundamental physics and as a technology on the brink of commercial deployment. Starting from principal schemes and initial proofs of unconditional security for perfect systems, much effort has gone into providing secure schemes that can cope with the numerous experimental imperfections unavoidable in real-world implementations. In this paper, we provide a comparison of various schemes and protocols. We analyse their efficiency and performance when implemented with imperfect physical components. We consider how experimental faults are accounted for using effective parameters. We compare various recent protocols and provide guidelines as to which components yield the greatest improvements when upgraded.
We experimentally analyze the complete photon number statistics of parametric down-conversion and ascertain the influence of multimode effects. Our results clearly reveal a difference between single-mode theoretical description and the measured distributions. Further investigations assure the applicability of loss-tolerant photon number reconstruction and prove strict photon number correlation between signal and idler modes.
The identification of suitable applications or projects is a main initial step in any software development or maintenance related IS offshoring arrangement. This paper examines evaluation criteria and their importance for selecting application or project candidates for offshoring. Based on a literature analysis and interviews with 47 experts from 36 different German companies describing 64 case examples, we find that in contrast to the literature, “size”, “codification”, and “language” are perceived as important selection criteria by experts. Case examples additionally show that “business specificity” seems to be a main reason for application or project failures, that “business criticality” appears to be less important than suggested by the literature, and that adequate “size” might be a necessary prerequisite, but seems not to be a sufficient criterion for an application’s or project’s suitability for offshoring. These differences in comparison to findings from the literature may be explained by cultural and language differences.
Terahertzscanner
(2009)
Every security analysis of quantum-key distribution (QKD) relies on a faithful modeling of the employed quantum states. Many photon sources, such as for instance a parametric down-conversion (PDC) source, require a multimode description but are usually only considered in a single-mode representation. In general, the important claim in decoy-based QKD protocols for indistinguishability between signal and decoy states does not hold for all sources. We derive bounds on the single-photon transmission probability and error rate for multimode states and apply these bounds to the output state of a PDC source. We observe two opposing effects on the secure key rate. First, the multimode structure of the state gives rise to a new attack that decreases the key rate. Second, more contributing modes change the photon number distribution from a thermal toward a Poissonian distribution, which increases the key rate.
Measurement is the only part of a general quantum system that has yet to be characterised experimentally in a complete manner. Detector tomography provides a procedure for doing just this; an arbitrary measurement device can be fully characterised, and thus calibrated, in a systematic way without access to its components or its design. The result is a reconstructed POVM containing the measurement operators associated with each measurement outcome. We consider two detectors, a single-photon detector and a photon-number counter, and propose an easily realised experimental apparatus to perform detector tomography on them. We also present a method of visualising the resulting measurement operators.
Parametric down-conversion (PDC) is a technique of ubiquitous experimental significance in the production of nonclassical, photon-number-correlated twin beams. Standard theory of PDC as a two-mode squeezing process predicts and homodyne measurements observe a thermal photon number distribution per beam. Recent experiments have obtained conflicting distributions. In this article, we explain the observation by an a priori theoretical model solely based on directly accessible physical quantities. We compare our predictions with experimental data and find excellent agreement.
Safely embedded software
(2009)
Report Datenmigration
(2010)
Migrating several terabytes of data from a COBOL program running on a mainframe to an SOA architecture under Linux places special demands on tools and developers. A skillful combination of existing tools and efficient strategies avoids downtime and speeds up the data transfer.
Random numbers are a valuable component in diverse applications that range from simulations and gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of proposals for producing random numbers based on the foundational unpredictability of quantum mechanics [4–11]. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique [12–15]. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source and its verifiably unique randomness are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
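A toy numerical model of the binarization step may help to fix ideas: vacuum-state quadrature values measured by a balanced homodyne detector are Gaussian distributed, so the sign of each sample yields one raw bit. In the sketch below a pseudo-random Gaussian generator merely stands in for the digitized homodyne data, and the randomness extraction/hashing stage of a real device is omitted.

    import numpy as np

    rng = np.random.default_rng()
    samples = rng.normal(0.0, 1.0, size=100_000)   # stand-in for vacuum quadrature data

    raw_bits = (samples > 0).astype(np.uint8)      # sign of each quadrature sample
    print("raw bit bias:", abs(raw_bits.mean() - 0.5))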
Partly Proportionate fair (Partly-Pfair) scheduling, which allows task migration at runtime and assigns each task processing time according to its weight, makes it possible to build highly efficient embedded multi-core systems. Due to its non-work-conserving behavior, which may leave the CPU idle even when tasks are ready to execute, tasks finish only shortly before their deadlines are reached. The benefit is lower task jitter, but additional workload, e.g. through interrupts, can lead to deadline violations. In this paper we present a work-conserving extension of Partly-Pfair scheduling, called P-ERfair scheduling, and the algorithm P-ERfair-PD2, which applies the Pfair modifications used for Partly-Pfair to the concepts of ERfairness and the PD2 policy. With a simulation-based schedulability examination we show for multiple time base (MTB) task sets that P-ERfair-PD2 has the same performance as Partly-Pfair-PD2. Additionally, we show that P-ERfair-PD2 has a much higher robustness against perturbations and is therefore well suited for embedded domains, especially the automotive domain.
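The following minimal sketch illustrates only the proportionate-fair idea behind such schedulers: in every time slot the m tasks that have fallen furthest behind their ideal fluid allocation are executed, which is work-conserving by construction. The PD2 priority rules, MTB task handling, and all overload behaviour of P-ERfair-PD2 are deliberately left out; task weights and core count are made-up example values.

    def erfair_schedule(weights, cores, slots):
        """Largest-lag-first allocation of `cores` processors over `slots` time slots."""
        alloc = [0] * len(weights)
        schedule = []
        for t in range(1, slots + 1):
            lag = [w * t - a for w, a in zip(weights, alloc)]       # ideal minus actual
            chosen = sorted(range(len(weights)), key=lambda i: lag[i], reverse=True)[:cores]
            for i in chosen:
                alloc[i] += 1
            schedule.append(chosen)
        return schedule, alloc

    # three tasks with utilizations 0.5, 0.7 and 0.8 on two cores
    sched, alloc = erfair_schedule([0.5, 0.7, 0.8], cores=2, slots=10)
    print(alloc)    # allocations track the ideal shares 5, 7 and 8 slots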
In addition to functional requirements, embedded systems are subject in particular to non-functional quality requirements such as efficiency, reliability, and real-time capability. With the growing demand for computing capacity, however, previous concepts for increasing the performance of single-core systems can no longer be applied; the transition to multi-core systems becomes necessary. The second part of this work presents a simulation-based approach for comparing multi-core scheduling algorithms, which is used to examine algorithms for multi-core systems with full migration and dynamic task priority. We extend this approach by a procedure for examining a set of task sets with stochastically described properties and compare it with the algorithms BinPacking-EDF and P-ERfair-PD² described in Part 1 for a group of automotive powertrain systems.
The embedded market is adjusting to a new challenge: the transition from single-core to multi-core processor systems. At the same time, implementation of the ISO 26262 standard is intended to guarantee the functional safety of the electrical and electronic systems in motor vehicles. In this contribution, the Hochschule Regensburg and TÜV Süd Automotive GmbH consider the scheduling of a real-time system as a safety-relevant sub-system.
In addition to functional requirements, embedded systems are subject in particular to non-functional quality requirements such as efficiency, reliability, and real-time capability. With the growing demand for computing capacity, however, previous concepts for increasing the performance of single-core systems can no longer be applied; the transition to multi-core systems becomes necessary. The first part of this work presents a possible processor architecture for future automotive multi-core systems and the abstraction of the software for these systems. After a classification of multi-core scheduling algorithms, we present, as examples, one algorithm with static task allocation and one algorithm with dynamic task allocation. Both algorithms are adaptations of theoretically studied algorithms to automotive systems.
This paper empirically examines the current state of the IS offshoring phenomenon in Germany regarding project characteristics and success patterns. Relying on a sample of 304 projects conducted across various industry sectors and companies, the results show that IS offshoring primarily occurs in the telecommunications and IT sectors at large corporations. Cost reduction is the main reason for going offshore, and offshore projects are executed as part of a larger program at companies. Noticeably, most projects are delivered from India. Additionally, neither captive offshoring nor offshore outsourcing dominates as a delivery option. Comparing different project subgroups regarding project success, the results reveal that projects delivered by an internal or partially owned service provider are more successful. Other project characteristics, such as a project's embeddedness in a larger offshoring program, a project's size, or a project's offshoring degree in terms of relatively offshored labor hours, show few significant differences. The paper addresses the paucity of empirical research on the current state of the IS offshoring phenomenon in Germany.
High labor cost in western countries allows cost savings by companies engaging in IS offshoring. However, studies worldwide indicate that a large number of companies that engaged in IS offshoring are not satisfied with the outcome. Our study examined the determinants of IS offshore project success: We developed a model and empirically tested it with data collected from 304 experts who reported on projects offshored from Germany to a wide range of near and distant countries. The model posited a direct effect of offshoring expertise and trust in the offshore service provider (OSP) on success, as well as an indirect effect mediated by project suitability, knowledge transfer, and liaison quality. An analysis using partial least squares (PLS) provided significant support for almost all of these relationships. However, it showed that offshoring expertise played a minor role in explaining success and the mediating constructs. Trust in the OSP had a small direct effect on success and a medium to large effect on the mediating constructs. Project suitability, knowledge transfer, and liaison quality all had small direct effects on success.
Radiative transfer modelling of high resolution infrared (or microwave) spectra still represents a major challenge for the processing of atmospheric remote sensing data, despite significant advances in the numerical techniques utilized in line-by-line modelling, e.g., optimized Voigt function algorithms or multigrid approaches. Special purpose computing hardware such as Field Programmable Gate Arrays (FPGAs) can be used to cope with the dramatic increase of data quality and quantity. Utilizing a highly optimized implementation of a uniform rational function approximation of the Voigt function, the molecular absorption cross section computation, the most compute-intensive part of radiative transfer codes, has been realized on an FPGA. Design and implementation of the FPGA coprocessor are presented along with first performance tests and an outlook on the ongoing further development.
We investigate the reconstruction problem for limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, such as filtered backprojection (FBP), do not perform well in such situations. To stabilize the inversion we propose the use of a sparse regularization technique in combination with curvelets. We argue that this technique has the ability to preserve edges. As our main result, we present a characterization of the kernel of the limited angle Radon transform in terms of curvelets. Moreover, we characterize reconstructions which are obtained via curvelet sparse regularizations at a limited angular range. As a result, we show that the dimension of the limited angle problem can be significantly reduced in the curvelet domain.
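The core computational idea, sparse (l1) regularization over a frame, can be sketched in a few lines. Below, a random matrix stands in for the limited angle Radon transform composed with the curvelet synthesis operator, and iterative soft thresholding (ISTA) is used instead of the analysis carried out in the paper; dimensions, noise level, and regularization parameter are arbitrary example values.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 60, 200
    A = rng.normal(size=(m, n))                     # stand-in for Radon * curvelet synthesis
    c_true = np.zeros(n)
    c_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
    g = A @ c_true + 0.01 * rng.normal(size=m)      # noisy measurements

    # minimize 0.5*||A c - g||^2 + lam*||c||_1 by iterative soft thresholding
    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    c = np.zeros(n)
    for _ in range(500):
        z = c - step * (A.T @ (A @ c - g))
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    print("nonzero coefficients recovered:", np.count_nonzero(np.abs(c) > 1e-3))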
In this article, we study the effect of skill-biased technological change on unemployment and wage inequality in the presence of a link between social benefits and average income. In this case, an increase in the productivity of skilled workers, and hence their wage, leads to an increase in average income and hence in benefits. The increased fallback income, in turn, makes unskilled workers ask for higher wages. As higher wages are not justified by corresponding productivity increases, unemployment rises. Generally, we show that skill-biased technological change leads to increasing unemployment of the unskilled and to a moderately increasing wage inequality when benefits are endogenous. The model provides a theoretical explanation for diverging dynamics in wage inequality and unemployment under different social benefits regimes. Analysing the social legislation in 14 countries, we find that benefits are linked to the evolution of average income in Continental Europe but not in the US and the UK. Given this institutional difference, our model predicts that skill-biased technological change leads to rising unemployment in Continental Europe and rising wage inequality in the US and the UK.
Anyone who wants to illustrate their data with attractive and informative graphs usually needs a lot of patience. The R extension ggplot2 brings system to graphics, expresses itself in concise source code, and breathes fresh air into everyday data visualization.
How can functional safety in vehicles be guaranteed effectively and in a future-proof way? And how can this be achieved specifically in electrified powertrains? AVL, in cooperation with the LaS³ and the Universität der Bundeswehr München, addressed this task in a research project. The answer: automatic memory tests in combination with program flow monitoring and redundant hardware can be replaced particularly effectively by "coded processing", which increases diversity in software in order to reduce the more elaborate and costly redundancy in hardware.
On the one hand, technological innovations can create new markets and can provide solutions to current social problems; on the other hand, they can raise questions concerning environmental and/or social adequacy on short-, middle-, and long-range time scales. In order to strengthen the positive outcomes of innovations and to reduce or even prevent negative effects, the task of technology assessment is to predict the positive as well as the negative effects and repercussions of innovations as early as possible. However, it must be presupposed that, first, it is possible to make reasonably reliable predictions about the future development and, second, that regulatory measures intended to strengthen the positive outcomes of innovations and to reduce or even prevent negative effects will find their respective addressees. But if one takes, for instance, ubiquitous information and communication technology into account, it can be demonstrated that those conditions can rarely be met. This is a result of the types of innovation which take place in the case of ICT as well as of the so-called Collingridge dilemma. Both factors reduce the reliability of predictions. Furthermore, regulatory measures often do not find their addressees since the borders of the societal subsystems in which innovations take place are blurring and, therefore, these subsystems often cannot be reached with mainstream regulatory approaches such as laws.
Following up on Thomas Nagel's paper "What is it like to be a bat?" and Alan Turing's essay "Computing machinery and intelligence," it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori's concept of the "uncanny valley" as well as evidence from several empirical studies supports that assertion. Finally, some tentative conclusions concerning moral implications of the arguments presented here shall be drawn.
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered.
We investigate the reconstruction problem of limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, electron microscopy, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, e.g. filtered backprojection (FBP), do not perform well in such situations.
To stabilize the reconstruction procedure additional prior knowledge about the unknown object has to be integrated into the reconstruction process. In this work, we propose the use of the sparse regularization technique in combination with curvelets. We argue that this technique gives rise to an edge-preserving reconstruction. Moreover, we show that the dimension of the problem can be significantly reduced in the curvelet domain. To this end, we give a characterization of the kernel of the limited angle Radon transform in terms of curvelets and derive a characterization of solutions obtained through curvelet sparse regularization. In numerical experiments, we will show that the theoretical results directly translate into practice and that the proposed method outperforms classical reconstructions.
Re-irradiation of spinal column metastases by IMRT: Impact of setup errors on the dose distribution
(2013)
Background
This study investigates the impact of an automated image guided patient setup correction on the dose distribution for ten patients with in-field IMRT re-irradiation of vertebral metastases.
Methods
10 patients with spinal column metastases who had previously been treated with 3D-conformal radiotherapy (3D-CRT) were simulated to have an in-field recurrence. IMRT plans were generated for treatment of the vertebrae sparing the spinal cord. The dose distributions were compared for a patient setup based on skin marks only and a Cone Beam CT (CBCT) based setup with translational and rotational couch corrections using an automatic robotic image guided couch top (Elekta - HexaPOD™ IGuide® - system). The biological equivalent dose (BED) was calculated to evaluate and rank the effects of the automatic setup correction for the dose distribution of CTV and spinal cord.
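For reference, the biologically effective dose used for such comparisons is commonly computed from the linear-quadratic model as shown below, with n the number of fractions, d the dose per fraction, and α/β the tissue-specific sensitivity ratio; the specific α/β values assumed for the CTV and the spinal cord are not restated here.

    \mathrm{BED} = n \, d \left( 1 + \frac{d}{\alpha/\beta} \right)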
Results
The mean absolute value (± standard deviation) over all patients and fractions is 6.1 mm (±4 mm) for the translational error and 2.7° (±1.1°) for the rotational error. The dose coverage of the 95% isodose for the CTV is considerably decreased for the uncorrected table setup. This is associated with an increase of the spinal cord dose above the tolerance dose.
Conclusions
An automatic image guided table correction ensures the delivery of an accurate dose distribution and reduces the risk of radiation-induced myelopathy.
Background
The purpose of this study was to evaluate the impact of Cone Beam CT (CBCT) based setup correction on total dose distributions in fractionated frameless stereotactic radiation therapy of intracranial lesions.
Methods
Ten patients with intracranial lesions treated with 30 Gy in 6 fractions were included in this study. Treatment planning was performed with Oncentra® for a SynergyS® (Elekta Ltd, Crawley, UK) linear accelerator with XVI® Cone Beam CT, and HexaPOD™ couch top. Patients were immobilized by thermoplastic masks (BrainLab, Reuther). After initial patient setup with respect to lasers, a CBCT study was acquired and registered to the planning CT (PL-CT) study. Patient positioning was corrected according to the correction values (translational, rotational) calculated by the XVI® system. Afterwards a second CBCT study was acquired and registered to the PL-CT to confirm the accuracy of the corrections. An in-house developed software was used for rigid transformation of the PL-CT to the CBCT geometry, and dose calculations for each fraction were performed on the transformed CT. The total dose distribution was achieved by back-transformation and summation of the dose distributions of each fraction. Dose distributions based on PL-CT, CBCT (laser set-up), and final CBCT were compared to assess the influence of setup inaccuracies.
Results
The mean displacement vector, calculated over all treatments, was reduced from (4.3 ± 1.3) mm for laser based setup to (0.5 ± 0.2) mm if CBCT corrections were applied. The mean rotational errors around the medial-lateral, superior-inferior, anterior-posterior axis were reduced from (−0.1 ± 1.4)°, (0.1 ± 1.2)° and (−0.2 ± 1.0)°, to (0.04 ± 0.4)°, (0.01 ± 0.4)° and (0.02 ± 0.3)°. As a consequence the mean deviation between planned and delivered dose in the planning target volume (PTV) could be reduced from 12.3% to 0.4% for D95 and from 5.9% to 0.1% for Dav. Maximum deviation was reduced from 31.8% to 0.8% for D95, and from 20.4% to 0.1% for Dav.
Conclusion
Real dose distributions differ substantially from planned dose distributions if the setup is performed according to lasers only. Thermoplastic masks combined with a daily CBCT enabled sufficient accuracy in the dose distribution.
We consider the reconstruction problem for limited angle tomography using filtered backprojection (FBP) and lambda tomography. We use microlocal analysis to explain why the well-known streak artifacts are present at the end of the limited angular range. We explain how to mitigate the streaks and prove that our modified FBP and lambda operators are standard pseudodifferential operators, and so they do not add artifacts. We provide reconstructions to illustrate our mathematical results.
Software engineering in open source projects faces similar challenges as in traditional software development (coordination of and cooperation between contributors, change and release management, quality assurance, ...), but often uses different means of solving them. This leads to some salient distinctions between both worlds, especially with respect to communication and how technical issues are addressed. The variations within open source software (OSS) communities are considerable, and many different approaches are currently in use, ranging from informal conventions to highly systematic, formally specified and rigidly applied processes. We discuss the archetypal best practices in the field, illustrate them by presenting example projects, and provide a comparison to traditional approaches.
IT-Offshoring
(2014)
We present a transformation rule to convert linear codes into arithmetic codes. Linear codes are usually used for error detection and correction in broadcast and storage systems. In contrast, arithmetic codes are very suitable for protecting software processing in computer systems. This paper shows how to transform linear codes protecting the data stored in a computer system into arithmetic codes safeguarding the operations built on this data. Combining the advantages of both coding mechanisms increases the error detection capability in safety-critical applications for embedded systems by detecting and correcting arbitrary hardware faults.
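The arithmetic-code side of this combination can be illustrated with a tiny AN-code example: data x is kept in memory as A*x, arithmetic is carried out directly on the coded values, and any intermediate result that is not divisible by A reveals a fault. The constant A and the operations shown are illustrative only and do not reproduce the transformation rule of the paper.

    A = 59                       # code constant of the AN code (example value)

    def encode(x):
        return A * x

    def checked_add(cx, cy):
        cz = cx + cy             # addition operates directly on coded operands
        if cz % A != 0:          # a corrupted operand or a faulty adder breaks divisibility
            raise RuntimeError("coded operation failed its check")
        return cz

    def decode(cx):
        assert cx % A == 0
        return cx // A

    print(decode(checked_add(encode(20), encode(22))))    # prints 42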
Bierdeckelsalto
(2015)
A popular game consists of flicking a beer coaster lying on the edge of a table upwards from below with outstretched fingers and then catching it between finger and thumb after one or more somersaults. In physical terms, an impulsive force is applied to the coaster. Applying the linear and angular momentum theorems leads to simple estimates of the mechanics of the coaster somersault. The experiment can be reproduced with physics simulation programs, and high-speed videos complement theory and simulation.
Today, ubiquitous mobile devices have not only arrived but have entered the safety-critical domain, where systems are controlled that put human health or even human life at risk. For example, in automation systems first ideas are surfacing to control parts of the system via a COTS smartphone; another example is the idea to control the autonomous parking function of a car via a COTS smartphone. As beneficial and convenient as these ideas appear at first thought, on second thought the dangers of these approaches become obvious. Especially in case of failures, the system's safety has to be maintained. The open question is how to achieve this mandatory requirement with COTS components, e.g. smartphones, that are not developed following the development process necessary for safety-critical systems. This paper presents a concept to reliably detect human interaction while activating safety-critical functions via COTS mobile devices. Thus a means is provided to detect erroneous activation requests for the safety-critical function.
In this paper, we introduce a novel technique for pre-filtering multi-layer shadow maps. The occluders in the scene are stored as variable-length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre-filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre-filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real-time even for complex scenes and difficult setups. Our results demonstrate that our pre-filtered maps are general and particularly scalable.
Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar
(2015)
We develop a paradigm using microlocal analysis that allows one to characterize the visible and added singularities in a broad range of incomplete data tomography problems. We give precise characterizations for photoacoustic and thermoacoustic tomography and sonar, and provide artifact reduction strategies. In particular, our theorems show that it is better to arrange sonar detectors so that the boundary of the set of detectors does not have corners and is smooth. To illustrate our results, we provide reconstructions from synthetic spherical mean data as well as from experimental photoacoustic data.
We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called piecewise-constant Mumford–Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge on the gray levels nor on the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited data situations. For instance, our method is able to recover all segments of the Shepp–Logan phantom from seven angular views only. We illustrate the practical applicability on a real positron emission tomography dataset. As further applications, we consider spherical Radon data as well as blurred data.
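In the notation commonly used for this problem class (assumed here, not quoted from the paper), with A the forward operator, f the measured data, and the l0 "norm" of the gradient counting the jumps of u, the Potts (piecewise-constant Mumford-Shah) problem reads:

    \min_{u} \; \gamma \, \| \nabla u \|_{0} + \| A u - f \|_{2}^{2}, \qquad \gamma > 0

The parameter γ controls the trade-off between the number of segments and fidelity to the data.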
PURPOSE
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography.
METHODS
In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization.
RESULTS
Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection.
CONCLUSIONS
The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
An increasing amount of IS offshoring research has been published during the last four years. This paper presents a comprehensive view of the field of study from a managerial point of view. It provides a consolidated view of the field between 2010 and 2013, based on a manual search comprising 69 selected journals and 9 conferences, as well as a search using 6 journal databases. The literature review ensures continuity of research by connecting to a comprehensive literature analysis covering the years 1999 to 2009. In this way it consolidates and critically reflects the state of research of the last 14 years. Overall, we compiled 95 relevant publications originating from leading IS journals and IS conference proceedings. The results indicate that IS offshoring research is largely non-theory based, using almost entirely empirical data and interpretive research methods and, to a smaller extent, positivist research designs. The ISO research of the last 14 years focuses on the implementation stages "how" and "outcome", while the pre-implementation stages "why", "what", and "which" are comparatively sparsely researched. Future studies should apply a more theory-driven approach with greater attention to pre-implementation aspects of information systems offshoring. In addition, future research should investigate the special nature of near- and onshoring, captive offshoring, as well as agile (project) management techniques suitable for ISO.
Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design
(2015)
Discrete ill-posed problems are often encountered in engineering applications. Still, their sound analysis is not yet common practice, and difficulties arising in the determination of uncertain parameters are typically not properly addressed. This contribution provides a tutorial review of methods for identifiability analysis, regularization techniques, and optimal experimental design. A guideline for the analysis and classification of nonlinear ill-posed problems to detect practical identifiability problems is given. Techniques for the regularization of experimental design problems resulting from ill-posed parameter estimations are discussed. Applications are presented for three case studies of increasing complexity.
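A minimal numerical sketch of the central difficulty and one standard remedy discussed in such reviews: when the parameter sensitivities are nearly collinear, the ordinary least-squares estimate is unstable, while a Tikhonov-regularized estimate remains bounded. Matrix, parameters, and regularization weight are made-up example values.

    import numpy as np

    rng = np.random.default_rng(1)
    J = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])          # nearly collinear parameter sensitivities
    theta_true = np.array([2.0, 3.0])
    y = J @ theta_true + 1e-3 * rng.normal(size=2)

    theta_ls = np.linalg.solve(J.T @ J, J.T @ y)                     # ill-conditioned fit
    lam = 1e-2
    theta_tik = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ y)  # Tikhonov-regularized

    print("condition number of J^T J:", np.linalg.cond(J.T @ J))
    print("least-squares estimate:", theta_ls)
    print("regularized estimate:  ", theta_tik)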
EMDLAB: A toolbox for analysis of single-trial EEG dynamics using empirical mode decomposition
(2015)
Background:
Empirical mode decomposition (EMD) is an empirical data decomposition technique. Recently there is growing interest in applying EMD in the biomedical field.
New method:
EMDLAB is an extensible plug-in for the EEGLAB toolbox, which is an open software environment for electrophysiological data analysis.
Results:
EMDLAB can be used to perform, easily and effectively, four common types of EMD: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD) and multivariate EMD (MEMD) on EEG data. In addition, EMDLAB is a user-friendly toolbox and closely implemented in the EEGLAB toolbox.
Comparison with existing methods:
EMDLAB gains an advantage over other open-source toolboxes by exploiting the advantageous visualization capabilities of EEGLAB for extracted intrinsic mode functions (IMFs) and Event-Related Modes (ERMs) of the signal.
Conclusions:
EMDLAB is a reliable, efficient, and automated solution for extracting and visualizing the IMFs and ERMs produced by EMD algorithms in EEG studies.
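For readers working in Python rather than MATLAB/EEGLAB, the plain-EMD step that EMDLAB wraps can be reproduced, for instance, with the third-party PyEMD package (PyPI name EMD-signal, assumed installed); the toy signal below merely stands in for a single EEG channel.

    import numpy as np
    from PyEMD import EMD                 # third-party package, assumed installed

    t = np.linspace(0, 2, 1000)
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    imfs = EMD().emd(signal, t)           # rows: intrinsic mode functions, last row: residue
    print(imfs.shape)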
Software evolution is a fundamental process that transcends the realm of technical artifacts and permeates the entire organizational structure of a software project. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. By applying a network-analytic approach, we found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which core developers are hierarchically arranged and peripheral developers are not. Our results suggest that the organizational structure of large projects is constrained to evolve towards a state that balances the costs and benefits of developer coordination, and the mechanisms used to achieve this state depend on the project’s scale.
Background and Objective: Even today, pointing out an exam that can diagnose a patient with Parkinson's disease (PD) accurately enough is not an easy task. Although a number of techniques have been used in search for a more precise method, detecting such illness and measuring its level of severity early enough to postpone its side effects are not straightforward. In this work, after reviewing a considerable number of works, we conclude that only a few techniques address the problem of PD recognition by means of micrography using computer vision techniques. Therefore, we consider the problem of aiding automatic PD diagnosis by means of spirals and meanders filled out in forms, which are then compared with the template for feature extraction.
Methods: In our work, both the template and the drawings are identified and separated automatically using image processing techniques, thus needing no user intervention. Since we have no registered images, the idea is to obtain a suitable representation of both template and drawings using the very same approach for all images in a fast and accurate manner.
Results: The results have shown that we can obtain very reasonable recognition rates (approximately 67%), with the most accurate class being the one represented by the patients, who outnumbered the control individuals in the proposed dataset.
Conclusions: The proposed approach seemed to be suitable for aiding in automatic PD diagnosis by means of computer vision and machine learning techniques. Also, meander images play an important role, leading to higher accuracies than spiral images. We also observed that the main problem in detecting PD is the patients in the early stages, who can draw near-perfect objects, which are very similar to the ones made by control patients.
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Social network analysis is extremely well supported by the R community and is routinely used for studying the relationships between people engaged in collaborative activities. While there has been rapid development of new approaches and metrics in this field, the challenging question of validity (how well insights derived from social networks agree with reality) is often difficult to address. We propose the use of several R packages to generate interactive surveys that are specifically well suited for validating social network analyses. Using our web-based survey application, we were able to validate the results of applying community-detection algorithms to infer the organizational structure of software developers contributing to open-source projects.
Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on-the-fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a-priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.
The method of loci is one, if not the most, efficient mnemonic encoding strategy. This spatial mnemonic combines the core cognitive processes commonly linked to medial temporal lobe (MTL) activity: spatial and associative memory processes. During such processes, fMRI studies consistently demonstrate MTL activity, while electrophysiological studies have emphasized the important role of theta oscillations (3–8 Hz) in the MTL. However, it is still unknown whether increases or decreases in theta power co-occur with increased BOLD signal in the MTL during memory encoding. To investigate this question, we recorded EEG and fMRI separately, while human participants used the spatial method of loci or the pegword method, a similarly associative but nonspatial mnemonic. The more effective spatial mnemonic induced a pronounced theta power decrease source localized to the left MTL compared with the nonspatial associative mnemonic strategy. This effect was mirrored by BOLD signal increases in the MTL. Successful encoding, irrespective of the strategy used, elicited decreases in left temporal theta power and increases in MTL BOLD activity. This pattern of results suggests a negative relationship between theta power and BOLD signal changes in the MTL during memory encoding and spatial processing. The findings extend the well known negative relation of alpha/beta oscillations and BOLD signals in the cortex to theta oscillations in the MTL.
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
Simultaneous EEG-fMRI provides an increasingly attractive research tool to investigate cognitive processes with high temporal and spatial resolution. However, artifacts in EEG data introduced by the MR scanner still remain a major obstacle. This study, employing commonly used artifact correction steps, shows that head motion, one overlooked major source of artifacts in EEG-fMRI data, can cause plausible EEG effects and EEG–BOLD correlations. Specifically, low-frequency EEG (< 20 Hz) is strongly correlated with in-scanner movement. Accordingly, minor head motion (< 0.2 mm) induces spurious effects in a twofold manner: Small differences in task-correlated motion elicit spurious low-frequency effects, and, as motion concurrently influences fMRI data, EEG–BOLD correlations closely match motion-fMRI correlations. We demonstrate these effects in a memory encoding experiment showing that obtained theta power (~ 3–7 Hz) effects and channel-level theta–BOLD correlations reflect motion in the scanner. These findings highlight an important caveat that needs to be addressed by future EEG-fMRI studies.
In recent years, substantial progress has been made in the field of reverberant speech signal processing, including both single- and multichannel dereverberation techniques and automatic speech recognition (ASR) techniques that are robust to reverberation. In this paper, we describe the REVERB challenge, an evaluation campaign designed to evaluate such speech enhancement (SE) and ASR techniques, to reveal the state-of-the-art techniques, and to obtain new insights regarding potential future research directions. Even though most existing benchmark tasks and challenges for distant speech processing focus on the noise robustness issue, and sometimes only on a single-channel scenario, a particular novelty of the REVERB challenge is that it is carefully designed to test robustness against reverberation, based on real single-channel and multichannel recordings. This challenge attracted 27 papers, which represent 25 systems specifically designed for SE purposes and 49 systems specifically designed for ASR purposes. This paper describes the problems dealt with in the challenge, provides an overview of the submitted systems, and scrutinizes them to clarify what current processing strategies appear effective in reverberant speech processing.
In logical circuits, such as the arithmetic units of a processor system, arbitrary faults are becoming an increasingly important concern. Modern manufacturing processes lead to less reliability and higher vulnerability of software execution to soft errors. The correctness of results is important especially for safety-critical applications whose reliability depends on the fault-free execution of each single instruction and the dependencies between them. The more complex a piece of software is, the more unreliable its outcome is. But there is a contrary effect: if the probability of multiple faults increases, there is also the chance that two faults compensate each other and the result is correct again. This paper presents the basic ideas for such a reliability evaluation of a software's data flow under arbitrary soft errors and the effect of fault compensation. Furthermore, this evaluation provides a possibility to compare different implementations of a data flow with respect to reliability. This is shown by the comparison of two different error codes as alternatives for coded data processing.
NoSQL-Datenbanksysteme sind in den letzten Jahren sehr populär geworden, gute Gründe sprechen für ihren Einsatz: Eine attraktive Eigenschaft vieler Systeme ist ihre Schema-Flexibilität, die insbesondere in der agilen Anwendungsentwicklung Vorteile bietet. Durch horizontale Skalierbarkeit ermöglichen NoSQL-Datenbanksysteme eine effiziente Verarbeitung großer Datenmengen. Einige Systeme, die für die Datenhaltung interaktiver Anwendungen konzipiert sind, können zudem hochfrequente Nutzeranfragen bedienen. Diesen Vorteilen stehen eine Reihe von Nachteilen gegenüber, aus denen sich neue Herausforderungen für die Anwendungsentwicklung ergeben: Fehlende Standards bei den Anfragesprachen erschweren die Entwicklung datenbanksystemunabhängiger Anwendungen. Schema-Flexibilität im Datenbankmanagementsystem führt dazu, dass die Verantwortung für das Schema-Management in die Anwendung verlagert wird. Im vorliegenden Beitrag werden wesentliche Herausforderungen identifiziert und Lösungsansätze aus Forschung und Praxis vorgestellt. Dabei liegt der Fokus auf schema-flexiblen NoSQL-Datenbanksystemen, mit einem aggregat-orientierten Datenmodell, d. h. Key-Value Datenbanksysteme, dokumentenorientierten Datenbanksystemen und Column-Family Datenbanksystemen.
NoSQL data stores have become very popular over the last years, as good reasons are justifying their application: One attractive feature of many systems is their schema flexibility, which may be preferable in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also manage highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards in query languages make it difficult to build data store independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges as well as solution statements from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i. e., key-value data management systems, as well as document and column family data management systems.
In production, Lean Management is regarded as one of the de facto standard management approaches. Lean Management in IT organizations (Lean IT), by contrast, is less widespread in practice and has barely been researched academically. This article aggregates and extends the results of several research studies. Two aspects are discussed in particular: (1) A possible implementation model for Lean IT, which links five roles (sponsor, program manager, navigator, line manager, and line expert) with four phases (preparation, analysis, design, and implementation). (2) Specific challenges for line managers, who are strongly involved in Lean IT implementations due to their bottom-up orientation. In addition to a clear vision for their organizational unit and an understanding of where exactly Lean IT can support it, they need openness, a willingness to change, and the readiness to delegate responsibility to employees. They should also have a sufficient time budget for the implementation in order to fulfill their shaping and quality-assuring role.
We present a ready to compute trace formula for Hecke operators on vector-valued modular forms of integral weight for SL2(Z) transforming under the Weil representation. As a corollary, we obtain a ready to compute dimension formula for the corresponding space of vector-valued cusp forms, which is more general than the dimension formulae previously published in the vector-valued setting.
Tightening quality requirements for industrial products involving manual assembly have led to the development of assistive workbenches with integrated functions that support the workers performing these manual tasks. This contribution discusses a new machine learning approach to learning both the object recognition and the transitions of a finite state automaton representing the sequence of work tasks, based on the video stream of a 3D depth camera. Preprocessed depth data are fed into a three-stage classification scheme based on support vector machines (SVMs). The results of the classification are then related to the state automaton to trigger state transitions indicating the completion of a specific work task and the start of the next one. The proposed approach has been evaluated on an industrial assembly process of moderate complexity and shows very robust results with respect to disturbances caused by inaccurate object classification.
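A highly simplified sketch of the idea of coupling an SVM classifier with a work-step state machine is given below; the two-feature training data, the class labels, and the transition table are invented for illustration and do not reflect the paper's actual three-stage classifier or assembly process.

    # Sketch: SVM classification result driving a finite state automaton (toy data).
    from sklearn.svm import SVC

    # Toy features (e.g., derived from depth images) and labels for two detected parts.
    X_train = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
    y_train = ["part_A", "part_A", "part_B", "part_B"]
    clf = SVC(kernel="rbf").fit(X_train, y_train)

    # Transition table of the work-step automaton: (state, observed part) -> next state.
    transitions = {
        ("start", "part_A"): "part_A_placed",
        ("part_A_placed", "part_B"): "assembly_done",
    }

    state = "start"
    for features in [[0.15, 0.15], [0.85, 0.85]]:       # stream of classified observations
        label = clf.predict([features])[0]
        state = transitions.get((state, label), state)  # ignore unexpected observations
    print("final state:", state)

Ignoring unexpected classifications instead of transitioning on them is one simple way to obtain the robustness against misclassifications described above.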
We present a novel derivative-based parameter identification method to improve the precision at the tool center point of an industrial manipulator. The tool center point is considered directly in the problem formulation as a key performance indicator of the optimization. Additionally, our proposed method takes collision avoidance into account as special nonlinear constraints and is therefore suitable for industrial use. The performed numerical experiments show that optimum experimental designs that consider key performance indicators during optimization achieve a significant improvement in comparison to other methods. In experiments with three KUKA robots and 90 notional manipulator models, an improvement in precision at the tool center point of 40% to 44% was achieved compared to heuristic experimental designs chosen by an experimenter, and of 10% to 19% compared to an existing state-of-the-art method.
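The identification method itself is not spelled out in the abstract; purely as an illustration of the general idea of constrained parameter identification against a tool-center-point error, the sketch below fits a single kinematic offset with scipy under a simple nonlinear constraint. The one-parameter "robot model" and the constraint standing in for collision avoidance are invented for this example.

    # Sketch: least-squares parameter identification with a nonlinear constraint (toy model).
    import numpy as np
    from scipy.optimize import minimize

    true_offset = 0.03                      # unknown joint offset to be identified [rad]
    q = np.linspace(-1.0, 1.0, 20)          # joint angles of the calibration poses

    def tcp_position(q, offset):
        # Toy forward kinematics: TCP x-coordinate of a 1-DoF arm of length 1 m.
        return np.cos(q + offset)

    measured = tcp_position(q, true_offset) + np.random.normal(0, 1e-4, q.size)

    def tcp_error(p):
        # Key performance indicator: squared TCP position error over all poses.
        return np.sum((tcp_position(q, p[0]) - measured) ** 2)

    # Nonlinear inequality constraint (stand-in for collision avoidance): |offset| <= 0.1 rad.
    cons = {"type": "ineq", "fun": lambda p: 0.1 - abs(p[0])}

    res = minimize(tcp_error, x0=[0.0], constraints=[cons])
    print("identified offset:", res.x[0])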
If citizens actively participate in the process of collecting empirical data, as a key element of empirically oriented scientific projects, this can be seen as a contribution to an open and citizen-oriented science. Such participation can be supported by providing technical tools. The paper therefore presents examples of participatory sensing as the provision of affordable sensors for measuring environmental parameters as well as wearable technologies for recording quantified vital data and physiological states. Conceptually, the provision of data collected with these tools can be understood as a commons – with all opportunities and risks associated with such goods. After describing examples of participatory sensing and wearable technologies, the authors identify potential challenges and outline technical and organizational approaches to solve them.
Background and objective
The study follows the proposal of decomposing a given data matrix into a product of independent spatial and temporal component matrices. A multi-variate decomposition approach is presented, based on an approximate diagonalization of a set of matrices computed using a latent space representation.
Methods
The proposed methodology follows an algebraic approach, which is common to spatial, temporal, or spatiotemporal blind source separation algorithms. More specifically, the algebraic approach relies on singular value decomposition techniques, which avoids computationally costly and numerically unstable matrix inversions. The method is equally applicable to correlation matrices determined from second-order correlations or from fourth-order correlations.
Results
The resulting algorithms are applied to fMRI data sets either to extract the underlying fMRI components or to extract connectivity maps from resting state fMRI data collected for a dynamic functional connectivity analysis. Intriguingly, our algorithm shows increased spatial specificity compared to common approaches, while temporal precision stays similar.
Conclusion
The study presents a novel spatiotemporal blind source separation algorithm that is robust and avoids parameters that are difficult to fine-tune. Applied to experimental data sets, the new method yields highly confined and focused areas with the least spatial extent in the retinotopy case, and results similar to other blind source separation algorithms in the dynamic functional connectivity analyses. We therefore conclude that our novel algorithm is highly competitive and yields results that are superior or at least similar to those of existing approaches.
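To make the general, algebraic flavor of such methods concrete, the following minimal sketch performs a second-order, SVD-based source separation in the same spirit (whitening via SVD instead of matrix inversion, then diagonalizing a time-lagged correlation matrix in the latent space); it is an AMUSE-style toy example, not the algorithm proposed in the study, and the data are synthetic.

    # Sketch: SVD-based second-order blind source separation (toy example).
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    S = np.vstack([np.sin(0.02 * t), np.sign(np.sin(0.05 * t))])   # two hidden sources
    X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S                     # observed mixtures

    X = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = Vt * np.sqrt(X.shape[1])      # whitened latent representation, no matrix inversion

    # Time-lagged correlation matrix in the latent space, symmetrized.
    tau = 5
    C = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
    C = 0.5 * (C + C.T)

    # Its eigenvectors rotate the latent space onto estimates of the sources.
    _, W = np.linalg.eigh(C)
    sources_est = W.T @ Z
    print(sources_est.shape)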
Introduction: Improving energy efficiency and reducing energy wastage is an important topic of our time. However, it is quite difficult to figure out how much of the total electricity bill can be attributed to which device, or at what time a device consumed the energy. We believe the energy efficiency of ordinary households can be improved if this kind of transparency is available. In this article, we present a system for energy measurement at mains sockets that provides a transparent view of the energy consumption of each device in a household. It consists of several smart energy measuring devices (SEMDs) that use a low-power radio protocol to dynamically build and connect to a radio network and transfer power usage data to a server. At the server, the data are stored and can be accessed via a web interface.
Results: Our primary goal was to build a back-end system for an energy metering platform with very low energy consumption. This platform can provide data for a variety of services that enable users (the consumers) to understand and improve their energy consumption behavior and to increase the overall energy efficiency of their households.
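The kind of per-device transparency described above can be illustrated with a trivial aggregation over the collected readings; device names, tariff, and sample data below are made up for the example and do not come from the platform itself.

    # Sketch: attributing energy cost to individual devices from per-socket readings.
    from collections import defaultdict

    TARIFF_EUR_PER_KWH = 0.30

    # (device, energy in kWh for one billing interval) as reported by the measuring devices
    readings = [("fridge", 0.05), ("tv", 0.12), ("fridge", 0.06), ("router", 0.01)]

    energy_per_device = defaultdict(float)
    for device, kwh in readings:
        energy_per_device[device] += kwh

    for device, kwh in sorted(energy_per_device.items()):
        print(f"{device}: {kwh:.2f} kWh -> {kwh * TARIFF_EUR_PER_KWH:.2f} EUR")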
Smart grid, smart metering, electromobility, and the regulation of the power network are keywords of the transition in energy politics. In the future, the power grid will be smart. Based on different works, this article presents data collection, analysis, and monitoring software for a reference smart grid. We discuss two possible architectures for collecting data from energy analyzers and analyze their performance with respect to real-time monitoring, load peak analysis, and automated regulation of the power grid. In the first architecture, we analyze the latency, required bandwidth, and scalability of collecting data over the Modbus TCP/IP protocol, and in the second over a RESTful web service. The analysis results show that the Modbus solution is more scalable than the RESTful web service solution. However, the performance and scalability of both architectures are sufficient for our reference smart grid and use cases.
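For the RESTful variant, the kind of latency measurement discussed above can be sketched as a simple polling loop; the endpoint URL and the JSON layout of the energy analyzer are illustrative assumptions, not the interface used in the article.

    # Sketch: measuring polling latency of a RESTful data-collection architecture.
    import time
    import requests

    ANALYZER_URL = "http://192.168.0.10/api/measurements"   # hypothetical energy analyzer

    def poll_once():
        t0 = time.perf_counter()
        response = requests.get(ANALYZER_URL, timeout=1.0)
        response.raise_for_status()
        reading = response.json()        # e.g. {"power_w": 1234.5, "timestamp": ...}
        return reading, time.perf_counter() - t0

    if __name__ == "__main__":
        latencies = []
        for _ in range(100):
            _, dt = poll_once()
            latencies.append(dt)
        print("mean latency: %.1f ms" % (1000 * sum(latencies) / len(latencies)))

An analogous loop over a Modbus TCP client would allow a side-by-side comparison of the two architectures under the same polling rate.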
Based on a systematic and extensive analysis of practitioner publications on shadow IT that appeared between September 2015 and August 2016, this article describes governance aspects of this phenomenon and thereby complements previous academic studies. The analysis shows that practitioners predominantly perceive IT departments as being under increasing pressure to respond faster to changing requirements from the business units. If IT departments cannot meet these expectations, business units and users procure solutions themselves in the form of shadow IT. As a possible response, the IT department can organize itself in a more agile way and modernize the company's IT architecture. Another option is to exploit the innovative potential of shadow IT and to actively support its implementation through organizational and technical measures. IT security management and technical protection mechanisms can help to secure the resulting solutions and to minimize the risks. According to the prevailing view, as a consequence of all these measures the IT department evolves into a user-oriented internal service provider and a strategic partner for the business units.
Lean Management is a standard production mode that has been familiar to production organizations for several decades. To date, however, academic literature has presented surprisingly little information about the application of Lean Management in Information Technology (IT) organizations, or what is called Lean IT. Drawing upon an empirical qualitative case study of the IT departments of two multinational companies, in this paper we identify change management lessons learned for Lean IT implementations, as well as seven characteristics of a corresponding change management approach. As an extension of our work, researchers should validate and expand our initial findings, preferably in a quantitative setting.
This article provides a mathematical analysis of singular (nonsmooth) artifacts added to reconstructions by filtered backprojection (FBP) type algorithms for X-ray computed tomography (CT) with arbitrary incomplete data. We prove that these singular artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of this boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. Object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend along lines. They generalize the streak artifacts observed in limited-angle tomography. Object-independent artifacts, on the other hand, are essentially independent of the object and take one of two forms: streaks on lines if the boundary of the data set is not smooth at a point, and curved artifacts if the boundary is smooth locally. We prove that these streak and curve artifacts are the only singular artifacts that can occur for FBP in the continuous case. In addition to the geometric description of artifacts, the article provides characterizations of their strength in the Sobolev scale in certain cases. The results of this article apply to the well-known incomplete data problems, including limited-angle and region-of-interest tomography, as well as to unconventional X-ray CT imaging setups that arise in new practical applications. Reconstructions from simulated and real data are analyzed to illustrate our theorems, including the reconstruction that motivated this work: a synchrotron data set in which artifacts appear on lines that have no relation to the object.
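To make the limited-angle setting concrete, the following sketch reconstructs a phantom from an incomplete angular range with a standard FBP implementation; it merely reproduces the well-known streak artifacts associated with the boundary of the angular range and is in no way the analysis carried out in the article. The chosen image size and angular range are arbitrary.

    # Sketch: limited-angle FBP reconstruction exhibiting boundary streak artifacts.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, resize

    image = resize(shepp_logan_phantom(), (128, 128))
    theta = np.linspace(0.0, 120.0, 120, endpoint=False)   # limited angular range [0, 120) degrees

    sinogram = radon(image, theta=theta)                    # incomplete data
    reconstruction = iradon(sinogram, theta=theta)          # FBP on the limited-angle data

    # Artifacts concentrate along lines associated with the boundary angles 0 and 120 degrees.
    print("reconstruction error (RMSE):",
          np.sqrt(np.mean((reconstruction - image) ** 2)))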
Background and objective
Parkinson’s disease (PD) is considered a degenerative disorder that affects the motor system and may cause tremors, micrographia, and freezing of gait. Although PD is related to a lack of dopamine, the process that triggers its development is not yet fully understood.
Methods
In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual’s assessment. Additionally, we make available a dataset composed of images and signal-based data to foster research related to computer-aided PD diagnosis.
Results
The proposed approach was compared against raw data and texture-based descriptors and showed suitable results, mainly in the context of early-stage detection, with recognition rates close to 95%.
Conclusions
The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson’s disease identification and can outperform handcrafted features.
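As a generic illustration of the kind of convolutional model used for such image-based classification (the actual architecture of the study is not specified here), a minimal PyTorch sketch for a two-class handwriting-image classifier might look as follows; the input size and layer widths are assumptions made for the example.

    # Sketch: minimal CNN for classifying handwriting-dynamics images (illustrative architecture).
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input images

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = SmallCNN()
    dummy_batch = torch.randn(4, 1, 64, 64)       # 4 grayscale 64x64 images
    print(model(dummy_batch).shape)               # -> torch.Size([4, 2])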
Drones are now being used in many very different contexts. From the perspective of technology assessment (TA), it therefore seems sensible to take stock and to examine more closely the extent of current and future drone use and the resulting implications. In addition, the likely paths of further technology development, relevant actors and their interests, as well as future application potentials and fields of use are to be analyzed.