Small and medium-sized enterprises (SMEs) increasingly need to manage information technology (IT) effectively in order to remain competitive. However, compared to larger organizations, SMEs often face challenges in terms of resources and employer attractiveness, and often do not need to employ a Chief Information Officer (CIO) on a full-time basis. To address this issue, a growing number of global experts have begun to provide CIO services on a part-time basis for multiple clients simultaneously. This approach allows SMEs to tap into the expertise of experienced IT leaders at a fraction of the cost and without committing to long-term arrangements. While these professionals, known as “Fractional CIOs”, have proven their value in the field, there has been a lack of academic research on this emerging trend. Therefore, we carried out a comprehensive research project between 2020 and 2023, involving 62 Fractional CIOs from 10 countries. The research produced a definition, different types of engagements, and success factors for Fractional CIOs and their engagements. This paper summarizes these findings for a wider audience of academics and practitioners.
Utility of Smartphone-based Three-dimensional Surface Imaging for Digital Facial Anthropometry
(2024)
Background
The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, there have been recent advancements in smartphone technology that enable 3D surface imaging.
In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system.
Methods
Thirty healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) running the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, the paired Wilcoxon signed-rank test, Bland–Altman analysis, and calculation of the intraclass correlation coefficient (ICC).
Results
All measurements showed excellent agreement between smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland–Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements.
Conclusion
Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, the presented data suggest that digital facial anthropometry using smartphones may not yet be suitable for certain diagnostic purposes that require high accuracy.
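The agreement statistics used above are easy to reproduce. The following minimal Python sketch computes the Bland–Altman bias and 95% limits of agreement for two paired measurement series; the arrays and the ±3 mm acceptance threshold are illustrative assumptions, not the study data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                  # per-subject differences
    bias = diff.mean()            # mean difference (systematic offset)
    sd = diff.std(ddof=1)         # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired distances in mm (NOT the study data).
smartphone = [62.1, 58.4, 70.3, 65.0, 61.7]
vectra     = [63.0, 57.9, 71.8, 64.2, 60.9]

bias, (lo, hi) = bland_altman(smartphone, vectra)
print(f"bias = {bias:.2f} mm, 95% LoA = [{lo:.2f}, {hi:.2f}] mm")
# Agreement would be clinically acceptable here if the LoA stayed within +/-3 mm.
```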
At least 80% of modern and postmodern poems exhibit neither rhyme nor metrical schemes such as iamb or trochee. However, does this mean that they are free of any rhythmical features? The US American research on free verse prosody claims the opposite: modern poets like Whitman, the Imagists, the Beat poets, and contemporary Slam poets have developed a postmetrical idea of prosody, using rhythmical features of everyday language, prose, and musical styles like Jazz or Hip Hop. This has spawned a large and complex variety of poetic prosodies which, however, appear to be much harder to quantify and regularize than traditional patterns. In our project, we examine the largest portal for spoken poetry, Lyrikline, and analyse and classify such rhythmical patterns by using pattern recognition and classification techniques. We integrate a human-in-the-loop approach in which we interleave manual annotation with computational modelling and data-based analysis. Our results are integrated into the website of Lyrikline. Our follow-up project makes our research results available to a wider audience, in particular to high-school-level teaching.
Although adopting Low Code Development Platforms (LCDPs) promises significant efficiency and effectiveness improvements for application development, its adoption still needs further empirical research. This paper uses a combinatorial approach to research LCDP adoption and presents the results of a multiple mini case study with 36 cases on LCDP adoption. A combination of the Socio-Technical Systems theory and the Technology-Organisational-Environment model is used as a theoretical lens. In this paper, we show that LCDP adoption is a multifaceted phenomenon and identify three archetypes for LCDP adoption (i.e., IT Resource Shortage Mitigators, Application Development Democratisers, and Synergy Realisers) and one archetype for LCDP non-adoption (i.e., Intricacy Adversaries). Each archetype can be interpreted as an individual path towards LCDP (non-)adoption. Based on these archetypes, we derive seven starting points for practitioners to adopt LCDPs in work systems. Moreover, by using the theoretical lenses, the paper shows that for an LCDP adoption to occur, an optimisation of the social and technical sub-systems is required.
Business process improvement (BPI) is of high priority for practitioners. However, the most value-adding phase in a BPI project, namely the “act of improvement”, is insufficiently supported despite the many existing methods and techniques. Until now, it has been largely unclear to what degree existing BPI techniques support each other and how they are interrelated. Thus, the purpose of this paper is to investigate the functional interdependencies between BPI techniques, to develop a better understanding of the beneficial synergies between them, and to provide a basis for purposefully combining them within projects. Based on the functional interdependencies, a graphical “Functional Interdependency Map” is developed and its usability demonstrated in an experiment. The paper is valuable for academics and practitioners alike because the impact of BPI on organizational performance is high.
A large number of researchers have addressed social aspects in hierarchical production planning. This article responds to research gaps identified in our previous literature review. Accordingly, consideration of social aspects and the economic implications of social improvements are required in a longer term planning approach. For this, we integrate work intensity as employee utilization in a general mixed-integer programming model for master production scheduling. Following existing fatigue functions, we represent the relationship between work intensity and exhaustion through an employee-utilization-dependent exhaustion function. We account for the economic implications through exhaustion-dependent capacity load factors. We solve our model with a CPLEX standard solver and analyze a case study based on a realistic production system and numerical data. We demonstrate that the consideration of economic implications is necessary to evaluate social improvements. Otherwise, monetary disadvantages are overestimated, and social improvements are, thus, negatively affected. Moreover, from a certain level of work-intensity reduction, demand peaks are smoothed more by pre-production, which requires more core employees, while temporary employment is reduced. Further potential may arise from considering and quantifying other interdependencies, such as employee exhaustion and employee days off. In addition, the relationship between social working conditions and employee turnover can be integrated.
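The core modeling idea, capping core-employee utilization (work intensity) and covering the shortfall with temporary workers and pre-production, can be sketched as a toy mixed-integer program. The PuLP snippet below is a minimal illustration under assumed demand and cost figures, not the authors' model.

```python
import pulp

periods = range(4)
demand = [100, 140, 90, 160]    # assumed demand per period
core_cap = 120                  # core-employee capacity per period (units)
u_max = 0.9                     # work-intensity cap on core utilization
rate_temp = 20                  # output per temporary worker per period
cost_temp, cost_inv = 8.0, 1.0  # assumed costs per temp worker / inventory unit

m = pulp.LpProblem("toy_mps_with_utilization_cap", pulp.LpMinimize)
prod = pulp.LpVariable.dicts("prod", periods, lowBound=0)                      # core output
n_temp = pulp.LpVariable.dicts("n_temp", periods, lowBound=0, cat="Integer")   # temp workers
inv = pulp.LpVariable.dicts("inv", periods, lowBound=0)                        # inventory

m += pulp.lpSum(cost_temp * n_temp[t] + cost_inv * inv[t] for t in periods)

for t in periods:
    prev = inv[t - 1] if t > 0 else 0
    m += prev + prod[t] + rate_temp * n_temp[t] - demand[t] == inv[t]  # flow balance
    m += prod[t] <= u_max * core_cap                                   # utilization cap

m.solve(pulp.PULP_CBC_CMD(msg=False))
for t in periods:
    print(t, prod[t].value(), n_temp[t].value(), inv[t].value())
```

Lowering u_max in this toy model qualitatively mirrors the effect reported above: demand peaks are increasingly smoothed by pre-production (inventory), traded off against integer temporary-worker slots.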
Computing a sample mean of time series under dynamic time warping is NP-hard. Consequently, there is an ongoing research effort to devise efficient heuristics. The majority of heuristics have been developed for the constrained sample mean problem that assumes a solution of predefined length. In contrast, research on the unconstrained sample mean problem is underdeveloped. In this article, we propose a generic average-compress (AC) algorithm to address the unconstrained problem. The algorithm alternates between averaging (A-step) and compression (C-step). The A-step takes an initial guess as input and returns an approximation of a sample mean. Then the C-step reduces the length of the approximate solution. The compressed approximation serves as initial guess of the A-step in the next iteration. The purpose of the C-step is to direct the algorithm to more promising solutions of shorter length. The proposed algorithm is generic in the sense that any averaging and any compression method can be used. Experimental results show that the AC algorithm substantially outperforms current state-of-the-art algorithms for time series averaging.
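The alternating structure of the AC algorithm fits in a few lines. The sketch below is a generic skeleton with deliberately naive placeholder steps (element-wise averaging of uniformly resampled series as the A-step, uniform downsampling as the C-step); a faithful implementation would plug a DTW-based averaging heuristic and a principled compression method into the same loop.

```python
import numpy as np

def resample(x, m):
    """Linearly interpolate series x onto m equally spaced points."""
    idx = np.linspace(0, len(x) - 1, m)
    return np.interp(idx, np.arange(len(x)), x)

def a_step(series, init):
    """Placeholder averaging step: element-wise mean after resampling to the
    length of the current guess (a real A-step would compute a DTW mean)."""
    m = len(init)
    return np.mean([resample(s, m) for s in series], axis=0)

def c_step(mean, factor=0.9):
    """Placeholder compression step: shorten the current approximation."""
    return resample(mean, max(2, int(len(mean) * factor)))

def average_compress(series, n_iter=10):
    guess = resample(series[0], max(len(s) for s in series))
    for _ in range(n_iter):
        guess = c_step(a_step(series, guess))  # alternate A-step and C-step
    return guess

data = [np.sin(np.linspace(0, 3, n)) for n in (40, 55, 70)]
print(len(average_compress(data)))  # the compressed mean is shorter than the inputs
```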
Design propositions for nudging in healthcare: Adoption of national electronic health record systems
(2023)
Objectives: Electronic health records (EHRs) are considered important for improving efficiency and reducing costs of a healthcare system. However, the adoption of EHR systems differs among countries, and so does the way the decision to participate in EHRs is presented. Nudging is a concept that deals with influencing human behaviour within the research stream of behavioural economics. In this paper, we focus on the effects of the choice architecture on the decision for the adoption of national EHRs. Our study aims to link influences on human behaviour through nudging with the adoption of EHRs to investigate how choice architects can facilitate the adoption of national information systems.
Methods: We employ a qualitative explorative research design, namely the case study method. Using theoretical sampling, we selected four cases (i.e., countries) for our study: Estonia, Austria, the Netherlands, and Germany. We collected and analyzed data from various primary and secondary sources: ethnographic observation, interviews, scientific papers, homepages, press releases, newspaper articles, technical specifications, publications from governmental bodies, and formal studies.
Results: The findings from our European case studies show that designing for EHR adoption should encompass choice architecture elements (i.e., defaults), technical elements (i.e., choice granularity and access transparency), and institutional elements (i.e., regulations for data protection, information campaigns, and financial incentives) in combination.
Conclusions: Our findings provide insights on the design of the adoption environments of large-scale, national EHR systems. Future research could estimate the magnitude of effects of the determinants.
We present an industrial end-user perspective on the current state of quantum computing hardware for one specific technological approach, the neutral atom platform. Our aim is to assist developers in understanding the impact of the specific properties of these devices on the effectiveness of algorithm execution. Based on discussions with different vendors and recent literature, we discuss the performance data of the neutral atom platform. Specifically, we focus on the physical qubit architecture, which affects state preparation, qubit-to-qubit connectivity, gate fidelities, native gate instruction set, and individual qubit stability. These factors determine both the quantum-part execution time and the end-to-end wall clock time relevant for end-users, but also the ability to perform fault-tolerant quantum computation in the future. We end with an overview of which applications have been shown to be well suited for the peculiar properties of neutral atom-based quantum computers.
Organizations are under increasing pressure to develop applications within budget and time at high quality. Therefore, multiple organizations adopt Low Code Development Platforms (LCDP) to develop applications faster and cheaper compared to traditional application development. However, current research on LCDP adoption lacks empirical grounding as well as a deeper understanding of the importance of adoption drivers and inhibitors. We conducted semi-structured interviews and a Delphi study with seventeen experts to address these gaps. As a result, we identified twelve drivers and nineteen inhibitors for adopting LCDPs. We show that the experts have a consensus on the most and the least important drivers and inhibitors for LCDP adoption. Yet, the ranking of the drivers and inhibitors between the most and least important is highly context dependent. For some drivers and inhibitors, the experts’ ranking is similar to academic literature, whereas, for others, it differs. In conclusion, the study at hand empirically validates drivers and inhibitors for LCDP adoption, adds six new drivers and six new inhibitors to the body of knowledge, and analyses the importance of these factors.
We compute the Fourier expansion of Hecke operators on vector-valued modular forms for the Weil representation associated to a lattice L. The Hecke operators considered in this paper include operators T(p^2l) where p is a prime dividing the level of the lattice L. Additionally, an explicit formula for a general type of Gauss sum associated to a lattice L drops out as a by-product.
Background:
Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed but revealed persisting downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients utilizing the House and Brackmann scale (HBS).
Methods:
Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm.
Results:
The algorithm showed an accuracy of 100%. Oversampling did not result in altered outcomes, while the direct form displayed superior accuracy levels when compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy outcomes in comparison to the Late Fusion and sequential method (n = 86; 100% vs. 96% vs. 97%).
Conclusions:
Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon’s workflow.
Today, ubiquitous mobile devices have not only arrived but entered the safety-critical domain, where systems are controlled that put human health or even human life at risk. For example, in automation systems, first ideas surface to control parts of the system via a COTS smartphone. Another example is the idea to control the autonomous parking function of a car via a COTS smartphone as well. As beneficial and convenient as these ideas appear at first thought, on second thought the dangers of these approaches become obvious. Especially in case of failures, the system's safety has to be maintained. The open question is how to achieve this mandatory requirement with COTS components, e.g. smartphones, which are not developed following the development process necessary for safety-critical systems. This paper presents a concept to reliably detect human interaction while activating safety-critical functions via COTS mobile devices. Thus a means is provided to detect erroneous activation requests for the safety-critical function.
In the context of a "Smart Grid" research project, OTH Regensburg, together with industrial partners, realized an intelligent medium-voltage grid in the local area. The goal was to improve the current voltage regulation and to counter the problems caused by the inconsistent energy feed-in of decentralized renewable energy producers. In this paper we discuss the possibilities of using 3rd generation (3G) cellular networks (UMTS) as the basic technology to communicate the voltage levels within a medium-voltage grid. We built an experimental hardware setup to generate data traffic as specified for the smart grid. By analyzing the performance of 3G cellular networks in terms of transmission latency and rate of failure, we tried to evaluate the usability of this technology for such critical data exchange. Though mobile communication in its structure is not specified for the transmission of such infrastructure-critical data, the results show a promising high reliability with low transmission latency. The experiments served to test only a fragment of the conditions of use in a real scenario. An expanded test scope is needed to further analyze the performance of mobile radio for automatic control in smart grids. In the end, the results discussed in this paper led to a successful prototype of an intelligent medium-voltage grid with mobile radio as communication technology.
We investigate the reconstruction problem of limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, electron microscopy, etc. Since the acquired tomographic data is highly incomplete, the reconstruction problem is severely ill-posed and the traditional reconstruction methods, e.g. filtered backprojection (FBP), do not perform well in such situations.
To stabilize the reconstruction procedure additional prior knowledge about the unknown object has to be integrated into the reconstruction process. In this work, we propose the use of the sparse regularization technique in combination with curvelets. We argue that this technique gives rise to an edge-preserving reconstruction. Moreover, we show that the dimension of the problem can be significantly reduced in the curvelet domain. To this end, we give a characterization of the kernel of the limited angle Radon transform in terms of curvelets and derive a characterization of solutions obtained through curvelet sparse regularization. In numerical experiments, we will show that the theoretical results directly translate into practice and that the proposed method outperforms classical reconstructions.
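In standard notation (assumed here rather than quoted from the paper), curvelet sparse regularization computes the coefficient sequence minimizing an l1-penalized data fit with the limited angle Radon transform:

```latex
% y: limited angle data, R_Phi: limited angle Radon transform,
% Psi: curvelet synthesis operator, c: curvelet coefficients, alpha > 0
\hat{c} \in \arg\min_{c}\; \tfrac{1}{2}\,\bigl\| R_\Phi \Psi c - y \bigr\|_2^2
      + \alpha\,\| c \|_1, \qquad \hat{f} = \Psi \hat{c}.
```

The l1 term promotes sparse curvelet expansions and hence edge-preserving reconstructions; the kernel characterization mentioned above identifies the curvelet coefficients invisible to R_Phi, which can be removed from the problem and thereby reduce its dimension.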
Purpose
We present a systematic Bayesian formulation of the stochastic localization/triangulation problem close to constraining interfaces.
Methods
For this purpose, the terminology of Bayesian estimation is summarized suitably for applied researchers including the presentation of Maximum Likelihood (ML), Maximum A Posteriori (MAP), and Minimum Mean Square Error (MMSE) estimation. Explicit estimators for triangulation are presented for the linear 2D parallel beam and the nonlinear 3D cone beam model. The priors in MAP and MMSE optionally incorporate (A) the hard constraints about the interface and (B) knowledge about the probability of the object with respect to the interface. All presented estimators are compared in several simulation studies for live acquisition scenarios with 10,000 samples each.
Results
First, the presented application shows that MAP and MMSE perform considerably better than the ML approach, leading to lower Root Mean Square Errors (RMSEs) in the simulation studies, typically at the cost of introducing a bias. Second, utilizing priors including (A) and (B) is very beneficial compared to just including (A). Third, MMSE typically leads to better results than MAP, at the cost of significantly higher computational effort.
Conclusion
Depending on the specific application and prior knowledge, MAP and MMSE estimators strongly increase the estimation accuracy for localization close to interfaces.
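For reference, the three estimators compared in the study have their standard textbook forms (notation assumed: unknown position x, measurement y, likelihood p(y|x), prior p(x)):

```latex
\hat{x}_{\mathrm{ML}}   = \arg\max_{x}\; p(y \mid x), \qquad
\hat{x}_{\mathrm{MAP}}  = \arg\max_{x}\; p(y \mid x)\, p(x), \qquad
\hat{x}_{\mathrm{MMSE}} = \mathbb{E}\!\left[\, x \mid y \,\right]
                        = \int x \, p(x \mid y)\, \mathrm{d}x .
```

The interface knowledge enters through the prior: the hard constraint (A) sets p(x) = 0 beyond the interface, while (B) additionally shapes p(x) on the feasible side. The integral in the MMSE estimator also explains its higher computational cost compared to the MAP maximization.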
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
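As a hedged sketch of the methodological idea, treating the simulated cognitive model as a black-box objective and searching its parameter space, one might write the following; the simulate function, parameter names, and bounds are placeholders, not the ACT-R model from the article, and the derivative-free optimizer stands in for the more powerful derivative-based methods mentioned as future work.

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate(params):
    """Placeholder for a cognitive model run: returns a performance score
    (e.g., fraction of trials on target in the Sugar Factory task)."""
    noise, learning_rate = params
    # Toy stand-in: performance peaks at moderate noise and high learning rate.
    return float(np.exp(-(noise - 0.3) ** 2 / 0.05) * learning_rate)

# Maximize performance = minimize its negative; the bounds are assumptions.
result = differential_evolution(lambda p: -simulate(p),
                                bounds=[(0.0, 1.0), (0.0, 1.0)],
                                seed=1, tol=1e-6)
print("optimal parameters:", result.x, "score:", -result.fun)
```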
Companies often use specially designed production systems and change them from time to time. They produce small batches in order to satisfy specific demands with the least tardiness. This imposes high demands on high-performance scheduling algorithms that can be rapidly adapted to changes in the production system. As a solution, this paper proposes a generic approach: solutions are obtained using a widely used, commercially available tool for solving linear optimization models, which is available in an Enterprise Resource Planning system (in the SAP system, for example) or can be connected to it. In a real-world application of a flow shop with special restrictions, this approach is successfully used on a standard personal computer. Thus, the main implication is that optimal scheduling with a commercially available tool, incorporated in an Enterprise Resource Planning system, may be the best approach.
Purpose:
Total body irradiation (TBI) at extended source-surface distance (SSD) is a common treatment technique before hematopoietic stem cell transplantation. The lungs are organs at risk, which are often treated with a lower dose than the whole body.
Methods:
This can be achieved by the application of blocks. Three-dimensional (3D) printers are a modern tool to be used in the production process of these blocks.
Results:
We demonstrate the applicability of a specific printer and printing material, describe the process, and evaluate the accuracy of the product.
Conclusion:
The blocks and apertures were found to be applicable in clinical routine.
Sustainable production includes economic, environmental, and social aspects. However, social sustainability has received less attention, especially compared to the economic aspects. Next to technical and organizational measures, social improvements within supply chains can also be achieved through suitable production planning. Within production planning, production programs are determined, and the assignment of available resources (e.g., employees) is specified. Thus, the utilization and workload of employees are defined. This systematic literature review investigates to what extent such employee-related social aspects are reflected in production planning and discusses whether economic aspects dominate them. For this, a Scopus database search was carried out and 76 identified approaches were analyzed and categorized regarding the occurring employee-related social aspects and their implementation. Thus far, the approaches mainly consider single aspects on single planning levels. A consideration of a broad set of aspects along the entire production planning has rarely been studied. In particular, health and safety aspects are considered on the levels of assembly line balancing and job rotation. However, their impact is primarily determined by the specific settings of the decision-maker. To support decision-makers, only a few studies have investigated the effects based on real application scenarios. Further potential might be an extended modeling of social and economic interdependencies and a consideration of employee-related social aspects in medium- to long-term production planning.
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered.
In simultaneous interpreting, human experts incrementally construct and extend partial hypotheses about the source speaker’s message, and start to verbalize a corresponding message in the target language, based on a partial translation – which may have to be corrected occasionally. They commence the target utterance in the hope that they will be able to finish understanding the source speaker’s message and determine its translation in time for the unfolding delivery. Of course, both incremental understanding and translation by humans can be garden-pathed, although experts are able to optimize their delivery so as to balance the goals of minimal latency, translation quality and high speech fluency with few corrections. We investigate the temporal properties of both translation input and output to evaluate the tradeoff between low latency and translation quality. In addition, we estimate the improvements that can be gained with a tempo-elastic speech synthesizer.
A Novel Design Flow for a Security-Driven Synthesis of Side-Channel Hardened Cryptographic Modules
(2017)
Over the last few decades, computer-aided engineering (CAE) tools have been developed and improved in order to ensure a short time-to-market in the chip design business. Up to now, these design tools do not yet support an integrated design strategy for the development of side-channel-resistant hardware implementations. In order to close this gap, a novel framework named AMASIVE (Adaptable Modular Autonomous SIde-Channel Vulnerability Evaluator) was developed. It supports the designer in implementing devices hardened against power attacks by exploiting novel security-driven synthesis methods. The article at hand can be seen as the second of the two contributions that address the AMASIVE framework. While the first one describes how the framework automatically detects vulnerabilities against power attacks, the second one explains how a design can be hardened in an automatic way by means of appropriate countermeasures, which are tailored to the identified weaknesses. In addition to the theoretical introduction of the fundamental concepts, we demonstrate an application to the hardening of a complete hardware implementation of the block cipher PRESENT.
The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the AI algorithm (AI-EoE-EREFS) performance improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level.
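The reported performance scores can be recomputed from raw predictions with a few lines of scikit-learn; the labels and scores below are toy values, not the study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([1, 1, 1, 0, 0, 0, 1, 0])          # 1 = EoE, 0 = normal (toy labels)
y_score = np.array([.9, .8, .4, .2, .1, .3, .7, .6])  # model probabilities (toy)
y_pred  = (y_score >= 0.5).astype(int)                # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
print(sensitivity, specificity, accuracy, roc_auc_score(y_true, y_score))
```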
The translation of poetry is a complex, multifaceted challenge: the translated text should communicate the same meaning, similar metaphoric expressions, and also match the style and prosody of the original poem. Research on machine poetry translation has existed since 2010, but for four reasons it is still rather insufficient:
1. The few existing approaches completely lack any knowledge of current developments in both lyric theory and translation theory.
2. They are based on very small datasets.
3. They mostly ignore the neural learning approach that superseded the long-standing dominance of phrase-based approaches within machine translation.
4. They have no concept concerning the pragmatic function of their research and the resulting tools.
Our paper describes how to improve the existing research and technology for poetry translation in exactly these four points. With regard to 1) we describe the “Poetics of Translation”. With regard to 2) we introduce the world’s largest corpus for poetry translations, from lyrikline. With regard to 3) we describe first steps towards a neural machine translation of poetry. With regard to 4) we describe first steps towards the development of a poetry translation mapping system.
We discuss the spectral structure and decomposition of multi-photon states. Ordinarily 'multi-photon states' and 'Fock states' are regarded as synonymous. However, when the spectral degrees of freedom are included this is not the case, and the class of 'multi-photon' states is much broader than the class of 'Fock' states. We discuss the criteria for a state to be considered a Fock state. We then address the decomposition of general multi-photon states into bases of orthogonal eigenmodes, building on existing multi-mode theory, and introduce an occupation number representation that provides an elegant description of such states. This representation allows us to work in bases imposed by experimental constraints, simplifying calculations in many situations. Finally we apply this technique to several example situations, which are highly relevant for state of the art experiments. These include Hong–Ou–Mandel interference, spectral filtering, finite bandwidth photo-detection, homodyne detection and the conditional preparation of Schrödinger kitten and Fock states. Our techniques allow for very simple descriptions of each of these examples.
Background:
The aim of the study was to compare the two irradiation modes with (FF) and without flattening filter (FFF) for three different treatment techniques for simultaneous integrated boost radiation therapy of patients with right sided breast cancer.
Methods:
An Elekta Synergy linac with an Agility collimating device is used to simulate the treatment of 10 patients. Six plans were generated in Monaco 5.0 for each patient, treating the whole breast and a simultaneous integrated boost (SIB) volume: intensity modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT), and a tangential arc VMAT (tVMAT), each with and without flattening filter. Plan quality was assessed considering target coverage, sparing of the contralateral breast, the lungs, the heart, and the normal tissue. All plans were verified by a 2D ionisation chamber array, and delivery times were measured and compared. The Wilcoxon test was used for statistical analysis with a significance level of 0.05.
Results:
The significantly best target coverage and homogeneity were achieved using VMAT FFF with V95% = (98.7 ± 0.8)% and HI = (8.2 ± 0.9)% for the SIB and V95% = (98.3 ± 0.7)% for the PTV, whereas tVMAT showed the significantly lowest doses to the contralateral organs at risk with a Dmean of (0.7 ± 0.1) Gy for the contralateral lung, (1.0 ± 0.2) Gy for the contralateral breast, and (1.4 ± 0.2) Gy for the heart. All plans passed the gamma evaluation with a mean passing rate of (99.2 ± 0.8)%. Delivery times were significantly reduced for VMAT and tVMAT but increased for IMRT when FFF was used. The lowest delivery times were observed for tVMAT FFF with (1:20 ± 0:07) min.
Conclusion:
Balancing target coverage, OAR sparing and delivery time, VMAT FFF and tVMAT FFF are considered the preferable of the investigated treatment options in simultaneous integrated boost irradiation of right sided breast cancer for the combination of an Elekta Synergy linac with Agility and the treatment planning system Monaco 5.0.
The advent of multi-core CPUs in nearly all embedded markets has prompted an architectural trend towards combining safety-critical and uncritical software on single hardware units. We present a novel architecture for mixed criticality systems based on Linux that allows us to consolidate critical and uncritical parts onto a single hardware unit. CPU virtualisation extensions enable strict and static partitioning of hardware by direct assignment of resources, which allows us to boot additional operating systems or bare metal applications running aside Linux. The hypervisor Jailhouse is at the core of the architecture and ensures that the resulting domains may serve workloads of different criticality and cannot interfere in an unintended way. This retains Linux’s feature-richness in uncritical parts, while frugal safety and real-time critical applications execute in isolated domains. Architectural simplicity is a central aspect of our approach and a precondition for reliable implementability and successful certification. While standard virtualisation extensions provided by current hardware seem to suffice for a straightforward implementation of our approach, there are a number of further limitations that need to be worked around. This paper discusses the arising issues and evaluates the suitability of our approach for real-world safety and real-time critical scenarios.
In this paper, we consider the problem of feature reconstruction from incomplete X-ray CT data. Such incomplete data problems occur when the number of measured X-rays is restricted either to limit radiation exposure or due to practical constraints, making the detection of certain rays challenging. Since image reconstruction from incomplete data is a severely ill-posed (unstable) problem, the reconstructed images may suffer from characteristic artefacts or missing features, thus significantly complicating subsequent image processing tasks (e.g., edge detection or segmentation).
In this paper, we introduce a framework for the robust reconstruction of convolutional image features directly from CT data without the need of computing a reconstructed image first. Within our framework, we use non-linear variational regularization methods that can be adapted to a variety of feature reconstruction tasks and to several limited data situations. The proposed variational regularization method minimizes an energy functional that is the sum of a feature-dependent data-fitting term and an additional penalty accounting for specific properties of the features. In our numerical experiments, we consider instances of edge reconstruction from angular under-sampled data and show that our approach is able to reliably reconstruct feature maps in this case.
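Schematically (notation assumed, not quoted from the paper), the framework solves a problem of the form

```latex
% y: incomplete CT data, w: feature map to reconstruct (e.g., an edge map),
% D: feature-dependent data-fitting term, P: feature penalty, alpha > 0
\hat{w} \in \arg\min_{w}\; \mathcal{D}(w; y) + \alpha\, \mathcal{P}(w),
```

where the choice of D and P instantiates the particular feature reconstruction task and limited data situation.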
Final assembly at Krones AG must make the best possible use of its production space, and meeting the specified customer due dates is critical. Using a self-developed simulation tool, the present scheduling procedure is compared with one based on the shortest-slack priority rule. As a consequence, slack should be given greater importance in planning.
We summarize our project Rhythmicalizer, in which we analyze a corpus of post-modern poetry with a combination of qualitative hermeneutical and computational methods, as we have run the project over the course of the past three years (after preparing it for some time before that). Interdisciplinary work is always challenging, and we here focus on some of the highlights of our collaboration.
Quantum computing promises to overcome computational limitations with better and faster solutions for optimization, simulation, and machine learning problems. Europe and Germany are in the process of successfully establishing research and funding programs with the objective to advance the technology’s ecosystem and industrialization, thereby ensuring digital sovereignty, security, and competitiveness. Such an ecosystem comprises hardware/software solution providers, system integrators, and users from research institutions, start-ups, and industry. The vision of the Quantum Technology and Application Consortium (QUTAC) is to establish and advance the quantum computing ecosystem, supporting the ambitious goals of the German government and various research programs. QUTAC is comprised of ten members representing different industries, in particular automotive manufacturing, chemical and pharmaceutical production, insurance, and technology. In this paper, we survey the current state of quantum computing in these sectors as well as the aerospace industry and identify the contributions of QUTAC to the ecosystem. We propose an application-centric approach for the industrialization of the technology based on proven business impact. This paper identifies 24 different use cases. By formalizing high-value use cases into well-described reference problems and benchmarks, we will guide technological progress and eventually commercialization. Our results will be beneficial to all ecosystem participants, including suppliers, system integrators, software developers, users, policymakers, funding program managers, and investors.
The SMOOTH-robot is a mobile robot that, due to its modularity, combines a relatively low price with the possibility to be used for a large variety of tasks in a wide range of domains. In this article, we demonstrate the potential of the SMOOTH-robot through three use cases, two of which were performed in elderly care homes. The robot is designed so that it can either make itself ready or be quickly changed by staff to perform different tasks. We carefully considered important design parameters such as the appearance, intended and unintended interactions with users, and the technical complexity, in order to achieve high acceptability and a sufficient degree of utilization of the robot. The three demonstrated use cases indicate that such a robot could contribute to an improved work environment, having the potential to free resources of care staff which could be allocated to actual care-giving tasks. Moreover, the SMOOTH-robot can be used in many other domains, as we also exemplify in this article.
Pediatric patients suffering from ependymoma are usually treated with cranial or craniospinal three-dimensional (3D) conformal radiotherapy (3DCRT). Intensity-modulated techniques spare dose to the surrounding tissue, but the risk for second malignancies may be increased due to the increase in low-dose volume. The aim of this study is to investigate if the flattening filter free (FFF) mode allows reducing the risk for second malignancies compared to the mode with flattening filter (FF) for intensity-modulated techniques and to 3DCRT. A reduction of the risk would be advantageous for treating pediatric ependymoma. 3DCRT was compared to intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) with and without flattening filter. Dose-volume histograms (DVHs) were compared to evaluate the plan quality and used to calculate the excess absolute risk (EAR) to develop second cancer in the brain. Dose verification was performed with a two-dimensional (2D) ionization chamber array and the out-of-field dose was measured with an ionization chamber to determine the EAR in peripheral organs. Delivery times were measured. Both VMAT and IMRT achieved similar plan quality in terms of dose sparing in the OAR and higher PTV coverage as compared to 3DCRT. Peripheral dose in the low-dose region, which is proportional to the EAR in organs located in this region, for example, gonads, bladder, or bowel, could be significantly reduced using FFF. The lowest peripheral EAR and lowest delivery times were hereby achieved with VMAT FFF. The EAR calculated based on DVH in the brain could not be reduced using FFF mode. VMAT FFF improved the target coverage and homogeneity and kept the dose in the OAR similar compared to 3DCRT. In addition, delivery times were significantly reduced using VMAT FFF. Therefore, for radiotherapy of ependymoma patients, VMAT FFF may be considered advantageous for the combination of Elekta Synergy linac and Oncentra External Beam planning system used in this study.
Introduction: Improving energy efficiency and reducing energy wastage is an important topic of our time. But it is quite difficult to figure out how much of our total electricity bill can be mapped to which device, or at what time the device used the energy. We believe the energy efficiency of normal households can be improved if this kind of transparency were available. In this article, we present a system for energy measurement at mains sockets to gain a transparent view of the energy consumption of each device in a household. It consists of several smart energy measuring devices (SEMDs) that use a low-power radio protocol to dynamically build and connect to a radio network to transfer power usage data to a server. At the server, the data is stored and can be accessed via a web interface.
Results: Our primary goal was to build a back-end system for an energy metering platform with very low energy consumption. This platform can provide data for a variety of services that enable users (the consumers) to understand and improve their energy consumption behavior and increase the overall energy efficiency of their households.
We present a novel derivative-based parameter identification method to improve the precision at the tool center point of an industrial manipulator. The tool center point is directly considered in the optimization as part of the problem formulation as a key performance indicator. Additionally, our proposed method takes collision avoidance as special nonlinear constraints into account and is therefore suitable for industrial use. The performed numerical experiments show that the optimum experimental designs considering key performance indicators during optimization achieve a significant improvement in comparison to other methods. An improvement in terms of precision at the tool center point of 40% to 44% was achieved in experiments with three KUKA robots and 90 notional manipulator models compared to the heuristic experimental designs chosen by an experimenter as well as 10% to 19% compared to an existing state-of-the-art method.
A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer
(2019)
The Capacitated Vehicle Routing Problem (CVRP) is an NP-optimization problem (NPO) that has been of great interest for decades for both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The problem is the combinatorial explosion of possible solutions, which increases superexponentially with the number of customers. Classical solutions provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation time compared to classical computers. The challenge in solving the CVRP on the quantum annealer is the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be translated into smaller subproblems, which enables a sequential solution of the partitioned problem. This work presents a quantum-classic hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods regarding computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
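To illustrate the QUBO mapping step (for the routing core only, ignoring the capacity constraints handled in the paper), the sketch below builds the standard one-hot TSP QUBO: binary variables x[i,t] mean "customer i is visited at position t", quadratic penalties enforce a valid permutation, and distance terms reward short tours. The penalty weights are assumptions.

```python
import itertools
import numpy as np

def tsp_qubo(dist, A=10.0, B=1.0):
    """One-hot QUBO for a cyclic tour over n nodes.
    Variable index v = i * n + t  <=>  node i is visited at time step t."""
    n = len(dist)
    Q = {}

    def add(u, v, w):
        key = (min(u, v), max(u, v))
        Q[key] = Q.get(key, 0.0) + w

    for t in range(n):                      # penalty A*(1 - sum_i x[i,t])^2
        for i in range(n):
            add(i * n + t, i * n + t, -A)
            for j in range(i + 1, n):
                add(i * n + t, j * n + t, 2 * A)
    for i in range(n):                      # penalty A*(1 - sum_t x[i,t])^2
        for t in range(n):
            add(i * n + t, i * n + t, -A)
            for u in range(t + 1, n):
                add(i * n + t, i * n + u, 2 * A)

    for i, j in itertools.permutations(range(n), 2):  # tour length objective
        for t in range(n):
            add(i * n + t, j * n + ((t + 1) % n), B * dist[i][j])
    return Q

dist = np.array([[0, 2, 9], [2, 0, 6], [9, 6, 0]])
Q = tsp_qubo(dist)
print(len(Q), "QUBO terms")  # ready for an annealing sampler or its simulator
```

A QUBO in this dictionary form is what annealing samplers and their classical simulators consume; the hybrid method described above additionally decomposes the problem into smaller subproblems so that each resulting QUBO fits the hardware.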
The purpose of this study was to evaluate the quality of surface contouring of chondromalacic cartilage by bipolar radio frequency energy (RFE) using different treatment patterns in an animal model, and to examine the impact of the treatment on chondrocyte viability by two different methods. Our experiments were conducted on 36 fresh osteochondral sections from the tibia plateau of slaughtered 6-month-old pigs, where the thickness of the cartilage is similar to that of human wrist cartilage. An area of 1 cm² was first treated with emery paper to simulate chondromalacic cartilage. Then the treatment with RFE followed in 6 different patterns. The osteochondral sections were assessed for cellular viability (live/dead assay, caspase (cell apoptosis marker) staining, and quantitative analysis of images obtained by fluorescence microscopy). For a quantitative characterization of untreated and treated cartilage surfaces, various roughness parameters were measured using confocal laser scanning microscopy (Olympus LEXT OLS 4000 3D). To describe the roughness, the root-mean-square parameter (Sq) was calculated. A smoothing effect of the cartilage surface was detectable for each pattern of RFE treatment. The Sq for native cartilage was Sq = 3.8 ± 1.1 µm. The best smoothing pattern was seen for two RFE passes and a 2-second pulsed mode (B2p2) with an Sq = 27.3 ± 4.9 µm. However, with increased smoothing, an augmentation in chondrocyte death of up to 95% was detected. Bipolar RFE treatment in arthroscopy for small joints like the wrist or MCP joints should therefore be used with caution. In the case of chondroplasty, there is a high chance of destroying the joint cartilage.
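For reference, Sq is the standard areal root-mean-square height parameter (as defined in ISO 25178):

```latex
% z(x, y): surface height deviation from the mean plane over the area A
S_q = \sqrt{ \frac{1}{A} \iint_{A} z^2(x, y) \, \mathrm{d}x \, \mathrm{d}y }
```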
Scarcity of resources, structural change during the further development of renewable energy sources, and the corresponding costs, such as increasing resource costs or penalties due to dirty production, lead industrial firms to adopt ecological actions. In this regard, research on energy utilization in production planning has received increased attention in recent years, resulting in a large number of research articles so far. With the paper at hand, we review the literature on energy-oriented production planning. The aim of this study is to derive similar core issues and related properties along energy-oriented models within hierarchical production planning. For this, we carry out a systematic literature review and analyze and synthesize 375 research articles. We classify the underlying literature with a novel two-dimensional classification scheme and identify three key topics and five frequently found characteristics, which are presented in detail throughout this article. Based on these results, we state several potentials for further research.
Men treated for localized prostate cancer by radiotherapy often have a remaining life span of 10 yr or more. Therefore, the risk for secondary malignancies should be taken into account. Plans for ten patients were evaluated, which had been created in an Oncentra® treatment planning system for treatment with an Elekta Synergy™ linac with Agility™ head. The investigated techniques involved IMRT and VMAT, each with and without flattening filter. Different dose-response models were applied for secondary carcinoma and sarcoma risk in the treated region and also in the periphery. As organs at risk we regarded urinary bladder, rectum, colon, esophagus, and thyroid for carcinoma risk, and bone and soft tissue for sarcoma risk. The excess absolute risk (EAR) was found to be very similar in the treated region for both techniques (IMRT and VMAT), both with and without flattening filter. The secondary sarcoma risk was about one order of magnitude smaller than the secondary carcinoma risk. The EAR to the peripheral organs was statistically significantly reduced by the flattening filter free mode, the flattening filter being the main source of scattered dose. Application of the flattening filter free mode can thus help to reduce the second malignancy risk for patients with localized prostate cancer.
Aim of the study was to develop a standardized model system to investigate endodontic irrigation techniques and assess the efficiency of different activation methods on the removal of hard tissue debris in complex root canal systems. Mesial roots of mandibular molars were firstly scanned by micro-computed tomography (µCT) and allocated to three groups of irrigant activation: sonic activation (EDDY, VDW, Munich, Germany), laser activation (AutoSWEEPS, FOTONA, Ljubljana, Slovenia) and conventional needle irrigation (control). Roots were fixed in individual 3D-printed holders to facilitate root canal enlargement under constant irrigation with NaOCl (5%). To enable standardized quantification of remaining debris, BaSO4-enriched dentine powder was compacted into the canals, followed by another µCT-scan. The final irrigation was performed using 17% ethylenediaminetetraacetic acid (EDTA) and 5% sodium hypochlorite (NaOCl) with the respective activation method, and the volume of remaining artificial debris was quantified after a final µCT-scan. The newly developed model system allowed for reliable, reproducible and standardized assessment of irrigation methods. Activation of the irrigant proved to be significantly more effective than conventional needle irrigation regarding the removal of debris, which persisted particularly in the apical third of the root canal in the control group. The efficiency of irrigation was significantly enhanced with laser- and sonic-based activation, especially in the apical third.
BACKGROUND:
This planning study compares different radiotherapy techniques for patients with pituitary adenoma, including the flattening filter free mode (FFF), concerning plan quality and secondary malignancies for potentially young patients. The flattening filter has been described as the main source of photon scatter.
MATERIAL AND METHODS:
Eleven patients with pituitary adenoma were included. An Elekta Synergy™ linac was used in the treatment planning system Oncentra® and for the measurements. 3D plans, IMRT, and VMAT plans and non-coplanar varieties were considered. The plan quality was evaluated regarding homogeneity, conformity, delivery time and dose to the organs at risk. The secondary malignancy risk was calculated from dose volume data and from measured dose to the periphery using different models for carcinoma and sarcoma risk.
RESULTS:
The homogeneity and conformity were nearly unchanged with and without flattening filter, nor was the delivery time found substantially different. VMAT plans were more homogeneous, more conformal, and faster in delivery than IMRT plans. The secondary cancer risk was reduced with FFF both in the treated region and in the periphery. VMAT plans resulted in a higher secondary brain cancer risk than IMRT plans, but the risk for secondary peripheral cancer was reduced. Secondary sarcoma risk plays a minor role. No advantage was found for non-coplanar techniques. The FFF delivery times were not shortened due to the additional monitor units needed and technical limitations. The risk for secondary brain cancer seems to depend on the irradiated volume. Secondary sarcoma risk is much smaller than carcinoma risk, in accordance with the results of the atomic bomb survivors. The reduction of the peripheral dose and the resulting secondary malignancy risk for FFF is statistically significant. However, it is negligible in comparison to the risk in the treated region.
CONCLUSION:
Treatments with FFF can reduce the secondary malignancy risk while retaining similar quality as with flattening filter and should be preferred. VMAT plans show the best plan quality combined with the lowest peripheral secondary malignancy risk, but the highest second brain cancer risk. Taking this into account, VMAT FFF seems the most advantageous technique for the treatment of pituitary adenomas with the given equipment.
In their article on quality target functions for strongly varying municipality sizes in the 2021 census, Burgard et al. (2020) present extensions of the sampling and estimation methods of the 2011 census that integrate small municipalities with fewer than 10,000 inhabitants into the decision process. The urgency of solving this problem was also noted in the Federal Constitutional Court's ruling on the 2011 census. The aim of this reply is an in-depth discussion of the results of the preceding contribution with renowned experts in this field. In particular, it concerns placing the article in its scientific context (Krämer), the importance of non-sampling errors for the census (Küchenhoff), and the census from the perspective of official statistics (Bleninger and Fürnrohr) as well as from a statistical-methodological point of view (Kiesl). In addition, current developments are presented.
Background
Vaccinations are an important preventive measure. A pronounced willingness to be vaccinated is fundamental for containing the corona pandemic by vaccinating the population.
Objective
The willingness to be vaccinated with a COVID-19 vaccine (vaccine against the coronavirus) and its influencing factors are examined on the basis of a random sample of the general population in Germany.
Materials and methods
The study is based on a random telephone sample and takes older persons and persons with pre-existing conditions into account according to their share of the population. The single-topic population survey on vaccination willingness (n = 2014) was conducted in November/December 2020.
Results
The willingness to be vaccinated in the sample is around 67%. Previous experience with vaccinations moderates the willingness to be vaccinated. It increases with membership in a risk group. Belief in the effectiveness of alternative healing methods and endorsement of alternative treatment procedures are associated with a lower willingness to be vaccinated. Older people are more willing to be vaccinated, covarying with their assessment of a higher risk in case of illness. Likewise, the rejection of a vaccination is associated with an overestimation of side effects.
Conclusion
The willingness to be vaccinated is related to vaccination experience and to attitudes towards health treatment procedures in general. The overestimation of the frequency of serious side effects of vaccinations points to widespread misinformation.
The vehicular ad hoc network (VANET) technology based on the approved IEEE 802.11p standard and the associated inter-vehicle communication (IVC) has the potential to dramatically change the way transportation systems work. The fundamental idea is to change the individual behavior of each vehicle by exchanging information among traffic participants to realize a cooperative and more efficient transportation system. Certainly, the evaluation of such systems is a comprehensive and challenging task in a real-world test bed; therefore, simulation frameworks are a key tool to analyze IVC. Several models are needed to emulate the real behavior of a VANET in all aspects as realistically as necessary. The intention of this survey is to provide a comprehensive overview of publications concerning IVC simulations of the year 2013 and to see how IVC simulation has changed since 2009. Based on this analysis, we answer the following questions: What simulation techniques are applied to IVC? Which aspects of IVC have been evaluated? What has changed within five years of IVC simulations? We also take a closer look at commonly used software tools and discuss their functionality and drawbacks. Finally, we present open questions concerning IVC simulations.
Smart grid, smart metering, electromobility, and the regulation of the power network are keywords of the transition in energy politics. In the future, the power grid will be smart. Based on different works, this article presents data collection, analysis, and monitoring software for a reference smart grid. We discuss two possible architectures for collecting data from energy analyzers and analyze their performance with respect to real-time monitoring, load peak analysis, and automated regulation of the power grid. For the first architecture, we analyze the latency, required bandwidth, and scalability of collecting data over the Modbus TCP/IP protocol; for the second, over a RESTful web service. The analysis results show that the solution with Modbus is more scalable than the one with the RESTful web service. However, the performance and scalability of both architectures are sufficient for our reference smart grid and use cases.
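The REST variant of such a collection architecture is straightforward to prototype. The sketch below polls an energy analyzer's web service and records the per-request latency; the endpoint URL, the JSON field name, and the one-second polling interval are assumptions for illustration, not the interface of the reference smart grid described above.

```python
import time
import requests

# Hypothetical endpoint of one energy analyzer's RESTful web service.
ANALYZER_URL = "http://192.0.2.10/api/measurements/latest"

def poll_analyzer(url: str, interval_s: float = 1.0, samples: int = 10) -> None:
    """Poll the analyzer and report per-request latency (assumed JSON schema)."""
    for _ in range(samples):
        t0 = time.perf_counter()
        reading = requests.get(url, timeout=2.0).json()
        latency_ms = (time.perf_counter() - t0) * 1000
        # 'active_power_w' is an assumed field name for the measured load.
        print(f"P = {reading.get('active_power_w')} W, latency = {latency_ms:.1f} ms")
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_analyzer(ANALYZER_URL)
```

A Modbus TCP client would replace each HTTP request with a register read, avoiding per-sample HTTP overhead, which is consistent with the better scalability reported for the Modbus architecture.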
DevOps paradigm
(2021)
DevOps, a widely used term in the software industry, integrates development and IT operations activities to frequently deliver, deploy, and release quality software features. The DevOps approach emphasizes collaboration between development and IT operations teams throughout the system development life cycle (SDLC). The DevOps process is supported by a wide variety of tool chains for the various phases of the SDLC. Many DevOps models exist; in this paper, however, the authors use a simple four-phase pedagogical model to demonstrate the principles of DevOps. The authors show how DevOps principles can be used effectively to manage and implement business problems in a classroom setting. Specifically, the DevOps methodology is applied to manage, develop, and implement a small web application.
This pedagogical approach is aimed especially at students who have no prior experience with, or skills in, applying the DevOps methodology and its associated toolsets to the stages of the SDLC. At the conclusion of the project, students gained valuable insights into how to apply DevOps principles to business problems and how to select and use common state-of-the-art tools to plan, manage, build, test, monitor, and deploy at every stage of DevOps. The authors also discuss the limitations and practical issues related to implementing DevOps within classroom settings.
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must also be demonstrated in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially if diagnosis support is addressed. For this purpose, the black-box nature of deep learning techniques must be opened up in order to transfer their promising results into clinical practice. Hence, we investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early cancerous tissues in patients diagnosed with Barrett's esophagus. Four convolutional neural network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We could show that saliency attributions match best with the manual expert delineations. Moreover, there is a moderate to high correlation between the sensitivity of a model and the agreement between human and computational segmentation: the higher the model's sensitivity, the stronger this agreement. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
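As an illustration of the kind of attribution computation involved, the sketch below applies Captum's saliency and integrated-gradients implementations to a ResNet50. This is a minimal sketch of the general technique, not the authors' evaluation pipeline; the ImageNet weights and the random input tensor stand in for the fine-tuned Barrett's classifier and a preprocessed endoscopy image.

```python
import torch
from torchvision.models import resnet50
from captum.attr import Saliency, IntegratedGradients

# Stand-in for the fine-tuned classifier (assumption: a generic pretrained ResNet50).
model = resnet50(weights="IMAGENET1K_V1").eval()

# Assumed preprocessing: one normalized RGB image tensor.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# The predicted class drives the attribution target.
pred_class = model(image).argmax(dim=1).item()

# Saliency: absolute input gradients w.r.t. the predicted class score.
saliency_map = Saliency(model).attribute(image, target=pred_class)

# Integrated gradients: path integral from a zero baseline to the input.
ig_map = IntegratedGradients(model).attribute(image, target=pred_class, n_steps=32)

# A pixel-wise heat map is obtained by aggregating over the color channels.
heatmap = saliency_map.abs().sum(dim=1).squeeze()
print(heatmap.shape)  # torch.Size([224, 224])
```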
Acceptance among residents is gaining relevance as a "driver of innovation" in highly complex, technically demanding energy-efficient refurbishments. This contribution is based on two case studies of participatory user involvement in energy-efficient refurbishments of cooperative housing in historical city quarters of Regensburg. In addition to a socially acceptable refurbishment, a high degree of energy efficiency was sought in the technical solutions. Household surveys and qualitative interviews show high acceptance of refurbishment measures, provided that the reduction in energy costs compensates for the increased rent. Finally, acceptance factors such as participation, trust, social acceptability, and autarky are discussed.
GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens
(2020)
PREMISE:
The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens.
METHODS AND RESULTS:
We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images.
CONCLUSIONS:
We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features.
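For readers unfamiliar with this class of pipeline, the sketch below shows the core inference step with a torchvision Faster R-CNN. GinJinn wraps its own object-detection backends and configuration files, so this is only an illustrative stand-in: the COCO-pretrained weights, the image path, and the 0.8 confidence threshold are assumptions rather than a model trained on Leucanthemum leaf annotations.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Generic pretrained detector as a stand-in for a model trained on leaf annotations.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = convert_image_dtype(read_image("specimen.jpg"), torch.float)

with torch.no_grad():
    detections = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections and crop them, as a pipeline would for leaf extraction.
for box, score in zip(detections["boxes"], detections["scores"]):
    if score >= 0.8:  # assumed threshold
        x0, y0, x1, y1 = box.int().tolist()
        crop = image[:, y0:y1, x0:x1]
        print(f"extracted region {x0},{y0}-{x1},{y1} (score {score:.2f})")
```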
Cybersecurity in Health Care
(2020)
Ethical questions have always been crucial in health care; the rapid dissemination of ICT makes some of those questions even more pressing and also raises new ones. One of these new questions is cybersecurity in relation to ethics in health care. In order to more closely examine this issue, this chapter introduces Beauchamp and Childress’ four principles of biomedical ethics as well as additional ethical values and technical aims of relevance for health care. Based on this, two case studies—implantable medical devices and electronic Health Card—are presented, which illustrate potential conflicts between ethical values and technical aims as well as between ethical values themselves. It becomes apparent that these conflicts cannot be eliminated in general but must be reconsidered on a case-by-case basis. An ethical debate on cybersecurity regarding the design and implementation of new (digital) technologies in health care is essential.
This article highlights methodological and ethical challenges in research with adults of older and oldest age by presenting field experiences from the current research project "Motion Monitoring of Geriatric Trauma Patients - Explorative Study on the Rehabilitation Process after Hip Fracture Using Sensor-based Data". The depiction of the survey situation, particularly with regard to the subjects, can serve as a practical example for designing future research projects.
Older adults are a rather large and growing group for which research is required, especially concerning their heterogeneity, individual autonomy, and quality of life. It is assumed that the research designs of studies on this target group must be specifically adjusted, particularly in view of the vulnerability attributed to its members. At the same time, it is not yet clear exactly which specifics of the subjects and the target group must be considered in research designs, as surprisingly little is known about the target group as subjects and the corresponding theories have been insufficiently tested.
The exploratory long-term design of the research project presented in the second section of this chapter received a positive evaluation from an ethics committee. Nevertheless, ethical challenges occurred in the field; these are illustrated in the third section by providing information on the patients, their role as research subjects, how they were recruited, how informed consent was obtained, and, in some cases, how participation was refused or abandoned. After a summary, the paper closes with recommendations on how to design future research projects.
In sum, it must always be expected that the interaction between researchers and research subjects of this target group can become very intensive, which requires following clearly defined procedures while at the same time being prepared to act flexibly.
Background: This preparatory study paves the way for the implementation of individualized monitoring and feedback of physical motion using conventional motion trackers in the rehabilitation process of geriatric trauma patients. Regaining mobility is accompanied by improved quality of life in persons of very advanced age recovering from fragility fractures.
Objectives: A quantitative survey of regained physical mobility provides recommendations for action on how to use motion trackers effectively in a clinical geriatric setting.
Methods: A mix of quantitative and qualitative, interdisciplinary, and mutually complementary research approaches (sociology, health research, philosophy/ethics, medical informatics, nursing science, gerontology, and physical therapy) is used. While validating motion tracker use in geriatric traumatology, preliminary data are used to develop target-group-oriented motion feedback. In addition, the measurement accuracy of a questionnaire on the quality of life of multimorbid geriatric patients (FLQM) is tested.
Conclusion: Implementing a new technology in a complex clinical setting needs to be based on a strong theoretical background but will not succeed without careful field testing.
Automated driving still meets with considerable skepticism. A disruptive strategy for introducing (fully) automated driving could therefore encounter a lack of acceptance. To avoid this, evolutionary strategies aim to create familiarity, trust, and thus acceptance among prospective users through the development of adaptive driver assistance systems. However, first results of a pilot study cast doubt on the sustainability of this strategy.
Using examples concerning the development and dissemination of computer technology in the Soviet Union, the U.S., and other Western countries, it is demonstrated that computer development on the one hand,
and social change as well as changes in policy making and administration on the other, are intertwined without a clear direction of causation being discernible. It is also shown that perceived social and political threats posed by early computer technology sometimes actually helped to stop, or at least slow down, social change.
One conclusion that can be drawn from the case studies described is that, for RRI, the conscious steering of innovations fails because of diffuse and uncoordinated resistance from very different stakeholders. The case studies also suggest that the effectiveness of RRI might be rather limited.
Background
Breast reconstruction is an important coping tool for patients undergoing a mastectomy. There are numerous surgical techniques in breast reconstruction surgery (BRS). Regardless of the technique used, creating a symmetric outcome is crucial for patients and plastic surgeons. Three-dimensional surface imaging enables surgeons and patients to assess the outcome's symmetry in BRS. To compare autologous and alloplastic techniques, we analyzed both using objective optical computerized symmetry analysis. Software was developed that enables clinicians to assess optical breast symmetry using three-dimensional surface imaging.
Methods
Twenty-seven patients who had undergone autologous (n = 12) or alloplastic (n = 15) BRS received three-dimensional surface imaging. Anthropometric data were collected digitally using semiautomatic and automatic measurements; the automatic measurements were taken using the newly developed software. To quantify symmetry, a Symmetry Index is proposed.
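The paper's exact Symmetry Index formula is not reproduced here. As a hypothetical illustration of the underlying idea, the sketch below mirrors one side's surface points across the sagittal midplane and scores symmetry from the residual distances to the contralateral side; the point format, the midplane at x = 0, and the normalization are all assumptions.

```python
import numpy as np

def symmetry_index(left_pts: np.ndarray, right_pts: np.ndarray) -> float:
    """Hypothetical symmetry score in (0, 1]; 1 means perfect mirror symmetry.

    left_pts, right_pts: (N, 3) corresponding surface points, with the
    sagittal midplane assumed at x = 0 so mirroring negates the x coordinate.
    """
    mirrored = left_pts * np.array([-1.0, 1.0, 1.0])          # reflect across x = 0
    residual = np.linalg.norm(mirrored - right_pts, axis=1)   # per-point mismatch
    scale = np.linalg.norm(right_pts - right_pts.mean(0), axis=1).mean()
    return float(1.0 / (1.0 + residual.mean() / scale))       # assumed normalization

# Toy usage with nearly mirror-symmetric point sets:
rng = np.random.default_rng(0)
right = rng.normal(size=(100, 3)) + np.array([60.0, 0.0, 0.0])
left = right * np.array([-1.0, 1.0, 1.0]) + rng.normal(scale=0.5, size=(100, 3))
print(f"Symmetry Index ~ {symmetry_index(left, right):.3f}")
```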
Results
Statistical analysis revealed that there is no difference in the outcome symmetry between the two groups (t test for independent samples; p = 0.48, two-tailed).
Conclusion
This study’s findings provide a foundation for qualitative symmetry assessment in BRS using automatized digital anthropometry. In the present trial, no difference in the outcomes’ optical symmetry was detected between autologous and alloplastic approaches.
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
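Of the three baselines, TF-IDF is the easiest to reproduce. The sketch below ranks a small corpus against a seed abstract with scikit-learn; it is a minimal sketch of the baseline idea, not the consortium's evaluation code, and the toy corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "deep learning for classification of Barrett's esophagus endoscopy images"
corpus = [  # toy stand-ins for PubMed title/abstract texts
    "convolutional networks detect early esophageal adenocarcinoma in endoscopy",
    "survey of smart grid communication protocols",
    "saliency methods explain image classifiers in medical imaging",
]

# Fit TF-IDF on seed + candidates so both share one vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([seed] + corpus)

# Rank candidates by cosine similarity to the seed article.
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. (score {scores[idx]:.3f}) {corpus[idx]}")
```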
Based on previous work by our group on the manual annotation of visible Barrett's oesophagus (BE) cancer images, a real-time deep learning artificial intelligence (AI) system was developed. While an expert endoscopist conducts the endoscopic assessment of BE, our AI system captures random images from the real-time camera livestream and provides a global prediction (classification) as well as a dense prediction (segmentation), accurately differentiating between normal BE and early oesophageal adenocarcinoma (EAC). The AI system showed an accuracy of 89.9% on 14 cases with neoplastic BE.
The growing number of publications on the application of artificial intelligence (AI) in medicine underlines the enormous importance and potential of this emerging field of research.
In gastrointestinal endoscopy, AI has been applied to all segments of the gastrointestinal tract, most importantly to the detection and characterization of colorectal polyps. However, AI research has also been published on the stomach and esophagus, for both neoplastic and non-neoplastic disorders.
The various technical as well as medical aspects of AI, however, remain confusing, especially for non-expert physicians.
This physician-engineer co-authored review explains the basic technical aspects of AI and provides a comprehensive overview of recent publications on AI in gastrointestinal endoscopy. Finally, a basic insight is offered into understanding publications on AI in gastrointestinal endoscopy.
Background: For the surgical fixation of bone fractures of the human hand, so-called Kirschner wires (K-wires) are drilled through bone fragments. Because the drilling procedure is minimally invasive and offers no view of risk structures such as vessels and nerves, thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure intuitive operation, this phantom hand has to be realistic both in its position relative to the drill and in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important for palpating bony protrusions in order to determine the drilling start point and angle. Optical real-time tracking is used to transfer position and rotation to the training system.
Methods: A metamaterial developed in previous work is further improved by using a new unit cell. In this way, the amount of SUP within the volume can be increased and the tissue softened further. In addition, the human anatomy is transferred to the entire hand model: a subcutaneous fat layer and the penetration of air through pores into the volume simulate the shiftability of the skin layers. For optical tracking, a rotationally symmetrical marker attached to the phantom hand, together with a corresponding reference marker, is developed. To ensure trouble-free position transmission, various ways of applying the marker points are tested.
Results: Several cuboid and forearm sample prints led to a final 30-cm-long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. As validated in a study with an expert surgeon, this, in combination with a displacement of the uppermost skin layer, allows good palpability of the bones. Tracking of the hand marker in the dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system.
Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can easily be integrated into a surgical training system.
The chosen topic "Challenges for business informatics: integration and connexion" provoked contributions covering a very broad thematic spectrum. In addition to theoretical considerations and definitions of the term "connexion", which has certainly not yet been conclusively coined, there were also very practical contributions such as presentations of concrete prototypical development projects. This, too, is an indication of the lively business informatics landscape at German-speaking universities of applied sciences.
Business informatics (Wirtschaftsinformatik), as represented by the members of the Arbeitskreis Wirtschaftsinformatik at universities of applied sciences in German-speaking countries, rests on three pillars: computer science, business administration, and business informatics itself, the latter understood as systems for controlling business processes. This also corresponds to the recommendation of the Gesellschaft für Informatik and underlies the evaluations in accreditation procedures. These pillars almost inevitably entail a substantial heterogeneity of projects with industrial practice as well as in research, and the submissions to the conference confirmed this expectation. Examples include several papers on the "Datenbank SAP HANA", on (general) development topics such as the "Datenbank-Join" or "Internetsuche", and on IT systems such as "Fünf Jahre produktiver Einsatz eines mandantenfähigen Data-Warehouse-Systems". Other papers deal with more technical topics such as mobile devices ("Plattformunabhängiges, mobiles, integriertes Hochwassermeldesystem für Mittelhessen", "Electronic vs. Mobile Banking") and the use of the cloud ("Aufbau einer 'Private Cloud' mit OpenStack"). Several papers address IT support for logistics questions, such as "Ermittlung von prognoserelevanten Absatzzahlen in der Praxis eines großen Lebensmittelkonzerns", processes ("Business Process Excellence und der Zusammenhang mit dem Unternehmenserfolg"), and management ("Faktoren für nachhaltigen Unternehmenserfolg").
Business informatics deals with all topics found at the interface between computer science and business administration. Based on knowledge and understanding of business concepts and applications, it is concerned in particular with developing, introducing, and operating IT systems for business practice. A scientific conference entitled "Management und IT" starts from such a description of business informatics.
Business informatics has always been characterized by great thematic breadth, since it sees its task in the development and application of theories, concepts, models, methods, and tools for the analysis, design, and use of information systems. In doing so, it also draws on approaches from business administration and computer science, which it extends, integrates, and complements with its own specific approaches. We are pleased to present in this proceedings volume research and development work demonstrating that, and how, universities of applied sciences work on this task of business informatics in the context of logistics.

A fundamental task of business informatics specialists is the integration of heterogeneous software systems to support business processes: starting from a requirements specification, suitable software systems are procured for companies. The contribution "Verknüpfung von Softwarelösungen in einem IT-Logistiknetzwerk" pursues, as it were, the reverse question of how heterogeneous systems can be linked so that together they support business processes. While that contribution does not present the technical linkage via a dedicated architecture in detail, this is the subject of the contribution "Ein Informationssystem für das RFID-gestützte Behältermanagement", specifically the architectural style REST (Representational State Transfer). Process improvements can be achieved by using simulation and optimization models. Typically, these are operated as stand-alone systems and receive their data from an ERP, PPS, or, more generally, a decision support system, which requires an interface; a proposal is contained in the contribution "Konzeption einer Datenschnittstelle für Klassen von Simulations- und Optimierungsmodellen". Examples of solving optimization problems are described in the contributions "Einfuhrzollmanagement – neue Chancen und Herausforderungen für die Standortoptimierung" and "Gestaltung von Logistiknetzen mit der EDA-Heuristik". The core of the first contribution is the formulation of a suitable optimization model, which is solved with a computer-based optimization system; in the second, the solution of a very demanding optimization problem is approximated by a heuristic procedure. An important topic area of business informatics is the connection between business process modeling and software development; one approach is described in the contribution "Modellgetriebene Softwareentwicklung auf der Grundlage realer Geschäftsprozessoptimierung". Knowledge management is of generally increasing importance in business informatics. The contribution "Konzept, Methoden und offene Fragen für einen Web-Service zur Analyse des Wissensmanagement-Reifegrades in Logistikunternehmen" explains the relevance of this topic and its degree of implementation in logistics companies from various countries, and presents a framework concept for knowledge management in logistics companies in the form of a web-based infrastructure.
In today's companies, essentially all tasks are performed by, or supported by, application systems. Consequently, business application systems today essentially describe which tasks have to be solved in companies at all and which of them can be automated and thus performed or supported by software. Work on and with business application systems is characterized by great thematic breadth and demonstrates the use of approaches from business administration and computer science that is characteristic of business informatics. The editors therefore expected very heterogeneous topic proposals, and they were not disappointed. The topics finally selected present current development and application-oriented research projects on business processes, standard software, software development, and the operation of application systems. In doing so, they describe the current professional profile of business informatics specialists in industrial practice.
Companies use specially designed flow shops in order to satisfy specific demands. Products need to be transported from one station to the next by a crane, and the way this crane works excludes intermediate storage of a work piece. In addition, it restricts the set of feasible schedules even more than the no-buffer restriction discussed in the literature for the case of limited storage. Since this scheduling problem is integrated into the usual hierarchical planning, tardiness is minimised. A linear optimisation model is presented to provide a formal description of this NP-hard problem. It is also used to explain the performance of priority-rule-based heuristic solutions on small test problems. In detail, priority rules as well as a priority-rule-based branch and bound procedure are analysed; priority rules are considered because a priority rule is still the standard procedure for online scheduling in industrial practice. Out of the successful priority rules in the literature, the best one is identified by an extensive simulative investigation. An improved look-ahead is realised by a restricted search over all possible schedules.
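To make the role of priority rules concrete, the sketch below applies the earliest-due-date (EDD) rule to a plain permutation flow shop and computes total tardiness via the standard completion-time recursion. It deliberately omits the crane and no-intermediate-storage restrictions of the problem above, so it illustrates the general mechanism only; the job data are invented.

```python
# Minimal permutation flow shop with an EDD priority rule (illustrative data).
jobs = {            # job -> (processing times per machine, due date)
    "A": ([3, 2, 4], 10),
    "B": ([1, 4, 2], 7),
    "C": ([2, 2, 2], 12),
}

def total_tardiness(sequence: list[str]) -> int:
    """Completion times follow C[i][m] = max(C[i-1][m], C[i][m-1]) + p[i][m]."""
    n_machines = len(next(iter(jobs.values()))[0])
    prev = [0] * n_machines            # completion times of the previous job
    tardiness = 0
    for j in sequence:
        times, due = jobs[j]
        cur = []
        for m, p in enumerate(times):
            # A job starts on machine m once the machine is free and the job
            # has finished on machine m-1.
            start = max(prev[m], cur[m - 1] if m else 0)
            cur.append(start + p)
        tardiness += max(0, cur[-1] - due)
        prev = cur
    return tardiness

# EDD: dispatch jobs in order of increasing due date.
edd_sequence = sorted(jobs, key=lambda j: jobs[j][1])
print(edd_sequence, "total tardiness:", total_tardiness(edd_sequence))
```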
The following contribution examines the need to take account of employee workload, and of processing times that depend on it, in master production scheduling. It also shows that this topic currently represents a research gap. For this reason, a newly developed linear optimization model for master production scheduling is presented, which includes social criteria, together with a suitable case study. The results show that considerable deviations arise if employee workload and the processing times depending on it are ignored; an integration is therefore recommended. However, it also becomes clear that further research is necessary, especially on determining suitable exhaustion functions, analogous to the functions that exist for learning effects, for example.
In this contribution, a case study is used to work out the importance of master production scheduling (HPPLAN) for hierarchical production planning. To this end, an attempt is made to improve the result of aggregate planning (AGGRPLAN) by changing individual parameters so that it corresponds to the result of HPPLAN. In the course of the investigations, improvements could be achieved for individual planning situations, but no general solution could be found. In particular, the use of a suitable capacity reduction factor, combined with taking the lead time into account, improves the solution. When planning more than one product, however, it is difficult to find a suitable capacity reduction factor, since it depends on demand.
Companies often use specially designed production systems and change them from time to time. They produce small batches in order to satisfy specific demands with the least tardiness. This imposes high demands on high-performance scheduling algorithms that can be rapidly adapted to changes in the production system. As a solution, this paper proposes a generic approach: solutions were obtained using a widely used, commercially available tool for solving linear optimization models, which is available in an Enterprise Resource Planning system (in the SAP system, for example) or can be connected to it. In a real-world application of a flow shop with special restrictions, this approach was successfully used on a standard personal computer. Thus, the main implication is that optimal scheduling with a commercially available tool, incorporated in an Enterprise Resource Planning system, may be the best approach.
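As a sketch of how such a problem can be handed to an off-the-shelf LP/MIP solver, the model below minimizes total tardiness on a single machine using positional assignment variables in PuLP. The single-machine simplification and the three-job data set are assumptions for illustration; the flow shop's special crane restrictions would require additional constraints.

```python
import pulp

# job -> (processing time, due date); invented illustrative data
jobs = {"J1": (4, 6), "J2": (2, 5), "J3": (6, 14)}
K = range(len(jobs))

prob = pulp.LpProblem("min_total_tardiness", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(jobs), K), cat="Binary")  # job j at position k
T = pulp.LpVariable.dicts("T", K, lowBound=0)                  # tardiness at position k

for j in jobs:                       # every job occupies exactly one position
    prob += pulp.lpSum(x[j][k] for k in K) == 1
for k in K:                          # every position holds exactly one job
    prob += pulp.lpSum(x[j][k] for j in jobs) == 1

for k in K:
    completion = pulp.lpSum(jobs[j][0] * x[j][q] for j in jobs for q in K if q <= k)
    due = pulp.lpSum(jobs[j][1] * x[j][k] for j in jobs)
    prob += T[k] >= completion - due  # tardiness is the positive part of lateness

prob += pulp.lpSum(T[k] for k in K)  # objective: total tardiness
prob.solve(pulp.PULP_CBC_CMD(msg=False))

schedule = [j for k in K for j in jobs if pulp.value(x[j][k]) > 0.5]
print("sequence:", schedule, "total tardiness:", pulp.value(prob.objective))
```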
In this paper, the risks of a Smart Factory are examined and structured in order to be able to evaluate the status of the Smart Factory. The study thus serves as an overview of the technical components of a Smart Factory and the associated risks, taking a holistic view. The results show that the greatest need for action lies in the technological field; accordingly, the topics of standardization, information security, availability of IT infrastructure, availability of fast internet, and complex systems were prioritized. The organizational and financial risks, which also play an important role in a Smart Factory transformation, are addressed as well.
Background
Scaphoidectomy and midcarpal fusion can be performed using traditional fixation methods such as K-wires, staples, screws, or various dorsal (non)locking arthrodesis systems. The aim of this study is to test the Aptus four-corner locking plate and to compare the clinical findings with the data revealed by CT scans and semi-automated segmentation.
Methods
This is a retrospective review of eleven patients suffering from scapholunate advanced collapse (SLAC) or scaphoid non-union advanced collapse (SNAC) wrist who received a four-corner fusion between August 2011 and July 2014. The clinical evaluation consisted of measuring the range of motion (ROM), strength, and pain on a visual analogue scale (VAS). Additionally, the Disabilities of the Arm, Shoulder and Hand (QuickDASH) score and the Mayo Wrist Score were assessed. A computed tomography (CT) scan of the wrist was obtained six weeks postoperatively. After semi-automated segmentation of the CT scans, the models were post-processed and surveyed.
Results
At the six-month follow-up, the mean range of motion (ROM) of the operated wrist was 60°, consisting of 30° extension and 30° flexion. While pain levels decreased significantly, 54% of grip strength and 89% of pinch strength were preserved compared to the contralateral healthy wrist. Union could be detected in all CT scans of the wrist. While the X-ray images obtained postoperatively revealed no pathology, two user-related technical complications were found through the 3D analysis, which correlated with the clinical outcome.
Conclusion
Semi-automated segmentation and 3D analysis showed that the plate design lives up to the manufacturer's promises. Overall, this case series confirmed that the plate can compete with coexisting techniques in terms of clinical outcome, union, and complication rate.
Background: It is now common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry but also in the medical field, for example to create applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Because the view during an operation is limited, surgeons need to localize the underlying risk structures precisely by feeling specific bony protrusions of the human hand.
Methods: The goal of this experiment is to imitate human soft tissue with its haptics and elasticity for fabricating a realistic hand phantom, using only a dual-material 3D printer and a support-material-filled metamaterial between skin and bone. We present our workflow for generating lattice structures between hard bone and soft skin with iterative cube edge (CE) or cube face (CF) unit cells. Cuboid and finger-shaped samples, with and without an inner hard bone and in different lattice thicknesses, are constructed and 3D printed.
Results: The most elastic rubber-like material available is too firm to imitate soft tissue. By replacing part of the rubber in the inner volume with support material (SUP), objects become significantly softer. Without a metamaterial, the SUP can shift through the volume after disintegration, so the body loses its original shape. Although the CE design increases elasticity, it cannot preserve the shape either. In contrast, the CF design not only increases elasticity but also guarantees local confinement of the SUP; the body therefore retains its shape, and internal bones remain in their intended place. The unit cell size, lattice thickening, and skin thickness regulate the ratio of rubber material to SUP: test prints with a higher SUP and lower rubber percentage feel softer, and vice versa. This was confirmed by an evaluation with expert surgeons. Subjects judged pure rubber-like material to be too firm, and samples filled only with SUP or with a lattice structure in the CE design as unsuitable for imitating tissue. 3D-printed finger samples in the CF design were rated as realistic compared to the haptics of human tissue, with a well-palpable bone structure.
Conclusions: We developed a new dual-material 3D printing technique to imitate the soft tissue of the human hand with its haptic properties. Airy SUP is trapped within a lattice structure to soften the rubber-like 3D printing material, which makes it possible to reproduce a realistic replica of human hand soft tissue.
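The ratio regulation described above can be made tangible with a little geometry. Under the simplifying assumption that a cube face (CF) unit cell is a closed box whose six faces are printed with a uniform wall thickness and whose cavity is filled with SUP, the rubber fraction of the cell follows directly; the numbers are illustrative, not the paper's measured values.

```python
def cf_rubber_fraction(edge_mm: float, wall_mm: float) -> float:
    """Rubber volume fraction of an idealized closed CF unit cell.

    Model assumption: a cube of edge `edge_mm` whose six faces form walls of
    thickness `wall_mm`; the inner cavity (edge - 2*wall)^3 holds the SUP.
    """
    cavity = max(edge_mm - 2 * wall_mm, 0.0) ** 3
    return 1.0 - cavity / edge_mm**3

# Thinner walls or larger cells leave more room for SUP, hence a softer print:
for edge, wall in [(2.0, 0.3), (3.0, 0.3), (3.0, 0.2)]:
    print(f"a={edge} mm, t={wall} mm -> rubber {cf_rubber_fraction(edge, wall):.1%}")
```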