Fakultät Informatik und Mathematik
The adaptive input design (also called online redesign of experiments) for parameter estimation is very effective for the compensation of uncertainties in nonlinear processes. Moreover, it enables substantial savings in experimental effort and greater reliability in modeling.
We present theoretical details and experimental results from the real-time adaptive optimal input design for parameter estimation. The case study considers the separation of three benzoates by reverse-phase liquid chromatography. Following a receding horizon scheme, adaptive D-optimal input designs are generated for a precise determination of competitive adsorption isotherm parameters. Moreover, numerical techniques for the regularization of arising ill-posed problems, e.g., due to scarce measurements, lack of prior information about parameters, low sensitivities, and parameter correlations, are discussed. The estimated parameter values are successfully validated by Frontal Analysis, and the benefits of optimal input designs are highlighted when compared to various standard/heuristic input designs in terms of parameter accuracy and precision.
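For reference, the D-optimal design criterion mentioned above is conventionally stated as maximizing the determinant of the Fisher information matrix; a generic formulation (not quoted from the article, with symbols chosen here for illustration) is

    u^* = \arg\max_{u \in \mathcal{U}} \; \det F(\theta, u),
    \qquad
    F(\theta, u) = \sum_{k} S_k^{\top}(\theta, u)\, \Sigma_k^{-1}\, S_k(\theta, u),

where S_k = \partial y_k / \partial\theta are the sensitivities of the model outputs with respect to the parameters and \Sigma_k is the measurement covariance. In the receding horizon (adaptive) setting, this problem is re-solved online as new measurements refine the current estimate of \theta.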
In this work, a method for reducing the number of degrees of freedom in online optimal dynamic experiment design problems for systems described by differential equations is proposed. The online problems are posed such that only the inputs which extend an operation policy resulting from an experiment designed offline are optimized. This is done by formulating them as multiple experiment designs, considering explicitly the information of the experiment designed offline and possible time delays unknown a priori. The performance of the method is shown for the case of the separation of isopropanolol isomers in a Simulated Moving Bed plant.
The rising adoption of NoSQL technology in enterprises leads to a heterogeneous landscape of different data stores. Different stores provide distinct advantages and disadvantages, making it necessary for enterprises to operate multiple systems for specific purposes. The resulting polyglot persistence is difficult to handle for developers, since some data needs to be replicated and aggregated between different stores and within the same store. Currently, there are no uniform tools to perform these data transformations, since the stores feature different APIs and data models. In this paper, we present the transformation language NotaQL, which allows cross-system data transformations. These transformations are output-oriented, meaning that the structure of a transformation script is similar to that of the output. In addition, NotaQL follows an aggregation-centric approach that makes aggregation operations as simple as possible.
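As a rough illustration of the output-oriented style described above, an aggregating transformation might be written as follows; the syntax is approximated from the description given here, the attribute names are hypothetical, and the exact NotaQL grammar may differ:

    OUT._id   <- IN.region,
    OUT.total <- SUM(IN.price)

The script mirrors the structure of the result: the grouping key and the aggregate are written exactly where they will appear in the output.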
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
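For readers unfamiliar with the task, the following minimal Python sketch reproduces the commonly cited Sugar Factory dynamics (production = 2 × workforce − previous production + noise, clipped to the range 1–12); the exact variant and parameters used in the article may differ, and the naive policy shown is purely illustrative:

    import random

    def sugar_factory_step(prev_output: int, workers: int) -> int:
        """One step of the Sugar Factory dynamics (commonly cited formulation):
        output = 2 * workers - previous output + noise, clipped to [1, 12].
        Units: hundreds of workers, thousands of tons of sugar."""
        noise = random.choice([-1, 0, 1])
        return max(1, min(12, 2 * workers - prev_output + noise))

    # Toy simulation: a naive policy that always assigns 9 (hundred) workers
    # while aiming at a production target of 9 (thousand tons).
    output, on_target = 6, 0
    for _ in range(40):
        output = sugar_factory_step(output, workers=9)
        on_target += (8 <= output <= 10)  # count trials within +/- 1 of the target
    print(f"on-target trials: {on_target}/40")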
We study the limited data problem of the spherical Radon transform in two- and three-dimensional spaces with general acquisition surfaces. In such situations, it is known that the application of filtered-backprojection reconstruction formulas might generate added artifacts and degrade the quality of reconstructions. In this article, we explicitly analyze a family of such inversion formulas, depending on a smoothing function that vanishes to order k on the boundary of the acquisition surfaces. We show that the artifacts are k orders smoother than their generating singularity. Moreover, in two-dimensional space, if the generating singularity is conormal and satisfies a generic condition, then the artifacts are even k+1/2 orders smoother than the generating singularity. Our analysis for three-dimensional space contains an important idea of lifting up the space. We also explore the theoretical findings in a series of numerical experiments. Our experiments show that a good choice of the smoothing function leads to a significant improvement of reconstruction quality.
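In the notation most commonly used for this transform (a generic definition consistent with the abstract, not quoted from the article), the spherical Radon transform integrates f over spheres centered on the acquisition surface \mathcal{S}:

    (Rf)(p, r) = \int_{S^{n-1}} f(p + r\,\omega)\, d\sigma(\omega), \qquad p \in \mathcal{S},\ r > 0,

where d\sigma is the surface measure on the unit sphere; limited data means that the centers p (and possibly the radii r) are restricted to a subset of their natural range.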
The Spoken Wikipedia project unites volunteer readers of encyclopedic entries. Their recordings make encyclopedic knowledge accessible to persons who are unable to read (due to alexia or visual impairment, or because their sight is otherwise occupied, e.g., while driving). However, on Wikipedia, recordings are available only as raw audio files that can be consumed linearly, without the possibility of targeted navigation or search. We present a reading application that uses an alignment between the recording, the text, and the article structure and that allows users to navigate spoken articles through a graphical or voice-based user interface (or a combination thereof). We present the results of a usability study in which we compare the two interaction modalities. We find that both types of interaction enable users to navigate articles and to find specific information much more quickly than with a sequential presentation of the full article. In particular, when the VUI is not restricted by speech recognition and understanding issues, this interface is on par with the graphical interface and thus a real option for browsing Wikipedia without the need for vision or reading.
We present a corpus of time-aligned spoken data of Wikipedia articles as well as the pipeline that allows generating such corpora for many languages. There are initiatives to create and sustain spoken Wikipedia versions in many languages; hence the data is freely available, grows over time, and can be used for automatic corpus creation. Our pipeline automatically downloads and aligns this data. The resulting German corpus currently totals 293 h of audio, of which we align 71 h in full sentences and another 86 h of sentences with some missing words. The English corpus consists of 287 h, for which we align 27 h in full sentences and 157 h with some missing words. The results are publicly available.
Constantly turning pages of sheet music is a recurring problem for musicians. It is often solved by an assistant of the musician, the so-called page turner. Many musicians, however, only rarely have this support while practicing. In this article, we present an application for mobile devices that supports turning the pages of piano scores in several different ways. In a study with professional musicians and piano students, these approaches were weighed against each other. The results show that computer-assisted page turning has advantages over conventional page turning.
Companies often use specially designed production systems and change them from time to time. They produce small batches in order to satisfy specific demands with the least tardiness. This imposes high demands on high-performance scheduling algorithms that can be rapidly adapted to changes in the production system. As a solution, this paper proposes a generic approach: solutions were obtained using a widely used, commercially available tool for solving linear optimization models, which is available in an Enterprise Resource Planning System (in the SAP system, for example) or can be connected to it. In a real-world application of a flow shop with special restrictions, this approach is used successfully on a standard personal computer. Thus, the main implication is that optimal scheduling with a commercially available tool, incorporated in an Enterprise Resource Planning System, may be the best approach.
NoSQL database systems have become very popular in recent years, and there are good reasons for using them: An attractive property of many systems is their schema flexibility, which offers advantages especially in agile application development. Through horizontal scalability, NoSQL database systems enable the efficient processing of large amounts of data. Some systems that are designed as data backends for interactive applications can also serve highly frequent user requests. These advantages are offset by a number of disadvantages, which give rise to new challenges for application development: Missing standards in query languages make it difficult to develop applications that are independent of the database system. Schema flexibility in the database management system means that the responsibility for schema management is shifted into the application. This article identifies the essential challenges and presents solution approaches from research and practice. The focus is on schema-flexible NoSQL database systems with an aggregate-oriented data model, i.e., key-value database systems, document-oriented database systems, and column-family database systems.
NoSQL data stores have become very popular over the last years, and good reasons justify their application: One attractive feature of many systems is their schema flexibility, which may be preferable in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also handle highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards in query languages make it difficult to build data-store-independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges as well as solution approaches from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i.e., key-value data management systems as well as document and column-family data management systems.
With the increasing centralization of resources in IT infrastructure and the growing number of cloud services, database management systems (DBMS) will be outsourced more and more to Infrastructure-as-a-Service (IaaS) providers. Outsourcing entire databases, or the computing power for processing Big Data, to an external provider also means that the provider has full access to the information contained in the database. In this article we propose a feasible solution based on Order-Preserving Encryption (OPE) and further state-of-the-art encryption methods to sort and process Big Data on external resources without exposing the unencrypted data to the IaaS provider. We also introduce a proof-of-concept client for Google BigQuery as an example IaaS provider.
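To make the order-preserving property concrete, here is a deliberately insecure toy sketch in Python (purely illustrative and not the scheme used in the article): any strictly monotone keyed mapping preserves comparisons, which is what lets the untrusted side sort and range-filter ciphertexts without seeing the plaintexts.

    import hmac, hashlib

    def toy_ope_encrypt(value: int, key: bytes, spread: int = 1000) -> int:
        """Toy order-preserving 'encryption' of non-negative integers.
        Each value is mapped to a strictly increasing pseudo-random position,
        so x < y implies E(x) < E(y). For illustration only -- NOT secure."""
        cipher = 0
        for i in range(value + 1):
            digest = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
            cipher += 1 + int.from_bytes(digest[:4], "big") % spread  # positive gap
        return cipher

    key = b"demo-key"
    plaintexts = [3, 17, 42]
    ciphertexts = [toy_ope_encrypt(p, key) for p in plaintexts]
    assert ciphertexts == sorted(ciphertexts)  # comparisons survive encryption
    print(ciphertexts)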
Automatic speech recognition (ASR) is not only becoming increasingly accurate, but also increasingly adapted to producing timely, incremental output. However, overall accuracy and timeliness alone are insufficient when it comes to interactive dialogue systems, which require stability in the output and responsivity to the utterance as it is unfolding. Furthermore, for a dialogue system to deal with phenomena such as disfluencies and to achieve a deep understanding of user utterances, these phenomena should be preserved or marked up for use by downstream components, such as language understanding, rather than being filtered out. Similarly, word timing can be informative for analyzing deictic expressions in a situated environment and should be available for analysis. Here we investigate the overall accuracy and incremental performance of three widely used systems and discuss their suitability for the aforementioned perspectives. From the differing performance along these measures, we provide a picture of the requirements for incremental ASR in dialogue systems and describe freely available tools for using and evaluating incremental ASR.
Most modern and post-modern poems have developed a post-metrical idea of lyrical prosody that employs rhythmical features of everyday language and prose instead of a strict adherence to rhyme and metrical schemes. This development is subsumed under the term free verse prosody. We present our methodology for the large-scale analysis of modern and post-modern poetry in both their written form and as spoken aloud by the author. We employ language processing tools to align text and speech, to generate a null-model of how the poem would be spoken by a naïve reader, and to extract contrastive prosodic features used by the poet. On these, we intend to build our model of free verse prosody, which will help to understand, differentiate and relate the different styles of free verse poetry. We plan to use our processing scheme on large amounts of data to iteratively build models of styles, to validate and guide manual style annotation, to identify further rhythmical categories, and ultimately to broaden our understanding of free verse poetry. In this paper, we report on a proof-of-concept of our methodology using smaller amounts of poems and a limited set of features. We find that our methodology helps to extract differentiating features in the authors’ speech that can be explained by philological insight. Thus, our automatic method helps to guide the literary analysis and this in turn helps to improve our computational models.
Predictive incremental parsing produces syntactic representations of sentences as they are produced, e.g. by typing or speaking. In order to generate connected parses for such unfinished sentences, upcoming word types can be hypothesized and structurally integrated with already realized words. For example, the presence of a determiner as the last word of a sentence prefix may indicate that a noun will appear somewhere in the completion of that sentence, and the determiner can be attached to the predicted noun. We combine the forward-looking parser predictions with backward-looking N-gram histories and analyze in a set of experiments the impact on language models, i.e. stronger discriminative power but also higher data sparsity. Conditioning N-gram models, MaxEnt models or RNN-LMs on parser predictions yields perplexity reductions of about 6%. Our method (a) retains online decoding capabilities and (b) incurs relatively little computational overhead which sets it apart from previous approaches that use syntax for language modeling. Our method is particularly attractive for modular systems that make use of a syntax parser anyway, e.g. as part of an understanding pipeline where predictive parsing improves language modeling at no additional cost.
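For reference, the perplexity figures reported above follow the standard definition

    \mathrm{PPL} = \exp\Big(-\tfrac{1}{N}\sum_{i=1}^{N} \log P(w_i \mid h_i)\Big),

where h_i is the conditioning context; in the approach described here, h_i combines the backward-looking N-gram history with the forward-looking parser prediction (e.g., a hypothesized upcoming noun).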
When an incremental release of a web application is deployed, the structure of data already persisted in the production database may no longer match what the application code expects. Traditionally, eager schema migration is called for, where all legacy data is migrated in one go. With the growing popularity of schema-flexible NoSQL data stores, lazy forms of data migration have emerged: Legacy entities are migrated on the fly, one at a time, when they are loaded by the application. In this demo, we present Datalution, a tool demonstrating the merits of lazy data migration. Datalution can apply chains of pending schema changes, due to its Datalog-based internal representation. The Datalution approach thus ensures that schema evolution, as part of continuous deployment, is carried out correctly.
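A minimal sketch of the lazy-migration idea with MongoDB and pymongo (illustrative only: Datalution itself uses a Datalog-based internal representation, and the collection, field names, and operations below are hypothetical):

    from pymongo import MongoClient

    CURRENT_VERSION = 3

    def rename_field(doc, old, new):
        doc = dict(doc)
        doc[new] = doc.pop(old, None)
        return doc

    # Hypothetical pending schema changes, keyed by the version they upgrade FROM.
    MIGRATIONS = {
        1: lambda doc: rename_field(doc, "name", "fullname"),           # rename a field
        2: lambda doc: {**doc, "premium": doc.get("premium", False)},   # add field with default
    }

    def load_entity(coll, entity_id):
        """Load an entity; if it is a legacy version, apply all pending
        migrations on the fly and write the upgraded document back."""
        doc = coll.find_one({"_id": entity_id})
        if doc is None:
            return None
        version = doc.get("_schemaVersion", 1)
        while version < CURRENT_VERSION:
            doc = MIGRATIONS[version](doc)
            version += 1
        if doc.get("_schemaVersion", 1) != CURRENT_VERSION:
            doc["_schemaVersion"] = CURRENT_VERSION
            coll.replace_one({"_id": entity_id}, doc)  # persist the migrated entity
        return doc

    users = MongoClient()["appdb"]["users"]
    user = load_entity(users, 42)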
Social network analysis is extremely well supported by the R community and is routinely used for studying the relationships between people engaged in collaborative activities. While there has been rapid development of new approaches and metrics in this field, the challenging question of validity (how well insights derived from social networks agree with reality) is often difficult to address. We propose the use of several R packages to generate interactive surveys that are specifically well suited for validating social network analyses. Using our web-based survey application, we were able to validate the results of applying community-detection algorithms to infer the organizational structure of software developers contributing to open-source projects.
In big data software engineering, the schema flexibility of NoSQL document stores is a major selling point: When the document store itself does not actively manage a schema, the data model is maintained within the application. Just like object-relational mappers for relational databases, object-NoSQL mappers are part of professional software development with NoSQL document stores. Some mappers go beyond merely loading and storing Java objects: Using dedicated evolution annotations, developers may conveniently add, remove, or rename attributes from stored objects, and also conduct more complex transformations. In this paper, we analyze the dissemination of this technology in Java open source projects. While we find evidence on GitHub that evolution annotations are indeed being used, developers do not employ them so much for evolving the data model, but to solve different tasks instead. Our observations trigger interesting questions for further research.
Background:
The aim of the study was to compare the two irradiation modes with flattening filter (FF) and without flattening filter (FFF) for three different treatment techniques for simultaneous integrated boost radiation therapy of patients with right-sided breast cancer.
Methods:
An Elekta Synergy linac with an Agility collimating device was used to simulate the treatment of 10 patients. Six plans were generated in Monaco 5.0 for each patient, treating the whole breast and a simultaneous integrated boost (SIB) volume: intensity-modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT), and a tangential arc VMAT (tVMAT), each with and without flattening filter. Plan quality was assessed considering target coverage and sparing of the contralateral breast, the lungs, the heart, and the normal tissue. All plans were verified by a 2D ionisation chamber array, and delivery times were measured and compared. The Wilcoxon test was used for statistical analysis with a significance level of 0.05.
Results:
The significantly best target coverage and homogeneity were achieved using VMAT FFF with V95% = (98.7 +/- 0.8) % and HI = (8.2 +/- 0.9) % for the SIB and V95% = (98.3 +/- 0.7) % for the PTV, whereas tVMAT showed the significantly lowest doses to the contralateral organs at risk, with a D-mean of (0.7 +/- 0.1) Gy for the contralateral lung, (1.0 +/- 0.2) Gy for the contralateral breast, and (1.4 +/- 0.2) Gy for the heart. All plans passed the gamma evaluation with a mean passing rate of (99.2 +/- 0.8) %. Delivery times were significantly reduced for VMAT and tVMAT but increased for IMRT when FFF was used. The lowest delivery times were observed for tVMAT FFF with (1:20 +/- 0:07) min.
Conclusion:
Balancing target coverage, OAR sparing, and delivery time, VMAT FFF and tVMAT FFF are considered the preferable of the investigated treatment options for simultaneous integrated boost irradiation of right-sided breast cancer with the combination of an Elekta Synergy linac with Agility and the treatment planning system Monaco 5.0.
We consider the generalized Radon transform (defined in terms of smooth weight functions) on hyperplanes in R-n. We analyze general filtered backprojection type reconstruction methods for limited data with filters given by general pseudodifferential operators. We provide microlocal characterizations of visible and added singularities in R-n and define modified versions of reconstruction operators that do not generate added artifacts. We calculate the symbol of our general reconstruction operators as pseudodifferential operators and provide conditions for the filters under which the reconstruction operators are elliptic for the visible singularities. If the filters are chosen according to those conditions, we show that almost all visible singularities can be recovered reliably. Our work generalizes the results for the classical line transforms in R-2 and the classical reconstruction operators (that use specific filters). In our proofs, we employ a general paradigm that is based on the calculus of Fourier integral operators. Since this technique does not rely on explicit expressions of the reconstruction operators, it enables us to analyze more general imaging situations.
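In common notation (a generic definition consistent with the abstract, not quoted from the article), the weighted hyperplane transform is

    R_w f(\theta, s) = \int_{x \cdot \theta = s} w(x, \theta)\, f(x)\, dx, \qquad \theta \in S^{n-1},\ s \in \mathbb{R},

with a smooth, positive weight w; the classical Radon transform is recovered for w \equiv 1, and limited data corresponds to restricting the directions \theta and offsets s that are available.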
Software evolution is a fundamental process that transcends the realm of technical artifacts and permeates the entire organizational structure of a software project. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. By applying a network-analytic approach, we found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which core developers are hierarchically arranged and peripheral developers are not. Our results suggest that the organizational structure of large projects is constrained to evolve towards a state that balances the costs and benefits of developer coordination, and the mechanisms used to achieve this state depend on the project’s scale.
Modifications to open-source software (OSS) are often provided in the form of "patch stacks" -- sets of changes (patches) that modify a given body of source code. Maintaining patch stacks over extended periods of time is problematic when the underlying base project changes frequently. This necessitates a continuous and engineering-intensive adaptation of the stack. Nonetheless, long-term maintenance is an important problem for changes that are not integrated into projects, for instance when they are controversial or only of value to a limited group of users.
We present and implement a methodology to systematically examine the temporal evolution of patch stacks, track non-functional properties like integrability and maintainability, and estimate the eventual economic and engineering effort required to successfully develop and maintain patch stacks. Our results provide a basis for quantitative research on patch stacks, including statistical analyses and other methods that lead to actionable advice on the construction and long-term maintenance of custom extensions to OSS.
This paper explores scalable implementation strategies for carrying out lazy schema evolution in NoSQL data stores. For decades, schema evolution has been an evergreen in database research. Yet new challenges arise in the context of cloud-hosted data backends: With all database reads and writes charged by the provider, migrating the entire data instance eagerly into a new schema can be prohibitively expensive. Thus, lazy migration may be more cost-efficient, as legacy entities are only migrated in case they are actually accessed by the application. Related work has shown that the overhead of migrating data lazily is affordable when a single evolutionary change is carried out, such as adding a new property. In this paper, we focus on long-term schema evolution, where chains of pending schema evolution operations may have to be applied. Chains occur when legacy entities written several application releases back are finally accessed by the application. We discuss strategies for dealing with chains of evolution operations, in particular, the composition into a single, equivalent composite migration that performs the required version jump. Our experiments with MongoDB focus on scalable implementation strategies. Our lineup further compares the number of write operations, and thus, the operational costs of different data migration strategies.
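A small Python sketch of the chain-composition idea discussed above (the evolution operations are hypothetical and the code is not the paper's implementation): instead of applying each pending operation as a separate read-modify-write, the chain is folded into one composite function that performs the whole version jump in a single pass.

    from functools import reduce

    # Hypothetical pending evolution operations, oldest release first.
    def add_status(doc):   return {**doc, "status": doc.get("status", "active")}
    def rename_mail(doc):  d = dict(doc); d["email"] = d.pop("mail", None); return d
    def drop_legacy(doc):  return {k: v for k, v in doc.items() if k != "legacyFlag"}

    PENDING = [add_status, rename_mail, drop_legacy]

    def compose(operations):
        """Fold a chain of evolution operations into one composite migration."""
        return reduce(lambda f, g: (lambda doc: g(f(doc))), operations, lambda doc: doc)

    composite = compose(PENDING)
    legacy_entity = {"_id": 7, "mail": "a@example.org", "legacyFlag": True}
    print(composite(legacy_entity))
    # -> {'_id': 7, 'status': 'active', 'email': 'a@example.org'}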
Survey practitioners regularly face the task to draw a sample from a (sub-) population for which no sampling frame exists. Indirect sampling might be a way out in such situations, given that connections exist between the target population and another population for which probability sampling is feasible. While the theory of indirect sampling originated in the context of household panel studies, a wider area of applications emerged during the last decade. We first give a short review of the theory of indirect sampling, show that estimators from indirect samples might have smaller variance than the corresponding direct estimators (contrary to some claims in the literature), summarize recent applications and discuss some issues that are relevant for applying indirect sampling in practice. We also present some theory for unbiased estimation after an additional subsampling stage that was necessary for sampling kindergarten children in the German National Educational Panel Study (NEPS).
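For orientation, the generalized weight share method that underlies indirect sampling assigns each target unit i a weight built from the design weights of the linked frame units (a standard textbook formulation, not quoted from the article):

    w_i = \frac{1}{L_i} \sum_{j \in s^A} \frac{l_{ji}}{\pi_j},
    \qquad L_i = \sum_{j \in U^A} l_{ji},
    \qquad \hat{Y}^B = \sum_{i} w_i\, y_i,

where l_{ji} counts the links between frame unit j and target unit i, \pi_j is the inclusion probability of j in the direct sample s^A, and L_i is the total number of links of i. The estimator is unbiased provided every target unit has at least one link (L_i > 0); an additional subsampling stage, as mentioned above, requires the weights to be adjusted accordingly.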
In this paper we present a method to efficiently cull large parts of a scene prior to shadow map computations for many-lights settings. Our method is agnostic to how the light sources are generated and thus works with any method of light distribution. Our approach is based on previous work in culling for ray traversal to speed up area light sampling. Applied to shadow mapping, our method works for high- and low-resolution shadow maps and, in contrast to previous work on many-lights rendering, neither entails scene approximations nor imposes limits on light range, while still providing significant gains in performance. In contrast to standard culling methods, shadow map rendering itself is sped up by a factor of 1.5 to 8.6, while the speedup of shadow map rendering, lookup, and shading together ranges from 1.1 to 4.2.
We present a method to compute post-processing depth of field (DOF) that produces more accurate results than previous approaches. Our method is based on existing approaches, namely DOF rendering by splatting and fast, tile-based particle accumulation. Using tile-based accumulation allows us to correctly sort out-of-focus pixels and apply proper alpha blending to avoid artifacts commonly encountered with filter-based depth-of-field methods.
Parametric surfaces are an essential modeling tool in computer-aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a priori Bounding Volume Hierarchies (BVHs) on parametric patches that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can handle very complex scenes even on the memory-restricted GPU at competitive render times.
Over the last decade, a number of high-performance, domain-specific languages (DSLs) have emerged to help tackle the problem of ever-diversifying hardware and software employed in fields such as HPC (high-performance computing), medical imaging, computer vision, etc. Most of these approaches rely on frameworks such as LLVM for efficient code generation and, to reach a broader audience, take input in C-like form. In this paper we present a DSL for image processing that is on par with competing methods, yet its design principles are in strong contrast to previous approaches. Our tool chain is much simpler, easing the burden on implementors and maintainers, while our output, C-family code, is both adaptable and shows high performance. We believe that our methodology provides a faster evaluation of language features and abstractions in the domains above.
In this paper we show how a feature-oriented development methodology can be exploited to investigate a large set of possible implementations for a real-time rendering algorithm. We rely on previously published work to explore potential dimensions of the implementation space of an algorithm to be run on a graphics processing unit (GPU) using CUDA. The main contribution of our paper is to provide a clear example of the benefit to be gained from existing methods in a domain that only slowly moves toward higher-level abstractions. Our method employs a generative approach and makes heavy use of Common Lisp macros before the code is ultimately transformed to CUDA.
Background
The aim of this study was to investigate if the flattening filter free mode (FFF) of a linear accelerator reduces the excess absolute risk (EAR) for second cancer as compared to the flat beam mode (FF) in simultaneous integrated boost (SIB) radiation therapy of right-sided breast cancer.
Patients and methods
Six plans were generated on CT data of 10 patients, treating the whole breast to 50.4 Gy and a SIB volume to 63 Gy: intensity-modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT), and a tangential arc VMAT (tVMAT), each with and without flattening filter. The EAR was calculated for the contralateral breast and the lungs from dose-volume histograms (DVH) based on the linear-exponential, the plateau, and the full mechanistic dose-response model. Peripheral low-dose measurements were performed to compare the EAR in more distant regions such as the thyroid and the uterus.
Results
FFF reduces the EAR significantly in the contralateral and peripheral organs for tVMAT and in the peripheral organs for VMAT. No reduction was found for IMRT. The lowest EAR for the contralateral breast and lung was achieved with tVMAT FFF, reducing the EAR by 25 % and 29 % as compared to tVMAT FF, and by 44 % to 58 % as compared to VMAT and IMRT in both irradiation modes. tVMAT FFF also showed the lowest peripheral dose, corresponding to the lowest EAR in the thyroid and the uterus.
Conclusion
The use of FFF mode allows reducing the EAR significantly when tVMAT is used as the treatment technique. When second cancer risk is a major concern, tVMAT FFF is considered the preferred treatment option in SIB irradiation of right-sided breast cancer.
Background and Objective: Even today, identifying an exam that can diagnose a patient with Parkinson's disease (PD) accurately enough is not an easy task. Although a number of techniques have been used in search of a more precise method, detecting such illness and measuring its level of severity early enough to postpone its side effects are not straightforward. In this work, after reviewing a considerable number of works, we conclude that only a few techniques address the problem of PD recognition by means of micrography using computer vision techniques. Therefore, we consider the problem of aiding automatic PD diagnosis by means of spirals and meanders filled out in forms, which are then compared with the template for feature extraction.
Methods: In our work, both the template and the drawings are identified and separated automatically using image processing techniques, thus needing no user intervention. Since we have no registered images, the idea is to obtain a suitable representation of both template and drawings using the very same approach for all images in a fast and accurate manner.
Results: The results have shown that we can obtain very reasonable recognition rates (approximately 67%), with the most accurate class being the one represented by the patients, which outnumbered the control individuals in the proposed dataset.
Conclusions: The proposed approach seemed to be suitable for aiding in automatic PD diagnosis by means of computer vision and machine learning techniques. Also, meander images play an important role, leading to higher accuracies than spiral images. We also observed that the main problem in detecting PD is the patients in the early stages, who can draw near-perfect objects, which are very similar to the ones made by control patients.
Smart Workbench
(2016)
To improve the quality of life and employability of people with disabilities or older people, we investigated to what extent gamification applications are suitable for teaching a gesture-control interface. The experiment is based on an intelligent workplace (Smart Workbench, SWoB) that supports people in manual handling tasks and executes certain production processes in a partially automated way. Before the system can be operated, an introduction is required, which can be given either by a human instructor or by a learning tutorial with gamification elements to increase motivation. The study examined which form of instruction is more readily accepted or rejected by different people, and for which reasons.
Programming GPUs with low-level libraries like CUDA and OpenCL is a tedious and error-prone task. Fortunately, algorithmic skeletons can shield developers from the complexity of parallel programming by encapsulating common parallel computing patterns. However, this simplification typically constrains programmers to write their applications using the GPU library employed by the skeleton implementation. In this work, we combine skeletal programming with model-driven software development (MDSD) to increase the freedom of choice regarding the employed GPU library instead of leaving all technical decisions to the skeleton implementation. We present a code generator that transforms models comprising skeletons, their input data, and input functions to parallel C++ code while taking care of data-offset calculations. The generator has been tested using different GPU and multi-GPU communication libraries such as Thrust and CUDA-MPI. We demonstrate our novel approach to GPU programming with two example applications: affinity propagation and n-body simulation.
In this research, we investigate the possibility of applying ranking task activities in teaching and learning software engineering courses. We introduce three types of ranking tasks, namely conceptual, contextual, and sequential ranking questions, which cover most core topics in the course, such as requirements analysis, architecture design, and quality validation. We have also run experiments on a group of students to see whether ranking tasks could increase their conceptual knowledge in specific areas. Assessments were given in order to evaluate the effectiveness of this activity, showing a clear increase in complex conceptual understanding.
The project network ' as a focus of a sustainable restoration of historically significant urban quarters of the early 20th century' (RENARHIS) serves the development of an energy-oriented modernization concept that takes into account the building fabric of historic urban quarters worth preserving, with the aim of exploiting synergy effects between a decentralized (self-sufficient) renewable energy supply and the specific energy demand of a historic ensemble architecture. As an example, sustainable restoration concepts were developed for the 'Plato-Wild-Ensemble' in Regensburg's 'Kasernenviertel' district, dating from the 1920s.
Medical confidentiality is very important because it forms the foundation of trust between doctors and patients. In modern health services, data protection is essential. But there are also other important security objectives, such as the integrity and authenticity of medical data and health services, as their breach can potentially lead to life-threatening conditions. State-of-the-art security mechanisms are necessary to protect medical data and services and to prevent attacks commonly referred to as “hacking”. They should in particular include cryptography, since one can usually rely on mathematics more than on software security and access control mechanisms. As patients normally cannot assess the security of a service, security audits or certifications of health services should be provided to generate trust and confidence.
In recent years, considerable interest has arisen in scheduling problems with technological restrictions. For example, no-buffer flow-shop scheduling problems are well investigated. Here, a real-world flow shop with a transportation restriction is considered. It has to produce small batches, very often with a lot size of one, with short response times. Thus, scheduling algorithms are needed to ensure that, under the constraint of a high average load of the flow shop, the due dates of the production orders are met. This transportation restriction reduces the set of feasible schedules even more than the no-buffer restrictions discussed in the literature for the case of limited storage. Still, this problem is NP-hard. Due to the transportation restriction, the duration of a job A on the flow shop depends on the other jobs processed on the flow shop in the same time frame, which is called a cycle. Realistic processing times are obtained by simulating the scheduling of A, including the subsequent jobs until A has left the flow shop. Using such realistic processing times instead of net processing times improves the priority rules by around 16%. There are pools of jobs where a large variance of the cycle times is beneficial and pools of jobs where a small variance is better. This is partially detected by a genetic algorithm, which improves the performance by another 34%.
Production planning and production control mainly focus on optimising the entire production system of a company. On the basis of hierarchical planning as a suitable method for solving this task, this paper shows - besides the economic dimension taken into account so far - that there are also social and ecological effects which will have to be considered in the process of planning. For this purpose, we would like to indicate here which social and ecological parameters can be or have already been taken into account for master production scheduling, for lot sizing and resource scheduling. As a result, an overview has been created which presents the existing concepts of sustainable production planning and production control as well as the existing deficits regarding the sustainability perspective.
Production planning focuses on optimizing a company's production system. Based on hierarchical production planning, this article shows that, in addition to economic and ecological aspects, social effects in particular must also be taken into account in decision making. To this end, it is set out for master production scheduling, lot sizing, and resource scheduling which social parameters can be or have already been taken into account. Furthermore, a modeling approach for master production scheduling is presented that aims at closing the identified research gaps.
Rising energy prices – particularly over the last decade – pose a new challenge for the manufacturing industry. Reactions to climate change, such as the advancement of renewable energies, raise the expectation of further price increases and fluctuations. In the manufacturing industry, production planning and control can have a significant influence on in-plant energy consumption. In this paper, we develop a scheduling method as a linear optimization model with the objective of minimizing energy costs in a job shop production system.
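A generic form of such an objective (an illustrative sketch, not the model from the paper): with time-varying electricity prices p_t and machine power demands e_m, the scheduling model minimizes

    \min \; \sum_{t} p_t \sum_{m} e_m\, x_{m,t},

where the binary variable x_{m,t} indicates that machine m is processing during period t, subject to the usual job-shop constraints (operation precedence, machine capacity, and release and due dates).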