Fakultät Informatik und Mathematik
Real-time computational speed and a high degree of precision are requirements for computer-assisted interventions. Applying a segmentation network to a medical video processing task can introduce significant inter-frame prediction noise. Existing approaches can reduce inconsistencies by including temporal information but often impose requirements on the architecture or dataset. This paper proposes a method to include temporal information in any segmentation model and, thus, a technique to improve video segmentation performance without alterations during training or additional labeling. With Motion-Corrected Moving Average, we refine the exponential moving average between the current and previous predictions. Using optical flow to estimate the movement between consecutive frames, we can shift the prior term in the moving-average calculation to align with the geometry of the current frame. The optical flow calculation does not require the output of the model and can therefore be performed in parallel, leading to no significant runtime penalty for our approach. We evaluate our approach on two publicly available segmentation datasets and two proprietary endoscopic datasets and show improvements over a baseline approach.
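The update at the heart of this approach is compact enough to sketch. The following Python snippet (a minimal sketch, assuming OpenCV's Farnebäck optical flow; the function name, the EMA weight alpha, and the border handling are our illustrative choices, not the authors' implementation) warps the previous moving average into the geometry of the current frame before applying the standard EMA:

```python
import cv2
import numpy as np

def motion_corrected_moving_average(prev_frame, cur_frame, prev_avg, cur_pred, alpha=0.8):
    """Sketch of a motion-corrected EMA over per-pixel predictions.

    prev_frame, cur_frame: grayscale uint8 frames of shape (H, W)
    prev_avg: running average of past predictions, float32 (H, W)
    cur_pred: raw prediction for the current frame, float32 (H, W)
    """
    # Backward flow (current -> previous): for each pixel of the current
    # frame, it tells us where to sample in the previous average.
    flow = cv2.calcOpticalFlowFarneback(cur_frame, prev_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_frame.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)

    # Align the history term with the geometry of the current frame.
    warped_avg = cv2.remap(prev_avg, map_x, map_y, cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_REPLICATE)

    # Ordinary EMA update, applied to the motion-corrected history.
    return alpha * warped_avg + (1.0 - alpha) * cur_pred
```

Since the flow only needs the raw frames, it can be computed in parallel with the network's forward pass, which is what keeps the runtime overhead negligible.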
Finding the optimal join order (JO) is one of the most important problems in query optimisation and has been considered extensively in research and practice. As it involves huge search spaces, approximation approaches and heuristics are commonly used, which explore a reduced solution space at the cost of solution quality. To explore even larger JO search spaces, we may consider special-purpose software, such as mixed-integer linear programming (MILP) solvers, which have successfully solved JO problems. However, even mature solvers cannot overcome the limitations of conventional hardware prompted by the end of Moore's law. We consider quantum-inspired digital annealing hardware, which takes inspiration from quantum processing units (QPUs). Unlike QPUs, which will likely remain limited in size and reliability in the near- and mid-term future, the digital annealer (DA) can solve large instances of mathematically encoded optimisation problems today. We derive a novel, native encoding for the JO problem tailored to this class of machines that substantially improves over known MILP and quantum-based encodings and reduces encoding size over the state of the art. By augmenting the computation with a novel readout method, we derive valid join orders for each solution obtained by the (probabilistically operating) DA. Most importantly, and despite an extremely large solution space, our approach scales to practically relevant dimensions of around 50 relations and improves result quality over conventionally employed approaches, adding a novel alternative for solving the long-standing JO problem.
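The paper's native DA encoding is more elaborate than can be reproduced here, but the general shape of such a mapping can be illustrated. The sketch below builds a generic permutation-style QUBO for a left-deep join order (a hedged illustration: the variable layout x[r, p] = "relation r at position p", the uniform penalty weight, and the pairwise cost model are our assumptions, not the paper's encoding):

```python
import itertools

def jo_permutation_qubo(cost, penalty=10.0):
    """Toy QUBO for ordering n relations in a left-deep join plan.

    cost[r][s]: illustrative cost of joining relation r with relation s.
    Returns {(var_a, var_b): weight}, the format most annealers consume.
    """
    n = len(cost)
    var = lambda r, p: r * n + p            # flatten (relation, position)
    Q = {}

    def add(a, b, w):
        key = (min(a, b), max(a, b))
        Q[key] = Q.get(key, 0.0) + w

    # One-hot constraints, expanded from penalty * (sum_x - 1)^2:
    # each relation occupies exactly one position ...
    for r in range(n):
        for p in range(n):
            add(var(r, p), var(r, p), -penalty)
            for q in range(p + 1, n):
                add(var(r, p), var(r, q), 2 * penalty)
    # ... and each position holds exactly one relation.
    for p in range(n):
        for r in range(n):
            add(var(r, p), var(r, p), -penalty)
            for s in range(r + 1, n):
                add(var(r, p), var(s, p), 2 * penalty)

    # Objective: cost between relations placed at consecutive positions.
    for p in range(n - 1):
        for r, s in itertools.permutations(range(n), 2):
            add(var(r, p), var(s, p + 1), cost[r][s])
    return Q
```

A readout step as described in the abstract would then repair or reject annealer samples that violate the one-hot constraints before translating them back into a join order.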
Inverse problems are inherently ill-posed and therefore require regularization techniques to achieve a stable solution. While traditional variational methods have well-established theoretical foundations, recent advances in machine-learning-based approaches have shown remarkable practical performance. However, the theoretical foundations of learning-based methods in the context of regularization are still underexplored. In this paper, we propose a general framework that addresses the current gap between learning-based methods and regularization strategies. In particular, our approach emphasizes the crucial role of data consistency in the solution of inverse problems and introduces the concept of data-proximal null-space networks as a key component for their solution. We provide a complete convergence analysis by extending the concept of regularizing null-space networks with data proximity in the visible part. We present numerical results for limited-view computed tomography to illustrate the validity of our framework.
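For orientation, a null-space network in the sense of regularizing networks can be written as follows (a sketch in our notation; the paper's data-proximal variant additionally constrains the learned component to stay close to the data):

```latex
% The learned correction N_theta acts only in ker(A), so the data
% consistency A R_theta(y) = A A^+ y of the initial reconstruction
% A^+ y is preserved by construction.
R_\theta(y) = A^{+} y + \bigl(\mathrm{Id} - A^{+}A\bigr)\, N_\theta\bigl(A^{+}y\bigr)
```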
Background / Objective
This contribution deals with the population's level of knowledge of and attitudes towards health data. It considers the transmission and availability of health data, health registries, the electronic patient record, consent procedures for the transmission of data, and access to health data for research purposes.
Methods
The study is based on a computer-assisted telephone survey (dual frame) of a random sample of the population in Germany, conducted from 1 to 27 June 2022 (n = 1,308).
Results
Knowledge about the transmission of health data to health insurers is high, whereas the existence of central death, vaccination, and health registries as well as the extent of access to health data by treating physicians are overestimated. Acceptance of medical registries is very high. The electronic patient record is unknown to half of the population, willingness to use it is rather low, an opt-in option is preferred for the transmission of data, and more than eighty percent would release the data in their electronic patient record for research. Three quarters would release their health data for research in general, especially to universities in Germany, with anonymity usually being a condition. Willingness to share data increases with the level of trust in the press and in universities and other higher-education institutions, and decreases when a data leak is considered severe.
Discussion and Conclusion
As in other European countries, there is a high willingness in Germany to release health data for research purposes. By contrast, the desire to use the electronic patient record is rather low. Equally low is the acceptance of an opt-out option, which, however, is considered a prerequisite for the successful introduction of an electronic patient record. Trust in research and in state bodies that process health data are central factors.
In this article, we address the challenge of solving the ill-posed reconstruction problem in computed tomography using a translation-invariant diagonal frame decomposition (TI-DFD). First, we review the concept of a TI-DFD for general linear operators and the corresponding filter-based regularization concept. We then introduce the TI-DFD for the Radon transform on L²(ℝ²) and provide an exemplary construction using the TI wavelet transform. The presented numerical results clearly demonstrate the benefits of our approach over non-translation-invariant counterparts.
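As background, a diagonal frame decomposition (DFD) of a linear operator A consists of two frame systems and quasi-singular values that diagonalize A; filter-based regularization then damps coefficients with small κ_λ. A hedged sketch in our notation (the translation-invariant construction of the paper differs in the underlying index set):

```latex
% DFD relation and filtered reconstruction (illustrative notation;
% \bar{u}_\lambda denotes a dual frame of (u_\lambda)):
A u_\lambda = \kappa_\lambda v_\lambda, \qquad
x_\alpha = \sum_{\lambda} \Phi_\alpha(\kappa_\lambda)\,
           \langle y, v_\lambda \rangle\, \bar{u}_\lambda,
\qquad \text{e.g. } \Phi_\alpha(\kappa) = \frac{\kappa}{\kappa^{2} + \alpha}.
```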
In a variety of tomographic applications, data cannot be fully acquired, leading to severely underdetermined image reconstruction. Conventional methods result in reconstructions with significant artifacts. In order to remove these artifacts, regularization methods have to be applied that incorporate additional information. An important example is TV reconstruction, which is well known to efficiently compensate for missing data and to effectively reduce reconstruction artifacts. At the same time, however, tomographic data is also contaminated by noise, which poses an additional challenge. The use of a single regularizer within a variational regularization framework must therefore account for both the missing data and the noise. However, a single regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over different scales, in which case ℓ1 curvelet regularization methods work well. To address this issue, in this paper we introduce a novel variational regularization framework that combines the advantages of two different regularizers. The basic idea of our framework is to perform reconstruction in two stages, where the first stage mainly aims at accurate reconstruction in the presence of noise, and the second stage aims at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet-TV approach. We define and implement a curvelet transform adapted to the limited-view problem and demonstrate the advantages of our approach in a series of numerical experiments in this context.
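In formulas, such a two-stage scheme can be sketched as follows (our notation and a hedged reading of the abstract; the exact functionals and coupling used in the paper may differ):

```latex
% Stage 1: noise-robust reconstruction via curvelet sparsity
% (\Psi is a curvelet transform adapted to the limited view):
x_1 \in \operatorname*{arg\,min}_{x} \;
    \tfrac{1}{2}\,\|Ax - y\|_2^{2} + \alpha\,\|\Psi x\|_1
% Stage 2: artifact reduction, tied to stage 1 by data proximity:
x_2 \in \operatorname*{arg\,min}_{x} \; \operatorname{TV}(x)
    \quad \text{s.t.} \quad \|Ax - Ax_1\|_2 \le \delta
```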
In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images. The determination of the position and type of the instruments is of particular interest here. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, taking into account both single-frame segmentation approaches and those involving temporal information. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the potential available for future developments. The publications considered were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in 408 articles published between 2015 and 2022, of which 109 were included using systematic selection criteria.
Finite state machines (FSMs) are an appealing mechanism for simple practical computations: they lend themselves to very efficient and deterministic implementation, are easy to understand, and allow many properties of interest to be proven formally. Unfortunately, their computational power is deemed insufficient for many tasks, and their usefulness has been further hampered by the state space explosion problem and other issues when naïvely trying to scale them to sizes large enough for many real-life applications.
This paper expounds on the theory and implementation of multiple coupled finite state machines (McFSMs), a novel mechanism that combines the benefits of FSMs with near Turing-complete, practical computing power, and that was designed from the ground up to support static analysis and reasoning. We develop an elaborate category-theoretical foundation based on non-deterministic Mealy machines, which gives a suitable algebraic description for novel ways of blending different computing models. Our experience is based on a domain-specific language and an integrated development environment that can compile McFSM models to multiple target languages, applying it to use cases based on industrial scenarios. We discuss properties and advantages of McFSMs and explain how the mechanism can interact with real-world systems and existing code without sacrificing provability, determinism, or performance.
We discuss how McFSMs can be used to replace and improve on commonly employed programming patterns, and show how their efficient handling of large state spaces enables them to be used as core building blocks for distributed, safety-critical, and real-time systems of industrial complexity, which contributes to the long-desired goal of providing executable specifications.
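To make the coupling idea concrete, here is a minimal Python sketch of event-coupled Mealy machines (all names, the queue-based delivery discipline, and the wiring format are illustrative assumptions; the actual McFSM semantics, DSL, and code generation are defined in the paper):

```python
from collections import deque

class MealyMachine:
    """Minimal Mealy machine: (state, input event) -> (next state, output events)."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions   # dict[(state, event)] = (state', [events])

    def step(self, event):
        self.state, outputs = self.transitions[(self.state, event)]
        return outputs

def run_coupled(machines, wiring, initial_events):
    """Deliver each machine's output events to the machines wired to it."""
    queue = deque(initial_events)        # (target machine name, event) pairs
    while queue:
        name, event = queue.popleft()
        for out in machines[name].step(event):
            for target in wiring.get(name, []):
                queue.append((target, out))

# Toy example: a controller that toggles a light machine.
controller = MealyMachine("idle", {("idle", "button"): ("idle", ["toggle"])})
light = MealyMachine("off", {("off", "toggle"): ("on", []),
                             ("on", "toggle"): ("off", [])})
machines = {"controller": controller, "light": light}
run_coupled(machines, {"controller": ["light"]}, [("controller", "button")])
print(light.state)   # -> "on"
```

Because every machine stays a plain FSM and the coupling between them is explicit, the composed system remains amenable to static analysis while the product state space is never materialized.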
We present a paradigm for characterization of artifacts in limited data tomography problems. In particular, we use this paradigm to characterize artifacts that are generated in reconstructions from limited angle data with generalized Radon transforms and general filtered backprojection type operators. In order to find when visible singularities are imaged, we calculate the symbol of our reconstruction operator as a pseudodifferential operator.
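As a hedged illustration in the classical (non-generalized) setting: for the Radon transform R restricted to an angular range by a smooth cutoff χ, a filtered backprojection type reconstruction operator takes, up to a constant, the form

```latex
% Limited-angle FBP-type operator (classical Radon case, illustrative):
L_\chi f = R^{*}\,\chi\,\Lambda\,R f
```

which is a pseudodifferential operator; a singularity (x, ξ) is reliably imaged ("visible") only where the symbol of L_χ does not vanish, i.e. where the line through x with normal direction ξ belongs to the acquired data. The paper carries out the analogous symbol calculation for generalized Radon transforms.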
This article provides a mathematical classification of artifacts from arbitrary incomplete X-ray tomography data when using the classical filtered backprojection algorithm. Using microlocal analysis, we prove that all artifacts arise from points at the boundary of the data set. Our results show that, depending on the geometry of the data set boundary, two types of artifacts can arise: object-dependent and object-independent artifacts. The object-dependent artifacts are generated by singularities of the object being scanned, and these artifacts can extend all along lines. This is a generalization of the streak artifacts observed in limited angle CT. The article also characterizes two new phenomena: the object-independent artifacts are caused only by the geometry of the data set boundary; they occur along lines if the boundary of the data set is not smooth and along curves if the boundary of the data set is smooth. In addition to the geometric description of artifacts, the article also provides characterizations of their strength in the Sobolev scale in certain cases. Moreover, numerical reconstructions from simulated and real data are presented illustrating our theorems. This work is motivated by a reconstruction we present from a synchrotron data set in which artifacts along lines appeared that were independent of the object. The results of this article apply to a wide range of well-known incomplete data problems, including limited angle CT and region of interest tomography, as well as to unconventional X-ray CT imaging setups. Some of those problems are explicitly addressed in this article, theoretically and numerically.