Refine
Year of publication
Document Type
- Article (59)
- Conference proceeding (article) (52)
- Conference proceeding (presentation, abstract) (29)
- Part of Periodical (5)
- Preprint (4)
- Book (1)
- Part of a Book (1)
Is part of the Bibliography
- no (151)
Keywords
- Imaging technique (18)
- Deep Learning (16)
- Diagnosis (13)
- Artificial intelligence (13)
- Artificial Intelligence (11)
- Brain (11)
- Machine learning (11)
- Magnetic resonance imaging (9)
- Three-dimensional image processing (8)
- Registration <image processing> (8)
Institute
- Fakultät Informatik und Mathematik (146)
- Regensburg Medical Image Computing (ReMIC) (145)
- Regensburg Center of Biomedical Engineering - RCBE (38)
- Regensburg Center of Health Sciences and Technology - RCHST (28)
- Hochschulleitung/Hochschulverwaltung (5)
- Institut für Angewandte Forschung und Wirtschaftskooperationen (IAFW) (4)
- Fakultät Angewandte Sozial- und Gesundheitswissenschaften (2)
- Labor Empirische Sozialforschung (2)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (2)
- Fakultät Maschinenbau (1)
Review status
- peer-reviewed (72)
- reviewed (1)
An active contour model is used to analyse lip movement sequences. The high speaking rate poses a problem: it results in large object displacements that so far cannot be compensated by contour adaptation alone. In this contribution, the classical active contour models are extended by a pre-adjustment of the coarse contours, which makes an energy-based contour adaptation possible in the first place. The displacement estimate for the pre-adjustment is based on the gradient image and on a set of rules formulated in predicate logic that encodes assumptions and constraints as a knowledge base. With these extensions, automated contour tracking of the lips becomes possible.
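A minimal Python sketch of this two-stage tracking idea follows; a simple phase-correlation shift between gradient images stands in here for the paper's predicate-logic rule base, and all function choices and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.filters import sobel
from skimage.registration import phase_cross_correlation
from skimage.segmentation import active_contour

def track_lips(prev_frame, frame, prev_contour):
    # Coarse pre-adjustment: estimate the global displacement between the
    # gradient images (stand-in for the rule-based estimate in the paper).
    shift, _, _ = phase_cross_correlation(sobel(frame), sobel(prev_frame))
    coarse = prev_contour + shift            # shift is (row, col)
    # Energy-based refinement with a classical active contour.
    return active_contour(frame, coarse, alpha=0.01, beta=0.1, gamma=0.01)
```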
Whole-body PET/CT imaging
(2008)
Aim
Combined whole-body (WB) PET/CT imaging provides better overall co-registration than separate CT and PET. However, local PET-CT mis-registration cannot be avoided in clinical routine. Thus, the reconstructed PET tracer distribution may be biased when the misaligned CT transmission data are used for CT-based attenuation correction (CT-AC). We investigate the feasibility of retrospective co-registration techniques to align CT and PET images prior to CT-AC, thus potentially improving the quality of combined PET/CT imaging in clinical routine.
Methods
First, using a commercial software registration package, CT images were aligned to the uncorrected PET data by rigid and non-rigid registration methods. The co-registration accuracy of both alignment approaches was assessed by reviewing the PET tracer uptake patterns (visually and with a linked cursor display) following attenuation correction based on the original and the co-registered CT. Second, we investigated non-rigid registration based on a prototype ITK implementation of the B-spline algorithm on a similarly targeted MR-CT registration task, where it showed promising results.
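As an illustration of what such a B-spline registration between the uncorrected PET and the CT could look like, here is a minimal SimpleITK sketch (SimpleITK wraps the ITK registration framework; the file names, control-grid mesh size and optimiser settings are assumptions, not the exact prototype used in the study):

```python
import SimpleITK as sitk

# Fixed: uncorrected PET; moving: CT (hypothetical file names).
fixed = sitk.ReadImage("pet_noac.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)

# Free-form deformation parameterised by a coarse B-spline control grid.
tx = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                         numberOfIterations=100)
reg.SetInitialTransform(tx, inPlace=True)

out_tx = reg.Execute(fixed, moving)
# Resample the CT into the PET frame before using it for CT-AC.
ct_aligned = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
```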
Results
Manual rigid, landmark-based co-registration introduced unacceptable misalignment, in particular in peripheral areas of the whole-body images. Manual, non-rigid landmark-based co-registration prior to CT-AC was successful, with minor loco-regional distortions. However, neither rigid nor non-rigid automatic co-registration based on the Mutual Information image-to-image metric succeeded in co-registering the CT and noAC-PET images. In contrast to the widely available commercial registration software, our implementation of an alternative automated, non-rigid B-spline co-registration technique yielded promising results in this setting with MR-CT data.
Conclusion
In clinical PET/CT imaging, retrospective registration of CT and uncorrected PET images may improve the quality of the AC-PET images. As of today, no validated and clinically viable commercial registration software is in routine use. This has triggered our efforts to pursue a validated, non-rigid co-registration algorithm applicable to whole-body PET/CT imaging, first results of which are presented here. This approach appears suitable for applications in retrospective WB-PET/CT alignment.
Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development, with clear trends towards integrated applications in diagnostics, treatment planning and treatment.
In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent limitations, for example bleeding and perforation, during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, procedure time and training.
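For reference, the two reported segmentation metrics follow directly from the overlap of a predicted and a ground-truth mask; a minimal NumPy sketch:

```python
import numpy as np

def iou_dice(pred, gt):
    """Intersection over Union and Dice score of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    iou = inter / np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    return iou, dice
```

For a single mask pair, Dice = 2·IoU/(1 + IoU), which is consistent with the reported 63%/76% pair up to averaging over images and folds.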
Introduction
Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard for the diagnosis and treatment of diseases of the pancreatobiliary tract. However, it is technically very demanding and has a comparatively high complication rate.
Aims
This feasibility study examines whether a deep-learning algorithm can reliably detect the papilla and the ostium, and whether it could thus serve as a suitable aid for endoscopists with little experience, particularly in training situations.
Methods
We considered a total of 606 image datasets from 65 patients, in which both the major duodenal papilla and the ostium were segmented. A neural network was then trained using a deep-learning algorithm, and 5-fold cross-validation was performed.
Results
In 5-fold cross-validation on the 606 labelled datasets, the papilla class reached an F1 score of 0.7908, a sensitivity of 0.7943 and a specificity of 0.9785; the ostium class reached an F1 score of 0.5538, a sensitivity of 0.5094 and a specificity of 0.9970 (see [Tab. 1]). Averaged over both classes, the F1 score was 0.6673, the sensitivity 0.6519 and the specificity 0.9877 (see [Tab. 2]).
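The reported per-class figures follow the usual confusion-matrix definitions; a minimal sketch (the counts tp, fp, tn, fn are per-class totals):

```python
def classification_metrics(tp, fp, tn, fn):
    """F1 score, sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return f1, sensitivity, specificity
```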
Conclusion
In this feasibility study, the neural network identified the major duodenal papilla with high sensitivity and very high specificity; for detection of the ostium, the sensitivity was markedly lower. In future work, the neural network will be trained with more data, and we also plan to apply the algorithm to videos. In the long term, this could establish a suitable assistance tool for ERCP.
Utility of Smartphone-based Three-dimensional Surface Imaging for Digital Facial Anthropometry
(2024)
Background
The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, there have been recent advancements in smartphone technology that enable 3D surface imaging.
In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system.
Methods
Thirty healthy volunteers (15 female and 15 male) were included in the study. An iPhone 14 Pro (Apple Inc., USA) running the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, the paired Wilcoxon signed-rank test, Bland–Altman analysis, and calculation of the intraclass correlation coefficient (ICC).
Results
All measurements showed excellent agreement between smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland–Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements.
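A short sketch of the Bland–Altman computation behind the quoted 95% limits of agreement (assuming paired arrays of the same measurements from the two methods):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```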
Conclusion
Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, the presented data suggest that digital facial anthropometry using smartphones may not yet be suitable for diagnostic purposes that require high accuracy.
Objectives: C-11-methionine (MET) is particularly useful in brain tumor diagnosis, but unspecific uptake, e.g. in cerebral ischemia, has been reported (1). The F-18-labeled amino acid O-(2-[F-18]fluoroethyl)-L-tyrosine (FET) shows clinical potential similar to that of MET in brain tumor diagnosis but is applicable on a wider clinical scale. The aim of this study was to evaluate the uptake of FET and H-3-MET in focal cortical ischemia in rats by dual-tracer autoradiography.
Methods: Focal cortical ischemia was induced in 12 Fisher CDF rats using the photothrombosis model (PT). One day (n=3), two days (n=5) and seven days (n=4) after induction of the lesion, FET and H-3-MET were injected intravenously. One hour after tracer injection the animals were killed, and the brains were removed immediately and frozen in 2-methylbutane at -50°C. The brains were cut into coronal sections (thickness: 20 µm) and exposed first to H-3-insensitive photoimager plates to measure the FET distribution. After decay of F-18, the distribution of H-3-MET was determined. The autoradiograms were evaluated using regions of interest (ROIs) placed on areas of increased tracer uptake in the PT and on the contralateral brain. Lesion-to-brain ratios (L/B) were calculated by dividing the mean uptake in the lesion by that in the brain. Based on previous studies in gliomas, an L/B ratio > 1.6 was considered pathological for FET.
Results: Variable increased uptake of both tracers was observed in the PT and its demarcation zone at all stages after PT. The cut-off level of 1.6 for FET was exceeded in 9/12 animals. One day after PT the L/B ratios were 2.0 ± 0.6 for FET vs. 2.1 ± 1.0 for MET (mean ± SD); two days after lesion 2.2 ± 0.7 for FET vs. 2.7 ± 1.0 for MET and 7 days after lesion 2.4 ± 0.4 for FET vs. 2.4 ± 0.1 for MET. In single cases discrepancies in the uptake pattern of FET and MET were observed.
Conclusions: FET, like MET, may exhibit significant uptake in infarcted areas or their immediate vicinity, which has to be considered in the differential diagnosis of unknown brain lesions. The discrepancies in the uptake patterns of FET and MET in some cases indicate either differences in the transport mechanisms of the two amino acids or a different affinity for certain cellular components.
To support diagnosis in the assessment of laryngeal diseases, a colour and shape analysis of the vocal folds is to be performed. This contribution presents a method for separating the specular and diffuse reflection components in colour images of the larynx. The colour of the diffuse component corresponds to the illumination-independent object colour, while its weighting factors serve as input to shape-from-shading methods for surface reconstruction.
Polarised light imaging (PLI) utilises the birefringence of the myelin sheaths to visualise the orientation of nerve fibres in microtome sections of adult human post-mortem brains at ultra-high spatial resolution. The preparation of post-mortem brains for PLI involves fixation, freezing and cutting into 100-μm-thick sections. Hence, geometrical distortions of the histological sections are inevitable and have to be removed for 3D reconstruction and subsequent fibre tracking. Here we present a processing pipeline for 3D reconstruction of these sections using PLI-derived multimodal images of post-mortem brains. Blockface images of the brains were obtained during cutting; they serve as reference data for alignment and elimination of distortion artefacts. In addition to the spatial image transformation, fibre orientation vectors were reoriented using the transformation fields, which consider both affine and subsequent non-linear registration. The application of this registration and reorientation approach results in a smooth fibre vector field that reflects the brain morphology. PLI combined with 3D reconstruction and fibre tracking is a powerful tool for human brain mapping. It can also serve as an independent method for evaluating in vivo fibre tractography.
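Conceptually, the reorientation step applies the local linear part (Jacobian) of the combined affine and non-linear transform to each fibre-orientation vector and renormalises; a schematic NumPy version, assuming the Jacobian field is supplied by the registration, could look like this:

```python
import numpy as np

def reorient_vectors(vecs, jac):
    """Reorient unit fibre-orientation vectors with the local Jacobian.

    vecs: (..., 3) fibre-orientation vectors; jac: (..., 3, 3) Jacobians
    of the spatial transform evaluated at the same voxels.
    """
    out = np.einsum('...ij,...j->...i', jac, vecs)
    return out / (np.linalg.norm(out, axis=-1, keepdims=True) + 1e-12)
```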
Background:
Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed but revealed persistent downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients based on the House-Brackmann scale (HBS).
Methods:
Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm.
Results:
The algorithm showed an accuracy of 100%. Oversampling did not result in altered outcomes, while the direct form displayed superior accuracy levels when compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy outcomes in comparison to the Late Fusion and sequential method (n = 86; 100% vs. 96% vs. 97%).
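The abstract does not detail the fusion variants; read broadly, early fusion stacks the nine facial poses into the input channels of a single network, while late fusion classifies each pose separately and merges the predictions. The toy PyTorch sketch below illustrates this distinction only and is not the authors' architecture:

```python
import torch
import torch.nn as nn

n_classes = 6  # House-Brackmann grades I-VI

# Early fusion: all nine poses enter one network as input channels.
early_net = nn.Sequential(
    nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

# Late fusion: a single-pose network applied per pose, predictions averaged.
pose_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

def late_fusion(poses):                      # poses: (B, 9, H, W)
    logits = [pose_net(poses[:, i:i + 1]) for i in range(9)]
    return torch.stack(logits).mean(dim=0)
```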
Conclusions:
Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon’s workflow.
Time-Dependent Joint Probability Speed Function for Level-Set Segmentation of Rat-Brain Slices
(2008)
The segmentation of rat brain slices suffers from illumination inhomogeneities and staining effects. State-of-the-art level-set methods model slice and background with intensity mixture densities, defining the speed function as the difference between the respective probabilities. However, the overlap of these distributions causes inaccurate stopping at the slice border. In this work, we propose characterising the border area with intensity pairs for inside and outside, estimating joint intensity probabilities.
Method
In contrast to global object and background models, we focus on the object border, characterised by a joint mixture density. This specifies the probability of the occurrence of an inside and an outside value in direct adjacency. These values are not known beforehand, because inside and outside depend on the level-set evolution and change over time. Therefore, the speed function is computed time-dependently at the position of the current zero level-set. Along this zero level-set curve, the inside and outside values are derived as means along the curvature normal directed towards the inside and outside of the object. The advantage of the joint probability distribution is that it resolves the distribution overlaps, because these are assumed not to be located at the same border position.
Results
The novel time-dependent joint-probability-based speed function is compared experimentally with single-probability-based speed functions. Two rat brains with about 40 slices are segmented and the results analysed using manual segmentations and the Tanimoto overlap measure. Improved results are observed for both data sets.
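A schematic NumPy version of the proposed speed term follows; a pre-estimated joint histogram of (inside, outside) intensity pairs stands in for the joint mixture density, and the sampling offsets along the normal are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def joint_border_probability(phi, image, joint_hist, bin_edges, h=3.0):
    """Joint probability of the (inside, outside) intensity pair sampled
    along the outward normal of the current zero level-set (phi > 0 outside).
    The evolution speed can then be chosen to vanish where this probability,
    i.e. the evidence for the slice border, is high."""
    gy, gx = np.gradient(phi)
    norm = np.hypot(gx, gy) + 1e-8
    ny, nx = gy / norm, gx / norm                      # outward normal
    yy, xx = np.mgrid[0:phi.shape[0], 0:phi.shape[1]].astype(float)
    inside = map_coordinates(image, [yy - h * ny, xx - h * nx], order=1)
    outside = map_coordinates(image, [yy + h * ny, xx + h * nx], order=1)
    bi = np.clip(np.digitize(inside, bin_edges) - 1, 0, joint_hist.shape[0] - 1)
    bo = np.clip(np.digitize(outside, bin_edges) - 1, 0, joint_hist.shape[1] - 1)
    return joint_hist[bi, bo]
```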
The Impact of Semi-Automated Segmentation and 3D Analysis on Testing New Osteosynthesis Material
(2017)
A new protocol for postoperative testing of osteosynthesis material, combining semi-automated segmentation and 3D analysis of surface meshes, is proposed. Through various steps of transformation and measuring, objective data can be collected. In this study, the specifications of a locking plate used for mediocarpal arthrodesis of the wrist were examined. The results show that union of the lunate, triquetrum, hamate and capitate was achieved and that the plate is comparable to existing arthrodesis systems. Additionally, it was shown that the complications detected correlate with the clinical outcome. In summary, this protocol is considered beneficial and should be taken into account in further studies.
Local gray-level dependencies of natural images can be modelled by means of co-occurrence matrices containing the joint probabilities of gray-level pairs. Texture, however, is a resolution-dependent phenomenon, and hence classification depends on the chosen scale. Since there is no optimal scale for all textures, we employ a multiscale approach that acquires textural features at several scales, as sketched below. Linear and nonlinear scale-spaces are thus analyzed by multiscale co-occurrence matrices that describe the statistical behavior of a texture in scale-space. Classification is then performed on the basis of texture features taken from the individual scale with the highest discriminatory power. By considering cross-scale occurrences of gray-level pairs, the impact of filters on the features is described and used for classification of natural textures. This novel method was found to significantly improve the classification rates of the common co-occurrence matrix approach on standard textures.
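A compact sketch of the multiscale idea, using a Gaussian (linear) scale-space and scikit-image's grey-level co-occurrence matrices (the scales, distances and Haralick features chosen here are illustrative, not the paper's exact configuration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops

def multiscale_glcm_features(img, sigmas=(0, 1, 2, 4), levels=32):
    """One row of co-occurrence features per scale of a Gaussian scale-space."""
    feats = []
    for sigma in sigmas:
        smoothed = gaussian_filter(img.astype(float), sigma)
        # Quantise to a small number of grey levels before counting pairs.
        q = np.round(smoothed / smoothed.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p).mean()
                      for p in ("contrast", "homogeneity",
                                "energy", "correlation")])
    return np.array(feats)
```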
The success of artificial intelligence in medicine rests on the availability of large amounts of high-quality training data. The sharing of medical image data, however, is often restricted by laws such as doctor-patient confidentiality. Although there are publicly available medical datasets, their quality and quantity are often low. Moreover, datasets are often imbalanced, represent only a fraction of the images generated in hospitals or clinics and can thus usually only be used as training data for specific problems. The introduction of generative adversarial networks (GANs) provides a means to generate artificial images by training two convolutional networks against each other. This paper proposes a method which uses GANs trained on medical images to generate large numbers of artificial images that could be used to train other artificial intelligence algorithms. This work is a first step towards alleviating data privacy concerns and being able to publicly share data that still contains a substantial amount of the information in the original private data. The method has been evaluated on several public datasets, with quantitative and qualitative tests showing promising results.
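For concreteness, the adversarial training of the two networks alternates a discriminator step and a generator step; a minimal PyTorch sketch of one such update (G and D are any generator/discriminator modules; shapes and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(G, D, real, opt_g, opt_d, z_dim=100):
    """One adversarial update: D learns real-vs-fake, G learns to fool D."""
    b = real.size(0)
    z = torch.randn(b, z_dim, device=real.device)
    fake = G(z)

    # Discriminator step: real images -> 1, generated images -> 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(b, 1, device=real.device))
              + bce(D(fake.detach()), torch.zeros(b, 1, device=real.device)))
    loss_d.backward()
    opt_d.step()

    # Generator step: make D classify generated images as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(b, 1, device=real.device))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```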
Objective: Artificial intelligence (AI) may reduce underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, due to subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty on its clinical value.
Design: We searched PubMed, Embase and Scopus until July 2020 for studies on the diagnostic performance of AI in the detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with their respective areas under the curve (AUCs), and performed metaregression and sensitivity analysis.
Results: Overall, 19 studies on the detection of oesophageal squamous cell neoplasia (ESCN), Barrett's oesophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included, with 218, 445 and 453 patients and 7976, 2340 and 13 562 images, respectively. AI sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, the AUCs being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98) and 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with a high risk of selection bias. No significant publication bias was found.
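As a consistency check, the pooled likelihood ratios follow directly from the pooled sensitivity and specificity:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# With the pooled values above: 0.90 / (1 - 0.89) ≈ 8.2 and
# (1 - 0.90) / 0.89 ≈ 0.11, matching the reported 8.2 and 0.111.
print(likelihood_ratios(0.90, 0.89))
```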
Conclusion: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
Heavy smoke development represents an important challenge for operating physicians during laparoscopic procedures and can potentially affect the success of an intervention due to reduced visibility and orientation. Reliable and accurate recognition of smoke is therefore a prerequisite for the use of downstream systems such as automated smoke evacuation systems. Current approaches distinguish between smoke-free and smoke-affected frames but often ignore the temporal context inherent in endoscopic video data. In this work, we therefore present a method that utilises the pixel-wise displacement from randomly sampled images to the preceding frames, determined using an optical flow algorithm, by providing the transformed magnitude of the displacement as an additional input to the network. Further, we incorporate the temporal context at evaluation time by applying an exponential moving average to the estimated class probabilities of the model output to obtain more stable and robust results over time. We evaluate our method on two convolutional architectures and one state-of-the-art transformer architecture and show improvements in the classification results over a baseline approach, regardless of the network used.
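The abstract leaves the optical flow method unspecified; as a hedged sketch, the displacement magnitude could be computed with OpenCV's Farnebäck algorithm, and the temporal smoothing applied as an exponential moving average over per-frame class probabilities (parameter values are illustrative):

```python
import cv2
import numpy as np

def flow_magnitude(prev_gray, gray):
    """Pixel-wise displacement magnitude between consecutive grey frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.hypot(flow[..., 0], flow[..., 1])

def smooth_probabilities(probs, alpha=0.3):
    """Exponential moving average over a sequence of class-probability rows."""
    out, state = [], probs[0]
    for p in probs:
        state = alpha * p + (1 - alpha) * state
        out.append(state)
    return np.stack(out)
```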