Forschungsbericht 2015
(2015)
Forschungsbericht 2016
(2016)
Forschungsbericht 2012
(2012)
Forschung 2019
(2019)
Forschungsbericht 2017
(2017)
In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent limitations during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy, which can lead to complications such as bleeding and perforation. A DeepLabv3-based model was trained to delineate vessels, tissue structures, and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice score were 63% and 76%, respectively. Applied to standardized video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, procedure time, and training.
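The reported Intersection over Union and Dice score are standard overlap metrics for binary segmentation masks. A minimal sketch of how they are typically computed (illustrative only, not the study's evaluation code):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray):
    """Overlap metrics for two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = intersection / union if union else 1.0
    # Dice = 2|A ∩ B| / (|A| + |B|), equivalently 2*IoU / (1 + IoU)
    dice = 2 * intersection / (pred.sum() + target.sum()) if union else 1.0
    return iou, dice
```

Per image, Dice = 2·IoU/(1 + IoU), so an IoU of 63% corresponds to a Dice of roughly 77%, consistent with the reported 76% (the identity holds exactly per image, only approximately for the means).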
GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens
(2020)
PREMISE:
The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens.
METHODS AND RESULTS:
We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images.
CONCLUSIONS:
We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features.
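Conceptually, such a pipeline detects bounding boxes for the target structures and crops them out of the specimen image. A minimal sketch of this detect-and-crop step, using torchvision's pretrained Faster R-CNN as a stand-in detector (GinJinn's actual API, model configuration, and training on the 183 annotated specimens differ):

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Stand-in detector; in practice the model would be fine-tuned on annotated specimens
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = convert_image_dtype(read_image("specimen.jpg"), torch.float)  # hypothetical path
with torch.no_grad():
    detections = model([image])[0]

# Keep confident detections and crop each one out of the specimen image
crops = []
for box, score in zip(detections["boxes"], detections["scores"]):
    if score < 0.8:
        continue
    x0, y0, x1, y1 = box.int().tolist()
    crops.append(image[:, y0:y1, x0:x1])  # one extracted structure, e.g. a leaf
```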
Background
This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett’s esophagus (BE).
Methods
96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett’s esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level.
Results
AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95%CI 65.2%–74.2%] to 78.0% [95%CI 74.0%–82.0%]; specificity 67.3% [95%CI 62.5%–72.2%] to 72.7% [95%CI 68.2%–77.3%]). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI.
Conclusion
BE nonexperts benefitted significantly from additional AI. BE experts and nonexperts remained significantly below the stand-alone performance of AI, suggesting that there may be other factors influencing endoscopists’ decisions to follow or discard AI advice.
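The reader-level metrics reported above follow directly from each reader's binary decisions against the ground truth. A minimal sketch (illustrative; the study's exact statistical methodology, e.g. its choice of confidence interval, is not reproduced here):

```python
import math

def reader_metrics(decisions, truth):
    """Sensitivity, specificity, accuracy from paired binary labels (True = neoplasia)."""
    tp = sum(d and t for d, t in zip(decisions, truth))
    tn = sum(not d and not t for d, t in zip(decisions, truth))
    fp = sum(d and not t for d, t in zip(decisions, truth))
    fn = sum(not d and t for d, t in zip(decisions, truth))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(decisions)
    return sensitivity, specificity, accuracy

def wald_ci_95(p, n):
    """Normal-approximation 95% CI for a proportion (one common choice)."""
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```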
Artificial intelligence increases endoscopists' vessel detection in third-space endoscopy
(2024)
Introduction: Artificial intelligence (AI) algorithms support endoscopists in the detection and characterization of colon polyps in clinical practice and increase the adenoma detection rate. In therapeutic procedures such as endoscopic submucosal dissection (ESD), AI can also detect relevant anatomical structures with high accuracy and mark them in the endoscopic image in real time. The effect of such an application on endoscopists' vessel detection has not yet been studied.
Aims:
This study examined the effect of an AI algorithm for real-time vessel marking during ESD on endoscopists' vessel detection rate.
Methods:
59 third-space endoscopy videos were extracted from the database of the University Hospital Augsburg. Submucosal blood vessels were annotated on 5470 still images from these procedures. Together with a further 179681 unannotated images, a DeepLabV3+ neural network was trained with a semi-supervised learning method to detect submucosal blood vessels in the endoscopic image and mark them in real time. In a video test comprising 101 video clips with 200 predefined blood vessels, 19 endoscopists were evaluated with and without AI support.
Results:
In the video test, the algorithm detected 93.5% of the vessels with a median detection time of 0.3 seconds. With AI support, the endoscopists' vessel detection rate increased from 56.4% to 72.4% (p<0.001), and their vessel detection time decreased from 6.7 to 5.2 seconds (p<0.001). The algorithm produced false-positive detections in 4.5% of the still images. Falsely detected structures were marked for a shorter time than true positives (0.7 vs. 6.0 seconds, p<0.001).
Conclusion:
AI support led to a higher vessel detection rate and a shorter vessel detection time for endoscopists. A possible clinical effect on intraprocedural complication rates or procedure time could be determined in prospective studies.
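The semi-supervised setup described above can be realized, for example, with pseudo-labeling: a teacher model trained on the annotated frames labels confident pixels of the unannotated frames, which then extend the training set. A minimal sketch using torchvision's DeepLabV3 (not the DeepLabV3+ variant used in the study; the authors' exact training procedure is not specified in the abstract):

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

teacher = deeplabv3_resnet50(num_classes=2).eval()  # assumed trained on annotated frames
student = deeplabv3_resnet50(num_classes=2)

@torch.no_grad()
def pseudo_label(images: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Label confident pixels as vessel/background; exclude the rest from the loss."""
    probs = teacher(images)["out"].softmax(dim=1)
    confidence, labels = probs.max(dim=1)
    labels[confidence < threshold] = -1  # ignored by the loss below
    return labels

loss_fn = torch.nn.CrossEntropyLoss(ignore_index=-1)
unlabeled_batch = torch.rand(2, 3, 256, 256)  # stand-in for unannotated frames
loss = loss_fn(student(unlabeled_batch)["out"], pseudo_label(unlabeled_batch))
loss.backward()
```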
Aims
Artificial intelligence (AI) systems in gastrointestinal endoscopy are narrow because they are trained to solve only one specific task. Unlike narrow AI, general AI systems may be able to solve multiple, unrelated tasks. We aimed to understand whether an AI system trained to detect, characterize, and segment early Barrett's neoplasia (Barrett's AI) is only capable of detecting this pathology or whether it can also detect and segment other diseases, such as early squamous cell cancer (SCC).
Methods
120 white-light (WL) and narrow-band imaging (NBI) endoscopic images from 60 patients (1 WL and 1 NBI image per patient) were extracted from the endoscopic database of the University Hospital Augsburg. Images were annotated by three expert endoscopists with extensive experience in the diagnosis and endoscopic resection of early esophageal neoplasias. An AI system based on the DeepLabV3+ architecture dedicated to early Barrett's neoplasia was tested on these images. The AI system was neither trained with SCC images nor had it seen the test images prior to evaluation. The overlap between the three expert annotations ("expert agreement") was the ground truth for evaluating AI performance.
Results
Barrett's AI detected early SCC with a mean intersection over reference (IoR) of 92% when at least 1 pixel of the AI prediction overlapped with the expert agreement. When the threshold was increased to 5%, 10%, and 20% overlap with the expert agreement, the IoR was 88%, 85%, and 82%, respectively. The mean Intersection over Union (IoU), a metric quantifying segmentation agreement between the AI prediction and the expert agreement, was 0.45. The mean expert IoU, a measure of agreement among the three experts, was 0.60.
Conclusions
In this pilot study, the SCC predictions of an AI dedicated to Barrett's neoplasia showed some overlap with the expert agreement. Features learned from Barrett's cancer-related training might therefore also be helpful for SCC prediction. Our results allow for different explanations: on the one hand, some Barrett's cancer features may generalize to the related task of assessing early SCC; on the other hand, the Barrett's AI may be less a Barrett's-specific detector than a general predictor of pathological tissue. However, we expect that extending the training to SCC-specific data would significantly enhance detection quality. The insights of this study open the way toward a transfer learning approach for more efficient training of AI to solve tasks in other domains.
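The intersection-over-reference criterion used above is straightforward to state as code. A sketch, assuming binary masks for the AI prediction and for the pixel-wise intersection of the three expert annotations (illustrative only; the reported values are means of this quantity over the test images at the stated minimum-overlap thresholds):

```python
import numpy as np

def expert_agreement(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Ground truth: pixels annotated by all three experts."""
    return a.astype(bool) & b.astype(bool) & c.astype(bool)

def intersection_over_reference(pred: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of the reference (expert-agreement) mask covered by the prediction."""
    pred, reference = pred.astype(bool), reference.astype(bool)
    return np.logical_and(pred, reference).sum() / max(reference.sum(), 1)
```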