Barrett-Ampel
(2022)
Background
Adenocarcinomas of the esophagus are still associated with a dismal prognosis (1). Although endoscopists are regularly confronted with Barrett's esophagus as a precancerous condition, differentiating between non-dysplastic Barrett's esophagus and associated neoplasia can be difficult, especially for non-experts. Existing biopsy protocols (e.g., the Seattle protocol) are often unreliable (2). Early diagnosis of adenocarcinoma, however, is of fundamental importance for the patient's prognosis.
Research approach
Against this background, we developed an artificial intelligence (AI)-based clinical decision support system (CDSS) in cooperation with the research laboratory "Regensburg Medical Image Computing (ReMIC)" at OTH Regensburg. The CDSS, which is based on a DeepLabv3+ neural network architecture, uses pattern recognition to differentiate non-dysplastic Barrett's esophagus from Barrett's esophagus with dysplasia or neoplasia ("classification"). Averaged output probabilities are compared with a user-defined threshold. For predictions exceeding the threshold, we compute the contour and the area of the region. As soon as the predicted lesion exceeds a certain size in the input, the lesion and its outline are highlighted. A color-coded visualization thus allows dysplasia or neoplasia to be delineated from normal Barrett's epithelium ("segmentation").
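The following is a minimal sketch, assuming NumPy and OpenCV, of the post-processing step described above (thresholding the averaged probability map, extracting region contours, checking the region area, and highlighting sufficiently large lesions). The threshold and minimum-size values are placeholders, not the values used in the actual CDSS.

```python
import cv2
import numpy as np

def highlight_lesions(frame_bgr, prob_map, threshold=0.5, min_area_px=500):
    """Threshold the averaged probability map, keep large regions, and overlay them."""
    mask = (prob_map >= threshold).astype(np.uint8)            # user-defined threshold
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    keep = np.zeros_like(mask)
    for cnt in contours:
        if cv2.contourArea(cnt) >= min_area_px:                # lesion large enough?
            cv2.drawContours(keep, [cnt], -1, 1, thickness=cv2.FILLED)
    overlay = frame_bgr.copy()
    red = np.zeros_like(frame_bgr)
    red[..., 2] = 255
    blended = cv2.addWeighted(overlay, 0.7, red, 0.3, 0)       # color-coded region
    overlay = np.ascontiguousarray(
        np.where(keep[..., None].astype(bool), blended, overlay))
    kept_contours, _ = cv2.findContours(keep, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(overlay, kept_contours, -1, (0, 0, 255), thickness=2)  # outline
    return overlay

# Usage (shapes only): prob_map is the DeepLabv3+ dysplasia/neoplasia-class
# probability map with the same height and width as the endoscopic frame:
# out = highlight_lesions(frame_bgr, prob_map)
```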
In a study on white-light (WL) and narrow-band imaging (NBI) images, we demonstrated a sensitivity of more than 90% and a specificity of more than 80% (3). In a next step, our AI algorithm differentiated Barrett's metaplasia from associated neoplasia on randomly captured images in real time with an accuracy of 89.9% (4). Subsequently, we extended the system so that the algorithm is now also able to analyze examination videos in WL, NBI, and texture and color enhancement imaging (TXI) in real time (5).
We are currently conducting a study with a randomized controlled design on unaltered examination videos in WL, NBI, and TXI.
Outlook
To enable patients with neoplasia arising from Barrett's metaplasia to be referred to high-volume centers as early as possible, our AI algorithm is intended above all to support endoscopists without extensive experience in assessing Barrett's esophagus in early cancer detection.
Introduction
Excessive motion in the image can reduce the performance of artificial intelligence (AI)-based clinical decision support systems (CDSS). Optical flow (OF) is a method for localizing and quantifying motion between consecutive images.
Aim
The aim is to improve human-computer interaction (HCI) and to provide endoscopists who use our AI system "Barrett-Ampel" to support the assessment of Barrett's esophagus (BE) with real-time feedback on the current data quality.
Methods
For this purpose, unaltered videos in white light (WL), narrow-band imaging (NBI), and texture and color enhancement imaging (TXI) from eight endoscopic examinations of histologically confirmed BE and Barrett's esophagus-associated neoplasia (BERN) were analyzed by our AI algorithm. The OF measures used to assess image quality comprised the mean magnitude and the entropy of the histogram of angles. Frames were extracted automatically whenever the predefined thresholds of 3.0 for the mean magnitude and 9.0 for the entropy of the histogram of angles were exceeded. Experts first watched the videos without AI support and rated whether confounding factors negatively affected the confidence with which a diagnosis could be made in the given case. They then reviewed the extracted frames.
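A minimal sketch, assuming OpenCV and NumPy, of the two frame-quality measures named above (mean optical-flow magnitude and entropy of the histogram of flow angles). The thresholds 3.0 and 9.0 are taken from the abstract; the Farneback flow parameters, the histogram binning, and the entropy base are assumptions, so the 9.0 threshold presupposes the study's own (unspecified) histogram settings.

```python
import cv2
import numpy as np

# Thresholds from the abstract; binning and entropy base below are assumptions.
MAG_THRESHOLD, ENTROPY_THRESHOLD = 3.0, 9.0

def flow_quality(prev_gray, curr_gray, n_bins=360):
    """Return mean optical-flow magnitude and entropy of the flow-angle histogram."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])    # magnitude, angle (rad)
    mean_mag = float(mag.mean())
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 2.0 * np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return mean_mag, entropy

def frame_is_unreliable(prev_gray, curr_gray):
    """Flag a frame for extraction/review when either quality threshold is exceeded."""
    mean_mag, entropy = flow_quality(prev_gray, curr_gray)
    return mean_mag > MAG_THRESHOLD or entropy > ENTROPY_THRESHOLD
```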
Results
Uniform motion in one direction, such as during advancement of the endoscope, was reflected in an increased magnitude with essentially unchanged entropy. Chaotic motion, for example during irrigation, was associated with increased entropy. Overall, an unsteady endoscopic view, fluid, and excessive esophageal motility were associated with increased OF and correlated with the experts' opinion of the video quality. The OF and the experts' subjective perception of the usability of the image sequences were directly proportional. When the predefined OF thresholds were exceeded, the associated image quality was insufficient for a definitive interpretation in 94% of cases, even for experts.
Conclusion
OF has the potential to provide endoscopists with real-time feedback on the quality of the data input, thereby not only improving HCI but also enabling AI algorithms to perform optimally.
Aims
AI has shown great potential in assisting endoscopists in diagnostics; however, its role in therapeutic endoscopy remains unclear. Endoscopic submucosal dissection (ESD) is a technically demanding intervention with a slow learning curve and relevant risks such as bleeding and perforation. We therefore aimed to develop an algorithm for the real-time detection and delineation of relevant structures during third-space endoscopy.
Methods
A total of 5,470 still images from 59 full-length videos (47 ESD, 12 POEM) were annotated, and 179,681 additional unlabeled images were added to the training dataset. A DeepLabv3+ neural network architecture was then trained with the ECMT semi-supervised algorithm (under review elsewhere). Vessel detection was evaluated on a dataset of 101 standardized video clips from 15 separate third-space endoscopy videos with 200 predefined blood vessels.
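Since the ECMT algorithm itself is under review and not described here, the following is only a generic, illustrative mean-teacher-style semi-supervised update (assuming PyTorch) showing how labeled and unlabeled frames can both contribute to training a segmentation network; it is not the ECMT method.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights follow an exponential moving average of the student weights."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def semi_supervised_step(student, teacher, optimizer,
                         labeled_imgs, labels, unlabeled_imgs, lam=0.5):
    """One training step combining a supervised loss and an unlabeled consistency loss."""
    optimizer.zero_grad()
    sup_loss = F.cross_entropy(student(labeled_imgs), labels)          # annotated frames
    with torch.no_grad():
        teacher_prob = torch.softmax(teacher(unlabeled_imgs), dim=1)   # pseudo-targets
    student_logprob = torch.log_softmax(student(unlabeled_imgs), dim=1)
    cons_loss = F.kl_div(student_logprob, teacher_prob, reduction="batchmean")
    loss = sup_loss + lam * cons_loss
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                                       # update teacher
    return loss.item()

# The teacher starts as a frozen deep copy of the (e.g. DeepLabv3+) student network
# and is never updated by gradient descent, only through ema_update().
```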
Results
Internal validation yielded an overall mean Dice score of 85% (68% for blood vessels, 86% for submucosal layer, 88% for muscle layer). On the video test data, the overall vessel detection rate (VDR) was 94% (96% for ESD, 74% for POEM). The median overall vessel detection time (VDT) was 0.32 sec (0.3 sec for ESD, 0.62 sec for POEM).
Conclusions
Evaluation of the developed algorithm on a video test dataset showed a high VDR and a short VDT, especially for ESD. Further research will focus on a possible clinical benefit of the AI application with respect to VDR and VDT during third-space endoscopy.
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved before this success can be transferred into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when they support medical diagnosis. For this task, the black-box nature of deep learning techniques must be opened up to some extent to clarify their promising results. Hence, we investigate the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. To this end, and aiming at a two-step learning technique, the output of each convolutional layer that composes the ResNet-50 architecture was trained and classified in order to identify the layers with the greatest impact in the architecture. We show that local information and high-dimensional features are essential to improve the classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
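As an illustration of reading out per-layer activations, the following sketch (assuming PyTorch and a recent torchvision) taps the output of each ResNet-50 residual stage so that each stage's features could be classified separately; the weights, node names, and pooling step are illustrative assumptions, not the paper's exact pipeline.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

backbone = resnet50(weights="IMAGENET1K_V1")
# Tap the output of each residual stage; deeper taps carry higher-level features.
extractor = create_feature_extractor(
    backbone,
    return_nodes={"layer1": "c2", "layer2": "c3", "layer3": "c4", "layer4": "c5"})

x = torch.randn(1, 3, 224, 224)          # one dummy endoscopic image
features = extractor(x)
for name, feat in features.items():
    # Global average pooling turns each activation map into a feature vector
    # that a per-layer classifier (e.g. a linear head or SVM) could consume.
    vec = feat.mean(dim=(2, 3))
    print(name, tuple(feat.shape), "->", tuple(vec.shape))
```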
Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma
(2019)
Computer-aided diagnosis using deep learning (CAD-DL) may be an instrument to improve endoscopic assessment of Barrett's oesophagus (BE) and early oesophageal adenocarcinoma (EAC). Based on still images from two databases, the diagnosis of EAC by CAD-DL reached sensitivities/specificities of 97%/88% (Augsburg data) and 92%/100% (Medical Image Computing and Computer-Assisted Intervention [MICCAI] data) for white light (WL) images and 94%/80% for narrow band images (NBI) (Augsburg data), respectively. Tumour margins delineated by experts into images were detected satisfactorily with a Dice coefficient (D) of 0.72. This could be a first step towards CAD-DL for BE assessment. If developed further, it could become a useful adjunctive tool for patient management.
Barrett's esophagus denotes a disorder of the digestive system that affects the mucosal cells of the esophagus, causing reflux and potentially progressing to esophageal adenocarcinoma if not treated in its initial stages. Fast and reliable computer-aided diagnosis is therefore highly desirable. Nevertheless, such approaches usually suffer from imbalanced datasets, which can be addressed with Generative Adversarial Networks (GANs). These techniques generate realistic images based on observed samples, albeit at the cost of a proper selection of their hyperparameters. Many works have employed a class of nature-inspired algorithms called metaheuristics to tackle this problem with distinct deep learning approaches. The main contribution of this paper is therefore to introduce metaheuristic techniques to fine-tune GANs in the context of Barrett's esophagus identification, and to investigate the feasibility of generating high-quality synthetic images for assisted early-cancer identification.
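A schematic sketch of metaheuristic-style hyperparameter tuning for a GAN: a minimal (1+1) evolution-strategy loop over a few hyperparameters. The objective train_gan_and_score is a hypothetical placeholder (for instance, a short GAN training run scored by FID); the specific metaheuristics evaluated in the paper are not reproduced here.

```python
import random

def train_gan_and_score(lr, beta1, latent_dim):
    """Stand-in objective: replace with a short GAN training run scored e.g. by FID
    (lower is better). Returning a random value keeps the sketch runnable."""
    return random.random()

def evolve(generations=20, seed=0):
    random.seed(seed)
    best = {"lr": 2e-4, "beta1": 0.5, "latent_dim": 100}       # common GAN defaults
    best_score = train_gan_and_score(**best)
    for _ in range(generations):
        candidate = {
            "lr": best["lr"] * random.uniform(0.5, 2.0),
            "beta1": min(0.999, max(0.0, best["beta1"] + random.gauss(0.0, 0.05))),
            "latent_dim": max(16, best["latent_dim"] + random.choice([-32, 0, 32])),
        }
        score = train_gan_and_score(**candidate)
        if score < best_score:                                 # keep only improvements
            best, best_score = candidate, score
    return best, best_score

print(evolve())
```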
Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their level of accountability and transparency must be established in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially when diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be opened up so that their promising results can be transferred into clinical practice. Hence, we investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early cancerous tissues in patients diagnosed with Barrett's esophagus. Four convolutional neural network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We show that saliency attributions match the experts' manual delineations best. Moreover, there is a moderate to high correlation between a model's sensitivity and the human-computer agreement: the higher the model's sensitivity, the stronger the agreement between human and computational segmentation. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
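A minimal sketch, assuming PyTorch and the Captum library, of computing the five attribution maps listed above for a single image and model; the tiny stand-in classifier and the target class index are illustrative only, and the comparison against expert delineations is left out.

```python
import torch
import torch.nn as nn
from captum.attr import (Saliency, GuidedBackprop, IntegratedGradients,
                         InputXGradient, DeepLift)

# Tiny stand-in 2-class classifier to keep the sketch self-contained; in practice
# this would be the trained AlexNet/SqueezeNet/ResNet50/VGG16 model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 2),
).eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)
target = 1                                   # hypothetical "neoplasia" class index

methods = {
    "saliency": Saliency(model),
    "guided_backprop": GuidedBackprop(model),
    "integrated_gradients": IntegratedGradients(model),
    "input_x_gradient": InputXGradient(model),
    "deeplift": DeepLift(model),
}
for name, method in methods.items():
    attr = method.attribute(image, target=target)
    # Collapse channels into one per-pixel relevance map that can be thresholded
    # and compared (e.g. via Dice) against the expert delineations.
    heatmap = attr.abs().sum(dim=1)
    print(name, tuple(heatmap.shape))
```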
Aims:
The delineation of the outer margins of early Barrett's cancer can be challenging even for experienced endoscopists. Artificial intelligence (AI) could assist endoscopists faced with this task. To date, there is very limited experience in this domain. In this study, we measure the overlap (Dice coefficient = D) between highly experienced Barrett endoscopists and an AI system in the delineation of cancer margins (segmentation task).
Methods:
An AI system with a deep convolutional neural network (CNN) was trained and tested on high-definition endoscopic images of early Barrett's cancer (n = 33) and normal Barrett's mucosa (n = 41). The reference standard for the segmentation task was the manual delineation of tumor margins by three highly experienced Barrett endoscopists. Training of the AI system included patch generation, patch augmentation, and adjustment of the CNN weights. The segmentation results were then obtained by patch classification and thresholding of the class probabilities, and were evaluated using the Dice coefficient (D).
Results:
The Dice coefficient (D), which can range between 0 (no overlap) and 1 (complete overlap), was computed only for images correctly classified as cancerous by the AI system. At a threshold of t = 0.5, a mean value of D = 0.72 was obtained.
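For illustration, a minimal NumPy sketch of how a Dice coefficient can be computed from a predicted probability map thresholded at t = 0.5 and an expert mask; the study's exact evaluation pipeline is not reproduced here.

```python
import numpy as np

def dice_coefficient(prob_map, expert_mask, t=0.5):
    """D = 2|A∩B| / (|A|+|B|), ranging from 0 (no overlap) to 1 (complete overlap)."""
    pred = prob_map >= t                       # threshold the class probabilities
    gt = expert_mask.astype(bool)              # expert delineation of the tumor
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example with a 4x4 probability map and an expert mask:
prob = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.7, 0.6, 0.2, 0.1],
                 [0.1, 0.2, 0.1, 0.0],
                 [0.0, 0.1, 0.0, 0.0]])
mask = np.zeros((4, 4))
mask[:2, :3] = 1
print(round(dice_coefficient(prob, mask), 2))  # -> 0.8 (partial overlap)
```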
Conclusions:
AI with a CNN performed reasonably well in segmenting the tumor region in Barrett's cancer, at least when compared with expert Barrett's endoscopists. AI holds a lot of promise as a tool for better visualization of tumor margins but may need further improvement, especially in real-time settings.
The growing number of publications on the application of artificial intelligence (AI) in medicine underlines the enormous importance and potential of this emerging field of research.
In gastrointestinal endoscopy, AI has been applied to all segments of the gastrointestinal tract, most importantly in the detection and characterization of colorectal polyps. However, AI research has also been published for both neoplastic and non-neoplastic disorders of the stomach and esophagus.
The various technical and medical aspects of AI, however, remain confusing, especially for non-expert physicians.
This physician-engineer co-authored review explains the basic technical aspects of AI and provides a comprehensive overview of recent publications on AI in gastrointestinal endoscopy. Finally, a basic insight is offered into understanding publications on AI in gastrointestinal endoscopy.
Based on previous work by our group on the manual annotation of visible Barrett's oesophagus (BE) cancer images, a real-time deep learning artificial intelligence (AI) system was developed. While an expert endoscopist conducts the endoscopic assessment of BE, our AI system captures random images from the real-time camera livestream and provides a global prediction (classification) as well as a dense prediction (segmentation), accurately differentiating between normal BE and early oesophageal adenocarcinoma (EAC). The AI system showed an accuracy of 89.9% on 14 cases with neoplastic BE.