ARTIFICIAL INTELLIGENCE (AI)-ASSISTED VESSEL AND TISSUE RECOGNITION IN THIRD-SPACE ENDOSCOPY
(2022)
Aims
Third-space endoscopy procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex interventions with elevated risk of operator-dependent adverse events, such as intra-procedural bleeding and perforation. We aimed to design an artificial intelligence clinical decision support solution (AI-CDSS, “Smart ESD”) for the detection and delineation of vessels, tissue structures, and instruments during third-space endoscopy procedures.
Methods
Twelve full-length third-space endoscopy videos were extracted from the Augsburg University Hospital database. 1686 frames were annotated for the following categories: submucosal layer, blood vessels, electrosurgical knife, and endoscopic instrument. A DeepLabv3+ neural network with a 101-layer ResNet backbone was trained and validated internally. Finally, the ability of the AI system to detect visible vessels during ESD and POEM was determined on 24 separate video clips, 7 to 46 seconds in duration, showing 33 predefined vessels. These video clips were also assessed by an expert in third-space endoscopy.
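Not the authors' code, but as an orientation: a DeepLabv3+ model with a ResNet-101 encoder of the kind described can be instantiated with the segmentation_models_pytorch package; the five-class output (background plus the four annotated categories) and all other settings here are assumptions.

```python
# Sketch only: a DeepLabv3+ model with a ResNet-101 encoder, roughly matching the
# described setup. The five-class output (background + submucosa, vessel, knife,
# instrument) and all hyperparameters are assumptions, not the authors' code.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(
    encoder_name="resnet101",    # 101-layer ResNet backbone
    encoder_weights="imagenet",  # assumed transfer-learning initialisation
    in_channels=3,               # RGB endoscopy frames
    classes=5,                   # background + 4 annotated categories
)

# One batch of frames: B x 3 x H x W in, B x 5 x H x W class logits out.
frames = torch.randn(2, 3, 512, 512)
logits = model(frames)
print(logits.shape)  # expected: torch.Size([2, 5, 512, 512])
```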
Results
Smart ESD showed a vessel detection rate (VDR) of 93.94%, while an average of 1.87 false positive signals were recorded per minute. VDR of the expert endoscopist was 90.1% with no false positive findings. On the internal validation data set using still images, the AI system demonstrated an Intersection over Union (IoU), mean Dice score and pixel accuracy of 63.47%, 76.18% and 86.61%, respectively.
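For reference, the three reported still-image metrics can be computed per class from binary prediction and ground-truth masks along these lines (a minimal sketch, not the study's evaluation code):

```python
# Minimal sketch of IoU, Dice score, and pixel accuracy for one class,
# given binary numpy masks of identical shape.
import numpy as np

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

def pixel_accuracy(pred, gt):
    return (pred == gt).mean()

pred = np.random.rand(512, 512) > 0.5  # placeholder predicted mask
gt = np.random.rand(512, 512) > 0.5    # placeholder ground-truth mask
print(iou(pred, gt), dice(pred, gt), pixel_accuracy(pred, gt))
```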
Conclusions
This is the first AI-CDSS aiming to mitigate operator-dependent limitations during third-space endoscopy. Further clinical trials are underway to better understand the role of AI in such procedures.
Utility of Smartphone-based Three-dimensional Surface Imaging for Digital Facial Anthropometry
(2024)
Background
The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, there have been recent advancements in smartphone technology that enable 3D surface imaging.
In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system.
Methods
Thirty healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) using the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, paired Wilcoxon signed-rank test, Bland–Altman analysis, and calculation of the intraclass correlation coefficient (ICC).
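A minimal sketch of this comparison for a single linear measurement, using paired tests and Bland–Altman limits of agreement on synthetic values; the ICC is omitted because the abstract does not specify which ICC model was used, and all numbers below are placeholders.

```python
# Sketch: comparing one anthropometric distance measured by two devices on the
# same 30 volunteers. Data are synthetic placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vectra = rng.normal(60.0, 5.0, size=30)          # reference device (mm)
iphone = vectra + rng.normal(0.5, 1.5, size=30)  # smartphone measurement (mm)

t_stat, t_p = stats.ttest_rel(iphone, vectra)    # paired t-test
w_stat, w_p = stats.wilcoxon(iphone, vectra)     # paired Wilcoxon signed-rank test

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = iphone - vectra
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} mm, 95% LoA = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```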
Results
All measurements showed excellent agreement between smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland–Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements.
Conclusion
Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, the proposed data suggest that digital facial anthropometry using smartphones may not yet be suitable for certain diagnostic purposes that require high accuracy.
Aims
Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel.
Methods
Patients with small intestinal histology compatible with CD (MARSH classification I-III) were retrospectively extracted from the database of Augsburg University Hospital. They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, MARSH III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white-light images were divided into five equal-sized subsets, without splitting the images of one patient across subsets. A ResNet-50 model was trained with the images from four subsets and then validated on the remaining subset. This process was repeated so that each subset was validated once. Sensitivity, specificity, and harmonic mean (F1) of the algorithm were determined.
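A sketch of the described patient-level split, using scikit-learn's GroupKFold so that all images of one patient remain in the same subset; the file names, labels, and patient IDs below are placeholders.

```python
# Sketch: five-fold cross-validation in which images of one patient never end
# up in both the training and validation subsets. All data are placeholders.
from sklearn.model_selection import GroupKFold

images = [f"img_{i}.png" for i in range(100)]   # endoscopic white-light images
labels = [i % 2 for i in range(100)]            # 0 = normal mucosa, 1 = MARSH III
patients = [i // 5 for i in range(100)]         # patient ID per image

splitter = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(splitter.split(images, labels, groups=patients)):
    train_patients = {patients[i] for i in train_idx}
    val_patients = {patients[i] for i in val_idx}
    assert train_patients.isdisjoint(val_patients)  # no patient in both subsets
    # ... train a ResNet-50 on train_idx, validate on val_idx ...
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation images")
```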
Results
The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference.
Conclusions
We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external dataset, as well as in the detection of CD in real time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD and may ultimately be able to perform a true optical biopsy in real time.
The early diagnosis of cancer in Barrett’s esophagus is crucial for improving the prognosis. However, identifying Barrett’s esophagus-related neoplasia (BERN) is challenging, even for experts [1]. Four-quadrant biopsies may improve the detection of neoplasia, but they can be associated with sampling errors. The application of artificial intelligence (AI) to the assessment of Barrett’s esophagus could improve the diagnosis of BERN, and this has been demonstrated in both preclinical and clinical studies [2] [3].
In this video demonstration, we show the accurate detection and delineation of BERN in two patients ([Video 1]). In part 1, the AI system detects a mucosal cancer about 20 mm in size and accurately delineates the lesion in both white-light and narrow-band imaging. In part 2, a small island of BERN with high-grade dysplasia is detected and delineated in white-light, narrow-band, and texture and color enhancement imaging. The video shows the results as a transparent overlay of the mucosal cancer in real time as well as a full segmentation preview. Additionally, the optical flow allows for the assessment of endoscope movement, which is inversely related to the reliability of the AI prediction. We demonstrate that multimodal imaging can be applied to the AI-assisted detection and segmentation of even small focal lesions in real time.
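The text does not state how the optical flow is computed; purely as an illustration, dense Farnebäck flow from OpenCV yields a per-frame motion magnitude that could serve as such an endoscope-movement measure (an assumption, not the system's actual implementation):

```python
# Sketch: a crude endoscope-motion score from dense optical flow between two
# consecutive grayscale frames. Synthetic frames stand in for real endoscopy video.
import cv2
import numpy as np

prev = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # frame t-1 (placeholder)
curr = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # frame t   (placeholder)

# Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)

motion_score = float(magnitude.mean())  # high score -> fast camera movement,
print(motion_score)                     # lower expected reliability of the AI prediction
```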
We propose an automatic approach for the early detection of adenocarcinoma in the esophagus. High-definition endoscopic images (50 cancer, 50 Barrett) are partitioned into a dataset containing approximately equal numbers of patches showing cancerous and non-cancerous regions. A deep convolutional neural network is adapted to the data using a transfer learning approach. The final classification of an image is determined by at least one patch for which the probability of being a cancer patch exceeds a given threshold. The model was evaluated with leave-one-patient-out cross-validation. With a sensitivity and specificity of 0.94 and 0.88, respectively, our findings considerably improve recently published results on the same image database. Furthermore, the visualization of the class probabilities of each individual patch indicates that our approach might be extensible to the segmentation domain.
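The image-level decision rule described above, cancer if at least one patch exceeds a probability threshold, can be sketched as follows; the patch probabilities and the threshold value are placeholders.

```python
# Sketch of the patch-level aggregation rule: an image is called "cancer" if
# any of its patches exceeds a probability threshold. Values are placeholders.
import numpy as np

def classify_image(patch_probs, threshold=0.5):
    """patch_probs: cancer probabilities of all patches of one image."""
    return bool(np.max(patch_probs) >= threshold)

patch_probs = np.array([0.12, 0.08, 0.91, 0.30])  # e.g. CNN output per patch
print(classify_image(patch_probs))                # True -> image flagged as cancer
```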
Training data for neural networks is often scarce in the medical domain, which often results in models that struggle to generalize and consequently show poor performance on unseen datasets. Generally, adding augmentation methods to the training pipeline considerably enhances a model's performance. Using the dataset of the Foot Ulcer Segmentation Challenge, we analyze two additional augmentation methods in the domain of chronic foot wounds: local warping of wound edges, along with projection and blurring of shapes inside wounds. Our experiments show that improvements in the Dice similarity coefficient and Normalized Surface Distance metrics depend on a sensible selection of these augmentation methods.
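The paper's custom augmentations (local warping of wound edges, projection and blurring of shapes inside wounds) are not reproduced here; as a generic illustration of adding joint image/mask augmentations to a segmentation training pipeline, an albumentations composition might look like this:

```python
# Generic augmentation sketch for image + mask pairs (not the paper's custom
# wound-edge warping / shape-projection augmentations).
import albumentations as A
import numpy as np

transform = A.Compose([
    A.ElasticTransform(alpha=30, sigma=5, p=0.5),  # mild local warping
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),      # local blurring
])

image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)  # placeholder image
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)        # placeholder wound mask

augmented = transform(image=image, mask=mask)  # identical spatial transform for both
aug_image, aug_mask = augmented["image"], augmented["mask"]
print(aug_image.shape, aug_mask.shape)
```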
Celiac disease is an autoimmune disorder caused by gluten that results in an inflammatory response of the small intestine. We investigated whether celiac disease can be detected on endoscopic images using a deep learning approach. In this work, we distinguished between healthy tissue and Marsh III according to the Marsh score system. We first trained a baseline network to classify endoscopic images of the small bowel into these two classes and then augmented the approach with a multimodality component that took the antibody status into account. The results show that additional clinical parameters can improve the classification accuracy.
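A minimal sketch of such a multimodality component, with image features from a CNN backbone concatenated with a scalar antibody status before the final classifier; the architecture details are assumptions, not the authors' exact model.

```python
# Sketch: fusing a CNN image embedding with a scalar clinical parameter
# (antibody status) for a two-class decision. Architecture details are assumed.
import torch
import torch.nn as nn
import torchvision

class MultimodalCeliacNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Identity()               # expose 2048-dim image features
        self.backbone = backbone
        self.classifier = nn.Linear(2048 + 1, 2)  # image features + antibody scalar

    def forward(self, image, antibody_status):
        feats = self.backbone(image)                         # B x 2048
        fused = torch.cat([feats, antibody_status], dim=1)   # B x 2049
        return self.classifier(fused)                        # B x 2 class logits

model = MultimodalCeliacNet()
logits = model(torch.randn(2, 3, 224, 224), torch.tensor([[1.0], [0.0]]))
print(logits.shape)  # torch.Size([2, 2])
```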
Introduction
The reliable detection and characterization of Barrett's esophagus-related neoplasia (BERN) is challenging even for experienced endoscopists.
Aim
The aim of this study was to evaluate the add-on effect of an artificial intelligence (AI) system ("Barrett-Ampel") as a decision support system for endoscopists without expertise in the assessment of BERN.
Materials and methods
Twelve videos in white light (WL), narrow-band imaging (NBI), and texture and color enhancement imaging (TXI) of histologically confirmed Barrett's metaplasia or BERN were evaluated by experts and by examiners without Barrett's expertise. The participants were asked to identify any BERN appearing in the videos and, where applicable, to mark the optimal biopsy site. Our AI system was subjected to the same test, segmenting BERN in real time and differentiating it by color from the surrounding epithelium. The participants were then shown the videos with additional AI support and, based on this new information, were asked to re-evaluate their initial assessment.
Results
The "Barrett-Ampel" identified all BERN irrespective of the imaging mode used (WL, NBI, TXI). Two inflammatory lesions were misinterpreted (accuracy = 75%). While experts achieved comparable results (accuracy = 70.8%), endoscopists without expertise in the assessment of Barrett's metaplasia reached an accuracy of only 58.3%. When supported by our AI system, however, the non-experts achieved an accuracy of 75%.
Conclusion
Our AI system has the potential to act as a decision support system for differentiating between Barrett's metaplasia and BERN, thereby assisting endoscopists without the corresponding expertise. A limitation of this study is the small number of included videos. Randomized controlled clinical trials are needed to confirm these results.
Introduction
Third-space interventions such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are technically demanding and associated with an increased risk of intra-procedural complications such as bleeding or perforation. Modern computer programs supporting diagnostic decisions, driven by artificial intelligence (AI), are already being used successfully in endoscopy. The aim of the present work was to detect and segment relevant anatomical structures using a deep learning algorithm in order to increase the safety and applicability of ESD and POEM.
Methods
Twelve full-length video recordings of third-space endoscopies were extracted from the database of Augsburg University Hospital. 1686 individual frames were annotated and segmented for the categories submucosa, blood vessel, dissection knife, and endoscopic instrument. With this dataset, a DeepLabv3+ neural network based on a 101-layer ResNet was trained and validated internally using intersection over union (IoU), Dice score, and pixel accuracy. The algorithm's ability to detect vessels was evaluated on 24 video clips, 7 to 46 seconds in duration, containing 33 predefined vessels. The vessel detection rate of an expert in third-space endoscopy was determined on the same test.
Results
The algorithm showed a vessel detection rate of 93.94%, with a mean rate of 1.87 false positive signals per minute. The expert's vessel detection rate was 90.1%, with no false positive findings. In the internal validation on individual frames, an IoU of 63.47%, a mean Dice score of 76.18%, and a pixel accuracy of 86.61% were determined.
Conclusion
This is the first AI algorithm developed for use in therapeutic endoscopy. Preliminary results indicate a vessel detection performance during the procedure comparable to that of experts. Further investigations are needed to assess the algorithm's performance relative to experts in more detail and to determine its potential clinical benefit.
Introduction
We present a clinical case showing the real-time detection, characterization and delineation of an early Barrett’s cancer using AI.
Patients and methods
A 70-year-old patient with long-segment Barrett's esophagus (C5M7) was assessed with an AI algorithm.
Results
The AI system detected a 10 mm focal lesion, and AI characterization predicted cancer with a probability of >90%. After ESD resection, histopathology showed mucosal adenocarcinoma (T1a (m), R0), confirming the AI diagnosis.
Conclusion
We demonstrate the real-time AI detection, characterization and delineation of a small and early mucosal Barrett’s cancer.