Background and aims: The accurate differentiation between T1a and T1b Barrett’s cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett’s cancer on white-light endoscopic images.
Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett’s cancer.
Results: The sensitivity, specificity, F1 score and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, whose sensitivity, specificity, F1 score and accuracy were 0.63, 0.78, 0.67 and 0.70, respectively.
Conclusion: This pilot study demonstrates the first multicentre application of an AI-based system for predicting submucosal invasion in endoscopic images of Barrett’s cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett’s cancer remains challenging for both experts and AI.
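For reference, the four reported figures all derive from a single binary confusion matrix. The following minimal sketch shows the standard computation; the labels below are randomly generated placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def t_stage_metrics(y_true, y_pred):
    """Sensitivity, specificity, F1 and accuracy for a binary
    T1a-vs-T1b decision, treating T1b (submucosal invasion) as positive."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)           # recall on T1b lesions
    specificity = tn / (tn + fp)           # recall on T1a lesions
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, f1, accuracy

# Hypothetical labels (0 = T1a, 1 = T1b) purely to exercise the function.
rng = np.random.default_rng(0)
print(t_stage_metrics(rng.integers(0, 2, 230), rng.integers(0, 2, 230)))
```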
Limitations in computer-assisted diagnosis include a lack of labeled data and the inability to model the relation between what experts see and what computers learn. Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved before this success can be transferred into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when supporting medical diagnosis. While deep learning techniques are broad, so that unseen information might help learn patterns of interest, human insights describing the objects of interest aid decision-making. This paper proposes a novel approach, DeepCraftFuse, to address the challenge of combining information provided by deep networks with visual-based features to significantly enhance the correct identification of cancerous tissue in patients affected by Barrett’s esophagus (BE). We demonstrate that DeepCraftFuse outperforms state-of-the-art techniques on private and public datasets, reaching accuracies of around 95% when distinguishing BE patients who are positive or negative for esophageal cancer.
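The fusion idea at the heart of DeepCraftFuse can be illustrated with a short sketch: deep features from a CNN backbone are concatenated with a vector of handcrafted visual features before a shared classification head. The backbone, feature dimensions and head below are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionClassifier(nn.Module):
    """Toy fusion model: concatenate deep CNN features with a vector of
    handcrafted (e.g. color/texture) features and classify the pair."""
    def __init__(self, n_handcrafted: int, n_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)   # stand-in backbone, not the paper's
        backbone.fc = nn.Identity()         # expose the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512 + n_handcrafted, n_classes)

    def forward(self, image, handcrafted):
        deep = self.backbone(image)                    # (B, 512)
        fused = torch.cat([deep, handcrafted], dim=1)  # (B, 512 + n_handcrafted)
        return self.head(fused)

model = FusionClassifier(n_handcrafted=32)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```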
This work presents a systematic review of recent studies and machine learning technologies for Barrett's esophagus (BE) diagnosis and treatment. The use of artificial intelligence is a new and promising way to evaluate the disease. We compile works published in well-established databases, such as ScienceDirect, IEEE Xplore, PubMed, PLOS ONE, Multidisciplinary Digital Publishing Institute (MDPI), Association for Computing Machinery (ACM), Springer, and Hindawi Publishing Corporation. Each selected work is analyzed to present its objective, methodology, and results. The progression of BE to dysplasia or adenocarcinoma shows a complex pattern that is difficult to detect during endoscopic surveillance. Therefore, it is valuable to assist its diagnosis and automatic identification using computer analysis. The evaluation of BE dysplasia can be performed through manual or automated segmentation using machine learning techniques. Finally, in this survey, we review recent studies focused on the automatic detection of the neoplastic region for classification purposes using machine learning methods.
The number of patients with Barrett’s esophagus (BE) has increased in recent decades. Considering the dangerousness of the disease and its evolution to adenocarcinoma, an early diagnosis of BE may provide a high probability of cancer remission. However, limitations of traditional methods for the detection and management of BE demand alternative solutions. Computer-aided tools have recently been used to assist with this problem, but the challenge persists. To address it, we introduce infinity Restricted Boltzmann Machines (iRBMs) for the automatic identification of Barrett’s esophagus from endoscopic images of the lower esophagus. Moreover, since the iRBM requires a proper selection of its meta-parameters, we also present a discriminative iRBM fine-tuning using six meta-heuristic optimization techniques. We show that iRBMs are suitable for this context, providing competitive results, and that the meta-heuristic techniques are appropriate for the task.
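Since iRBMs are not part of standard libraries, the following sketch only approximates the idea, using scikit-learn's BernoulliRBM as a stand-in for the RBM and random search as the simplest possible meta-heuristic; the data, parameter ranges and search budget are all hypothetical.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix / labels standing in for endoscopic descriptors.
rng = np.random.default_rng(42)
X = rng.random((200, 64))
y = rng.integers(0, 2, size=200)

def evaluate(params):
    """Score one meta-parameter candidate by cross-validated accuracy."""
    pipe = Pipeline([
        ("rbm", BernoulliRBM(n_components=int(params["n_components"]),
                             learning_rate=params["learning_rate"],
                             n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    return cross_val_score(pipe, X, y, cv=3).mean()

# Random search as the simplest meta-heuristic: sample candidates, keep the best.
best = max(
    ({"n_components": rng.integers(16, 129),
      "learning_rate": 10 ** rng.uniform(-3, -1)} for _ in range(10)),
    key=evaluate,
)
print("best meta-parameters:", best)
```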
The development of adenocarcinoma in Barrett’s esophagus is difficult to detect by endoscopic surveillance of patients with signs of dysplasia. Computer-assisted diagnosis of endoscopic images (CAD) could therefore be most helpful in the demarcation and classification of neoplastic lesions. In this study we tested the feasibility of a CAD method based on Speeded Up Robust Features (SURF). A given database containing 100 images from 39 patients served as the benchmark for feature-based classification models. Half of the images had previously been diagnosed by five clinical experts as “cancerous”, the other half as “non-cancerous”. Cancerous image regions had been visibly delineated (masked) by the clinicians. SURF features acquired from full images as well as from masked areas were used for the supervised training and testing of an SVM classifier. The predictive accuracy of the developed CAD system is illustrated by sensitivity and specificity values. The results based on full-image matching were 0.78 (sensitivity) and 0.82 (specificity), while the masked-region approach yielded 0.90 and 0.95, respectively.
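A minimal sketch of such a SURF-based pipeline might look as follows. It assumes an opencv-contrib build with the non-free modules enabled, and it mean-pools the variable number of descriptors per image into one fixed-length vector, a simplification standing in for whatever aggregation the study actually used.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# SURF lives in opencv-contrib (cv2.xfeatures2d) and is patent-encumbered;
# a build with OPENCV_ENABLE_NONFREE is required.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def image_descriptor(gray_image, mask=None):
    """Mean-pool SURF descriptors into one fixed-length vector per image.
    Restricting to the clinician's mask mirrors the masked-region setup."""
    _, desc = surf.detectAndCompute(gray_image, mask)
    if desc is None:                       # no keypoints found
        return np.zeros(64, dtype=np.float32)
    return desc.mean(axis=0)

# Hypothetical usage with grayscale images and expert masks:
# X = np.stack([image_descriptor(img, msk) for img, msk in zip(images, masks)])
# clf = SVC(kernel="rbf").fit(X, labels)
```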
Pixel-level classification is an essential part of computer vision. Many powerful deep learning models for learning from labeled data have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network. This secondary correction network learns on the labeled data to optimally spot correct predictions, as well as to amend incorrect ones. As an auxiliary regularization term, the corrector directly influences the supervised training of the segmentation network. On unlabeled data, the output of the correction network is essential to create a proxy for the unknown truth: the corrector’s output is combined with the segmentation network’s prediction to form the new target. We propose a loss function that incorporates both the pseudo-labels and the predictive certainty of the correction network. Our approach can easily be added to supervised segmentation models. We show consistent improvements over a supervised baseline in experiments on both the Pascal VOC 2012 and Cityscapes datasets with varying amounts of labeled data.
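The unlabeled-data step can be sketched as below, under the assumption of a simple averaging fusion rule and a fixed certainty threshold; the paper's exact loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def unlabeled_target_loss(seg_logits, corr_logits, threshold=0.8):
    """Sketch of the unlabeled-data step: fuse the segmentation network's
    prediction with the correction network's output into a pseudo-label,
    weighting pixels by the corrector's certainty. The fusion rule and
    threshold are illustrative, not the published formulation."""
    seg_prob = torch.softmax(seg_logits, dim=1)     # (B, C, H, W)
    corr_prob = torch.softmax(corr_logits, dim=1)
    fused = 0.5 * (seg_prob + corr_prob)            # proxy for the unknown truth
    pseudo = fused.argmax(dim=1)                    # (B, H, W) hard pseudo-labels
    certainty = corr_prob.max(dim=1).values         # corrector confidence per pixel
    weight = (certainty > threshold).float()        # train only on confident pixels
    loss = F.cross_entropy(seg_logits, pseudo, reduction="none")
    return (weight * loss).sum() / weight.sum().clamp(min=1.0)

# seg = segmentation_net(unlabeled_batch); corr = correction_net(unlabeled_batch)
# loss_u = unlabeled_target_loss(seg, corr)
```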
In this work, we propose the use of single-channel Color Co-occurrence Matrices for texture description of Barrett’s Esophagus (BE) and adenocarcinoma images. Further classification using supervised learning techniques, such as the Optimum-Path Forest (OPF), Support Vector Machines with a Radial Basis Function kernel (SVM-RBF) and a Bayesian classifier, supports the context of automatic BE and adenocarcinoma diagnosis. We validated three approaches to classification, based on patches, patients and images, on two datasets (MICCAI 2015 and Augsburg) using the color-and-texture descriptors and the machine learning techniques. On the MICCAI 2015 dataset, the best results were obtained using the blue channel for the descriptors and the supervised OPF for classification in the patch-based approach, with sensitivity of nearly 73% for positive adenocarcinoma identification and specificity close to 77% for BE (non-cancerous) patch classification. On the Augsburg dataset, the most accurate results were also obtained using both the OPF classifier and the blue-channel descriptor for feature extraction, with sensitivity close to 67% and specificity around 76%. Our work highlights new advances in the related research area and provides a promising technique that combines color and texture information, allied to three different approaches to dataset pre-processing, aiming to configure robust scenarios for the classification step.
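A single-channel co-occurrence descriptor of this kind can be sketched with scikit-image; the distances, angles and Haralick properties chosen below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def blue_channel_glcm_features(rgb_image):
    """Texture descriptor from a single-channel co-occurrence matrix,
    computed on the blue channel as in the best-performing setup."""
    blue = rgb_image[:, :, 2]                       # blue channel, uint8
    glcm = graycomatrix(blue, distances=[1],
                        angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical image just to exercise the function.
img = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
print(blue_channel_glcm_features(img).shape)  # (8,) = 4 properties x 2 angles
```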
Computer-assisted analysis of endoscopic images can be helpful for the automatic diagnosis and classification of neoplastic lesions. Barrett's esophagus (BE) is a common complication of reflux disease that is not straightforward to detect by endoscopic surveillance, and it is thus highly susceptible to erroneous diagnosis, which can lead to cancer when not treated properly. In this work, we introduce the Optimum-Path Forest (OPF) classifier to the task of automatic identification of Barrett's esophagus, with promising results that outperform the well-known Support Vector Machines (SVM) in this context. We describe endoscopic images by means of feature extractors based on keypoint information, such as Speeded Up Robust Features (SURF) and the Scale-Invariant Feature Transform (SIFT), and design a bag-of-visual-words representation that feeds both the OPF and SVM classifiers. The best results were obtained with the OPF classifier for both feature extractors, with values of 0.732 (SURF) and 0.735 (SIFT) for sensitivity, 0.782 (SURF) and 0.806 (SIFT) for specificity, and 0.738 (SURF) and 0.732 (SIFT) for accuracy.
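The bag-of-visual-words step can be sketched as follows: k-means clusters the local descriptors into a vocabulary, and each image becomes a histogram of word assignments. The vocabulary size and helper names are assumptions; the resulting histograms can feed an SVM directly, while OPF would require a dedicated package, since it is not in scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, n_words=100):
    """Cluster all local descriptors (e.g. SIFT/SURF) into a visual vocabulary."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(descriptors, vocabulary):
    """Histogram of visual-word assignments: the fixed-length image feature."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical usage, e.g. with SIFT descriptors per image:
# descriptor_sets = [sift.detectAndCompute(img, None)[1] for img in images]
# vocab = build_vocabulary(descriptor_sets)
# X = np.stack([bovw_histogram(d, vocab) for d in descriptor_sets])
```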
Barrett's esophagus has seen a swift rise in the number of cases in the past years. Although traditional diagnosis methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis have emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of data available for machine learning purposes, implying reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification. The proposed approach is validated on two datasets of endoscopic images, with experiments conducted on both full and patch-split images. Applying Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from those obtained on the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed those on the original datasets, as well as other recent approaches from the literature. These results suggest promising insights into the importance of proper data for accurate classification in computer-assisted Barrett's esophagus and adenocarcinoma detection.
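The augmentation stage can be illustrated with a standard DCGAN generator; the layer recipe below follows the original DCGAN formulation at 64x64 resolution and is not the authors' exact network. After adversarial training, samples drawn from the generator are appended to the real training set before the CNN classifiers are trained.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator producing 64x64 RGB patches."""
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),          # 4x4
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(True),          # 8x8
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(True),          # 16x16
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(True),              # 32x32
            nn.ConvTranspose2d(base, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                        # 64x64 RGB
        )

    def forward(self, z):
        return self.net(z)

# After adversarial training, synthetic patches augment the real training set:
g = DCGANGenerator()
fake_patches = g(torch.randn(16, 100, 1, 1))   # (16, 3, 64, 64)
```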