TY - JOUR
A1 - Souza Jr., Luis Antonio de
A1 - Passos, Leandro A.
A1 - Santana, Marcos Cleison S.
A1 - Mendel, Robert
A1 - Rauber, David
A1 - Ebigbo, Alanna
A1 - Probst, Andreas
A1 - Messmann, Helmut
A1 - Papa, João Paulo
A1 - Palm, Christoph
T1 - Layer-selective deep representation to improve esophageal cancer classification
JF - Medical & Biological Engineering & Computing
N2 - Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially for supporting medical diagnosis. For this task, the black-box nature of deep learning techniques must somehow be lightened up to clarify their promising results. Hence, we aim to investigate the impact of the ResNet-50 deep convolutional design on Barrett’s esophagus and adenocarcinoma classification. To this end, aiming at a two-step learning technique, the output of each convolutional layer composing the ResNet-50 architecture was trained and classified to identify the layers with the greatest impact on the architecture. We showed that local information and high-dimensional features are essential to improve the classification for our task. Moreover, we observed a significant improvement when the most discriminative layers had more influence on the training and classification of ResNet-50 for Barrett’s esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
KW - Multistep training
KW - Barrett’s esophagus detection
KW - Convolutional neural networks
KW - Deep learning
Y1 - 2024
U6 - https://doi.org/10.1007/s11517-024-03142-8
VL - 62
SP - 3355
EP - 3372
PB - Springer Nature
CY - Heidelberg
ER -
TY - JOUR
A1 - Souza Jr., Luis Antonio de
A1 - Pacheco, André G.C.
A1 - Passos, Leandro A.
A1 - Santana, Marcos Cleison S.
A1 - Mendel, Robert
A1 - Ebigbo, Alanna
A1 - Probst, Andreas
A1 - Messmann, Helmut
A1 - Palm, Christoph
A1 - Papa, João Paulo
T1 - DeepCraftFuse: visual and deeply-learnable features work better together for esophageal cancer detection in patients with Barrett’s esophagus
JF - Neural Computing and Applications
N2 - Limitations in computer-assisted diagnosis include the lack of labeled data and the inability to model the relation between what experts see and what computers learn. Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially for supporting medical diagnosis. While deep learning techniques are broad enough that unseen information might help learn patterns of interest, human insights describing the objects of interest help in decision-making. This paper proposes a novel approach, DeepCraftFuse, to address the challenge of combining information provided by deep networks with visual-based features to significantly enhance the correct identification of cancerous tissues in patients affected with Barrett’s esophagus (BE). We demonstrate that DeepCraftFuse outperforms state-of-the-art techniques on private and public datasets, reaching results of around 95% when distinguishing patients with BE who are either positive or negative for esophageal cancer.
KW - Deep learning
KW - Esophageal cancer
KW - Adenocarcinoma
KW - Barrett’s esophagus
KW - Diagnosis
KW - Machine learning
KW - Object detector
Y1 - 2024
U6 - https://doi.org/10.1007/s00521-024-09615-z
VL - 36
SP - 10445
EP - 10459
PB - Springer
CY - London
ER -
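As an illustration of the layer-selective idea summarized in the first record's abstract, the following PyTorch/scikit-learn sketch probes several ResNet-50 stages with a simple per-layer classifier to rank their discriminative power. It is a hypothetical minimal example, not the authors' published code: the probed stages, the placeholder data, and the logistic-regression probe are all assumptions.

# Hypothetical sketch (not the authors' code): extract per-layer ResNet-50
# representations and fit one classifier per layer to rank layer importance.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Pretrained ResNet-50 used here only as a fixed feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# The paper evaluates every convolutional layer; for brevity this sketch
# probes only the four residual stages.
stages = {"layer1": backbone.layer1, "layer2": backbone.layer2,
          "layer3": backbone.layer3, "layer4": backbone.layer4}

features = {name: [] for name in stages}

def make_hook(name):
    def hook(module, inputs, output):
        # Global-average-pool each stage's feature map into one vector per image.
        pooled = torch.nn.functional.adaptive_avg_pool2d(output, 1)
        features[name].append(pooled.flatten(1).detach())
    return hook

handles = [m.register_forward_hook(make_hook(n)) for n, m in stages.items()]

# Placeholder tensors standing in for the endoscopic dataset
# (Barrett's esophagus vs. adenocarcinoma), which is not public here.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,)).numpy()

with torch.no_grad():
    backbone(images)
for h in handles:
    h.remove()

# Step 1 of the two-step idea: score each layer by how well a simple
# classifier separates the classes using only that layer's features.
for name in stages:
    X = torch.cat(features[name]).numpy()
    score = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=4).mean()
    print(f"{name}: cross-validated accuracy {score:.3f}")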
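As an illustration of the feature-fusion idea summarized in the second record's abstract, the sketch below concatenates deep ResNet-50 embeddings with toy handcrafted colour statistics and trains one classifier on the fused vector. It is a hypothetical minimal example, not the published DeepCraftFuse implementation: the handcrafted descriptors, the SVM classifier, and the placeholder data are assumptions.

# Hypothetical sketch (not the published DeepCraftFuse code): fuse deep CNN
# features with simple handcrafted visual descriptors for BE classification.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 2048-d pooled embedding
backbone.eval()

def handcrafted_features(batch):
    # Toy visual descriptors: per-channel colour mean and standard deviation.
    # The paper's handcrafted features are richer; these only stand in for them.
    means = batch.mean(dim=(2, 3))
    stds = batch.std(dim=(2, 3))
    return torch.cat([means, stds], dim=1)

# Placeholder data standing in for endoscopic images and BE labels.
images = torch.randn(64, 3, 224, 224)
labels = np.random.randint(0, 2, size=64)

with torch.no_grad():
    deep = backbone(images)                 # (64, 2048) deep features
crafted = handcrafted_features(images)      # (64, 6) handcrafted features
fused = torch.cat([deep, crafted], dim=1).numpy()

# One classifier on the fused representation, mirroring the idea that visual
# and deeply-learnable features work better together.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))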