TY - CHAP
A1 - Strasser, Sophia
A1 - Kucera, Markus
T1 - Artificial intelligence in safety-relevant embedded systems - on autonomous robotic surgery
T2 - 2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI), 11-16 July 2021, Niigata, Japan
N2 - This paper focuses on Artificial Intelligence (AI) in robotic surgery. The question of safety and autonomy runs through the whole paper. Guidelines such as Safety Integrity Levels, which apply to dependable systems in general, are described briefly. Overall, this work does not explicitly present advantages of AI or instructional guidelines for building autonomous robots; instead, it concentrates on the challenges in the use of AI. In conclusion, there are still many open issues in the use of AI that cause potential gaps in reliability.
KW - artificial intelligence
KW - control engineering computing
KW - embedded systems
KW - medical robotics
KW - reliability theory
KW - safety systems
KW - surgery
Y1 - 2021
SN - 978-1-6654-2420-2
U6 - https://doi.org/10.1109/IIAI-AAI53430.2021.00089
SP - 506
EP - 509
PB - IEEE
ER -

TY - JOUR
A1 - Souza Jr., Luis Antonio de
A1 - Mendel, Robert
A1 - Strasser, Sophia
A1 - Ebigbo, Alanna
A1 - Probst, Andreas
A1 - Messmann, Helmut
A1 - Papa, João Paulo
A1 - Palm, Christoph
T1 - Convolutional Neural Networks for the evaluation of cancer in Barrett’s esophagus: Explainable AI to lighten up the black-box
JF - Computers in Biology and Medicine
N2 - Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be ensured in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially when diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be lightened up to transfer their promising results into clinical practice. Hence, we aim to investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early cancerous tissues in Barrett’s esophagus-diagnosed patients. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with the experts’ previous annotations of cancerous tissue. We could show that saliency attributes match best with the experts’ manual delineations. Moreover, there is a moderate to high correlation between the sensitivity of a model and the human-and-computer agreement. The results also showed that the higher the model’s sensitivity, the stronger the agreement between human and computational segmentation. We observed a relevant relation between computational learning and experts’ insights, demonstrating how human knowledge may influence correct computational learning.
KW - Deep Learning
KW - Artificial intelligence
KW - Computer-assisted medicine
KW - Barrett's esophagus
KW - Adenocarcinoma
KW - Machine learning
KW - Explainable artificial intelligence
KW - Computer-aided diagnosis
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-20126
SN - 0010-4825
VL - 135
SP - 1
EP - 14
PB - Elsevier
ER -