<?xml version="1.0" encoding="utf-8"?>
<export-example>
  <doc>
    <id>2024</id>
    <completedYear/>
    <publishedYear>2021</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>S 01</issue>
    <volume>53</volume>
    <type>conferencepresentation</type>
    <publisherName>Georg Thieme Verlag</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2021-07-30</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Endoscopic Diagnosis of Eosinophilic Esophagitis Using a deep Learning Algorithm</title>
    <abstract language="eng">Aims &#13;
Eosinophilic esophagitis (EoE) is easily missed during endoscopy, either because physicians are not familiar with its endoscopic features or the morphologic changes are too subtle. In this preliminary paper, we present the first attempt to detect EoE in endoscopic white light (WL) images using a deep learning network (EoE-AI).&#13;
&#13;
Methods &#13;
401 WL images of eosinophilic esophagitis and 871 WL images of normal esophageal mucosa were evaluated. All images were assessed for the Endoscopic Reference Score (EREFS) (edema, rings, exudates, furrows, strictures). Images with strictures were excluded. EoE was defined as the presence of at least 15 eosinophils per high-power field on biopsy. A convolutional neural network based on the ResNet architecture with five-fold cross-validation runs was used. Adding auxiliary EREFS-classification branches to the neural network allowed the inclusion of the scores as optimization criteria during training. EoE-AI was evaluated for sensitivity, specificity, and F1-score. In addition, two human endoscopists evaluated the images.&#13;
&#13;
Results &#13;
EoE-AI showed a mean sensitivity, specificity, and F1-score of 0.759, 0.976, and 0.834, respectively, averaged over the five distinct cross-validation runs. With the EREFS-augmented architecture, mean sensitivity, specificity, and F1-score were 0.848, 0.945, and 0.861, respectively. In comparison, the two human endoscopists had an average sensitivity, specificity, and F1-score of 0.718, 0.958, and 0.793.&#13;
&#13;
Conclusions &#13;
To the best of our knowledge, this is the first application of deep learning to endoscopic images of EoE which were also assessed after augmentation with the EREFS score. The next step is the evaluation of EoE-AI using an external dataset. We then plan to assess the EoE-AI tool on endoscopic videos, and also in real time. This preliminary work is encouraging regarding the ability of AI to enhance physician detection of EoE and potentially to perform a true “optical biopsy”, but more work is needed.</abstract>
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/s-0041-1724274</identifier>
    <enrichment key="ConferenceStatement">ESGE Days 2021</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Christoph Römmele</author>
    <author>Robert Mendel</author>
    <author>David Rauber</author>
    <author>Tobias Rückert</author>
    <author>Michael F. Byrne</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Eosinophilic Esophagitis</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Endoscopy</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep Learning</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="610">Medizin und Gesundheit</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>5777</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>article</type>
    <publisherName>Elsevier</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2023-02-02</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Detection of duodenal villous atrophy on endoscopic images using a deep learning algorithm</title>
    <abstract language="eng">Background and aims&#13;
Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance.&#13;
&#13;
Methods&#13;
A dataset of 858 endoscopic images of 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet 18 deep learning model to detect villous atrophy. The algorithm was tested on an external dataset, as were six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm’s result during the test. Based on the distribution of these consultations, test images were stratified into “easy” and “difficult”, and performance was measured separately for each stratum.&#13;
&#13;
Results&#13;
External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63%, 72%, and 67%, while the corresponding values in experts were 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for “difficult” images, the performance of the AI algorithm was stable.&#13;
&#13;
Conclusion&#13;
In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on “difficult” images suggests a further positive add-on effect in challenging cases.</abstract>
    <parentTitle language="eng">Gastrointestinal Endoscopy</parentTitle>
    <identifier type="doi">10.1016/j.gie.2023.01.006</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="Kostentraeger">2071855</enrichment>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Markus W. Scheppach</author>
    <author>David Rauber</author>
    <author>Johannes Stallhofer</author>
    <author>Anna Muzalyova</author>
    <author>Vera Otten</author>
    <author>Carolin Manzeneder</author>
    <author>Tanja Schwamberger</author>
    <author>Julia Wanzl</author>
    <author>Jakob Schlottmann</author>
    <author>Vidan Tadic</author>
    <author>Andreas Probst</author>
    <author>Elisabeth Schnoy</author>
    <author>Christoph Römmele</author>
    <author>Carola Fleischmann</author>
    <author>Michael Meinikheim</author>
    <author>Silvia Miller</author>
    <author>Bruno Märkl</author>
    <author>Andreas Stallmach</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>celiac disease</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>villous atrophy</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>endoscopy detection</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>artificial intelligence</value>
    </subject>
    <collection role="ddc" number="00">Informatik, Wissen, Systeme</collection>
    <collection role="ddc" number="61">Medizin und Gesundheit</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="oaweg" number="">Hybrid Open Access - OA-Veröffentlichung in einer Subskriptionszeitschrift/-medium</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>6040</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>S169</pageNumber>
    <edition/>
    <issue>S02</issue>
    <volume>55</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2023-05-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">AI-assisted detection and characterization of early Barrett's neoplasia: Results of an Interim analysis</title>
    <abstract language="eng">Aims &#13;
Evaluation of the add-on effect of an artificial intelligence (AI)-based clinical decision support system on the performance of endoscopists with different degrees of expertise in the field of Barrett's esophagus (BE) and Barrett's esophagus-related neoplasia (BERN).&#13;
&#13;
Methods &#13;
The support system is based on a multi-task deep learning model trained to solve a segmentation task and several classification tasks. The training approach represents an extension of the ECMT semi-supervised learning algorithm. The complete system evaluates a decision tree over estimated motion, classification, segmentation, and temporal constraints to decide when and how the prediction is highlighted to the observer. In our current study, ninety-six video cases of patients with BE and BERN were prospectively collected and assessed by Barrett's specialists and non-specialists. All video cases were evaluated twice – with and without AI assistance. The order of appearance, either with or without AI support, was assigned randomly. Participants were asked to detect and characterize regions of dysplasia or early neoplasia within the video sequences.&#13;
&#13;
Results &#13;
Standalone sensitivity, specificity, and accuracy of the AI system were 92.16%, 68.89%, and 81.25%, respectively. Mean sensitivity, specificity, and accuracy of expert endoscopists without AI support were 83.33%, 58.20%, and 71.48%, respectively. Gastroenterologists without Barrett's expertise but with AI support had a comparable performance with a mean sensitivity, specificity, and accuracy of 76.63%, 65.35%, and 71.36%, respectively.&#13;
&#13;
Conclusions &#13;
Non-Barrett's experts with AI support had a performance similar to that of experts in a video-based study.</abstract>
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/s-0043-1765437</identifier>
    <enrichment key="ConferenceStatement">ESGE Days 2023</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">true</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Michael Meinikheim</author>
    <author>Robert Mendel</author>
    <author>Andreas Probst</author>
    <author>Markus W. Scheppach</author>
    <author>Elisabeth Schnoy</author>
    <author>Sandra Nagl</author>
    <author>Christoph Römmele</author>
    <author>Friederike Prinz</author>
    <author>Jakob Schlottmann</author>
    <author>Daniela Golger</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <collection role="ddc" number="61">Medizin und Gesundheit</collection>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>7948</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>article</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-02-15</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Artificial intelligence improves submucosal vessel detection during third space endoscopy</title>
    <abstract language="eng">Background and study aims: While artificial intelligence (AI) shows high potential in decision support for diagnostic gastrointestinal endoscopy, its role in therapeutic endoscopy remains unclear. Third space endoscopic procedures pose the risk of intraprocedural bleeding. Therefore, we aimed to develop an AI algorithm for intraprocedural blood vessel detection. Patients and Methods: Using a test dataset with 101 standardized video clips containing 200 predefined submucosal blood vessels, 19 endoscopists were evaluated for the vessel detection rate (VDR) and time (VDT) with and without support of an AI algorithm. Test subjects were grouped according to experience in ESD. Results: With AI support, endoscopists VDR increased from 56.4% [CI 54.1–58.6] to 72.4% [CI 70.3–74.4]. Endoscopists‘ VDT dropped from 6.7sec [CI 6.2-7.1] to 5.2sec [CI 4.8-5.7]. False positive (FP) readings appeared in 4.5% of frames and were marked significantly shorter than true positives (6.0sec [CI 5.28-6.70] vs. 0.7sec [CI 0.55-0.87]). Conclusions: AI improved the vessel detection rate and time of endoscopists during third space endoscopy. While these data need to be corroborated by clinical trials, AI may prove to be an invaluable tool for the improvement of endoscopic interventions.</abstract>
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/a-2534-1164</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Markus W. Scheppach</author>
    <author>Robert Mendel</author>
    <author>Anna Muzalyova</author>
    <author>David Rauber</author>
    <author>Andreas Probst</author>
    <author>Sandra Nagl</author>
    <author>Christoph Römmele</author>
    <author>Hon Chi Yip</author>
    <author>Louis Ho Shing Lau</author>
    <author>Stefan Karl Gölder</author>
    <author>Arthur Schmidt</author>
    <author>Konstantinos Kouladouros</author>
    <author>Mohamed Abdelhafez</author>
    <author>Benjamin M. Walter</author>
    <author>Michael Meinikheim</author>
    <author>Philip Wai Yan Chiu</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Artificial Intelligence</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Third Space Endoscopy</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
  </doc>
  <doc>
    <id>6983</id>
    <completedYear/>
    <publishedYear>2024</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>24</pageNumber>
    <edition/>
    <issue/>
    <volume>169</volume>
    <type>article</type>
    <publisherName>Elsevier</publisherName>
    <publisherPlace>Amsterdam</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2024-01-06</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art</title>
    <abstract language="eng">In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were “instrument segmentation”, “instrument tracking”, “surgical tool segmentation”, and “surgical tool tracking”, resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.</abstract>
    <parentTitle language="eng">Computers in Biology and Medicine</parentTitle>
    <identifier type="doi">10.1016/j.compbiomed.2024.107929</identifier>
    <identifier type="urn">urn:nbn:de:bvb:898-opus4-69830</identifier>
    <note>Corresponding author: Tobias Rückert</note>
    <note>Corrigendum unter: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/7033</note>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="Kostentraeger">2027701</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="CorrespondingAuthor">Tobias Rückert</enrichment>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Tobias Rückert</author>
    <author>Daniel Rückert</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Deep Learning</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Minimal-invasive Chirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildsegmentierung</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Surgical instrument segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Surgical instrument tracking</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Spatio-temporal information</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Endoscopic surgery</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Robot-assisted surgery</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="oaweg" number="">Hybrid Open Access - OA-Veröffentlichung in einer Subskriptionszeitschrift/-medium</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="funding" number="">DEAL Elsevier</collection>
    <thesisPublisher>Ostbayerische Technische Hochschule Regensburg</thesisPublisher>
    <file>https://opus4.kobv.de/opus4-oth-regensburg/files/6983/1-s2.0-S0010482524000131-main.pdf</file>
  </doc>
  <doc>
    <id>5004</id>
    <completedYear/>
    <publishedYear>2020</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>11</pageNumber>
    <edition/>
    <issue/>
    <volume/>
    <type>preprint</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-07-27</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">2018 Robotic Scene Segmentation Challenge</title>
    <abstract language="eng">In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of exvivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative in learning about which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec we introduced the robotic instrument segmentation dataset with 10 teams participating in the challenge to perform binary, articulating parts and type segmentation of da Vinci  instruments. This challenge included realistic instrument motion and more complex porcine tissue as background and was widely addressed with modfications on U-Nets and other popular CNN architectures [1].&#13;
&#13;
In 2018 we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.</abstract>
    <identifier type="url">https://arxiv.org/abs/2001.11190</identifier>
    <identifier type="urn">urn:nbn:de:bvb:898-opus4-50049</identifier>
    <identifier type="doi">10.48550/arXiv.2001.11190</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Max Allan</author>
    <author>Satoshi Kondo</author>
    <author>Sebastian Bodenstedt</author>
    <author>Stefan Leger</author>
    <author>Rahim Kadkhodamohammadi</author>
    <author>Imanol Luengo</author>
    <author>Felix Fuentes</author>
    <author>Evangello Flouty</author>
    <author>Ahmed Mohammed</author>
    <author>Marius Pedersen</author>
    <author>Avinash Kori</author>
    <author>Varghese Alex</author>
    <author>Ganapathy Krishnamurthi</author>
    <author>David Rauber</author>
    <author>Robert Mendel</author>
    <author>Christoph Palm</author>
    <author>Sophia Bano</author>
    <author>Guinther Saibro</author>
    <author>Chi-Sheng Shih</author>
    <author>Hsun-An Chiang</author>
    <author>Juntang Zhuang</author>
    <author>Junlin Yang</author>
    <author>Vladimir Iglovikov</author>
    <author>Anton Dobrenkii</author>
    <author>Madhu Reddiboina</author>
    <author>Anubhav Reddy</author>
    <author>Xingtong Liu</author>
    <author>Cong Gao</author>
    <author>Mathias Unberath</author>
    <author>Myeonghyeon Kim</author>
    <author>Chanho Kim</author>
    <author>Chaewon Kim</author>
    <author>Hyejin Kim</author>
    <author>Gyeongmin Lee</author>
    <author>Ihsan Ullah</author>
    <author>Miguel Luna</author>
    <author>Sang Hyun Park</author>
    <author>Mahdi Azizian</author>
    <author>Danail Stoyanov</author>
    <author>Lena Maier-Hein</author>
    <author>Stefanie Speidel</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Minimally invasive surgery</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Robotic</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Minimal-invasive Chirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Robotik</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <thesisPublisher>Ostbayerische Technische Hochschule Regensburg</thesisPublisher>
    <file>https://opus4.kobv.de/opus4-oth-regensburg/files/5004/2001.11190.pdf</file>
  </doc>
  <doc>
    <id>2025</id>
    <completedYear/>
    <publishedYear>2021</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>S 01</issue>
    <volume>53</volume>
    <type>conferencepresentation</type>
    <publisherName>Georg Thieme Verlag</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2021-07-30</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Detection Of Celiac Disease Using A Deep Learning Algorithm</title>
    <abstract language="eng">Aims &#13;
Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel.&#13;
&#13;
Methods &#13;
Patients with small intestinal histology compatible with CD (Marsh classification I-III) were extracted retrospectively from the database of Augsburg University Hospital. They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, Marsh III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white light images were divided into five equal-sized subsets. We avoided splitting the images of one patient into several subsets. A ResNet-50 model was trained with the images from four subsets and then validated with the remaining subset. This process was repeated for each subset, such that each subset was validated once. Sensitivity, specificity, and harmonic mean (F1) of the algorithm were determined.&#13;
&#13;
Results &#13;
The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference.&#13;
&#13;
Conclusions &#13;
We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external dataset, as well as in the detection of CD in real time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD, and ultimately may be able to perform a true optical biopsy in real time.</abstract>
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/s-0041-1724970</identifier>
    <note>Digital poster exhibition</note>
    <enrichment key="ConferenceStatement">ESGE Days 2021</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Markus W. Scheppach</author>
    <author>David Rauber</author>
    <author>Robert Mendel</author>
    <author>Christoph Palm</author>
    <author>Michael F. Byrne</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Celiac Disease</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep Learning</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="610">Medizin und Gesundheit</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>5918</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>670e</pageFirst>
    <pageLast>674e</pageLast>
    <pageNumber/>
    <edition/>
    <issue>4</issue>
    <volume>152</volume>
    <type>article</type>
    <publisherName>Lippincott Williams &amp; Wilkins</publisherName>
    <publisherPlace>Philadelphia, Pa.</publisherPlace>
    <creatingCorporation>American Society of Plastic Surgeons</creatingCorporation>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2023-03-28</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Precise Monitoring of Returning Sensation in Digital Nerve Lesions by 3-D Imaging: A Proof-of-Concept Study</title>
    <abstract language="eng">Digital nerve lesions result in a loss of tactile sensation reflected by an anesthetic area (AA) at the radial or ulnar aspect of the respective digit. Yet, available tools to monitor the recovery of tactile sense have been criticized for their lack of validity. However, the precise quantification of AA dynamics by three-dimensional (3-D) imaging could serve as an accurate surrogate to monitor recovery following digital nerve repair.&#13;
&#13;
For validation, AAs were marked on digits of healthy volunteers to simulate the AA of impaired cutaneous innervation. Three-dimensional models were composed from raw images that had been acquired with a 3-D camera (Vectra H2) to precisely quantify the relative AA for each digit (3-D models, n = 80). Operator properties varied regarding individual experience in 3-D imaging and image processing. Additionally, the concept was applied in a clinical case study.&#13;
&#13;
Images taken by experienced photographers were rated as higher quality (p &lt; 0.001) and required less processing time (p = 0.020). Quantification of the relative AA was not altered significantly by the experience level of either the photographer (p = 0.425) or the image assembler (p = 0.749).&#13;
&#13;
The proposed concept allows precise and reliable surface quantification of digits and can be performed consistently without relevant distortion due to lack of examiner experience. Routine 3-D imaging of the AA has great potential to provide visual evidence of various returning states of sensation and to convert sensory nerve recovery into a metric variable with high responsiveness to temporal progress.</abstract>
    <parentTitle language="eng">Plastic and Reconstructive Surgery</parentTitle>
    <identifier type="doi">10.1097/PRS.0000000000010456</identifier>
    <identifier type="issn">1529-4242</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Marc Ruewe</author>
    <author>Andreas Eigenberger</author>
    <author>Silvan Klein</author>
    <author>Antonia von Riedheim</author>
    <author>Christine Gugg</author>
    <author>Lukas Prantl</author>
    <author>Christoph Palm</author>
    <author>Maximilian Weiherer</author>
    <author>Florian Zeman</author>
    <author>Alexandra Anker</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>3D imaging</value>
    </subject>
    <collection role="ddc" number="61">Medizin und Gesundheit</collection>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>7869</id>
    <completedYear/>
    <publishedYear>2024</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>6</pageNumber>
    <edition/>
    <issue/>
    <volume/>
    <type>preprint</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-01-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">iRBSM: A Deep Implicit 3D Breast Shape Model</title>
    <abstract language="eng">We present the first deep implicit 3D shape model of the female breast, building upon and improving the recently proposed Regensburg Breast Shape Model (RBSM). Compared to its PCA-based predecessor, our model employs implicit neural representations; hence, it can be trained on raw 3D breast scans and eliminates the need for computationally demanding non-rigid registration -- a task that is particularly difficult for feature-less breast shapes. The resulting model, dubbed iRBSM, captures detailed surface geometry including fine structures such as nipples and belly buttons, is highly expressive, and outperforms the RBSM on different surface reconstruction tasks. Finally, leveraging the iRBSM, we present a prototype application to 3D reconstruct breast shapes from just a single image. Model and code publicly available at this https URL.</abstract>
    <identifier type="doi">10.48550/arXiv.2412.13244</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">true</enrichment>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Maximilian Weiherer</author>
    <author>Antonia von Riedheim</author>
    <author>Vanessa Brébant</author>
    <author>Bernhard Egger</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Shape Model</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Female Breast</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Regensburg Breast Shape Model</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>4038</id>
    <completedYear/>
    <publishedYear>2020</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>S23</pageNumber>
    <edition/>
    <issue>S 01</issue>
    <volume>52</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-05-25</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Real-Time Diagnosis of an Early Barrett's Carcinoma using Artificial Intelligence (AI) - Video Case Demonstration</title>
    <abstract language="eng">Introduction &#13;
We present a clinical case showing the real-time detection, characterization and delineation of an early Barrett’s cancer using AI.&#13;
&#13;
Patients and methods &#13;
A 70-year-old patient with a long-segment Barrett’s esophagus (C5M7) was assessed with an AI algorithm.&#13;
&#13;
Results &#13;
The AI system detected a 10 mm focal lesion and AI characterization predicted cancer with a probability of &gt;90%. After ESD resection, histopathology showed mucosal adenocarcinoma (T1a (m), R0), confirming the AI diagnosis.&#13;
&#13;
Conclusion &#13;
We demonstrate the real-time AI detection, characterization and delineation of a small and early mucosal Barrett’s cancer.</abstract>
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/s-0040-1704075</identifier>
    <enrichment key="ConferenceStatement">ESGE Days 2020</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Alanna Ebigbo</author>
    <author>Robert Mendel</author>
    <author>Georgios Tziatzios</author>
    <author>Andreas Probst</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Artificial Intelligence</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Barrett's Carcinoma</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrebs</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Künstliche Intelligenz</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Diagnose</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>3381</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>267</pageFirst>
    <pageLast>272</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-04-06</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Classification of Vascular Malformations Based on T2 STIR Magnetic Resonance Imaging</title>
    <abstract language="eng">Vascular malformations (VMs) are a rare condition. They can be categorized into high-ﬂow and low-ﬂow VMs, which is a challenging task for radiologists. In this work, a very heterogeneous set of MRI images with only rough annotations are used for classification with a convolutional neural network. The main focus is to describe the challenging data set and strategies to deal with such data in terms of preprocessing, annotation usage and choice of the network architecture. We achieved a classification result of 89.47 % F1-score with a 3D ResNet 18.</abstract>
    <parentTitle language="eng">Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022</parentTitle>
    <identifier type="doi">10.1007/978-3-658-36932-3_57</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Danilo Weber Nunes</author>
    <author>Michael Hammer</author>
    <author>Simone Hammer</author>
    <author>Wibke Uller</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep Learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Magnetic Resonance Imaging</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Vascular Malformations</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>3380</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>115</pageFirst>
    <pageLast>120</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-04-06</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Analysis of Celiac Disease with Multimodal Deep Learning</title>
    <abstract language="eng">Celiac disease is an autoimmune disorder caused by gluten that results in an inﬂammatory response of the small intestine.We investigated whether celiac disease can be detected using endoscopic images through a deep learning approach. The results show that additional clinical parameters can improve the classiﬁcation accuracy. In this work, we distinguished between healthy tissue and Marsh III, according to the Marsh score system. We ﬁrst trained a baseline network to classify endoscopic images of the small bowel into these two classes and then augmented the approach with a multimodality component that took the antibody status into account.</abstract>
    <parentTitle language="eng">Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022</parentTitle>
    <identifier type="doi">10.1007/978-3-658-36932-3_25</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>David Rauber</author>
    <author>Robert Mendel</author>
    <author>Markus W. Scheppach</author>
    <author>Alanna Ebigbo</author>
    <author>Helmut Messmann</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep Learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Endoscopy</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>120</id>
    <completedYear/>
    <publishedYear>2013</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>DocAbstr. 329</issue>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>German Medical Science GMS Publishing House</publisherName>
    <publisherPlace>Düsseldorf</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Parallelization of FSL-Fast segmentation of MRI brain data</title>
    <parentTitle language="deu">58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS 2013), Lübeck, 01.-05.09.2013</parentTitle>
    <identifier type="doi">10.3205/13gmds261</identifier>
    <note>Meeting Abstract</note>
    <author>Joachim Weber</author>
    <author>Alexander Brawanski</author>
    <author>Christoph Palm</author>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Gold Open Access- Erstveröffentlichung in einem/als Open-Access-Medium</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>113</id>
    <completedYear/>
    <publishedYear>2016</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>7</pageNumber>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Interactive Computer-assisted Approach for Evaluation of Ultrastructural Cilia Abnormalities</title>
    <abstract language="eng">Introduction – Diagnosis of abnormal cilia function is based on ultrastructural analysis of axoneme defects, especialy the features of inner and outer dynein arms which are the motors of ciliar motility. Sub-optimal biopsy material, methodical, and intrinsic electron microscopy factors pose difficulty in ciliary defects evaluation. We present a computer-assisted approach based on state-of-the-art image analysis and object recognition methods yielding a time-saving and efficient diagnosis of cilia dysfunction. Method – The presented approach is based on a pipeline of basal image processing methods like smoothing, thresholding and ellipse fitting. However, integration of application specific knowledge results in robust segmentations even in cases of image artifacts. The method is build hierarchically starting with the detection of cilia within the image, followed by the detection of nine doublets within each analyzable cilium, and ending with the detection of dynein arms of each doublet. The process is concluded by a rough classification of the dynein arms as basis for a computer-assisted diagnosis. Additionally, the interaction possibilities are designed in a way, that the results are still reproducible given the completion report. Results – A qualitative evaluation showed reasonable detection results for cilia, doublets and dynein arms. However, since a ground truth is missing, the variation of the computer-assisted diagnosis should be within the subjective bias of human diagnosticians. The results of a first quantitative evaluation with five human experts and six images with 12 analyzable cilia showed, that with default parameterization 91.6% of the cilia and 98% of the doublets were found. The computer-assisted approach rated 66% of those inner and outer dynein arms correct, where all human experts agree. However, especially the quality of the dynein arm classification may be improved in future work.</abstract>
    <parentTitle language="eng">Medical Imaging 2016: Computer-Aided Diagnosis, San Diego, California, United States, 27 February - 3 March, SPIE Proceedings 97853N, 2016, ISBN 9781510600201</parentTitle>
    <identifier type="doi">10.1117/12.2214976</identifier>
    <author>Christoph Palm</author>
    <author>Heiko Siegmund</author>
    <author>Matthias Semmelmann</author>
    <author>Claudia Grafe</author>
    <author>Matthias Evert</author>
    <author>Josef A. Schroeder</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Image analysis</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Image processing</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Computer aided diagnosis and therapy</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Image classification</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Image segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Biopsy</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Electron microscopy</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Zilie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Ultrastruktur</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Anomalie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildverarbeitung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Objekterkennung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computerunterstütztes Verfahren</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>117</id>
    <completedYear/>
    <publishedYear>2015</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>784</pageFirst>
    <pageLast>800</pageLast>
    <pageNumber/>
    <edition/>
    <issue>6</issue>
    <volume>17</volume>
    <type>article</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Current standards and new concepts in MRI and PET response assessment of antiangiogenic therapies in high-grade glioma patients</title>
    <abstract language="eng">Despite multimodal treatment, the prognosis of high-grade gliomas is grim. As tumor growth is critically dependent on new blood vessel formation, antiangiogenic treatment approaches offer an innovative treatment strategy. Bevacizumab, a humanized monoclonal antibody, has been in the spotlight of antiangiogenic approaches for several years. Currently, MRI including contrast-enhanced T1-weighted and T2/fluid-attenuated inversion recovery (FLAIR) images is routinely used to evaluate antiangiogenic treatment response (Response Assessment in Neuro-Oncology criteria). However, by restoring the blood–brain barrier, bevacizumab may reduce T1 contrast enhancement and T2/FLAIR hyperintensity, thereby obscuring the imaging-based detection of progression. The aim of this review is to highlight the recent role of imaging biomarkers from MR and PET imaging on measurement of disease progression and treatment effectiveness in antiangiogenic therapies. Based on the reviewed studies, multimodal imaging combining standard MRI with new physiological MRI techniques and metabolic PET imaging, in particular amino acid tracers, may have the ability to detect antiangiogenic drug susceptibility or resistance prior to morphological changes. As advances occur in the development of therapies that target specific biochemical or molecular pathways and alter tumor physiology in potentially predictable ways, the validation of physiological and metabolic imaging biomarkers will become increasingly important in the near future.</abstract>
    <parentTitle language="eng">Neuro-Oncology</parentTitle>
    <identifier type="doi">10.1093/neuonc/nou322</identifier>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Markus Hutterer</author>
    <author>Elke Hattingen</author>
    <author>Christoph Palm</author>
    <author>Martin Andreas Proescholdt</author>
    <author>Peter Hau</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>High-grade glioma</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Antiangiogenic treatment</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>MRI</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>PET</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Multimodal response assessment</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Gliom</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Antiangiogenese</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildgebendes Verfahren</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Biomarker</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>6065</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>128</pageFirst>
    <pageLast>13</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Exploring the Effects of Contrastive Learning on Homogeneous Medical Image Data</title>
    <abstract language="eng">We investigate contrastive learning in a multi-task learning setting classifying and segmenting early Barrett’s cancer. How can contrastive learning be applied in a domain with few classes and low inter-class and inter-sample variance, potentially enabling image retrieval or image attribution? We introduce a data sampling strategy that mines per-lesion data for positive samples and keeps a queue of the recent projections as negative samples. We propose a masking strategy for the NT-Xent loss that keeps the negative set pure and removes samples from the same lesion. We show cohesion and uniqueness improvements of the proposed method in feature space. The introduction of the auxiliary objective does not affect the performance but adds the ability to indicate similarity between lesions. Therefore, the approach could enable downstream auto-documentation tasks on homogeneous medical image data.</abstract>
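    <!--
      Hypothetical sketch of the masking idea described in the abstract above: an
      NT-Xent (InfoNCE-style) loss whose negatives come from a queue of past
      projections, with queue entries from the anchor's own lesion masked out so the
      negative set stays pure. Tensor shapes, names and the temperature are
      assumptions, not the published implementation.

      import torch
      import torch.nn.functional as F

      def masked_nt_xent(anchor, positive, queue, anchor_lesion, queue_lesion, tau=0.1):
          """anchor, positive: (B, D) projections; queue: (Q, D) past projections;
          anchor_lesion: (B,) lesion ids; queue_lesion: (Q,) lesion ids of the queue."""
          anchor = F.normalize(anchor, dim=1)
          positive = F.normalize(positive, dim=1)
          queue = F.normalize(queue, dim=1)

          pos_sim = (anchor * positive).sum(dim=1, keepdim=True) / tau   # (B, 1)
          neg_sim = anchor @ queue.t() / tau                             # (B, Q)

          # Mask: queue entries stemming from the anchor's lesion are not negatives.
          same_lesion = anchor_lesion.unsqueeze(1) == queue_lesion.unsqueeze(0)
          neg_sim = neg_sim.masked_fill(same_lesion, float("-inf"))

          logits = torch.cat([pos_sim, neg_sim], dim=1)                  # (B, 1 + Q)
          targets = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
          return F.cross_entropy(logits, targets)
    -->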
    <parentTitle language="deu">Bildverarbeitung für die Medizin 2023: Proceedings, German Workshop on Medical Image Computing, July 2–4, 2023, Braunschweig</parentTitle>
    <identifier type="doi">10.1007/978-3-658-41657-7</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">true</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Robert Mendel</author>
    <author>David Rauber</author>
    <author>Christoph Palm</author>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="600">Technik, Technologie</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>1460</id>
    <completedYear/>
    <publishedYear>2021</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>178</pageFirst>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferencepresentation</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2021-03-10</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Abstract: Semi-supervised Segmentation Based on Error-correcting Supervision</title>
    <abstract language="eng">Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network.</abstract>
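    <!--
      Highly simplified, hypothetical sketch of the collaborative idea outlined in
      the abstract above: on unlabeled images the segmentation network is judged by
      a second network that predicts, per pixel, whether the segmentation is correct.
      Network interfaces, the loss and the update scheme are illustrative assumptions,
      not the published Error-Correcting Supervision formulation.

      import torch
      import torch.nn.functional as F

      def unlabeled_step(segmenter, judge, images, opt_seg):
          """One optimization step on unlabeled data; only the segmenter is updated."""
          logits = segmenter(images)                              # (B, C, H, W)
          probs = torch.softmax(logits, dim=1)

          # The judge sees the image plus the predicted probabilities and outputs a
          # per-pixel correctness logit.
          correctness = judge(torch.cat([images, probs], dim=1))  # (B, 1, H, W)

          # The segmenter is pushed towards outputs the judge considers correct.
          target = torch.ones_like(correctness)
          loss = F.binary_cross_entropy_with_logits(correctness, target)

          opt_seg.zero_grad()
          loss.backward()
          opt_seg.step()
          return loss.item()
    -->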
    <parentTitle language="eng">Bildverarbeitung für die Medizin 2021. Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7-9, 2021</parentTitle>
    <identifier type="isbn">978-3-658-33197-9</identifier>
    <identifier type="doi">10.1007/978-3-658-33198-6_43</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Robert Mendel</author>
    <author>Luis Antonio de Souza Jr.</author>
    <author>David Rauber</author>
    <author>João Paulo Papa</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Deep Learning</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>1461</id>
    <completedYear/>
    <publishedYear>2021</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>June-August</issue>
    <volume>52-53</volume>
    <type>article</type>
    <publisherName>Elsevier</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Barrett esophagus: What to expect from Artificial Intelligence?</title>
    <abstract language="eng">The evaluation and assessment of Barrett’s esophagus is challenging for both expert and nonexpert endoscopists. However, the early diagnosis of cancer in Barrett’s esophagus is crucial for its prognosis, and could save costs. Pre-clinical and clinical studies on the application of Artificial Intelligence (AI) in Barrett’s esophagus have shown promising results. In this review, we focus on the current challenges and future perspectives of implementing AI systems in the management of patients with Barrett’s esophagus.</abstract>
    <parentTitle language="eng">Best Practice &amp; Research Clinical Gastroenterology</parentTitle>
    <identifier type="issn">1521-6918</identifier>
    <identifier type="doi">10.1016/j.bpg.2021.101726</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Alanna Ebigbo</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Deep Learning</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Künstliche Intelligenz</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computerunterstützte Medizin</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Barrett</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Adenocarcinoma</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Artificial intelligence</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Convolutional neural networks</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>2150</id>
    <completedYear/>
    <publishedYear>2019</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>1</pageFirst>
    <pageLast>2</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>14</volume>
    <type>article</type>
    <publisherName>Springer</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2021-11-05</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Guest editorial of the IJCARS - BVM 2018 special issue</title>
    <parentTitle language="eng">International Journal of Computer Assisted Radiology and Surgery</parentTitle>
    <identifier type="doi">10.1007/s11548-018-01902-0</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Andreas Maier</author>
    <author>Thomas M. Deserno</author>
    <author>Heinz Handels</author>
    <author>Klaus H. Maier-Hein</author>
    <author>Christoph Palm</author>
    <author>Thomas Tolxdorff</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Medical Image Computing</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>98</id>
    <completedYear/>
    <publishedYear>2019</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>475</pageFirst>
    <pageLast>485</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>59</volume>
    <type>article</type>
    <publisherName>Elsevier</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2019-12-18</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Barrett's esophagus analysis using infinity Restricted Boltzmann Machines</title>
    <abstract language="eng">The number of patients with Barrett’s esophagus (BE) has increased in the last decades. Considering the dangerousness of the disease and its evolution to adenocarcinoma, an early diagnosis of BE may provide a high probability of cancer remission. However, limitations regarding traditional methods of detection and management of BE demand alternative solutions. As such, computer-aided tools have recently been used to assist in this problem, but the challenge still persists. To manage the problem, we introduce the infinity Restricted Boltzmann Machines (iRBMs) to the task of automatic identification of Barrett’s esophagus from endoscopic images of the lower esophagus. Moreover, since the iRBM requires a proper selection of its meta-parameters, we also present a discriminative iRBM fine-tuning using six meta-heuristic optimization techniques. We show that iRBMs are suitable for this context, providing competitive results, and that the meta-heuristic techniques are appropriate for such a task.</abstract>
    <parentTitle language="eng">Journal of Visual Communication and Image Representation</parentTitle>
    <identifier type="doi">10.1016/j.jvcir.2019.01.043</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Leandro A. Passos</author>
    <author>Luis Antonio de Souza Jr.</author>
    <author>Robert Mendel</author>
    <author>Alanna Ebigbo</author>
    <author>Andreas Probst</author>
    <author>Helmut Messmann</author>
    <author>Christoph Palm</author>
    <author>João Paulo Papa</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrankheit</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Diagnose</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Boltzmann-Maschine</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Barrett’s esophagus</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Infinity Restricted Boltzmann Machines</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Meta-heuristics</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep learning</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Metaheuristik</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Maschinelles Lernen</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="610">Medizin und Gesundheit</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>110</id>
    <completedYear/>
    <publishedYear>2017</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>308</pageFirst>
    <pageLast>314</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Barrett's Esophagus Identification Using Optimum-Path Forest</title>
    <abstract language="eng">Computer-assisted analysis of endoscopic images can be helpful for the automatic diagnosis and classification of neoplastic lesions. Barrett's esophagus (BE) is a common condition caused by reflux that is not straightforward to detect by endoscopic surveillance and is therefore susceptible to erroneous diagnosis, which can lead to cancer when not treated properly. In this work, we introduce the Optimum-Path Forest (OPF) classifier for the task of automatic identification of Barrett's esophagus, with promising results, outperforming the well-known Support Vector Machines (SVM) in the aforementioned context. We describe endoscopic images by means of feature extractors based on key point information, such as Speeded-Up Robust Features (SURF) and the Scale-Invariant Feature Transform (SIFT), and design a bag-of-visual-words that is used to feed both OPF and SVM classifiers. The best results were obtained by the OPF classifier for both feature extractors, with values of 0.732 (SURF) and 0.735 (SIFT) for sensitivity, 0.782 (SURF) and 0.806 (SIFT) for specificity, and 0.738 (SURF) and 0.732 (SIFT) for accuracy.</abstract>
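    <!--
      Illustrative sketch (assumptions, not the study's code) of a bag-of-visual-words
      pipeline as described above: SIFT descriptors, a k-means vocabulary, histogram
      features and a classifier. Since the Optimum-Path Forest classifier is not part
      of scikit-learn, the SVM baseline mentioned in the abstract is shown instead.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      def sift_descriptors(gray_images):
          """Return one (N_i, 128) SIFT descriptor array per grayscale image."""
          sift = cv2.SIFT_create()
          per_image = []
          for img in gray_images:
              _, desc = sift.detectAndCompute(img, None)
              per_image.append(desc if desc is not None else np.empty((0, 128), np.float32))
          return per_image

      def bovw_histograms(per_image_desc, k=200):
          """Cluster all descriptors into k visual words and build normalized histograms."""
          all_desc = np.vstack([d for d in per_image_desc if len(d)])
          vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)
          hists = np.zeros((len(per_image_desc), k), np.float32)
          for i, desc in enumerate(per_image_desc):
              if len(desc):
                  hists[i] = np.bincount(vocab.predict(desc), minlength=k)
                  hists[i] /= hists[i].sum()
          return hists, vocab

      # Usage idea: hists, vocab = bovw_histograms(sift_descriptors(train_images))
      #             clf = SVC(kernel="rbf").fit(hists, train_labels)
    -->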
    <parentTitle language="eng">Proceedings of the 30th Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T 2017), Niterói, Rio de Janeiro, Brazil, 2017,  17-20 October</parentTitle>
    <identifier type="doi">10.1109/SIBGRAPI.2017.47</identifier>
    <author>Luis Antonio de Souza Jr.</author>
    <author>Luis Claudio Sugi Afonso</author>
    <author>Christoph Palm</author>
    <author>João Paulo Papa</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrankheit</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Diagnose</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Maschinelles Lernen</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bilderkennung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Automatische Klassifikation</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>114</id>
    <completedYear/>
    <publishedYear>2015</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>395</pageFirst>
    <pageLast>400</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">GraphMIC: Easy Prototyping of Medical Image Computing Applications</title>
    <abstract language="eng">GraphMIC is a cross-platform image processing application utilizing the libraries ITK and OpenCV. The abstract structure of image processing pipelines is visually represented by user interface components based on modern QtQuick technology, allowing users to focus on the arrangement and parameterization of operations rather than implementing the equivalent functionality natively in C++. The application's central goal is to improve and simplify the typical workflow by providing various high-level features and functions such as multi-threading, image sequence processing, and advanced error handling. A built-in Python interpreter allows the creation of custom nodes, where user-defined algorithms can be integrated to extend the basic functionality. An embedded 2D/3D visualizer gives feedback on the resulting image of an operation or of the whole pipeline. User inputs such as seed points, contours, or regions are forwarded to the processing pipeline as parameters to enable semi-automatic image computing. We report the main concept of the application and introduce several features and their implementation. Finally, the current state of development as well as future perspectives of GraphMIC are discussed.</abstract>
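    <!--
      Purely hypothetical illustration of the custom-node idea mentioned above: a
      user-defined algorithm written as a plain Python function over a NumPy image
      that a pipeline could call. This is not GraphMIC's actual node interface.

      import numpy as np

      def custom_threshold_node(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
          """Example user algorithm: normalize the image to [0, 1] and binarize it."""
          img = image.astype(np.float32)
          value_range = img.max() - img.min()
          normalized = (img - img.min()) / (value_range if value_range > 0 else 1.0)
          return (normalized >= threshold).astype(np.uint8)
    -->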
    <parentTitle language="eng">Interactive Medical Image Computing (IMIC), Workshop at the Medical Image Computing and Computer Assisted Interventions (MICCAI 2015), 2015, Munich</parentTitle>
    <identifier type="doi">10.13140/RG.2.1.3718.4725</identifier>
    <note>Open-Access-Publikation</note>
    <author>Alexander Zehner</author>
    <author>Alexander Eduard Szalo</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildverarbeitung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Medizin</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>116</id>
    <completedYear/>
    <publishedYear>2015</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>389</pageFirst>
    <pageLast>394</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer</publisherName>
    <publisherPlace>Berlin</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Data-Parallel MRI Brain Segmentation in Clinical Use</title>
    <abstract language="eng">Structural MRI brain analysis and segmentation is a crucial part of the daily routine in neurosurgery for intervention planning. As an example, the free software FSL-FAST (FMRIB’s Segmentation Library – FMRIB’s Automated Segmentation Tool) in version 4 is used for the segmentation of brain tissue types. To speed up the segmentation procedure by parallel execution, we transferred FSL-FAST to a General Purpose Graphics Processing Unit (GPGPU) using the Open Computing Language (OpenCL) [1]. The steps necessary for parallelization initially resulted in substantially different and less useful results. Therefore, the underlying methods were revised and adapted, which introduced computational overhead. Nevertheless, we achieved a speed-up factor of 3.59 from CPU to GPGPU execution while providing similarly useful or even better results.</abstract>
    <parentTitle language="deu">Bildverarbeitung für die Medizin 2015; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 15. bis 17. März 2015 in Lübeck</parentTitle>
    <subTitle language="deu">Porting FSL-Fastv4 to GPGPUs</subTitle>
    <identifier type="doi">10.1007/978-3-662-46224-9_67</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Joachim Weber</author>
    <author>Christian Doenitz</author>
    <author>Alexander Brawanski</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Brain Segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Magnetic Resonance Imaging</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Parallel Execution</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Voxel Spacing</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>General Purpose Graphic Processing Unit</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Kernspintomografie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Gehirn</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildsegmentierung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Parallelverarbeitung</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>351</id>
    <completedYear/>
    <publishedYear>2019</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>218</pageFirst>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferencepresentation</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2020-04-22</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Abstract: Imitating Human Soft Tissue with Dual-Material 3D Printing</title>
    <abstract language="eng">Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical field to create applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Due to the limited view during an operation, surgeons have to localize underlying risk structures precisely, solely by feeling specific bony protrusions of the human hand.</abstract>
    <parentTitle language="eng">Bildverarbeitung für die Medizin 2019, Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 17. bis 19. März 2019 in Lübeck</parentTitle>
    <identifier type="isbn">978-3-658-25325-7</identifier>
    <identifier type="doi">10.1007/978-3-658-25326-4_48</identifier>
    <author>Johannes Maier</author>
    <author>Maximilian Weiherer</author>
    <author>Michaela Huber</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Handchirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>3D-Druck</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Lernprogramm</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>HaptiVisT</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>6080</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>S54</pageFirst>
    <pageLast>S56</pageLast>
    <pageNumber/>
    <edition/>
    <issue>Suppl 1</issue>
    <volume>18</volume>
    <type>conferencepresentation</type>
    <publisherName>Springer Nature</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2023-06-25</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Augmenting instrument segmentation in video sequences of minimally invasive surgery by synthetic smoky frames</title>
    <parentTitle language="eng">International Journal of Computer Assisted Radiology and Surgery</parentTitle>
    <identifier type="doi">10.1007/s11548-023-02878-2</identifier>
    <enrichment key="ConferenceStatement">CARS 2023—Computer Assisted Radiology and Surgery Proceedings of the 37th International Congress and Exhibition Munich, Germany, June 20–23, 2023</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="Kostentraeger">2027701</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Tobias Rückert</author>
    <author>Maximilian Rieder</author>
    <author>David Rauber</author>
    <author>Michel Xiao</author>
    <author>Eg Humolli</author>
    <author>Hubertus Feussner</author>
    <author>Dirk Wilhelm</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Surgical instrument segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>smoke simulation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>unpaired image-to-image translation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>robot-assisted surgery</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="617">Chirurgie und verwandte medizinische Fachrichtungen</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>8567</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>1577</pageFirst>
    <pageLast>1587</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>20</volume>
    <type>article</type>
    <publisherName>Springer Nature</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-11-06</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Enhancing generalization in zero-shot multi-label endoscopic instrument classification</title>
    <abstract language="eng">Purpose &#13;
Recognizing previously unseen classes with neural networks is a significant challenge due to their limited generalization capabilities. This issue is particularly critical in safety-critical domains such as medical applications, where accurate classification is essential for reliability and patient safety. Zero-shot learning methods address this challenge by utilizing additional semantic data, with their performance relying heavily on the quality of the generated embeddings.&#13;
&#13;
Methods &#13;
This work investigates the use of full descriptive sentences, generated by a Sentence-BERT model, as class representations, compared to simpler category-based word embeddings derived from a BERT model. Additionally, the impact of z-score normalization as a post-processing step on these embeddings is explored. The proposed approach is evaluated on a multi-label generalized zero-shot learning task, focusing on the recognition of surgical instruments in endoscopic images from minimally invasive cholecystectomies.&#13;
&#13;
Results &#13;
The results demonstrate that combining sentence embeddings and z-score normalization significantly improves model performance. For unseen classes, the AUROC improves from 43.9% to 64.9%, and the multi-label accuracy from 26.1% to 79.5%. Overall performance measured across both seen and unseen classes improves from 49.3% to 64.9% in AUROC and from 37.3% to 65.1% in multi-label accuracy, highlighting the effectiveness of our approach.&#13;
&#13;
Conclusion &#13;
These findings demonstrate that sentence embeddings and z-score normalization can substantially enhance the generalization performance of zero-shot learning models. However, as the study is based on a single dataset, future work should validate the method across diverse datasets and application domains to establish its robustness and broader applicability.</abstract>
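    <!--
      Minimal sketch of the embedding side described in the abstract above: class
      descriptions are encoded with a Sentence-BERT model and z-score normalized per
      dimension before being matched against visual features. The model name, the
      example sentences and the dot-product matching are illustrative assumptions,
      not the paper's exact pipeline.

      import numpy as np
      from sentence_transformers import SentenceTransformer

      class_sentences = [
          "A grasper is a surgical instrument used to hold and retract tissue.",
          "A hook is an electrosurgical instrument used for dissection.",
      ]

      encoder = SentenceTransformer("all-MiniLM-L6-v2")
      emb = np.asarray(encoder.encode(class_sentences))           # (C, D)

      # Z-score normalization per embedding dimension across the class set.
      emb = (emb - emb.mean(axis=0)) / (emb.std(axis=0) + 1e-8)

      def class_scores(image_features):
          """image_features: (D,) visual embedding projected into the text space."""
          return emb @ image_features                             # one score per class
    -->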
    <parentTitle language="eng">International Journal of Computer Assisted Radiology and Surgery</parentTitle>
    <identifier type="doi">10.1007/s11548-025-03439-5</identifier>
    <identifier type="urn">urn:nbn:de:bvb:898-opus4-85674</identifier>
    <note>Corresponding author der OTH Regensburg: Raphaela Maerkl</note>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="CorrespondingAuthor">Raphaela Maerkl</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">true</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Raphaela Maerkl</author>
    <author>Tobias Rueckert</author>
    <author>David Rauber</author>
    <author>Max Gutbrod</author>
    <author>Danilo Weber Nunes</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Generalized zero-shot learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Sentence embeddings</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Z-score normalization</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Multi-label classification</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Surgical instruments</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Hybrid Open Access - OA-Veröffentlichung in einer Subskriptionszeitschrift/-medium</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="funding" number="">DEAL Springer Nature</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="4">Naturwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
    <thesisPublisher>Ostbayerische Technische Hochschule Regensburg</thesisPublisher>
    <file>https://opus4.kobv.de/opus4-oth-regensburg/files/8567/Maerkl_EnhancingGeneralization2025.pdf</file>
  </doc>
  <doc>
    <id>8499</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst>e612</pageFirst>
    <pageLast>e613</pageLast>
    <pageNumber/>
    <edition/>
    <issue>08</issue>
    <volume>63</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-09-22</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Künstliche Intelligenz-basierte Erkennung von interventionellen Phasen bei der endoskopischen Submukosadissektion</title>
    <abstract language="deu">Einleitung: Die endoskopische Submukosadissektion (ESD) ist ein komplexes endoskopisches Verfahren, das technische Expertise erfordert. Objektive Methoden zur Analyse von interventionellen Abläufen bei ESD könnten für Qualitätssicherung und Ausbildung, wie auch eine automatische Befunderstellung von Nutzen sein.&#13;
&#13;
Ziele: In dieser Studie wurde ein KI-Algorithmus für die Erkennung und Klassifizierung der interventionellen Phasen der ESD entwickelt, um die technische Basis für eine standardisierte Leistungsbewertung und automatische Befunderstellung zu schaffen.&#13;
&#13;
Methodik: Vollständige ESD-Videoaufnahmen von 49 Patienten wurden retrospektiv zusammengestellt. Der Datensatz umfasste 6.390.151 Einzelbilder, die alle für die folgenden interventionellen Phasen annotiert wurden: Diagnostik, Markierung, Injektion, Dissektion und Hämostase. 3.973.712 Bilder (28 Patienten) wurden für das Training eines Video-Swin-Transformers genutzt. Dabei wurde temporale Information durch standardisierte Bildextraktion in festgelegten zeitlichen Abständen zum analysierten Bild inkorporiert. 2.416.439 separate Bilder (21 Patienten) wurden für eine interne Validierung genutzt.&#13;
&#13;
Ergebnis: Bei der internen Evaluation erreichte das System insgesamt einen F1-Wert von 0,88. Es wurden F1-Werte von 0,99, 0,89, 0,89, 0,91 und 0,52 für Diagnostik, Markierung, Injektion, Dissektion bzw. Blutungsmanagement gemessen. Die Sensitivitäten für dieselben Parameter betrugen 1,00, 0,80, 0,94, 0,89 und 0,67, die Spezifitäten lagen bei 1,00, 1,00, 0,98, 0,88 und 0,93. Positive prädiktive Werte wurden mit 0,98, 1,00, 0,85, 0,94 und 0,43 gemessen.&#13;
&#13;
Schlussfolgerung: In dieser vorläufigen Studie zeigte ein KI-Algorithmus eine hohe Leistungsfähigkeit für die Einzelbild-Erkennung von Verfahrensphasen während der ESD. Die vergleichsweise niedrige Leistung für die Blutungsphase wurde auf das seltene Auftreten von Blutungsepisoden im Trainingsdatensatz zurückgeführt, der zu diesem Zeitpunkt nur Videos in voller Länge umfasste. Die zukünftige Entwicklung des Algorithmus wird sich auf die Reduzierung von Klassenungleichgewichten durch selektive Annotationsprotokolle konzentrieren.</abstract>
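    <!--
      Hypothetical sketch of the temporal sampling idea described in the abstract
      above: for each analyzed frame, additional frames at fixed temporal offsets are
      read so that a video model sees temporal context. The offsets and the
      OpenCV-based reader are illustrative assumptions, not the study's pipeline.

      import cv2

      def sample_context_frames(video_path, center_index, offsets=(-30, -15, 0)):
          """Return the frames at center_index + offset (offsets given in frames)."""
          cap = cv2.VideoCapture(video_path)
          frames = []
          for off in offsets:
              idx = max(center_index + off, 0)
              cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
              ok, frame = cap.read()
              frames.append(frame if ok else None)
          cap.release()
          return frames
    -->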
    <parentTitle language="deu">Zeitschrift für Gastroenterologie</parentTitle>
    <identifier type="doi">10.1055/s-0045-1811093</identifier>
    <enrichment key="ConferenceStatement">79. Jahrestagung der DGVS mit Sektion Endoskopie Jahrestagung der Deutschen Gesellschaft für Allgemein- und Viszeralchirurgie mit den Arbeitsgemeinschaften der DGAV und Jahrestagung der CACP. - Viszeralmedizin 2025; 15-20. September 2025</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Markus W. Scheppach</author>
    <author>Danilo Weber Nunes</author>
    <author>David Rauber</author>
    <author>X. Arizi</author>
    <author>Andreas Probst</author>
    <author>Sandra Nagl</author>
    <author>Christoph Römmele</author>
    <author>Alanna Ebigbo</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="3">Lebenswissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>8500</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>8</issue>
    <volume>63</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-09-22</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Instrumentenerkennung während der endoskopischen Submukosadissektion mittels künstlicher Intelligenz</title>
    <abstract language="deu">Einleitung: Die endoskopische Submukosadissektion (ESD) ist eine komplexe Technik zur Resektion gastrointestinaler Frühneoplasien. Dabei werden für die verschiedenen Schritte der Intervention spezifische endoskopische Instrumente verwendet. Die präzise und automatische Erkennung und Abgrenzung der verwendeten Instrumente (Injektionsnadeln, elektrochirurgische Messer mit unterschiedlichen Konfigurationen, hämostatische Zangen) könnte wertvolle Informationen über den Fortschritt und die Verfahrensmerkmale der ESD liefern und eine automatische standardisierte Berichterstattung ermöglichen.&#13;
&#13;
Ziele: Ziel dieser Studie war die Entwicklung eines KI-Algorithmus zur Erkennung und Delineation von endoskopischen Instrumenten bei der ESD.&#13;
&#13;
Methodik: 17 ESD-Videos (9×rektal, 5×ösophageal, 3×gastrisch) wurden retrospektiv zusammengestellt. Auf 8530 Einzelbilder dieser Videos wurden durch 2 Studienmitarbeiter die folgenden Klassen eingezeichnet: Hakenmesser – Spitze, Hakenmesser – Katheter, Nadelmesser – Spitze und – Katheter, Injektionsnadel – Spitze und – Katheter sowie hämostatische Zange – Spitze und – Katheter. Der annotierte Datensatz wurde zum Training eines DeepLabV3+-Deep-Learning-Algorithmus mit ConvNeXt-Backbone zur Erkennung und Abgrenzung der genannten Klassen verwendet. Die Evaluation erfolgte durch 5-fache interne Kreuzvalidierung.&#13;
&#13;
Ergebnis: Die Validierung auf Einzelpixelbasis ergab insgesamt einen F1-Score von 0,80, eine Sensitivität von 0,81 und eine Spezifität von 1,00. Es wurden F1-Scores von 1,00, 0,97, 0,80, 0,98, 0,85, 0,97, 0,80, 0,51 bzw. 0,85 für die Klassen Hakenmesser – Katheter und – Spitze, Nadelmesser – Katheter und – Spitze, Injektionsnadel – Katheter und – Spitze, hämostatische Zange – Katheter und – Spitze gemessen.&#13;
&#13;
Schlussfolgerung: In dieser Studie wurden die wichtigsten endoskopischen Instrumente, die während der ESD verwendet werden, mit hoher Genauigkeit erkannt. Die geringere Leistung bei der hämostatische Zange – Katheter kann auf die Unterrepräsentation dieser Klassen in den Trainingsdaten zurückgeführt werden. Zukünftige Studien werden sich auf die Erweiterung der Instrumentenklassen sowie auf die Ausbalancierung der Trainingsdaten konzentrieren.</abstract>
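    <!--
      Sketch of a video-wise 5-fold split in the spirit of the internal
      cross-validation mentioned above; grouping frames by their source video so that
      no video appears in both training and validation folds is a common choice and
      an assumption here, not a detail stated in the abstract.

      import numpy as np
      from sklearn.model_selection import GroupKFold

      # Illustrative toy data: 17 videos with 10 annotated frames each.
      frame_paths = np.array([f"video{v:02d}_frame{i:03d}.png" for v in range(17) for i in range(10)])
      video_ids = np.array([v for v in range(17) for _ in range(10)])

      for fold, (train_idx, val_idx) in enumerate(GroupKFold(n_splits=5).split(frame_paths, groups=video_ids)):
          assert set(video_ids[train_idx]).isdisjoint(set(video_ids[val_idx]))
          print(f"fold {fold}: {len(train_idx)} training frames, {len(val_idx)} validation frames")
    -->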
    <parentTitle language="deu">Zeitschrift für Gastroenterologie</parentTitle>
    <identifier type="doi">10.1055/s-0045-1811092</identifier>
    <enrichment key="ConferenceStatement">79. Jahrestagung der DGVS mit Sektion Endoskopie Jahrestagung der Deutschen Gesellschaft für Allgemein- und Viszeralchirurgie mit den Arbeitsgemeinschaften der DGAV und Jahrestagung der CACP. - Viszeralmedizin 2025; 15.-20. September 2025</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Markus W. Scheppach</author>
    <author>David Rauber</author>
    <author>C. Zingler</author>
    <author>Danilo Weber Nunes</author>
    <author>Andreas Probst</author>
    <author>Christoph Römmele</author>
    <author>Sandra Nagl</author>
    <author>Alanna Ebigbo</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="3">Lebenswissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>8568</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>85</pageFirst>
    <pageLast>95</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer</publisherName>
    <publisherPlace>Cham</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-11-06</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">DIY challenge blueprint: from organization to technical realization in biomedical image analysis</title>
    <abstract language="eng">Biomedical image analysis challenges have become the de facto standard for publishing new datasets and benchmarking different state-of-the-art algorithms. Most challenges use commercial cloud-based platforms, which can limit custom options and involve disadvantages such as reduced data control and increased costs for extended functionalities. In contrast, Do-It-Yourself (DIY) approaches have the capability to emphasize reliability, compliance, and custom features, providing a solid basis for low-cost, custom designs in self-hosted systems. Our approach emphasizes cost efficiency, improved data sovereignty, and strong compliance with regulatory frameworks, such as the GDPR. This paper presents a blueprint for DIY biomedical imaging challenges, designed to provide institutions with greater autonomy over their challenge infrastructure. Our approach comprehensively addresses both organizational and technical dimensions, including key user roles, data management strategies, and secure, efficient workflows. Key technical contributions include a modular, containerized infrastructure based on Docker, integration of open-source identity management, and automated solution evaluation workflows. Practical deployment guidelines are provided to facilitate implementation and operational stability. The feasibility and adaptability of the proposed framework are demonstrated through the MICCAI 2024 PhaKIR challenge with multiple international teams submitting and validating their solutions through our self-hosted platform. This work can be used as a baseline for future self-hosted DIY implementations and our results encourage further studies in the area of biomedical image analysis challenges.</abstract>
    <parentTitle language="eng">Medical Image Computing and Computer Assisted Intervention - MICCAI 2025 ; Proceedings Part XI</parentTitle>
    <identifier type="isbn">978-3-032-05141-7</identifier>
    <identifier type="doi">10.1007/978-3-032-05141-7_9</identifier>
    <enrichment key="ConferenceStatement">28th International Conference,  23-27 September 2025, Daejeon, South Korea</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="OtherSeries">Lecture Notes in Computer Science, volume 15970</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Leonard Klausmann</author>
    <author>Tobias Rueckert</author>
    <author>David Rauber</author>
    <author>Raphaela Maerkl</author>
    <author>Suemeyye R. Yildiran</author>
    <author>Max Gutbrod</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Biomedical challenges</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Image analysis</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Blueprint</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Do-It-Yourself</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Self-hosting</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="4">Naturwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Digitale Transformation</collection>
  </doc>
  <doc>
    <id>8059</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>18</pageNumber>
    <edition/>
    <issue/>
    <volume/>
    <type>preprint</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2025-04-28</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection</title>
    <abstract language="eng">The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare. The repository is available at this https URL.</abstract>
    <identifier type="doi">10.48550/arXiv.2503.16247</identifier>
    <identifier type="arxiv">arXiv:2503.16247v1</identifier>
    <note>Der Aufsatz wurde peer-reviewed veröffentlicht und ist ebenfalls in diesem Repositorium verzeichnet unter: https://opus4.kobv.de/opus4-oth-regensburg/8467</note>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Max Gutbrod</author>
    <author>David Rauber</author>
    <author>Danilo Weber Nunes</author>
    <author>Christoph Palm</author>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>3050</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>1597</pageFirst>
    <pageLast>1616</pageLast>
    <pageNumber/>
    <edition/>
    <issue>4</issue>
    <volume>39</volume>
    <type>article</type>
    <publisherName>Springer Nature</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-03-08</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Learning the shape of female breasts: an open-access 3D statistical shape model of the female breast built from 110 breast scans</title>
    <abstract language="eng">We present the Regensburg Breast Shape Model (RBSM)—a 3D statistical shape model of the female breast built from 110 breast scans acquired in a standing position, and the first publicly available. Together with the model, a fully automated, pairwise surface registration pipeline used to establish dense correspondence among 3D breast scans is introduced. Our method is computationally efficient and requires only four landmarks to guide the registration process. A major challenge when modeling female breasts from surface-only 3D breast scans is the non-separability of breast and thorax. In order to weaken the strong coupling between breast and surrounding areas, we propose to minimize the variance outside the breast region as much as possible. To achieve this goal, a novel concept called breast probability masks (BPMs) is introduced. A BPM assigns probabilities to each point of a 3D breast scan, telling how likely it is that a particular point belongs to the breast area. During registration, we use BPMs to align the template to the target as accurately as possible inside the breast region and only roughly outside. This simple yet effective strategy significantly reduces the unwanted variance outside the breast region, leading to better statistical shape models in which breast shapes are quite well decoupled from the thorax. The RBSM is thus able to produce a variety of different breast shapes as independently as possible from the shape of the thorax. Our systematic experimental evaluation reveals a generalization ability of 0.17 mm and a specificity of 2.8 mm. To underline the expressiveness of the proposed model, we finally demonstrate in two showcase applications how the RBSM can be used for surgical outcome simulation and the prediction of a missing breast from the remaining one. Our model is available at https://www.rbsm.re-mic.de/.</abstract>
    <parentTitle language="eng">The Visual Computer</parentTitle>
    <identifier type="doi">10.1007/s00371-022-02431-3</identifier>
    <identifier type="urn">urn:nbn:de:bvb:898-opus4-30506</identifier>
    <note>Corresponding author: Christoph Palm</note>
    <note>Zugehörige arXiv-Publikation:&#13;
https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/2023</note>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Maximilian Weiherer</author>
    <author>Andreas Eigenberger</author>
    <author>Bernhard Egger</author>
    <author>Vanessa Brébant</author>
    <author>Lukas Prantl</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Statistical shape model</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Non-rigid surface registration</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Breast imaging</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Surgical outcome simulation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Breast reconstruction surgery</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="oaweg" number="">Hybrid Open Access - OA-Veröffentlichung in einer Subskriptionszeitschrift/-medium</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="funding" number="">DEAL Springer Nature</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <thesisPublisher>Ostbayerische Technische Hochschule Regensburg</thesisPublisher>
    <file>https://opus4.kobv.de/opus4-oth-regensburg/files/3050/Weiherer_et_al-2022-The_Visual_Computer.pdf</file>
  </doc>
  <doc>
    <id>97</id>
    <completedYear/>
    <publishedYear>2019</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>30</pageFirst>
    <pageLast>42</pageLast>
    <pageNumber/>
    <edition/>
    <issue>1</issue>
    <volume>9</volume>
    <type>article</type>
    <publisherName>AME Publishing Company</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Imitating human soft tissue on basis of a dual-material 3D print using a support-filled metamaterial to provide bimanual haptic for a hand surgery training system</title>
    <abstract language="eng">Background: Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in the industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner-wires (K-wires), a 3D-printed hand phantom must not only be geometrically but also haptically correct. Due to a limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling of specific bony protrusions of the human hand.&#13;
Methods: The goal of this experiment is to imitate human soft tissue with its haptic and elasticity for a realistic hand phantom fabrication, using only a dual-material 3D printer and support-material-filled metamaterial between skin and bone. We present our workflow to generate lattice structures between hard bone and soft skin with iterative cube edge (CE) or cube face (CF) unit cells. Cuboid and finger shaped sample prints with and without inner hard bone in different lattice thickness are constructed and 3D printed.&#13;
Results: The most elastic available rubber-like material is too firm to imitate soft tissue. By reducing the amount of rubber in the inner volume through support material (SUP), objects become significantly softer. Without metamaterial, after disintegration, the SUP can be shifted through the volume and thus the body loses its original shape. Although the CE design increases the elasticity, it cannot restore the fabric form. In contrast to CE, the CF design increases not only the elasticity but also guarantees a local limitation of the SUP. Therefore, the body retains its shape and internal bones remain in its intended place. Various unit cell sizes, lattice thickening and skin thickness regulate the rubber material and SUP ratio. Test prints with higher SUP and lower rubber material percentage appear softer and vice versa. This was confirmed by an expert surgeon evaluation. Subjects adjudged pure rubber-like material as too firm and samples only filled with SUP or lattice structure in CE design as not suitable for imitating tissue. 3D-printed finger samples in CF design were rated as realistic compared to the haptic of human tissue with a good palpable bone structure.&#13;
Conclusions: We developed a new dual-material 3D print technique to imitate soft tissue of the human hand with its haptic properties. Blowy SUP is trapped within a lattice structure to soften rubber-like 3D print material, which makes it possible to reproduce a realistic replica of human hand soft tissue.</abstract>
    <parentTitle language="eng">Quantitative Imaging in Medicine and Surgery</parentTitle>
    <identifier type="doi">10.21037/qims.2018.09.17</identifier>
    <identifier type="urn">urn:nbn:de:bvb:898-opus4-979</identifier>
    <note>Corresponding author: Christoph Palm</note>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Johannes Maier</author>
    <author>Maximilian Weiherer</author>
    <author>Michaela Huber</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Dual-material 3D printing</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Hand surgery training</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Metamaterial</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Support material</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Tissue-imitating hand phantom</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Handchirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>3D-Druck</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Biomaterial</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Lernprogramm</value>
    </subject>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Gold Open Access- Erstveröffentlichung in einem/als Open-Access-Medium</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>346</id>
    <completedYear/>
    <publishedYear>2020</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>340</pageFirst>
    <pageLast>455</pageLast>
    <pageNumber/>
    <edition/>
    <issue>02</issue>
    <volume>10</volume>
    <type>article</type>
    <publisherName>AME Publishing Company</publisherName>
    <publisherPlace>Hong Kong, China</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2020-04-22</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Optically tracked and 3D printed haptic phantom hand for surgical training system</title>
    <abstract language="eng">Background: For surgical fixation of bone fractures of the human hand, so-called Kirschner-wires (K-wires) are drilled through bone fragments. Due to the minimally invasive drilling procedures without a view of risk structures like vessels and nerves, a thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure an intuitive operation, this phantom hand has to be realistic in both, its position relative to the driller as well as in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone to determine the drilling starting point and angle. An optical real-time tracking is used to transfer position and rotation to the training system.&#13;
Methods: A metamaterial already developed in previous work is further improved by use of a new unit cell. Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and penetration of air through pores into the volume simulate shiftability of skin layers. For optical tracking, a rotationally symmetrical marker attached to the phantom hand with corresponding reference marker is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested.&#13;
&#13;
Results: Several cuboid and forearm sample prints lead to a final 30 centimeter long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. Validated by an expert surgeon study, this, in combination with a displacement of the uppermost skin layer, allows good palpability of the bones. Tracking of the hand marker in dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system.&#13;
&#13;
Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system.</abstract>
    <parentTitle language="eng">Quantitative Imaging in Medicine and Surgery</parentTitle>
    <identifier type="doi">10.21037/qims.2019.12.03</identifier>
    <note>Corresponding author: Christoph Palm</note>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Johannes Maier</author>
    <author>Maximilian Weiherer</author>
    <author>Michaela Huber</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Handchirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>3D-Druck</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Lernprogramm</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Zielverfolgung</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>HaptiVisT</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Dual-material 3D printing</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>hand surgery training</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>metamaterial</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>tissue imitating phantom hand</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="ddc" number="617">Chirurgie und verwandte medizinische Fachrichtungen</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Gold Open Access- Erstveröffentlichung in einem/als Open-Access-Medium</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>16</id>
    <completedYear/>
    <publishedYear>2018</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst>176</pageFirst>
    <pageLast>181</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer</publisherName>
    <publisherPlace>Berlin</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-02-21</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">CT-basiertes virtuelles Fräsen am Felsenbein</title>
    <abstract language="deu">Im Rahmen der Entwicklung eines haptisch-visuellen Trainingssystems für das Fräsen am Felsenbein werden ein Haptikarm und ein autostereoskopischer 3D-Monitor genutzt, um Chirurgen die virtuelle Manipulation von knöchernen Strukturen im Kontext eines sog. Serious Game zu ermöglichen. Unter anderem sollen Assistenzärzte im Rahmen ihrer Ausbildung das Fräsen am Felsenbein für das chirurgische Einsetzen eines Cochlea-Implantats üben können. Die Visualisierung des virtuellen Fräsens muss dafür in Echtzeit und möglichst realistisch modelliert, implementiert und evaluiert werden. Wir verwenden verschiedene Raycasting Methoden mit linearer und Nearest Neighbor Interpolation und vergleichen die visuelle Qualität und die Bildwiederholfrequenzen der Methoden. Alle verglichenen Verfahren sind sind echtzeitfähig, unterscheiden sich aber in ihrer visuellen Qualität.</abstract>
    <parentTitle language="deu">Bildverarbeitung für die Medizin 2018; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen</parentTitle>
    <subTitle language="deu">Bild- und haptischen Wiederholfrequenzen bei unterschiedlichen Rendering Methoden</subTitle>
    <identifier type="isbn">978-3-662-56537-7</identifier>
    <identifier type="doi">10.1007/978-3-662-56537-7_51</identifier>
    <enrichment key="OtherSeries">Informatik aktuell</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Daniela Franz</author>
    <author>Maria Dreher</author>
    <author>Martin Prinzen</author>
    <author>Matthias Teßmann</author>
    <author>Christoph Palm</author>
    <author>Uwe Katzky</author>
    <author>Jerome Perret</author>
    <author>Mathias Hofer</author>
    <author>Thomas Wittenberg</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Felsenbein</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Fräsen</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Virtualisierung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computertomographie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computerassistierte Chirurgie</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>103</id>
    <completedYear/>
    <publishedYear>2018</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>291</pageFirst>
    <pageLast>296</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer</publisherName>
    <publisherPlace>Berlin</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Force-Feedback-assisted Bone Drilling Simulation Based on CT Data</title>
    <abstract language="eng">In order to fix a fracture using minimally invasive surgery approaches, surgeons are drilling complex and tiny bones with a 2 dimensional X-ray as single imaging modality in the operating room. Our novel haptic force-feedback and visual assisted training system will potentially help hand surgeons to learn the drilling procedure in a realistic visual environment. Within the simulation, the collision detection as well as the interaction between virtual drill, bone voxels and surfaces are important. In this work, the chai3d collision detection and force calculation algorithms are combined with a physics engine to simulate the bone drilling process. The chosen Bullet-Physics-Engine provides a stable simulation of rigid bodies, if the collision model of the drill and the tool holder is generated as a compound shape. Three haptic points are added to the K-wire tip for removing single voxels from the bone. For the drilling process three modes are proposed to emulate the different phases of drilling in restricting the movement of a haptic device.</abstract>
    <parentTitle language="deu">Bildverarbeitung für die Medizin 2018; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen</parentTitle>
    <identifier type="doi">10.1007/978-3-662-56537-7_78</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Johannes Maier</author>
    <author>Michaela Huber</author>
    <author>Uwe Katzky</author>
    <author>Jerome Perret</author>
    <author>Thomas Wittenberg</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Handchirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Osteosynthese</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Simulation</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Lernprogramm</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>105</id>
    <completedYear/>
    <publishedYear>2018</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>214</pageFirst>
    <pageLast>219</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>17</volume>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">A haptic model for virtual petrosal bone milling</title>
    <abstract language="eng">Virtual training of bone milling requires realtime and realistic haptics of the interaction between the ”virtual mill” and a ”virtual bone”. We propose an exponential abrasion model between a virtual one and the mill bit and combine it with a coarse representation of the virtual bone and the mill shaft for collision detection using the Bullet Physics Engine. We compare our exponential abrasion model to a widely used linear abrasion model and evaluate it quantitatively and qualitatively. The evaluation results show, that we can provide virtual milling in real-time, with an abrasion behavior similar to that proposed in the literature and with a realistic feeling of five different surgeons.</abstract>
    <parentTitle language="eng">17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC2018), Tagungsband, 2018, Leipzig, 13.-15. September</parentTitle>
    <identifier type="url">https://www.curac.org/images/advportfoliopro/images/CURAC2018/CURAC 2018 Tagungsband.pdf</identifier>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Thomas Eixelberger</author>
    <author>Thomas Wittenberg</author>
    <author>Jerome Perret</author>
    <author>Uwe Katzky</author>
    <author>Martina Simon</author>
    <author>Stephanie Schmitt-Rüth</author>
    <author>Mathias Hofer</author>
    <author>M. Sorge</author>
    <author>R. Jacob</author>
    <author>Felix B. Engel</author>
    <author>A. Gostian</author>
    <author>Christoph Palm</author>
    <author>Daniela Franz</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Osteosynthese</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Simulation</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Lernprogramm</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>119</id>
    <completedYear/>
    <publishedYear>2013</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>DocAbstr. 324</issue>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>German Medical Science GMS Publishing House</publisherName>
    <publisherPlace>Düsseldorf</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Biomedical Image and Signal Computing (BISC 2013)</title>
    <parentTitle language="deu">58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS 2013), Lübeck, 01.-05.09.2013</parentTitle>
    <identifier type="doi">doi:10.3205/13gmds257</identifier>
    <note>Meeting Abstract</note>
    <licence>Creative Commons - CC BY-NC-ND - Namensnennung - Nicht kommerziell - Keine Bearbeitungen 4.0 International</licence>
    <author>Christoph Palm</author>
    <author>Thomas Schanze</author>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>6</id>
    <completedYear/>
    <publishedYear>2019</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>1143</pageFirst>
    <pageLast>1145</pageLast>
    <pageNumber>3</pageNumber>
    <edition/>
    <issue>7</issue>
    <volume>68</volume>
    <type>article</type>
    <publisherName>British Society of Gastroenterology</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-12-03</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma</title>
    <abstract language="eng">Computer-aided diagnosis using deep learning (CAD-DL) may be an instrument to improve endoscopic assessment of Barrett’s oesophagus&#13;
(BE) and early oesophageal adenocarcinoma (EAC). Based on still images from two databases, the diagnosis of EAC by CAD-DL reached sensitivities/specificities of 97%/88% (Augsburg data) and 92%/100% (Medical Image Computing and Computer-Assisted Intervention [MICCAI]&#13;
data) for white light (WL) images and 94%/80% for narrow band images (NBI) (Augsburg data), respectively. Tumour margins delineated by&#13;
experts into images were detected satisfactorily with a Dice coefficient (D) of 0.72. This could be a first step towards CAD-DL for BE assessment. If developed further, it could become a useful&#13;
adjunctive tool for patient management.</abstract>
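    <!-- Illustrative sketch only: the Dice coefficient referenced above as the overlap measure between a predicted and an expert-delineated tumour margin; the toy masks are stand-ins, not the study data.

         import numpy as np

         def dice(pred: np.ndarray, ref: np.ndarray) -> float:
             pred = pred.astype(bool)
             ref = ref.astype(bool)
             intersection = np.logical_and(pred, ref).sum()
             total = pred.sum() + ref.sum()
             return 2.0 * intersection / total if total else 1.0

         pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
         ref = np.zeros((64, 64), dtype=bool); ref[15:45, 15:45] = True
         print(round(dice(pred, ref), 3))
    -->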
    <parentTitle language="eng">GuT</parentTitle>
    <identifier type="doi">10.1136/gutjnl-2018-317573</identifier>
    <identifier type="urn">urn:nbn:de:bvb:898-opus4-68</identifier>
    <note>Corresponding authors: Alanna Ebigbo and Christoph Palm</note>
    <enrichment key="opus.import.file">1</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Creative Commons - CC BY-NC - Namensnennung - Nicht kommerziell 4.0 International</licence>
    <author>Alanna Ebigbo</author>
    <author>Robert Mendel</author>
    <author>Andreas Probst</author>
    <author>Johannes Manzeneder</author>
    <author>Luis Antonio de Souza Jr.</author>
    <author>João Paulo Papa</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrebs</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Diagnose</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computerunterstütztes Verfahren</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Maschinelles Lernen</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="oaweg" number="">Hybrid Open Access - OA-Veröffentlichung in einer Subskriptionszeitschrift/-medium</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <thesisPublisher>Ostbayerische Technische Hochschule Regensburg</thesisPublisher>
    <file>https://opus4.kobv.de/opus4-oth-regensburg/files/6/gutjnl_2018_ebigbo.pdf</file>
  </doc>
  <doc>
    <id>104</id>
    <completedYear/>
    <publishedYear>2018</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>1</pageFirst>
    <pageLast>8</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>19</volume>
    <type>article</type>
    <publisherName>Springer Nature</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">3D Analysis of Osteosyntheses Material using semi-automated CT Segmentation</title>
    <abstract language="eng">Backround&#13;
Scaphoidectomy and midcarpal fusion can be performed using traditional fixation methods like K-wires, staples, screws or different dorsal (non)locking arthrodesis systems. The aim of this study is to test the Aptus four corner locking plate and to compare the clinical findings to the data revealed by CT scans and semi-automated segmentation.&#13;
Methods:&#13;
This is a retrospective review of eleven patients suffering from scapholunate advanced collapse (SLAC) or scaphoid non-union advanced collapse (SNAC) wrist, who received a four corner fusion between August 2011 and July 2014. The clinical evaluation consisted of measuring the range of motion (ROM), strength and pain on a visual analogue scale (VAS). Additionally, the Disabilities of the Arm, Shoulder and Hand (QuickDASH) and the Mayo Wrist Score were assessed. A computerized tomography (CT) of the wrist was obtained six weeks postoperatively. After semi-automated segmentation of the CT scans, the models were post processed and surveyed.&#13;
Results&#13;
During the six-month follow-up, mean range of motion (ROM) of the operated wrist was 60°, consisting of 30° extension and 30° flexion. While pain levels decreased significantly, 54% of grip strength and 89% of pinch strength were preserved compared to the contralateral healthy wrist. Union could be detected in all CT scans of the wrist. While X-ray pictures obtained postoperatively revealed no pathology, two user-related technical complications were found through the 3D analysis, which correlated to the clinical outcome.&#13;
Conclusion&#13;
Due to semi-automated segmentation and 3D analysis it has been proved that the plate design can live up to the manufacturers’ promises. Overall, this case series confirmed that the plate can compete with the coexisting techniques concerning clinical outcome, union and complication rate.</abstract>
    <parentTitle language="eng">BMC Musculoskeletal Disorders</parentTitle>
    <subTitle language="eng">a case series of a 4 corner fusion plate</subTitle>
    <identifier type="doi">10.1186/s12891-018-1975-0</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Rebecca Wöhl</author>
    <author>Johannes Maier</author>
    <author>Sebastian Gehmert</author>
    <author>Christoph Palm</author>
    <author>Birgit Riebschläger</author>
    <author>Michael Nerlich</author>
    <author>Michaela Huber</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Handchirurgie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Osteosynthese</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Arthrodese</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>4FC</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>SLAC wrist</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>SNAC wrist</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Semi-automated segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>3D analysis</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computertomographie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildsegmentierung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Dreidimensionale Bildverarbeitung</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Gold Open Access- Erstveröffentlichung in einem/als Open-Access-Medium</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>6041</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>S53</pageFirst>
    <pageLast>S54</pageLast>
    <pageNumber/>
    <edition/>
    <issue>S02</issue>
    <volume>55</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2023-05-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Real-time detection and delineation of tissue during third-space endoscopy using artificial intelligence (AI)</title>
    <abstract language="eng">Aims &#13;
AI has shown great potential in assisting endoscopists in diagnostics; however, its role in therapeutic endoscopy remains unclear. Endoscopic submucosal dissection (ESD) is a technically demanding intervention with a slow learning curve and relevant risks like bleeding and perforation. Therefore, we aimed to develop an algorithm for the real-time detection and delineation of relevant structures during third-space endoscopy.&#13;
&#13;
Methods &#13;
5470 still images from 59 full-length videos (47 ESD, 12 POEM) were annotated. 179681 additional unlabeled images were added to the training dataset. Subsequently, a DeepLabv3+ neural network architecture was trained with the ECMT semi-supervised algorithm (under review elsewhere). Evaluation of vessel detection was performed on a dataset of 101 standardized video clips from 15 separate third-space endoscopy videos with 200 predefined blood vessels.&#13;
&#13;
Results &#13;
Internal validation yielded an overall mean Dice score of 85% (68% for blood vessels, 86% for submucosal layer, 88% for muscle layer). On the video test data, the overall vessel detection rate (VDR) was 94% (96% for ESD, 74% for POEM). The median overall vessel detection time (VDT) was 0.32 sec (0.3 sec for ESD, 0.62 sec for POEM).&#13;
&#13;
Conclusions &#13;
Evaluation of the developed algorithm on a video test dataset showed high VDR and quick VDT, especially for ESD. Further research will focus on a possible clinical benefit of the AI application for VDR and VDT during third-space endoscopy.</abstract>
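    <!-- Illustrative sketch only: how a vessel detection rate (VDR) and a median vessel detection time (VDT) could be tallied from per-vessel first-detection frames; the field names, frame rate, and values are assumptions, not the evaluation code of the study.

         from statistics import median

         # Each entry: vessel id, first frame where the model hit the predefined vessel (or None).
         detections = {"v1": 9, "v2": None, "v3": 19, "v4": 4}
         fps = 30.0

         hits = [f for f in detections.values() if f is not None]
         vdr = len(hits) / len(detections)                 # fraction of vessels detected
         vdt = median(f / fps for f in hits)               # median detection time in seconds
         print(f"VDR={vdr:.2%}  VDT={vdt:.2f} s")
    -->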
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/s-0043-1765128</identifier>
    <enrichment key="ConferenceStatement">ESGE Days 2023</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Markus W. Scheppach</author>
    <author>Robert Mendel</author>
    <author>Andreas Probst</author>
    <author>David Rauber</author>
    <author>Tobias Rückert</author>
    <author>Michael Meinikheim</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrankheit</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Künstliche Intelligenz</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Artificial Intelligence</value>
    </subject>
    <collection role="ddc" number="61">Medizin und Gesundheit</collection>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>112</id>
    <completedYear/>
    <publishedYear>2016</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst>21</pageFirst>
    <pageLast>26</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Haptisches Lernen für Cochlea Implantationen</title>
    <abstract language="deu">Die Implantation eines Cochlea Implantates benötigt einen chirurgischen Zugang im Felsenbein und durch die Paukenhöhle des Patienten. Der Chirurg hat eine eingeschränkte Sicht im Operationsgebiet, die weiterhin viele Risikostrukturen enthält. Um eine Cochlea Implantation sicher und fehlerfrei durchzuführen, ist eine umfangreiche theoretische und praktische (teilweise berufsbegleitende) Fortbildung sowie langjährige Erfahrung notwendig. Unter Nutzung von realen klinischen CT/MRT Daten von Innen- und Mittelohr und der interaktiven Segmentierung der darin abgebildeten  Strukturen  (Nerven, Cochlea, Gehörknöchelchen,...) wird im HaptiVisT Projekt ein haptisch-visuelles Trainingssystem für die Implantation von Innen- und Mittelohr-Implantaten realisiert, das als sog. „Serious Game“ mit immersiver Didaktik gestaltet wird. Die Evaluierung des Demonstrators hinsichtlich Zweckmäßigkeit erfolgt prozessbegleitend und  ergebnisorientiert, um mögliche technische oder didaktische Fehler vor Fertigstellung des Systems aufzudecken. Drei zeitlich versetzte Evaluationen fokussieren dabei chirurgisch-fachliche, didaktische sowie haptisch-ergonomische Akzeptanzkriterien.</abstract>
    <parentTitle language="deu">15. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC2016), Tagungsband, 2016, Bern, 29.09. - 01.10.</parentTitle>
    <subTitle language="deu">Konzept - HaptiVisT Projekt</subTitle>
    <identifier type="url">https://curac.org/images/advportfoliopro/images/CURAC2016/CURAC%202016%20Tagungsband.pdf</identifier>
    <author>Daniela Franz</author>
    <author>Uwe Katzky</author>
    <author>Sabine Neumann</author>
    <author>Jerome Perret</author>
    <author>Mathias Hofer</author>
    <author>Michaela Huber</author>
    <author>Stephanie Schmitt-Rüth</author>
    <author>Sonja Haug</author>
    <author>Karsten Weber</author>
    <author>Martin Prinzen</author>
    <author>Christoph Palm</author>
    <author>Thomas Wittenberg</author>
    <subject>
      <language>deu</language>
      <type>uncontrolled</type>
      <value>Virtuelles Training</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>uncontrolled</type>
      <value>Haptisches Feedback</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>uncontrolled</type>
      <value>Gamification in der Medizin</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Cochlea-Implantat</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Operationstechnik</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Simulation</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Haptische Feedback-Technologie</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Lernprogramm</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="FakSoz">Fakultät Sozial- und Gesundheitswissenschaften</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="persons" number="weberlate">Weber, Karsten (Prof. Dr.) - Labor für Technikfolgenabschätzung und Angewandte Ethik</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Empirische Sozialforschung</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="institutes" number="">Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe)</collection>
    <collection role="persons" number="hauglasofo">Haug, Sonja (Prof. Dr.) - Labor Empirische Sozialforschung</collection>
  </doc>
  <doc>
    <id>108</id>
    <completedYear/>
    <publishedYear>2017</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>141</pageFirst>
    <pageLast>146</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer</publisherName>
    <publisherPlace>Berlin</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Barrett's Esophagus Analysis Using SURF Features</title>
    <abstract language="eng">The development of adenocarcinoma in Barrett’s esophagus is difficult to detect by endoscopic surveillance of patients with signs of dysplasia. Computer assisted diagnosis of endoscopic images (CAD) could therefore be most helpful in the demarcation and classification of neoplastic lesions. In this study we tested the feasibility of a CAD method based on Speeded up Robust Feature Detection (SURF). A given database containing 100 images from 39 patients served as benchmark for feature based classification models. Half of the images had previously been diagnosed by five clinical experts as being ”cancerous”, the other half as ”non-cancerous”. Cancerous image regions had been visibly delineated (masked) by the clinicians. SURF features acquired from full images as well as from masked areas were utilized for the supervised training and testing of an SVM classifier. The predictive accuracy of the developed CAD system is illustrated by sensitivity and specificity values. The results based on full image matching where 0.78 (sensitivity) and 0.82 (specificity) were achieved, while the masked region approach generated results of 0.90 and 0.95, respectively.</abstract>
    <parentTitle language="deu">Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg</parentTitle>
    <identifier type="doi">10.1007/978-3-662-54345-0_34</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <author>Luis Antonio de Souza Jr.</author>
    <author>Christian Hook</author>
    <author>João Paulo Papa</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrankheit</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Diagnose</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Maschinelles Sehen</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Automatische Klassifikation</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>107</id>
    <completedYear/>
    <publishedYear>2017</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>80</pageFirst>
    <pageLast>85</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>Springer</publisherName>
    <publisherPlace>Berlin</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Barrett’s Esophagus Analysis Using Convolutional Neural Networks</title>
    <abstract language="eng">We propose an automatic approach for early detection of adenocarcinoma in the esophagus. High-definition endoscopic images (50 cancer, 50 Barrett) are partitioned into a dataset containing approximately equal amounts of patches showing cancerous and non-cancerous regions. A deep convolutional neural network is adapted to the data using a transfer learning approach. The final classification of an image is determined by at least one patch, for which the probability being a cancer patch exceeds a given threshold. The model was evaluated with leave one patient out cross-validation. With sensitivity and specificity of 0.94 and 0.88, respectively, our findings improve recently published results on the same image data base considerably. Furthermore, the visualization of the class probabilities of each individual patch indicates, that our approach might be extensible to the segmentation domain.</abstract>
    <parentTitle language="deu">Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg</parentTitle>
    <identifier type="doi">10.1007/978-3-662-54345-0_23</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <author>Robert Mendel</author>
    <author>Alanna Ebigbo</author>
    <author>Andreas Probst</author>
    <author>Helmut Messmann</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Speiseröhrenkrebs</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Diagnose</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Maschinelles Lernen</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bilderkennung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Automatische Klassifikation</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>2257</id>
    <completedYear/>
    <publishedYear>2017</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Development of a haptic and visual assisted training simulation concept for complex bone drilling in minimally invasive hand surgery</title>
    <parentTitle language="eng">CARS Conference, 5.10.-7.10.2017</parentTitle>
    <author>Johannes Maier</author>
    <author>Sonja Haug</author>
    <author>Michaela Huber</author>
    <author>Uwe Katzky</author>
    <author>Sabine Neumann</author>
    <author>Jérôme Perret</author>
    <author>Martin Prinzen</author>
    <author>Karsten Weber</author>
    <author>Thomas Wittenberg</author>
    <author>Rebecca Wöhl</author>
    <author>Ulrike Scorna</author>
    <author>Christoph Palm</author>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="FakSoz">Fakultät Sozial- und Gesundheitswissenschaften</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmhaptivist">Palm, Christoph (Prof. Dr.) - Projekt HaptiVisT</collection>
    <collection role="persons" number="weberlate">Weber, Karsten (Prof. Dr.) - Labor für Technikfolgenabschätzung und Angewandte Ethik</collection>
    <collection role="othforschungsschwerpunkt" number="16311">Digitalisierung</collection>
    <collection role="institutes" number="">Institut für Sozialforschung und Technikfolgenabschätzung (IST)</collection>
    <collection role="institutes" number="">Labor Empirische Sozialforschung</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="institutes" number="">Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe)</collection>
    <collection role="persons" number="hauglasofo">Haug, Sonja (Prof. Dr.) - Labor Empirische Sozialforschung</collection>
  </doc>
  <doc>
    <id>7308</id>
    <completedYear/>
    <publishedYear>2024</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>3355</pageFirst>
    <pageLast>3372</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>62</volume>
    <type>article</type>
    <publisherName>Springer Nature</publisherName>
    <publisherPlace>Heidelberg</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2024-06-12</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Layer-selective deep representation to improve esophageal cancer classification</title>
    <abstract language="eng">Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when supporting medical diagnosis. For this task, the black-box nature of deep learning techniques must be lightened up to clarify their promising results. Hence, we aim to investigate the impact of the ResNet-50 deep convolutional design on Barrett’s esophagus and adenocarcinoma classification. To this end, and aiming at a two-step learning technique, the output of each convolutional layer that composes the ResNet-50 architecture was trained and classified in order to identify the layers with the greatest impact in the architecture. We showed that local information and high-dimensional features are essential to improve the classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett’s esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.</abstract>
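    <!--
    Illustration, not part of the bibliographic record: a minimal sketch of layer-wise probing
    of ResNet-50, in the spirit of the layer-selective idea above. Capturing stage outputs via
    forward hooks, global average pooling and scoring each stage with a linear probe are
    assumptions, not the authors' exact two-step procedure.

# Hypothetical layer-probing sketch (assumed details, not the published method).
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

net = models.resnet50(weights="IMAGENET1K_V1").eval()
stages = {"layer1": net.layer1, "layer2": net.layer2,
          "layer3": net.layer3, "layer4": net.layer4}
captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Global average pooling turns each feature map into one vector per image.
        captured[name] = output.mean(dim=(2, 3)).detach()
    return hook

for name, stage in stages.items():
    stage.register_forward_hook(make_hook(name))

@torch.no_grad()
def rank_layers(images, labels):
    """Score each stage by how well a simple linear probe separates the two classes."""
    net(images)  # one forward pass fills `captured` for every hooked stage
    return {name: cross_val_score(LogisticRegression(max_iter=1000),
                                  feats.numpy(), labels, cv=5).mean()
            for name, feats in captured.items()}
    -->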
    <parentTitle language="eng">Medical &amp; Biological Engineering &amp; Computing</parentTitle>
    <identifier type="doi">10.1007/s11517-024-03142-8</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Luis Antonio de Souza Jr.</author>
    <author>Leandro A. Passos</author>
    <author>Marcos Cleison S. Santana</author>
    <author>Robert Mendel</author>
    <author>David Rauber</author>
    <author>Alanna Ebigbo</author>
    <author>Andreas Probst</author>
    <author>Helmut Messmann</author>
    <author>João Paulo Papa</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Multistep training</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Barrett’s esophagus detection</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Convolutional neural networks</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep learning</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>2269</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>1</pageNumber>
    <edition>E-Video</edition>
    <issue>10</issue>
    <volume>54</volume>
    <type>article</type>
    <publisherName>Georg Thieme Verlag</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-01-08</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Multimodal imaging for detection and segmentation of Barrett’s esophagus-related neoplasia using artificial intelligence</title>
    <abstract language="eng">The early diagnosis of cancer in Barrett’s esophagus is crucial for improving the prognosis. However, identifying Barrett’s esophagus-related neoplasia (BERN) is challenging, even for experts [1]. Four-quadrant biopsies may improve the detection of neoplasia, but they can be associated with sampling errors. The application of artificial intelligence (AI) to the assessment of Barrett’s esophagus could improve the diagnosis of BERN, and this has been demonstrated in both preclinical and clinical studies [2] [3].&#13;
&#13;
In this video demonstration, we show the accurate detection and delineation of BERN in two patients ([Video 1]). In part 1, the AI system detects a mucosal cancer about 20 mm in size and accurately delineates the lesion in both white-light and narrow-band imaging. In part 2, a small island of BERN with high-grade dysplasia is detected and delineated in white-light, narrow-band, and texture and color enhancement imaging. The video shows the results using a transparent overlay of the mucosal cancer in real time as well as a full segmentation preview. Additionally, the optical flow allows for the assessment of endoscope movement, something which is inversely related to the reliability of the AI prediction. We demonstrate that multimodal imaging can be applied to the AI-assisted detection and segmentation of even small focal lesions in real time.</abstract>
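    <!--
    Illustration, not part of the bibliographic record: a minimal sketch of the kind of
    transparent overlay mentioned above, blending a predicted lesion mask over a frame.
    The color, the alpha value and the synthetic frame and mask are placeholders.

# Hypothetical overlay sketch (illustration only, not the AI system shown in the video).
import cv2
import numpy as np

def overlay_prediction(frame_bgr, mask, color=(0, 0, 255), alpha=0.4):
    """Blend a binary lesion mask over the frame so the underlying mucosa stays visible."""
    tint = frame_bgr.copy()
    tint[mask > 0] = color
    return cv2.addWeighted(tint, alpha, frame_bgr, 1.0 - alpha, 0.0)

# Stand-in data: a black frame and a rectangular "prediction".
frame = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 150:300] = 1
visualization = overlay_prediction(frame, mask)
    -->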
    <parentTitle language="eng">Endoscopy</parentTitle>
    <identifier type="doi">10.1055/a-1704-7885</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Alanna Ebigbo</author>
    <author>Robert Mendel</author>
    <author>Andreas Probst</author>
    <author>Michael Meinikheim</author>
    <author>Michael F. Byrne</author>
    <author>Helmut Messmann</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Video</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Artificial Intelligence</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Multimodal Imaging</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>122</id>
    <completedYear/>
    <publishedYear>2011</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>2944</pageFirst>
    <pageLast>2958</pageLast>
    <pageNumber/>
    <edition/>
    <issue>12</issue>
    <volume>44</volume>
    <type>article</type>
    <publisherName>Elsevier</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">A variational approach to vesicle membrane reconstruction from fluorescence imaging</title>
    <abstract language="eng">Biological applications like vesicle membrane analysis involve the precise segmentation of 3D structures in noisy volumetric data, obtained by techniques like magnetic resonance imaging (MRI) or laser scanning microscopy (LSM). Dealing with such data is a challenging task and requires robust and accurate segmentation methods. In this article, we propose a novel energy model for 3D segmentation fusing various cues like regional intensity subdivision, edge alignment and orientation information. The uniqueness of the approach consists in the definition of a new anisotropic regularizer, which accounts for the unbalanced slicing of the measured volume data, and the generalization of an efficient numerical scheme for solving the arising minimization problem, based on linearization and fixed-point iteration. We show how the proposed energy model can be optimized globally by making use of recent continuous convex relaxation techniques. The accuracy and robustness of the presented approach are demonstrated by evaluating it on multiple real data sets and comparing it to alternative segmentation methods based on level sets. Although the proposed model is designed with focus on the particular application at hand, it is general enough to be applied to a variety of different segmentation tasks.</abstract>
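    <!--
    Illustration, not part of the bibliographic record: a generic convex-relaxed segmentation
    energy of the family the abstract refers to, written in LaTeX as a hedged sketch. The exact
    regional, edge-alignment and anisotropy terms of the paper's model are not reproduced here.

\begin{equation}
  E(u) = \int_\Omega u(x)\, f(x)\, \mathrm{d}x
       + \lambda \int_\Omega g(x)\, \lVert D \nabla u(x) \rVert \, \mathrm{d}x,
  \qquad u \colon \Omega \to [0,1],
\end{equation}

    where f encodes the regional intensity term, g an edge-alignment weight, and D a diagonal
    matrix rescaling the slice direction to account for unbalanced sampling; after global
    minimization, the binary segmentation is recovered by thresholding u.
    -->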
    <parentTitle language="eng">Pattern Recognition</parentTitle>
    <identifier type="doi">10.1016/j.patcog.2011.04.019</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <author>Kalin Kolev</author>
    <author>Norbert Kirchgeßner</author>
    <author>Sebastian Houben</author>
    <author>Agnes Csiszár</author>
    <author>Wolfgang Rubner</author>
    <author>Christoph Palm</author>
    <author>Björn Eiben</author>
    <author>Rudolf Merkel</author>
    <author>Daniel Cremers</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>3D segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Convex optimization</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Vesicle membrane analysis</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Fluorescence imaging</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Dreidimensionale Bildverarbeitung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildsegmentierung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Konvexe Optimierung</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>170</id>
    <completedYear/>
    <publishedYear>2000</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>549</pageFirst>
    <pageLast>552</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Texture Classification of Graylevel Images by Multiscale Cross-Co-Occurrence Matrices</title>
    <abstract language="eng">Local gray level dependencies of natural images can be modelled by means of co-occurrence matrices containing joint probabilities of gray-level pairs. Texture, however, is a resolution-dependent phenomenon and hence, classification depends on the chosen scale. Since there is no optimal scale for all textures we employ a multiscale approach that acquires textural features at several scales. Thus linear and nonlinear scale-spaces are analyzed by multiscale co-occurrence matrices that describe the statistical behavior of a texture in scale-space. Classification is then performed on the basis of texture features taken from the individual scale with the highest discriminatory power. By considering cross-scale occurrences of gray level pairs, the impact of filters on the feature is described and used for classification of natural textures. This novel method was found to improve classification rates of the common co-occurrence matrix approach on standard textures significantly.</abstract>
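    <!--
    Illustration, not part of the bibliographic record: a minimal sketch of co-occurrence
    features computed over a Gaussian (linear) scale-space, as a simplified stand-in for the
    approach above. The scale set, distances, angles and the two Haralick features are
    assumptions, and the paper's cross-scale co-occurrence step is not reproduced.

# Hypothetical multiscale co-occurrence sketch (assumed details, cross-scale step omitted).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_u8, distances=(1,), angles=(0.0, np.pi / 2)):
    """Contrast and homogeneity from a symmetric, normalized co-occurrence matrix."""
    glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, "contrast").ravel(),
                      graycoprops(glcm, "homogeneity").ravel()])

def multiscale_features(image_u8, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """One feature row per scale; classification would keep the most discriminative scale."""
    rows = []
    for sigma in sigmas:
        smoothed = gaussian_filter(image_u8.astype(float), sigma)
        rows.append(glcm_features(np.clip(smoothed, 0, 255).astype(np.uint8)))
    return np.array(rows)
    -->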
    <parentTitle language="eng">Proceedings 15th International Conference on Pattern Recognition (ICPR-2000)</parentTitle>
    <identifier type="doi">10.1109/ICPR.2000.906133</identifier>
    <author>V. Metzler</author>
    <author>T. Aach</author>
    <author>Christoph Palm</author>
    <author>Thomas M. Lehmann</author>
    <collection role="ddc" number="00">Informatik, Wissen, Systeme</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othpublikationsherkunft" number="">Externe Publikationen</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>5434</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>08</issue>
    <volume>60</volume>
    <type>conferencepresentation</type>
    <publisherName>Georg Thieme Verlag</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-09-16</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Optical Flow als Methode zur Qualitätssicherung KI-unterstützter Untersuchungen von Barrett-Ösophagus und Barrett-Ösophagus assoziierten Neoplasien</title>
    <abstract language="eng">Introduction &#13;
Excessive motion in the image can reduce the performance of clinical decision support systems (CDSS) based on artificial intelligence (AI). Optical flow (OF) is a method for localizing and quantifying motion between consecutive frames.&#13;
&#13;
Aim &#13;
The aim is to improve human-computer interaction (HCI) and to offer endoscopists who use our AI system “Barrett-Ampel” for support in the assessment of Barrett’s esophagus (BE) real-time feedback on the current data quality.&#13;
&#13;
Methods &#13;
For this purpose, unaltered videos in white light (WL), narrow band imaging (NBI) and texture and color enhancement imaging (TXI) from eight endoscopic examinations of histologically confirmed BE and Barrett’s esophagus-related neoplasia (BERN) were analyzed by our AI algorithm. The OF measures used to assess image quality were the mean magnitude and the entropy of the histogram of angles. Frames were extracted automatically whenever the predefined thresholds of 3.0 for the mean magnitude and 9.0 for the entropy of the histogram of angles were exceeded. Experts first watched the videos without AI support and rated whether confounding factors negatively affected the confidence with which a diagnosis could be made in the case at hand. They then reviewed the extracted frames.&#13;
&#13;
Results &#13;
Uniform motion in one direction, such as when advancing the endoscope, was reflected in an increased magnitude while the entropy changed insignificantly. Chaotic motion, for example during irrigation, was associated with increased entropy. Overall, an unsteady endoscopic view, fluid and excessive esophageal motility were associated with increased OF and correlated with the experts’ opinion on the quality of the videos. The OF and the experts’ subjective perception of the usability of the image sequences were directly proportional. When the predefined OF thresholds were exceeded, the associated image quality was insufficient for a definitive interpretation, even for experts, in 94% of cases.&#13;
&#13;
Conclusion &#13;
OF has the potential to give endoscopists real-time feedback on the quality of the data input and thus not only to improve HCI but also to enable the optimal performance of AI algorithms.</abstract>
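    <!--
    Illustration, not part of the bibliographic record: a minimal sketch of the two frame-quality
    measures named in the abstract (mean flow magnitude, entropy of the angle histogram),
    computed from dense optical flow between consecutive frames. The Farneback parameters, the
    bin count and the logarithm base are assumptions; only the thresholds 3.0 and 9.0 are taken
    from the abstract.

# Hypothetical optical-flow quality sketch (assumed details, not the published implementation).
import cv2
import numpy as np

def flow_quality(prev_gray, curr_gray, bins=1024):
    """Mean flow magnitude and entropy (in bits) of the histogram of flow angles."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, 2.0 * np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(magnitude.mean()), float(-(p * np.log2(p)).sum())

def frame_is_unreliable(mean_magnitude, angle_entropy, mag_thresh=3.0, ent_thresh=9.0):
    """Flag frames whose motion measures exceed the thresholds reported in the abstract."""
    return mean_magnitude > mag_thresh or angle_entropy > ent_thresh
    -->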
    <parentTitle language="deu">Zeitschrift für Gastroenterologie</parentTitle>
    <identifier type="doi">10.1055/s-0042-1754997</identifier>
    <enrichment key="ConferenceStatement">Jahrestagung der Deutschen Gesellschaft für Gastroenterologie, Verdauungs- und Stoffwechselkrankheiten mit Sektion Endoskopie, 76, 2022, Hamburg</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Michael Meinikheim</author>
    <author>Robert Mendel</author>
    <author>Andreas Probst</author>
    <author>Markus W. Scheppach</author>
    <author>Helmut Messmann</author>
    <author>Christoph Palm</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Optical Flow</value>
    </subject>
    <collection role="ddc" number="61">Medizin und Gesundheit</collection>
    <collection role="ddc" number="004">Datenverarbeitung; Informatik</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>3507</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst>251</pageFirst>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue>4</issue>
    <volume>60</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-04-10</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Einsatz von künstlicher Intelligenz (KI) als Entscheidungsunterstützungssystem für nicht-Experten bei der Beurteilung von Barrett-Ösophagus assoziierten Neoplasien (BERN)</title>
    <abstract language="eng">Introduction&#13;
The reliable detection and characterization of Barrett’s esophagus-related neoplasia (BERN) is challenging even for experienced endoscopists.&#13;
&#13;
Aim&#13;
The aim of this study is to evaluate the add-on effect of an artificial intelligence (AI) system (Barrett-Ampel) as a decision support system for endoscopists without expertise in the assessment of BERN.&#13;
&#13;
Material and methods&#13;
Twelve videos in white light (WL), narrow-band imaging (NBI) and texture and color enhancement imaging (TXI) of histologically confirmed Barrett’s metaplasia or BERN were evaluated by experts and by examiners without Barrett’s expertise. The participants were asked to identify any BERN appearing in the videos and, where applicable, to mark the optimal biopsy site. Our AI system was given the same test; it segmented BERN in real time and differentiated it by color from the surrounding epithelium. The participants were then shown the videos with additional AI support and, based on this new information, were asked to re-evaluate their initial assessment.&#13;
&#13;
Results&#13;
The “Barrett-Ampel” identified all BERN regardless of the imaging mode used (WL, NBI, TXI). Two inflammatory lesions were misinterpreted (accuracy = 75%). While experts achieved comparable results (accuracy = 70.8%), endoscopists without expertise in the assessment of Barrett’s metaplasia reached an accuracy of only 58.3%. When supported by our AI system, however, the non-experts reached an accuracy of 75%.&#13;
&#13;
Conclusion&#13;
Our AI system has the potential to act as a decision support system for differentiating between Barrett’s metaplasia and BERN and thus to assist endoscopists without the corresponding expertise. A limitation of this study is the small number of included videos. Randomized controlled clinical trials are needed to confirm these results.</abstract>
    <parentTitle language="deu">Zeitschrift für Gastroenterologie</parentTitle>
    <identifier type="doi">10.1055/s-0042-1745653</identifier>
    <enrichment key="ConferenceStatement">49. Jahrestagung der Gesellschaft für Gastroenterologie in Bayern e.V., Freising</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Michael Meinikheim</author>
    <author>Robert Mendel</author>
    <author>Markus W. Scheppach</author>
    <author>Andreas Probst</author>
    <author>Friederike Prinz</author>
    <author>Tanja Schwamberger</author>
    <author>Jakob Schlottmann</author>
    <author>Stefan Karl Gölder</author>
    <author>Benjamin Walter</author>
    <author>Ingo Steinbrück</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>deu</language>
      <type>uncontrolled</type>
      <value>Barrett-Ösophagus</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>uncontrolled</type>
      <value>Künstliche Intelligenz</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>psyndex</type>
      <value>Speiseröhrenkrankheit</value>
    </subject>
    <collection role="ddc" number="0">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="ddc" number="6">Technik, Medizin, angewandte Wissenschaften</collection>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
</export-example>
