<?xml version="1.0" encoding="utf-8"?>
<export-example>
  <doc>
    <id>6484</id>
    <completedYear/>
    <publishedYear>2023</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst>e539</pageFirst>
    <pageLast>e540</pageLast>
    <pageNumber/>
    <edition/>
    <issue>08</issue>
    <volume>61</volume>
    <type>conferencepresentation</type>
    <publisherName>Thieme</publisherName>
    <publisherPlace>Stuttgart</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2023-09-18</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Verwendung künstlicher Intelligenz bei der Detektion der Papilla duodeni major</title>
    <abstract language="deu">Einleitung Die Endoskopische Retrograde Cholangiopankreatikographie (ERCP) ist der Goldstandard in der Diagnostik und Therapie von Erkrankungen des pankreatobiliären Trakts. Jedoch ist sie technisch sehr anspruchsvoll und weist eine vergleichsweise hohe Komplikationsrate auf.&#13;
&#13;
Ziele &#13;
In der vorliegenden Machbarkeitsstudie soll geprüft werden, ob mithilfe eines Deep-learning-Algorithmus die Papille und das Ostium zuverlässig detektiert werden können und somit für Endoskopiker mit geringer Erfahrung ein geeignetes Hilfsmittel, insbesondere für die Ausbildungssituation, darstellen könnten.&#13;
&#13;
Methodik&#13;
Wir betrachteten insgesamt 606 Bilddatensätze von 65 Patienten. In diesen wurde sowohl die Papilla duodeni major als auch das Ostium segmentiert. Anschließend wurde eine neuronales Netz mittels eines Deep-learning-Algorithmus trainiert. Außerdem erfolgte eine 5-fache Kreuzvaldierung.&#13;
&#13;
Ergebnisse&#13;
Bei einer 5-fachen Kreuzvaldierung auf den 606 gelabelten Daten konnte für die Klasse Papille eine F1-Wert von 0,7908, eine Sensitivität von 0,7943 und eine Spezifität von 0,9785 erreicht werden, für die Klasse Ostium eine F1-Wert von 0,5538, eine Sensitivität von 0,5094 und eine Spezifität von 0,9970 (vgl. [Tab. 1]). Unabhängig von der Klasse zeigte sich gemittelt (Klasse Papille und Klasse Ostium) ein F1-Wert von 0,6673, eine Sensitivität von 0,6519 und eine Spezifität von 0,9877 (vgl. [Tab. 2]).&#13;
&#13;
Schlussfolgerung &#13;
In vorliegende Machbarkeitsstudie konnte das neuronale Netz die Papilla duodeni major mit einer hohen Sensitivität und sehr hohen Spezifität identifizieren. Bei der Detektion des Ostiums war die Sensitivität deutlich geringer. Zukünftig soll das das neuronale Netz mit mehr Daten trainiert werden. Außerdem ist geplant, den Algorithmus auch auf Videos anzuwenden. Somit könnte langfristig ein geeignetes Hilfsmittel für die ERCP etabliert werden.</abstract>
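    <!--
    Illustrative sketch: the abstract reports per-class F1 score, sensitivity, and
    specificity. A minimal Python example of how these metrics follow from
    confusion-matrix counts; the counts used here are hypothetical placeholders,
    not the study's data.

    def metrics(tp, fp, tn, fn):
        precision = tp / (tp + fp)
        sensitivity = tp / (tp + fn)   # recall / true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return f1, sensitivity, specificity

    # Hypothetical counts for one class:
    f1, sens, spec = metrics(tp=412, fp=97, tn=4500, fn=107)
    print(f"F1={f1:.4f} sensitivity={sens:.4f} specificity={spec:.4f}")
    -->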
    <parentTitle language="deu">Zeitschrift für Gastroenterologie</parentTitle>
    <identifier type="doi">10.1055/s-0043-1772000</identifier>
    <identifier type="url">https://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0043-1772000</identifier>
    <enrichment key="ConferenceStatement">Viszeralmedizin 2023 77. Jahrestagung der DGVS mit Sektion Endoskopie Herbsttagung der Deutschen Gesellschaft für Allgemein- und Viszeralchirurgie mit den Arbeitsgemeinschaften der DGAV und Jahrestagung der CACP</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Stephan Zellmer</author>
    <author>David Rauber</author>
    <author>Andreas Probst</author>
    <author>Tobias Weber</author>
    <author>Sandra Nagl</author>
    <author>Christoph Römmele</author>
    <author>Elisabeth Schnoy</author>
    <author>Christoph Palm</author>
    <author>Helmut Messmann</author>
    <author>Alanna Ebigbo</author>
    <subject>
      <language>deu</language>
      <type>uncontrolled</type>
      <value>Künstliche Intelligenz</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
  </doc>
  <doc>
    <id>8846</id>
    <completedYear/>
    <publishedYear>2026</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>31</pageNumber>
    <edition/>
    <issue/>
    <volume>109</volume>
    <type>article</type>
    <publisherName>Elsevier</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge</title>
    <abstract language="eng">Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context – such as the current procedural phase – has emerged as a promising strategy to improve robustness and interpretability.&#13;
To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures.&#13;
We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding.</abstract>
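    <!--
    Illustrative sketch: the abstract describes unified annotations for three
    interrelated tasks on the same frames. A minimal Python data structure showing
    what such a joint per-frame record could look like; the field names and types
    are illustrative assumptions, not the challenge's actual annotation schema.

    from dataclasses import dataclass, field

    @dataclass
    class InstrumentAnnotation:
        category: str                     # instrument class name
        mask_path: str                    # pixel-wise instance segmentation mask
        keypoints: dict[str, tuple[int, int]] = field(default_factory=dict)
        # e.g. {"tip": (x, y), "shaft_tip_transition": (x, y), "shaft": (x, y)}

    @dataclass
    class FrameRecord:
        video_id: str
        frame_index: int                  # position within the full-length video
        phase: str                        # surgical phase label for this frame
        instruments: list[InstrumentAnnotation] = field(default_factory=list)
    -->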
    <parentTitle language="eng">Medical Image Analysis</parentTitle>
    <identifier type="issn">1361-8415</identifier>
    <identifier type="doi">10.1016/j.media.2026.103945</identifier>
    <note>Corresponding author at OTH Regensburg: Tobias Rueckert&#13;
&#13;
The preprint version is also recorded in this repository at: &#13;
https://opus4.kobv.de/opus4-oth-regensburg/solrsearch/index/search/start/0/rows/10/sortfield/score/sortorder/desc/searchtype/simple/query/2507.16559</note>
    <enrichment key="opus_doi_flag">true</enrichment>
    <enrichment key="local_crossrefDocumentType">journal-article</enrichment>
    <enrichment key="local_crossrefLicence">https://www.elsevier.com/tdm/userlicense/1.0/</enrichment>
    <enrichment key="local_import_origin">crossref</enrichment>
    <enrichment key="opus.source">doi-import</enrichment>
    <enrichment key="CorrespondingAuthor">Tobias Rueckert</enrichment>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="Kostentraeger">2027701</enrichment>
    <licence>Creative Commons - CC BY-NC - Namensnennung - Nicht kommerziell 4.0 International</licence>
    <author>Tobias Rueckert</author>
    <author>David Rauber</author>
    <author>Raphaela Maerkl</author>
    <author>Leonard Klausmann</author>
    <author>Suemeyye R. Yildiran</author>
    <author>Max Gutbrod</author>
    <author>Danilo Weber Nunes</author>
    <author>Alvaro Fernandez Moreno</author>
    <author>Imanol Luengo</author>
    <author>Danail Stoyanov</author>
    <author>Nicolas Toussaint</author>
    <author>Enki Cho</author>
    <author>Hyeon Bae Kim</author>
    <author>Oh Sung Choo</author>
    <author>Ka Young Kim</author>
    <author>Seong Tae Kim</author>
    <author>Gonçalo Arantes</author>
    <author>Kehan Song</author>
    <author>Jianjun Zhu</author>
    <author>Junchen Xiong</author>
    <author>Tingyi Lin</author>
    <author>Shunsuke Kikuchi</author>
    <author>Hiroki Matsuzaki</author>
    <author>Atsushi Kouno</author>
    <author>João Renato Ribeiro Manesco</author>
    <author>João Paulo Papa</author>
    <author>Tae-Min Choi</author>
    <author>Tae Kyeong Jeong</author>
    <author>Juyoun Park</author>
    <author>Oluwatosin Alabi</author>
    <author>Meng Wei</author>
    <author>Tom Vercauteren</author>
    <author>Runzhi Wu</author>
    <author>Mengya Xu</author>
    <author>An Wang</author>
    <author>Long Bai</author>
    <author>Hongliang Ren</author>
    <author>Amine Yamlahi</author>
    <author>Jakob Hennighausen</author>
    <author>Lena Maier-Hein</author>
    <author>Satoshi Kondo</author>
    <author>Satoshi Kasai</author>
    <author>Kousuke Hirasawa</author>
    <author>Shu Yang</author>
    <author>Yihui Wang</author>
    <author>Hao Chen</author>
    <author>Santiago Rodríguez</author>
    <author>Nicolás Aparicio</author>
    <author>Leonardo Manrique</author>
    <author>Christoph Palm</author>
    <author>Dirk Wilhelm</author>
    <author>Hubertus Feussner</author>
    <author>Daniel Rueckert</author>
    <author>Stefanie Speidel</author>
    <author>Sahar Nasirihaghighi</author>
    <author>Yasmina Al Khalil</author>
    <author>Yiping Li</author>
    <author>Pablo Arbeláez</author>
    <author>Nicolás Ayobi</author>
    <author>Olivia Hosie</author>
    <author>Juan Camilo Lyons</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Surgical phase recognition</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Instrument keypoint estimation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Instrument instance segmentation</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Robot-assisted surgery</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Hybrid Open Access - OA-Veröffentlichung in einer Subskriptionszeitschrift/-medium</collection>
    <collection role="oaweg" number="">Corresponding author der OTH Regensburg</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="funding" number="">DEAL Elsevier</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Digitale Transformation</collection>
  </doc>
  <doc>
    <id>8869</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>researchdata</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">PhaKIR Dataset - Surgical Procedure Phase, Keypoint, and Instrument Recognition [Data set]</title>
    <abstract language="eng">Note: A script for extracting the individual frames from the video files while preserving the challenge-compliant directory structure and frame-to-mask naming conventions is available on GitHub and can be accessed here: https://github.com/remic-othr/PhaKIR_Dataset.&#13;
&#13;
The dataset is described in the following publications: &#13;
&#13;
    Rueckert, Tobias et al.: Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge. arXiv preprint, https://arxiv.org/abs/2507.16559. 2025.&#13;
    Rueckert, Tobias et al.: Video Dataset for Surgical Phase, Keypoint, and Instrument Recognition in Laparoscopic Surgery (PhaKIR). arXiv preprint, https://arxiv.org/abs/2511.06549. 2025.&#13;
&#13;
The proposed dataset was used as the training dataset in the PhaKIR challenge (https://phakir.re-mic.de/), held as part of EndoVis 2024 at MICCAI 2024, and consists of eight real-world videos of human cholecystectomies ranging from 23 to 60 minutes in duration. The procedures were performed by experienced physicians, and the videos were recorded in three hospitals. Complementing existing datasets, our annotations provide pixel-wise instance segmentation masks of surgical instruments covering a total of 19 categories and coordinates of relevant instrument keypoints (instrument tip(s), shaft-tip transition, shaft), both at an interval of one frame per second, as well as intervention-phase labels from eight phase categories for every individual frame. A single dataset thus comprehensively covers both instrument localization and the context of the operation. Furthermore, the complete video sequences make it possible to incorporate temporal information into the respective tasks and thus further improve the resulting methods and outcomes.</abstract>
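    <!--
    Illustrative sketch: the actual extraction script lives at
    https://github.com/remic-othr/PhaKIR_Dataset and defines the challenge-compliant
    layout. This minimal Python/OpenCV example only shows the one-frame-per-second
    idea; the output directory layout and file naming are illustrative assumptions.

    import cv2
    from pathlib import Path

    def extract_frames(video_path: str, out_dir: str) -> None:
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        step = round(fps)                 # keep one frame per second of video
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:         # frame index encoded in the file name
                cv2.imwrite(f"{out_dir}/frame_{index:06d}.png", frame)
            index += 1
        cap.release()

    extract_frames("Video_01.mp4", "Video_01/frames")
    -->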
    <identifier type="doi">10.5281/zenodo.15740620</identifier>
    <enrichment key="file_type">Dataset</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">false</enrichment>
    <licence>Creative Commons - CC BY-NC-SA - Namensnennung - Nicht kommerziell -  Weitergabe unter gleichen Bedingungen 4.0 International</licence>
    <author>Tobias Rueckert</author>
    <author>David Rauber</author>
    <author>Leonard Klausmann</author>
    <author>Max Gutbrod</author>
    <author>Daniel Rueckert</author>
    <author>Hubertus Feussner</author>
    <author>Dirk Wilhelm</author>
    <author>Christoph Palm</author>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Digitale Transformation</collection>
  </doc>
  <doc>
    <id>8866</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>researchdata</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">A cleaned subset of the first five CATARACTS test videos [Data set]</title>
    <abstract language="eng">This dataset is a subset of the original CATARACTS test dataset and is used by the OpenMIBOOD framework to evaluate a specific out-of-distribution setting.&#13;
When using this dataset, it is mandatory to cite the corresponding publication (OpenMIBOOD (10.1109/CVPR52734.2025.02410)) and follow the acknowledgement and citation requirements of the original dataset (CATARACTS).&#13;
&#13;
The original CATARACTS dataset (associated publication,Homepage) consists of 50 videos of cataract surgeries, split into 25 train and 25 test videos.&#13;
This subset contains the frames of the first 5 test videos. Further, black frames at the beginning of each video were removed.</abstract>
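    <!--
    Illustrative sketch: a minimal Python/OpenCV example of how leading black frames
    can be detected by thresholding mean brightness. The threshold value is an
    assumption; the exact cleaning procedure used for this dataset may differ.

    import cv2

    def first_nonblack_index(video_path: str, threshold: float = 10.0) -> int:
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if gray.mean() > threshold:   # first frame with visible content
                break
            index += 1
        cap.release()
        return index
    -->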
    <identifier type="doi">10.5281/zenodo.14924735</identifier>
    <note>Related works: &#13;
Is derived from:&#13;
Dataset: 10.21227/ac97-8m18 (DOI)&#13;
&#13;
Software:&#13;
Repository URL: https://github.com/remic-othr/OpenMIBOOD</note>
    <enrichment key="file_format">.jpg</enrichment>
    <enrichment key="file_size">29.6 GB</enrichment>
    <enrichment key="file_type">Image</enrichment>
    <enrichment key="ConferenceStatement">The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025 (CVPR) , Nashville, Tennesse, 11-15 June 2025</enrichment>
    <licence>Creative Commons - CC BY-NC-SA - Namensnennung - Nicht kommerziell -  Weitergabe unter gleichen Bedingungen 4.0 International</licence>
    <author>Max Gutbrod</author>
    <author>David Rauber</author>
    <author>Danilo Weber Nunes</author>
    <author>Christoph Palm</author>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="4">Naturwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>8881</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>researchdata</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Cropped single instrument frames subset from Cholec80 [Data set]</title>
    <abstract language="eng">This dataset is a subset of the original Cholec80 dataset and is used by the OpenMIBOOD framework to evaluate a specific out-of-distribution setting.&#13;
When using this dataset, it is mandatory to cite the corresponding publication (OpenMIBOOD) and to follow the acknowledgement and citation requirements of the original dataset (Cholec80).&#13;
&#13;
The original Cholec80 dataset (associated paper,Homepage) consists of 80 cholecystectomy surgery videos recorded at 25 fps, performed by 13 surgeons. It includes phase annotations (25 fps) and tool presence labels (1 fps), with phase definitions provided by a senior surgeon. A tool is considered present if at least half of its tip is visible. The dataset categorizes tools into seven types: Grasper, Bipolar, Hook, Scissors, Clipper, Irrigator, and Specimen bag. Multiple tools may be present in each frame. Additionally, 76 of the 80 videos exhibit a strong black vignette.&#13;
&#13;
For this dataset subset, frames were extracted based on tool presence labels, selecting only those containing Grasper, Bipolar, Hook, or Clipper while ensuring that only a single tool appears per frame. To enhance visual consistency, the black vignette was removed by extracting an inner rectangular region, where applicable.</abstract>
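    <!--
    Illustrative sketch: a minimal pandas example of the frame selection described in
    the abstract, assuming a tool-presence table with one row per annotated frame and
    one 0/1 column per tool. The file name, column names, and crop margins are
    illustrative assumptions, not the exact procedure used to build this subset.

    import pandas as pd

    TARGET = ["Grasper", "Bipolar", "Hook", "Clipper"]
    ALL_TOOLS = TARGET + ["Scissors", "Irrigator", "SpecimenBag"]

    labels = pd.read_csv("video01_tool.txt", sep="\t")   # 1 fps presence labels
    single = labels[
        (labels[TARGET].sum(axis=1) == 1) & (labels[ALL_TOOLS].sum(axis=1) == 1)
    ]

    def crop_inner(frame, margin_frac=0.15):
        # Cut away a border fraction on each side to drop the black vignette.
        h, w = frame.shape[:2]
        dy, dx = int(h * margin_frac), int(w * margin_frac)
        return frame[dy:h - dy, dx:w - dx]
    -->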
    <identifier type="doi">10.5281/zenodo.14921670</identifier>
    <note>Related works: &#13;
Is derived from:&#13;
Journal article: 10.1109/TMI.2016.2593957 (DOI)&#13;
&#13;
Software:&#13;
Repository URL: https://github.com/remic-othr/OpenMIBOOD</note>
    <enrichment key="file_format">.png</enrichment>
    <enrichment key="file_size">20.5 GB</enrichment>
    <enrichment key="file_type">Image</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">false</enrichment>
    <licence>Creative Commons - CC BY-NC-SA - Namensnennung - Nicht kommerziell -  Weitergabe unter gleichen Bedingungen 4.0 International</licence>
    <author>Max Gutbrod</author>
    <author>David Rauber</author>
    <author>Danilo Weber Nunes</author>
    <author>Christoph Palm</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Tool Presence Detection</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Cholecystectomy</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Laparoscopic</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep Learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Out-Of-Distribution Detection</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>8882</id>
    <completedYear/>
    <publishedYear>2025</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>researchdata</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">OpenMIBOOD's classification models for the MIDOG, PhaKIR, and OASIS-3 benchmarks [Data set]</title>
    <abstract language="eng">These models are provided for evaluating post-hoc out-of-distribution methods on the three OpenMIBOOD benchmarks: MIDOG, PhaKIR, and OASIS-3.&#13;
&#13;
When using these models, make sure to give appropriate credit and cite the OpenMIBOOD publication.</abstract>
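    <!--
    Illustrative sketch: a minimal PyTorch example of loading a .pth checkpoint and
    computing a simple post-hoc OOD score (maximum softmax probability, MSP). The
    network architecture and checkpoint name are placeholders; the actual models and
    evaluation code live in the OpenMIBOOD repository.

    import torch
    import torchvision

    model = torchvision.models.resnet18(num_classes=10)   # placeholder architecture
    model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
    model.eval()

    @torch.no_grad()
    def msp_score(images: torch.Tensor) -> torch.Tensor:
        logits = model(images)
        # High MSP suggests in-distribution, low MSP suggests OOD.
        return torch.softmax(logits, dim=1).max(dim=1).values

    scores = msp_score(torch.randn(4, 3, 224, 224))       # dummy batch
    -->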
    <identifier type="doi">10.5281/zenodo.14982267</identifier>
    <note>Software Repository URL &#13;
https://github.com/remic-othr/OpenMIBOOD</note>
    <enrichment key="file_format">.pth</enrichment>
    <enrichment key="file_size">264.5 MB</enrichment>
    <enrichment key="file_type">Model</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Max Gutbrod</author>
    <author>David Rauber</author>
    <author>Danilo Weber Nunes</author>
    <author>Christoph Palm</author>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>6369</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted>2022</thesisYearAccepted>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>262</pageNumber>
    <edition/>
    <issue/>
    <volume/>
    <type>doctoralthesis</type>
    <publisherName>Universidade Federal de São Carlos</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2022-03-28</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Computer-assisted diagnosis of Barrett's esophagus using machine learning techniques</title>
    <title language="por">Auxílio ao diagnóstico automático do esôfago de Barrett utilizando aprendizado de máquina</title>
    <abstract language="eng">Esophageal adenocarcinoma is an illness that is usually hard to detect at the early stages&#13;
in the presence of Barrett’s esohagus. The development of automatic evaluation systems of such illness may be very useful, thus assisting the experts in the neoplastic region detection. With the strong growth of machine learning techniques aiming to improve the effectivess of medical diagnosis, the use of such approaches characterizes a strong scenario to be explored for the early diagnosis of esophageal adenocarcinoma. Barrett’s esophagus as a predecessor of adenocarcinoma can be explained by some risk factors, such as obesity, smoking, and late medical diagnosis. This project proposes the development of new computer vision and machine learning techniques to assist the automatic diagnosis of the esophageal adenocarcinioma based on the evaluation of two kind of features: (i) handcrafted features, calculated by means of human knowledge using some image processing technique and; (ii) deeply-learnable features, calculated exclusively based on deep learning techniques. From the extensive application of global and local protocols for the models proposed in this work, the description of cancer-affected images and Barrett’s esophagus-affected samples were generalized and deeply evaluated using, for example, classifiers such as Support Vector Machines, ResNet-50 and the combination of descriptions by handcrafted and deeply-learnable features. Also, the behavior of the automatic definition of key-points within the evaluated techniques was observed, something of a paramount importance nowadays to guarantee transparency and reliability in the decisions made by computational techniques. Thus, this project contributes to both the computational and medical fields, introducing new classifiers, approaches and interpretation of the class generalization process, in addition to proposing fast and precise manners to define cancer, delivering important and novel results concerning the accurate identification of cancer in samples affected by Barrett’s esophagus, showing values around 95% of correct identification rates and arranged in a collection of scientific works developed by the author during the research period and submitted/published to date.</abstract>
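    <!--
    Illustrative sketch: the abstract contrasts handcrafted and deeply-learnable
    features and their combination. A minimal scikit-learn example of such a fusion
    with an SVM; the feature dimensions and random data are placeholders, not the
    thesis's actual descriptors, protocols, or results.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    handcrafted = rng.normal(size=(n, 64))    # e.g. texture/color statistics
    deep = rng.normal(size=(n, 512))          # e.g. ResNet-50 penultimate features
    labels = rng.integers(0, 2, size=n)       # dummy binary labels

    fused = np.concatenate([handcrafted, deep], axis=1)
    print(cross_val_score(SVC(kernel="rbf"), fused, labels, cv=5).mean())
    -->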
    <identifier type="handle">https://repositorio.ufscar.br/handle/ufscar/15820</identifier>
    <enrichment key="TitelVerleihendInstitution">Universidade federal de São Carlos, Centro de Ciências exatas e de tecnologia</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">false</enrichment>
    <author>Luis Antonio de Souza Jr.</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Machine Learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Barrett's esophagus</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep Learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>handcrafted features</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>deeply-learnable features</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>convolutional neural networks</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>interpretability</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="persons" number="palmbarrett">Palm, Christoph (Prof. Dr.) - Projekt Barrett</collection>
    <collection role="othforschungsschwerpunkt" number="16314">Lebenswissenschaften und Ethik</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="othpublikationsherkunft" number="">Dissertation in Kooperation</collection>
  </doc>
  <doc>
    <id>8978</id>
    <completedYear/>
    <publishedYear>2026</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>11</pageNumber>
    <edition/>
    <issue>1</issue>
    <volume>32</volume>
    <type>article</type>
    <publisherName>Brazilian Computer Society</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2026-03-29</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">LiwTERM-r: a Revised Lightweight Transformer-based Model for Multimodal Skin Lesion Detection Robust to Incomplete Input</title>
    <abstract language="eng">As the most common type of cancer in the world, skin cancer accounts for approximately 30% of all diagnosed tumor-based lesions. Early diagnosis can reduce mortality and prevent disfiguring in different skin regions. With the application of machine learning techniques in recent years, especially deep learning, promising results in this task could be achieved, presenting studies demonstrating that the combination of patients’ clinical anamneses and images of the injured lesion is essential for improving the correct classification of skin lesions. Despite that, meaningful use of anamneses with multiple collected images of the same skin lesion is mandatory, requiring further investigation. Thus, this project aims to contribute to developing multimodal machine learning-based models to solve the skin lesion classification problem by employing a lightweight transformer model that is robust to missing clinical information input. As a main hypothesis, models can be fed by multiple images from different sources as input along with clinical anamneses from the patient’s historical evaluations, leading to a more factual and trustworthy diagnosis. Our model deals with the not-trivial task of combining images and clinical information concerning the skin lesions in a lightweight transformer architecture that does not demand high computation resources or even all the information from the anamneses but still presents competitive classification results.</abstract>
    <parentTitle language="eng">Journal of the Brazilian Computer Society</parentTitle>
    <identifier type="doi">10.5753/jbcs.2026.5871</identifier>
    <enrichment key="BegutachtungStatus">peer-reviewed</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">false</enrichment>
    <licence>Creative Commons - CC BY - Namensnennung 4.0 International</licence>
    <author>Luis Antonio de Souza Júnior</author>
    <author>André Georghton Cardoso Pacheco</author>
    <author>Thiago Oliveira dos Santos</author>
    <author>Wyctor Fogos da Rocha</author>
    <author>Pedro Henrique Bouzon</author>
    <author>Christoph Palm</author>
    <author>João Paulo Papa</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Deep learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Skin Lesion Detection</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Transformers</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Lightweight Architectures</value>
    </subject>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="oaweg" number="">Diamond Open Access - OA-Veröffentlichung ohne Publikationskosten (Sponsoring)</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>8974</id>
    <completedYear/>
    <publishedYear>2026</publishedYear>
    <thesisYearAccepted/>
    <language>deu</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber>xxix, 496</pageNumber>
    <edition/>
    <issue/>
    <volume/>
    <type>conferencevolume</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation>Gesellschaft für Informatik e.V.</contributingCorporation>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2026-03-29</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="deu">Bildverarbeitung für die Medizin 2026 : Proceedings, German Conference on Medical Image Computing, Lübeck March 15–17, 2026</title>
    <abstract language="deu">Die Konferenz "BVM – Bildverarbeitung für die Medizin" ist seit vielen Jahren als die nationale Plattform für den Austausch von Ideen und die Diskussion der neuesten Forschungsergebnisse im Bereich der Medizinischen Bildverarbeitung und der Künstlichen Intelligenz (KI) etabliert. Auch 2026 haben (junge) Wissenschaftler*innen, Industrie und Anwender*innen diesen Austausch vertieft. Die Beiträge dieses Bandes – die meisten davon in englischer Sprache – umfassen alle Bereiche der medizinischen Bildverarbeitung, insbesondere die Bildgebung und -akquisition, Segmentierung und Analyse, Registrierung, Visualisierung und Animation, computerunterstützte Diagnose sowie bildgestützte Therapieplanung und Therapie. Hierbei kommen Methoden des maschinellen Lernens, der biomechanischen Modellierung sowie der Validierung und Qualitätssicherung zum Einsatz.</abstract>
    <identifier type="isbn">978-3-658-51099-2</identifier>
    <identifier type="issn">1431-472X</identifier>
    <identifier type="doi">10.1007/978-3-658-51100-5</identifier>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="opus.doi.autoCreate">false</enrichment>
    <enrichment key="opus.urn.autoCreate">false</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildverarbeitung</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Computerunterstützte Medizin</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildgebendes Verfahren</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildanalyse</value>
    </subject>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Deep Learning</value>
    </subject>
    <collection role="institutes" number="FakIM">Fakultät Informatik und Mathematik</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
  <doc>
    <id>8977</id>
    <completedYear/>
    <publishedYear>2026</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>131</pageFirst>
    <pageLast>131</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferencepresentation</type>
    <publisherName>Springer Vieweg</publisherName>
    <publisherPlace>Wiesbaden</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2026-03-29</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Abstract: DIY Challenge Blueprint</title>
    <abstract language="eng">The high cost of challenge platforms prevents many people from organizing their own competitions. The do-it-yourself (DIY) challenge blueprint [1] allows you to host your own biomedical AI benchmark challenge. Our DIY approach circumvents the current constraints of commercial challenge platforms. A sovereign, extensible and cost-efficient deployment is provided via containerised, identity-managed and reproducible pipelines. Focus lies on GDPR-compliant hosting via infrastructure-as-code, automated evaluation, modular orchestration, and role-based identity and access management. The framework integrates Docker-based execution and standardised interfaces for task definitions, dataset curation and evaluation. All in all it is designed to be flexible and modular, as demonstrated in the MICCAI 2024 PhaKIR challenge [2, 3]. In this case study, different medical tasks on a multicentre laparoscopic dataset with framewise labels for phases and spatial annotations for instruments across fulllength videos were supported. This case study empirically validates the DIY challenge blueprint as a reproducible and customizable challenge-hosting infrastructure. The full code can be found at https://github.com/remic-othr/PhaKIR_DIY.</abstract>
    <parentTitle language="eng">Bildverarbeitung für die Medizin 2025: Proceedings, German Conference on Medical Image Computing, Lübeck March 15-17, 2026</parentTitle>
    <subTitle language="deu">from organization to technical implementation in Biomedical Image Analysis</subTitle>
    <identifier type="doi">10.1007/978-3-658-51100-5_27</identifier>
    <enrichment key="OtherSeries">Informatik aktuell</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="OtherSeries">BVM Workshop</enrichment>
    <licence>Keine Lizenz - Es gilt das deutsche Urheberrecht: § 53 UrhG</licence>
    <author>Leonard Klausmann</author>
    <author>Tobias Rueckert</author>
    <author>David Rauber</author>
    <author>Raphaela Maerkl</author>
    <author>Suemeyye R. Yildiran</author>
    <author>Max Gutbrod</author>
    <author>Christoph Palm</author>
    <subject>
      <language>deu</language>
      <type>swd</type>
      <value>Bildverarbeitung</value>
    </subject>
    <collection role="institutes" number="RCBE">Regensburg Center of Biomedical Engineering - RCBE</collection>
    <collection role="institutes" number="RCHST">Regensburg Center of Health Sciences and Technology - RCHST</collection>
    <collection role="persons" number="palmremic">Palm, Christoph (Prof. Dr.) - ReMIC</collection>
    <collection role="institutes" number="">Labor Regensburg Medical Image Computing (ReMIC)</collection>
    <collection role="DFGFachsystematik" number="1">Ingenieurwissenschaften</collection>
    <collection role="othforschungsschwerpunkt" number="">Gesundheit und Soziales</collection>
  </doc>
</export-example>
