Life Sciences and Ethics
The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the performance of the AI algorithm (AI-EoE-EREFS) improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level.
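As an illustration of the auxiliary-branch idea described above, the following is a minimal PyTorch sketch of a classifier with a main binary EoE head plus one auxiliary head per EREFS feature. The backbone choice, feature names, and loss weighting are assumptions for demonstration, not the published AI-EoE-EREFS model.

```python
# Minimal sketch (not the published AI-EoE-EREFS model): a shared CNN backbone
# with a binary EoE head plus one auxiliary head per EREFS feature.
import torch
import torch.nn as nn
from torchvision import models

EREFS_FEATURES = ["edema", "rings", "exudates", "furrows", "strictures"]  # assumed naming

class EoEMultiHead(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)      # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                   # reuse convolutional features only
        self.backbone = backbone
        self.eoe_head = nn.Linear(feat_dim, 1)        # main EoE vs. normal branch
        self.aux_heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, 1) for name in EREFS_FEATURES}
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.eoe_head(feats), {n: h(feats) for n, h in self.aux_heads.items()}

def multitask_loss(eoe_logit, aux_logits, eoe_target, aux_targets, aux_weight=0.5):
    """Binary cross-entropy on the main branch plus weighted auxiliary EREFS terms."""
    bce = nn.functional.binary_cross_entropy_with_logits
    loss = bce(eoe_logit.squeeze(1), eoe_target.float())
    for name, logit in aux_logits.items():
        loss = loss + aux_weight * bce(logit.squeeze(1), aux_targets[name].float())
    return loss
```

A training step would forward a batch through the model, compute this combined loss, and backpropagate as usual; the auxiliary terms push the shared features to encode the individual EREFS findings.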
Aims
Eosinophilic esophagitis (EoE) is easily missed during endoscopy, either because physicians are not familiar with its endoscopic features or the morphologic changes are too subtle. In this preliminary paper, we present the first attempt to detect EoE in endoscopic white light (WL) images using a deep learning network (EoE-AI).
Methods
401 WL images of eosinophilic esophagitis and 871 WL images of normal esophageal mucosa were evaluated. All images were assessed for the Endoscopic Reference Score (EREFS) (edema, rings, exudates, furrows, strictures). Images with strictures were excluded. EoE was defined as the presence of at least 15 eosinophils per high power field on biopsy. A convolutional neural network based on the ResNet architecture with several five-fold cross-validation runs was used. Adding auxiliary EREFS-classification branches to the neural network allowed the inclusion of the scores as optimization criteria during training. EoE-AI was evaluated for sensitivity, specificity, and F1-score. In addition, two human endoscopists evaluated the images.
Results
EoE-AI showed a mean sensitivity, specificity, and F1-score of 0.759, 0.976, and 0.834, respectively, averaged over the five distinct cross-validation runs. With the EREFS-augmented architecture, a mean sensitivity, specificity, and F1-score of 0.848, 0.945, and 0.861, respectively, could be demonstrated. In comparison, the two human endoscopists had an average sensitivity, specificity, and F1-score of 0.718, 0.958, and 0.793.
Conclusions
To the best of our knowledge, this is the first application of deep learning to endoscopic images of EoE, which were also assessed after augmentation with the EREFS score. The next step is the evaluation of EoE-AI using an external dataset. We then plan to assess the EoE-AI tool on endoscopic videos, and also in real time. This preliminary work is encouraging regarding the ability of AI to enhance physician detection of EoE, and potentially to perform a true "optical biopsy", but more work is needed.
In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were “instrument segmentation”, “instrument tracking”, “surgical tool segmentation”, and “surgical tool tracking”, resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.
Resting motor threshold and magnetic field output of the figure-of-8 and the double-cone coil
(2020)
The use of the double-cone (DC) coil in transcranial magnetic stimulation (TMS) is promoted with the notion that the DC coil enables stimulation of deeper brain areas, in contrast to conventional figure-of-8 (Fo8) coils. However, systematic comparisons of these two coil types with respect to the spatial distribution of the magnetic field output and to the induced activity in superficial and deeper brain areas are limited. Resting motor thresholds of the left and right first dorsal interosseous (FDI) and tibialis anterior (TA) were determined with the DC and the Fo8 coil in 17 healthy subjects. Coils were oriented over the corresponding motor area at an angle of 45 degrees for the hand area, with the handle pointing in the posterior direction, and in the medio-lateral direction for the leg area. Physical measurements were performed on an automatic gantry table using a Gaussmeter. The resting motor threshold was higher for the leg area than for the hand area, and higher for the Fo8 than for the DC coil. The muscle-by-coil interaction was also significant, with larger differences between the leg and hand areas for the Fo8 coil (about 27%) than for the DC coil (about 15%). The magnetic field strength was higher for the DC coil than for the Fo8 coil. In summary, the DC coil produces a higher magnetic field with a greater depth of penetration than the figure-of-8 coil.
Aims
Villous atrophy (VA) is an endoscopic finding of celiac disease (CD) that can easily be missed if the pretest probability is low. In this study, we aimed to develop an artificial intelligence (AI) algorithm for the detection of villous atrophy on endoscopic images.
Methods
858 images from 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa were used for training and internal validation of an AI algorithm (ResNet18). A separate dataset was used for external validation, as well as for determining the detection performance of experts, trainees, and trainees with AI support. According to the AI consultation distribution, images were stratified into "easy" and "difficult".
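As a rough illustration of how a ResNet18 binary classifier for this kind of image task is typically set up (a sketch only, with assumed paths and hyperparameters, not the published training pipeline):

```python
# Sketch only: fine-tuning a ResNet-18 for binary villous-atrophy classification.
# Folder names, image size, and hyperparameters are assumptions for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)   # hypothetical path: VA vs. normal folders
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)                   # two classes: VA vs. normal mucosa

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:                                  # one training epoch shown
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```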
Results
Internal validation showed 82%, 85%, and 84% for sensitivity, specificity, and accuracy, respectively; external validation showed 90%, 76%, and 84%. The algorithm was significantly more sensitive and accurate than trainees, trainees with AI support, and experts in endoscopy. AI support in trainees was associated with significantly improved performance. While all endoscopists showed significantly lower detection rates for "difficult" images, AI performance remained stable.
Conclusions
The algorithm outperformed trainees and experts in sensitivity and accuracy for VA detection. The significant improvement with AI support suggests a potential clinical benefit. Stable performance of the algorithm in “easy” and “difficult” test images may indicate an advantage in macroscopically challenging cases.
Clinical setting
Third-space procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex minimally invasive techniques with an elevated risk of operator-dependent adverse events such as bleeding and perforation. This risk arises from accidental dissection into the muscle layer or through submucosal blood vessels, as the submucosal cutting plane within the expanding resection site is not always apparent. Deep learning algorithms have shown considerable potential for the detection and characterization of gastrointestinal lesions. So-called AI clinical decision support solutions (AI-CDSS) are commercially available for polyp detection during colonoscopy. Until now, these computer programs have concentrated on diagnostics, whereas an AI-CDSS for interventional endoscopy has not yet been introduced. We aimed to develop an AI-CDSS ("Smart ESD") for real-time intra-procedural detection and delineation of blood vessels, tissue structures, and endoscopic instruments during third-space endoscopic procedures.
Characteristics of Smart ESD
An AI-CDSS was developed that delineates blood vessels, tissue structures, and endoscopic instruments during third-space endoscopy in real time. The output can be displayed as an overlay over the endoscopic image with different modes of visualization, such as a color-coded semi-transparent area overlay or border tracing (demonstration video). In this way, the optimal layer for dissection can be visualized, which lies just above or directly at the muscle layer, depending on the applied technique (ESD or POEM). Furthermore, relevant blood vessels (thickness > 1 mm) are delineated. Spatial proximity between the electrosurgical knife and a blood vessel triggers a warning signal. With this guidance system, inadvertent dissection through blood vessels could be averted.
Technical specifications
A DeepLabv3+ neural network architecture with KSAC and a 101-layer ResNeSt backbone was used for the development of Smart ESD. It was trained and validated with 2565 annotated still images from 27 full-length third-space endoscopic videos. The annotation classes were blood vessel, submucosal layer, muscle layer, electrosurgical knife, and endoscopic instrument shaft. A test on a separate data set yielded an intersection over union (IoU) of 68%, a Dice score of 80%, and a pixel accuracy of 87%, demonstrating a high overlap between expert and AI segmentation. Further experiments on standardized video clips showed a mean vessel detection rate (VDR) of 85%, with values of 92%, 70%, and 95% for POEM, rectal ESD, and esophageal ESD, respectively. False-positive detections occurred 0.75 times per minute. Seven of nine vessels that caused intraprocedural bleeding were detected by the algorithm, as were both vessels that required hemostasis via hemostatic forceps.
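For reference, overlap metrics of this kind can be computed from binary masks roughly as in the following sketch (a generic illustration, not the evaluation code used for Smart ESD):

```python
# Generic sketch of the overlap metrics reported above (IoU, Dice, pixel accuracy)
# for binary segmentation masks; not the evaluation code used for Smart ESD.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice score: 2*intersection / (|pred| + |target|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels classified identically."""
    return float((pred == target).mean())

# Example with random masks standing in for expert and AI segmentations:
rng = np.random.default_rng(0)
p = rng.random((256, 256)) > 0.5
t = rng.random((256, 256)) > 0.5
print(iou(p, t), dice(p, t), pixel_accuracy(p, t))
```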
Future perspectives
Smart ESD performed well for vessel and tissue detection and delineation on still images, as well as on video clips. During a live demonstration in the endoscopy suite, the clinical applicability of the innovation was examined. The lag time for processing the live endoscopic image was too short to be visually perceptible for the interventionist. Even though the algorithm could not be applied during actual dissection by the interventionist, Smart ESD appeared readily deployable based on visual assessment by ESD experts. Therefore, we plan to conduct a clinical trial in order to obtain CE certification of the algorithm. This new technology may improve procedural safety and speed, as well as the training of modern minimally invasive endoscopic resection techniques.
Aims
AI has shown great potential in assisting endoscopists in diagnostics; however, its role in therapeutic endoscopy remains unclear. Endoscopic submucosal dissection (ESD) is a technically demanding intervention with a slow learning curve and relevant risks such as bleeding and perforation. Therefore, we aimed to develop an algorithm for the real-time detection and delineation of relevant structures during third-space endoscopy.
Methods
5470 still images from 59 full-length videos (47 ESD, 12 POEM) were annotated. 179,681 additional unlabeled images were added to the training dataset. A DeepLabv3+ neural network architecture was then trained with the ECMT semi-supervised algorithm (under review elsewhere). Evaluation of vessel detection was performed on a dataset of 101 standardized video clips from 15 separate third-space endoscopy videos with 200 predefined blood vessels.
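The ECMT method itself is not described here; purely as a generic illustration of how unlabeled frames can contribute to training a segmentation network, the following sketch shows a simple mean-teacher-style consistency term. All names, the class count, and the loss weighting are assumptions, not the authors' method.

```python
# Generic mean-teacher-style consistency sketch for semi-supervised segmentation.
# This is NOT the ECMT algorithm referenced above; it only illustrates how
# unlabeled images can enter the loss alongside labeled ones.
import copy
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

student = deeplabv3_resnet50(weights=None, num_classes=5)   # class count assumed
teacher = copy.deepcopy(student)                            # EMA copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(teacher, student, momentum=0.99):
    """Call after each optimizer step to keep the teacher as a slow average of the student."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.data.mul_(momentum).add_(sp.data, alpha=1.0 - momentum)

def training_step(labeled_img, labeled_mask, unlabeled_img, lambda_u=0.5):
    # Supervised term on annotated frames.
    logits = student(labeled_img)["out"]
    loss = F.cross_entropy(logits, labeled_mask)
    # Consistency term: student should match the teacher's prediction on unlabeled frames.
    with torch.no_grad():
        teacher_prob = F.softmax(teacher(unlabeled_img)["out"], dim=1)
    student_logprob = F.log_softmax(student(unlabeled_img)["out"], dim=1)
    loss = loss + lambda_u * F.kl_div(student_logprob, teacher_prob, reduction="batchmean")
    return loss
```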
Results
Internal validation yielded an overall mean Dice score of 85% (68% for blood vessels, 86% for submucosal layer, 88% for muscle layer). On the video test data, the overall vessel detection rate (VDR) was 94% (96% for ESD, 74% for POEM). The median overall vessel detection time (VDT) was 0.32 sec (0.3 sec for ESD, 0.62 sec for POEM).
Conclusions
Evaluation of the developed algorithm on a video test dataset showed high VDR and quick VDT, especially for ESD. Further research will focus on a possible clinical benefit of the AI application for VDR and VDT during third-space endoscopy.
Aims
Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel.
Methods
Patients with small intestinal histology compatible with CD (MARSH classification I-III) were extracted retrospectively from the database of Augsburg University Hospital. They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, MARSH III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white light images were divided into five equal-sized subsets; the images of one patient were never split across several subsets. A ResNet-50 model was trained with the images from four subsets and then validated with the remaining subset. This process was repeated for each subset, so that each subset was validated once. Sensitivity, specificity, and the F1-score (harmonic mean of precision and recall) of the algorithm were determined.
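A patient-level five-fold split of this kind can be expressed, for example, with scikit-learn's GroupKFold; the following is a minimal sketch with tiny synthetic data standing in for the endoscopic images, not the authors' code:

```python
# Sketch: five-fold cross-validation that keeps all images of one patient
# in the same fold, as described above. The data below are synthetic placeholders.
from sklearn.model_selection import GroupKFold

# Tiny synthetic example: 12 images from 6 patients (2 images each).
image_paths = [f"img_{i:02d}.png" for i in range(12)]
labels      = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]   # 1 = MARSH III, 0 = normal mucosa
patient_ids = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(gkf.split(image_paths, labels, groups=patient_ids)):
    # All images of a given patient end up entirely in training or entirely in validation.
    # In the real pipeline, a ResNet-50 would be trained on the training indices here.
    print(f"fold {fold}: train={train_idx.tolist()} val={val_idx.tolist()}")
```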
Results
The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference.
Conclusions
We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external data set, as well as in the detection of CD in real time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD and may ultimately be able to perform a true optical biopsy in real time.
Background and aims
Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance.
Methods
A dataset of 858 endoscopic images from 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet-18 deep learning model to detect villous atrophy. An external data set was used to test the algorithm, in addition to six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm's result during the test. From their consultation distribution, test images were stratified into "easy" and "difficult" and used for a stratified performance analysis.
Results
External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63%, 72%, and 67%, while the corresponding values in experts were 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for "difficult" images, the performance of the AI algorithm was stable.
Conclusion
In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on “difficult” images suggests a further positive add-on effect in challenging cases.
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must also be addressed in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially when diagnostic support is the goal. To this end, the black-box nature of deep learning techniques must be opened up before their promising results can be transferred into clinical practice. We therefore investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in patients diagnosed with Barrett's esophagus. Four convolutional neural network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We show that saliency attributions match best with the experts' manual delineations, and that there is a moderate to high correlation between a model's sensitivity and the human-computer agreement: the higher the model's sensitivity, the stronger the agreement between human and computational segmentations. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
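As a rough sketch of how such attribution maps can be computed and compared with expert annotations (using the captum library; the model, image, and expert mask below are placeholders, and the thresholding rule is an assumption, not the paper's protocol):

```python
# Sketch: computing saliency and integrated-gradients attributions with captum
# and comparing them with an expert annotation mask. Inputs are placeholders;
# the binarization threshold is an assumption, not the paper's protocol.
import torch
import numpy as np
from torchvision import models
from captum.attr import Saliency, IntegratedGradients

model = models.resnet50(weights=None)   # stand-in for one of the four analyzed CNNs
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)    # placeholder endoscopic image
expert_mask = np.zeros((224, 224), dtype=bool)             # placeholder expert delineation
target_class = 1                                           # "cancerous" class index (assumed)

saliency_map = Saliency(model).attribute(image, target=target_class)
ig_map = IntegratedGradients(model).attribute(image, target=target_class, n_steps=50)

def binarize(attr, quantile=0.9):
    """Keep the top 10% most attributed pixels (assumed rule)."""
    a = attr.abs().sum(dim=1).squeeze(0).detach().numpy()   # aggregate over color channels
    return a >= np.quantile(a, quantile)

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

print("Dice(saliency, expert):", dice(binarize(saliency_map), expert_mask))
print("Dice(integrated gradients, expert):", dice(binarize(ig_map), expert_mask))
```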
Barrett's esophagus denotes a disorder of the digestive system that affects the mucosal cells of the esophagus, is associated with reflux, and can progress to esophageal adenocarcinoma if not treated in its initial stages. Fast and reliable computer-aided diagnosis is therefore highly desirable. Such approaches, however, usually suffer from imbalanced datasets, which can be addressed with Generative Adversarial Networks (GANs). These techniques generate realistic images based on observed samples, albeit at the cost of a careful selection of their hyperparameters. Many works have employed a class of nature-inspired algorithms called metaheuristics to tackle this problem for distinct deep learning approaches. The main contribution of this paper is therefore to introduce metaheuristic techniques for fine-tuning GANs in the context of Barrett's esophagus identification, and to investigate the feasibility of generating high-quality synthetic images for assisted early-cancer identification.
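To illustrate the general idea of metaheuristic hyperparameter fine-tuning, the following sketch runs differential evolution (one simple nature-inspired metaheuristic, not necessarily the one used in the paper) over a stubbed objective; in the real setting the objective would train a GAN with the candidate hyperparameters and return an image-quality score. All names and search ranges are assumptions.

```python
# Sketch: metaheuristic fine-tuning of GAN hyperparameters.
# The objective below is a stub; in the actual setting it would train a GAN
# with the given hyperparameters and return an image-quality score (e.g. FID).
import numpy as np
from scipy.optimize import differential_evolution

def gan_objective(hparams: np.ndarray) -> float:
    """Placeholder for: train a GAN with these hyperparameters, return FID (lower is better)."""
    log_lr, beta1, latent_dim = hparams
    # Stub score so the sketch runs end to end; replace with real GAN training + FID.
    return (log_lr + 3.5) ** 2 + (beta1 - 0.5) ** 2 + 0.001 * latent_dim

bounds = [
    (-5.0, -2.0),   # log10 of the learning rate (assumed range)
    (0.0, 0.99),    # Adam beta1 (assumed range)
    (32, 256),      # latent-vector dimensionality (assumed range)
]

# Differential evolution stands in here for the family of metaheuristics explored in the paper.
result = differential_evolution(gan_objective, bounds, maxiter=20, seed=0)
print("best hyperparameters:", result.x, "score:", result.fun)
```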
BACKGROUND:
Due to their corrugated profile, dragonfly wings have special aerodynamic characteristics during flying and gliding.
OBJECTIVE:
The aim of this study was to create a realistic 3D model of a dragonfly wing captured with a high-resolution micro-CT. To represent geometry changes in span and chord length and their aerodynamic effects, numerical investigations were carried out at different wing positions.
METHODS:
The forewing of a Camacinia gigantea was captured using a micro-CT. After the wing was adapted, an error-free 3D model resulted. The wing was cut every 5 mm and 2D numerical analyses were conducted in Fluent® 2020 R2 (ANSYS, Inc., Canonsburg, PA, USA).
RESULTS:
The highest lift coefficient, as well as the highest lift-to-drag ratio, resulted at 0 mm and an angle of attack (AOA) of 5°. At AOAs of 10° or 15°, the flow around the wing stalled and a Kármán vortex street became visible behind the wing.
CONCLUSIONS:
The velocity is higher on the upper side of the wing than on the lower side, while the pressure acts vice versa. Due to the recirculation zones that form in the valleys of the corrugation pattern, the wing resembles the form of an airfoil.
The special wing geometry of dragonflies, consisting of veins and a membrane forming a corrugated profile, leads to special aerodynamic characteristics. To capture the governing flow regimes of a dragonfly wing in detail, a realistic wing model has to be investigated. Therefore, this study aimed to analyze the aerodynamic characteristics of a 3D dragonfly wing reconstructed from a high-resolution micro-CT scan. Afterwards, a spatially highly discretized mesh was generated using the mesh generator CENTAUR™ 14.5.0.2 (CentaurSoft, Austin, TX, US) to finally conduct Computational Fluid Dynamics (CFD) investigations in Fluent® 2020 R2 (ANSYS, Inc., Canonsburg, PA, US). Due to the small dimensions of the wing membrane, only the vein structure of a Camacinia gigantea was captured at a micro-CT voxel size of 7 microns. The membrane was adapted and connected to the vein structure using a Boolean union operation. Inconsistencies occurring after combining the veins and the membrane were corrected using an adapted pymesh script [1]. As an initial study, only one quarter of the wing (the outer wing section) was investigated to reduce the required computational effort. The resulting hybrid mesh, consisting of 10 pseudo-structured prism layers along the wing surface and tetrahedra in the farfield area, has 43 million nodes. The flow around the wing was considered to be incompressible and laminar, using transient calculations. When the flow passes the vein structures, steady vortices occur in the corrugation valleys, leading to recirculation zones. Therefore, the dragonfly wing resembles the profile of an airfoil. This leads to comparable lift coefficients of dragonfly wings and airfoil profiles at significantly reduced structural weight. The reconstructed geometry also included naturally occurring triangular prism-like serrated structures at the leading edge of the wing, which have effects comparable to micro vortex generators and might stabilize the recirculation zones. Further work aims to investigate the aerodynamic properties of a complete dragonfly wing during wing flapping.
Surgical smoke has been a little-discussed topic in the context of the current pandemic. Surgical smoke is generated during the cauterization of tissue with heat-generating devices and consists of 95% water vapor and 5% cellular debris in the form of particulate matter. In-vivo investigations are performed during tracheotomies, where surgical smoke is produced during tissue electrocautery. Furthermore, in-vitro parametric studies are conducted to investigate the particle number and size distribution and the spatial distribution of surgical smoke with the laser light sheet technique. The higher the power of the high-frequency device, the larger the particles and the higher the resulting particle counts. The images taken show the densest smoke at 40 W with artificial saliva. The resulting characteristic size distribution, which may include viruses and bacterial components, confirms that the risk arising from surgical smoke should be considered. Furthermore, the experiments will provide the database for further numerical investigations.
Surgical smoke is generated during the cauterization of tissue with high-frequency (HF) devices and consists of 95% water vapor and 5% cellular debris. When the coagulation tweezers, which are supplied with HF voltage by the HF device, touch tissue, the electric circuit is closed and smoke is generated by the heat. In-vivo investigations are performed during tracheotomies, where surgical smoke is produced during coagulation of tissue. Furthermore, in-vitro parametric studies are conducted to investigate the particle number and size distribution and the spatial distribution of surgical smoke with the laser light sheet technique. With higher power of the HF device, the particles generated are larger and their total number is higher. Adding artificial saliva to the tissue yields even higher particle counts, which the laser light sheet study also confirms. The resulting characteristic size distribution, which may include viruses and bacterial components, confirms that the risk arising from surgical smoke should be considered. Furthermore, the experiments will provide the database for further numerical investigations.
Ergonomic workplaces lead to fewer work-related musculoskeletal disorders and thus fewer sick days. There are various guidelines to help avoid harmful situations. However, these recommendations are often rather crude and neglect the complex interaction of biomechanical loading and psychological stress. This study investigates whether machine learning algorithms can be used to predict mechanical and stress-related muscle activity for a standardized motion. For this purpose, experimental data were collected for trunk movement with and without additional psychological stress. Two different algorithms (XGBoost and TensorFlow) were used to model the experimental data. XGBoost in particular predicted the results very well. By combining it with musculoskeletal models, the method shown here can be used for workplace analysis, but also for the development of real-time feedback systems in real workplace environments.
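A minimal sketch of the kind of regression model mentioned above, predicting a muscle-activity signal from motion features with XGBoost (the synthetic features and target below are invented placeholders, not the study's variables):

```python
# Sketch: predicting a muscle-activity signal from motion features with XGBoost.
# The synthetic data stands in for the experimental trunk-movement recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))                                           # e.g. joint angles, velocities, stress marker
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)     # synthetic EMG-like target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```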
BACKGROUND:
Tracheobronchial mucus plays a crucial role in pulmonary function by providing protection against inhaled pathogens. Due to its composition of water, mucins, and other biomolecules, it has a complex viscoelastic rheological behavior. This interplay of both viscous and elastic properties has not been fully described yet. In this study, we characterize the rheology of human mucus using oscillatory and transient tests. Based on the transient tests, we describe the material behavior of mucus under stress and strain loading by mathematical models.
METHODS:
Mucus samples were collected from clinically used endotracheal tubes. For rheological characterization, oscillatory amplitude-sweep and frequency-sweep tests, and transient creep-recovery and stress-relaxation tests were performed. The results of the transient test were approximated using the Burgers model, the Weibull distribution, and the six-element Maxwell model. The three-dimensional microstructure of the tracheobronchial mucus was visualized using scanning electron microscope imaging.
RESULTS:
Amplitude-sweep tests showed storage moduli ranging from 0.1 Pa to 10,000 Pa and a median critical strain of 4%. In frequency-sweep tests, storage and loss moduli increased with frequency, with the median storage modulus ranging from 10 Pa to 30 Pa and the median loss modulus from 5 Pa to 14 Pa. The Burgers model appropriately approximates the viscoelastic behavior of tracheobronchial mucus during a constant stress load (R² of 0.99), and the Weibull distribution is suitable for predicting the recovery of the sample after removal of this stress (R² of 0.99). The approximation of the stress-relaxation test data by a six-element Maxwell model shows a larger fit error (R² of 0.91).
CONCLUSIONS:
This study provides a detailed description of all process steps of characterizing the rheology of tracheobronchial mucus, including sample collection, microstructure visualization, and rheological investigation. Based on this characterization, we provide mathematical models of the rheological behavior of tracheobronchial mucus. These can now be used to simulate mucus flow in the respiratory system through numerical approaches.
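As an illustration of the kind of model fit described above, the following sketch fits the four-parameter Burgers creep compliance to synthetic creep data with SciPy; the data, applied stress, and starting values are placeholders, not the measured mucus curves.

```python
# Sketch: fitting the Burgers (four-element) creep model
#   strain(t) = sigma0 * (1/E1 + t/eta1 + (1/E2) * (1 - exp(-E2*t/eta2)))
# to creep data with SciPy. The synthetic data below stands in for the
# measured mucus creep curves; parameter values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

SIGMA0 = 1.0   # applied constant stress in Pa (assumed)

def burgers_creep(t, E1, eta1, E2, eta2):
    return SIGMA0 * (1.0 / E1 + t / eta1 + (1.0 / E2) * (1.0 - np.exp(-E2 * t / eta2)))

# Synthetic "measurement" generated from known parameters plus noise.
t = np.linspace(0.0, 60.0, 200)
true_params = (20.0, 500.0, 10.0, 50.0)          # E1 [Pa], eta1 [Pa*s], E2 [Pa], eta2 [Pa*s]
strain = burgers_creep(t, *true_params) + np.random.default_rng(1).normal(0, 0.002, t.size)

popt, _ = curve_fit(burgers_creep, t, strain, p0=(10.0, 100.0, 10.0, 10.0), maxfev=10000)
residuals = strain - burgers_creep(t, *popt)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((strain - strain.mean())**2)
print("fitted E1, eta1, E2, eta2:", popt, " R^2:", r_squared)
```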
High Spatial Resolution Tomo-PIV of the Nasopharynx Focussing on the Physiological Breathing Cycle
(2022)
Investigations of complex patient-specific flow in the nasopharynx require high-resolution numerical calculations validated by reliable experiments. To build the validation base and the benchmark for computational fluid dynamics, an experimental setup of the nasal airways was developed. The applied optical measurement technique of tomo-PIV supplies information on the governing flow field in three dimensions. This paper presents tomo-PIV measurements of the highly complex patient-specific geometry of the human trachea. A computed tomography scan of a person's head forms the basis of the experimental silicone model of the nasal airways. An optimised approach for precise refractive index matching avoids optical distortions even in highly complex, non-free-of-sight 3D geometries. A linear-motor-driven pump generates breathing scenarios based on measured breathing cycles. Adjusting the CCD cameras' double-frame-rate PIV-Δt enables the detailed analysis of flow structures during different cycle phases. Merging regions of interest enables high spatial resolution acquisition of the flow field.
High Spatial Resolution Tomo-PIV of the Trachea Focussing on the Physiological Breathing Cycle
(2023)
Investigations of complex patient-specific flow in the nasopharynx require high-resolution numerical calculations validated by reliable experiments. To build the validation base and the benchmark for computational fluid dynamics, an experimental setup of the nasal airways was developed. The applied optical measurement technique of tomo-PIV supplies information on the governing flow field in three dimensions.
This paper presents tomo-PIV measurements of the highly complex patient-specific geometry of the human trachea. A computed tomography scan of a person's head forms the basis of the experimental silicone model of the nasal airways. An optimised approach for precise refractive index matching avoids optical distortions even in highly complex, non-free-of-sight 3D geometries. A linear-motor-driven pump generates breathing scenarios based on measured breathing cycles. Adjusting the CCD cameras' double-frame-rate PIV-Δt enables the detailed analysis of flow structures during different cycle phases. Merging regions of interest enables high spatial resolution acquisition of the flow field.