Technische Fakultät
With the mass digitisation of paintings, the manual examination and understanding of individual images has become a cumbersome task. Automatic methods based on computer vision and machine learning are therefore extremely useful for humanities experts, who are generally interested in understanding the origin of objects, iconographies, and narratives in artworks. Over the last decade, digital humanities has become a predominant field for understanding and connecting the past, present, and future via artworks in digitised form as text and images. The aim is to provide quicker access to information, uncover hidden trends, and validate theoretical findings on large data collections. Understanding artworks is challenging in digital humanities due to their subjective nature and the lack of annotated data. Recently, deep learning-based methods have shown commendable performance on real-world images. One simple approach is to train algorithms on existing real-world photographs and test them on artwork images. Since artwork images follow a very different data distribution, however, these algorithms often fail to generalise, which is commonly referred to as the domain shift problem.
This thesis develops several scene understanding methods from a digital humanities perspective, targeting the domains of Art history and Christian and Classical archaeology. The focus lies on (a) developing methods for character, iconography, and object recognition, and (b) methods beyond recognition, especially pose estimation and novel image compositions. Particular attention is given to the methods beyond recognition, where theoretical concepts from Art history are converted into computational methods for understanding and linking iconography. For recognition, a two-step style transfer learning algorithm is first developed for character recognition in Art history. This work is extended to iconography recognition, with a detailed analysis of the impact of styles using supervised and self-supervised models. To mitigate the scarcity of annotations, a one-shot object detection pipeline with advanced augmentations such as context and crop is developed for heterogeneous artworks.
For the methods beyond recognition, the task of linking narratives in Greek vase paintings is first considered using pose estimation with as few as 1500 pose annotations. The proposed two-step style transfer learning for recognition is extended to enhance pose estimation and to build a pose-based image retrieval system that links narratives in Classical archaeology. Finally, a novel computational algorithm is developed, the Image Composition Canvas (ICC), an operationalisation of the compositional analysis of paintings introduced by Hetzer and extended by Max Imdahl for understanding artworks. Imdahl's concept is realised as a mid-level feature extraction method and extended to an image retrieval system (ICC++) with explainable features. The proposed mid-level composition features are extremely lightweight and outperform the existing state of the art, which only uses detected pose keypoints to link images. Detailed qualitative and quantitative results show the potential to further improve the image composition methods with more complex composition features. This work therefore builds new constructs and proofs of concept for artwork scene understanding tasks, including recognition and beyond, allowing a detailed understanding of styles for domain adaptation from both the digital humanities and computer vision perspectives.
Abstract
When a container filled with granular material is subjected to vertical vibration in the presence of gravity, under certain conditions a non-monotonic density profile can be observed. This effect, which is characteristic of dissipative granular gases, has been termed the "floating cluster regime" or "granular Leidenfrost effect". Here, we study the behavior of vibro-agitated granular matter in the absence of gravity and identify a corresponding stationary state of the granulate; that is, we provide experimental evidence of the granular Leidenfrost effect under conditions of weightlessness.
Abstract
The study of processes characterized by an impulsive nature (i.e., impacts) is of great interest in both physics and engineering: in the geotechnical field, for instance, their effect on the interaction between soil and structures needs to be investigated. The present work aims at validating, by means of two-dimensional finite element simulations, a force-calibration methodology that uses results from three-dimensional discrete element analysis to predict the stress at the base of a granular bed, retained by a movable wall, when the system is hit by a projectile. To approach this problem, the low-velocity impact has been modeled as a point impulsive force on a granular packing.
Abstract
When a container filled with granular material is subjected to sinusoidal vibration in microgravity, depending on the amplitude of the oscillation, the granulate may exhibit one of two distinct dynamical modes: at low amplitude, a gas-like state is observed, where the particles are relatively homogeneously distributed within the container, almost independently of the phase of the oscillation. In contrast, at large amplitude, collective motion of the particles is favoured, termed the collect-and-collide regime. The two regimes exhibit very different dissipation characteristics. A recent model predicts that the regimes are separated by a sharp transition at a critical amplitude of the vibration. Here we confirm this prediction of a sharp transition, as well as the numerical value of the critical amplitude, by means of experiments performed under conditions of weightlessness.
Abstract
Time–frequency representations of speech signals provide dynamic information about how the frequency components change with time. In order to process this information, deep learning models with convolution layers can be used to obtain feature maps. In many speech processing applications, the time–frequency representations are obtained by applying the short-time Fourier transform, and single-channel input tensors are used to feed the models. However, this may limit the potential of convolutional networks to learn different representations of the audio signal. In this paper, we propose a methodology to combine three different time–frequency representations of the signals, obtained by computing the continuous wavelet transform, Mel-spectrograms, and Gammatone spectrograms, into 3-channel spectrograms, and to analyze speech in two different applications: (1) automatic detection of speech deficits in cochlear implant users and (2) phoneme class recognition to extract phone-attribute features. For this, two different deep learning-based models are considered: convolutional neural networks and recurrent neural networks with convolution layers.
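As a rough sketch of how such a 3-channel input could be assembled (illustrative only; the Gammatone channel is approximated here by a second mel variant, since the Python libraries used below do not ship a Gammatone filterbank, and the exact parameters in the paper may differ):

```python
# Minimal sketch: build a 3-channel time-frequency input from one audio signal.
import numpy as np
import librosa
import pywt
from skimage.transform import resize

y, sr = librosa.load(librosa.ex("trumpet"), sr=16000)

# Channel 1: Mel spectrogram (log scale).
mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))

# Channel 2: continuous wavelet transform magnitude (Morlet wavelet),
# computed on a decimated signal to keep the toy example cheap.
coeffs, _ = pywt.cwt(y[::16], scales=np.arange(1, 65), wavelet="morl")
cwt = np.abs(coeffs)

# Channel 3: placeholder for the Gammatone spectrogram (here: HTK-mel variant).
gamma = librosa.power_to_db(
    librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, htk=True))

# Resize all channels to a common shape and stack into (H, W, 3),
# ready to feed a CNN that expects image-like tensors.
shape = mel.shape
stacked = np.stack([resize(c, shape) for c in (mel, cwt, gamma)], axis=-1)
print(stacked.shape)  # (64, frames, 3)
```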
Abstract
Forming processes of continuous fiber reinforced thermoplastic materials are often limited to high-volume production due to the high costs of tooling and processing machines. This study suggests the combined use of a cold, simple tool and high forming speeds to reduce tooling and processing costs and to enable the use of common stamping machines. Half-sphere samples are produced from single- and two-layer polypropylene and glass fiber organo-sheets in a custom-built drop tower and analyzed for their geometry, degree of re-consolidation, surface quality and potential fiber damage using a variety of microscopy techniques. While only mediocre degrees of re-consolidation and limited surface qualities can be achieved with the combination of a cold tool and state-of-the-art forming speeds of 0–0.5 m/s, a higher forming speed of 3 m/s vastly improves surface quality and the degree of re-consolidation without any detectable fiber damage. This effect is more pronounced in the dual-layer material. Extensive knowledge of the forming behavior of continuous fiber reinforced thermoplastics at high cooling rates and high speeds of deformation is required for sufficient process control, and future studies need to further elaborate and quantify the influencing factors and limits of high-speed forming of continuous fiber reinforced thermoplastics.
Abstract
A common approach to controller synthesis for hybrid systems is to first establish a discrete-event abstraction and then to use methods from supervisory control theory to synthesise a controller. In this paper, we consider behavioural abstractions of hybrid systems with a prescribed discrete-event input/output interface. We discuss a family of abstractions based on so-called experiments, which consist of samples from the external behaviour of the hybrid system. The special feature of our setting is that the accuracy of the abstraction can be carefully adapted to suit the particular control problem at hand. Technically, this is implemented as an iteration in which we alternate trial control synthesis with abstraction refinement, as sketched below. While localising refinement to where it is intuitively needed, we can still formally establish that the overall iteration will solve the control problem, provided that an abstraction-based solution exists at all.
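A toy sketch of this alternation in Python (purely illustrative; the abstraction, the experiments and the synthesis procedure are mocked by trivial stand-ins, and none of the names correspond to an actual tool API):

```python
# Toy sketch of alternating trial synthesis with abstraction refinement.
def build_abstraction(resolution: int) -> int:
    # Stand-in: a finer resolution corresponds to sampling more experiments.
    return resolution

def synthesise(abstraction: int, required: int):
    # Stand-in for supervisory control synthesis: succeeds once the
    # abstraction is accurate enough for the control problem at hand.
    return f"supervisor@{abstraction}" if abstraction >= required else None

def synthesise_with_refinement(required: int = 8) -> str:
    resolution = 1
    while True:
        supervisor = synthesise(build_abstraction(resolution), required)
        if supervisor is not None:
            return supervisor  # also solves the concrete control problem
        resolution *= 2  # refine only as far as the problem demands

print(synthesise_with_refinement())  # -> supervisor@8
```

Provided an abstraction-based solution exists at some finite accuracy, the loop terminates; this mirrors the guarantee established formally in the paper.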
Abstract
The correct choice of interface conditions and effective parameters for coupled macroscale free-flow and porous-medium models is crucial for a complete mathematical description of the problem under consideration and for accurate numerical simulation of applications. We consider single-fluid-phase systems described by the Stokes–Darcy model. Different sets of coupling conditions for this model are available. However, the choice of these conditions and effective model parameters is often arbitrary. We use large-scale lattice Boltzmann simulations to validate coupling conditions by comparison of the macroscale simulations against pore-scale resolved models. We analyse three settings (lid-driven cavity over a porous bed, infiltration problem and general filtration problem) with different geometrical configurations (channelised and staggered distributions of solid grains) and different sets of interface conditions. Effective parameters for the macroscale models (permeability tensor, boundary layer constants) are computed numerically for each geometrical configuration. Numerical simulation results demonstrate the sensitivity of the coupled Stokes–Darcy problem to the location of the sharp fluid–porous interface, the effective model parameters and the interface conditions.
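For orientation, one classical set of coupling conditions reads as follows (a sketch only: the paper compares several alternative condition sets, and signs depend on the orientation of the unit normal pointing across the interface):

```latex
% Bulk models (single fluid phase):
\begin{gather*}
  -\nabla\cdot\mathbf{T}(\mathbf{v},p) = \mathbf{f}, \qquad
  \nabla\cdot\mathbf{v} = 0, \qquad
  \mathbf{T}(\mathbf{v},p) = \mu\bigl(\nabla\mathbf{v}+\nabla\mathbf{v}^{\top}\bigr) - p\,\mathbf{I}
  \quad \text{(Stokes)},\\
  \mathbf{v}_{\mathrm{pm}} = -\frac{\mathbf{K}}{\mu}\,\nabla p_{\mathrm{pm}}, \qquad
  \nabla\cdot\mathbf{v}_{\mathrm{pm}} = 0
  \quad \text{(Darcy)}.
\end{gather*}
% One classical set of conditions on the sharp fluid--porous interface:
\begin{align*}
  \mathbf{v}\cdot\mathbf{n} &= \mathbf{v}_{\mathrm{pm}}\cdot\mathbf{n}
    && \text{(conservation of mass)},\\
  -\mathbf{n}\cdot\mathbf{T}(\mathbf{v},p)\,\mathbf{n} &= p_{\mathrm{pm}}
    && \text{(balance of normal stresses)},\\
  \mathbf{v}\cdot\boldsymbol{\tau} &=
    -\frac{\sqrt{K}}{\alpha_{\mathrm{BJ}}}
    \bigl(\bigl(\nabla\mathbf{v}+\nabla\mathbf{v}^{\top}\bigr)\mathbf{n}\bigr)\cdot\boldsymbol{\tau}
    && \text{(Beavers--Joseph--Saffman)},
\end{align*}
% with permeability K, dynamic viscosity \mu, and the Beavers--Joseph constant
% \alpha_{BJ}, a boundary layer constant computed numerically per geometry.
```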
XXL-Computed Tomography (XXL-CT) is able to produce large-scale volume datasets of scanned objects such as crash-tested cars, sea and aircraft containers, or cultural heritage objects. The acquired image data consists of volumes of up to and above 10,000³ voxels, which can amount to many terabytes in file size and can contain many tens of thousands of distinct entities of the depicted objects. In order to extract specific information about these entities in such vast datasets, segmentation or delineation of these parts is necessary. Due to the unknown and varying properties (shapes, densities, materials, compositions) of these objects, as well as interfering acquisition artefacts, classical (automatic) segmentation is usually not feasible. Conversely, a complete manual delineation is error-prone and time-consuming and can only be performed by trained and experienced personnel. Hence, an interactive and partial segmentation of so-called "chunks" into tightly coupled assemblies or sub-assemblies may help the assessment, exploration and understanding of such large-scale volume data. In order to assist users with such a (possibly interactive) instance segmentation during data exploration, we propose to utilize delineation algorithms with an approach derived from flood filling networks. We present first results of a flood filling network implementation adapted to non-destructive testing applications based on large-scale CT of various test objects, as well as real data of an airplane, and describe the adaptations to this domain. Furthermore, we address and discuss segmentation challenges due to acquisition artefacts such as scattered radiation or beam hardening, which reduce data quality and can severely impair interactive segmentation results.
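To make the flood-filling idea concrete, the following toy sketch mimics the inference loop on a 3D volume (the trained network is replaced by a trivial intensity threshold; all names and the field-of-view logic are illustrative, not the implementation used in the paper):

```python
# Minimal sketch of flood-filling-style instance segmentation on a 3D volume.
import numpy as np
from collections import deque

FOV = 9  # cubic field of view, must be odd

def predict_mask(patch: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Stand-in for the network forward pass: object = bright voxels.
    return np.clip(mask + (patch > 0.5).astype(float), 0.0, 1.0)

def flood_fill(volume: np.ndarray, seed: tuple[int, int, int]) -> np.ndarray:
    r = FOV // 2
    seg = np.zeros_like(volume, dtype=float)
    queue, visited = deque([seed]), {seed}
    while queue:
        z, y, x = queue.popleft()
        sl = np.s_[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        if seg[sl].shape != (FOV, FOV, FOV):
            continue  # skip positions whose FOV leaves the volume
        seg[sl] = predict_mask(volume[sl], seg[sl])
        # Re-centre the FOV on confident voxels at the FOV faces.
        for dz, dy, dx in [(r,0,0),(-r,0,0),(0,r,0),(0,-r,0),(0,0,r),(0,0,-r)]:
            nxt = (z + dz, y + dy, x + dx)
            if nxt not in visited and seg[nxt] > 0.9:
                visited.add(nxt)
                queue.append(nxt)
    return seg

vol = np.zeros((40, 40, 40)); vol[10:30, 10:30, 10:30] = 1.0  # toy "part"
print(flood_fill(vol, (20, 20, 20)).sum())
```

In the real pipeline, the predicted mask would additionally be fed back into the network, and an operator could seed new chunks interactively.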
Abstract
Nanocomposite coatings were successfully prepared by electrophoretic deposition of poly(etheretherketone) (PEEK)/graphene oxide (GO) suspensions. The GO flakes developed a large-scale co-continuous morphology with the basal plane mainly aligned with the coating surface. However, the PEEK particles were also found to be wrapped by GO nanosheets when deposited on the stainless steel substrate. Both phenomena, the co-continuous morphology and the wrapping effect, were dependent on the initial GO content in the suspension and influenced the final morphological characteristics of the thermally treated coatings. The PEEK matrix developed a dendritic morphology during its cooling from the molten state because of transcrystallinity that was induced by the incorporation of GO. The preparation of suspensions involved tip ultrasonication (TS) to deagglomerate, disperse, and mill the PEEK particles. A detailed study of the microstructure revealed that TS tended not only to reduce PEEK particle size, but also to promote an elongated shape, favourable for the nanocomposite coatings.
Capsular contracture remains a challenge in plastic surgery and represents one of the most common postoperative complications following alloplastic breast reconstruction. The impact of the surface structure of silicone implants on the foreign body reaction and the behaviour of connective tissue-producing cells has already been discussed. The aim of this study was to investigate different pore sizes of silicone surfaces and their influence on human fibroblasts in an in vitro model. Four different textures (no, fine, medium and coarse texture), produced with the salt-loss technique, were assessed in an in vitro model. Human fibroblasts were seeded onto silicone sheets and evaluated microscopically after 1, 4 and 7 days, together with a viability assay and gene expression analysis. Comparing the growth behaviour and adhesion of the fibroblasts on the four different textures, a dense cell layer, good adhesion and a bridge-building ability of the cells could be observed for the fine and medium textures. Cell number and viability increased over the time course of the experiments on every texture. TGFβ1 was expressed at the lowest level on the fine and medium textures, indicating a trend towards decreased fibrotic activity. For silicone surfaces produced with the salt-loss technique, we were able to show an antifibrotic effect of smaller pore sizes. These findings underline the hypothesis that the implant surface, pore size and pore structure play a key role in preventing capsular contracture.
3D printing has emerged as a vanguard technique of biofabrication to assemble cells, biomaterials and biomolecules in a spatially controlled manner to reproduce native tissues. In this work, gelatin methacrylate (GelMA)/alginate hydrogel scaffolds were obtained by 3D printing, and 14-3-3ε protein was encapsulated in the hydrogel to induce osteogenic differentiation of human adipose-derived mesenchymal stem cells (hASCs). GelMA/alginate-based grid-like structures were printed and remained stable upon photo-crosslinking. The viscosity of the alginate allowed the pore size and strand width to be controlled; a higher viscosity of the hydrogel ink enhanced the printing accuracy. The protein-loaded GelMA/alginate-based hydrogel showed a clear induction of the osteogenic differentiation of hASCs. The results are relevant for future developments of GelMA/alginate for bone tissue engineering, given the positive effect of the 14-3-3ε protein on both cell adhesion and proliferation.
Alginate dialdehyde–gelatin (ADA–GEL) hydrogels have been reported to be suitable matrices for cell encapsulation. In general, the application of ADA–GEL as a bioink has been limited to planar structures due to its low viscosity. In this work, ring-shaped constructs of ADA–GEL hydrogel were fabricated by casting the hydrogel into sacrificial molds that were 3D printed from 9% methylcellulose and 5% gelatin. Dissolution of the supporting structure was observed during the first week of sample incubation. In addition, the effect of different crosslinkers (Ba²⁺ and Ca²⁺) on the physicochemical properties of ADA–GEL and on the behavior of encapsulated MG-63 cells was investigated. It was found that the Ba²⁺-crosslinked network had a more than twice higher storage modulus and a mass decrease to 70% during incubation, compared to 42% in the case of hydrogels crosslinked with Ca²⁺. In addition, a faster increase in cell viability during incubation and earlier cell network formation were observed after Ba²⁺ crosslinking. No negative effects on cell activity due to the use of the sacrificial materials were observed. The approach presented here could be further developed for printing cell-laden ADA–GEL bioink into complex 3D structures.
Abstract
The COVID-19 pandemic has led to an unprecedented world-wide effort to gather data, model, and understand the viral spread. Entire societies and economies are desperate to recover and get back to normality. To this end, however, accurate models are essential that capture both the viral spread and the courses of disease in space and time at reasonable resolution. Here, we combine a spatially resolved county-level infection model for Germany with a memory-based integro-differential approach capable of directly including medical data on the course of disease, which is not possible with traditional SIR-type models. We calibrate our model with data on cumulative detected infections and deaths from the Robert Koch Institute and demonstrate how the model can be used to obtain county- or even city-level estimates of the number of new infections, hospitalisation rates and demands on intensive care units. We believe that the present work may help guide decision makers in locally fine-tuning their expedient response to potential new outbreaks in the near future.
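The memory-based idea can be illustrated by a single balance relation (a sketch, not the paper's exact system): the occupation of a disease state at time t follows from past new infections weighted by an empirically measured course-of-disease kernel,

```latex
% n(t'): rate of new infections; P_X(\tau): probability of being in disease
% state X (e.g., hospitalised, intensive care) a time \tau after infection.
X(t) \;=\; \int_{0}^{\infty} P_X(\tau)\, n(t-\tau)\,\mathrm{d}\tau .
```

Because the kernel $P_X$ can be read off medical data directly, such models capture realistic, non-exponential residence times, which the memoryless transitions of SIR-type models cannot represent.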
Abstract
The present paper takes up the nonlinear initial value problem underlying a preceding work by the author on the dynamics of a single bubble in a highly viscous liquid medium under different pressure impacts. The arising ordinary differential equation is based mainly on the constitutive relation of a second-order liquid, which in particular includes two non-Newtonian material constants. In this article, the significance of these coefficients is mathematically analyzed in detail by proving the existence of stable solutions of the named initial value problem. This is achieved by special transformations of the differential equation at hand and the introduction of appropriate Lyapunov functions. It turns out, in particular, that a combined condition on the non-Newtonian coefficients together with certain restrictions on the external pressure impact is decisive for the validity of the existence results. Furthermore, the convergence speed of solutions is investigated by considering the linearized equation associated with the present initial value problem and by applying a special variant of Gronwall's lemma. The main theoretical result, the aforementioned condition on the non-Newtonian coefficients, is finally compared to real data sets.
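For readers unfamiliar with the technique, the generic shape of such a Lyapunov argument is the following (the standard criterion, not the paper's specific construction):

```latex
% For an equilibrium x* of the autonomous system \dot{x} = f(x),
% a function V satisfying
V(x^{\ast}) = 0, \qquad V(x) > 0 \ \ \text{for } x \neq x^{\ast}, \qquad
\dot{V}(x) = \nabla V(x)\cdot f(x) \le 0 \ \ \text{along solutions}
% certifies stability of x*; strict decrease yields asymptotic stability.
```

Gronwall-type estimates on the linearized equation then translate the decay of $V$ into explicit bounds on the convergence speed, as done in the paper.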
XDose: toward online cross-validation of experimental and computational X-ray dose estimation
(2021)
Abstract
Purpose
As the spectrum of X-ray procedures has broadened for both diagnostic and interventional cases, more attention is being paid to X-ray dose management. While the medical benefit to the patient outweighs the risk of radiation injuries in almost all cases, reproducible studies of organ dose values help to plan preventive measures, benefiting both patients and staff. Dose studies are carried out either retrospectively, experimentally using anthropomorphic phantoms, or computationally. When performed experimentally, it is helpful to combine them with simulations validating the measurements. In this paper, we show how such a dose simulation method, carried out together with actual X-ray experiments, can be realized to obtain reliable organ dose values efficiently.
Methods
A Monte Carlo simulation technique was developed that combines down-sampling and super-resolution techniques for accelerated processing accompanying X-ray dose measurements. First, the target volume is down-sampled using the statistical mode. The estimated dose distribution is then up-sampled using guided filtering with the high-resolution target volume as guidance image. Second, we present a comparison of dose estimates calculated with our Monte Carlo code against values measured experimentally for an anthropomorphic phantom using metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters.
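A minimal sketch of this pipeline (mode-based down-sampling plus guided-filter up-sampling; the Monte Carlo transport itself is mocked by a placeholder, and the guided filter follows the standard formulation of He et al. rather than the authors' exact code):

```python
# Sketch: mode-downsample the CT volume, run the (mocked) Monte Carlo dose
# estimate on the coarse grid, then upsample the coarse dose with a guided
# filter that uses the full-resolution volume as guidance image.
import numpy as np
from scipy import ndimage, stats

def mode_downsample(vol: np.ndarray, f: int) -> np.ndarray:
    # Statistical mode over f^3 blocks (keeps the dominant material per block).
    blocks = vol.reshape(vol.shape[0]//f, f, vol.shape[1]//f, f,
                         vol.shape[2]//f, f).transpose(0, 2, 4, 1, 3, 5)
    flat = blocks.reshape(*blocks.shape[:3], -1)
    return np.squeeze(stats.mode(flat, axis=-1).mode)

def guided_upsample(dose_lo, guide_hi, f, radius=4, eps=1e-4):
    # Nearest-neighbour upsample, then edge-aware smoothing guided by the
    # high-resolution volume (guided filter, He et al.).
    p = np.kron(dose_lo, np.ones((f, f, f)))
    I = guide_hi.astype(float)
    box = lambda x: ndimage.uniform_filter(x, size=2*radius + 1)
    mI, mp = box(I), box(p)
    a = (box(I*p) - mI*mp) / (box(I*I) - mI*mI + eps)
    b = mp - a * mI
    return box(a) * I + box(b)

vol = np.random.rand(32, 32, 32)             # toy "CT volume"
coarse_dose = mode_downsample(vol, 4) * 0.1  # stand-in for the MC result
dose_hi = guided_upsample(coarse_dose, vol, 4)
print(dose_hi.shape)  # (32, 32, 32)
```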
Results
We reconstructed high-resolution dose distributions from coarse ones (down-sampling factors 2 to 16) with error rates ranging from 1.62% to 4.91%. Using down-sampled target volumes further reduced the computation time by 30% to 60%. Comparison of measured results with simulated dose values demonstrated high agreement, with an average percentage error of under 10% for all measurement points.
Conclusions
Our results indicate that Monte Carlo methods can be accelerated hardware-independently and still yield reliable results. This facilitates empirical dose studies that make use of online Monte Carlo simulations to easily cross-validate dose estimates on-site.
Abstract
Purpose
During spinal fusion surgery, screws are placed close to critical nerves, suggesting the need for highly accurate screw placement. Verifying screw placement on high-quality tomographic imaging is essential. C-arm cone-beam CT (CBCT) provides intraoperative 3D tomographic imaging that would allow for immediate verification and, if needed, revision. However, the reconstruction quality attainable with commercial CBCT devices is insufficient, predominantly due to severe metal artifacts in the presence of pedicle screws. These artifacts arise from a mismatch between the true physics of image formation and an idealized model thereof assumed during reconstruction. Prospectively acquiring views onto the anatomy that are least affected by this mismatch can, therefore, improve reconstruction quality.
Methods
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task, i.e., verification of screw placement. Adjustments are performed on-the-fly using a convolutional neural network that regresses a quality index over all possible next views given the current X-ray image. Adjusting the CBCT trajectory to acquire the recommended views results in non-circular source orbits that avoid poor images, and thus, data inconsistencies.
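Schematically, the on-the-fly adjustment can be pictured as a greedy next-view loop (illustrative Python with mocked acquisition and network; the names and the candidate parametrisation are assumptions, not the authors' interface):

```python
# Greedy next-view selection: after each acquired X-ray image, a CNN scores
# all candidate next views and the C-arm moves to the best-rated one.
import numpy as np

rng = np.random.default_rng(0)

def acquire(angle_deg: float) -> np.ndarray:
    return rng.random((64, 64))          # stand-in for an X-ray projection

def quality_cnn(image: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    # Stand-in for the trained regressor: one task-quality score per
    # candidate view, conditioned on the current image.
    return rng.random(candidates.shape[0])

def scan(n_views: int = 180, max_offset: float = 15.0) -> list[float]:
    trajectory = [0.0]
    for i in range(1, n_views):
        nominal = i * 360.0 / n_views    # circular reference orbit
        # Candidate angular offsets around the nominal view:
        candidates = nominal + np.linspace(-max_offset, max_offset, 11)
        image = acquire(trajectory[-1])
        scores = quality_cnn(image, candidates)
        trajectory.append(float(candidates[np.argmax(scores)]))
    return trajectory                    # non-circular, scene-specific orbit

print(scan()[:5])
```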
Results
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory. Using both realistically simulated data and real CBCT acquisitions of a semi-anthropomorphic phantom, we show that tomographic reconstructions of the resulting scene-specific CBCT acquisitions exhibit improved image quality, particularly in terms of metal artifacts.
Conclusion
The proposed method is a step toward online patient-specific C-arm CBCT source trajectories that enable high-quality tomographic imaging in the operating room. Since the optimization objective is implicitly encoded in a neural network trained on large amounts of well-annotated projection images, the proposed approach overcomes the need for 3D information at run-time.
Abstract
Writing programs for heterogeneous platforms that are optimized for high performance is hard, since the code has to be tuned at a low level with architecture-specific optimizations that are usually based on fundamentally differing programming paradigms and languages. OpenVX promises to solve this issue for computer vision applications with a royalty-free industry standard based on a graph-execution model. Yet OpenVX's algorithm space is constrained to a small set of vision functions. This hinders the acceleration of computations that are not included in the standard. In this paper, we analyze the OpenVX vision functions to find an orthogonal set of computational abstractions. Based on these abstractions, we couple an existing domain-specific language (DSL) back end to the OpenVX environment and provide language constructs to the programmer for the definition of user-defined nodes. In this way, we enable optimizations that cannot be detected in OpenVX graph implementations that use only the standard computer vision functions. These optimizations can double the throughput on an Nvidia GTX GPU and decrease the resource usage of a Xilinx Zynq FPGA by 50% for our benchmarks. Finally, we show that our proposed compiler framework, called HipaccVX, can achieve better results than the state-of-the-art approaches Nvidia VisionWorks and Halide-HLS.
Abstract
As a result of lightweight design, increased use is being made of high-strength steel and aluminium in car bodies. Self-piercing riveting is an established technique for joining these materials. The dissimilar properties of the two materials have led to a number of different rivet geometries in the past. Each rivet geometry fulfils the requirements of the materials within a limited range. In the present investigation, an improved rivet geometry is developed, which permits the reliable joining of two material combinations that could only be joined by two different rivet geometries up until now. Material combination 1 consists of high-strength steel on both sides, while material combination 2 comprises aluminium on the punch side and high-strength steel on the die side. The material flow and the stress and strain conditions prevailing during the joining process are analysed by means of numerical simulation. The rivet geometry is then improved step-by-step on the basis of this analysis. Finally, the improved rivet geometry is manufactured and the findings of the investigation are verified in experimental joining tests.
Abstract
The trend towards lightweight design leads to an increasing demand for sophisticated part geometries with high functional integration. In order to exploit the advantages of cold forging for the time- and resource-efficient production of high-quality parts, the high local stresses causing fatigue failure in geometrically complex tools have to be controlled. The objective of this manuscript is to analyse the use of stress pins to locally influence the stress state in forging, especially in non-rotationally symmetric cold forging dies. For this purpose, a closed-die forging process for elliptical parts is designed and analysed with regard to the die stresses. Distinct areas with local compressive and tensile stresses occur in the process. To counteract the tensile stresses, which are critical for fatigue failure, the effect of stress pins pressed into the die to create a local prestress is analysed. Around the pins, compressive radial and tensile tangential stresses occur. While large pin diameters, large interferences and positioning close to the tensile area increase the prestressing effect, excessive values of these parameters cause a detrimental superposition of tensile process stresses and pin stresses. If used correctly, there is high potential to improve the stress state and tool life, especially for locally stressed complex tools.