000 Computer science, information science, general works
Document Type
- Article (170)
- Doctoral Thesis (99)
- Conference Proceeding (15)
- Report (15)
- Book (7)
- PeriodicalPart (7)
- Other (3)
- Working Paper (3)
- Habilitation (1)
- Master's Thesis (1)
Language
- English (269)
- German (51)
- Multiple languages (1)
Has Fulltext
- yes (321)
Keywords
- deep learning (18)
- machine learning (14)
- Machine learning (9)
- Image processing (5)
- inertial measurement unit (5)
- Betriebssystem (4)
- GPU (4)
- Optimierung (4)
- Simulation (4)
Institute
- Technische Fakultät (204)
- Department Informatik (41)
- Rechts- und Wirtschaftswissenschaftliche Fakultät (16)
- Fakultätsübergreifend / Sonstige Einrichtung -ohne weitere Spezifikation- (13)
- Fachbereich Wirtschaftswissenschaften (8)
- Medizinische Fakultät (6)
- Naturwissenschaftliche Fakultät (5)
- Philosophische Fakultät und Fachbereich Theologie (5)
- Institut für Medizininformatik, Biometrie und Epidemiologie (3)
- Medizinische Fakultät -ohne weitere Spezifikation- (3)
Abstract
The study of processes of an impulsive nature (i.e., impacts) is of great interest in both physics and engineering: in the geotechnical field, for instance, their effect on the interaction between soil and structures needs to be investigated. The present work aims to validate, by means of two-dimensional finite element simulations, a force-calibration methodology that uses results obtained from three-dimensional discrete element analyses to predict the stress at the base of a granular bed, retained by a movable wall, when the system is hit by a projectile. To approach this problem, the low-velocity impact is modeled as a point-like impulsive force acting on a granular packing.
Abstract
Time–frequency representations of speech signals provide dynamic information about how the frequency content changes over time. To process this information, deep learning models with convolution layers can be used to obtain feature maps. In many speech processing applications, the time–frequency representations are obtained by applying the short-time Fourier transform, and single-channel input tensors are used to feed the models. However, this may limit the potential of convolutional networks to learn different representations of the audio signal. In this paper, we propose a methodology that computes three different time–frequency representations of the signals, namely the continuous wavelet transform, Mel-spectrograms, and Gammatone spectrograms, and combines them into three-channel spectrograms to analyze speech in two different applications: (1) automatic detection of speech deficits in cochlear implant users and (2) phoneme class recognition to extract phone-attribute features. For this, two different deep learning-based models are considered: convolutional neural networks and recurrent neural networks with convolution layers.
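As an illustration of the multi-channel idea described above, the following sketch stacks three time–frequency representations of one audio signal into a single three-channel tensor. It is a minimal sketch under stated assumptions, not the authors' implementation: it uses librosa for the Mel-spectrogram, PyWavelets for the continuous wavelet transform, and a log-STFT magnitude as a stand-in for the Gammatone spectrogram, which would require an additional auditory-filterbank package.

```python
# Minimal sketch: build a 3-channel time-frequency tensor from one audio signal.
# Assumptions: librosa, pywt, scipy are available; the Gammatone channel is replaced
# by a log-STFT magnitude as a stand-in (the paper uses a true Gammatone spectrogram).
import numpy as np
import librosa
import pywt
from scipy.ndimage import zoom

def three_channel_spectrogram(y, sr, shape=(64, 128)):
    """Return an array of shape (3, *shape) with CWT, Mel, and STFT channels."""
    # Channel 1: continuous wavelet transform (Morlet wavelet).
    scales = np.arange(1, 65)
    cwt_coeffs, _ = pywt.cwt(y, scales, "morl")
    cwt_mag = np.abs(cwt_coeffs)

    # Channel 2: Mel-spectrogram in dB.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    mel_db = librosa.power_to_db(mel, ref=np.max)

    # Channel 3: log-STFT magnitude (stand-in for the Gammatone spectrogram).
    stft_db = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=512)), ref=np.max)

    def resize(img):
        # Resample every representation to a common grid so they can be stacked.
        return zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)

    channels = [resize(c) for c in (cwt_mag, mel_db, stft_db)]
    # Normalize each channel to [0, 1] before stacking.
    channels = [(c - c.min()) / (c.max() - c.min() + 1e-9) for c in channels]
    return np.stack(channels, axis=0)

if __name__ == "__main__":
    sr = 16000
    y = np.random.randn(sr)        # one second of noise as dummy input
    x = three_channel_spectrogram(y, sr)
    print(x.shape)                 # (3, 64, 128), ready to feed a CNN
```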
Abstract
We present numerical simulations of full transition-edge sensor (TES) arrays utilizing graphics processing units (GPUs). With the support of GPUs, it is possible to perform simulations of large pixel arrays to assist detector development. Comparisons with TES small-signal and noise theory confirm that the simulated data are representative. To demonstrate the capabilities of this approach, we present its implementation in xifusim, a simulator for the X-ray Integral Field Unit, a cryogenic X-ray spectrometer on board the future Athena X-ray observatory.
Abstract
Purpose
During spinal fusion surgery, screws are placed close to critical nerves, which demands highly accurate screw placement. Verifying screw placement on high-quality tomographic imaging is essential. C-arm cone-beam CT (CBCT) provides intraoperative 3D tomographic imaging that would allow for immediate verification and, if needed, revision. However, the reconstruction quality attainable with commercial CBCT devices is insufficient, predominantly due to severe metal artifacts in the presence of pedicle screws. These artifacts arise from a mismatch between the true physics of image formation and an idealized model thereof assumed during reconstruction. Prospectively acquiring views of the anatomy that are least affected by this mismatch can therefore improve reconstruction quality.
Methods
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a specific task, in this case the verification of screw placement. Adjustments are performed on the fly using a convolutional neural network that regresses a quality index over all possible next views given the current X-ray image. Adjusting the CBCT trajectory to acquire the recommended views results in non-circular source orbits that avoid poor images and, thus, data inconsistencies.
Results
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory. Using both realistically simulated data as well as real CBCT acquisitions of a semianthropomorphic phantom, we show that tomographic reconstructions of the resulting scene-specific CBCT acquisitions exhibit improved image quality particularly in terms of metal artifacts.
Conclusion
The proposed method is a step toward online patient-specific C-arm CBCT source trajectories that enable high-quality tomographic imaging in the operating room. Since the optimization objective is implicitly encoded in a neural network trained on large amounts of well-annotated projection images, the proposed approach overcomes the need for 3D information at run-time.
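To make the next-view selection described above more concrete, the following is a minimal PyTorch sketch of a CNN that takes the current X-ray projection and regresses one quality index per candidate next view, from which a greedy choice is made. The architecture, input size, and number of candidate views are illustrative assumptions, not the authors' network.

```python
# Minimal sketch of a next-view quality regressor (assumed architecture, not the
# authors' network): input is the current X-ray projection, output is one quality
# index per candidate next view on the source orbit.
import torch
import torch.nn as nn

N_CANDIDATE_VIEWS = 180  # assumption: discretized set of reachable next views

class NextViewRegressor(nn.Module):
    def __init__(self, n_views=N_CANDIDATE_VIEWS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_views)  # one regressed quality value per view

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)

if __name__ == "__main__":
    model = NextViewRegressor()
    projection = torch.randn(1, 1, 256, 256)   # dummy X-ray projection
    quality = model(projection)                # shape (1, 180)
    best_next_view = quality.argmax(dim=1)     # greedy on-the-fly choice
    print(int(best_next_view))
```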
Abstract
Writing programs for heterogeneous platforms that are optimized for high performance is hard, since this requires the code to be tuned at a low level with architecture-specific optimizations that are often based on fundamentally differing programming paradigms and languages. OpenVX promises to solve this issue for computer vision applications with a royalty-free industry standard that is based on a graph-execution model. Yet, OpenVX's algorithm space is constrained to a small set of vision functions. This hinders accelerating computations that are not included in the standard. In this paper, we analyze OpenVX vision functions to find an orthogonal set of computational abstractions. Based on these abstractions, we couple an existing domain-specific language (DSL) back end to the OpenVX environment and provide language constructs to the programmer for the definition of user-defined nodes. In this way, we enable optimizations that are not possible to detect with OpenVX graph implementations using the standard computer vision functions. These optimizations can double the throughput on an Nvidia GTX GPU and decrease the resource usage of a Xilinx Zynq FPGA by 50% for our benchmarks. Finally, we show that our proposed compiler framework, called HipaccVX, can achieve better results than the state-of-the-art approaches Nvidia VisionWorks and Halide-HLS.
Reviews Left and Right: The Link Between Reviewers’ Political Ideology and Online Review Language
(2021)
Abstract
Online reviews, i.e., evaluations of products and services posted on websites, are ubiquitous. Prior research observed substantial variance in the language of such online reviews and linked it to downstream consequences like perceived helpfulness. However, the understanding of why the language of reviews varies is limited. This is problematic because it might have vital implications for the design of IT systems and user interactions. To improve the understanding of online review language, the paper proposes that consumers’ personality, as reflected in their political ideology, is a predictor of such online review language. Specifically, it is hypothesized that reviewers’ political ideology as measured by degree of conservatism on a liberal–conservative spectrum is negatively related to review depth (the number of words and the number of arguments in a review), cognitively complex language in reviews, diversity of arguments, and positive valence in language. Support for these hypotheses is obtained through the analysis of a unique dataset that links a sample of online reviews to reviewers’ political ideology as inferred from their online news consumption recorded in clickstream data.
Self-adaptive potential-based stopping criteria for Particle Swarm Optimization with forced moves
(2020)
Abstract
We study the variant of Particle Swarm Optimization that applies random velocities in a dimension instead of the regular velocity update equations as soon as the so-called potential of the swarm falls below a certain small bound in this dimension, arbitrarily set by the user. In this case, the swarm performs a forced move.
In this paper, we are interested in how the swarm, by counting the forced moves, can decide for itself to stop its movement because it is improbable that it will find candidate solutions better than the best solution found so far. We formally prove that when the swarm is close to a (local) optimum, it behaves like a blindly searching cloud and that the frequency of forced moves exceeds a certain value that is independent of the objective function. Based on this observation, we define stopping criteria and evaluate them experimentally, showing that good candidate solutions can be found much faster than with a fixed iteration budget, and that better solutions are obtained than with other stopping approaches from the literature.
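The following compact sketch illustrates the mechanism described above: when the swarm potential in a dimension drops below a small bound delta, the velocities in that dimension are redrawn at random (a forced move), and the run stops once forced moves dominate a recent window of iterations. The potential definition, delta, window length, and stop threshold are illustrative assumptions, not the exact quantities analyzed in the paper.

```python
# Sketch of PSO with forced moves and a forced-move-based stopping rule.
# The potential definition, bound delta, window, and stop threshold below are
# illustrative assumptions, not the exact quantities analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def pso_forced_moves(f, dim=5, n=20, iters=10000,
                     delta=1e-7, window=200, stop_ratio=0.9):
    x = rng.uniform(-5, 5, (n, dim))
    v = rng.uniform(-1, 1, (n, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    forced = []                      # 1 if the iteration contained a forced move

    for t in range(iters):
        forced_now = 0
        for d in range(dim):
            # Swarm "potential" in dimension d (illustrative definition).
            phi = np.sum(np.abs(v[:, d]) + np.abs(gbest[d] - x[:, d]))
            if phi < delta:
                # Forced move: redraw velocities in this dimension at random.
                v[:, d] = rng.uniform(-delta, delta, n)
                forced_now = 1
            else:
                r1, r2 = rng.random(n), rng.random(n)
                v[:, d] = (0.72 * v[:, d]
                           + 1.49 * r1 * (pbest[:, d] - x[:, d])
                           + 1.49 * r2 * (gbest[d] - x[:, d]))
        x += v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

        forced.append(forced_now)
        # Stop when forced moves dominate the recent window.
        if t >= window and np.mean(forced[-window:]) > stop_ratio:
            break
    return gbest, f(gbest), t

if __name__ == "__main__":
    best, val, iterations = pso_forced_moves(sphere)
    print(iterations, val)
```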
Abstract
Today’s manufacturing facilities and processes offer the potential to collect data on an unprecedented scale. However, conventional Programmable Logic Controllers are often proprietary systems with closed-source hardware and software, and they are not designed to also handle the seamless acquisition and processing of enormous amounts of data. Furthermore, their focus on simple control tasks and a rigid number of static built-in I/O connectors make them ill-suited for the big data challenge and for an industrial environment that is changing at a high pace. This paper advocates emerging hardware- and I/O-reconfigurable Programmable System-on-Chip (PSoC) solutions based on Field-Programmable Gate Arrays to provide flexible and adaptable capabilities for both data acquisition and control right at the edge. Still, the design and implementation of applications on such heterogeneous PSoC platforms demand comprehensive expertise in hardware/software co-design. To bridge this gap, a model-based design automation approach is presented that automatically generates optimized HW/SW configurations for a given PSoC. As a case study, a metal forming process is considered, and the design automation of an industrial closed-loop control algorithm with performance and resource cost as design objectives is investigated to show the benefits of the approach.
Editorial
(2020)
Abstract
Tree-based classifiers provide easy-to-understand outputs. Artificial neural networks (ANN) commonly outperform tree-based classifiers; nevertheless, understanding their outputs requires specialized knowledge in most cases. The highly redundant architecture of ANN is typically designed through an expensive trial-and-error scheme. We aim at (1) investigating whether using ensembles of decision trees to design the architecture of low-redundancy, sparse ANN yields better-performing networks, and (2) evaluating whether such trees can be used to provide human-understandable explanations for their outputs. Information about the hierarchy of the features, and how well they separate subsets of samples among the classes, is gathered from each branch in an ensemble of trees. This information is used to design the architecture of a sparse multilayer perceptron network. Networks built using our method are called ForestNet. Tree branches corresponding to highly activated neurons are used to provide explanations of the networks’ outputs. ForestNets are able to handle low- and high-dimensional data, as we show in an evaluation on four datasets. Our networks consistently outperformed their respective ensembles of trees and achieved performance similar to their fully connected counterparts with a significant reduction in connections. Furthermore, our interpretation method seems to provide support for the ForestNet outputs. While ForestNet architectures do not yet capture the intrinsic variability of visual data well, they exhibit very promising results, reducing the number of connections by more than 98% for such visual tasks. Structural similarities between ForestNets and their respective tree ensembles provide a means to interpret their outputs.
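A rough sketch of the general idea follows; it is not the authors' exact ForestNet construction. A small random forest is fitted, the features each tree actually splits on are read off, and that information is used as a sparsity mask for the first layer of a multilayer perceptron.

```python
# Rough sketch of tree-guided sparse networks (not the exact ForestNet design):
# each tree in a random forest contributes one hidden unit that is connected
# only to the features the tree splits on.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=16, max_depth=3, random_state=0).fit(X, y)

n_features = X.shape[1]
mask = torch.zeros(len(forest.estimators_), n_features)
for j, est in enumerate(forest.estimators_):
    used = est.tree_.feature             # split feature per node, negative for leaves
    mask[j, np.unique(used[used >= 0])] = 1.0

class SparseMLP(nn.Module):
    """MLP whose first layer keeps only the tree-derived connections."""
    def __init__(self, mask, n_classes):
        super().__init__()
        self.mask = mask
        self.fc1 = nn.Linear(mask.shape[1], mask.shape[0])
        self.fc2 = nn.Linear(mask.shape[0], n_classes)

    def forward(self, x):
        w = self.fc1.weight * self.mask  # zero out non-tree connections
        h = torch.relu(F.linear(x, w, self.fc1.bias))
        return self.fc2(h)

model = SparseMLP(mask, n_classes=3)
logits = model(torch.tensor(X, dtype=torch.float32))
print(logits.shape, f"kept {int(mask.sum())} of {mask.numel()} first-layer weights")
```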
Perceptual bias and technical metapictures: critical machine vision as a humanities challenge
(2021)
In many critical investigations of machine vision, the focus lies almost exclusively on dataset bias and on fixing datasets by introducing more and more diverse sets of images. We propose that machine vision systems are inherently biased not only because they rely on biased datasets but also because their perceptual topology, their specific way of representing the visual world, gives rise to a new class of bias that we call perceptual bias. Concretely, we define perceptual topology as the set of those inductive biases in machine vision systems that determine its capability to represent the visual world. Perceptual bias, then, describes the difference between the assumed “ways of seeing” of a machine vision system, our reasonable expectations regarding its way of representing the visual world, and its actual perceptual topology. We show how perceptual bias affects the interpretability of machine vision systems in particular, by means of a close reading of a visualization technique called “feature visualization”. We conclude that dataset bias and perceptual bias both need to be considered in the critical analysis of machine vision systems and propose to understand critical machine vision as an important transdisciplinary challenge, situated at the interface of computer science and visual studies/Bildwissenschaft.
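For readers unfamiliar with the technique, "feature visualization" typically means activation maximization: synthesizing an input image by gradient ascent so that it maximally activates a chosen unit. The following minimal PyTorch sketch illustrates the principle on a pretrained torchvision model; the layer and channel choices are arbitrary, and the regularizers used in practice are omitted, so this is a generic illustration rather than the specific visualizations discussed in the article.

```python
# Minimal activation-maximization sketch ("feature visualization"): optimize an
# input image so that one channel of an intermediate layer responds strongly.
# Generic illustration only; layer/channel choices are arbitrary assumptions.
import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
layer_index, channel = 10, 42            # assumed choices for illustration

img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    x = img
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_index:
            break
    # Maximize the mean activation of the chosen channel (gradient ascent).
    loss = -x[0, channel].mean()
    loss.backward()
    optimizer.step()

print("final activation:", float(-loss))
```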
Blood pressure monitoring is of paramount importance in the assessment of a human’s cardiovascular health. The state-of-the-art method remains the upper-arm cuff sphygmomanometer. However, this device suffers from severe limitations: it only provides a static blood pressure value pair, is incapable of capturing blood pressure variations over time, is inaccurate, and causes discomfort upon use. This work presents a radar-based approach that utilizes the movement of the skin due to artery pulsation to extract pressure waves. From those waves, a set of 21 features was collected and used, together with the calibration parameters of age, gender, height, and weight, as input for a neural network-based regression model. After collecting data from 55 subjects with both radar and a blood pressure reference device, we trained 126 networks to analyze the developed approach’s predictive power. As a result, a very shallow network with just two hidden layers produced a systolic error of 9.2±8.3 mmHg (mean error ± standard deviation) and a diastolic error of 7.7±5.7 mmHg. While the trained model did not reach the requirements of the AAMI and BHS blood pressure measuring standards, optimizing network performance was not the goal of the proposed work. Still, the approach displayed great potential in capturing blood pressure variation with the proposed features. With further improvements, it could therefore be incorporated into wearable devices for continuous blood pressure monitoring in home-use or screening applications.
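A minimal sketch of a shallow regression network of the kind described above follows: 21 pulse-wave features plus the four calibration parameters as input, two hidden layers, and systolic/diastolic pressure as output. The layer widths, activation functions, and training setup are assumptions for illustration; the paper's search over 126 networks is not reproduced.

```python
# Minimal sketch of a shallow blood-pressure regressor (assumed layer sizes and
# training setup): inputs are 21 radar pulse-wave features + 4 calibration
# parameters, outputs are systolic and diastolic pressure in mmHg.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(21 + 4, 32), nn.ReLU(),   # two hidden layers, as in the paper
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),                   # [systolic, diastolic]
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy data standing in for the 55-subject dataset.
features = torch.randn(256, 25)
targets = 100 + 20 * torch.randn(256, 2)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()

print("training MSE:", float(loss))
```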
The production and delivery of audio for television involve many creative and technical challenges. One of them concerns the level balance between the foreground speech (also referred to as dialogue) and the background elements, e.g., music, sound effects, and ambient sounds. Background elements are fundamental for the narrative and for creating an engaging atmosphere, but they can mask the dialogue, which the audience wishes to follow in a comfortable way. Widely differing individual factors among the audience clash with the creative freedom of the content creators. As a result, service providers receive regular complaints about difficulties in understanding the dialogue because of too loud background sounds. While this has been a known issue for at least three decades, works analyzing the problem and up-to-date statistics were scarce before the contributions in this work.
Enabling the user to personalize the dialogue level provides technological relief from the problem of a background perceived as too loud. The content creators are free to craft the audio soundtrack according to their artistic vision, yet personalization is available if users want it. This functionality is often referred to as Dialogue Enhancement (DE) and can be implemented with object-based audio, requiring separate audio objects (or stems) from the production stage. Stems are often not available, e.g., for archive material consisting solely of the final stereo soundtracks.
Blind Source Separation (BSS) can be applied to the final soundtrack to estimate the stems. However, before the contributions in this dissertation, only a few works dealt with BSS techniques directly applicable to television content and with evaluation methods for this application. BSS might introduce artifacts, colorations, and distortions. These are highly undesired, as the final audio quality is of the utmost importance in television. Before the contributions in this dissertation, it was not clear which subjective and objective methods could be used to evaluate audio quality for this application. Previous works focusing on objective methods did not answer the question of whether objective measures perform well enough to detect perceivable quality degradations introduced by BSS for DE and to counteract them.
The main contributions of this work advance our knowledge on the issues identified above and can be organized into the following three categories. Firstly, via a nationwide survey, insight is obtained into the gravity and the causes of the frustration that the television audience experiences in relation to the audio balance between dialogue and background. Furthermore, controlled experiments are carried out to investigate personal preferences for the level difference between dialogue and background. Highly individual preferences are observed, establishing the importance of providing the final user with DE. Moreover, it is observed that audio experts prefer a louder background than non-experts (by 4 dB on average), explaining part of the frustration experienced by the non-experts in the audience. Based on these experiments, technical guidelines are formulated for the production of esthetically pleasing television audio with clear speech.
Secondly, BSS solutions specifically designed for DE are presented, both based on traditional signal processing and based on Deep Neural Networks (DNNs), where special attention is given to the quality assessment of these solutions. Subjective and objective evaluation methods are proposed to evaluate BSS for DE. The subjective methods span a large range of factors influencing the final Quality of Experience (QoE). Listening Effort (LE) is evaluated in a multimodal way, including pupillometry and considering audio signals representative of the application. Other quality factors such as the perception of artifacts and distortions, or the overall audio quality are studied with approaches inspired by standard methodologies, but with modifications relevant to the application. Approaching the overall QoE, the Adjustment/Satisfaction Test (A/ST) is proposed, as a method to evaluate content personalization and the resulting user satisfaction. In addition, a large number of users are surveyed after having interacted with a prototype of the final system. Thanks to these methods, different BSS approaches at different development stages are evaluated, and it is shown that the proposed BSS solutions can successfully enable DE. Objective measures are investigated in terms of their response to distortions typically encountered in BSS for DE as well as their correlation with subjective quality scores. It is shown that objective measures very often do not generalize to cases unseen during their development. The measures that exhibit the best performance are then considered to perform automatic quality control.
Thirdly, automatic quality control is discussed in detail. Solutions are proposed to control the remixing of the estimated stems, with the aim of maximizing the background attenuation under a constraint on the minimum quality of the remixed output. DNNs are trained either to estimate the remixing gain directly or to follow a two-step approach, in which a non-intrusive quality estimate is first obtained and then mapped to the remixing gains.
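The following minimal sketch illustrates the remixing step just described: the separated dialogue and background estimates are recombined with a background gain, and a quality estimate gates how much attenuation is applied. The quality proxy, gain grid, and threshold are placeholders chosen for illustration, not the proposed DNN-based controllers.

```python
# Sketch of quality-constrained remixing for Dialogue Enhancement: attenuate the
# separated background as much as possible while a quality estimate stays above a
# minimum. The quality proxy and thresholds are placeholders, not the proposed DNNs.
import numpy as np

def quality_proxy(candidate, reference):
    """Placeholder quality measure: SNR of the candidate remix w.r.t. a reference."""
    err = np.mean((candidate - reference) ** 2) / (np.mean(reference ** 2) + 1e-12)
    return -10 * np.log10(err + 1e-12)

def remix(dialogue_est, background_est, dialogue_ref, background_ref,
          min_quality_db=20.0):
    # Try background attenuations from strongest to weakest and keep the first one
    # whose remix still meets the minimum-quality constraint.
    for atten_db in (24, 18, 12, 6, 0):
        g = 10 ** (-atten_db / 20)
        candidate = dialogue_est + g * background_est
        # In practice the reference is unavailable and a non-intrusive DNN-based
        # quality estimate is used instead; true stems are used here for the demo.
        reference = dialogue_ref + g * background_ref
        if quality_proxy(candidate, reference) >= min_quality_db:
            return candidate, atten_db
    return dialogue_est + background_est, 0

if __name__ == "__main__":
    t = np.linspace(0, 1, 16000)
    dialogue = np.sin(2 * np.pi * 220 * t)
    background = 0.5 * np.random.randn(t.size)
    mixture = dialogue + background
    # Imperfect separation (with leakage between stems) stands in for BSS output.
    dialogue_est = 0.9 * dialogue + 0.2 * background
    background_est = mixture - dialogue_est
    out, atten = remix(dialogue_est, background_est, dialogue, background)
    print("applied background attenuation:", atten, "dB")
```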
In summary, contributions to the most relevant areas concerning BSS for DE in television audio are given, providing a better understanding of the importance of DE, and laying the ground for methodological development, evaluation, and control of BSS for DE. Evidence of the success of these contributions can be identified in one outcome from the nationwide survey, where the BSS-based DE, developed within the proposed evaluation framework, was clearly preferred over the original soundtrack.
This is a cumulative dissertation (or thesis by publication). Its body consists of a previously unpublished exposition illustrating the contextual links between previously published works, all peer-reviewed. The interested reader is referred to the appendix, where the individual publications can be found.
Classical core memory was entirely non-volatile and could keep at least part of the operating system (OS) in main memory even across power cycles. These days we can have terabytes of NVRAM to repeat this approach, albeit on an entirely different scale and with large parts of the OS state still kept in the volatile CPU caches. In this paper, we discuss our experiences of running large modern operating systems including their applications entirely in NVRAM. We adapted stock Linux and FreeBSD kernels to work exclusively with NVRAM by hiding all DRAM from the kernels at boot time to establish a realistic performance baseline without changing anything else. Following this entirely NVRAM-agnostic approach, we could observe an effective performance penalty of a factor of about four, but only negligible increases in whole-system power draw. For our system with two CPU sockets and 56 cores total, we also observed a reduction in power draw in several scenarios. Due to prolonged execution times, the energy consumption increased as well for these measured workloads. While this might be discouraging at first sight, this result was achieved without any performance tuning as to the specific characteristics of today’s NVRAM technology. Therefore, we also discuss means to mitigate the observed shortcomings by integrating NVRAM appropriately into the memory hierarchy of future robust persistent systems.
Bitcoin and other cryptocurrencies are digital means of payment. They primarily differ from traditional fiat money in that they do not require a central authority to issue new units of the currency or to process payments. Cryptocurrencies achieve this decentralization by using a ledger of transactions. Participants in a network maintain the ledger and decide which transactions to add to it. The ledger is immutable: transactions cannot be removed once they have been added. Furthermore, cryptocurrency transactions differ from classical bank transfers in that cryptocurrency users can generate new pseudonyms for each transaction and are not restricted to account numbers that banks can directly link to the account holders. In addition, transactions can contain arbitrary data beyond payment information. These properties of cryptocurrencies raise issues in the tension between law and IT security. This work deals with three of these issues and shows that they cannot be solved solely by legal or technical means.
First, network participants face criminal liability if illegal content is embedded into the ledger. In the case of personal data on the ledger, the participants might be obliged to erase it to comply with data protection regulations, namely the right to be forgotten. We propose a protocol that allows content to be removed from the ledger. Our protocol does not require any additional trust assumptions, as it employs the exact mechanism used by the participants to maintain the ledger. Allowing the participants to break the immutability property enables them to comply with the law more effectively.
Second, cryptocurrencies are the primary means of payment on the dark web. Thus, law enforcement commonly analyses cryptocurrency transactions. These analyses include tracing payment flows and linking multiple pseudonyms used in transactions that belong to the same person. The ultimate goal of these analyses is to identify the persons behind the pseudonyms. Cryptocurrency analyses typically rely on assumptions. These assumptions are often not questioned by law enforcement. However, their reliability is crucial to justify any subsequent investigation against an identified person. We extracted assumptions from scientific papers performing such analyses and classified them. In addition, we discuss the reliability of each class of assumptions and introduce criteria to consider when assessing reliability on a case-by-case basis. Law enforcement, expert witnesses, and legal decision-makers can use our taxonomy and criteria to address the reliability of findings obtained from cryptocurrency analyses.
Third, although envisioned as digital cash, Bitcoin does not achieve the level of anonymity afforded by cash. As the ledger is public, everyone can trace payment flows and link multiple pseudonyms that belong to the same person. Mixing protocols emerged as a way to improve anonymity in Bitcoin. The basic idea of mixing is to combine coins of multiple users to harden payment flow analyses and pseudonym linkage. We analyzed the built-in mixing of the cryptocurrency Dash, which works similarly to the mixing protocols run on top of Bitcoin. We found two anonymity issues. First, users spent mixed and unmixed coins together, thereby undoing the anonymity gained from mixing. Second, as mixing in Dash requires coins with a fixed value, the mixed coins typically need to be combined after mixing to pay a specific amount. This combination of mixed coins allows intersecting each coin’s anonymity set, which might likewise undo the anonymity gains from mixing. To eliminate the need to combine coins after mixing, we propose a mixing algorithm that does not require coins with fixed values. Furthermore, we provide indications that the identified anonymity issues could also be present in Bitcoin.
Abstract
Background
Intra‐scan rigid‐body motion is a costly and ubiquitous problem in clinical magnetic resonance imaging (MRI) of the head.
Purpose
State-of-the-art methods for retrospective motion correction in MRI are often computationally expensive or, in the case of image-to-image deep learning (DL)-based methods, can be prone to undesired alterations of the image ("hallucinations"). In this work, we introduce a novel rigid-body motion correction method that combines the advantages of classical model-driven and data-consistency (DC) preserving approaches with a novel DL algorithm to provide fast and robust retrospective motion correction.
Methods
The proposed Motion Parameter Estimating Densenet (MoPED) retrospectively estimates subject head motion during MRI acquisitions using a DL network with DenseBlocks and multitask learning. It quantifies the 2D rigid in‐plane motion parameters slice‐wise for each echo train (ET) of a Cartesian T2‐weighted 2D Turbo‐Spin‐Echo sequence. The network receives a center patch of the motion corrupted k‐space as well as an additional motion‐free low‐resolution reference scan to provide the ground truth orientation. The supervised training utilizes motion simulations based on 28 acquisitions with subject‐wise training, validation, and test data splits of 70%, 23%, and 7%. During inference, MoPED is embedded in an iterative DC‐driven motion correction algorithm which alternatingly updates estimates of the motion parameters and motion‐corrected low‐resolution k‐space data. The estimated motion parameters are then used to reconstruct the final motion corrected image.
The mean absolute/squared error and the Pearson correlation coefficient were used to analyze the motion parameter estimation quality on in‐silico data in a quantitative evaluation. Structural similarity (SSIM), DC error and root mean squared error (RMSE) were used as metrics of image quality improvement. Furthermore, the generalization capability of the network was analyzed on two in‐vivo motion volumes with 28 slices each and on one simulated T1‐weighted volume.
Results
The motion estimation achieves a Pearson correlation of 0.968 with the simulated ground truth of the 2433 test data slices used. In-silico results indicate that MoPED decreases the time needed for the optimization by a factor of around 27 compared to a conventional method and is able to reduce the RMSE of the reconstructions and the average DC error by more than a factor of two compared to uncorrected images. In-vivo experiments show a decrease in computation time by a factor of around 20, an RMSE decrease from 0.055 to 0.033, and an SSIM increase from 0.795 to 0.862. Furthermore, contrast independence is demonstrated, as MoPED is also able to correct T1-weighted images in simulations without retraining. Due to the model-based correction, no hallucinations were observed.
Conclusions
Incorporating DL into a model-based motion correction algorithm greatly benefits the optimization and computation time. The k-space-based estimation also allows a data-consistent correction and therefore avoids the risk of hallucinations inherent to image-to-image approaches.
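To make the k-space-based, data-consistent idea concrete, the following sketch shows how an in-plane translation acts on 2D k-space (a linear phase ramp) and how an estimated shift can be removed again. It is a generic illustration of the rigid-motion forward model only; rotations, echo-train-wise estimation, and the alternating DC-driven loop of the actual method are omitted.

```python
# Generic illustration of the rigid-motion forward model used in k-space-based
# correction: an in-plane translation multiplies k-space by a linear phase ramp,
# so an estimated shift can be removed by applying the conjugate ramp.
# Rotations and the alternating DC-driven optimization of MoPED are omitted.
import numpy as np

def translate_kspace(kspace, dy, dx):
    """Apply an image-space shift (dy, dx) in pixels as a k-space phase ramp."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return kspace * ramp

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[40:80, 50:90] = 1.0                                 # simple phantom
    k = np.fft.fft2(img)

    k_moved = translate_kspace(k, dy=3.0, dx=-5.0)          # simulate motion
    k_fixed = translate_kspace(k_moved, dy=-3.0, dx=5.0)    # correct with estimate

    fixed = np.abs(np.fft.ifft2(k_fixed))
    print("residual after correction:", float(np.abs(fixed - img).max()))
```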
Analysis and comparison of boundary condition variants in the free‐surface lattice Boltzmann method
(2023)
Abstract
The accuracy of the free-surface lattice Boltzmann method (FSLBM) depends significantly on the boundary condition employed at the free interface. Ideally, the chosen boundary condition balances the forces exerted by the liquid and gas pressure. Different variants of the same boundary condition are possible, depending on the number and choice of the particle distribution functions (PDFs) to which it is applied. This study analyzes and compares four variants, in which the boundary condition is applied to all PDFs oriented opposite to the free interface's normal vector, either (i) including or (ii) excluding the central PDF. While these variants overwrite existing information, the boundary condition can also be applied (iii) to only the missing PDFs without dropping available data, or (iv) to only the missing PDFs but to at least three PDFs, as suggested in the literature. It is shown that none of the variants generally balances the forces exerted by the liquid and gas pressure at the free surface. The four variants' accuracy was compared in five different numerical experiments covering various applications. These include a standing gravity wave, a rectangular and a cylindrical dam break, a rising Taylor bubble, and a droplet impacting a thin pool of liquid. Overall, variant (iii) was substantially more accurate than the other variants in the numerical experiments performed in this study.
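For reference, the free-surface boundary condition whose application variants are analyzed here is commonly written in the following anti-bounce-back form; this is a standard formulation from the FSLBM literature, and the notation may differ slightly from the article.

```latex
% Free-surface boundary condition of the FSLBM: PDFs entering the liquid from the
% gas side are reconstructed from the gas density \rho_G (set by the gas pressure)
% and the interface velocity \mathbf{u}.
\[
  f_{\bar{\imath}}(\mathbf{x}, t + \Delta t)
  = f_i^{\mathrm{eq}}(\rho_G, \mathbf{u})
  + f_{\bar{\imath}}^{\mathrm{eq}}(\rho_G, \mathbf{u})
  - f_i(\mathbf{x}, t)
\]
% where \bar{\imath} denotes the lattice direction opposite to direction i.
```

The four variants then differ only in the set of directions i to which this reconstruction is applied.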
Abstract
We introduce EETTlib, an instance library for the Energy-Efficient Train Timetabling problem. The task in this problem is to adjust a given timetable draft such that the energy consumption of the resulting railway traffic is minimized. To this end, the departure times of the trains can be shifted slightly, and their velocity profiles on each trip can be modified. We provide real-world data originating from two research projects in this field, one with Deutsche Bahn AG, the most important railway company in Germany, the other with VAG Verkehrs-Aktiengesellschaft, the operator of public transport in the city of Nürnberg, Germany. In both cases, our library contains representative data on the relevant operational constraints and supports various possible choices of objective function with respect to energy efficiency. The resulting benchmark instances can be used by the scheduling and timetabling community to improve their models and algorithms. They are available under https://www.eettlib.fau.de.