Fakultät Informatik und Mathematik
ELSI accompanying research is increasingly becoming an integral part of research and development projects – not least because funding bodies such as the EU or the BMBF explicitly require in many funding lines that technology be developed in a value-based manner, i.e. that 'responsible research and innovation' (RRI) be practised. The guiding idea is that the design of technology should take all stakeholder interests into account; these interests are to be identified, and conflicting interests morally balanced, through participatory procedures. Ethical guidelines or other codifications of norms and values are only of limited use for this; in practice, applicable procedures are needed. In recent years, such procedures have been developed and already tested and applied in R&D practice. Three (discourse-ethics-based) procedures that can be combined will be presented.
SQL-on-Hadoop processing engines have become state-of-the-art in data lake analysis. However, the skills required to tune such systems are rare. This has inspired automated tuning advisors which profile the query workload and produce tuning setups for the low-level MapReduce jobs. Yet with highly dynamic query workloads, repeated re-tuning costs time and money in IaaS environments. In this paper, we focus on reducing the costs for up-front tuning. At the heart of our approach is the observation that a SQL query is compiled into a query plan of MapReduce jobs. While the plans differ from query to query, single jobs tend to be similar between queries. We introduce the notion of the code signature of a MapReduce job and, based on this, our concept of job similarity. We show that we can effectively recycle tuning setups from similar MapReduce jobs already profiled. In doing so, we can leverage any third-party tuning advisor for MapReduce engines. We show that by recycling tuning setups, we can reduce the time spent on profiling by 50% in the TPC-H benchmark.
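As a rough, hypothetical sketch of the recycling idea (not the paper's implementation): a code signature can be derived from the code-related parts of a job configuration, and the tuning setups of already profiled jobs can then be looked up by that signature. The configuration keys, class names, and the `TuningCache` helper below are illustrative assumptions.

```python
# Hypothetical sketch of tuning-setup recycling based on job code signatures.
# The signature fields (mapper/reducer class, combiner, input format) are
# illustrative assumptions, not the exact definition used in the paper.
import hashlib
import json

def code_signature(job_conf: dict) -> str:
    """Derive a stable signature from the code-related parts of a job configuration."""
    relevant = {k: job_conf.get(k) for k in (
        "mapreduce.job.map.class",
        "mapreduce.job.reduce.class",
        "mapreduce.job.combine.class",
        "mapreduce.job.inputformat.class",
    )}
    return hashlib.sha256(json.dumps(relevant, sort_keys=True).encode()).hexdigest()

class TuningCache:
    """Maps code signatures of already profiled jobs to their tuning setups."""
    def __init__(self):
        self._setups = {}

    def recycle_or_profile(self, job_conf: dict, profile_and_tune):
        sig = code_signature(job_conf)
        if sig in self._setups:              # a similar job was profiled before:
            return self._setups[sig]         # reuse its tuning setup
        setup = profile_and_tune(job_conf)   # otherwise call the tuning advisor
        self._setups[sig] = setup
        return setup
```

Any third-party advisor could be plugged in as the `profile_and_tune` callable; only jobs with unseen signatures would then be profiled.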
This article revisits an analysis of (in)accuracies of time series averaging under dynamic time warping (dtw) conducted by Niennattrakul and Ratanamahatana [16]. They proposed a correctness-criterion for dtw-averages and postulated that dtw-averages can drift out of the cluster of time series to be averaged. They claimed that dtw-averages are inaccurate if they violate the correctness-criterion or suffer from the drift-out phenomenon. Furthermore, they conjectured that such inaccuracies are caused by the lack of the triangle inequality. In this article, we show that a rectified version of the correctness-criterion is unsatisfiable and that the concept of drift-out is geometrically and operationally inconclusive. Satisfying the triangle inequality is insufficient to achieve correctness and unnecessary to overcome the drift-out phenomenon. We place the concept of drift-out on a principled basis and show that Fréchet means never drift out. The adjusted drift-out is a way to test to what extent an approximated dtw-average is coherent. Empirical results show that approximations obtained by state-of-the-art averaging methods are incoherent in over a third of all cases.
The sample mean is one of the most fundamental concepts in statistics, with far-reaching implications for data mining and pattern recognition. Household load profiles are, compared to aggregated levels, more intermittent, and a specific error measure based on local permutations has been proposed to cope with this when comparing profiles. We formally describe a distance based on this error, the local permutation invariant (LPI) distance, and introduce the sample mean problem in the LPI space. An existing exact solution has exponential complexity and is only tractable for very few profiles. We propose three subgradient-based approximation algorithms and compare them empirically on 100 households of the CER dataset. We find that stochastic subgradient descent approximates the mean best, while the majorize-minimize mean is a good compromise for applications, as no hyperparameter tuning is needed. We show how the algorithms can be used in forecasting and clustering to achieve more appropriate results than with the arithmetic mean.
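As a loose illustration of the general subgradient scheme (not the LPI-specific algorithm from the paper), the following sketch approximates a sample mean under a pluggable squared distance whose subgradient the caller supplies; the LPI distance itself is not reproduced here.

```python
# Generic stochastic subgradient sketch for a sample mean under a pluggable
# distance; the LPI distance and its subgradient are assumed to be supplied
# by the caller and are not reproduced here.
import numpy as np

def ssg_mean(profiles, subgradient, steps=1000, eta0=0.1, seed=0):
    """Approximate argmin_z sum_i d(z, x_i)^2 by stochastic subgradient descent.

    profiles:    array of shape (n, T), one load profile per row
    subgradient: function (z, x) -> subgradient of d(z, x)^2 with respect to z
    """
    rng = np.random.default_rng(seed)
    z = profiles[rng.integers(len(profiles))].astype(float).copy()  # random initialisation
    for t in range(1, steps + 1):
        x = profiles[rng.integers(len(profiles))]     # sample one profile
        z -= (eta0 / np.sqrt(t)) * subgradient(z, x)  # diminishing step size
    return z

# With the squared Euclidean distance, the subgradient is simply 2 * (z - x):
# mean = ssg_mean(profiles, lambda z, x: 2 * (z - x))
```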
The literature postulates that the dynamic time warping (dtw) distance can cope with temporal variations but stores and processes time series in a form as if the dtw-distance cannot cope with such variations. To address this inconsistency, we first show that the dtw-distance is not warping-invariant—despite its name and contrary to its characterization in some publications. The lack of warping-invariance contributes to the inconsistency mentioned above and to a strange behavior. To eliminate these peculiarities, we convert the dtw-distance to a warping-invariant semi-metric, called time-warp-invariant (twi) distance. Empirical results suggest that the error rates of the twi and dtw nearest-neighbor classifier are practically equivalent in a Bayesian sense. However, the twi-distance requires less storage and computation time than the dtw-distance for a broad range of problems. These results challenge the current practice of applying the dtw-distance in nearest-neighbor classification and suggest the proposed twi-distance as a more efficient and consistent option.
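For reference, the dtw-distance itself is computed with a standard dynamic program; the sketch below shows the textbook recurrence for univariate series and illustrates why series of different lengths can still be at distance zero.

```python
# Textbook dynamic program for the dtw-distance between two univariate time
# series; meant only to make the underlying recurrence concrete.
import numpy as np

def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return np.sqrt(D[n, m])

# dtw_distance([1, 2, 3], [1, 1, 2, 3]) == 0.0, although the series differ in length.
```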
Averaging time series under dynamic time warping is an important tool for improving nearest-neighbor classifiers and formulating centroid-based clustering. The most promising approach poses time series averaging as the problem of minimizing a Fréchet function. Minimizing the Fréchet function is NP-hard and so far solved by several heuristics and inexact strategies. Our contributions are as follows: we first discuss some inaccuracies in the literature on exact mean computation in dynamic time warping spaces. Then we propose an exponential-time dynamic program for computing a global minimum of the Fréchet function. The proposed algorithm is useful for benchmarking and evaluating known heuristics. In addition, we present an exact polynomial-time algorithm for the special case of binary time series. Based on the proposed exponential-time dynamic program, we empirically study properties like uniqueness and length of a mean, which are of interest for devising better heuristics. Experimental evaluations indicate substantial deficits of state-of-the-art heuristics in terms of their output quality.
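For orientation, the Fréchet function minimized in this setting is, up to a constant factor, usually written as

```latex
F(z) \;=\; \sum_{i=1}^{n} \mathrm{dtw}\bigl(z, x^{(i)}\bigr)^{2},
\qquad
\mu \in \arg\min_{z} F(z),
```

where x^(1), ..., x^(n) are the sample time series and any global minimizer μ is a (sample) mean.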
In this talk, Prof. Dr. Wolfgang Mauerer and Ralf Ramsauer look under the hood of the distributed version control system Git. In addition to a detailed description of the structures and plumbing APIs that Git uses internally to create and link commits, the speakers also cover useful features and standards that ease collaboration in large open-source projects.
As a result of the enormous growth in data traffic for autonomous driving, the conventional in-vehicle network is no longer sufficient, and new network concepts are required in the vehicle. This part of the automobile is known as the next-generation communication network. Since new car systems can be extended by various services at any time, the network must adapt dynamically to new requirements wherever possible; for example, data flows must be configured dynamically between new services. Data rates will also be much higher in the future than they are today, which is one of the main reasons to search for new data transfer technologies for vehicles, based on an in-vehicle Ethernet network. The automatic configuration of networks has been discussed several times in recent years. One of the next steps is verifying and validating the automatic configuration process during the development of the new communication network. This paper identifies several ways to ensure that the automatically generated network configuration leads to a secure system. To this end, other parts of the company's enterprise IT architecture and network technologies, the conventional vehicle network, and further options for verification and validation are analysed.
The Spoken Wikipedia Corpus collection: Harvesting, alignment and an application to hyperlistening
(2019)
Spoken corpora are important for speech research, but are expensive to create and do not necessarily reflect (read or spontaneous) speech ‘in the wild’. We report on our conversion of the preexisting and freely available Spoken Wikipedia into a speech resource. The Spoken Wikipedia project unites volunteer readers of Wikipedia articles. There are initiatives to create and sustain Spoken Wikipedia versions in many languages and hence the available data grows over time. Thousands of spoken articles are available to users who prefer a spoken over the written version. We turn these semi-structured collections into structured and time-aligned corpora, keeping the exact correspondence with the original hypertext as well as all available metadata. Thus, we make the Spoken Wikipedia accessible for sustainable research. We present our open-source software pipeline that downloads, extracts, normalizes and text–speech aligns the Spoken Wikipedia. Additional language versions can be exploited by adapting configuration files or extending the software if necessary for language peculiarities. We also present and analyze the resulting corpora for German, English, and Dutch, which presently total 1005 h and grow at an estimated 87 h per year. The corpora, together with our software, are available via http://islrn.org/resources/684-927-624-257-3/. As a prototype usage of the time-aligned corpus, we describe an experiment about the preferred modalities for interacting with information-rich read-out hypertext. We find alignments to help improve user experience and factual information access by enabling targeted interaction.
Speech-based interactive systems, such as virtual personal assistants, inevitably use complex architectures, with a multitude of modules working in series (or, less often, in parallel) to perform a task (e.g., giving personalized movie recommendations via dialog). Add modules for evoking and sustaining sociability with the user, and the accumulation of processing latencies through the modules results in considerable turn-taking delays. We introduce incremental speech processing into the generation pipeline of the system to overcome this challenge with only minimal changes to the system architecture, through partial underspecification that is resolved as necessary. In a user study with a sociable movie recommendation agent, turn-taking delays are objectively diminished; furthermore, users not only rate the incremental system as more responsive, but also rate its recommendation performance as higher.
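The following minimal sketch illustrates the idea of partial underspecification in such a pipeline: the system starts producing a response whose recommendation slot is still unresolved and fills it in once the slow module delivers its result. The module names, the threading setup, and the example utterance are illustrative assumptions, not the system's actual architecture.

```python
# Minimal sketch of partial underspecification in an output pipeline: speaking
# begins while the recommendation slot is still unresolved, masking the latency
# of the slow recommender module. All names are illustrative.
import threading, time, queue

def slow_recommender(result: queue.Queue):
    time.sleep(1.5)                          # stands in for recommendation latency
    result.put("The Grand Budapest Hotel")

def speak(chunk: str):
    print(chunk, flush=True)                 # stands in for incremental speech synthesis

result = queue.Queue()
threading.Thread(target=slow_recommender, args=(result,)).start()

speak("Well, let me think ...")              # produced immediately, masks the delay
speak("I would recommend")                   # underspecified part of the utterance
speak(result.get())                          # blocks only for the remaining latency
```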
Ellipses denote the omission of one or more grammatically necessary phrases. In this paper, we demonstrate how to identify such ellipses as a rhythmical pattern in modern and postmodern free verse poetry, using data from lyrikline, which contains the corresponding audio recording of each poem as spoken by the original author. We present a feature engineering approach based on literary analysis as well as a neural network-based approach for the identification of ellipses within the lines of a poem. A contrast class to the ellipsis is defined from poems consisting of complete and correct sentences. The feature-based approach uses features derived from a parser, such as verb, comma, and sentence-ending punctuation. The neural network classifier is trained at the line level to integrate the textual information, the spoken recitation, and the pause information between lines, and to integrate information across the lines within a poem. A statistical analysis of the poets' gender showed that 65% of all elliptical poems were written by female poets. The best result, measured by the weighted F-measure, for the classification of ellipsis against the contrast class is 0.94 with the neural network-based approach. The best result for the classification of elliptical lines is 0.62 with the feature-based approach.
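As a rough sketch of what parser-derived line features can look like (the paper's actual feature set is richer and based on literary analysis), the snippet below computes a few such indicators with spaCy; the model name is an assumption and has to be installed separately.

```python
# Illustrative extraction of parser-derived line features for ellipsis detection.
# The feature set and the German model name are assumptions, not the paper's setup.
import spacy

nlp = spacy.load("de_core_news_sm")  # assumed German model, installed separately

def line_features(line: str) -> dict:
    doc = nlp(line)
    return {
        "has_verb": any(t.pos_ in ("VERB", "AUX") for t in doc),  # missing verbs hint at ellipsis
        "has_comma": any(t.text == "," for t in doc),
        "ends_with_sentence_punct": line.rstrip().endswith((".", "!", "?")),
        "num_tokens": len(doc),
    }

# line_features("Im Nebel: nur Schatten")  ->  e.g. {"has_verb": False, ...}
```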
This work aims to discern the poetics of concrete poetry by using a corpus-based classification focusing on the two most important techniques used within concrete poetry: semantic decomposition and syntactic permutation. We demonstrate how to identify concrete poetry in modern and postmodern free verse. A class contrasting with concrete poetry is defined on the basis of poems with complete and correct sentences. We used data from lyrikline, which contain both the written and the spoken form of poems as read by the original author. We explored two approaches for the identification of concrete poetry. The first is based on the definition of concrete poetry in literary theory and extracts various types of features derived from a parser, such as verb, noun, comma, sentence ending, conjunction, and asemantic material. The second is a neural network-based approach, which is theoretically less informed by human insight, as it does not have access to features established by scholars. This approach uses the following inputs: textual information and the spoken recitation of poetic lines as well as information about pauses between lines. The results of the neural network-based approach are more accurate than those of the feature-based approach. The best result, measured by the weighted F-measure, for the classification of concrete poetry vis-à-vis the contrasting class is 0.96.
Translation systems aim to perform a meaning-preserving conversion of linguistic material (typically text but also speech) from a source to a target language (and, to a lesser degree, between the corresponding socio-cultural contexts). Dubbing, i.e., the lip-synchronous translation and revoicing of speech, adds constraints on the close matching of phonetic and resulting visemic synchrony characteristics of source and target material. There is an inherent conflict between a translation's meaning preservation and 'dubbability', and the resulting trade-off can be controlled by weighting the synchrony constraints. We introduce our work, which to the best of our knowledge is the first of its kind, on integrating synchrony constraints into the machine translation paradigm. We present first results for the integration of synchrony constraints into encoder-decoder-based neural machine translation and show that considerably more 'dubbable' translations can be achieved with only a small impact on BLEU score, and that dubbability improves more steeply than BLEU degrades.
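One common way to realize such a controllable trade-off, sketched here purely as an assumption about how the weighting could look rather than as the paper's actual method, is to add a weighted synchrony penalty to the standard translation loss:

```python
# Hedged sketch of trading off translation quality against 'dubbability' through a
# weighted auxiliary loss; the synchrony penalty below (a mismatch in per-sentence
# synchrony descriptors such as syllable counts) is an illustrative stand-in for
# the paper's synchrony constraints.
import torch
import torch.nn.functional as F

def combined_loss(logits, target_ids, src_sync, tgt_sync, lam=0.3):
    """logits: (batch, tgt_len, vocab); target_ids: (batch, tgt_len);
    src_sync / tgt_sync: per-sentence synchrony descriptors as tensors."""
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), target_ids.reshape(-1))
    sync_penalty = torch.mean((src_sync - tgt_sync).float() ** 2)  # descriptor mismatch
    return ce + lam * sync_penalty  # larger lam -> more 'dubbable', possibly lower BLEU
```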