Fakultät Informatik und Mathematik
ELSI accompanying research is increasingly becoming an integral part of research and development projects, not least because third-party funders such as the EU or the BMBF explicitly require in many funding lines that technology be developed in a value-based manner, i.e. that 'responsible research and innovation' (RRI) be practiced. The guiding idea is that the design of technology should take the interests of all stakeholders into account; identifying these interests, and striking a morally grounded balance between conflicting ones, is to be achieved through participatory procedures. Ethical guidelines and other codifications of norms and values are only of limited use for this; what is needed in practice are applicable procedures. In recent years such procedures have been developed and already tested and applied in R&D practice. Three such procedures, based on discourse ethics and combinable with one another, are presented.
SQL-on-Hadoop processing engines have become state-of-the-art in data lake analysis. However, the skills required to tune such systems are rare. This has inspired automated tuning advisors, which profile the query workload and produce tuning setups for the low-level MapReduce jobs. Yet with highly dynamic query workloads, repeated re-tuning costs time and money in IaaS environments. In this paper, we focus on reducing the costs of up-front tuning. At the heart of our approach is the observation that a SQL query is compiled into a query plan of MapReduce jobs. While the plans differ from query to query, single jobs tend to be similar between queries. We introduce the notion of the code signature of a MapReduce job and, based on it, our concept of job similarity. We show that we can effectively recycle tuning setups from similar MapReduce jobs that have already been profiled. In doing so, we can leverage any third-party tuning advisor for MapReduce engines. We show that by recycling tuning setups, we can reduce the time spent on profiling by 50% in the TPC-H benchmark.
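A minimal Python sketch of the recycling idea: hash a job's map/reduce code into a signature and reuse a cached tuning setup whenever a job with the same signature was already profiled. The names here (code_signature, TuningCache) and the exact-hash matching are illustrative assumptions, not the paper's interface; the paper's notion of job similarity is richer than an exact match.

    import hashlib

    def code_signature(mapper_src: str, reducer_src: str) -> str:
        # Hash the job's map/reduce code; the paper's signature abstracts
        # further over job-specific constants, which a plain hash does not.
        digest = hashlib.sha256()
        digest.update(mapper_src.encode())
        digest.update(reducer_src.encode())
        return digest.hexdigest()

    class TuningCache:
        """Maps code signatures of profiled jobs to their tuning setups."""

        def __init__(self):
            self._setups = {}  # signature -> tuning parameters

        def store(self, signature: str, setup: dict) -> None:
            self._setups[signature] = setup

        def lookup(self, signature: str):
            # A hit means a similar job was already profiled, so the
            # costly profiling run can be skipped.
            return self._setups.get(signature)

    cache = TuningCache()
    sig = code_signature("def map(k, v): ...", "def reduce(k, vs): ...")
    setup = cache.lookup(sig)
    if setup is None:
        setup = {"mapreduce.task.io.sort.mb": 256}  # from a tuning advisor
        cache.store(sig, setup)

On a cache hit, any third-party advisor's previously computed setup is applied directly instead of triggering a new profiling run.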
This article revisits an analysis of (in)accuracies of time series averaging under dynamic time warping (dtw) conducted by Niennattrakul and Ratanamahatana [16]. They proposed a correctness criterion for dtw-averages and postulated that dtw-averages can drift out of the cluster of time series to be averaged. They claimed that dtw-averages are inaccurate if they violate the correctness criterion or suffer from the drift-out phenomenon, and they conjectured that such inaccuracies are caused by the lack of the triangle inequality. In this article, we show that a rectified version of the correctness criterion is unsatisfiable and that the concept of drift-out is geometrically and operationally inconclusive. Satisfying the triangle inequality is insufficient to achieve correctness and unnecessary to overcome the drift-out phenomenon. We place the concept of drift-out on a principled basis and show that Fréchet means never drift out. The adjusted drift-out serves as a test of the extent to which an approximated dtw-average is coherent. Empirical results show that approximations obtained by state-of-the-art averaging methods are incoherent in over a third of all cases.
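For reference, a standard dynamic-programming implementation of the dtw-distance in Python with squared local costs. This is the textbook formulation, not code from the article; the correctness and coherence criteria discussed above are built on top of this distance.

    import numpy as np

    def dtw(x: np.ndarray, y: np.ndarray) -> float:
        # D[i, j] holds the minimal accumulated alignment cost of
        # x[:i] against y[:j].
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (x[i - 1] - y[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(np.sqrt(D[n, m]))

    print(dtw(np.array([0.0, 1.0, 0.0]), np.array([0.0, 2.0, 0.0])))  # 1.0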
The sample mean is one of the most fundamental concepts in statistics, with far-reaching implications for data mining and pattern recognition. Compared to aggregated levels, household load profiles are more intermittent, and a specific error measure based on local permutations has been proposed to cope with this when comparing profiles. We formally describe a distance based on this error, the local permutation invariant (LPI) distance, and introduce the sample mean problem in the LPI space. An existing exact solution has exponential complexity and is tractable only for very few profiles. We propose three subgradient-based approximation algorithms and compare them empirically on 100 households of the CER dataset. We find that stochastic subgradient descent approximates the mean best, while the majorize-minimize mean is a good compromise for applications, as no hyperparameter tuning is needed. We show how the algorithms can be used in forecasting and clustering to achieve more appropriate results than with the arithmetic mean.
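A sketch of the stochastic subgradient scheme for a sample mean, assuming the usual Fréchet formulation F(z) = (1/N) Σ d(z, x_i)². The LPI subgradient from the article is not reproduced here; the squared Euclidean distance stands in as a placeholder, for which the iteration converges to the arithmetic mean, so only the optimization loop itself carries over.

    import numpy as np

    def ssg_mean(profiles: np.ndarray, epochs: int = 50, eta0: float = 0.1,
                 seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        z = profiles[rng.integers(len(profiles))].copy()  # random initialization
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(len(profiles)):
                t += 1
                eta = eta0 / np.sqrt(t)            # diminishing step size
                subgrad = 2.0 * (z - profiles[i])  # subgradient of d(z, x_i)^2
                z -= eta * subgrad
        return z

    profiles = np.random.default_rng(1).random((5, 24))  # 5 daily load profiles
    print(ssg_mean(profiles).round(2))

Swapping in a subgradient of the squared LPI distance would yield the variant the article evaluates.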
The literature postulates that the dynamic time warping (dtw) distance can cope with temporal variations, yet time series are stored and processed as if the dtw-distance could not cope with such variations. To address this inconsistency, we first show that the dtw-distance is not warping-invariant, despite its name and contrary to its characterization in some publications. The lack of warping-invariance contributes to the inconsistency mentioned above and to otherwise strange behavior. To eliminate these peculiarities, we convert the dtw-distance into a warping-invariant semi-metric, called the time-warp-invariant (twi) distance. Empirical results suggest that the error rates of the twi and dtw nearest-neighbor classifiers are practically equivalent in a Bayesian sense. However, the twi-distance requires less storage and computation time than the dtw-distance for a broad range of problems. These results challenge the current practice of applying the dtw-distance in nearest-neighbor classification and suggest the proposed twi-distance as a more efficient and consistent option.
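A small counterexample to warping-invariance, using the textbook dtw implementation sketched after the previous abstract (repeated here so the snippet runs standalone): duplicating a sample of x is a warping of x, yet it changes the dtw-distance to y. The twi-distance itself is specific to the article and not reproduced.

    import numpy as np

    def dtw(x, y):
        D = np.full((len(x) + 1, len(y) + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, len(x) + 1):
            for j in range(1, len(y) + 1):
                cost = (x[i - 1] - y[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(np.sqrt(D[len(x), len(y)]))

    x = np.array([0.0, 1.0, 0.0])
    x_warped = np.array([0.0, 1.0, 1.0, 0.0])  # duplicate one sample of x
    y = np.array([0.0, 2.0, 0.0])

    # Prints 1.0 and roughly 1.414: the dtw-distance is not warping-invariant.
    print(dtw(x, y), dtw(x_warped, y))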
Averaging time series under dynamic time warping is an important tool for improving nearest-neighbor classifiers and formulating centroid-based clustering. The most promising approach poses time series averaging as the problem of minimizing a Fréchet function. Minimizing the Fréchet function is NP-hard and has so far been addressed by heuristics and inexact strategies. Our contributions are as follows: we first discuss some inaccuracies in the literature on exact mean computation in dynamic time warping spaces. Then we propose an exponential-time dynamic program for computing a global minimum of the Fréchet function. The proposed algorithm is useful for benchmarking and for evaluating known heuristics. In addition, we present an exact polynomial-time algorithm for the special case of binary time series. Based on the proposed exponential-time dynamic program, we empirically study properties of interest for devising better heuristics, such as the uniqueness and length of a mean. Experimental evaluations indicate substantial deficits of state-of-the-art heuristics in terms of their output quality.
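To make the optimization problem concrete, a naive brute-force sketch for binary time series of fixed length L: enumerate all 2^L candidates and return a global minimizer of the Fréchet function. This exhaustive baseline only illustrates what exact mean computation asks for; the article's dynamic program and its polynomial-time algorithm for the binary case are substantially more refined, and the true search space also spans candidate means of different lengths.

    import itertools
    import numpy as np

    def dtw_sq(x, y):
        # Squared dtw-distance via the standard dynamic program.
        D = np.full((len(x) + 1, len(y) + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, len(x) + 1):
            for j in range(1, len(y) + 1):
                cost = (x[i - 1] - y[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[len(x), len(y)])

    def frechet(z, sample):
        # Fréchet function: mean squared dtw-distance to the sample.
        return sum(dtw_sq(z, x) for x in sample) / len(sample)

    def exact_binary_mean(sample, L):
        candidates = (np.array(bits, dtype=float)
                      for bits in itertools.product((0, 1), repeat=L))
        return min(candidates, key=lambda z: frechet(z, sample))

    sample = [np.array([0.0, 1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0, 0.0])]
    print(exact_binary_mean(sample, L=4))

Any heuristic's output can be scored with frechet() against such an exact minimizer, which is how the deficits reported above can be quantified.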