Information and Communication
Refine
Year of publication
Document Type
- conference proceeding (article) (171)
- Article (72)
- Part of a Book (12)
- conference proceeding (presentation, abstract) (10)
- conference talk (4)
- Preprint (3)
- Report (3)
- Book (2)
- Bachelor-/Diplom Thesis (1)
- conference proceeding (volume) (1)
Is part of the Bibliography
- no (283)
Keywords
- speech synthesis (5)
- Cloud Computing (4)
- Software (4)
- incremental processing (4)
- prosody (4)
- spoken dialogue systems (4)
- Chief Information Officer (3)
- Insourcing (3)
- Literaturbericht (3)
- Magneto-optics (3)
Institute
- Fakultät Informatik und Mathematik (198)
- Fakultät Elektro- und Informationstechnik (79)
- Laboratory for Safe and Secure Systems (LAS3) (31)
- Regensburg Strategic IT Management (ReSITM) (21)
- Labor Intelligente Materialien und Strukturen (8)
- Labor Elektroakustik (6)
- Fakultät Angewandte Natur- und Kulturwissenschaften (5)
- Labor Optische Übertragungssysteme (5)
- FuE-Anwenderzentrum Informations- und Kommunikationstechnologien (IKT) (4)
- Labor Industrielle Elektronik (4)
Review status
- peer-reviewed (144)
- reviewed (8)
When humans speak, they do not plan their full utterance in all detail before beginning to speak, nor do they speak piece-by-piece while ignoring their full message – instead, humans use partial representations in which they fill in the missing parts as the utterance unfolds. Incremental speech synthesizers, in contrast, have not yet made use of partial representations and the information contained therein. We analyze the quality of prosodic parameter assignments (pitch and duration) generated from partial utterance specifications (substituting defaults for missing features) in order to determine the requirements that symbolic incremental prosody modelling should meet. We find that broader, higher-level information helps to improve prosody even if lower-level information about the near future is not yet available. Furthermore, we find that symbolic phrase-level or utterance-level information is most helpful towards the end of the phrase or utterance, respectively, that is, when this information is becoming available even in the incremental case. Thus, the negative impact of incremental processing can be minimized by using partial representations that are filled in incrementally.
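To make the default-substitution idea concrete, here is a minimal sketch; the feature set and all names are our own illustration, not the paper's:

```python
# Illustrative sketch: context features of a phone that depend on the (yet
# unknown) future of the utterance are replaced by defaults derived from
# training data before prosody parameters are generated incrementally.
# Feature names and default values are hypothetical.

TRAINING_DEFAULTS = {              # e.g. modes/means observed in a corpus
    "phrase_length": 5,
    "words_to_utterance_end": 3,
    "next_word_pos": "NOUN",
}

def complete_context(partial_context: dict) -> dict:
    """Fill features unknown in the incremental case with defaults."""
    context = dict(TRAINING_DEFAULTS)
    context.update(partial_context)    # known (past/local) features win
    return context

# At utterance onset, only local features may be available yet:
print(complete_context({"next_word_pos": "VERB"}))
```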
Automatic speech recognition (ASR) technology has been developed to such a level that off-the-shelf distributed speech recognition services are available free of cost. These allow researchers to integrate speech into their applications with little development effort or expert knowledge, and they lead to better results than previously used open-source tools.
Often, however, such services do not accept language models or grammars but process free speech from any domain. While results are very good given the enormous size of the search space, results frequently contain out-of-domain words or constructs that cannot be understood by subsequent domain-dependent natural language understanding (NLU) components. We present a versatile post-processing technique based on phonetic distance that integrates domain knowledge with open-domain ASR results, leading to improved ASR performance. Notably, our technique is able to make use of domain restrictions using various degrees of domain knowledge, ranging from pure vocabulary restrictions via grammars or N-Grams to restrictions of the acceptable utterances. We present results for a variety of corpora (mainly from human-robot interaction) where our combined approach significantly outperforms Google ASR as well as a plain open-source ASR solution.
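A minimal sketch of the post-processing idea: snap each open-domain ASR word to the closest in-domain vocabulary word by edit distance. The paper works with phonetic distances; plain character-level Levenshtein distance and a toy vocabulary stand in here to keep the example self-contained:

```python
# Toy stand-in for phonetic-distance post-processing of open-domain ASR.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

DOMAIN_VOCAB = ["cube", "cone", "pyramid", "left", "right"]  # toy domain

def rescore(asr_words):
    """Map each recognized word to its nearest in-domain word."""
    return [min(DOMAIN_VOCAB, key=lambda w: levenshtein(word, w))
            for word in asr_words]

print(rescore(["tube", "lefft"]))   # -> ['cube', 'left']
```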
Human speakers plan and deliver their utterances incrementally, piece-by-piece, and it is obvious that their choice regarding phonetic details (and the details' peculiarities) is rarely determined by globally optimal solutions. In contrast, parametric speech synthesizers use a full-utterance context when optimizing vocoding parameters and when determining HMM states. Apart from being cognitively implausible, this impedes incremental use-cases, where the future context is often at least partially unavailable. This paper investigates the 'locality' of features in parametric speech synthesis voices and takes some missing steps towards better HMM state selection and prosody modelling for incremental speech synthesis.
It is established that driver distraction is the result of sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his speech in situations potentially requiring more attention from the driver, but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment to interrupt when the driver needs to be fully attentive to the driving task and subsequently resume its delivery. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform equally well to control conditions (no concurrent task) when the system intelligently interrupts and resumes.
When a passenger speaks to a driver, he or she is co-located with the driver, is generally aware of the situation, and can stop speaking to allow the driver to focus on the driving task. In-car dialogue systems ignore these important aspects, making them more distracting than even cell-phone conversations. We developed and tested a "situationally-aware" dialogue system that can interrupt its speech when a situation which requires more attention from the driver is detected, and can resume when driving conditions return to normal. Furthermore, our system allows driver-controlled resumption of interrupted speech via verbal or visual cues (head nods). Over two experiments, we found that the situationally-aware spoken dialogue system improves driving performance and attention to the speech content, while driver-controlled speech resumption does not hinder performance in either of these two tasks.
Robots should appropriately give reasons for their actions when these actions affect a human's action or goal space. Communicating reasons may help the human understand the robot's intents and may initiate joint action, i.e., accepting the robot's goals and cooperating on the robot's actions. However, to be efficient, the communication of reasons should be limited to the necessary rather than aim at completeness, conforming to the Gricean Maxim of Quantity. Furthermore, what is necessary only becomes apparent as the situation evolves and hence, for seamless interaction, ongoing utterances must be adapted as they happen. We present a system that flexibly gives reasons in a reduced setting in which the robot needs to intrude a human's personal space in order to reach its goal.
We propose to use a model of personal space to initiate communication while passing a human, thereby acknowledging that humans are not just a special kind of obstacle to be avoided but potential interaction partners. As a simple form of interaction, our system communicates an apology while closely passing a human. To this end, we present a software architecture that integrates a social-spaces knowledge base and a component for incremental speech production. Incrementality ensures that the robot's utterance can be adapted to fit the developing situation in a natural way. Observer ratings show that personal-space intrusion is perceived as both natural and polite if the robot has the capability to utter and adapt an apology in an incremental way, whereas it is perceived as unfriendly if the robot intrudes personal space without saying anything. Moreover, the robot is perceived as less natural if it does not adapt.
Incremental speech synthesis aims at delivering the synthetic voice while the sentence is still being typed. One of the main challenges is the online estimation of the target prosody from partial knowledge of the sentence's syntactic structure. In the context of HMM-based speech synthesis, this typically results in missing segmental and suprasegmental features, which describe the linguistic context of each phoneme. This study describes a voice training procedure which explicitly integrates potential uncertainty about some contextual features. The proposed technique is compared to a (previously published) baseline approach, which substitutes a missing contextual feature with a default value calculated on the training set. Both techniques were implemented in an HMM-based Text-To-Speech system for French and compared using objective and perceptual measurements. Experimental results show that the proposed strategy outperforms the baseline technique for this language.
The Spoken Wikipedia project unites volunteer readers of encyclopedic entries. Their recordings make encyclopedic knowledge accessible to persons who are unable to read (due to alexia, visual impairment, or because their eyes are otherwise occupied, e.g. while driving). However, on Wikipedia, recordings are available as raw audio files that can only be consumed linearly, without the possibility of targeted navigation or search. We present a reading application which uses an alignment between the recording, text and article structure and which allows users to navigate spoken articles through a graphical or voice-based user interface (or a combination thereof). We present the results of a usability study in which we compare the two interaction modalities. We find that both types of interaction enable users to navigate articles and to find specific information much more quickly than with a sequential presentation of the full article. In particular, when the VUI is not restricted by speech recognition and understanding issues, this interface is on par with the graphical interface and thus a real option for browsing Wikipedia without the need for vision or reading.
We present a corpus of time-aligned spoken data of Wikipedia articles as well as the pipeline that allows generating such corpora for many languages. There are initiatives to create and sustain spoken Wikipedia versions in many languages; hence the data is freely available, grows over time, and can be used for automatic corpus creation. Our pipeline automatically downloads and aligns this data. The resulting German corpus currently totals 293h of audio, of which we align 71h in full sentences and another 86h of sentences with some missing words. The English corpus consists of 287h, for which we align 27h in full sentences and 157h with some missing words. Results are publicly available.
Turning pages of sheet music is a recurring problem for musicians. It is often solved by an assistant of the musician, the so-called page turner. However, many musicians rarely have this support while practising. In this article, we present an application for mobile devices that supports turning pages of piano scores in several ways. In a study with professional musicians and piano students, these approaches were weighed against each other. The results show that computer-assisted page turning has advantages over conventional page turning.
A recently proposed concept for training reverberation-robust acoustic models for automatic speech recognition using pairs of clean and reverberant data is extended from word models to tied-state triphone models in this paper. The key idea of the concept, termed ICEWIND, is to use the clean data for the temporal alignment and the reverberant data for the estimation of the emission densities. Experiments with the 5000-word Wall Street Journal corpus confirm the benefits of ICEWIND with tied-state triphones: While the training time is reduced by more than 90%, the word accuracy is improved at the same time, both for room-specific and multi-style hidden Markov models. Since the acoustic models trained with ICEWIND need less Gaussian components for the emission densities to achieve comparable recognition rates as Baum-Welch acoustic models, ICEWIND also allows for a reduced decoding complexity.
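A compact sketch of the ICEWIND idea as described above: state-frame alignments are computed on the clean channel and the emission densities are then estimated from the time-parallel reverberant features. Names, shapes and the random stand-in data are ours:

```python
# Illustrative ICEWIND-style emission estimation (diagonal Gaussians).
import numpy as np

def icewind_emissions(clean_alignment, reverberant_feats, n_states):
    """clean_alignment: state index per frame, obtained on the clean signal.
    reverberant_feats: (n_frames, dim) features of the reverberant signal."""
    densities = {}
    for s in range(n_states):
        frames = reverberant_feats[clean_alignment == s]
        densities[s] = (frames.mean(axis=0), frames.var(axis=0))
    return densities

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 13))        # stand-in reverberant MFCCs
align = rng.integers(0, 3, size=100)      # stand-in clean-data alignment
print(icewind_emissions(align, feats, 3)[0][0].shape)   # (13,)
```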
Many MapReduce jobs for analyzing Big Data require many hours and have to be repeated again and again because the base data changes continuously. In this paper we propose Marimba, a framework for making MapReduce jobs incremental: a recomputation of a job then only needs to process the changes since the last computation. This accelerates the execution and enables more frequent recomputations, which leads to more up-to-date results. Our approach is based on concepts that are popular in the area of materialized views in relational database systems, where a view can be updated by aggregating only the changes in the base data onto the previous result.
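The incremental-recomputation idea, reduced to a word count (Marimba's own API is not shown in the abstract; this is our minimal stand-in): the previous result is combined with inserted and deleted records only, as in materialized view maintenance:

```python
# Toy incremental job: merge deltas into the previous result.
from collections import Counter

def full_job(docs):
    return Counter(w for d in docs for w in d.split())

def incremental_job(previous, inserted, deleted):
    result = Counter(previous)
    result.update(w for d in inserted for w in d.split())    # +1 per new word
    result.subtract(w for d in deleted for w in d.split())   # -1 per removed
    return +result                      # drop zero/negative counts

prev = full_job(["big data", "map reduce"])
print(incremental_job(prev, inserted=["big jobs"], deleted=["big data"]))
# -> counts for 'big', 'jobs', 'map', 'reduce'; 'data' has dropped out
```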
With the increasing centralization of resources in IT infrastructure and the growing number of cloud services, database management systems (DBMS) will be outsourced more and more to Infrastructure-as-a-Service (IaaS) providers. Outsourcing entire databases, or the computation power for processing Big Data, to an external provider also means that the provider has full access to the information contained in the database. In this article we propose a feasible solution based on Order-Preserving Encryption (OPE) and further state-of-the-art encryption methods to sort and process Big Data on external resources without exposing the unencrypted data to the IaaS provider. We also introduce a proof-of-concept client for Google BigQuery as an example IaaS provider.
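A toy illustration of the OPE property that makes this work: a strictly increasing random mapping lets the untrusted provider sort and range-query ciphertexts without seeing plaintexts. This construction is for illustration only and has none of the security guarantees of a real OPE scheme:

```python
# Toy order-preserving "encryption": strictly increasing random gaps.
import random

def ope_keygen(domain_max, expansion=16, seed=42):
    rnd = random.Random(seed)
    table, cipher = {}, 0
    for value in range(domain_max + 1):
        cipher += rnd.randint(1, expansion)   # strictly increasing
        table[value] = cipher
    return table

key = ope_keygen(100)
plain = [42, 7, 99, 7]
enc = [key[v] for v in plain]

# The provider can sort ciphertexts; the order matches the plaintext order.
assert [e for _, e in sorted(zip(plain, enc))] == sorted(enc)
print(sorted(enc))
```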
A method, system, and computer-usable non-transitory storage device for dynamic voice codec adaptation are disclosed. The voice codec adapts in real time to devote more bits to audio quality when it is most needed, and fewer bits to less important parts of utterances. Dialog knowledge is utilized to find compression opportunities and to adjust the bitrate moment-by-moment, based on the inferred value of each frame. Frame importance and appropriate transmission fidelity are predicted based on prosodic features and models of dialog dynamics. This technique provides the same communication quality with lower spectrum needs, fewer antennas, and less battery drain.
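A sketch of the claimed mechanism, not the patented implementation: a per-frame bitrate is chosen from a predicted importance score; the importance model (prosody plus dialog dynamics) is abstracted into a stub, and all values are illustrative:

```python
# Per-frame bitrate selection driven by predicted frame importance.

BITRATES = [2400, 8000, 16000]     # illustrative codec modes (bit/s)

def frame_importance(frame) -> float:
    """Stub for a model using prosodic features and dialog state (0..1)."""
    return frame["energy"] * (1.5 if frame["turn_relevant"] else 0.5)

def select_bitrate(frame) -> int:
    score = min(frame_importance(frame), 1.0)
    return BITRATES[min(int(score * len(BITRATES)), len(BITRATES) - 1)]

frames = [{"energy": 0.1, "turn_relevant": False},
          {"energy": 0.9, "turn_relevant": True}]
print([select_bitrate(f) for f in frames])    # [2400, 16000]
```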
Most modern and post-modern poems have developed a post-metrical idea of lyrical prosody that employs rhythmical features of everyday language and prose instead of a strict adherence to rhyme and metrical schemes. This development is subsumed under the term free verse prosody. We present our methodology for the large-scale analysis of modern and post-modern poetry in both their written form and as spoken aloud by the author. We employ language processing tools to align text and speech, to generate a null-model of how the poem would be spoken by a naïve reader, and to extract contrastive prosodic features used by the poet. On these, we intend to build our model of free verse prosody, which will help to understand, differentiate and relate the different styles of free verse poetry. We plan to use our processing scheme on large amounts of data to iteratively build models of styles, to validate and guide manual style annotation, to identify further rhythmical categories, and ultimately to broaden our understanding of free verse poetry. In this paper, we report on a proof-of-concept of our methodology using smaller amounts of poems and a limited set of features. We find that our methodology helps to extract differentiating features in the authors’ speech that can be explained by philological insight. Thus, our automatic method helps to guide the literary analysis and this in turn helps to improve our computational models.
Predictive incremental parsing produces syntactic representations of sentences as they are produced, e.g. by typing or speaking. In order to generate connected parses for such unfinished sentences, upcoming word types can be hypothesized and structurally integrated with already realized words. For example, the presence of a determiner as the last word of a sentence prefix may indicate that a noun will appear somewhere in the completion of that sentence, and the determiner can be attached to the predicted noun. We combine the forward-looking parser predictions with backward-looking N-gram histories and analyze in a set of experiments the impact on language models, i.e. stronger discriminative power but also higher data sparsity. Conditioning N-gram models, MaxEnt models or RNN-LMs on parser predictions yields perplexity reductions of about 6%. Our method (a) retains online decoding capabilities and (b) incurs relatively little computational overhead which sets it apart from previous approaches that use syntax for language modeling. Our method is particularly attractive for modular systems that make use of a syntax parser anyway, e.g. as part of an understanding pipeline where predictive parsing improves language modeling at no additional cost.
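A minimal sketch of the conditioning idea: the N-gram history is extended by the word type the incremental parser predicts next, which sharpens the distribution. Training data, smoothing and all names are toy stand-ins, not the paper's setup:

```python
# Parser-conditioned N-gram sketch with add-one smoothing.
from collections import Counter

# (history, parser-predicted upcoming word type, observed next word)
train = [(("the",), "NOUN", "dog"), (("the",), "NOUN", "cat"),
         (("the",), "ADJ", "old"), (("a",), "NOUN", "dog")]

counts, contexts = Counter(), Counter()
vocab = {w for _, _, w in train}
for hist, pred, word in train:
    counts[(hist, pred, word)] += 1
    contexts[(hist, pred)] += 1

def p(word, hist, pred):
    return (counts[(hist, pred, word)] + 1) / (contexts[(hist, pred)] + len(vocab))

# The parser prediction concentrates mass on type-consistent words:
print(p("dog", ("the",), "NOUN"), p("old", ("the",), "NOUN"))  # 0.33 vs 0.17
```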
Engineering is based on the understanding of causes and effects. Thus, causality should also guide the safety assessment of complex systems such as autonomous driving cars. To ensure the safety of the intended functionality of these systems, normative regulations like ISO 21448 recommend scenario-based testing. An important task here is to identify critical scenarios, so-called edge and corner cases. Data-driven approaches to this task (e.g. based on machine learning) cannot adequately address a constantly changing operational design domain. Model-based approaches offer a remedy – they allow incorporating different sources of knowledge (e.g. data, human experts) into safety considerations. With this paper, we outline a novel approach for ensuring automotive system safety. We propose to use structural causal models as a probabilistic modelling language to combine knowledge about an open-context environment from different sources. Based on these models, we investigate parameter configurations that are candidates for critical scenarios. In this paper, we first discuss some aspects of scenario-based testing. We then provide an informal introduction to causal models and relate their development lifecycle to the established V-model. Finally, we outline a generic workflow for using causal models to identify critical scenarios and highlight some challenges that arise in the process.
With the increase in demand for services in the automotive industry, automotive enterprises prefer to collaborate with qualified cross-domain partners to provide complex automotive functions (or services) such as autonomous driving, OTA (Over The Air) vehicle updates, V2X (Vehicle-to-Everything) communication, etc. One key element in cross-domain enterprise collaboration is the mutual agreement between interfaces of software components. In this context, model-to-model mappings between the software component models of heterogeneous frameworks for automotive services, and the exploration of synergies in their interface semantics, have become essential factors in improving the interoperability among automotive and other cross-domain enterprises. However, one of the challenges in achieving cross-domain component interface model-to-model mappings at an application level lies in detecting the interface semantics and the semantic relations that are conveyed in different component models in different frameworks. This paper addresses this challenge using a Model Driven Architecture (MDA) based analytical approach to explore interface semantic synergies in the cross-domain component meta-models that are used for automotive services. The approach applies manual semantic checking measurements at an application interface level to understand the meanings of and relations between the different meta-model entities of cross-domain framework software components. In this research, we attempt to ensure that interface description models of software components from heterogeneous frameworks can be compared, correlated and re-used for automotive services based on semantic synergies. We demonstrate our approach using component meta-models from cross-domain enterprises that are used for the automotive application domain.
In recent years, the semantic mapping of application software components' ontologies has emerged as a major research challenge in the automotive application domain, which involves several cross-enterprise knowledge application frameworks. The same knowledge formalized by different experts in different vehicle application frameworks leads to heterogeneous representations of components' interface data. Consequently, this is one of the most daunting impediments to semantic interoperability between the service components in cooperative automotive systems. From a modeling perspective, in the absence of standardized domain-based unified modeling techniques, the orchestration and resolution of semantic data interoperability between the interface models of various vehicle application frameworks' components remains a challenge. However, this challenge can be addressed using ontological metamodeling by specifying semantic associations between components' interface model concepts based on the domain knowledge. Apart from the semantic mapping of interface ontological metamodels, this work also defines quality metrics to determine the degree of semantic alignment achieved between the various interface ontologies. Additionally, to reduce the development time and cost towards semantic interoperability, this work proposes a semi-automated plugin tool that applies the evaluated quality metrics to the semantic mapping of real-world components' interface models.
Development and verification of modern, dependable automotive systems require appropriate modelling approaches. Classic automotive safety is described by the normative regulations ISO 26262, its relative ISO/PAS 21448, and their respective methodologies. In recent publications, an emerging demand to combine environmental influences, machine learning, or reasoning under uncertainty with standard-compliant analysis techniques can be noticed. Therefore, adapting established methods like FTA and proper tool support is necessary. We argue that Bayesian Networks (BNs) can be used as a central component to address and merge these demands. In this paper, we present our Open-Source Python package BayesianSafety. First, we review how BNs relate to data-driven methods, model-to-model transformations, and causal reasoning. Together with FTA and ETA, these models form the core functionality of our software. After describing currently implemented features and possibilities of combining individual modelling approaches, we provide an informal view of the tool’s architecture and of the resulting software ecosystem. By comparing selected publicly available safety and reliability analysis libraries, we outline that many relevant methodologies yield specialized implementations. Finally, we show that there is a demand for a flexible, unifying analysis tool that allows researching system safety by using multi-model and multi-domain approaches.
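BayesianSafety's own API is not shown in the abstract; to illustrate the core idea it builds on (casting a fault-tree OR gate as a Bayesian Network and querying it probabilistically), here is a sketch using the generic pgmpy library. Probabilities and variable names are invented:

```python
# A two-basic-event OR gate as a BN (pgmpy 0.1.19+ assumed).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("SensorFail", "Hazard"), ("SwFault", "Hazard")])
model.add_cpds(
    TabularCPD("SensorFail", 2, [[0.99], [0.01]]),
    TabularCPD("SwFault", 2, [[0.995], [0.005]]),
    # Deterministic OR gate: Hazard=1 iff at least one parent is 1.
    TabularCPD("Hazard", 2,
               [[1, 0, 0, 0],     # P(Hazard=0 | SensorFail, SwFault)
                [0, 1, 1, 1]],    # P(Hazard=1 | SensorFail, SwFault)
               evidence=["SensorFail", "SwFault"], evidence_card=[2, 2]),
)
assert model.check_model()
# P(Hazard=1) = 1 - 0.99 * 0.995 = 0.01495, as in a classic fault tree.
print(VariableElimination(model).query(["Hazard"]))
```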
With autonomous driving, the system complexity of vehicles will increase drastically. This requires new approaches to ensure system safety. Looking at standards like ISO 26262 or ISO/PAS 21448 and their suggested methodologies, an increasing trend in the recent literature can be noticed to incorporate uncertainty. Often this is done by using Bayesian Networks as a framework to enable probabilistic reasoning. These models can also be used to represent causal relationships. Many publications claim to model cause-effect relations, yet rarely give a formal introduction of the implications and resulting possibilities such an approach may have. This paper aims to link the domains of causal reasoning and automotive system safety by investigating relations between causal models and approaches like FMEA, FTA, or GSN. First, the famous “Ladder of Causation” and its implications on causality are reviewed. Next, we give an informal overview of common hazard and reliability analysis techniques and associate them with probabilistic models. Finally, we analyse a mixed-model methodology called Hybrid Causal Logic, extend its idea, and build the concept of a causal shell model of automotive system safety.
Supervisory Control and Data Acquisition (SCADA) systems are used to control and monitor components within the energy grid, playing a significant role in the stability of the system. As a part of critical infrastructures, components in these systems have to fulfill a variety of different requirements regarding their dependability and must also undergo strict audit procedures in order to comply with all relevant standards. This results in a slow adoption of new functionalities. Due to the emerged threat of cyberattacks against critical infrastructures, extensive security measures are needed within these systems to protect them from adversaries and ensure a stable operation. In this work, a solution is proposed to integrate extensive security measures into current systems. By deploying additional security-gateways into the communication path between two nodes, security features can be integrated transparently for the existing components. The developed security-gateway complies with all regulatory requirements and features an internal architecture based on the separation-of-concerns principle to increase its security and longevity. The viability of the proposed solution has been verified in different scenarios, consisting of realistic field tests, security penetration tests and various performance evaluations.
Today's cyberphysical systems are increasingly prone to misuse. To secure existing and future software systems, introducing concepts of IT-Security and Secure Software Engineering (SecSE) in Software Engineering (SE) courses is essential for the academic education of future software engineers. This is important not only for computer science students, but also for engineering students studying topics of computing and SE. However, only little research exists on integrating these topics into traditional SE courses, especially for engineering students in non-computer-science majors. To narrow this gap, this paper contributes the design and evaluation of an exercise on modeling misuse cases alongside use cases, based on the inductive teaching method of problem-based learning (PBL). The exercise is part of an educational design research project investigating which learning content and teaching methods are suitable for integrating IT-Security and SecSE topics into the traditional SE education of engineering students, to convey factual knowledge as well as raise awareness of and interest in both topics during software development. We present the integration of the exercise design into a traditional SE course for engineering students and its evaluation. We evaluated the exercise design regarding the suitability of the design components, the learning content of misuse cases and the intended learning goals, as well as its impact on students' motivation and their interest in IT-Security. The paper then presents indications of the feasibility and success of the exercise design for teaching misuse cases to engineering students and sparking their interest in IT-Security.
In the real-time systems sector, various task models and corresponding tests exist to model and verify the schedulability of task sets on the system at hand. While those models and schedulability tests have been studied intensively from a theoretical point of view, it is hard to make use of them to compare the actual execution behavior of scheduling algorithms on a real system. In contrast to schedulability tests, simulators can help to investigate the performance of specific scheduling algorithms. One of the most generalized task models for describing parallel tasks is the Directed Acyclic Graph model, which represents tasks as a series of subtasks that depict the potentially parallel computations, together with precedence constraints that denote the order in which the subtasks are allowed to execute.
In this paper, we investigate various scheduling algorithms for the Directed Acyclic Graph model. For that, we first recapitulate the examined scheduling algorithms in detail and point out relevant differences. Subsequently, we present the evaluation of different global and federated scheduling algorithms using fine-grained parallel tasks. To this end, we generate random Directed Acyclic Graph tasks and simulate their execution on multiprocessor systems using scheduling algorithms such as global rate-monotonic and semi-federated scheduling as well as global scheduling policies using the thread pool model.
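A compact sketch of the simulation idea (not the paper's simulator): DAG subtasks become ready once all predecessors have finished and are then placed greedily on the earliest-free processor, a simple global list-scheduling approximation. The task set and costs are illustrative:

```python
# Greedy list-scheduling of a DAG task on m processors.

def makespan(wcet, edges, m):
    """wcet: {subtask: execution time}; edges: (pred, succ) constraints."""
    preds = {v: set() for v in wcet}
    succs = {v: [] for v in wcet}
    for a, b in edges:
        preds[b].add(a)
        succs[a].append(b)
    ready = [v for v in wcet if not preds[v]]
    missing = {v: len(preds[v]) for v in wcet}
    cores = [0.0] * m                       # time each core becomes free
    finish = {}
    while ready:
        v = ready.pop(0)
        earliest = max((finish[p] for p in preds[v]), default=0.0)
        core = min(range(m), key=cores.__getitem__)
        finish[v] = max(cores[core], earliest) + wcet[v]
        cores[core] = finish[v]
        for s in succs[v]:                  # release successors
            missing[s] -= 1
            if missing[s] == 0:
                ready.append(s)
    return max(finish.values())

# Fork-join task a -> {b, c} -> d; critical path a+b+d = 4 on two cores.
print(makespan({"a": 1, "b": 2, "c": 2, "d": 1},
               [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")], m=2))
```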
Networked control systems, e.g., battery management systems, smart grids, or vehicular systems, consist of sensors, actuators and controllers with a communication network in the control loop. The data rate and the reliability of the underlying communication network are key factors, since delays or message losses directly affect the system control. In addition, the processor load caused by the communication is significant, as it influences the calculation of system states and the setting of control parameters. The power consumption of the communication network has a further impact on the energy efficiency of the respective application. In this paper, the communication technologies Controller Area Network (CAN), Controller Area Network Flexible Data-rate (CAN FD) and Ethernet are compared in the context of networked control systems, with a focus on a decentralized battery management system. First, the message processing time and the processor load are measured. With regard to energy efficiency, the maximum power consumption is determined. The Bit Error Rates (BER) and the Residual Error Rates (RER) are calculated to evaluate the reliability. Finally, the receive FIFO load under high-traffic conditions is examined.
Index Terms: networked control systems, decentralized battery management system, microcontrollers, communication systems, Ethernet, Controller Area Network (CAN), Controller Area Network Flexible Data-rate (CAN FD), energy efficiency, energy consumption, bit error rate, residual error rate, processor load.
Modern cyber-physical systems, such as autonomous vehicles, advanced driver assistance systems, automation systems and battery management systems, result in extended communication requirements regarding the reliability and the availability. The Controller Area Network (CAN) is a broadcast-based protocol which is still used as a standard for serial communication between individual microcontrollers due to its reliability and low power consumption. In addition, it provides mechanisms for detecting transmission errors and retransmitting messages in the event of an error. The enhancement CAN Flexible Data-Rate (CAN FD) offers increased data rates and transmission rates in order to meet the data throughput requirements. In this paper, the mechanisms for reliable data transmission in a CAN FD network are analyzed. To improve reliability, a second identical CAN-FD network is added to the system, using the additional CAN interface already available on common microcontrollers. The redundant communication network is examined in terms of failure rates and the mean time to failure. The reliability over the operation time is calculated for the single and the redundant version of the CAN FD network using the failure rate limits of the ASIL levels.
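The redundancy computation described here follows the standard exponential failure model; a compact worked version, with λ the per-channel constant failure rate (symbols and formulas are the textbook ones, not values from the paper):

```latex
% Single CAN FD channel:
R_1(t) = e^{-\lambda t}, \qquad
\mathrm{MTTF}_1 = \int_0^\infty R_1(t)\,dt = \frac{1}{\lambda}

% Two identical, independent channels in parallel (the redundant network)
% fail only when both channels have failed:
R_2(t) = 1 - \bigl(1 - e^{-\lambda t}\bigr)^2, \qquad
\mathrm{MTTF}_2 = \int_0^\infty R_2(t)\,dt = \frac{3}{2\lambda}
```

The parallel structure thus raises the mean time to failure by 50% and, more importantly for ASIL arguments, keeps R(t) close to 1 for much longer at the start of the operation time.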
We present a comprehensive analysis of the neural audio-visual synchrony evaluation tool SyncNet. We assess the agreement of SyncNet scores vis-a-vis human perception and whether we can use them as a reliable metric for evaluating audio-visual lip-synchrony in generation tasks with no ground-truth reference audio-video pair. We further look into the underlying elements in audio and video which vitally affect synchrony, using interpretable explanations from SyncNet predictions, and analyse its susceptibility by introducing adversarial noise. SyncNet has been used in numerous papers on visually-grounded text-to-speech for scenarios such as dubbing. We focus on this scenario, which features many local asynchronies (something SyncNet was not designed for).
On analytic properties of the standard zeta function attached to a vector-valued modular form
(2022)
We prove a Garrett–Böcherer decomposition of a vector-valued Siegel Eisenstein series $E_{2l,0}$ of genus 2 transforming with the Weil representation of $\mathrm{Sp}_2(\mathbb{Z})$ on the group ring $\mathbb{C}[(L'/L)^2]$. We show that the standard zeta function associated to a vector-valued common eigenform $f$ for the Weil representation can be meromorphically continued to the whole $s$-plane and that it satisfies a functional equation. The proof is based on an integral representation of this zeta function in terms of $f$ and $E_{2l,0}$.
Over the last three decades, the Controller Area Network (CAN) has become the dominant communication standard in embedded systems. Especially for automotive systems, it offers advantages including high robustness, a low error rate and high reliability combined with low power consumption. Learning the basics of this bus system is therefore essential in this field. Nowadays, a variety of media about the functionality and use of CAN exists, which makes it easy to read up on the topic. But often, theory alone is not sufficient; practical implementation contributes significantly to deepening the understanding. However, affordable and easy-to-use CAN devices for training purposes are scarce. Existing equipment can be divided into expensive professional devices with many functions and inexpensive ones for hobbyists which require difficult configuration. Therefore, a practical solution is a low-budget device equipped with an overlay that deals with the time-consuming configuration. This paper covers the development of a Python interface for a purchasable, cost-effective CAN device for Windows OS. The intention is to create an easy-to-use program that enables beginners to get in touch with CAN and collect practical experience. At the start, a brief explanation of CAN functionality is given. After that, we introduce the hardware used in this project. Next, the software part covers the development of the interface and its integration into python-can. Furthermore, a virtual playground is introduced for testing purposes. Finally, to demonstrate the functionality of the interface, a test program is executed in conjunction with a logic analyzer.
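The custom device backend itself is not shown in the abstract; the following sketch only illustrates the python-can API surface that such an interface plugs into, using the library's built-in "virtual" bus (python-can 4.x assumed) as a stand-in playground:

```python
# Send and loop back one classic CAN frame on python-can's virtual bus.
import can

with can.Bus(interface="virtual", channel="playground",
             receive_own_messages=True) as bus:
    msg = can.Message(arbitration_id=0x123, is_extended_id=False,
                      data=[0xDE, 0xAD, 0xBE, 0xEF])
    bus.send(msg)                    # transmit
    print(bus.recv(timeout=1.0))     # received back via the virtual medium
```

A device-specific interface like the one described in the paper would register its own `interface` name with python-can, so that the same two-line setup works against real hardware.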
The transition towards services has been imperative for manufacturing firms for years. The change from a product-oriented to a more service-dominant business model affects the organizational structure of firms. However, literature provides limited insights into how manufacturing firms organize themselves in this transition. Even though digital technologies are critical for the transition, it is unclear how to orchestrate digital and traditional Information Technology (IT) resources in manufacturing firms accordingly. We analyze the case of a typical manufacturing firm that has adjusted its structure to reorganize for solution offerings based on product, service, and digital components. Our results describe a hybrid organizational structure that splits front- and back-end units. The back-end units are split along solution components. Digital IT resources are internalized and governed decentrally, with traditional IT resources being outsourced and steered centrally. Our findings contribute to digital servitization research by clarifying the overarching as well as the digital and traditional IT-related organization for manufacturing firms.
Despite the relevance and maturity of the Chief Information Officer (CIO) research field, no studies exist that exhaustively summarize the current body of knowledge, focusing on the development of the field over its entire timespan. The paper at hand addresses this research gap and presents an exhaustive literature review on the CIO research field using main path analysis. We identify the central papers in CIO research and eight main research streams by quantitatively and qualitatively analyzing 466 papers. We find that established research streams, e.g., ‘Evolving role of the CIO’ and ‘CIO hierarchical position and relationships’ as well as recently emerging research streams, e.g., ‘CIO as business enabler’ and ‘CIOs and IT security,’ draw growing attention. Based on our findings, we develop promising further avenues for research in the CIO field.
Translation invariant diagonal frame decomposition of inverse problems and their regularization
(2022)
Solving inverse problems is central to a variety of important applications, such as biomedical image reconstruction and non-destructive testing. These problems are characterized by the sensitivity of direct solution methods with respect to data perturbations. To stabilize the reconstruction process, regularization methods have to be employed. Well-known regularization methods are based on frame expansions, such as the wavelet-vaguelette (WVD) decomposition, which are well adapted to the underlying signal class and the forward model and allow efficient implementation. However, it is well known that the lack of translational invariance of wavelets and related systems leads to specific artifacts in the reconstruction. To overcome this problem, in this paper we introduce and analyze the concept of translation invariant diagonal frame decomposition (TI-DFD) of linear operators. We prove that a TI-DFD combined with a regularizing filter leads to a convergent regularization method with optimal convergence rates. As an illustrative example, we construct a wavelet-based TI-DFD for one-dimensional integration, where we also investigate our approach numerically. The results indicate that filtered TI-DFDs eliminate the typical wavelet artifacts when using standard wavelets and provide a fast, accurate, and stable solution scheme for inverse problems.
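For concreteness, this is the generic filtered-DFD reconstruction formula as it usually appears in this literature (our notation, not necessarily the paper's): a DFD of the forward operator $A$ consists of frames $(u_\lambda)$, $(v_\lambda)$ and quasi-singular values $(\kappa_\lambda)$ with $A^* v_\lambda = \kappa_\lambda u_\lambda$, and regularized inversion of data $g = Af$ filters the diagonal coefficients:

```latex
f_\alpha \;=\; \sum_{\lambda}
  \frac{\phi_\alpha(\kappa_\lambda)}{\kappa_\lambda}\,
  \langle g, v_\lambda \rangle\, \bar{u}_\lambda
% \phi_\alpha: regularizing filter (e.g. soft thresholding or
% Tikhonov-type damping); (\bar{u}_\lambda): a dual frame of (u_\lambda).
% The TI-DFD replaces the wavelet system by its translation invariant
% counterpart to avoid shift-dependent artifacts.
```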
Spherical aberrations of lenses lead to increased spot radii on the focal plane. Several methods, such as optimizing thickness and radii or building lens groups, are known to minimize these spherical aberrations. Here, an innovative method for spherical aberration minimization is introduced; it can be used for short-range free-space optical communication systems, such as unit power afocal relay trains, lens waveguides, and periodic lens systems. This spherical aberration compensation principle is based on the combination of two identical spherical convex lenses at an optimal distance. Due to the higher refractive power for rays with a larger axis distance, rays from the outer area of the first lens intersect the inner area of the second lens, and vice versa. With this setup, the radial refractive power deviations of the two lenses compensate each other. Analytic calculations and numerical simulations are done to confirm this behavior, and measurements using two symmetric spherical lenses with polymer optical fibers as light feed confirm the calculations. Simulations and measurements show a very good match, except for an unknown systematic error. At a lens distance of 300 mm, the optical attenuation decreased by ∼2 dB compared with a very small lens distance. This leads to the conclusion that the proposed spherical aberration minimization method works as theoretically predicted.
This article compares the standard electrical method of partial discharge detection with a novel optical detection method based on silicon photomultipliers. A third, complementary, single-loop antenna method is added to represent the ultra-high frequency method commonly used in gas-insulated switchgear/lines. A trio of air-insulated electrode designs that simulate the fundamental fault/discharge types in gaseous insulation (protrusion – corona discharge, floating conductive particle, surface discharges) is employed. Phase-resolved partial discharge activity patterns are compiled for each electrode design. The patterns are analyzed using spatial statistics, and the interpretation of the obtained data trends is explained by means of an example. Ultimately, the consistency and reliability of discharge detection by the optical methods are evaluated for each fault/discharge type, and suggestions for improvement are made.
The INPROTK 2012 release
(2012)
We describe the 2012 release of INPROTK, our "Incremental Processing Toolkit", which combines a powerful and extensible architecture for incremental processing with components for incremental speech recognition and, new to this release, incremental speech synthesis. These components work domain-independently; we also provide example implementations of higher-level components such as natural language understanding and dialogue management that are somewhat more tied to a particular domain. The toolkit is accompanied by evaluation tools for analysing timing behaviour, and we highlight some timing results for conversational speech input in this paper. We offer our toolkit to foster research in this new and exciting area, which promises to help increase the naturalness of behaviours that can be modelled in such systems.
This paper applies design science research methodology to iteratively develop a framework for measuring and communicating IT business value from a CIO perspective. The framework design is based on analysis and integration of literature combined with empirical findings. The framework was evaluated by CIO interviews and a practical feasibility study. The results show that IT business value can be measured and communicated using our framework by applying six consecutive process steps. Thereby, IT business value is not a single number but a set of quantitative and qualitative metrics relevant to stakeholders. Our framework represents a novel and integrated approach on how CIOs can select appropriate metrics, measure, and communicate IT business value to stakeholders. In addition, the paper provides insights for CIOs on how to be successful in IT business value management.
This paper presents ongoing work in incremental speech synthesis that enables a system to adapt speech delivery to unforeseen changes in the timing of motor events (e. g. a robot actuator working faster or slower than anticipated) in order to improve the coordination of speech and gestures for deictic expressions.
We demonstrate nonlinear coupling in a discrete optical system. This is achieved in waveguide arrays with quadratic nonlinearity, where the symmetries of the nonlinearly interacting waveguide modes are used to suppress the usually dominating nonlinear effects within individual waveguides. We derive a mathematical model to describe the nonlinear coupling in such waveguide arrays and show experimentally the profound effects of this nonlinear coupling mechanism on second-harmonic generation.
In simultaneous interpreting, human experts incrementally construct and extend partial hypotheses about the source speaker's message, and start to verbalize a corresponding message in the target language based on a partial translation – which may have to be corrected occasionally. They commence the target utterance in the hope that they will be able to finish understanding the source speaker's message and determine its translation in time for the unfolding delivery. Of course, both incremental understanding and translation by humans can be garden-pathed, although experts are able to optimize their delivery so as to balance the goals of minimal latency, translation quality and high speech fluency with few corrections. We investigate the temporal properties of both translation input and output to evaluate the tradeoff between low latency and translation quality. In addition, we estimate the improvements that can be gained with a tempo-elastic speech synthesizer.
A Novel Design Flow for a Security-Driven Synthesis of Side-Channel Hardened Cryptographic Modules
(2017)
Over the last few decades, computer-aided engineering (CAE) tools have been developed and improved in order to ensure a short time-to-market in the chip design business. Up to now, these design tools do not yet support an integrated design strategy for the development of side-channel-resistant hardware implementations. In order to close this gap, a novel framework named AMASIVE (Adaptable Modular Autonomous SIde-Channel Vulnerability Evaluator) was developed. It supports the designer in implementing devices hardened against power attacks by exploiting novel security-driven synthesis methods. The article at hand can be seen as the second of the two contributions that address the AMASIVE framework. While the first one describes how the framework automatically detects vulnerabilities against power attacks, the second one explains how a design can be hardened in an automatic way by means of appropriate countermeasures, which are tailored to the identified weaknesses. In addition to the theoretical introduction of the fundamental concepts, we demonstrate an application to the hardening of a complete hardware implementation of the block cipher PRESENT.
The relation of syntax and prosody (the syntax-prosody interface) has been an active area of research, mostly in linguistics and typically studied under controlled conditions. More recently, prosody has also been successfully used in the data-based training of syntax parsers. However, there is a gap between the controlled and detailed study of the individual effects between syntax and prosody and the large-scale application of prosody in syntactic parsing with only a shallow analysis of the respective influences. In this paper, we close the gap by investigating the significance of correlations of prosodic realization with specific syntactic functions using linear mixed effects models in a very large corpus of read-out German encyclopedic texts. Using this corpus, we are able to analyze prosodic structuring performed by a diverse set of speakers while they try to optimize factual content delivery. After normalization by speaker, we obtain significant effects, e.g. confirming that the subject function, as compared to the object function, has a positive effect on pitch and duration of a word, but a negative effect on loudness.
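This is not the paper's actual analysis code, but a minimal linear-mixed-effects sketch in the same spirit: a per-speaker random intercept (speaker normalization) with syntactic function as the fixed effect. The toy data and all column names are illustrative:

```python
# Mixed model: pitch ~ syntactic function, random intercept per speaker.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "speaker": rng.choice(["s1", "s2", "s3"], size=n),
    "function": rng.choice(["subject", "object"], size=n),
})
speaker_offset = df["speaker"].map({"s1": -5, "s2": 0, "s3": 5})
df["pitch"] = (120 + 4 * (df["function"] == "subject")   # fixed effect
               + speaker_offset                           # speaker variation
               + rng.normal(0, 3, size=n))                # residual noise

model = smf.mixedlm("pitch ~ C(function)", df, groups=df["speaker"])
print(model.fit().summary())    # recovers the +4 effect of 'subject'
```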
After overcoming the traditional metrics, modern and postmodern poetry developed a large variety of 'free verse prosodies' that fall along a spectrum from a more fluent to a more disfluent and choppy style. We present a method, grounded in philological analysis and theories on cognitive (dis)fluency, to analyze this 'free verse spectrum' into six classes of poetic styles as well as to differentiate three types of poems with enjambments. We use a model for automatic prosodic analysis of spoken free verse poetry which uses deep hierarchical attention networks to integrate the source text and audio and predict the assigned class. We then analyze and fine-tune the model with a particular focus on enjambments, in two ways: we drill down on classification performance by analyzing whether the model focuses on traits of poems similar to those humans would use, specifically, whether it internally builds a notion of enjambment. We find that our model is about as good as humans at finding enjambments; however, when we employ the model for classifying enjambment-dominated poem types, it does not pay particular attention to those lines. Adding enjambment labels to the training only marginally improves performance, indicating that all other lines are similarly informative for the model.
We show how to classify the phrasing of readout poems with the help of machine learning algorithms that use manually engineered features or automatically learn representations. We investigate modern and postmodern poems from the webpage lyrikline, and focus on two exemplary rhythmical patterns in order to detect the rhythmic phrasing: The Parlando and the Variable Foot. These rhythmical patterns have been compared by using two important theoretical works: The Generative Theory of Tonal Music and the Rhythmic Phrasing in English Verse. Using both, we focus on a combination of four different features: The grouping structure, the metrical structure, the time-span-variation, and the prolongation in order to detect the rhythmic phrasing in the two rhythmical types. We use manually engineered features based on text-speech alignment and parsing for classification. We also train a neural network to learn its own representation based on text, speech and audio during pauses. The neural network outperforms manual feature engineering, reaching an f-measure of 0.85.
One of the most important patterns in ancient as well as modern poetry is the enjambment, the continuation of a sentence beyond the end of a line, couplet, or stanza. The paper reports first activities towards the development of a digital tool to analyze the accentuation of poetic enjambments in readout poetry. The aim of this contribution is to recognize two forms of enjambment (emphasized and unemphasized) in poems using audio and text data. We use data from lyrikline, a major online portal for spoken poetry where poems are read aloud by the original authors. Using hermeneutical means based on literary analysis, we identified a total of 69 poems characteristic of the use of enjambments in modern and postmodern German poetry, and we train classifiers to differentiate the emphasized/unemphasized categorization. A remarkable result of our automated analyses (and to our knowledge the first data-driven analysis of this kind) is the identification of a cultural difference in the accentuation of enjambments: statistically speaking, poets from the former GDR tend to emphasize the enjambment, whereas poets from the FRG do not. We use features derived from speech-to-text alignment and statistical parsing information such as pause lengths, number of lines with verbs, and number of lines with punctuation. The best classification result for the two types of enjambment (emphasized/unemphasized), measured by F-measure, is 0.69.
We present the open-source extensible dialog manager DialogOS. DialogOS features simple finite-state based dialog management (which can be expanded to more complex DM strategies via a full-fledged scripting language) in combination with integrated speech recognition and synthesis in multiple languages. DialogOS runs on all major platforms, provides a simple-to-use graphical interface and can easily be extended via well-defined plugin and client interfaces, or can be integrated server-side into larger existing software infrastructures. We hope that DialogOS will help foster research and teaching given that it lowers the bar of entry into building and testing spoken dialog systems and provides paths to extend one's system as development progresses.
Speech can be more or less likable in various ways, and comparing speakers by likability has important applications such as speaker selection or matching. Determining the likability of a speaker is a difficult task which can be simplified by breaking it down into pairwise preference decisions. Using a corpus of 5440 pairwise preference ratings collected previously through crowd-sourcing, we train classifiers to determine which of two speakers is "better". We find that modeling the speech feature sequences using LSTMs outperforms conventional methods that pre-aggregate feature averages by a large margin, indicating that the prosodic structure should be taken into account when determining speech quality. Our classifier reaches an accuracy of 97% for coarse-grained decisions, where the difference in speech quality between the two stimuli is relatively large.
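A sketch of the pairwise set-up, not the paper's exact architecture: one shared LSTM encodes each stimulus' frame-level feature sequence, and the two encodings are compared to decide which stimulus is preferred. Shapes and hyperparameters are illustrative:

```python
# Siamese-style LSTM for pairwise "which speaker is better?" decisions.
import torch
import torch.nn as nn

class PreferenceLSTM(nn.Module):
    def __init__(self, n_feats=20, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_feats, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)     # scalar likability score

    def forward(self, a, b):
        _, (ha, _) = self.encoder(a)          # final hidden state summarizes
        _, (hb, _) = self.encoder(b)          # the prosodic sequence
        return torch.sigmoid(self.score(ha[-1]) - self.score(hb[-1]))

model = PreferenceLSTM()
a = torch.randn(8, 500, 20)                   # batch of 8, 500 frames each
b = torch.randn(8, 500, 20)
print(model(a, b).shape)                      # P(a preferred over b): (8, 1)
```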
Our paper focuses on the computational analysis of "readout poetry" (German: Hördichtung) – recordings of poets reading their own work – with regard to the most important type of this genre, modern "sound poetry" (German: Lautdichtung). Whereas "readout poetry" often uses normal words and sentences, "sound poetry", developed by dadaistic poets like Hugo Ball and Kurt Schwitters or concrete poets like Ernst Jandl, Oskar Pastior, or Bob Cobbing, combines the "microparticles of the human voice", like the segments in Ernst Jandl's sound poem "schtzngrmm" ("schtzngrmm / schtzngrmm / tttt / tttt / grrrmmmmm / tttt / sch / tzngrmm"). Within the genre of sound poetry, there are two main forms: lettristic and syllabic decomposition. A short anecdote explains the difference: the dadaist Raoul Hausmann developed lettristic sound poetry in his early dadaistic poem "fmsbw" from 1918. This is said to have inspired his successor Schwitters, whose famous "Ursonate" [The Sonata in Primal Speech] begins with the words "Fümms bö wö tää zää Uu". With the "Ursonate", Schwitters developed a syllabic variation of the lettristic poems of Hausmann. The paper shows how to train a bidirectional LSTM network to differentiate between these "dadaistic" sound poems and "normal" readout poems. In a further step, we also show how to distinguish between the lettristic and the syllabic decomposition. Based on a bidirectional LSTM network that reads encodings of the character sequence in the poem and uses the output of each directional layer, we identify poems of the sound poetry genre and differentiate between its two types of composition. The classification of sound poetry vs. other poetry, as well as of lettristic vs. syllabic decomposition, performs well, yielding F-scores of 0.86 and 0.84, respectively.
Modern and post-modern free verse poems feature a large and complex variety in their poetic prosodies that falls along a continuum from a more fluent to a more disfluent and choppy style. As the poets of modernism overcame rhyme and meter, they oriented themselves in these two opposing directions, creating a free verse spectrum that calls for new analyses of prosodic forms. We present a method, grounded in philological analysis and current research on cognitive (dis)fluency, for automatically analyzing this spectrum. We define and relate six classes of poetic styles (ranging from parlando to lettristic decomposition) by their gradual differentiation. Based on this discussion, we present a model for automatic prosodic classification of spoken free verse poetry that uses deep hierarchical attention networks to integrate the source text and audio and predict the assigned class. We evaluate our model on a large corpus of German author-read post-modern poetry and find that classes can reliably be differentiated, reaching a weighted f-measure of 0.73, when combining textual and phonetic evidence. In our further analyses, we validate the model’s decision-making process, the philologically hypothesized continuum of fluency and investigate the relative importance of various features.
This seminar was held in late 2016 and brought together, for the first time, researchers studying vocal interaction in a variety of different domains covering communications between all possible combinations of humans, animals, and robots. While each of these sub-domains has extensive histories of research progress, there is much potential for cross-fertilisation that currently remains underexplored. This seminar aimed at bridging this gap. In this report, we present the nascent research field of VIHAR and the major outputs from our seminar in the form of prioritised open research questions, abstracts from stimulus talks given by prominent researchers in their respective fields, and open problem statements by all participants.
Automatic speech recognition (ASR) is not only becoming increasingly accurate, but also increasingly adapted to producing timely, incremental output. However, overall accuracy and timeliness alone are insufficient for interactive dialogue systems, which require stability in the output and responsivity to the utterance as it is unfolding. Furthermore, for a dialogue system to deal with phenomena such as disfluencies and to achieve deep understanding of user utterances, these phenomena should be preserved or marked up for use by downstream components, such as language understanding, rather than filtered out. Similarly, word timing can be informative for analyzing deictic expressions in a situated environment and should be available for analysis. Here we investigate the overall accuracy and incremental performance of three widely used systems and discuss their suitability from the aforementioned perspectives. From their differing performance along these measures, we draw a picture of the requirements for incremental ASR in dialogue systems and describe freely available tools for using and evaluating incremental ASR.
The most important development in modern and postmodern poetry is the replacement of traditional meter by new rhythmical patterns. Ever since Walt Whitman's Leaves of Grass (1855), modern (nineteenth- to twenty-first-century) poets have been searching for novel forms of prosody, accent, rhythm, and intonation. Along with the rejection of older metrical units such as the iamb or trochee, a structure of lyrical language was developed that renounces traditional forms like rhyme and meter. This development is subsumed under the term free verse prosody. Our project tests this theory by applying machine learning and deep learning techniques to a corpus of modern and postmodern poems as read aloud by their original authors. To this end, we examine "lyrikline", the most famous online portal for spoken poetry. First, about 17 patterns characteristic of the lyrikline poems were identified by the project's philological scholar. This identification was based on a philological method comprising three steps: a) grammetrical ranking; b) rhythmic phrasing; and c) mapping rubato and prosodic phrasing. In this paper we show how to combine this philological analysis with a digital one by using the prosody detection available in speech processing technology. To analyse the data, we use different tools for the following tasks: PoS tagging, alignment, intonation, phrases and pauses, and tempo. We also analyzed the lyrikline data by identifying occurrences of the mentioned patterns. This analysis is a first step towards an automatic classification based on machine learning or deep learning techniques.
This contribution focuses on structural similarities between tonality and cadences in music on the one hand, and rhythmical patterns in poetic language and poetry on the other. We investigate two exemplary rhythmical patterns in modern and postmodern poetry to detect these tonality-like features in poetic language: the Parlando and the Variable Foot. German poems read aloud by the original poets are collected from the webpage of our partner lyrikline. We compare these rhythmical features with tonality rules explained in two important theoretical volumes: A Generative Theory of Tonal Music and Rhythmic Phrasing in English Verse. Drawing on both volumes, we focus on a combination of four features – grouping structure, metrical structure, time-span variation, and prolongation – in order to detect the two important rhythmical patterns that use tonality-like features in poetic language (Parlando and Variable Foot). Different features, including pause and parser information, are used in this classification process. The best classification result for Parlando and Variable Foot, measured by f-measure, is 0.69.
Speech quality and likability are a multi-faceted phenomenon consisting of a combination of perceptual features that cannot easily be computed or weighed automatically. Yet, it is often easy to decide which of two voices one likes better, even though it would be hard to describe why, or to name the underlying basic perceptual features. Although likability is inherently subjective and individual preferences differ frequently, generalizations are useful, and there is often a broad intersubjective consensus about whether one speaker is more likable than another. However, breaking down likability rankings into pairwise comparisons leads to a quadratic explosion of rating pairs. We present a methodology and software to efficiently create a likability ranking for many speakers from crowdsourced pairwise likability ratings. We collected pairwise likability ratings for many speakers (>220) from many raters (>160) and turn these ratings into one likability ranking. We investigate the stability of the resulting speaker ranking under different conditions: limiting the number of ratings, and the dependence on rater and speaker characteristics. We also analyze the ranking with respect to acoustic correlates to find out what factors influence likability. We publish our ranking and the underlying ratings in order to facilitate further research.
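One standard way to aggregate such crowdsourced pairwise preferences into a single ranking is a Bradley–Terry model; the sketch below (plain Python, hypothetical win counts) illustrates that aggregation step and is not necessarily the exact procedure used in the paper.

    # Bradley-Terry sketch: estimate a latent likability score per speaker
    # from pairwise "A preferred over B" counts (hypothetical data).
    wins = {("s1", "s2"): 7, ("s2", "s1"): 3,
            ("s2", "s3"): 6, ("s3", "s2"): 4,
            ("s1", "s3"): 8, ("s3", "s1"): 2}

    speakers = {s for pair in wins for s in pair}
    score = {s: 1.0 for s in speakers}

    for _ in range(100):  # fixed-point iteration (standard MM update)
        new = {}
        for s in speakers:
            w = sum(c for (a, b), c in wins.items() if a == s)
            denom = 0.0
            for t in speakers:
                if t == s:
                    continue
                n_st = wins.get((s, t), 0) + wins.get((t, s), 0)
                denom += n_st / (score[s] + score[t])
            new[s] = w / denom if denom else score[s]
        total = sum(new.values())
        score = {s: v * len(new) / total for s, v in new.items()}  # normalize

    ranking = sorted(speakers, key=score.get, reverse=True)
    print(ranking)  # most likable speaker first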
The translation of poetry is a complex, multifaceted challenge: the translated text should communicate the same meaning and similar metaphoric expressions, and also match the style and prosody of the original poem. Research on machine poetry translation has existed since 2010, but it remains rather insufficient for four reasons:
1. The few existing approaches lack any grounding in current developments in both lyric theory and translation theory.
2. They are based on very small datasets.
3. They have mostly ignored the neural approach that superseded the long-standing dominance of phrase-based approaches within machine translation.
4. They have no concept of the pragmatic function of their research and the resulting tools.
Our paper describes how to improve the existing research and technology for poetry translation in exactly these four points. With regard to 1), we describe the "Poetics of Translation". With regard to 2), we introduce the world's largest corpus of poetry translations, from lyrikline. With regard to 3), we describe first steps towards neural machine translation of poetry. With regard to 4), we describe first steps towards the development of a poetry translation mapping system.
Different higher education backgrounds in China and Germany led to challenges in the curriculum design at the beginning of our cooperative bachelor program in Optoelectronics Engineering. We see challenges in the differing subject requirements of both sides and in the German language requirements for Chinese students. The curriculum was optimized according to the ASIIN criteria, which makes it acceptable and understandable in both countries. German students are integrated into the Chinese class and receive the same lectures as their Chinese colleagues. Intercultural and curriculum challenges were successfully solved. The results are summarized to provide an example for other similar international programs.
We present our research on computer-supported analysis of prosodic styles in post-modern poetry. Our project is unique in making use of both the written and the spoken form of the poem as read by the original author. In particular, we use speech and natural language processing technology to align speech and text and to perform textual analyses. We then explore, based on literary theory, the quantitative value of various types of features in differentiating prosodic classes of post-modern poetry using machine-learning techniques. We contrast this feature-driven approach with a theoretically less informed neural network-based approach, explore the relative strengths of both models, and examine how to integrate higher-level knowledge into the NN. In this paper, we give an overview of our project and our approach, and particularly focus on the challenges encountered and lessons learned in our interdisciplinary endeavour. The classification results for the rhythmical patterns (six classes) are better with the NN-based approaches than with the feature-based approaches.
There is a common understanding amongst academics that information systems (IS) research sometimes has limited relevance for practitioners. This can be explained by the fact that research lags behind the fast moving IS environment, has limited practical applicability and is hard to access. We have focused on the research area of IS backsourcing, and analyzed practitioner literature to increase our understanding of topics of interest for practitioners, to determine a potential gap between academic and practitioner literature, and to identify future research directions in this field. We observed that most publications are either news or background articles, focusing on describing backsourcing cases. Additionally, we identified four recurring themes, namely reasons for backsourcing, presentation of survey results, discussion of industry trends, and backsourcing success stories. The main reasons identified to trigger backsourcing decisions are cost savings, quality improvements, and increasing control and flexibility. By comparing our findings with academic literature on IS backsourcing, we conclude that generally both literature types cover similar topics. However, researchers have a more formulative or interpretive focus than the often descriptive practitioner literature. Academic literature also examines a broader range of topics, while practitioner literature has a narrower focus. Additionally, we observe one difference regarding applied terminology: while researchers employ the term backsourcing, practitioners mostly use back in-house or insourcing. Our paper contributes to facilitating the exchange between academics and practitioners, presents topics to consider when aiming to increase practical relevance and provides researchers with concrete directions for future research within the field of IS backsourcing.
Research on Shadow IT is facing a conceptual dilemma in cases where previously "covert" systems developed by business entities (individual users, business workgroups, or business units) are integrated in the organizational IT management. These systems become visible, are therefore not "in the shadows" anymore, and subsequently do not fit to existing definitions of Shadow IT. Practice shows that some information systems share characteristics of Shadow IT, but are created openly in alignment with the IT department. This paper therefore proposes the term "Business-managed IT" to describe "overt" information systems developed or managed by business entities. We distinguish Business-managed IT from Shadow IT by illustrating case vignettes. Accordingly, our contribution is to suggest a concept and its delineation against other concepts. In this way, IS researchers interested in IT originated from or maintained by business entities can construct theories with a wider scope of application that are at the same time more specific to practical problems. In addition, value-laden terminology is complemented by a vocabulary that values potentially innovative developments by business entities more adequately. From a practical point of view, the distinction can be used to discuss the distribution of task responsibilities for information systems.
Datenbanken und SQL
(2017)
This book provides the reader with a solid grounding in both databases and SQL. A summary and numerous exercises in each chapter deepen the material and considerably improve learning outcomes. The book focuses on relational databases, database design, the SQL programming language, and database access via PHP. Topics such as recovery, concurrency, security, and integrity are also discussed in detail. A chapter on distributed databases, NoSQL, and object-relational databases introduces each of these subjects. A dedicated chapter on performance offers valuable suggestions and tips for operating high-performance databases. The author places great emphasis on practical application: what has been learned can be practiced immediately using an example database. This database, all presented programs, and the solutions to all exercises are provided for download on the internet.
The REVERB challenge is a benchmark task designed to evaluate reverberation-robust automatic speech recognition techniques under various conditions. A particular novelty of the REVERB challenge database is that it comprises both real reverberant speech recordings and simulated reverberant speech, both of which include tasks to evaluate techniques for 1-, 2-, and 8-microphone situations. In this chapter, we describe the problem of reverberation and characteristics of the REVERB challenge data, and finally briefly introduce some results and findings useful for reverberant speech processing in the current deep-neural-network era.
Professional software development is a complex task with many inputs and a complex output. To handle complex topics such as software or complex engineering projects, structured processes such as the V-model or iterative development processes exist. Similarly, the development of a software engineering lecture is a task with many inputs and a complex output. We present a structured and methodological approach to the development of a lecture that applies the same principles as used in the development of software.
Learning-centered teaching is becoming an important factor in a global perspective on learning software engineering. The Just-in-Time Teaching approach is used in a Chinese-German empirical case study. In a one-year project we analyze the performance of our students in an active learning scenario with Just-in-Time Teaching and Peer Instruction. We contribute an intercultural comparison of achieved competencies based on students' self-assessment and teachers' observation.
We propose and experimentally demonstrate an all-optically tunable biphoton quantum light source using a nonlinear directional coupler. The source can generate high-fidelity N00N states, completely split states, and states with variable degrees of entanglement.
Fluctuations of hi-hat timing and dynamics in a virtuoso drum track of a popular music recording
(2015)
Long-range correlated temporal fluctuations in the beats of musical rhythms are an inevitable consequence of human action. According to recent studies, such fluctuations also lead to a favored listening experience. The scaling laws of amplitude variations in rhythms, however, are widely unknown. Here we use highly sensitive onset detection and time series analysis to study the amplitude and temporal fluctuations of Jeff Porcaro's one-handed hi-hat pattern in "I Keep Forgettin'" – one of the most renowned 16th-note patterns in modern drumming. We show that fluctuations of hi-hat amplitudes and interbeat intervals (times between hits) have clear long-range correlations and short-range anticorrelations separated by a characteristic time scale. In addition, we detect subtle features in Porcaro's drumming such as small drifts in the 16th-note pulse and non-trivial periodic two-bar patterns in both hi-hat amplitudes and intervals. Through this investigation we introduce a step towards statistical studies of 20th- and 21st-century music recordings in the framework of complex systems. Our analysis has direct applications to the development of drum machines and to drumming pedagogy.
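As a pointer to how such scaling laws are typically quantified, here is a minimal detrended fluctuation analysis (DFA) sketch on synthetic data; DFA is one standard tool for detecting long-range correlations, though the paper's precise analysis pipeline is not reproduced here.

    # Minimal DFA sketch: fluctuation function F(n) vs. window size n.
    # A log-log slope (exponent alpha) > 0.5 indicates long-range
    # correlations; alpha ~ 0.5 corresponds to uncorrelated noise.
    import numpy as np

    def dfa(x, window_sizes):
        y = np.cumsum(x - np.mean(x))            # integrated profile
        fluctuations = []
        for n in window_sizes:
            f2 = []
            for k in range(len(y) // n):
                seg = y[k * n:(k + 1) * n]
                t = np.arange(n)
                coef = np.polyfit(t, seg, 1)     # linear detrending
                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            fluctuations.append(np.sqrt(np.mean(f2)))
        return np.array(fluctuations)

    intervals = np.random.randn(4096)            # stand-in for interbeat intervals
    sizes = np.array([8, 16, 32, 64, 128, 256])
    F = dfa(intervals, sizes)
    alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
    print(f"DFA exponent alpha = {alpha:.2f}")   # ~0.5 for white noise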
Datenbanken ohne Schema?
(2014)
NoSQL database systems are increasingly popular in the development of interactive web applications, not least because they allow flexible data models. This particularly supports agile project management, which is characterized by frequent releases and correspondingly frequent changes to the data model. In this article we give an overview of the particular challenges of agile application development against schema-less NoSQL database systems. We present schema evolution strategies from practice, and set out our vision of a dedicated schema management component for NoSQL database systems, designed for continuous and systematic schema evolution.
With the availability of cost-effective embedded Linux solutions and the increasing complexity of embedded devices due to growing computational power and communication demands, Linux is becoming increasingly interesting as an operating system for the design of embedded control solutions. This holds for almost all technical applications in electrical engineering, such as energy distribution systems, high-level communication, signal processing, and industrial automation. In the engineering master courses at OTH Regensburg, a lecture is offered that introduces students to Linux with a strong focus on embedded applications. This paper describes the concept of the lecture, including the laboratory setup, and gives some examples of embedded Linux projects carried out by students.
This paper explores scalable implementation strategies for carrying out lazy schema evolution in NoSQL data stores. For decades, schema evolution has been an evergreen in database research. Yet new challenges arise in the context of cloud-hosted data backends: With all database reads and writes charged by the provider, migrating the entire data instance eagerly into a new schema can be prohibitively expensive. Thus, lazy migration may be more cost-efficient, as legacy entities are only migrated in case they are actually accessed by the application. Related work has shown that the overhead of migrating data lazily is affordable when a single evolutionary change is carried out, such as adding a new property. In this paper, we focus on long-term schema evolution, where chains of pending schema evolution operations may have to be applied. Chains occur when legacy entities written several application releases back are finally accessed by the application. We discuss strategies for dealing with chains of evolution operations, in particular, the composition into a single, equivalent composite migration that performs the required version jump. Our experiments with MongoDB focus on scalable implementation strategies. Our lineup further compares the number of write operations, and thus, the operational costs of different data migration strategies.
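As an illustration of the composition idea, the sketch below treats each pending evolution operation as a function on an entity (a Python dict) and composes a chain into one composite migration that is applied when a legacy entity is accessed. Operation names and the version chain are hypothetical, not the paper's implementation.

    # Lazy migration sketch: compose a chain of pending schema evolution
    # operations into a single composite migration (illustrative names).
    from functools import reduce

    def add_property(name, default):
        return lambda e: {**e, name: default}

    def rename_property(old, new):
        return lambda e: {k if k != old else new: v for k, v in e.items()}

    def delete_property(name):
        return lambda e: {k: v for k, v in e.items() if k != name}

    # One operation per schema version; an entity stored at version 1 that
    # is accessed at version 4 must jump through the whole chain.
    chain = [add_property("rating", 0),       # v1 -> v2
             rename_property("user", "uid"),  # v2 -> v3
             delete_property("tmp")]          # v3 -> v4

    def compose(ops):
        return reduce(lambda f, g: (lambda e: g(f(e))), ops, lambda e: e)

    migrate = compose(chain)                  # single composite migration

    legacy_entity = {"user": "alice", "tmp": 1}   # written at version 1
    print(migrate(legacy_entity))             # {'uid': 'alice', 'rating': 0}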
Averaging time series under dynamic time warping is an important tool for improving nearest-neighbor classifiers and formulating centroid-based clustering. The most promising approach poses time series averaging as the problem of minimizing a Fréchet function. Minimizing the Fréchet function is NP-hard and so far solved by several heuristics and inexact strategies. Our contributions are as follows: we first discuss some inaccuracies in the literature on exact mean computation in dynamic time warping spaces. Then we propose an exponential-time dynamic program for computing a global minimum of the Fréchet function. The proposed algorithm is useful for benchmarking and evaluating known heuristics. In addition, we present an exact polynomial-time algorithm for the special case of binary time series. Based on the proposed exponential-time dynamic program, we empirically study properties like uniqueness and length of a mean, which are of interest for devising better heuristics. Experimental evaluations indicate substantial deficits of state-of-the-art heuristics in terms of their output quality.
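For orientation, a minimal NumPy sketch of the Fréchet function underlying this formulation, using a textbook dynamic-programming DTW distance; the paper's exact-mean algorithm is not reproduced here.

    # Fréchet function sketch: F(z) = (1/N) * sum_i dtw(z, x_i)^2,
    # with a textbook O(n*m) dynamic program for the DTW distance.
    import numpy as np

    def dtw_dist(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return np.sqrt(D[n, m])

    def frechet(z, series):
        return np.mean([dtw_dist(z, x) ** 2 for x in series])

    sample = [np.array([1., 2., 3., 2.]), np.array([1., 3., 2.]),
              np.array([2., 2., 3., 3., 2.])]
    candidate = np.array([1., 2.5, 3., 2.])
    print(frechet(candidate, sample))   # a sample mean minimizes this value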
Algorithm selection (AS) tasks are dedicated to finding the optimal algorithm for an unseen problem instance. With knowledge of problem instances' meta-features and algorithms' landmark performances, machine learning (ML) approaches are applied to solve AS problems. However, the standard training process of benchmark ML approaches in AS either needs to train the models specifically for every algorithm or relies on a sparse one-hot encoding as the algorithms' representation. To skip these intermediate steps and form the mapping function directly, we borrow the learning-to-rank framework from recommender systems and embed bi-linear factorization to model the algorithms' performances in AS. This Bi-linear Learning to Rank (BLR) has proven to work competently in some AS scenarios and is thus also proposed as a benchmark approach. From the evaluation perspective of modern AS challenges, precisely predicting the performance is usually the measuring goal. Although an approach's inference time also needs to be counted in the running-time cost calculation, it is usually overlooked in the evaluation process. The multi-objective evaluation metric Adjusted Ratio of Root Ratios (A3R) is therefore advocated in this paper to balance the trade-off between accuracy and inference time in AS. Concerning A3R, BLR outperforms the other benchmarks when expanding the candidate range to the top 3. The better effect of this candidate expansion results from the cumulative optimum performance during the AS process. We take a further step in the experimentation to demonstrate the advantage of such top-k expansion, and illustrate that it can be considered a supplement to the convention of top-1 selection during the evaluation process.
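The bilinear idea can be sketched as follows (NumPy, hypothetical dimensions, random parameters standing in for learned ones): instance meta-features and algorithm embeddings interact through a matrix, and the inner product yields one predicted score per algorithm.

    # Bilinear scoring sketch for algorithm selection:
    # score(instance i, algorithm a) = f_i^T W v_a, where f_i are the
    # instance meta-features, v_a a learned algorithm embedding, and W
    # a learned interaction matrix (all randomly initialized here).
    import numpy as np

    rng = np.random.default_rng(0)
    n_meta, n_algos, k = 10, 5, 4

    W = rng.normal(size=(n_meta, k))          # interaction matrix
    V = rng.normal(size=(n_algos, k))         # algorithm embeddings

    def scores(meta_features):
        return (meta_features @ W) @ V.T      # one score per algorithm

    f_i = rng.normal(size=n_meta)             # meta-features of one instance
    ranking = np.argsort(-scores(f_i))        # best algorithm first
    print(ranking)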
Sufficient conditions for the existence of a sample mean of time series under dynamic time warping
(2020)
Time series averaging is an important subroutine for several time series data mining tasks. The most successful approaches formulate the problem of time series averaging as an optimization problem based on the dynamic time warping (DTW) distance. The existence of an optimal solution, called a sample mean, has been an open problem for more than four decades. Its existence is a necessary prerequisite for formulating exact algorithms, deriving complexity results, and studying statistical consistency. In this article, we propose sufficient conditions for the existence of a sample mean. A key result for deriving the proposed sufficient conditions is the Reduction Theorem, which provides an upper bound for the minimum length of a sample mean.
This study examines key success factors of CIOs in large German companies. With a median tenure of 4.0 years, German CIOs, 43% of whom still report primarily to the CFO, remain in office considerably shorter than other C-level positions. The results of 60 interviews with successful German-speaking CIOs, selected primarily for above-average tenure, reveal several key success factors: the basic prerequisite is always a secure and efficient IT operation. Through effective and innovative change projects, the interviewed CIOs make the added value of IT transparent and act as bridge builders between IT and the business units. In doing so, they have a positive effect on corporate culture and sustainably establish IT as a success factor within the business units. Successful CIOs themselves are not "techies"; rather, they are characterized by strong leadership skills and a deep understanding of the business, paired with visionary thinking. This enables them to align IT with the future and to anticipate requirements and potentials for and from the business units at an early stage. The future development of the CIO organization and of IT paradigms, however, is discussed controversially among the study participants; for example, there is not yet a consistent view on the usefulness and relevance of the CDO position.
Eye tracking is a powerful technique that helps reveal how people process visual information. This paper discusses a novel metric for indicating expertise in visual information processing. Named the Gaze Relational Index (GRI), this metric is defined as the ratio of mean fixation duration to fixation count. Data from two eye-tracking studies of professional vision and visual expertise in using 3D dynamic medical visualizations are presented as cases to illustrate the suitability and additional benefits of the GRI. Calculated values of the GRI were higher for novices than for experts, and higher in non-representative, semi-familiar/unfamiliar task conditions than in domain-representative familiar tasks. These differences in GRI suggest that, compared to novices, experts engaged in more knowledge-driven, top-down processing that was characterized by quick, exploratory visual search. We discuss future research aiming to replicate the GRI in professional domains with complex visual stimuli and to identify the moderating role of cognitive ability on GRI estimates.
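Because the GRI is simply the mean fixation duration divided by the fixation count, it is straightforward to compute from a trial's fixation log; a minimal sketch with hypothetical values:

    # Gaze Relational Index (GRI) = mean fixation duration / fixation count,
    # computed from a (hypothetical) list of fixation durations in ms.
    fixation_durations_ms = [220, 180, 340, 260, 200]

    mean_duration = sum(fixation_durations_ms) / len(fixation_durations_ms)
    gri = mean_duration / len(fixation_durations_ms)
    print(f"GRI = {gri:.1f}")  # higher values: fewer, longer fixations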
In this paper, we consider the problem of feature reconstruction from incomplete X-ray CT data. Such incomplete data problems occur when the number of measured X-rays is restricted, either to limit radiation exposure or due to practical constraints that make the detection of certain rays challenging. Since image reconstruction from incomplete data is a severely ill-posed (unstable) problem, the reconstructed images may suffer from characteristic artefacts or missing features, significantly complicating subsequent image processing tasks (e.g., edge detection or segmentation).
We introduce a framework for the robust reconstruction of convolutional image features directly from CT data, without the need to compute a reconstructed image first. Within our framework, we use non-linear variational regularization methods that can be adapted to a variety of feature reconstruction tasks and to several limited-data situations. The proposed variational regularization method minimizes an energy functional that is the sum of a feature-dependent data-fitting term and an additional penalty accounting for specific properties of the features. In our numerical experiments, we consider instances of edge reconstruction from angularly under-sampled data and show that our approach is able to reliably reconstruct feature maps in this case.
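Schematically, such an energy functional can be written as follows (generic placeholder notation, not the paper's exact formulation):

    \min_{u} \; E(u) \;=\; \underbrace{D(Ku,\, f)}_{\text{feature-dependent data fitting}} \;+\; \alpha \, \underbrace{P(u)}_{\text{feature penalty}}

Here u is the sought feature map, K a forward operator coupling it to the measured CT data f, and α > 0 a regularization weight; for edge reconstruction, a sparsity-promoting penalty such as P(u) = ||u||_1 would be a typical (assumed) choice.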
The vehicle is evolving into a complex network of heterogeneous subsystems of ECUs, sensors, and actuators, each with different computational requirements. These subsystems are connected via bus systems following different communication paradigms, e.g. signal-based or service-oriented communication. This has led to a heterogeneous syntax for describing interfaces even though the semantics of the interfaces are similar. The wide variety of Interface Description Languages (IDLs) in the automotive industry partly hinders efficient collaboration between different suppliers and the OEMs. Given this wide variety of automotive IDLs, what could be more beneficial, from a software engineering point of view, is a generic automotive domain-specific IDL that can satisfy all the fundamental requirements of the heterogeneous subsystems. This paper describes an approach to compare and correlate IDLs based on semantic similarities of the languages, considering two aspects: the application description and the underlying message frameworks used in the different domains of given automotive subsystems. By exploring semantic synergies between the IDLs, various domain-specific and domain-agnostic frameworks can be compared and correlated. The results can be generalized and abstracted to define a generic Meta IDL which could support use cases such as domain-agnostic functional models and the migration of software components between different kinds of automotive subsystems.
We summarize our project Rhythmicalizer, in which we analyze a corpus of post-modern poetry with a combination of qualitative hermeneutical and computational methods, as we have run the project over the past three years (after preparing it for some time before that). Interdisciplinary work is always challenging, and we focus here on some of the highlights of our collaboration.
This paper presents the classification of rhythmical patterns detected in post-modern spoken poetry by means of machine learning algorithms that use manually engineered features or automatically learnt representations. We used the world's largest corpus of spoken poetry, from our partner lyrikline. We identified nine rhythmical patterns within a spectrum ranging from a more fluent to a more disfluent poetic style. The text data is analyzed by a statistical parser, and prosodic features of rhythmical patterns are identified using the parser information. For the classification of rhythmical patterns, we used a neural networks-based approach which uses text, audio, and pause information between poetic lines as features. Different combinations of features as well as the integration of feature engineering into the neural networks-based approach were tested. We compared the performance of both approaches (feature-based and neural network-based) using combinations of different features. The results show – using the weighted average f-measure for evaluation – that the neural networks-based approach performed much better in the classification of rhythmical patterns. The main improvement of the classification results lies in the use of the audio information. The integration of feature engineering into the neural networks-based approach yielded only a very small improvement.
Dubbing, i.e., the lip-synchronous translation and revoicing of audio-visual media into a target language from a different source language, is essential for the full-fledged reception of foreign audio-visual media, be it movies, instructional videos or short social media clips. In this paper, we objectify influences on the ‘dubbability’ of translations, i.e., how well a translation would be synchronously revoiceable to the lips on screen. We explore the value of traditional heuristics used in evaluating the qualitative aspects, in particular matching bilabial consonants and the jaw opening while producing vowels, and control for quantity, i.e., that translations are similar to the source in length. We perform an ablation study using an adversarial neural classifier which is trained to differentiate “true” dubbing translations from machine translations. While we are able to confirm the value of matching lip closure in dubbing, we find that the opening angle of the jaw as determined by the realized vowel may be less relevant than frequently considered in audio-visual translation.
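A toy sketch of the lip-closure heuristic discussed above, using an orthographic approximation of bilabial consonants; an actual system would count bilabial phonemes (/p/, /b/, /m/) in phonetic transcriptions, so this is an assumed simplification rather than the paper's method.

    # Toy dubbing heuristic sketch: compare bilabial consonant counts
    # (lip-closure events) and lengths of a source line and a candidate
    # translation. Orthographic approximation of bilabials.
    BILABIALS = set("pbm")

    def bilabial_count(text):
        return sum(ch in BILABIALS for ch in text.lower())

    def dubbability(source, translation):
        lip_diff = abs(bilabial_count(source) - bilabial_count(translation))
        len_ratio = len(translation) / max(len(source), 1)
        return lip_diff, len_ratio   # small diff, ratio near 1.0 preferred

    print(dubbability("My mobile problem", "Mein mobiles Problem"))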
This paper describes the tasks, databases, and baseline systems, and summarizes submissions and results for the GermEval 2020 Shared Task 1 on the Classification and Regression of Cognitive and Motivational Style from Text. This shared task is divided into two subtasks, a regression task and a classification task. Subtask 1 asks participants to reproduce a ranking of students based on average aptitude indicators such as different high school grades and different IQ scores. The second subtask aims to classify so-called implicit motives, which are projective testing procedures that can reveal unconscious desires. Besides five implicit motives, the target labels of Subtask 2 also contain one of six levels that describe the type of self-regulation when acting out a motive, which makes this task a multi-class classification with 30 target labels. Three participants submitted multiple systems. Subtask 1 was solved (best r = .3701) mainly with non-neural systems and statistical language representations; submissions for Subtask 2 utilized neural approaches and word embeddings (best macro F1 = 70.40). Not only were the tasks solvable, analyses by the participants even showed connections to implicit psychometrics theory and behavioral observations made by psychologists.
Listeners typically provide feedback while listening to a speaker in conversation and thereby engage in the co-construction of the interaction. We analyze the influence of the listener on the speaker by investigating how her verbal feedback signals help in modeling the speaker's language. We find that feedback from the listener may help in modeling the speaker's language, whether through the listener's feedback as transcribed or through the acoustic signal directly. We find the largest positive effects at the ends of sentences as well as for pauses mid-utterance, but also effects indicating that we successfully model elaborations of ongoing utterances that may result from the presence or absence of listener feedback.
As a result of the enormous growth in data traffic for autonomous driving, the conventional in-vehicle network is no longer sufficient, and new types of network concepts are required in a vehicle. This part of the automobile is known as the next-generation communication network. Since new car systems can be extended by various services at any time, the network must adapt dynamically to new requirements wherever possible; for example, data flows must be configured dynamically between new services. Data rates will also be much higher in the future than today. This is one of the main reasons why we need to search for new technologies for data transfer in vehicles, based on an in-vehicle Ethernet network. The process of configuring networks automatically has been discussed several times in recent years. One of the next steps is verifying and validating the automatic configuration process during the development of the new communication network. This research paper identifies several ways to ensure that the automatically generated network configuration leads to a secure system. To achieve this, other parts of the company's enterprise IT architecture and network technologies, the conventional vehicle network, and further options for verification and validation are analysed.
A large proportion of (post-)modern poetry contains no or hardly any punctuation. In our contribution, we investigate how well punctuation information can be recovered for post-modern poetry based on the information contained in the text and speech of free verse poems. We use the world's largest corpus of spoken (post-)modern poetry, from our partner lyrikline, which contains the corresponding audio recording of each poem as spoken by the original author and features translations for many of the poems. We identify lines that contain a phrase break in the middle of the poetic line, which may already be helpful for philological analysis on the one hand, and identify the position of the break within the line on the other. We select those poetic lines that contain one or more punctuation characters that typically indicate a phrase break in poetry (.,;:!?/) somewhere in the middle (rather than only at the end of the line) as our target class. We train a neural network (a bidirectional recurrent neural network (RNN) based on gated recurrent units (GRU) with attention) that combines audio and textual features to identify the punctuation, with the goal of applying it to reconstruct punctuation within a corpus of unpunctuated poems. Our results clearly indicate that speech is helpful for recovering the constituency structure of post-modern poetry that is partially obfuscated by missing punctuation.
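The target-class selection can be approximated with a simple pattern check over poetic lines; a sketch (the paper's exact selection criteria may differ in detail):

    # Sketch: mark poetic lines whose *middle* (not line end) contains a
    # phrase-break punctuation character, as candidates for the target class.
    import re

    BREAK_CHARS = r"[.,;:!?/]"
    # punctuation followed by further non-space material on the same line
    MIDLINE = re.compile(BREAK_CHARS + r"\s*\S")

    poem_lines = ["the night, it falls",      # mid-line break -> target class
                  "over the quiet river",     # no punctuation
                  "and no one speaks."]       # punctuation only at line end

    targets = [bool(MIDLINE.search(line.rstrip())) for line in poem_lines]
    print(targets)   # [True, False, False]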
In this paper, we present a study in which a robot initiates interactions with people passing by in an in-the-wild scenario. The robot adapts the loudness of its voice dynamically to the distance of the respective person approached, thus indicating who it is talking to. It furthermore tracks people based on information on body orientation and eye gaze and adapts the text produced based on people's distance autonomously. Our study shows that the adaptation of the loudness of its voice is perceived as personalization by the participants and that the likelihood that they stop by and interact with the robot increases when the robot incrementally adjusts its behavior.
We present an open-source plugin for live subtitling in the popular open-source video conferencing software BigBlueButton. Our plugin decodes each speaker's audio stream separately and in parallel, thereby obviating the need for speaker diarization and seamlessly handling overlapped talk. Any Kaldi-compatible nnet3 model can be used with our plugin, and we demonstrate it using freely available TDNN-HMM-based ASR models for English and German. Our subtitles can be used as they are (e.g., in loud environments) or can form the basis for further NLP processes. Our tool can also simplify the collection of remotely recorded multi-party dialogue corpora.
We present a fully automatic solution for German video subtitling, with a focus on lecture videos. We rely entirely on open source models and scripts for German ASR, automatic punctuation reconstruction and subtitle segmentation. All training scripts, 1000h of German speech training data, pre-trained models and the final subtitling program are publicly available. It can readily be integrated into lecture video platforms such as Lecture2Go. The automatically generated subtitles can also serve as a basis to make the video material more accessible (e.g. via search, keyword clouds, and the like) or for further manual revision, potentially helping in significantly speeding up manual work. A particular challenge that we observe in lectures are technical terms that are frequent in a particular lecture, but infrequent in a typical language model and that might be out of vocabulary for a general purpose ASR. We approach this challenge by extracting texts from accompanying lecture slides to adapt the language model of our TDNN-HMM based ASR system. We demonstrate the usability of the full system and its generated subtitles and evaluate on a dataset of manually transcribed lectures with an average of 26.3% WER.
Speech quality and likability are a multi-faceted phenomenon consisting of a combination of perceptual features that cannot easily be computed or weighed automatically. Yet, it is often easy to decide which of two voices one likes better, even though it would be hard to describe why, or to name the underlying basic perceptual features. Although likability is inherently subjective and individual preferences differ, generalizations are useful, and there is often a broad intersubjective consensus about whether one speaker is more likable than another. We present a methodology to efficiently create a likability ranking for many speakers from crowdsourced pairwise likability ratings, which focuses manual rating effort on pairs of similar quality using an active sampling technique. Using this methodology, we collected pairwise likability ratings for many speakers (>220) from many raters (>160). We analyze listener preferences by correlating the resulting ranking with various acoustic and prosodic features. We also present a neural network that is able to model the complexity of listener preferences and the underlying temporal evolution of features. The recurrent neural network achieves remarkably high performance in estimating the pairwise decisions, and an ablation study points toward the criticality of modeling temporal aspects in speech quality assessment.
The Spoken Wikipedia Corpus collection: Harvesting, alignment and an application to hyperlistening
(2019)
Spoken corpora are important for speech research, but are expensive to create and do not necessarily reflect (read or spontaneous) speech ‘in the wild’. We report on our conversion of the preexisting and freely available Spoken Wikipedia into a speech resource. The Spoken Wikipedia project unites volunteer readers of Wikipedia articles. There are initiatives to create and sustain Spoken Wikipedia versions in many languages and hence the available data grows over time. Thousands of spoken articles are available to users who prefer a spoken over the written version. We turn these semi-structured collections into structured and time-aligned corpora, keeping the exact correspondence with the original hypertext as well as all available metadata. Thus, we make the Spoken Wikipedia accessible for sustainable research. We present our open-source software pipeline that downloads, extracts, normalizes and text–speech aligns the Spoken Wikipedia. Additional language versions can be exploited by adapting configuration files or extending the software if necessary for language peculiarities. We also present and analyze the resulting corpora for German, English, and Dutch, which presently total 1005 h and grow at an estimated 87 h per year. The corpora, together with our software, are available via http://islrn.org/resources/684-927-624-257-3/. As a prototype usage of the time-aligned corpus, we describe an experiment about the preferred modalities for interacting with information-rich read-out hypertext. We find alignments to help improve user experience and factual information access by enabling targeted interaction.
Speech-based interactive systems, such as virtual personal assistants, inevitably use complex architectures, with a multitude of modules working in series (or, less often, in parallel) to perform a task (e.g., giving personalized movie recommendations via dialog). Add modules for evoking and sustaining sociability with the user, and the accumulation of processing latencies through the modules results in considerable turn-taking delays. We introduce incremental speech processing into the generation pipeline of the system to overcome this challenge with only minimal changes to the system architecture, through partial underspecification that is resolved as necessary. A user study with a sociable movie recommendation agent shows that this objectively diminishes turn-taking delays; furthermore, users not only rate the incremental system as more responsive, but also rate its recommendation performance higher.