Information and Communication
The second ITG workshop "Sprachassistenten – Anwendungen, Implikationen, Entwicklungen" (Voice Assistants – Applications, Implications, Developments) took place on 5 March 2024 in Regensburg. It was an organizational and thematic continuation of the first workshop, held four years earlier in Magdeburg in 2020. As before, it was affiliated with the conference Elektronische Sprachsignalverarbeitung. The workshop featured diverse and interdisciplinary contributions, presented as invited talks and submitted posters. Thanks to a good mix of contributors from both academia and industry, a wide variety of aspects was discussed with a strong focus on applications.
One of the tasks PAULE [1, 2] solves is finding suitable control parameter (cp-)trajectories for a given target acoustics. These cp-trajectories can be used to synthesize speech with the articulatory speech synthesizer of the VocalTractLab (VTL) [3]. If the target acoustics contain substantial microphone noise or other background noises, PAULE occasionally optimizes not for the speech in the target but for these background noises. Adding a speech/non-speech classifier to the feedback and planning loop of PAULE should mitigate this resynthesis of background noises. Unfortunately, the improvements were minor, which might be due to uninformative gradients of the classifier. The importance of informative gradients and the use of classifiers to adapt PAULE to different tasks are explained and discussed.
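To make the classifier-in-the-loop idea concrete, here is a minimal, hypothetical sketch (all module names, shapes, and weights are placeholders, not PAULE's actual components): a cp-trajectory is refined by gradient descent on an acoustic loss plus a penalty from a differentiable speech/non-speech classifier, so the classifier's gradients can steer the optimization away from background noise.

```python
import torch

# Minimal sketch (not PAULE's actual code): stand-in forward model mapping
# cp-trajectories to acoustics, and a speech/non-speech classifier whose
# gradients discourage resynthesizing background noise.
pred_model = torch.nn.Linear(30, 60)        # placeholder predictive (forward) model
speech_clf = torch.nn.Sequential(torch.nn.Linear(60, 1), torch.nn.Sigmoid())

target_acoustics = torch.randn(100, 60)     # dummy target (e.g., log-mel frames)
cps = torch.zeros(100, 30, requires_grad=True)   # cp-trajectory to be planned
opt = torch.optim.Adam([cps], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    pred = pred_model(cps)                               # predicted acoustics
    acoustic_loss = torch.nn.functional.mse_loss(pred, target_acoustics)
    speech_prob = speech_clf(pred).mean()                # how "speech-like" the output is
    loss = acoustic_loss + 0.1 * (1.0 - speech_prob)     # penalize non-speech output
    loss.backward()                                      # gradients flow through the classifier
    opt.step()
```

As the abstract notes, this only helps if the classifier's gradients are informative about what makes the output speech-like.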
There is high confidence in the hypothesis that, in speech perception, the cycles of a θ-oscillation segment the auditory signal into syllables [8]. Yet the functionality of the oscillator generating the θ-oscillation is unknown. We follow the finding that, within an auditory scene, speech is perceived as a stream given by temporal coherence [12]. We work with the hypothesis that the θ-oscillator is driven by temporal features providing this coherence. We propose a new temporal feature called O-distance, which detects the onset of a syllable, the starting point t_o of a θ-cycle, triggered by the temporal distance from t_o to the instant of the maximal rise of the loudness curve of the vowel. To extract t_o from the auditory signal, we use the statistical properties of this distance based on the C-center hypothesis [25], which predicts a close temporal relation of the onset consonants to the onset of a vowel. The statistics are derived from reference O-distances extracted from an articulatory database, where the minima and maxima of the loudness are related to maxima and minima of the lower incisor and tongue tip. To judge the quality of the O-distance extracted from the auditory signal, we consider its temporal deviation from the reference O-distance. Currently we achieve a mean deviation of 34 ms.
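As an illustration of how such an onset estimate might be computed, the following sketch (synthetic data; invented constants such as the 60 ms expected O-distance) locates the instant of maximal loudness rise and places t_o an expected O-distance before it:

```python
import numpy as np

def estimate_syllable_onset(loudness, sr, expected_o_distance=0.060):
    """Illustrative only: find the instant of maximal rise of a loudness curve
    and place the syllable onset t_o an expected O-distance before it."""
    rise = np.gradient(loudness) * sr          # loudness slope per second
    t_max_rise = np.argmax(rise) / sr          # instant of maximal rise (vowel)
    return t_max_rise - expected_o_distance    # t_o, the start of the θ-cycle

# toy loudness curve: slow ramp into a vowel loudness peak
sr = 100                                       # loudness frames per second
t = np.linspace(0, 1, sr)
loudness = np.exp(-((t - 0.4) ** 2) / 0.01)    # synthetic vowel loudness bump
print(f"t_o = {estimate_syllable_onset(loudness, sr):.3f} s")
```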
This paper addresses the challenges and advancements in speech recognition for singing, a domain distinctly different from standard speech recognition. Singing encompasses unique challenges, including extensive pitch variations, diverse vocal styles, and background music interference. We explore key areas such as phoneme recognition, language identification in songs, keyword spotting, and full lyrics transcription. I will describe some of my own experiences when performing research on these tasks just as they were starting to gain traction, but will also show how recent developments in deep learning and large-scale datasets have propelled progress in this field. My goal is to illuminate the complexities of applying speech recognition to singing, evaluate current capabilities, and outline future research directions.
The use of chatbots based on large language models (LLMs) and their impact on society are influencing our learning experience platform Hochschul-Assistenz-System (HAnS). HAnS uses machine learning (ML) methods to support students and lecturers in online learning and teaching processes [1]. This paper introduces LLM-based features available in HAnS which use the transcript of our improved Automatic Speech Recognition (ASR) pipeline, with an average transcription duration of 45 seconds and an average word error rate (WER) of 6.66% on over 8 hours of audio data from 7 lecture videos. An LLM-based chatbot can answer questions on the lecture content, with the ASR transcript provided as context. Summarization and topic segmentation also use the LLM to improve our learning experience platform. We generate multiple-choice questions with the LLM, using the ASR transcript as context, at 3-minute intervals during playback and display them in the HAnS frontend.
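A rough sketch of the question-generation step under stated assumptions: `llm` stands in for whatever completion backend HAnS uses, and the prompt wording and segment format are invented for illustration.

```python
# Sketch of the idea only; `llm` is a placeholder callable for the chat/completion
# backend, and the prompt text is invented, not taken from HAnS.
def build_mcq_prompt(transcript_window: str, n_questions: int = 1) -> str:
    return (
        "You are a teaching assistant. Based only on the lecture transcript "
        f"below, write {n_questions} multiple-choice question(s) with four "
        "options (A-D) and mark the correct answer.\n\n"
        f"Transcript:\n{transcript_window}"
    )

def mcqs_for_playback(transcript_segments, llm, window_s=180):
    """Group ASR segments into 3-minute windows and query the LLM per window."""
    questions = []
    window, window_start = [], 0.0
    for seg_start, seg_text in transcript_segments:   # (start_time_s, text) pairs
        if seg_start - window_start >= window_s and window:
            questions.append(llm(build_mcq_prompt(" ".join(window))))
            window, window_start = [], seg_start
        window.append(seg_text)
    if window:
        questions.append(llm(build_mcq_prompt(" ".join(window))))
    return questions
```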
In this study, we address the complex dynamics of emotional speech and comprehensively examine the integration of rhythmic and vocal features to recognize emotional patterns. Our exploration is conducted using two German emotional corpora: VMEmo and EmoDB. Employing a combination of supervised methods (here, linear discriminant analysis, LDA) and unsupervised techniques (here, k-means clustering), we aim to uncover nuanced patterns within the emotional speech in these corpora. The application of LDA highlights salient patterns across different feature sets and focuses on the classification of speakers and prosodic characteristics. In addition, k-means clustering uncovers latent structures that reveal a subtle mapping between emotions and speech behavior. Our results suggest that it is possible to cluster data based on prosodic behaviors that are influenced by emotional changes. Although a precise mapping to the clusters derived from emotional labels could not be fully achieved, the results nonetheless reveal a moderate level of success in this investigation.
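The following toy sketch mirrors the described two-view analysis with scikit-learn on random stand-in features (not the VMEmo/EmoDB data): LDA for the supervised view, k-means for the unsupervised one, with a crude purity score as the cluster-label comparison.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # stand-in rhythmic/vocal feature vectors
y = rng.integers(0, 4, size=200)    # stand-in emotion labels (4 classes)

X_std = StandardScaler().fit_transform(X)

# Supervised view: project onto discriminant axes separating the emotions.
lda = LinearDiscriminantAnalysis(n_components=3).fit(X_std, y)
X_lda = lda.transform(X_std)

# Unsupervised view: cluster the same features and compare to the labels.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
purity = np.mean([np.mean(y[clusters == c] == np.bincount(y[clusters == c]).argmax())
                  for c in range(4)])   # majority-label purity per cluster
print(f"cluster purity = {purity:.2f}")
```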
This study investigates the effects of speech segmentation methods on speaker recognition models, particularly with regard to the use of rhythmic feature sets. Using three automatic methods and one manual method on the German Kiel Corpus, segmentation was performed based on the identification of vowel onsets. Subsequently, rhythmic variability indices derived from these intervals were calculated and used for principal component analysis and a support vector machine model in order to investigate the variation among speakers. The results underline the influence of signal segmentation methods on speaker recognition models.
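A simplified sketch of this pipeline, with synthetic vowel-onset times and the standard nPVI formula standing in for the paper's full index set:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def npvi(intervals):
    """Normalized pairwise variability index (standard formula)."""
    d = np.asarray(intervals, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(d[1:] - d[:-1]) / (d[1:] + d[:-1]))

def rhythm_features(vowel_onsets):
    """Illustrative rhythmic variability indices from vowel-onset times."""
    iv = np.diff(vowel_onsets)                     # inter-onset intervals
    return [iv.mean(), iv.std(), iv.std() / iv.mean(), npvi(iv)]

rng = np.random.default_rng(1)
# toy data: 40 utterances from 4 speakers with slightly different timing habits
X, y = [], []
for spk in range(4):
    for _ in range(10):
        onsets = np.cumsum(rng.gamma(4.0 + spk, 0.05, size=20))
        X.append(rhythm_features(onsets)); y.append(spk)

X_pc = PCA(n_components=2).fit_transform(np.array(X))  # inspect speaker variation
clf = SVC(kernel="rbf").fit(X_pc, y)                   # speaker model on the PCs
print(f"training accuracy: {clf.score(X_pc, y):.2f}")
```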
PROM surveys, used to measure the effect of rehabilitation treatments, are typically filled out on paper and often suffer from low response rates. Replacing them with a multimodal survey system supporting touch and speech interaction could lower hurdles and therefore increase data quantity. This requires task-specific training samples for the Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) components to classify spoken answers into one of the standardized PROM answer options.
Due to the lack of training data for medical PROM surveys, we created augmented text samples from each answer option description combined with different templates. To improve training capabilities, introduce a proper test set, and evaluate the ASR, we also collected 1,797 real voice samples within an empirical study. Further, we incorporate the contextual knowledge of the current question into our NLU architecture by implementing one classifier for every question scale.
Our results reveal that training with empirical data leads to better results than augmented data from templates and original answer option descriptions. Because 33% of participants' answers were mislabeled due to the ambiguity of the task, overall NLU performance is low, with up to 51.1% accuracy and a rank-1 accuracy of up to 79.3%. We also find that our implementation of many scale-specific NLU classifiers significantly outperforms a single NLU classifier for all labels that incorporates the same contextual knowledge after the prediction, by 8 percentage points.
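The scale-specific classifier idea can be illustrated with a toy scikit-learn setup (scales, labels, and training phrases are invented here; the paper's actual NLU architecture is not specified):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the idea above: one NLU classifier per question scale,
# so each model only separates the answer options of its own scale.
training_data = {
    "pain_5pt": [("no pain at all", 0), ("hardly any pain", 1), ("moderate pain", 2),
                 ("quite severe pain", 3), ("worst pain imaginable", 4)],
    "yes_no":   [("yes definitely", 1), ("no not really", 0), ("absolutely yes", 1),
                 ("certainly not", 0)],
}

classifiers = {}
for scale, samples in training_data.items():
    texts, labels = zip(*samples)
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    classifiers[scale] = clf.fit(texts, labels)

def classify_answer(question_scale: str, asr_transcript: str) -> int:
    """Route the spoken answer to the classifier of the current question's scale."""
    return int(classifiers[question_scale].predict([asr_transcript])[0])

print(classify_answer("pain_5pt", "it hurts quite a lot"))
```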
Speech Recognition Errors in ASR Engines and Their Impact on Linguistic Analysis in Psychotherapies
(2024)
Modern intervention planning in psychotherapies may benefit from predicting process-relevant psychotherapy constructs through automated speech analysis. One essential step is the extraction of relevant linguistic speech markers by ASR engines, which, because of the highly sensitive data, must work offline. We analyze transcription errors from NeMo, Whisper, and Wav2Vec2.0, focusing on their impact on linguistic markers that usually require high-quality transcripts. Using part-of-speech tagging, we examine error occurrences among different word types. The Linguistic Inquiry and Word Count (LIWC) software aids in extracting markers. We highlight challenges in transcribing spontaneous speech, prevalent in therapy, and compare results with the Mozilla CommonVoice dataset, which features read speech.
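A minimal sketch of POS-wise error counting under simple assumptions (a difflib alignment rather than a full WER aligner, and externally supplied POS tags):

```python
import difflib
from collections import Counter

def pos_error_profile(ref_tokens, hyp_tokens, pos_tags):
    """Align reference and ASR hypothesis and count which POS classes of the
    reference are deleted or substituted. `pos_tags` maps each reference token
    position to a POS tag (e.g., from an offline tagger)."""
    errors = Counter()
    sm = difflib.SequenceMatcher(a=ref_tokens, b=hyp_tokens)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("replace", "delete"):
            for i in range(i1, i2):
                errors[pos_tags[i]] += 1
    return errors

ref = "ich habe mich heute ehrlich gesagt sehr unwohl gefühlt".split()
hyp = "ich habe mich heute erlich gesagt unwohl gefühlt".split()
tags = ["PRON", "AUX", "PRON", "ADV", "ADV", "VERB", "ADV", "ADJ", "VERB"]
print(pos_error_profile(ref, hyp, tags))   # Counter({'ADV': 2})
```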
This paper investigated whether predictability-based adjustments in production have listener-oriented consequences in perception. By manipulating the acoustic features of a target syllable in different predictability contexts in German, we tested 40 listeners' perceptual preference for the manipulation. Four source words underwent acoustic modifications on the target syllable. Our results revealed a general preference for the original (unmodified) version over the modified one. However, listeners favored the unmodified version more strongly when the source word occurred in a more predictable context than in a less predictable one. The results show that predictability-based adjustments have perceptual consequences and that listeners bring predictability-based expectations to perception.
Recent neural text-to-speech (TTS) models are able to synthesize highly natural speech signals using deep learning techniques. In practical applications, it can be desirable to have explicit control over the prosody (speech rate, fundamental frequency, and energy) of the synthesized speech. Such controllability can be achieved by adding prosody prediction modules, whose main purpose is to estimate plausible prosody features for each phoneme in the text input. This explicit modeling also allows for changing prosody features at inference time, consequently enabling the adjustment of the prosody in the synthesized audio. In this paper, we evaluate to which extent deliberate manipulation of such prosody features is reflected in the resulting speech audio. We focus particularly on changing the pitch (i.e., fundamental frequency) while applying different normalization strategies.
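For illustration, a small sketch of what pitch manipulation under different normalization choices might look like (values and function names are invented): a constant offset in the log-F0 domain shifts all phonemes by the same musical interval, while a linear-domain offset does not.

```python
import numpy as np

def shift_pitch(f0_per_phoneme, semitones, mode="log"):
    """Illustrative pitch manipulation of per-phoneme F0 predictions.
    In log space a constant offset is a musically uniform shift; in linear
    space the same offset affects low and high voices differently."""
    f0 = np.asarray(f0_per_phoneme, dtype=float)
    if mode == "log":
        return f0 * 2.0 ** (semitones / 12.0)      # uniform shift in log-F0 domain
    offset_hz = f0.mean() * (2.0 ** (semitones / 12.0) - 1.0)
    return f0 + offset_hz                          # crude linear-domain variant

f0 = [110.0, 120.0, 95.0, 130.0]                   # predicted F0 per phoneme (Hz)
print(shift_pitch(f0, +2, mode="log"))             # raise pitch by two semitones
```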
Generative models for audio are commonly used for music composition, sound effect generation in video game development, audio restoration, voice cloning, and more. The ease of generating indistinguishable fake audio with deep learning poses a major threat to personal privacy, online security, and political discourse. Evaluating the quality and realism of these synthetic utterances is crucial for mitigating the potential for misinformation and harm. To assess this threat, this paper conducts a systematic review, using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), of how these deepfake models are currently evaluated. The analysis of 86 papers shows that the majority of evaluation is conducted on a machine level and highlights a research gap regarding the human perception of deepfakes. This paper explores various methods and perceptual measures employed in assessing audio deepfakes and evaluates their strengths, limitations, and future directions.
In this work we assess whether there is information in pauses in between utterances of the same or different speakers that is predictive of the following speaker's utterance. We present models that connect a person's visual features before they speak to their upcoming utterance. In our experiments we find that out-of-the-box pre-trained models can already reach better-than-chance performance in correlating video embeddings to utterance embeddings. In contrast, models that attempt to predict the first word after the pause do not outperform a unigram model, indicating that our models do not read lips (based, e.g., on co-articulation effects) but rather capture more fundamental aspects of the upcoming utterance.
Wine making is usually considered a domain far removed from speech and language processing. In one particular aspect, however, the two domains are related, namely in the description of wine aromas. These descriptors are used for creating wine expertise as well as more general (advertisement-like) textual representations. In the current paper, we use Natural Language Processing techniques, especially Named Entity Recognition, to identify Aspects and Opinions reflecting wine characteristics. These are combined with analyses of their relations (triplet extraction), building Aspect-Opinion pairs to establish indicative aroma descriptors, also attempting to capture the complex interplay among these individual statements. In our experiments, we rely on the Falstaff corpus, comprising a huge set of wine descriptions. This results in an average F1 score of around 0.85 for Aspect-Opinion classification. For triplet generation, multiple strategies were compared, resulting in an average F1 score of 0.67 on this challenging task. For both tasks we rely on only a handful of manually annotated samples, applying pseudo-labeling methods from seed data to achieve automatic labeling.
Despite the potential of AI, only a small percentage of small and medium-sized enterprises (SMEs) are adopting it due to data issues, expertise gaps, and implementation barriers. Zero-shot learning offers a promising approach for SMEs by minimizing these obstacles. This paper explores the use of zero-shot learning in a real-world NLP classification task on online comments (comparable with intent classification tasks) from the e-learning platform Sofatutor. While fine-tuning has achieved high accuracy (82.3–86.5%), zero-shot models have shown lower performance (39.3–61.4%) due to different label selection, the grouping of different scenarios into one class, and the type of classification task. Even if the current accuracy is not sufficient for practical application, pre-filtering the data using zero-shot learning might be a promising option for SMEs.
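Such a zero-shot setup can be reproduced with an off-the-shelf NLI model via the Hugging Face pipeline API; the comment and label set below are invented for illustration and are not the Sofatutor taxonomy:

```python
from transformers import pipeline

# Zero-shot classification with a generic NLI model; labels are illustrative.
clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

comment = "The video stops loading after two minutes."
labels = ["technical problem", "content question", "praise", "pricing"]
result = clf(comment, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```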
Wines are complex beverages whose taste can be described either numerically or textually, the former by rating the intensities of different aroma characteristics, often with the help of a wine tasting wheel, and the latter with crisp terms, often in a poetic fashion. These descriptions are frequently produced by wine sommeliers, who can describe a wine with a single sniff. Usually, each sommelier has a unique style when it comes to describing a wine textually; research has shown that such differences have no negative impact on correctly classifying wines on the basis of their color, grape variety, region, etc. Given the recent advancements in the field of Natural Language Processing, especially the emergence of Large Language Models, we examine the ability of Llama 2 to generate texts pertaining to a specific wine color, given a list of aroma intensities as input prompts. In our experiments, we relied on data from Meininger and Falstaff, and on a combination of domain adaptation and pseudo-labeling techniques to create the corpus on which to train the Llama 2 model. We also relied on a voting scheme of three differently trained classifiers to evaluate the wine-color-specific text generation capabilities of Llama 2. Additionally, we employed domain experts to evaluate the quality of a sample set of texts generated by Llama 2.
Our paper introduces a new technology for posture research and training: the INteractive POsture COrrector, IN-POCO. The device warns its users about unfavorable postures when speaking (e.g., sitting in video conferences) and is thus suitable as an aid for rhetoric trainers. In addition, IN-POCO can also collect time-aligned posture and speech signals for researching prosody-posture relationships in the speech sciences. We outline the motivation for the development of IN-POCO and describe the key technical specifications and operational characteristics. The paper concludes with a pilot experiment in which we provide initial evidence that, for a communicative (public) speaking task, posture does indeed affect speech prosody in gender-specific ways, in line with claims of rhetoric trainers and guidebooks, and such that an unfavorable (e.g., humped) posture can be assumed to reduce the speaker's vocal charisma.
The continuous response measurement (CRM) method, with its capability for continuous rating, is an important complement to the established methods in the repertoire of speech-effect research. To fully exploit this added value, the development of a software solution is presented as an optimization of the method. The optimized CRM method is validated by means of a use case from speech-science telecommunications research within a user acceptance test. The functionality and usability of the developed CRM software solution are tested in an A/B study, taking into account the criteria relevant to speech-effect research.
The overall result of the user acceptance test is positive for the Evalue software. The optimization of the CRM method has produced a flexibly deployable and thus versatile CRM software solution.
The growing prevalence of voice assistants has sparked privacy concerns with respect to content privacy and potential human-based attacks such as eavesdropping, which make users feel uncomfortable utilizing them in public. To address these challenges, understanding human privacy perceptions in acoustic environments becomes paramount. This understanding can empower voice assistants to accurately quantify privacy perceptions, adapt conversational patterns, and ultimately enhance human-machine interaction. This study draws inspiration from human-to-human interactions and previous research on acoustic privacy to quantify privacy perceptions in environments characterized by babble noise. The primary objective is a comprehensive evaluation of both objective and subjective measures to quantitatively capture privacy perceptions in acoustic environments.
Speaker recognition systems often use mel-scaled cepstral coefficients (MFCCs) as main features. As an alternative to MFCCs, Godoy et al. (2015) proposed a different type of short-term spectral analysis that provides features related to the lower vocal tract (LVT). They are calculated as the ratio of the acoustic short-time spectra during the closed and open phases of the glottal oscillation cycles based on a pitch-synchronous analysis. These features were suggested to be particularly speaker-specific and might therefore be suitable to substitute or complement MFCCs in speaker recognition systems. The present study investigated the benefit of these features in an i-vector-based speaker recognition system. Using the LVT features alone, the system achieved a speaker recognition rate of 92.3% with 63 enrolled speakers. When the LVT features were fused with conventional MFCC features, the recognition rate was about equal to the recognition rate using MFCC features alone (> 98%).
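A heavily simplified sketch of the closed/open spectral-ratio idea (not Godoy et al.'s exact procedure; the masks, the stand-in frame, and the 30% closed-phase assumption are all illustrative):

```python
import numpy as np

def lvt_like_ratio(frame, closed_mask, open_mask, n_fft=512):
    """Simplified illustration: ratio (in dB) of short-time magnitude spectra
    taken over closed-phase vs. open-phase samples of one analysis frame."""
    closed_spec = np.abs(np.fft.rfft(frame * closed_mask, n_fft))
    open_spec = np.abs(np.fft.rfft(frame * open_mask, n_fft))
    return 20.0 * np.log10((closed_spec + 1e-9) / (open_spec + 1e-9))

# toy pitch-synchronous masks: assume the first 30% of each period is "closed"
sr, f0 = 16000, 120.0
period = int(sr / f0)
frame = np.random.randn(4 * period)                  # stand-in speech frame
phase = (np.arange(frame.size) % period) / period
closed = (phase < 0.3).astype(float)
print(lvt_like_ratio(frame, closed, 1.0 - closed)[:5])
```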
Octra Backend is a portable, web-based infrastructure for transcription projects that can be deployed locally in the field or in protected areas, within a restricted intranet, or globally accessible on the internet. The development goals were to meet the highest possible security requirements, good scalability, and easy installation even without administrator rights. Octra Backend is implemented in Node.js and is available for macOS, Windows, and Linux.
We represent a meaning as a list of pattern signals, and our goal is to compare a further incoming signal against it. Quantum logic motivates the use of orthogonal projectors to express the sought similarity as a projection probability. The results of the quantum-logic method depend on how the signals are preprocessed. In this paper, we examine and discuss four different preprocessing options.
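The core quantum-logic step can be written down compactly: with the pattern signals spanning a subspace and P the orthogonal projector onto it, the similarity of a signal x is the projection probability ||Px||^2 / ||x||^2. A small numerical sketch of this general principle (not the paper's specific preprocessing variants):

```python
import numpy as np

def projection_probability(patterns, signal):
    """Similarity of `signal` to the span of the pattern signals, expressed as
    the projection probability ||Px||^2 / ||x||^2, with P the orthogonal
    projector onto span(patterns)."""
    V = np.column_stack(patterns)          # patterns as columns
    Q, _ = np.linalg.qr(V)                 # orthonormal basis of the span
    x = np.asarray(signal, dtype=float)
    return float(np.linalg.norm(Q.T @ x) ** 2 / np.linalg.norm(x) ** 2)

patterns = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(projection_probability(patterns, np.array([1.0, 1.0, 1.0])))  # 2/3
```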
NoiSLU: a Noisy speech corpus for Spoken Language Understanding in the Public Transport Domain
(2024)
The use of local public transport requires the barrier-free purchase of a ticket. Travellers who are not proficient in the local language benefit from multilingual voice interaction with the (ticket) machine. This paper presents a nearly parallel audio dataset with 13,218 annotated user queries from 20 speakers for English, German, and Dutch. The domain-specific speech corpus can be understood as an evaluation dataset for future research in Spoken Language Understanding (SLU) and thus enables researchers to improve the quality of human-machine interaction applications. Furthermore, we compare the SLU performance of different compositions of Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) models in baseline experiments on different test datasets.
To avoid overgeneration in Minimalist Grammars (MGs), entries with empty exponents (ε-entries) can be used. An entry consists of an exponent representing the spoken or written form of a word, a feature list encoding the syntax, and a λ-expression representing the semantics. Empty entries, however, make the grammar less usable for parsing. This work presents a transformation algorithm for MGs that reduces the number of ε-entries so that the grammars become usable for parsers again. To this end, the ε-entries are precombined with the other entries, creating new entries. The now superfluous ε-entries can then be removed without issue. The algorithm was tested on more than 260 numeral grammars.
The present study investigates to what extent the scripts by de Jong et al., widely used in phonetics for the automated assessment of various aspects of speech flow, are suitable for assessing the language proficiency of children, and how the methodology could be adapted. For this purpose, speech data from preschool children with German as their first or second language were elicited using a serious game for language assessment. The audio data were annotated for articulation rate, pauses, and filler particles, both automatically by the scripts and manually. The results show that the scripts for determining the articulation rate agree relatively well with the manual measurements and are suitable for use in language assessment procedures. The automatic detection of speech pauses also shows high precision and could be used as an instrument in language assessments. Such use would benefit from an extension with the manual method presented here for annotating disfluent and non-disfluent pauses. For filler particles, in contrast, the automated classification turned out to be less suitable: no high agreement with the human annotation was found. For practical use, this method still needs to be extended, for example by incorporating the pause detection.
Synchrony of Glottal Area Waveform Parameters During the Production of Obstruents in Vowel Context
(2024)
Obstruents are phonemes which require partial or total obstruction of airflow through the vocal tract. Their articulation also requires adjustments of the laryngeal settings, e.g., an abduction gesture to stop vocal fold vibration for voiceless obstruents. This study investigated the laryngeal settings during the production of voiced and voiceless obstruents in vowel context to analyze the degree of synchrony of the involved glottal gestures. High-speed laryngoscopy images were used to determine the glottal area waveform, from which the time functions of the parameters open quotient (OQ), fundamental frequency (f0), and AC and DC amplitude (ACA and DCA) were calculated and analyzed. Significant correlations were found between all pairs of parameters, with strong correlations between some of them, e.g., open quotient and AC amplitude. Correlations were also either consistently positive or consistently negative for specific pairs of parameters across all investigated phonemes. These results could point to consistent patterns in laryngeal gestures that could enhance articulatory speech synthesis.
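For illustration, a toy extraction of per-cycle parameters from a synthetic glottal area waveform, followed by the pairwise correlations; the threshold-based open-quotient definition is an assumption, not the study's exact method (f0 is omitted because the synthetic cycles below are perfectly periodic):

```python
import numpy as np

def gaw_cycle_parameters(gaw, sr, f0_est):
    """Toy per-cycle parameters from a glottal area waveform (illustrative
    definitions): open quotient, AC amplitude, and DC offset per cycle."""
    period = int(sr / f0_est)
    oq, aca, dca = [], [], []
    for start in range(0, len(gaw) - period, period):
        cyc = gaw[start:start + period]
        threshold = cyc.min() + 0.1 * (cyc.max() - cyc.min())
        oq.append(np.mean(cyc > threshold))    # fraction of the cycle "open"
        aca.append(cyc.max() - cyc.min())      # AC amplitude
        dca.append(cyc.min())                  # DC offset
    return np.array([oq, aca, dca])

rng = np.random.default_rng(2)
sr = 4000
t = np.arange(0, 0.5, 1 / sr)
gaw = np.clip(np.sin(2 * np.pi * 120 * t), 0, None) + 0.05 + 0.02 * rng.normal(size=t.size)
params = gaw_cycle_parameters(gaw, sr, 120.0)
print(np.corrcoef(params).round(2))   # pairwise correlations, as analyzed above
```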
Concatenative text-to-speech (TTS) systems remain a widely used, cheaper alternative to neural TTS systems. Yet concatenation of prerecorded units entails some drawbacks, such as spectral distortion, the perceptual consequences of which remain unclear. In an attempt to bridge this gap, our study focused on the effect of spectral distortion in vowel formants on perceived speech quality in naturally read, manipulated German words as well as non-words. More specifically, we explored the distortion effect for a varying number of affected formants, at different magnitudes and directionalities, in the two corner vowels /a:/ and /i:/. The results indicate that single-formant manipulations have a less pronounced effect on listeners' perception than multiple-formant perturbations. The threshold at which the distortion became generally audible was estimated to lie between 0.4 and 1.0 bandwidths. The directionality of the distortion was not found to be significant.
The continuous advancement of digitization extends beyond educational institutions, giving rise to numerous innovations, particularly in the realm of study information [1]. One avenue for incorporating digital methodologies involves leveraging conversational agents (CAs) [2], which serve as interactive interfaces bridging the gap between humans and computers. In the broader context, conversational agents are gaining prominence and offer several benefits to their users; the overarching goal is to comprehensively assist users through these intelligent systems. Consequently, examining existing university chatbots becomes imperative to discern the areas where they excel. This research scrutinizes diverse chatbot systems, delving into their use cases and the challenges they encounter, by employing a systematic review. It turns out that chatbots support universities the most in the fields of administration, e-learning, and mental health. Furthermore, the study investigates practical experiences regarding the potential applications and implementation of these systems in university settings, incorporating insights from an online survey and interviews, both conducted with experts. The conclusion is that preparation is the key success factor for a chatbot implementation; otherwise, a failed system is nearly impossible to rescue once users have lost trust in it. Therefore, careful preparation in the technical and organizational fields is necessary to provide a helpful assistant.
The therapy app aphaDIGITAL is being developed within a research project to support people with aphasia in their home environment. The project combines proven therapy methods with digital technologies, including artificial intelligence and an interactive avatar named Eva. This article examines the analysis and development of the interaction mechanisms that have the greatest impact on digital assistance in aphasia therapy. To this end, real therapy sessions between people with language impairments and their therapists were examined for specific features using conversation analysis. Building on this, a prototypical mouth model was created through manual animation and a dedicated articulation system was designed to provide a correspondingly authentic model of German articulation and coarticulation patterns.
This paper describes the usability evaluation of the parts of the CHATU chatbot. The evaluation was conducted with 21 participants. A focus of this paper is the description of the carefully designed evaluation procedure, which aims to avoid textual priming of the participants. The general evaluation procedure can be applied to other speech- or text-based conversational systems, and additional material is provided. The evaluation results show that the usability and user experience of CHATU are positively rated. However, the naturalness and novelty of the interaction are not optimal, and the potential influence of users’ experience with LLMs on the evaluation is discussed.
This paper describes a field study conducted with a museum chatbot at the Städel Museum Frankfurt. The chatbot uses the BERT language model for natural language processing and can be operated via touchscreen as well as via speech input. Prior to the study, hypotheses regarding the user experience of the system were formulated and a system-specific questionnaire was designed, which was used to inquire (among other things) about the perceived quality of the speech output and the frequency of audio guide use in museums. During the interaction with the chatbot, log data was collected and stored in the back-end system. The results show a significant correlation between perceived speech quality and user experience. An exploratory data analysis revealed that participants who used only speech input rated the system as significantly more stimulating than participants who used only touch input. Touch input turned out to be the most efficient input modality in terms of answer correctness and was rated highest regarding pragmatic quality. Interestingly, touch input was preferred by younger participants. We discuss our findings and conclude that speech interaction should be seriously considered to create engaging conversational user experiences in museums.
Universities of applied sciences (HAW), as regional drivers of innovation, bear a societal responsibility through their cooperation with companies and society. This cooperation takes place within activities of knowledge and technology transfer. However, assessing the success of these transfer activities regularly causes difficulties, as suitable indicators for measuring success are lacking. Within the joint research project Transfer_i, funded by the German Federal Ministry of Education and Research (BMBF), a model was developed for objectifying and measuring research performance, research-based transfer, and its market implementation in the form of innovations. The research project identified causal relationships for successful transfer and defined corresponding indicators in order to enable the management of transfer processes on this basis.
This contribution presents two models and indicators developed in the Transfer_i project that can represent the preconditions for a successful transfer process. On this basis, we use a project (MAGGIE) of the Ostbayerische Technische Hochschule (OTH) Regensburg with several regional partners to show how the previously defined models and indicators can be applied in a concrete use case. Finally, we describe the framework conditions required for the successful implementation of transfer at universities and how related indicators can be introduced effectively.
The vernier or nonius (in other languages) goes back to a measuring tool used in navigation and astronomy named after its inventor Pedro Nunes (1502–1578; Latin: Petrus Nonius), a Portuguese mathematician and navigator. The nonius was created in 1542 to take finer measurements on circular instruments such as the astrolabe. In 1631 the French mathematician Pierre Vernier (1580–1637) adapted and simplified the system which was later denoted “vernier”.
We present a method to construct a vernier-like scale for logarithmic scales (as used for typical slide rules), which results in variable tick spacings. The idea is to put the non-linear scale on a spiral. The method can be applied to any non-linear scale, not only logarithmic scales.
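A small sketch of the construction under illustrative choices (an Archimedean spiral r = a + b·θ and one turn per decade): equal logarithmic steps map to equal angles, while the physical tick spacing varies with the radius, as described above.

```python
import numpy as np

def spiral_log_ticks(values, turns_per_decade=1.0, a=1.0, b=0.15):
    """Sketch of the idea: place a logarithmic scale on an Archimedean spiral
    r = a + b*theta, so equal log steps become equal angles while the physical
    tick spacing varies with the radius. Constants a, b are illustrative."""
    theta = 2 * np.pi * turns_per_decade * np.log10(values)  # angle from log value
    r = a + b * theta
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

ticks = spiral_log_ticks(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
spacing = np.linalg.norm(np.diff(ticks, axis=0), axis=1)
print(spacing.round(3))   # variable tick spacings along the spiral
```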
On analytic properties of the standard zeta function attached to a vector-valued modular form
(2022)
We prove a Garrett–Böcherer decomposition of a vector-valued Siegel Eisenstein series E_{2l,0} of genus 2 transforming with the Weil representation of Sp_2(Z) on the group ring C[(L'/L)^2]. We show that the standard zeta function associated with a vector-valued common eigenform f for the Weil representation can be meromorphically continued to the whole s-plane and that it satisfies a functional equation. The proof is based on an integral representation of this zeta function in terms of f and E_{2l,0}.
Despite the relevance and maturity of the Chief Information Officer (CIO) research field, no studies exist that exhaustively summarize the current body of knowledge, focusing on the development of the field over its entire timespan. The paper at hand addresses this research gap and presents an exhaustive literature review on the CIO research field using main path analysis. We identify the central papers in CIO research and eight main research streams by quantitatively and qualitatively analyzing 466 papers. We find that established research streams, e.g., ‘Evolving role of the CIO’ and ‘CIO hierarchical position and relationships’ as well as recently emerging research streams, e.g., ‘CIO as business enabler’ and ‘CIOs and IT security,’ draw growing attention. Based on our findings, we develop promising further avenues for research in the CIO field.
The present study examines key factors of successful CIOs in large German companies. With a median tenure of 4.0 years, German CIOs, who at 43% still predominantly report to the CFO, have a considerably shorter time in office than other C-level positions. The results of 60 interviews with successful German-speaking CIOs, selected primarily for their above-average tenure, reveal several key success factors: the basic prerequisite is always ensuring secure and efficient IT operations. Through effective and innovative change projects, the interviewed CIOs make the added value of IT transparent and act as bridge builders between IT and the business units. In doing so, they have a positive effect on company culture and establish IT as a lasting success factor within the business units. Successful CIOs themselves are not "techies"; rather, they are distinguished by strong leadership skills and a deep understanding of the business, paired with visionary thinking. This enables them to align IT with the future and to anticipate requirements and potentials for and from the business units at an early stage. The future development of the CIO organization and of IT paradigms, by contrast, is discussed somewhat controversially by the study participants; for example, there is as yet no consistent opinion on the usefulness and relevance of the CDO position.
The Controller Area Network (CAN) is a key component of the communication between many individual components in automated industrial environments. Special applications demand a higher standard of safety and fault tolerance, so increasing the reliability of the system is indispensable. This work investigates a dual-CAN bus system, including the design and implementation of a demonstrator setup. Five Nucleo STM32H7 boards are extended with a so-called Failure Injection Board (FIB) and a 3.5" TFT touch display and are redundantly interconnected via two separate CAN buses. The dual CAN network is realized with the help of the FIB. In addition, the termination resistors can be switched in at the push of a button, and wire breaks on the redundant CAN bus can be simulated via relays. As a further display and control option, a touch display is mounted on the Nucleo STM32H7 board; its GUI is intended to enable intuitive operation and to show the current status of the other nodes in the network. The FIB serves to develop algorithms for bypassing faults and wire breaks and to investigate other bus structures.
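The failover principle, though implemented on the STM32H7 in the demonstrator, can be sketched host-side with the python-can library (channel names and the simple send-failure policy are illustrative assumptions, not the demonstrator's firmware):

```python
import can

# Hedged sketch of dual-bus failover: on a send failure on the active bus,
# traffic is switched to the redundant bus.
class DualCan:
    def __init__(self, primary_channel="can0", secondary_channel="can1"):
        self.buses = [
            can.interface.Bus(channel=primary_channel, interface="socketcan"),
            can.interface.Bus(channel=secondary_channel, interface="socketcan"),
        ]
        self.active = 0   # index of the currently used bus

    def send(self, arbitration_id, data):
        msg = can.Message(arbitration_id=arbitration_id, data=data,
                          is_extended_id=False)
        try:
            self.buses[self.active].send(msg, timeout=0.1)
        except can.CanError:
            self.active = 1 - self.active          # fail over to the other bus
            self.buses[self.active].send(msg, timeout=0.1)
```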
Cybersecurity is a complex global phenomenon in which individuals, organisations, and society at large are at risk. These risks need to be kept in focus, and solutions for prevention and countermeasures need to be developed. In this paper, we describe the Joint Effort Workshop as an approach to raise awareness of these threats and as an opportunity to generate and exchange knowledge between students and experts. We conclude that mechanisms for systematic response to attacks require the developed technical foundations, but foremost the human behaviour, knowledge, and resilience to respond to risks, which can be experienced through the collaborative environment of the Joint Effort Workshop.
The smooth transition between stable, Talbot-effect-dominated and modulationally unstable nonlinear optical beam propagation is described as the superposition of oscillating, growing and decaying eigenmodes of the common linearized theory of modulation instability. The saturation of the instability in form of breather maxima is embedded between eigenmode growth and decay. This explains well the changes of beam characteristics when the input intensity increases in experiments on modulation instability and breather excitation in spatial-spatial experimental platforms. An increased accuracy of instability gain measurements, a variety of interesting nonlinear beam scenarios and a more selective and well-directed breather excitation are demonstrated experimentally.
Although the average tenure of CIOs has increased over the last years, the majority of CIOs have been in their positions for only three years or less. Nevertheless, some CIOs have been successful in their position for a long time. In this study, we use tenure as a proxy for success as a CIO. The goal of this paper is to examine factors that are critical to the success of long-term CIOs. For this purpose, we created and analyzed resumes of 384 CIOs. Out of these 384, we conducted 19 interviews with CIOs from top-tier companies and collected and analyzed both qualitative and quantitative data. In the process, we were able to identify nine factors that are critical for the success (CSF) of CIOs. These factors fall into three categories. Category “Personality” includes “Accepting and embracing change” (CSF #1), “Being perseverant to pursue long-term goals” (CSF #2), “Anticipating the future through visionary thinking” (CSF #3), and “Being empathetic to deal with uncertainty felt by co-workers” (CSF #4). The “Role Fulfilment” category includes “Cross-functional involvement and integration of the IT organization” (CSF #5), “Positioning and restructuring of the IT organization” (CSF #6), and “Well-connected and communicative leadership” (CSF #7). The “Organizational Environment” category consists of “Availability of skilled workforce” (CSF #8) and “Reporting line to the CEO” (CSF #9). CSFs 1, 2, and 3 were perceived as most important by the participating CIOs. The results may be of particular interest both to aspiring CIOs and equally their employing organizations, as they reflect what long-term CIOs value during their time in office.
A basic task in the design of an industrial robot application is the relative placement of robot and workpiece. Process points are defined in Cartesian coordinates relative to the workpiece coordinate system, and the workpiece has to be located such that the robot can reach all points. Finding such a location is still an iterative procedure based on the developers' intuition. One difficulty is the choice of one of the several solutions of the backward transform of a typical 6R robot.
We present a novel algorithm that simultaneously optimizes the workpiece location and the robot configuration at all process points using higher order optimization algorithms. A key ingredient is the extension of the robot with a virtual prismatic axis. The practical feasibility of the approach is shown with an example using a commercial industrial robot.
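To convey the flavor of the optimization, here is a deliberately reduced 2D sketch (not the paper's algorithm, which handles full 6R inverse kinematics and the virtual prismatic axis): the workpiece pose is optimized so that a soft reach penalty over all process points vanishes.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2D illustration: jointly choose the workpiece pose (x, y, phi) so that
# all process points fall within the robot's reach annulus. A soft penalty
# stands in for the full inverse-kinematics constraints of a 6R robot.
process_pts = np.array([[0.0, 0.0], [0.3, 0.0], [0.3, 0.2], [0.0, 0.2]])  # workpiece frame
R_MIN, R_MAX = 0.4, 1.1                                # robot reach limits (m)

def reach_penalty(pose):
    x, y, phi = pose
    rot = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    world = process_pts @ rot.T + [x, y]               # points in robot base frame
    r = np.linalg.norm(world, axis=1)
    return np.sum(np.maximum(R_MIN - r, 0) ** 2 + np.maximum(r - R_MAX, 0) ** 2)

res = minimize(reach_penalty, x0=[2.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x, reach_penalty(res.x))    # a pose with all points reachable (penalty 0)
```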
This paper connects research from business model innovation and information systems by exploring critical IT capabilities for servitized business models. The adoption of servitized business models is a major business model innovation strategy. At the same time, digitalization drives the evolution of IT capabilities at these business models. Scholars argue that it remains unclear how IT capabilities enable servitized business models to build a competitive advantage by achieving cost advantages or differentiation. This paper explores IT capabilities that enable building a competitive advantage for servitized business models based on a qualitative analysis of multiple published case studies. The authors identify configurations of IT capabilities among servitized business models. The findings contribute to servitization research by exploring IT capabilities and how they are combined among servitized business models. The insights help practitioners deploy digital technologies and IT assets effectively as building blocks of IT capabilities to advance their servitized business model.
Most large-scale organizations adopted Cloud Computing (CC) on a company level in recent years. Managers now face the challenge of appropriately implementing CC "operationally", i.e., for information systems (ISs). We refer to this as post-adoption, addressing the extent of technology usage after adoption. Specifically, managers need to choose among the CC delivery models Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). We differentiate the determinants of this post-adoption decision for IaaS, PaaS, and SaaS. Based on this analysis, we derive criteria that guide managers' delivery model selection: adopt 1) IaaS for ISs requiring flexibility and reduced time to market, 2) PaaS to access specialized resources, and 3) SaaS to focus on core competencies. Moreover, we analyze the impact on the CC strategy and postulate recommendations: I) acknowledge the interplay between governance and time to market, II) realize cost savings on the company level, and III) consider strategically important ISs for CC.
Adaptive Moment Estimation (ADAM) is a very popular training algorithm for deep neural networks and belongs to the family of adaptive gradient descent optimizers. However, to the best of the authors' knowledge, no complete convergence analysis exists for ADAM. The contribution of this paper is a method for the local convergence analysis in batch mode for a deterministic fixed training set, which gives necessary conditions for the hyperparameters of the ADAM algorithm. Due to the local nature of the arguments, the objective function can be non-convex but must be at least twice continuously differentiable. We then apply this procedure to other adaptive gradient descent algorithms and show local convergence with hyperparameter bounds for most of them.
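For reference, the ADAM update whose hyperparameters (step size α, moment decay rates β₁, β₂, stabilizer ε) the analysis constrains; this is the standard formulation from Kingma and Ba, with gradient g_t and parameters θ_t:

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, &
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,\\
\hat m_t &= \frac{m_t}{1-\beta_1^t}, &
\hat v_t &= \frac{v_t}{1-\beta_2^t},\\
\theta_t &= \theta_{t-1} - \alpha\, \frac{\hat m_t}{\sqrt{\hat v_t} + \varepsilon}.
\end{aligned}
```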
With the availability of large data sets, the desire for and the demands on analyzing these data naturally grow. For these reasons, the AKWI conference 2008, with the contributions and discussions documented here, is devoted to this topic under the title: Challenges for business informatics: applications and techniques for the analysis of large data sets.
With analytical customer relationship management (CRM), Frick and Iversen pursue the goal of using the analysis of the available information for customer needs. This can considerably improve the quality of customer relationships, but it also requires further qualification of everyone involved. Supply chain management (SCM) serves the cross-company examination and modification of business processes; the continuous evolution of business processes within companies also demands permanent adaptation by the companies affected in the supply chain. Szymanski extends the classical view of the cost function in production planning by the issue of import duties, presenting an approach to optimizing the supply chain with mathematical optimization models; the reasons for this lie in the increasingly global conditions of procurement, production, and distribution logistics.
For the extensive tasks involved in handling enterprise resource planning (ERP) systems and their implementation, appropriate tools are needed today; SAP offers the SAP Solution Manager for this purpose. Frick and Lankes employ the SAP Solution Manager as a project management tool.
Companies of every size increasingly face the challenge of making their operational data available for a wide variety of purposes. Besides operational requirements, the amendment of the German fiscal code (Abgabenordnung, AO) and thus data access by the tax authorities, compliance with guidelines and thus the detection of embezzlement, and the auditors' confirmation of the correctness of the annual financial statements are moving ever more into the focus of the digital preparation of company data. External auditors, internal auditors, and tax inspectors face the same problems, having to deal with a historically grown, heterogeneous IT infrastructure that changes constantly through acquisitions and spin-offs. Herde presents a cost-effective approach to extracting large volumes of operational data into an analyzable format independent of the operational systems.
Handling large volumes of data, including data from different parts of a company, for a large number of users today requires a data warehouse with consolidated data. Without sufficient consolidation, i.e., without a uniform representation of the data, no meaningful analysis is possible; only with a data warehouse can employees recognize what the company actually knows about specific questions. Stegemerten shows how a strategy for building a data warehouse can be derived from an existing corporate strategy and describes an organizational structure that ensures its implementation. The importance of controlling becomes clear in the contribution by Frank Weymerich through the development of a corresponding cockpit, in which the information useful for controlling is meaningfully compiled for corporate decision-makers in an individually operable dashboard. All contributions highlight different specific aspects of the analysis and organization of large data sets from the perspective of business informatics, with its characteristic view of the totality of all influencing factors.