Polyhedral Bunsen flames, induced by hydrodynamic and thermo-diffusive instabilities, are characterized by periodic trough and cusp cellular structures along the conical flame front. In this study, the effects of flow velocity, hydrogen content, and equivalence ratio on the internal cellular structure of premixed fuel-lean hydrogen/methane/air polyhedral flames are experimentally investigated. A high-spatial-resolution one-dimensional Raman/Rayleigh scattering system is employed to measure the internal scalar structures of polyhedral flames in troughs and cusps. Planar laser-induced fluorescence of hydroxyl radicals and chemiluminescence imaging measurements are used to quantify the flame front morphology. In the experiments, stationary polyhedral flames with flow velocities varying from 1.65 to 2.50 m/s, hydrogen contents from 50 to 83%, and equivalence ratios from 0.53 to 0.64 are selected and measured. The results indicate that the positively curved troughs exhibit significantly higher hydrogen mole fractions and local equivalence ratios than the negatively curved cusps, due to the respective focusing/defocusing effects of the trough/cusp structures on highly diffusive hydrogen. The hydrogen mole fraction and local equivalence ratio differences between troughs and cusps first increase and then decrease as the measurement height increases from 5 to 13 mm, owing to the three-dimensional nature of the flame front. With increasing flow velocity from 1.65 to 2.50 m/s, the hydrogen mole fraction and local equivalence ratio differences between troughs and cusps decrease, which is attributed to the overall decreasing curvatures in troughs and cusps caused by the decreased residence time and the increased velocity-induced strain. With increasing hydrogen content from 50 to 83%, the hydrogen mole fraction and local equivalence ratio differences between troughs and cusps are amplified, due to the enhanced effects of flame front curvature and the differential diffusion of hydrogen. With increasing equivalence ratio from 0.53 to 0.64, a clear increasing trend in the hydrogen mole fraction and equivalence ratio differences between troughs and cusps is observed under constant flow velocity conditions, resulting from a trade-off between the increasing effective Lewis number and the increasing curvatures in troughs and cusps.
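The local equivalence ratio reported above is derived from the measured species mole fractions. As a minimal illustration of that bookkeeping (with hypothetical example values, not the measured data or the authors' exact post-processing), the oxygen demand of a hydrogen/methane blend can be related to the locally available oxygen:

```python
# Illustrative calculation of a local equivalence ratio from mole fractions of
# a H2/CH4/air flame (example values, not measured data).
# Stoichiometry: H2 + 0.5 O2 -> H2O, CH4 + 2 O2 -> CO2 + 2 H2O.

def local_equivalence_ratio(x_h2: float, x_ch4: float, x_o2: float) -> float:
    """Equivalence ratio = O2 demand of the local fuel / locally available O2."""
    o2_demand = 0.5 * x_h2 + 2.0 * x_ch4
    return o2_demand / x_o2

# Example: a trough enriched in H2 versus a cusp depleted in H2.
trough = local_equivalence_ratio(x_h2=0.12, x_ch4=0.04, x_o2=0.17)
cusp = local_equivalence_ratio(x_h2=0.08, x_ch4=0.04, x_o2=0.18)
print(f"phi_trough = {trough:.2f}, phi_cusp = {cusp:.2f}")
```

In this toy example the hydrogen-enriched trough yields a higher local equivalence ratio than the hydrogen-depleted cusp, mirroring the trend described above.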
Mobile Contactless Fingerprint Presentation Attack Detection: Generalizability and Explainability
(2024)
Contactless fingerprint recognition is an emerging biometric technology that has several advantages over contact-based schemes, such as improved user acceptance and fewer hygienic concerns. As for most other biometric characteristics, Presentation Attack Detection (PAD) is crucial to preserving the trustworthiness of contactless fingerprint recognition methods. For many contactless biometric characteristics, Convolutional Neural Networks (CNNs) represent the state of the art in PAD algorithms. For CNNs, the ability to accurately classify samples that are not included in the training data is of particular interest, since these generalization capabilities indicate robustness in real-world scenarios. In this work, we focus on the generalizability and explainability aspects of CNN-based contactless fingerprint PAD methods. Based on previously obtained findings, we selected four CNN-based methods for contactless fingerprint PAD: two PAD methods designed for other biometric characteristics, an algorithm for contact-based fingerprint PAD, and a general-purpose ResNet18. For our evaluation, we use four databases and partition them using Leave-One-Out (LOO) protocols. Furthermore, the generalization capability to a newly captured database is tested. Moreover, we explore t-SNE plots as a means of explainability to interpret our results in more detail. The low D-EERs obtained from the LOO experiments (below 0.1% D-EER for every LOO group) indicate that the selected algorithms are well suited for the particular application. However, with a D-EER of 4.14%, the generalization experiment still leaves room for improvement.
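The Leave-One-Out protocol mentioned above simply trains on all databases except one and evaluates on the held-out database. A minimal sketch of that partitioning logic, with placeholder database contents and stand-in train/evaluate functions (none of these names come from the paper):

```python
# Minimal sketch of a cross-database Leave-One-Out (LOO) protocol: train on all
# databases except one and evaluate on the held-out one. Database names and the
# train/evaluate callables are placeholders, not the paper's actual setup.

def run_loo(databases, train_fn, evaluate_fn):
    """databases: dict name -> list of (sample, bona_fide_or_attack_label)."""
    results = {}
    for held_out in databases:
        train_data = [s for name, data in databases.items()
                      if name != held_out for s in data]
        model = train_fn(train_data)
        results[held_out] = evaluate_fn(model, databases[held_out])  # e.g. a D-EER
    return results

# Example with four placeholder databases and trivial stand-in functions:
databases = {"db_a": [("x", 0)], "db_b": [("y", 1)],
             "db_c": [("z", 0)], "db_d": [("w", 1)]}
results = run_loo(databases,
                  train_fn=lambda data: {"n_train": len(data)},
                  evaluate_fn=lambda model, data: 0.0)
print(results)
```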
Zeit für Veränderung
(2024)
Drawing on the contributions to the special issue "Staatliche Anerkennung" (state recognition), this article offers an outlook on the future of state recognition and formulates three impulses that could contribute to a readjustment of the questions surrounding the qualification of social workers and social pedagogues.
The state recognition of social workers and social pedagogues looks back on a long tradition and has been the subject of numerous and controversial debates for many years. In addition to providing a basic introduction to the topic, the article addresses the current relevance and the present practice of granting state recognition, identifies central issues in need of discussion, and introduces the special issue.
Gender classification on normalized iris images has been previously attempted with varying degrees of success. These previous studies have shown that occlusion masks, which are used in iris recognition to remove non-iris elements, may introduce gender information. When the goal is to classify gender using exclusively the iris texture, the presence of gender information in the masks may result in apparently higher accuracy, thereby not reflecting the actual gender information present in the iris. However, no measures have been taken to eliminate this information while preserving as much iris information as possible.
We propose a novel method to assess the gender information present in the iris more accurately by eliminating gender information from the masks. It consists of pairing irises with similar masks and different genders, generating a paired mask using the OR operator, and applying this mask to both irises. Additionally, we manually fix iris segmentation errors to study their impact on gender classification.
Our results show that occlusion masks can account for 6.92% of the gender classification accuracy on average. Therefore, works aiming to perform gender classification using the iris texture from normalized iris images should eliminate this correlation.
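The mask-pairing step can be stated compactly: two normalized irises of different gender but with similar occlusion masks are both masked with the logical OR of their masks, so that neither sample retains mask-specific gender cues. A minimal NumPy sketch with illustrative array shapes and random stand-in data (the Jaccard pairing criterion is an assumption for illustration):

```python
import numpy as np

# Sketch of the mask-pairing idea: combine the occlusion masks of two
# normalized iris images (different gender, similar masks) with a logical OR
# and apply the same combined mask to both images. Shapes and data are
# illustrative stand-ins.

H, W = 64, 512                                   # normalized iris size (example)
rng = np.random.default_rng(0)

iris_a = rng.random((H, W))                      # stand-ins for normalized irises
iris_b = rng.random((H, W))
mask_a = rng.random((H, W)) > 0.9                # True = occluded pixel
mask_b = rng.random((H, W)) > 0.9

def mask_similarity(m1, m2):
    """Jaccard overlap of two occlusion masks (one possible pairing criterion)."""
    union = np.logical_or(m1, m2).sum()
    return np.logical_and(m1, m2).sum() / union if union else 1.0

paired_mask = np.logical_or(mask_a, mask_b)      # occlude the union of both masks
iris_a_masked = np.where(paired_mask, 0.0, iris_a)
iris_b_masked = np.where(paired_mask, 0.0, iris_b)

print(f"mask similarity: {mask_similarity(mask_a, mask_b):.2f}, "
      f"occluded fraction after pairing: {paired_mask.mean():.2f}")
```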
Face Morphing Attacks pose a threat to the security of identity documents, especially with respect to a subsequent access control process, because they allow both involved individuals to use the same document. Several algorithms are currently being developed to detect Morphing Attacks, often requiring large data sets of morphed face images for training. In the present study, face embeddings are used for two different purposes: first, to pre-select images for the subsequent large-scale generation of Morphing Attacks, and second, to detect potential Morphing Attacks. Previous studies have demonstrated the power of embeddings in both use cases. However, we aim to build on these studies by adding the more powerful MagFace model to both use cases, and by performing comprehensive analyses of the role of embeddings in pre-selection and attack detection in terms of the vulnerability of face recognition systems and attack detection algorithms. In particular, we use recent developments to assess the attack potential, but also investigate the influence of morphing algorithms. For the first objective, an algorithm is developed that pairs individuals based on the similarity of their face embeddings. Different state-of-the-art face recognition systems are used to extract embeddings in order to pre-select the face images and different morphing algorithms are used to fuse the face images. The attack potential of the differently generated morphed face images will be quantified to compare the usability of the embeddings for automatically generating a large number of successful Morphing Attacks. For the second objective, we compare the performance of the embeddings of two state-of-the-art face recognition systems with respect to their ability to detect morphed face images. Our results demonstrate that ArcFace and MagFace provide valuable face embeddings for image pre-selection. Various open-source and commercial-off-the-shelf face recognition systems are vulnerable to the generated Morphing Attacks, and their vulnerability increases when image pre-selection is based on embeddings compared to random pairing. In particular, landmark-based closed-source morphing algorithms generate attacks that pose a high risk to any tested face recognition system. Remarkably, more accurate face recognition systems show a higher vulnerability to Morphing Attacks. Among the systems tested, commercial-off-the-shelf systems were the most vulnerable to Morphing Attacks. In addition, MagFace embeddings stand out as a robust alternative for detecting morphed face images compared to the previously used ArcFace embeddings. The results endorse the benefits of face embeddings for more effective image pre-selection for face morphing and for more accurate detection of morphed face images, as demonstrated by extensive analysis of various designed attacks. The MagFace model is a powerful alternative to the often-used ArcFace model in detecting attacks and can increase performance depending on the use case. It also highlights the usability of embeddings to generate large-scale morphed face databases for various purposes, such as training Morphing Attack Detection algorithms as a countermeasure against attacks.
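The pre-selection step for the first objective essentially ranks candidate pairs by the similarity of their face embeddings. A minimal sketch of such a greedy pairing by cosine similarity, using random vectors as stand-ins for ArcFace/MagFace embeddings (the greedy strategy is an illustrative choice, not necessarily the paper's exact algorithm):

```python
import numpy as np

# Sketch of embedding-based pre-selection for morphing: greedily pair the most
# similar subjects according to cosine similarity of their face embeddings.
# Random vectors stand in for embeddings extracted by a face recognition model.

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(10, 512))                       # one embedding per subject
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

similarity = embeddings @ embeddings.T                        # cosine similarity matrix

pairs, used = [], set()
for i, j in sorted(
        ((i, j) for i in range(len(similarity)) for j in range(i + 1, len(similarity))),
        key=lambda ij: similarity[ij], reverse=True):
    if i not in used and j not in used:                       # most similar pairs first
        pairs.append((i, j, float(similarity[i, j])))
        used.update((i, j))

print(pairs[:3])   # candidate pairs to feed into a morphing algorithm
```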
This study investigates the diffusion of AI-based service applications within the business models of German manufacturing industries, surveying 162 decision-makers. The integration of AI into business models is assessed through the Business Model Canvas (BMC) framework, evaluating its value in terms of effectiveness as well as efficiency. Rather than focusing on specific use cases, the study examines the intended usage of value-driven AI services to enhance effectiveness and efficiency across the various elements of the business models. Through this research, eleven service values have been identified, each corresponding to a distinct element of the BMC. Decision-makers were surveyed using a Confirmation/Disconfirmation (C/D) paradigm to measure the disparities between their current and target performance levels. Consequently, this study provides valuable insights from the perspective of decision-makers regarding the current and desired state of AI integration in the German manufacturing industry, taking into account whether AI was or was not in use at the time of data collection.
In many forensic scenarios, criminals often attempt to conceal their identity by covering their face and other distinctive body parts. In such situations, physical evidence may, however, reveal other unique characteristics, e.g. hands, which can be used to identify offenders. In this context, several state-of-the-art biometric recognition systems have been proposed recently. These recognition systems offer high identification performance in restricted environments. However, in forensic scenarios, the environment is often unconstrained, making biometric identification considerably more difficult, with a consequent decrease in accuracy. In this article, we explore methods (e.g. hand alignment and information fusion) to improve the identification of subjects within forensic investigations. Experimental results show that explored techniques play an important role in the improvement of the identification performance of existing schemes: the combination of hand alignment and information fusion results in the highest Rank-1 identification performance improvement of up to 13.10% (i.e., 26.30% vs. 13.20%) and 16.30% (i.e., 77.00% vs. 60.70%) with respect to the baseline for the unconstrained databases NTU-PI_v1 and HaGRID, respectively ( https://github.com/ljsoler/IF-HA-HandRecognition ).
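Information fusion in this setting typically means combining comparison scores from several sources, for example different hand regions or feature types, after normalization. A generic score-level fusion sketch with illustrative scores and weights (not the paper's exact scheme):

```python
import numpy as np

# Generic score-level fusion sketch: min-max normalize the comparison scores
# from each source and combine them with a weighted sum. The scores and
# weights below are illustrative, not taken from the paper.

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(score_lists, weights):
    normalized = [minmax(s) for s in score_lists]
    return sum(w * s for w, s in zip(weights, normalized))

# Comparison scores of one probe against five gallery subjects, from two sources.
scores_geometry = [0.42, 0.91, 0.10, 0.55, 0.30]   # e.g. a hand-geometry matcher
scores_texture = [0.35, 0.80, 0.20, 0.60, 0.25]    # e.g. a texture/CNN matcher

fused = fuse([scores_geometry, scores_texture], weights=[0.5, 0.5])
print("Rank-1 candidate:", int(np.argmax(fused)))
```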
The development of large-scale identification systems that ensure the privacy protection of enrolled subjects represents a major challenge. Biometric deployments that provide interoperability and usability by including efficient multi-biometric solutions are a recent requirement. In the context of privacy protection, several template protection schemes have been proposed in the past. However, these schemes seem inadequate for indexing (workload reduction) in biometric identification systems. More specifically, they have been used in identification systems that perform exhaustive searches, leading to a degradation of computational efficiency. To overcome these limitations, we present an efficient privacy-preserving multi-biometric identification system that retrieves protected deep cancelable templates and is agnostic with respect to biometric characteristics and biometric template protection schemes. To this end, a multi-biometric binning scheme is designed to exploit the low intra-class variation properties contained in the frequent binary patterns extracted from different types of biometric characteristics. Experimental results reported on publicly available databases using state-of-the-art Deep Neural Network (DNN)-based embedding extractors show that the protected multi-biometric identification system can reduce the computational workload to approximately 57% (indexing up to three types of biometric characteristics) and 53% (indexing up to two types of biometric characteristics), while simultaneously improving the biometric performance of the baseline biometric system at the high-security thresholds. Code is available at https://github.com/dosorior/FBP-Multi-biometric-Indexing.
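The workload reduction stems from binning: protected templates are grouped under short, stable binary keys, and a probe is only compared against the templates in its own bin. A schematic sketch of that retrieval idea, in which a trivial key stands in for the frequent-binary-pattern extraction described in the paper:

```python
import numpy as np
from collections import defaultdict

# Schematic binning sketch: index protected binary templates by a short bin key
# and compare a probe only against templates in its own bin, then report the
# fraction of the gallery actually visited (the "workload").

rng = np.random.default_rng(2)
gallery = {f"subject_{i}": rng.integers(0, 2, 256).astype(np.uint8)
           for i in range(1000)}

def bin_key(template, n_bits=8):
    # Stand-in for a frequent-binary-pattern key: the first n_bits of the template.
    return tuple(template[:n_bits])

index = defaultdict(list)
for subject, template in gallery.items():
    index[bin_key(template)].append(subject)

probe = gallery["subject_42"].copy()
candidates = index[bin_key(probe)]                    # only this bin is searched
workload = len(candidates) / len(gallery)

best = min(candidates, key=lambda s: np.count_nonzero(gallery[s] ^ probe))  # Hamming
print(f"visited {workload:.1%} of the gallery, best match: {best}")
```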
Millistructured Coiled Flow Inverter for Biphasic Continuous Flow 5‐Chloromethylfurfural Synthesis
(2024)
Syntheses of alternative platform chemicals, such as 5‐chloromethylfurfural (CMF), from bio‐based starting materials are often associated with complicated kinetic schemes and mass transfer processes. Millistructured flow reactor concepts can help to elucidate kinetic schemes and determine rate constants which are of crucial importance for the design of respective technical processes. For the first time, the influence of proton concentration on the rate constants involved in the biphasic synthesis of CMF is systematically investigated. Results are discussed in terms of green chemistry metrics.
In order to generate a machine learning algorithm (MLA) that can support ophthalmologists with the diagnosis of glaucoma, a carefully selected dataset that is based on clinically confirmed glaucoma patients as well as borderline cases (e.g., patients with suspected glaucoma) is required. The clinical annotation of datasets is usually performed at the expense of the data volume, which results in poorer algorithm performance. This study aimed to evaluate the application of an MLA for the automated classification of physiological optic discs (PODs), glaucomatous optic discs (GODs), and glaucoma-suspected optic discs (GSODs). Annotation of the data to the three groups was based on the diagnosis made in clinical practice by a glaucoma specialist. Color fundus photographs and 14 types of metadata (including visual field testing, retinal nerve fiber layer thickness, and cup–disc ratio) of 1168 eyes from 584 patients (POD = 321, GOD = 336, GSOD = 310) were used for the study. Machine learning (ML) was performed in the first step with the color fundus photographs only and in the second step with the images and metadata. Sensitivity, specificity, and accuracy of the classification of GSOD vs. GOD and POD vs. GOD were evaluated. Classification of GOD vs. GSOD and GOD vs. POD performed in the first step had AUCs of 0.84 and 0.88, respectively. By combining the images and metadata, the AUCs increased to 0.92 and 0.99, respectively. By combining images and metadata, excellent performance of the MLA can be achieved despite having only a small amount of data, thus supporting ophthalmologists with glaucoma diagnosis.
The benefits of ideation for both industry and academia have been outlined by countless studies, leading to research into various approaches that attempt to add new ideation methods or examine how the quality of the resulting ideas and solutions can be measured. Although AI-based approaches are being researched, no attempt has been made to provide ideation participants with information that inspires new ideas and solutions in real time. Our proposal presents a novel and intuitive approach that supports users in real time by providing them with relevant information as they conduct ideation. By analyzing their ideas within the respective ideation sessions, our approach recommends items of interest with high contextual similarity to the proposed ideas, allowing users to skim through, for example, publications and quickly derive new ideas. The recommendations also evolve in real time: as more ideas are written during the ideation session, the recommendations become more precise. This real-time approach is instantiated with various ideation methods as a proof of concept, and various models are evaluated and compared to identify the best model for working with ideas.
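At its core, such a recommender compares the running text of an ideation session against a corpus of candidate items and surfaces the most similar ones, re-ranking as new ideas arrive. A minimal sketch using TF-IDF and cosine similarity, which is one straightforward way to realize contextual matching and not necessarily the model used in the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Minimal contextual-similarity recommender: embed candidate items (e.g.
# publication abstracts) with TF-IDF and rank them against the concatenated
# ideas of the current session. All texts are illustrative.

corpus = [
    "transfer learning for defect detection in injection moulding",
    "gamification concepts for collaborative brainstorming sessions",
    "energy-efficient scheduling of machine tools",
]

def recommend(session_ideas, corpus, top_k=2):
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(corpus + [" ".join(session_ideas)])
    scores = cosine_similarity(doc_matrix[-1], doc_matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(corpus[i], float(scores[i])) for i in ranked]

# As the session grows, re-running the ranking refines the recommendations.
print(recommend(["detect scratches on moulded parts"], corpus))
print(recommend(["detect scratches on moulded parts",
                 "use a pretrained network to spot defects"], corpus))
```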
The kinetics and mechanism of drug binding to its target are critical to pharmacological efficacy. A high-throughput screen (HTS) often results in hundreds of hits, of which usually only simple IC50 values are determined during reconfirmation. However, kinetic parameters such as the residence time for reversible inhibitors and the kinact/KI ratio, which is the critical measure for evaluating covalent inactivators, are early predictive measures for assessing the chances of success of the hits in the clinic. Using the promising cancer target human histone deacetylase 8 (HDAC8) as an example, we present a robust method that calculates concentration-dependent apparent rate constants for the inhibition or inactivation of HDAC8 from dose–response curves recorded after different pre-incubation times. With these data, hit compounds can be classified according to their mechanism of action, and the relevant kinetic parameters can be calculated in a highly parallel fashion. HDAC8 inhibitors with known modes of action were correctly assigned to their mechanism, and the binding mechanisms of some hits from an internal HDAC8 screening campaign were newly determined. The oxonitriles SVE04 and SVE27 were classified as fast reversible HDAC8 inhibitors with moderate time-constant IC50 values of 4.2 and 2.6 µM, respectively. The hit compounds TJ-19-24 and SAH03 behave like slow two-step inactivators or reversible inhibitors, with a very low reverse isomerization rate.
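The kinact/KI characterization follows standard covalent-inactivation kinetics: the apparent rate constant kobs depends hyperbolically on the inhibitor concentration, kobs = kinact·[I]/(KI + [I]). A brief curve-fitting sketch with synthetic data (not the paper's data or code):

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard covalent-inactivation kinetics: k_obs = k_inact * [I] / (K_I + [I]).
# Fit k_inact and K_I from apparent rate constants measured at several
# inhibitor concentrations (synthetic example data).

def k_obs(conc, k_inact, K_I):
    return k_inact * conc / (K_I + conc)

conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 25.0])            # inhibitor conc. in µM
true = k_obs(conc, k_inact=0.05, K_I=4.0)                     # "true" k_obs in 1/min
measured = true * (1 + 0.05 * np.random.default_rng(3).normal(size=conc.size))

(k_inact_fit, K_I_fit), _ = curve_fit(k_obs, conc, measured, p0=(0.1, 1.0))
print(f"k_inact = {k_inact_fit:.3f} 1/min, K_I = {K_I_fit:.2f} µM, "
      f"k_inact/K_I = {k_inact_fit / K_I_fit * 1e6 / 60:.1f} 1/(M*s)")
```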
To achieve climate neutrality in Germany by 2045, the Federal Government introduced a law in 2022 obliging municipalities with more than 20,000 inhabitants to draw up a municipal heat demand plan. A heat demand plan covers the municipality's current as well as its future heat demand. In addition, potentials for generating renewable energy with heat pumps are identified (Landes Energie Agentur Hessen, 2024). From these potentials, partial measures for possible implementation can be derived within the municipality, specific to districts or individual buildings. These measures consist of refurbishments, the construction and expansion of combined heat supply solutions, and specific stand-alone solutions. The potential assessment also serves the efficient future coordination of planning, implementation, and funding. Through their municipal utilities, municipalities can build new heat networks or expand existing ones in a targeted manner (Landes Energie Agentur Hessen, 2020).
Biometric fingerprint identification hinges on the reliability of its sensors; however, calibrating and standardizing these sensors poses significant challenges, particularly in regards to repeatability and data diversity. To tackle these issues, we propose methodologies for fabricating synthetic 3D fingerprint targets, or phantoms, that closely emulate real human fingerprints. These phantoms enable the precise evaluation and validation of fingerprint sensors under controlled and repeatable conditions. Our research employs laser engraving, 3D printing, and CNC machining techniques, utilizing different materials. We assess the phantoms’ fidelity to synthetic fingerprint patterns, intra-class variability, and interoperability across different manufacturing methods. The findings demonstrate that a combination of laser engraving or CNC machining with silicone casting produces finger-like phantoms with high accuracy and consistency for rolled fingerprint recordings. For slap recordings, direct laser engraving of flat silicone targets excels, and in the contactless fingerprint sensor setting, 3D printing and silicone filling provide the most favorable attributes. Our work enables a comprehensive, method-independent comparison of various fabrication methodologies, offering a unique perspective on the strengths and weaknesses of each approach. This facilitates a broader understanding of fingerprint recognition system validation and performance assessment.
We address the need for a large-scale database of children's faces by using generative adversarial networks (GANs) and face-age progression (FAP) models to synthesize a realistic dataset referred to as "HDA-SynChildFaces". To this end, we propose a processing pipeline that first utilizes StyleGAN3 to sample adult subjects, who are subsequently progressed to children of varying ages using InterFaceGAN. Intra-subject variations, such as facial expression and pose, are created by further manipulating the subjects in their latent space. Additionally, the pipeline allows the races of the subjects to be evenly distributed, enabling the generation of a dataset that is balanced and fair with respect to race. The resulting HDA-SynChildFaces consists of 1,652 subjects and 188,328 images, with each subject present at various ages and with many different intra-subject variations. We then evaluated the performance of various facial recognition systems on the generated database and compared the results of adults and children at different ages. The study reveals that children consistently perform worse than adults on all tested systems and that the degradation in performance is proportional to age. Additionally, our study uncovers some biases in the recognition systems, with Asian and black subjects and females performing worse than white and Latino-Hispanic subjects and males.
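Both the age progression and the intra-subject variations amount to shifting a subject's latent code along learned semantic directions, as InterFaceGAN does. A schematic sketch of that editing step, in which random vectors and a dummy generator stand in for the learned directions and the StyleGAN3 synthesis network:

```python
import numpy as np

# Schematic latent-space editing as used for face-age progression: shift a
# subject's latent code along learned semantic directions and feed each edited
# code to the generator. Directions and generator are placeholders here; in the
# actual pipeline they come from InterFaceGAN and StyleGAN3.

rng = np.random.default_rng(4)
latent_dim = 512

subject_code = rng.normal(size=latent_dim)            # sampled adult subject
age_direction = rng.normal(size=latent_dim)           # placeholder "age" direction
age_direction /= np.linalg.norm(age_direction)
pose_direction = rng.normal(size=latent_dim)          # placeholder "pose" direction
pose_direction /= np.linalg.norm(pose_direction)

def generate(code):
    """Stand-in for the GAN generator returning an image for a latent code."""
    return code  # a real pipeline would synthesize an image here

samples = []
for age_shift in (-6.0, -4.0, -2.0):                  # progress the adult towards childhood
    for pose_shift in (-1.0, 0.0, 1.0):               # intra-subject variation
        edited = subject_code + age_shift * age_direction + pose_shift * pose_direction
        samples.append(generate(edited))

print(f"{len(samples)} synthetic variants of one subject")
```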
District heating plays a key role in the German heat transition (“Wärmewende”) to achieve climate protection targets. In order to realise the heating transition, the legislator has established cost efficiency as a central criterion in the relevant legislation. Ecology, as the third pillar of sustainability, is thus taking a back seat, despite the transformation’s influence on other sustainability dimensions beyond climate protection.
The article takes an ecological perspective on the district heating transformation and shows that, from this perspective, greater emphasis should be placed on local environmental heat and large heat pumps.
In the second step, the decentralised information available on the actual transformation plans of district heating suppliers is aggregated and evaluated at a national level for the first time. The evaluation indicates a possible gap between the developed sustainable target state and the plans of district heating suppliers, which are primarily focussed on the cost efficiency criterion. This comparison identifies a potential conflict of objectives between the legislative cost efficiency criterion and the ecological sustainability perspective.
In this paper, we present a new processing method, called MOSES—Impacts, for the detection of micrometer-sized damage on glass plate surfaces. It extends existing methods by a separation of damaged areas, called impacts, to support state-of-the-art recycling systems in optimizing their parameters. These recycling systems are used to repair process-related damages on glass plate surfaces, caused by accelerated material fragments, which arise during a laser–matter interaction in a vacuum. Due to a high number of impacts, the presented MOSES—Impacts algorithm focuses on the separation of connected impacts in two-dimensional images. This separation is crucial for the extraction of relevant features such as centers of gravity and radii of impacts, which are used as recycling parameters. The results show that the MOSES—Impacts algorithm effectively separates impacts, achieves a mean agreement with human users of (82.0 ± 2.0)%, and improves the recycling of glass plate surfaces by identifying around 7% of glass plate surface area as being not in need of repair compared to existing methods.
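Separating touching impacts and extracting their centres and radii is a classical segmentation task; one common recipe is a distance transform followed by watershed splitting and per-region measurement. The following generic scikit-image sketch illustrates that kind of separation on a synthetic pair of overlapping disks; it is not the MOSES—Impacts algorithm itself:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

# Generic separation of touching circular damage sites ("impacts") in a binary
# image: distance transform + watershed, then centroid and radius per region.
# The synthetic image with two overlapping disks is purely illustrative.

yy, xx = np.mgrid[0:100, 0:100]
binary = ((yy - 50) ** 2 + (xx - 40) ** 2 < 15 ** 2) | \
         ((yy - 50) ** 2 + (xx - 62) ** 2 < 12 ** 2)

distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
regions = watershed(-distance, markers, mask=binary)

for r in regionprops(regions):
    cy, cx = r.centroid
    radius = (r.area / np.pi) ** 0.5        # radius of a circle with the same area
    print(f"impact at ({cx:.1f}, {cy:.1f}) px, radius {radius:.1f} px")
```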
Human histone deacetylase 4 (HDAC4) is a key epigenetic regulator involved in a number of important cellular processes. This makes HDAC4 a promising target for the treatment of several cancers and neurodegenerative diseases, in particular Huntington's disease. HDAC4 is highly regulated by phosphorylation and oxidation, which determine its nuclear or cytosolic localization, and exerts its function through multiple interactions with other proteins, forming multiprotein complexes of varying composition. The catalytic domain of HDAC4 is known to interact with the SMRT/NCOR corepressor complex when the structural zinc-binding domain (sZBD) is intact and forms a closed conformation. Crystal structures of the HDAC4 catalytic domain have been reported showing an open conformation of HDAC4 when bound to certain ligands. Here, we investigated the relevance of this HDAC4 conformation under physiological conditions in solution. We show that proper zinc chelation in the sZBD is essential for enzyme function. Loss of the structural zinc ion not only leads to a massive decrease in enzyme activity, but it also has serious consequences for the overall structural integrity and stability of the protein. However, the Zn2+ free HDAC4 structure in solution is incompatible with the open conformation. In solution, the open conformation of HDAC4 was also not observed in the presence of a variety of structurally divergent ligands. This suggests that the open conformation of HDAC4 cannot be induced in solution, and therefore cannot be exploited for the development of HDAC4-specific inhibitors.
The development of compact neutron sources for applications is extensive and features many approaches. For ion-based approaches, several projects with different parameters exist. This article focuses on ion-based neutron production below the spallation barrier for proton and deuteron beams with arbitrary energy distributions and kinetic energies from 3 MeV to 97 MeV. The model makes it possible to compare different ion-based neutron source concepts against each other quickly. This contribution derives a predictive model using Monte Carlo simulations (on the order of 50,000 simulations) and deep neural networks. It is the first time a model of this kind has been developed. With this model, lengthy Monte Carlo simulations, which individually take a long time to complete, can be circumvented. A prediction of neutron spectra then takes a few milliseconds, which enables fast optimization and comparison. The model's shortcomings for low-energy neutrons (<0.1 MeV) and the cut-off prediction uncertainty (±3 MeV) are addressed, and mitigation strategies are proposed.
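The surrogate idea is to train a network that maps source parameters directly to a binned neutron spectrum, so that a full Monte Carlo run is no longer needed at prediction time. A schematic regression sketch with scikit-learn on synthetic toy data (the article's actual model is a deep neural network trained on roughly 50,000 Monte Carlo simulations):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Schematic surrogate model: map source parameters to a binned neutron
# spectrum. The "simulations" below are a synthetic toy function standing in
# for Monte Carlo results.

rng = np.random.default_rng(5)
n_sim, n_bins = 2000, 32

params = rng.uniform([3.0, 0.0], [97.0, 1.0], size=(n_sim, 2))  # [beam energy MeV, deuteron fraction]
energy_bins = np.linspace(0.1, 50.0, n_bins)
spectra = np.exp(-energy_bins / (0.3 * params[:, [0]] + 1.0)) * (1.0 + params[:, [1]])

X_train, X_test, y_train, y_test = train_test_split(params, spectra, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

print(f"R^2 on held-out simulations: {surrogate.score(X_test, y_test):.3f}")
print("predicted spectrum for a 40 MeV proton beam:",
      surrogate.predict([[40.0, 0.0]])[0][:5].round(3))
```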
A growing body of literature mainly in the context of consumer research indicates that the formal-aesthetic and conceptual design of objects can influence users' thoughts, emotions and even behavioural patterns. While there is strong evidence regarding these effects on actual purchasing decisions, evidence on the effect of aesthetic design features (e.g., haptics, colour) on health-related mental concepts and intentions for health behaviour change is scarce. Based on insights from material and conceptual priming, this article illustrates the research-driven and evidence-based design process of two design primes and comprises pre-tests and an experiment in two settings on the effect of design on health behaviour focusing i.a. on intention for health behaviour change. In an evidence-based and research-driven process, two lecterns were designed to work as primes, i.e., to have a positive vs. negative influence on several mental constructs (sense of control, sense of coherence, resiliency, self-efficacy) and health-related intention. The lecterns differed mainly in terms of aesthetic appearance (e.g., material, colour, proportion, steadiness). They were tested in (a.) a university setting with students (n = 83) and (b.) a clinical setting with orthopaedic rehabilitation patients (n = 38). Participants were asked to perform an unrelated task (evaluation of an unrelated product) while standing at and using the lecterns. Overall, t-tests and Mann-Whitney-U tests show no significant differences but differing tendencies in a mentioning task. When asked to name health-promoting activities, in the clinical setting, participants using the "positive" prime (i.e., the steady lectern, n = 13) mentioned more sport-related aspects on average and a higher portion of sport-related aspects of their answers than participants using the "negative" prime (n = 11). In the university setting (positive: n = 36; negative n = 38), no such differences emerged. This finding gives reason to believe that the prime might be specifically effective in the clinical setting as it relates to physical activity being the most relevant topic of the patients' pathology.
The leather industry is a complex system with multiple actors that faces a fundamental transition toward more sustainable chemistry. To support this process, this article analyzes challenges of the industry and consumers’ roles as a nexus of transition-relevant developments. We present findings of an empirical study (N = 439) among consumers on their perception of leather, related knowledge, and purchasing behavior. We found that participants perceived leather as natural, robust, and of high quality. Knowledge about the manufacturing of leather products was overall limited but varied. Applying a psychological behavior theory, we found that being aware of environmental and health consequences from conventional manufacturing of leather products was positively associated with a personal norm to purchase leather products that are less harmful to environment and health. The perceived ease of buying such products was positively associated with their purchase. Our findings shed light on consumers’ roles in the current leather system and their support of niche innovations toward more sustainable chemistry. Against this backdrop, we discuss implications for product design, consumer information, and needs for traceability along supply chains.
Technostress, i.e. stress resulting from the use of digital technologies, is a serious downside of the ongoing digitalization of the working world. The negative effects of this phenomenon are already apparent today. They include adverse health consequences for the affected employees as well as substantial follow-up costs for companies due to increased absenteeism and negative effects on employee productivity and satisfaction. The present study examines whether a leader's behaviour influences the emergence of technostress among the employees who report directly to them. In addition, the influence of further individual and organizational factors is examined. Using validated survey instruments, self-reported data from N = 849 employees of German companies are collected. The leadership behaviour of the direct supervisor is assessed on the basis of the leadership styles of the Full Range of Leadership model according to Avolio and Bass (1991), using the MLQ 5x short. The results of the data analysis by means of structural equation modelling indicate that the leadership behaviour of the direct supervisor influences the technostress perceived by employees.
Solar phase scintillation and solar amplitude scintillation are fundamentally important for deep space mission operations when designing a communication system capable of transmitting signals while the signal path passes close to the Sun. In a previous paper, ESA's BepiColombo measurement data were analyzed in terms of the power spectral density of the solar phase scintillation, including a comparison with Woo's solar phase scintillation theory, for X-band and Ka-band signals propagating close to the Sun at a small Sun-Earth-Probe (SEP) angle during the superior solar conjunction campaign in March 2021, in the spacecraft's cruise phase to Mercury. In this paper, the solar amplitude scintillation is analyzed by calculating both the power spectral density and the scintillation index. The scintillation index results derived from these measurement data fit NASA JPL's scintillation index model.
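The scintillation index used in the amplitude analysis is the normalized variance of the received intensity, m² = (⟨I²⟩ − ⟨I⟩²)/⟨I⟩². A short sketch of how it can be computed from an intensity time series, together with a Welch estimate of the power spectral density (synthetic data, not the BepiColombo measurements):

```python
import numpy as np
from scipy.signal import welch

# Scintillation index m^2 = (<I^2> - <I>^2) / <I>^2, i.e. the normalized
# variance of the received intensity, plus a Welch PSD of the same series.
# The intensity series is synthetic, standing in for downlink measurements.

rng = np.random.default_rng(6)
fs = 10.0                                    # sampling rate in Hz (example)
t = np.arange(0, 600, 1 / fs)                # ten minutes of data
intensity = 1.0 + 0.1 * rng.normal(size=t.size).cumsum() / np.sqrt(t.size)
intensity = np.abs(intensity)

m2 = intensity.var() / intensity.mean() ** 2            # scintillation index
freqs, psd = welch(intensity - intensity.mean(), fs=fs, nperseg=1024)

print(f"scintillation index m^2 = {m2:.4f}")
print(f"PSD estimated at {len(freqs)} frequencies up to {freqs[-1]:.1f} Hz")
```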
With regard to AI as a key technology, this paper identifies user-side drivers of the purchase decision for a cooperative AI (in the sense of explainable AI, XAI) and analyses willingness to pay in the context of value-based pricing. Besides the economic dimension regarding the usefulness and usability of the system, the focus is mainly on its (innovative) explainable character. The analysis is carried out by means of a choice-based conjoint analysis (CBC), using the example of an intelligent assistance system for employees that supports internal business processes and workflows in business organizations. For this purpose, fictitious purchase offers were created under which decision-makers in manufacturing business organizations in Germany made simulated purchase decisions. The analysis shows that the target group attaches great utility value to transparency in the sense of explanatory content, in addition to a high degree of interactivity and a high level of reliability.
Random Forests are a powerful and frequently applied Machine Learning tool. The permutation variable importance (VIMP) has been proposed to improve the explainability of such a pure prediction model. It describes the expected increase in prediction error after randomly permuting a variable and thereby disturbing its association with the outcome. However, VIMPs measure a variable's marginal influence only, which can make their interpretation difficult or even misleading. In the present work we address the general need for improving the explainability of prediction models by exploring VIMPs in the presence of correlated variables. In particular, we propose to use a variable's residual information to investigate whether its permutation importance partially or totally originates from correlated predictors. Hypothesis tests are derived by a resampling algorithm that can further support results by providing test decisions and p-values. In simulation studies we show that the proposed test controls type I error rates. When applying the methods to a Random Forest analysis of post-transplant survival after kidney transplantation, the importance of kidney donor quality for predicting post-transplant survival is shown to be high. However, the transplant allocation policy introduces correlations with other well-known predictors, which raises the concern that the importance of kidney donor quality may simply originate from these predictors. Using the proposed method, this concern is addressed, and it is demonstrated that kidney donor quality plays an important role in post-transplant survival, regardless of correlations with other predictors.
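The key idea is to split a predictor into the part explained by its correlated covariates and the residual part, and then to check how much permutation importance remains when only the residual is permuted. A simplified sketch of that logic with scikit-learn on synthetic data; it mirrors the idea but not the paper's resampling test:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Simplified illustration of residual-based permutation importance: permute
# either the whole variable or only its residual (the part not explained by a
# correlated covariate) and compare the increase in prediction error.
# Synthetic data; importance is evaluated in-sample for brevity.

rng = np.random.default_rng(7)
n = 2000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)            # strongly correlated with x1
y = 2.0 * x1 + rng.normal(size=n)                   # only x1 truly matters
X = np.column_stack([x1, x2])

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
base_err = np.mean((rf.predict(X) - y) ** 2)

def vimp(col, values):
    X_perm = X.copy()
    X_perm[:, col] = values
    return np.mean((rf.predict(X_perm) - y) ** 2) - base_err

# Classical VIMP for x2: permute the whole column.
full_vimp = vimp(1, rng.permutation(X[:, 1]))

# Residual VIMP for x2: keep the part explained by x1, permute only the residual.
fitted = LinearRegression().fit(X[:, [0]], X[:, 1]).predict(X[:, [0]])
residual = X[:, 1] - fitted
residual_vimp = vimp(1, fitted + rng.permutation(residual))

print(f"VIMP(x2), full permutation:     {full_vimp:.3f}")
print(f"VIMP(x2), residual permutation: {residual_vimp:.3f} "
      f"(near zero: the importance of x2 stems from its correlation with x1)")
```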
Evaluation of the Explanatory Power Of Layer-wise Relevance Propagation using Adversarial Examples
(2023)
Approaches for visualizing and explaining the decision process of convolutional neural networks (CNNs) have recently received increasing attention. Particularly popular approaches are so-called saliency methods, which aim to assign a valence to each input pixel based on its importance and influence on the classification via saliency maps. In our paper, we contribute a novel analysis approach built on adversarial examples to investigate the explanatory power of saliency methods, exemplified by layer-wise relevance propagation (LRP). Based on the hypothesis that distinct decisions, such as an image's classification and the classification of its corresponding adversarial examples, should yield dissimilar saliency maps in order to provide transparent rationales, we break down the relevance scores of images and corresponding adversarial examples and analyze them using a comprehensive statistical evaluation. It turns out that different relevance decomposition rules of LRP do not lead to clearly distinguishable saliency maps for images and their corresponding adversarial examples, neither in terms of their contour lines nor in terms of the statistical analysis.
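The statistical evaluation boils down to comparing the relevance maps of an image and of its adversarial counterpart. A minimal sketch of two such comparisons (pixel-wise correlation and a distance between relevance distributions), with random arrays standing in for actual LRP output:

```python
import numpy as np
from scipy.stats import pearsonr, wasserstein_distance

# Sketch of comparing two relevance/saliency maps, e.g. from an image and its
# adversarial counterpart: correlation of pixel-wise relevance and a distance
# between the relevance distributions. Random maps stand in for LRP output.

rng = np.random.default_rng(8)
relevance_clean = rng.normal(size=(224, 224))
relevance_adversarial = 0.8 * relevance_clean + 0.2 * rng.normal(size=(224, 224))

r, _ = pearsonr(relevance_clean.ravel(), relevance_adversarial.ravel())
w = wasserstein_distance(relevance_clean.ravel(), relevance_adversarial.ravel())

print(f"pixel-wise correlation: {r:.2f}")   # high correlation: maps barely distinguishable
print(f"distribution distance:  {w:.3f}")
```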
The term and topic of sustainability have become established in social work. What exactly is associated with sustainability, and which references to the sustainability debate are invoked in which contexts of social work, has so far remained open. This introductory article to the special issue „Nachhaltigkeit – ein Thema (in) der Sozialen Arbeit" aims to show and systematize what sustainability currently means for social work and what it can or should mean in the future. In addition to a theoretical discussion, it provides exemplary insights into the sustainability debates in fields of social work practice, identifies impulses for theory development and concepts of action in social work, and introduces the contributions of the special issue.
Artificial Intelligence in studies—use of ChatGPT and AI-based tools among students in Germany
(2023)
AI-based tools such as ChatGPT and GPT-4 are currently changing the university landscape and in many places, the consequences for future forms of teaching and examination are already being discussed. In order to create an empirical basis for this, a nationwide survey of students was carried out in order to analyse the use and possible characteristics of AI-based tools that are important to students. The aim of the quantitative study is to be able to draw conclusions about how students use such AI tools. A total of more than 6300 students across Germany took part in the anonymous survey. The results of this quantitative analysis make it clear that almost two-thirds of the students surveyed use or have used AI-based tools as part of their studies. In this context, almost half of the students explicitly mention ChatGPT or GPT-4 as a tool they use. Students of engineering sciences, mathematics and natural sciences use AI-based tools most frequently. A differentiated examination of the usage behaviour makes it clear that students use AI-based tools in a variety of ways. Clarifying questions of understanding and explaining subject-specific concepts are the most relevant reasons for use in this context.
A novel material testing concept is developed in order to provide tensile and compressive properties within a single mechanical test. A new specimen geometry is designed for testing in a universal testing machine. Under tensile load, both a homogeneous tensile stress condition as well as a homogeneous compressive stress condition occur in the specimen. Measurements accompanying the experimental test with digital image correlation provide tensile and compressive Poisson’s ratio as well as tensile modulus. These properties are input parameters for subsequent finite element simulations. The compressive modulus is determined by iteratively adjusting finite element simulations in order to couple experimental and simulated results. For validating the concept, experimental tests are carried out on polyoxymethylene. While the tensile Poisson’s ratio of the new concept shows the best agreement with the reference value, the compressive modulus is approximately 15% higher. Further work should focus on an appropriate material model in order to reduce the deviation.
A growing number of economic geography scholars have discussed the spatial dimensions of sustainability transitions (STs), which entail radical changes in socio-technical systems to overcome societal, economic, and ecological problems. This involves innovation processes with a broad range of distinctive actors. Innovation intermediaries, such as universities and research institutes, are needed to support and accelerate the transfer of knowledge. Nevertheless, little is known about the influence of such actors on the configuration of the knowledge bases required for STs. This article presents insights from 14 semi-structured interviews with experts conducted in a regional innovation system (RIS) in East Germany. In cooperation with the Eberswalde University for Sustainable Development, we investigate four innovation intermediaries in the region of Eberswalde. The analytical framework links the concept of differentiated knowledge bases to small wins. Our results show that, first, in the Eberswalde region, the relevant actors involved in regional knowledge transfer focus predominantly on synthetic knowledge bases, such as experience-based knowledge of local area settings. Second, symbolic knowledge bases are crucial and often prerequisites for intermediary organizations to recombine knowledge bases and support the capability to innovate in regional knowledge transfer. Symbolic knowledge entails the ability to translate scientific findings into a language that can be understood by the various actors in knowledge transfer. Third, changes in organizational structures complement changes in cultural–cognitive and normative institutions to support innovation on a systemic level and foster change processes.
Rezension „Cyber-Sicherheit“
(2023)
The manufacturing industry is undergoing a transformation marked by the emergence of the Industry 4.0 and Industry 5.0 paradigms, which are characterized by the integration and automation of machinery. Thereby, the machinery evolves into Cyber-Physical Systems (CPSs). These CPSs consist of software and hardware modules implementing complex manufacturing processes. The ongoing integration of machinery and external technologies, e.g., the Industrial Internet of Things (IIoT), has led to an evolving Smart Manufacturing (SM) environment. At the same time, legacy (brownfield) machinery exists side by side with modern CPSs and may be integrated into the modern manufacturing process by retrofitting. The evolution of the SM domain driven by the Industry 4.0 and Industry 5.0 paradigms therefore leads to a more complex SM environment. Moreover, the integration and ongoing adaptation of technologies and processes introduce novel relationships and dependencies between the employed machinery and systems. Fault Diagnosis (FD) in such a complex SM environment becomes more time-consuming and laborious. A side effect of the ongoing evolution is the advancing capability of the machinery to produce data. As a result, not only complex data but also vast quantities of it have to be analyzed during any FD. The search for the origin of a fault is challenging. Additionally, technical challenges in the SM environment hinder a thorough FD. For instance, the available bandwidth for data transmission does not match the machinery's capability to produce vast data quantities. The practical challenge is therefore to focus on specific areas of the SM environment while choosing a reasonable granularity of data surveillance that covers the fault traces without losing too much information. Moreover, any FD depends heavily on the domain knowledge of the professionals entrusted with the FD task. On top of this, there is economic pressure, which increases the strain on the professionals involved, as unexpected downtime and the resulting loss in production quantity translate directly into economic loss.
The thesis introduces context-aware FD to mitigate the increased complexity of the SM environment and to support the professionals in their work. By supporting the professionals, the time needed for FD can be reduced, which results in faster fault amendment and reduced cost-intensive production downtimes. The Context-Aware Diagnosis in Smart Manufacturing (TAOISM) Visual Analytics (VA) model underpins the context-aware FD. The TAOISM VA model is the theoretical foundation of the context-aware FD and defines the data layer, the models layer, the visualization layer, and the knowledge layer for SM. The VA model thereby enables the definition of context, context models, and context hierarchies for their integration into the respective layers. The main idea behind the context-aware FD is to use the narrowing character of the context definition to slice vast amounts of data into manageable context-separated data groups. The context model thereby acts as a virtual boundary across machinery and systems, enclosing the physical domain (hardware) and the immaterial domain (software) equally. Further, the thesis focuses on contextual faults, which arise from context model violations, and proposes approaches for collecting contextual data. The automated building of context models and the extraction and transformation of contextual data are also part of the thesis. Employing the context models impacts each layer of the proposed TAOISM VA model. For each layer, various approaches show the impact of the context models and their employment in three different application scenarios for FD in SM. The performed research is tested and verified in the scenarios of Robotics Application Development (RAD), Maintenance of Industrial Inspection Machines (MIIM), and Abnormal Event Management in Production Lines (AEMPL). Along with employing context models, data augmentation with context models is proposed. Among other benefits, the presented data augmentation technique is able to balance undersampled datasets, which would enable a reduction of data recordings for any context-aware FD in the future. The data augmentation technique is thereby intended to address the existing inaccuracies in an SM environment, which also affect the quality of any employed Artificial Intelligence (AI). Another approach targets the unsupervised selection of production-relevant variables in order to focus FD-related data recordings and surveillance on areas of the SM environment that are active during production, automatically and without any domain knowledge involved. The underlying hypothesis, which was confirmed, was that faults, especially contextual faults, occur more often in active software and hardware modules. Another challenge arising from the vast amount of data is that labeling data for AI becomes uneconomical, even for small fault cases. As a result, evaluating any AI model in SM becomes challenging, as standard measures, e.g., accuracy, precision, recall, and F1-score, cannot be applied. For this case, the thesis proposes novel AI performance metrics that decouple comparability and correctness to enable the evaluation of AI models in an SM environment. All the contributions have led to the development of two distinct Proofs of Concept (PoCs). The PoCs are the reference implementations of the context-aware FD and comprise a knowledge-based FD Expert System (ES) and an unsupervised data-driven FD system. The latter was part of the thorough evaluation of the context-aware FD by two groups of domain experts and junior professionals.
The successful qualitative evaluation not only hints towards a working context-aware FD but also unveils future research directions and a future vision for SM. Additional domain expert interviews expose the views on the relevancy of a context-aware FD in SM for the future. In general, the evaluation hints towards a context-aware FD, which has versatile applicability, usability, and suitability in SM-related FD.
The overall objective of this dissertation is to enable a more efficient and effective point cloud and mesh partition for artists and 3D application developers. In this dissertation, 3D scans are assumed to be the source data material of the 3D application development, reducing the manual and time-consuming modelling of virtual objects. Furthermore, the scanned data is assumed to be processed into a point cloud and reconstructed into a polygon mesh. The mesh has to be partitioned into the objects of interest in order to design specific interactions with a game engine. Interviews revealed that the partition is conducted manually on a mesh with 3D manipulation software, which is time-consuming. The partition creation should be automated to increase efficiency and effectiveness. Freely available point cloud and mesh partition algorithms require an expert with appropriate programming skills and field knowledge, which makes them difficult to use. More precisely, the algorithms cannot be used in existing workflows as they are not implemented in a common graphical 3D manipulation software. Beyond these problems, the partition automation should work on real-world data and have a low runtime to raise efficiency. Different sub-research objectives were formulated from these problems and requirements, leading to novel approaches in the domains of: (a) sequential partition creation with deep reinforcement and imitation learning, (b) episodic partition creation with graph neural networks, (c) match-based reward calculation and (d) synthetic scene generation. One sub-research objective is the replacement of a human expert with an agent. In this context, a novel deep reinforcement learning (DRL) partition framework is presented. Experiments were conducted using this framework combined with the region growing algorithm and synthetic scenes created by a self-developed scene generator. The maximum reward could almost be achieved with a fine-tuned PointNet and by evaluating the wall and non-wall objects separately. However, this approach is not applicable to real-world scenes, which is necessary to achieve the efficiency and effectiveness objective. Therefore, another DRL partition approach is introduced, in which an agent unifies superpoints in the so-called superpoint growing environment. The point cloud is divided into superpoints, which are unified into the objects of interest by an agent. The experimental results show that this approach can be applied to real-world scenes. In addition to the application of DRL, an imitation learning approach was developed, increasing the agent's performance in the superpoint growing environment. The runtime in the sequential superpoint growing environment is poor, as each union decision requires a neural network call. Hence, a further sub-research objective is to improve the runtime. An episodic environment was developed as a solution, requiring only one graph neural network call. Similarities between superpoints are estimated in this environment and passed to a union algorithm. The differences between two graph neural network architectures and two union algorithms were experimentally investigated. According to the results, calculating the superpoint similarities with a correlation of the embedded node features is more robust than the similarity estimation with a sigmoid activation function. The reward function used in the DRL partition approaches was realised by a matching procedure.
As this function influences the partition quality, another sub-research objective is to investigate the differences between various match types. Matching functions from the literature were compared, and another match type was introduced. The usage of different match types in the learning process was experimentally evaluated. Although an agent gets more feedback with all match types, the best results (visual and in terms of the partition size) were achieved by only using first-order matches in the reward function. The synthetic scenes of the region growing approach lack realism as the lighting information is ignored, which can be important to train networks for the partition task. Therefore, a further sub-research objective is to develop a scene generator where the lighting is taken into account. After its development, the generated scenes were experimentally evaluated in a pre-training task. It turned out that the lighting information is important for a pre-training as larger accuracies were achieved. Furthermore, a faster convergence can be achieved with the pre-trained network instead of training a network on a target data set from scratch.
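The two similarity heads compared above can be contrasted directly: a sigmoid applied to raw inner products of the embedded node features versus a Pearson correlation of the same embeddings. A small NumPy sketch with random stand-in embeddings (shapes and values are illustrative):

```python
import numpy as np

# Sketch of two similarity heads over superpoint embeddings: (a) a sigmoid over
# raw inner products and (b) a Pearson correlation of the embedded node
# features. Random embeddings stand in for the graph neural network output.

rng = np.random.default_rng(9)
embeddings = rng.normal(size=(6, 32))          # one embedding per superpoint

def sigmoid_similarity(e):
    scores = e @ e.T                           # raw inner products
    return 1.0 / (1.0 + np.exp(-scores))

def correlation_similarity(e):
    centered = e - e.mean(axis=1, keepdims=True)
    centered /= np.linalg.norm(centered, axis=1, keepdims=True)
    return centered @ centered.T               # Pearson correlation in [-1, 1]

# Pairs of superpoints with high similarity are candidates for a union step.
print("sigmoid head:\n", np.round(sigmoid_similarity(embeddings), 2))
print("correlation head:\n", np.round(correlation_similarity(embeddings), 2))
```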
Another sub-research objective targets the development of a usable partition interface. In this context, the Blender add-on OpenXtract was developed, containing five open-source point cloud partition algorithms. The partition algorithms were extended by approximating geodesic distances so that the edges of meshes are used. An experiment has shown that the extended algorithms produce larger accuracies, which is considered an increase in effectiveness. Moreover, unstructured interviews revealed that OpenXtract can improve the effectiveness and efficiency of the partition creation.
"Jede Stadt hat ihre Mollerstadt" war ein geflügelter Satz im Forschungsprojekt "s:ne", dem Transferprojekt, aus dem diese Publikation hervorgegangen ist.
Tatsächlich haben viele deutsche Städte ähnliche Quartiere. Im Zweiten Weltkrieg zerbombt, wieder aufgebaut im Stil und mit den niedrigen Standards der Nachkriegszeit. Mit einer kleinteiligen Parzellenstruktur und heterogener Eigentümerschaft in einer innerstädtischen Eins-B-Lage - wo Wohnen anders als in der benachbarten "City" noch eine zentrale Rolle spielt.
Diese Quartiere sind in die Jahre gekommen. Sie haben aber eine besondere Bedeutung auch und gerade im Hinblick auf eine nachhaltige Stadtentwicklung im Sinne der Stadt der kurzen Wege.
Through its positioning within sustainability science as a multidisciplinary field, this thesis combines an engineering part and a social science part. In the engineering part, evidence was found that recycled plastics can, with respect to their mechanical properties, be used in highly loaded structural components and thus substitute virgin plastics. To this end, virgin and recycled polypropylene with 30 percent by weight talc filling is systematically investigated. The influence of notches, weld lines, mean stress, temperature, and ageing on the mechanical properties under static and cyclic loading is examined. Accompanying analytical investigations describe the molecular and crystalline differences between virgin and recycled plastics. This allows conclusions to be drawn about the mechanical properties and provides a scientific explanation for them.
The mechanical characteristics determined under static and cyclic loading feed into a notch stress concept based on the highly stressed material volume V80 and on the stress gradient χ*. Local stress-strain characteristics are determined using the concept of relative inelastic strains developed in this thesis.
On this basis, a cyclic strength verification is provided for the appliance carrier of a dishwasher, demonstrating that the carrier made from the investigated recycled material can withstand the required service life. Accompanying numerical calculations and component tests under service conditions validate the fatigue life estimate.
The social science part of this thesis examines how recyclates can move from their technological niche into broad application. For this purpose, the multi-level perspective model according to Geels [Gee02] is used. Lifting recyclates out of their technological niche requires a strategy involving different actors from different levels. The strategy is intended to identify factors that enable recyclates to form a regime of their own. This strategy is elicited in guideline-based expert interviews with groups of actors from the socio-economic, socio-technical, and socio-political spheres. Using a content-structuring qualitative content analysis, a strategy is derived for using recyclates more extensively in technical applications and for how the market must develop in the future.
Background
As the climate and environmental crises unfold, eco-anxiety, defined as anxiety about the crises’ devastating consequences for life on earth, affects mental health worldwide. Despite its importance, research on eco-anxiety is currently limited by a lack of validated assessment instruments available in different languages. Recently, Hogg and colleagues proposed a multidimensional approach to assess eco-anxiety. Here, we aim to translate the original English Hogg Eco-Anxiety Scale (HEAS) into German and to assess its reliability and validity in a German sample.
Methods
Following the TRAPD (translation, review, adjudication, pre-test, documentation) approach, we translated the original English scale into German. In total, 486 participants completed the German HEAS. We used Bayesian confirmatory factor analysis (CFA) to assess whether the four-factorial model of the original English version could be replicated in the German sample. Furthermore, associations with a variety of emotional reactions towards the climate crisis, general depression, anxiety, and stress were investigated.
Results
The German HEAS was internally consistent (Cronbach’s alphas 0.71–0.86) and the Bayesian CFA showed that model fit was best for the four-factorial model, comparable to the factorial structure of the original English scale (affective symptoms, rumination, behavioral symptoms, anxiety about personal impact). Weak to moderate associations were found with negative emotional reactions towards the climate crisis and with general depression, anxiety, and stress.
Discussion
Our results support the original four-factorial model of the scale and indicate that the German HEAS is a reliable and valid scale to assess eco-anxiety in German-speaking populations.
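For reference, the internal-consistency coefficient reported above (Cronbach's alpha) follows the standard formula alpha = k/(k−1) · (1 − Σ item variances / variance of the sum score). The snippet below is a generic sketch of that formula, not code from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_var / total_var)

# toy usage with 5 respondents and 3 items
print(cronbach_alpha([[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2], [3, 3, 4]]))
```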
Valid online inference is an important problem in contemporary multiple testing research, to which various solutions have been proposed recently. It is well-known that these existing methods can suffer from a significant loss of power if the null p-values are conservative. In this work, we extend the previously introduced methodology to obtain more powerful procedures for the case of super-uniformly distributed p-values. These types of p-values arise in important settings, e.g. when discrete hypothesis tests are performed or when the p-values are weighted. To this end, we introduce the method of super-uniformity reward (SUR) that incorporates information about the individual null cumulative distribution functions. Our approach yields several new 'rewarded' procedures that offer uniform power improvements over known procedures and come with mathematical guarantees for controlling online error criteria based either on the family-wise error rate (FWER) or the marginal false discovery rate (mFDR). We illustrate the benefit of super-uniform rewarding in real-data analyses and simulation studies. While discrete tests serve as our leading example, we also show how our method can be applied to weighted p-values.
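To make the rewarding idea concrete, the sketch below shows an online Bonferroni-type (FWER-controlling) spending rule in which only the attained null probability F_i(level) of a discrete test is deducted from the error budget, so the surplus is carried forward to later hypotheses. This is a simplified illustration of the super-uniformity reward principle, not the authors' exact procedures; the binomial example and the budget fractions are assumptions.

```python
from math import comb

def binom_null_cdf(n, p0):
    """Null CDF F(t) = P(p-value <= t) for the one-sided binomial test,
    whose attainable p-values are P(X >= k) under Binomial(n, p0)."""
    pmf = [comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(n + 1)]
    tail = [sum(pmf[k:]) for k in range(n + 1)]          # attainable p-values
    return lambda t: sum(pmf[k] for k in range(n + 1) if tail[k] <= t)

def online_bonferroni_sur(pvals, cdfs, alpha=0.05, gamma=None):
    """Online Bonferroni with a simple super-uniformity 'reward': at step i the
    level is gamma_i*alpha plus the carried surplus, but only F_i(level) is spent.
    Since sum_i F_i(level_i) <= alpha by a telescoping argument, FWER <= alpha."""
    n = len(pvals)
    gamma = gamma or [2 ** -(i + 1) for i in range(n)]   # assumed budget fractions
    carry, decisions = 0.0, []
    for p, F, g in zip(pvals, cdfs, gamma):
        level = g * alpha + carry
        decisions.append(p <= level)
        carry = level - F(level)          # unused level rewarded to later tests
    return decisions

# toy usage: three binomial tests with n = 10 trials under p0 = 0.5
F = binom_null_cdf(10, 0.5)
print(online_bonferroni_sur([0.0107, 0.5, 0.00098], [F, F, F]))
```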
Discrete uniform and homogeneous p-values often arise in applications with multiple testing. For example, this occurs in genome-wide association studies whenever a non-parametric one-sample (or two-sample) test is applied throughout the gene loci. In this paper, we consider multiple comparison procedures for such scenarios based on several existing estimators for the proportion of true null hypotheses, π0, which take the discreteness of the p-values into account. The theoretical guarantees of the several approaches with respect to the estimation of π0 and false discovery rate control are reviewed. The performance of the discrete procedures is investigated through intensive Monte Carlo simulations considering both independent and dependent p-values. The methods are also applied to three real data sets for illustration purposes. Since the particular estimator of π0 used to compute the q-values may influence its performance, relative advantages and disadvantages of the reviewed procedures are discussed, and practical recommendations are given.
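As a concrete illustration of taking discreteness into account, a Storey-type estimator can replace the uniform exceedance probability 1 − λ by the average null exceedance probability computed from the individual null distribution functions. The sketch below shows this variant for illustration only; it is not necessarily one of the estimators reviewed in the paper.

```python
import numpy as np

def storey_pi0_discrete(pvals, null_cdfs, lam=0.5):
    """Storey-type estimate of pi0 where the divisor (1 - lam) is replaced by the
    average null exceedance probability 1 - F_i(lam); `null_cdfs` is a list of
    callables returning F_i(t). Illustrative discrete adjustment only."""
    pvals = np.asarray(pvals)
    exceed = np.mean([1.0 - F(lam) for F in null_cdfs])
    return min(1.0, np.mean(pvals > lam) / exceed)
```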
Several classical methods exist for controlling the false discovery exceedance (FDX) in large-scale multiple testing problems, among them the Lehmann-Romano procedure (Lehmann and Romano 2005) ([LR] below) and the Guo-Romano procedure (Guo and Romano 2007) ([GR] below). While these two procedures are the most prominent, they were originally designed for homogeneous test statistics, that is, when the null distribution functions of the p-values F_i, 1 ≤ i ≤ m, are all equal. In many applications, however, the data are heterogeneous, which leads to heterogeneous null distribution functions. Ignoring this heterogeneity induces a lack of power. In this paper, we develop three new procedures that incorporate the F_i's while maintaining rigorous FDX control. The heterogeneous version of [LR], denoted [HLR], is based on the arithmetic average of the F_i's, while the heterogeneous version of [GR], denoted [HGR], is based on the geometric average of the F_i's. We also introduce a procedure [PB], based on the Poisson-binomial distribution, that uniformly improves [HLR] and [HGR] at the price of a higher computational complexity. Perhaps surprisingly, this shows that, contrary to the known theory of false discovery rate (FDR) control under heterogeneity, the way to incorporate the F_i's can be particularly simple in the case of FDX control and does not require any further correction term. The performance of the newly proposed procedures is illustrated by real and simulated data in two important heterogeneous settings: first, when the test statistics are continuous but the p-values are weighted by some known independent weight vector, e.g., coming from co-data sets; second, when the test statistics are discretely distributed, as is the case for data representing frequencies or counts. Our new procedures are implemented in the R package FDX, see Junge and Döhler (2020).
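For reference, the classical homogeneous [LR] step-down procedure can be sketched as follows; the heterogeneous procedures [HLR], [HGR] and [PB] are available in the R package FDX mentioned above. This is the textbook version, included only to make the critical values concrete.

```python
import numpy as np

def lehmann_romano(pvals, alpha=0.05, gamma=0.1):
    """Classical (homogeneous) Lehmann-Romano step-down procedure controlling the
    false discovery exceedance P(FDP > gamma) <= alpha, with critical values
    a_i = (floor(gamma*i) + 1) * alpha / (m + floor(gamma*i) + 1 - i)."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    i = np.arange(1, m + 1)
    crit = (np.floor(gamma * i) + 1) * alpha / (m + np.floor(gamma * i) + 1 - i)
    below = p <= crit
    # step-down: reject the k smallest p-values, where k is the largest index such
    # that all of p_(1), ..., p_(k) lie below their critical values
    k = m if below.all() else int(np.argmax(~below))
    return k, (p[k - 1] if k > 0 else 0.0)   # number of rejections and threshold

# toy usage
print(lehmann_romano([0.001, 0.004, 0.01, 0.2, 0.6], alpha=0.05, gamma=0.1))
```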
This experimental study investigates readers’ perceived text quality and trust towards journalistic opinion pieces written by the language model GPT-3. GPT-3 is capable of automatically writing texts in human language and is often referred to as an artificial intelligence (AI). In a 2x2x2 within-subjects experimental design, 192 participants were each presented with two randomly selected articles for evaluation. The articles were varied with regard to the variables actual source, declared source (in each case human-written or AI-written) and topic (1 & 2). Prior to the experiment, participants indicated the extent to which they agreed with various statements about the trustworthiness of AI, in order to capture their personal attitudes towards the topic.
The study found, first, that readers considered articles written by GPT-3 to be just as good as those written by human journalists. The AI-generated versions were rated slightly better in terms of text quality as well as the trust placed in the content; however, the effect was not statistically significant. Second, no negative effect on article perception was found for texts disclosed as AI-written. Articles declared as written by an AI were mostly rated equally well, or again minimally better, than texts declared as human-written, especially regarding trust. Only readability was rated slightly worse when an AI was declared as the source. Furthermore, a correlation was found between the participants’ personal attitudes towards the topic of AI and their perception of allegedly AI-written articles. For articles declared as AI-written, there are slight to moderate positive correlations between personal attitudes towards AI and each quality rating criterion. Personal preconceptions thus play a role in the perception of AI-written articles.
AI-based tools such as ChatGPT and GPT-4 are currently changing the higher-education landscape, and in many places the consequences for future forms of teaching and examination are already being discussed. To create an empirical basis, a Germany-wide survey of students was conducted to capture how AI-based tools are used in the context of their studies and everyday life. Among other things, various functions of AI-based tools that students rated as particularly important were identified. The aim of the quantitative survey was thus to capture how AI tools are used and which factors are decisive for their use.
In total, more than 6,300 students across Germany took part in the anonymous survey. The results of this quantitative analysis show that almost two thirds of the surveyed students use or have used AI-based tools in the context of their studies. In this context, almost half of the surveyed students explicitly name ChatGPT or GPT-4 as a tool they have used. Students of engineering as well as mathematics and the natural sciences use AI-based tools most frequently.
A more differentiated look at usage behaviour shows that students employ AI-based tools in many ways. Clarifying comprehension questions and explaining subject-specific concepts are among the most relevant reasons for use.
Combating climate change requires a fundamental transformation of the global energy system in order to reduce emissions of climate-damaging greenhouse gases. The use of renewable energies makes it possible to do without fossil fuels. However, compared with fossil technologies, this requires an increased input of mineral raw materials. The global energy transition must therefore also be understood as a shift towards a more material-intensive energy system. Growing efforts to expand renewable energies and the technologies for using them may therefore strongly increase the future demand for mineral raw materials. Against this background, this thesis assesses the potential increases in demand and examines the possible future consequences, focusing on the supply situation and the energy demand. In addition, it investigates whether the energy transition is influenced or impeded by its own repercussions and whether limiting factors can be identified that affect the transformation and, in particular, its speed. The topic is first examined separately for the mineral raw materials copper and lithium before the results are placed in a common context. Overall, the thesis provides a broad and up-to-date body of knowledge on the role of raw materials in the energy transition and, in particular, uses meta-studies to derive well-founded projections of developments up to the year 2050. The most important findings of this master's thesis can be summarised in the following key statements:
• The speed of the global energy transition can be negatively affected by scarcity and high prices of copper and lithium. This can be regarded as a repercussion of the global market, which is strained by a rapid transformation.
• The demand for copper and lithium is expected to rise sharply in the course of the energy transition, so demand may not be fully met, since supply capacities may reach the limits of feasible growth. For both raw materials, electromobility is one of the largest demand drivers.
• The most decisive limiting factor for the supply growth of both raw materials is the speed at which primary extraction capacities can be expanded, since primary production will remain the most important supply route in the future.
• Because no substitutes are foreseeable, the availability of lithium is a limiting factor for the expansion of electromobility and could thus also dampen the energy transition. Lithium-free battery and storage technologies should therefore be given greater consideration as a complement.
• For copper, depletion of ore deposits emerged as the most relevant repercussion of intensive extraction. The long-established industrial production of copper leads, through declining ore grades, to a disproportionately increasing effort in primary mining. This increasing effort dampens supply growth and raises the energy demand of copper provision, which in turn increasingly worsens the energy balance of copper-containing technologies.
• The innovation potential for increasing supply and reducing the energy demand of copper is largely exhausted. For lithium, which has only been used on a larger scale for a comparatively short time, considerable innovation potential remains in all areas.
The Hessian state government has committed itself to covering the final energy consumption for heat entirely from renewable energy sources in the future. This requires a transition from fossil fuels to heating concepts that use renewable energy sources. Heat pumps make it possible to exploit such heat sources without relying on fossil fuels.
This thesis examines the feasibility of using heat pumps at the Schöfferstraße campus of Hochschule Darmstadt.
Furthermore, this work is intended to show which measures are required for the use of heat pumps and which risks or difficulties arise from them. On this basis, investment costs could be estimated. It also aims to convey which factors influence the use of heat pumps.
With regard to the question of whether the Schöfferstraße campus should be connected to a district heating network of the city of Darmstadt or supply itself autonomously with heat pumps, a feasibility study on the use of heat pumps is essential.
Abstract
Background
The EU chemicals regulation “Registration, Evaluation, Authorisation and Restriction of Chemicals” (REACH) aims to reduce the usage of substances of very high concern (SVHCs) by firms. Therefore, a consumer right-to-know about SVHCs in articles is intended to create market-based incentives. However, awareness of the right-to-know among EU citizens is low. Moreover, the response window of 45 days afforded to suppliers impedes immediate, informed decisions by consumers. Consequently, despite being in effect for more than 10 years, only few consumers send requests. Civil society actors have developed smartphone applications reducing information search costs, allowing users to send right-to-know requests upon scanning an article’s barcode. Answers are stored in a database and made available to the public immediately. This paper assesses to which extent smartphone tools contribute to an increased use of the right-to-know by undertaking a case study of the application “ToxFox” by the German non-profit organisation Bund für Umwelt und Naturschutz Deutschland (BUND).
Results
An analysis of the data from the BUND database for the period 2016 to 2018 reveals that about 20 thousand users have sent almost 49 thousand requests. This has led to more than 9 thousand database entries, including 189 articles which contain SVHCs above the legal threshold. The data also indicate that receiving information on requested articles encourages further use of the application. Many suppliers accept the application and pro-actively provide information on articles without SVHCs above the threshold. However, most consumers use the application only for a short time, and suppliers are struggling to reply to right-to-know requests.
Conclusion
Evaluating the results, the study identifies options to enhance the application’s design in terms of user motivation and legal certainty, and to enhance the framework governing "barcode" assignments to articles with a view to better contributing to transparency. As for policy implications, a lack of consumer requests can in part be traced back to design flaws of the right-to-know and a lack of implementation and enforcement of REACH. In addition, suppliers have to increase their supply chain communication efforts to make sure they are in a position to properly answer consumer requests. We recommend several policy options addressing these and additional aspects, thus contributing to the legislative review of Art. 33 REACH.
Review of "Data Governance"
(2020)
Due to the increasing demand for higher bandwidth in modern communication systems, conventional networks are continuously expanded with new technologies to improve coverage. Free space optical communications (FSOC) offers significant advantages: compared with classical fiber-optic systems, the system setup time is short, and compared with wireless systems it provides substantial spectral bandwidth and, under certain conditions, better performance. This makes the technology not only a reasonable extension for metropolitan area networks but also a means to quickly set up a network after an outage caused by a natural disaster. However, transmitting data using FSOC involves limiting factors that have to be considered prior to each installation. Since the atmospheric channel is not static, changing weather conditions or industrial smog have a significant impact on the available bitrate. A simulation platform for investigating FSOC under these circumstances is developed and presented in this paper. Considering the atmospheric channel, turbulence, distance-dependent beam divergence, and the applied modulation schemes, a general overview of the capabilities is presented and discussed. The insights of this paper should help to decide under which preconditions FSOC provides a meaningful application possibility, or whether the limiting factors become too severe and other technologies must be considered.
The relevance of Machine Intelligence, a.k.a. Artificial Intelligence (AI), is undisputed at the present time. This is not only due to AI successes in research but, more prominently, its use in day-to-day practice. In 2014, we started a series of annual workshops at the Leibniz Zentrum für Informatik, Schloss Dagstuhl, Germany, initially focussing on Corporate Semantic Web and later widening the scope to Applied Machine Intelligence. This article presents a number of AI applications from various application domains, including medicine, industrial manufacturing and the insurance sector. Best practices, current trends, possibilities and limitations of new AI approaches for developing AI applications are also presented. Focus is put on the areas of natural language processing, ontologies and machine learning. The article concludes with a summary and outlook.
To study combustion fundamentals of complex fuels under well-defined boundary conditions, a novel Temperature Controlled Jet Burner (TCJB) system is designed that can stabilise both gaseous and pre-vaporised liquid fuels. In a first experimental exploratory study, piloted turbulent jet flames of pre-vaporised methanol, ethanol, 2-propanol and 2-butanol mixtures are compared to methane/air as a reference fuel. Complementary one-dimensional laminar flame calculations are used to provide flame parameters for comparison. Blow-off and flame length as global flame characteristics are measured over a wide range of equivalence ratios. For fuel-rich conditions, blow-off limits correlate well with extinction strain rate calculations. Differing flame lengths from lean to rich conditions are explained partly by different flame wrinkling that is assessed using planar laser-induced fluorescence imaging of the hydroxyl radical (OH-PLIF). A study of Lewis-number effects indicates that they have substantial influence on flame wrinkling. Lean alcohol/air flames, opposed to methane/air, have a Lewis number greater than unity. This impedes curvature development, which promotes relatively large flame lengths. In contrast, across stoichiometric conditions, all alcohol/air mixture Lewis numbers decrease significantly. At such conditions, alcohol/air flames show comparable or even larger wrinkling compared to methane/air flames. However, quantitatively, the differences in flame length and wrinkling observed among the flames can be explained neither by Lewis-number differences alone, nor by other global mixture parameters available from 1D laminar flame calculations. This study therefore emphasises the need for more detailed experimental analyses of the full thermochemical state of laminar and turbulent flames fuelled with complex fuels.
Signals and images with discontinuities appear in many problems in such diverse areas as biology, medicine, mechanics and electrical engineering. The concrete data are often discrete, indirect and noisy measurements of some quantities describing the signal under consideration. A frequent task is to find the segments of the signal or image which corresponds to finding the discontinuities or jumps in the data. Methods based on minimizing the piecewise constant Mumford–Shah functional—whose discretized version is known as Potts energy—are advantageous in this scenario, in particular, in connection with segmentation. However, due to their non-convexity, minimization of such energies is challenging. In this paper, we propose a new iterative minimization strategy for the multivariate Potts energy dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments.
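For intuition, the univariate Potts problem with direct (identity) measurements can be solved exactly by a classical O(n²) dynamic program. The sketch below illustrates this baseline case only; it is not the iterative strategy for indirect, noisy, multivariate measurements proposed in the paper.

```python
import numpy as np

def potts_segmentation(y, gamma):
    """Exact minimizer of the 1D Potts energy
    gamma * (#jumps) + sum of squared deviations from segment means,
    via the classical O(n^2) dynamic program. Returns segment index ranges."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(y * y)))     # prefix sums of squares

    def seg_err(l, r):  # squared error of a constant fit on y[l..r] (inclusive)
        m = r - l + 1
        s = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - s * s / m

    B = np.full(n + 1, np.inf)   # B[r] = optimal energy of the first r samples
    B[0] = -gamma                 # so the first segment pays no jump penalty
    prev = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            cand = B[l - 1] + gamma + seg_err(l - 1, r - 1)
            if cand < B[r]:
                B[r], prev[r] = cand, l - 1
    segments, r = [], n           # backtrack segment boundaries
    while r > 0:
        segments.append((prev[r], r - 1))
        r = prev[r]
    return segments[::-1]

# toy usage: a noisy three-level step signal
y = np.concatenate([np.zeros(50), 2 * np.ones(50), -np.ones(50)]) + 0.1 * np.random.randn(150)
print(potts_segmentation(y, gamma=1.0))
```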
The link performance of free space optical communications (FSOC) and deep space optical communications (DSOC) is investigated by considering two scenarios in space communications: the downlink and uplink between earth ground stations and near-earth geostationary (GEO) satellites, and the link between the earth and a spacecraft at a large distance of 1 astronomical unit (AU). Generally, a distance larger than 0.01 AU, or approximately 1,500,000 km from Earth, is considered deep space. In these theoretical investigations, different realistic system parameters for the optical lasers, transmitters, receivers, avalanche photodiodes (APDs), optical telescopes, and atmospheric disturbances such as scintillation and absorption are considered. The simulation results are compared with existing project data and valuable ESA experimental results to verify and improve the simulation models. The comparison in this paper shows that the simulation models for the link budget and the scintillation estimation are suitable for investigating FSOC and DSOC and can be used to improve the design and implementation of DSOC projects for planned long-term and medium-term space missions.
During the course of a typical deep space mission, such as a Mars-Earth mission, a wide range of operating points exists because changes in geometry cause different link budgets in terms of received signal and noise power. These changes include the distance range, the Sun-Earth-Probe angle, the zenith angle and atmospheric conditions. The different operating points, with different losses (background noise, pointing losses and atmospheric losses), lead to different capacities and data rates over the course of a typical deep space mission. Consequently, different engineering parameters are adjusted and optimized to combat some of these varying losses in order to obtain acceptable data rates and bit error probabilities. This motivates analyzing and simulating the various operating conditions that occur over the orbital time periods, together with the resulting received signal power level, noise power level, capacity, data rates and bit error probabilities. This paper details the results of simulations of typical deep space optical communication link operation.
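For context, a textbook Friis-type free-space optical link budget can be sketched as follows. The aperture sizes, efficiencies and atmospheric loss below are assumed example values, and effects such as scintillation, pointing loss and APD noise, which the papers' simulation models include, are omitted here.

```python
import math

def optical_link_budget_dBm(P_tx_W, D_tx, D_rx, wavelength, distance,
                            eta_tx=0.5, eta_rx=0.5, atm_loss_dB=3.0):
    """Received power of a free-space optical link using diffraction-limited
    aperture gains G = (pi*D/lambda)^2 and free-space path loss (4*pi*L/lambda)^2."""
    dB = lambda x: 10 * math.log10(x)
    G_tx = (math.pi * D_tx / wavelength) ** 2
    G_rx = (math.pi * D_rx / wavelength) ** 2
    fspl = (4 * math.pi * distance / wavelength) ** 2
    return (dB(P_tx_W * 1e3) + dB(eta_tx) + dB(eta_rx)
            + dB(G_tx) + dB(G_rx) - dB(fspl) - atm_loss_dB)

# Example: 5 W laser at 1550 nm, 10 cm transmit and 1 m receive telescope, GEO distance
print(optical_link_budget_dBm(5.0, 0.10, 1.0, 1550e-9, 38_000e3))
```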
Time series forecasting has been performed for decades in both science and industry, and the forecasting models have evolved steadily over time. Statistical methods have been used for many years and were later complemented by neural network approaches. Currently, hybrid approaches are increasingly presented, aiming to combine the advantages of both. These hybrid forecasting methods can lead to more accurate predictions and improve visual analytics systems for decision making or for supporting the decision-making process. In this work, we conducted a systematic literature review using the PRISMA methodology and investigated various hybrid forecasting approaches in detail. The exact procedure for searching and filtering and the databases in which we performed the search are documented and supplemented by a PRISMA flow chart. From a total of 1435 results, we included 21 works in this review through various filtering steps and exclusion criteria. We examined these works in detail and collected the quality of their prediction results. We summarized the error values in a table to investigate whether hybrid forecasting approaches deliver better results. We concluded that all investigated hybrid forecasting methods perform better than the individual ones. Based on the results of the PRISMA study, possible applications of hybrid prediction approaches in visual analytics systems for decision making are discussed and illustrated using an exemplary visualization.
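One widely used hybrid pattern of this kind combines a linear statistical model with a neural network trained on its residuals (Zhang-style ARIMA plus neural component). The sketch below is an illustrative one-step-ahead example using statsmodels and scikit-learn; the ARIMA order, lag count and network size are assumptions, not values taken from the reviewed works.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, order=(1, 0, 1), lags=5):
    """Hybrid one-step-ahead forecast: ARIMA models the linear part, an MLP models
    the residuals; the forecast is the sum of both components."""
    y = np.asarray(y, dtype=float)
    arima = ARIMA(y, order=order).fit()
    resid = np.asarray(arima.resid)
    # lagged residuals as features for the neural component
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    target = resid[lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    mlp.fit(X, target)
    next_resid = mlp.predict(resid[-lags:].reshape(1, -1))[0]
    return float(arima.forecast(steps=1)[0]) + next_resid

# toy usage on a short synthetic series
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)
print(hybrid_forecast(series))
```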
Studies have shown that although having more information improves the quality of decision-making, information overload causes adverse effects on decision quality. Visual analytics and recommendation systems counter this adverse effect on decision-making. Accurately identifying relevant information can reduce the noise during exploration and improve decision-making. These countermeasures also help scientists make correct decisions during research. We present a novel and intuitive approach that supports real-time collaboration. In this paper, we instantiate our approach to scientific writing and propose a system that supports scientists. The proposed system analyzes text as it is being written and recommends similar publications based on the written text through similarity algorithms. By analyzing text as it is being written, it is possible to provide targeted real-time recommendations to improve decision-making during research by finding relevant publications that might not have been otherwise found in the initial research phase. This approach allows the recommendations to evolve throughout the writing process, as recommendations begin on a paragraph-based level and progress throughout the entire written text. This approach yields various possible use cases discussed in our work. Furthermore, the recommendations are presented in a visual analytics system to further improve scientists’ decision-making capabilities.
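As an illustration of how such a paragraph-level recommendation step could work, the sketch below ranks publications by cosine similarity of TF-IDF vectors against the text currently being written. This is one simple similarity algorithm, shown for illustration; it is not necessarily the algorithm used by the proposed system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(current_paragraph, corpus_texts, corpus_titles, top_k=3):
    """Return the publications whose text is most similar (cosine similarity on
    TF-IDF vectors) to the paragraph currently being written."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(corpus_texts + [current_paragraph])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sims.argsort()[::-1][:top_k]
    return [(corpus_titles[i], float(sims[i])) for i in ranked]

# toy usage with a hypothetical three-document corpus
titles = ["Paper A", "Paper B", "Paper C"]
texts = ["visual analytics for decision making",
         "deep learning for image segmentation",
         "recommendation systems for scientific literature"]
print(recommend("we recommend related publications while writing", texts, titles))
```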
Abstract
Brand placements are omnipresent in video games, but their overall effect on brand attitudes is small and varies substantially between studies. The present research takes an evaluative conditioning perspective to explain when and how brand placements in video games influence brand attitudes. In two experiments with a 3D first‐person video game, we show that only brands encountered during positive in‐game experiences benefit from the placement, but not those encountered during negative in‐game experiences. Building on the cognitive processes underlying evaluative conditioning, we also show that brand attitudes largely depend on the memory for the pairing of a brand with positive/negative in‐game experiences. Pairing memory and thus also evaluative conditioning effects increase when players attend to the pairing of brands and positive/negative experiences, for example, when such pairings are a central part of the game's storyline. Overall, our findings show that evaluative conditioning and its cognitive mechanisms can be utilized to explain and predict advertising effects in applied settings, such as brand placements in video games.
Ten years after the journal’s first publication, we are taking a closer look at the knowledge flows of the output of the journal Publications. We analyzed the papers, topics, their authors and countries to assess the development of scholarly communication within Publications. Our bibliometric analyses show the research journal’s community, where the knowledge of this community is coming from, where it is going, and how diverse the community is based on its internationality and multidisciplinarity. We compare these findings with the scopes and topical goals the journal specifies. We aim at informing the editors and editorial board about the journal’s development to advance the journal’s role in scholarly communication. The results show that regarding topical diversity and internationality, the journal has remarkably developed. Moreover, the journal tends towards the field of library and information science, but strengthens its multidisciplinary status via its topics and author backgrounds.
With roughly half of the global population living in cities, urban environments become central to public health, yet they are often perceived as health risk factors. Indeed, mental disorders show higher incidences in urban contexts compared to rural areas. However, shared urban environments also provide a rich potential to act as a resource for mental health and as a platform to increase mental health literacy. Based on the concepts of salutogenesis and restorative environments, we propose a framework for urban design interventions. It outlines (a) an output level, i.e., preventive and discursive potentials of such interventions to act as biopsychosocial resources, and (b) a process level, i.e., mechanisms of inter- and transdisciplinary collaboration of researchers and citizens in the design process. This approach aims at combining evidence-based, salutogenic, psychosocially-supportive design with a focus on mental health. Implementing low-threshold, resource-efficient options in the existing urban context brings this topic to the public space. Implications for the implementation of such interventions for citizens, researchers, and municipality stakeholders are discussed. This illustrates new directions of research for urban person-environment interactions, public health, and beyond.
We investigated stability and change of plasma and urinary oxytocin as well as OXTR DNA methylation patterns through psychotherapy. Furthermore, we explored the potential impact of inpatient psychotherapy on oxytocin-related biomarkers and vice versa by differentiating patients who remitted from depression versus non-remitters. Blood and urine samples were taken from 85 premenopausal women (aged 19–52), 43 clinically depressed patients from a psychosomatic inpatient unit, and 42 healthy control subjects matched for age and education at two time points. Serum and urine oxytocin were measured using standard ELISA, and DNA methylation of the OXTR gene was assessed using bisulfite sequencing at the time of admission (baseline) and at discharge and from controls at matched time points. Oxytocin plasma levels were not associated with depression and were influenced by neither time in healthy controls nor psychotherapy in patients. Non-remitting depressed patients had significantly lower oxytocin urine levels before and after psychotherapy treatment. We found significantly lower exon 1 OXTR methylation in depressed patients over time and these differences were driven by patients remitting due to psychotherapy. A reverse pattern — higher levels of methylation in remitters — was found for exon 2 OXTR DNA methylation. Plasma oxytocin, urinary oxytocin, and OXTR DNA methylation patterns were intrapersonally relatively stable. OXTR-related factors were seemingly unaffected by inpatient psychotherapeutic treatment, but we found significant differences between remitting and non-remitting patients in urinary oxytocin and OXTR DNA methylation. If replicated, this suggests that OXTR-related markers may predict inpatient treatment outcomes of clinically depressed patients.
Heidi Heilmann
(2021)
Review of "IT-Audit"
(2021)
Anyone wishing to use open solutions faces a series of questions and challenges. Not only technical aspects but, above all, interpersonal factors and suitable rules have to be taken into account. The right "mindset" plays a central role, and with it comes a living implementation of the community idea. It makes it possible to extend the idea of openness beyond open source and open hardware to questions of identity management, the development of innovations, or the use of open data. In addition to technical understanding, this above all requires acceptance criteria and an awareness of risks. The article therefore aims to give an overview of the relevant topics and questions and to formulate a recommendation for dealing with openness in IT.
Machine intelligence, a.k.a. artificial intelligence (AI) is one of the most prominent and relevant technologies today. It is in everyday use in the form of AI applications and has a strong impact on society. This article presents selected results of the 2020 Dagstuhl workshop on applied machine intelligence. Selected AI applications in various domains, namely culture, education, and industrial manufacturing are presented. Current trends, best practices, and recommendations regarding AI methodology and technology are explained. The focus is on ontologies (knowledge-based AI) and machine learning.
The following article provides an overview of the most common practical problems in the use and integration of free and open source software. One focus is on the challenges for software vendors that integrate free software into their products and distribute it. The article points to the different types of free licences, best-practice solutions and compliance questions, as well as the limits of interpreting the licences.
The awareness of emerging trends is essential for strategic decision making because technological trends can affect a firm’s competitiveness and market position. The rise of artificial intelligence methods allows gathering new insights and may support these decision-making processes. However, it is essential to keep the human in the loop of these complex analytical tasks, which often lack an appropriate interaction design. Including special interactive designs for technology and innovation management is therefore essential for successfully analyzing emerging trends and using this information for strategic decision making. A combination of information visualization, trend mining and interaction design can support human users to explore, detect, and identify such trends. This paper enhances and extends a previously published first approach for integrating, enriching, mining, analyzing, identifying, and visualizing emerging trends for technology and innovation management. We introduce a novel interaction design by investigating the main ideas from technology and innovation management and enable a more appropriate interaction approach for technology foresight and innovation detection.
We consider geometric Hermite subdivision for planar curves, i.e., iteratively refining an input polygon with additional tangent or normal vector information sitting in the vertices. The building block for the (nonlinear) subdivision schemes we propose is based on clothoidal averaging, i.e., averaging w.r.t. locally interpolating clothoids, which are curves of linear curvature. To this end, we derive a new strategy to approximate Hermite interpolating clothoids. We employ the proposed approach to define the geometric Hermite analogues of the well-known Lane-Riesenfeld and four-point schemes. We present numerical results produced by the proposed schemes and discuss their features.
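For orientation, the classical linear four-point scheme is one of the well-known schemes whose geometric Hermite analogue is constructed in the paper. The sketch below applies the linear scheme to a closed input polygon; the 9/16 and −1/16 weights are the standard interpolatory four-point weights, and the closed-polygon (periodic indexing) handling is an assumption made for illustration.

```python
import numpy as np

def four_point_refine(points, rounds=3):
    """Classical (linear) four-point subdivision of a closed polygon:
    old vertices are kept, new edge points are
    9/16*(p_i + p_{i+1}) - 1/16*(p_{i-1} + p_{i+2})."""
    P = np.asarray(points, dtype=float)
    for _ in range(rounds):
        n = len(P)
        refined = []
        for i in range(n):
            pm, p0, p1, p2 = P[(i - 1) % n], P[i], P[(i + 1) % n], P[(i + 2) % n]
            refined.append(p0)                                   # keep old vertex
            refined.append(9 / 16 * (p0 + p1) - 1 / 16 * (pm + p2))  # new edge point
        P = np.asarray(refined)
    return P

# toy usage: refine a square three times
print(four_point_refine([(0, 0), (1, 0), (1, 1), (0, 1)]).shape)
```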
When NoSQL database systems are used in an agile software development setting, data model changes occur frequently and thus, data is routinely stored in different versions. The management of versioned data leads to an overhead potentially impeding the software development. Several data migration strategies exist that handle legacy data differently during data accesses, each of which can be characterized by certain advantages and disadvantages. Depending on the requirements for the software application, we evaluate and compare different migration strategies through metrics like migration costs and latency as well as precision and recall. Ideally, exactly that strategy should be selected whose characteristics fulfill service-level agreements and match the migration scenario, which depends on the query workload and the changes in the data model which imply an evolution of the database schema. In this paper, we present a methodology of self-adapting data migration, which automatically adjusts migration strategies and their parameters with respect to the migration scenario and service-level agreements, thereby contributing to the self-management of database systems and supporting agile development.
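The selection logic can be pictured as a small optimisation over candidate strategies: estimate the relevant metrics for the current migration scenario and pick the cheapest strategy that satisfies the service-level agreements. The sketch below is purely illustrative; the strategy names, metric fields and SLA thresholds are assumptions, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Estimate:             # estimated characteristics of a strategy for one scenario
    strategy: str
    migration_cost: float    # e.g. monetary or I/O cost
    latency_ms: float        # expected data-access latency
    recall: float            # share of legacy entities migrated in time

def select_strategy(estimates, sla_latency_ms, sla_recall):
    """Pick the cheapest strategy whose estimated latency and recall meet the SLAs;
    fall back to the lowest-latency strategy if none is feasible."""
    feasible = [e for e in estimates
                if e.latency_ms <= sla_latency_ms and e.recall >= sla_recall]
    if not feasible:
        return min(estimates, key=lambda e: e.latency_ms).strategy
    return min(feasible, key=lambda e: e.migration_cost).strategy

# hypothetical estimates for three common migration strategies
candidates = [Estimate("eager", 100.0, 5.0, 1.00),
              Estimate("lazy", 10.0, 40.0, 0.60),
              Estimate("incremental", 40.0, 15.0, 0.90)]
print(select_strategy(candidates, sla_latency_ms=20.0, sla_recall=0.85))
```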
The social role of women and the social role of leaders are considered incompatible (see the role congruity theory of Eagly & Karau (2002)). Women in leadership positions therefore face the challenge of realising conflicting role expectations in their behaviour. This study aims to demonstrate the tension between these role expectations in the perception of female leaders and, beyond that, to identify behavioural measures for coping with this tension. To answer these research questions, a qualitative survey of female leaders is conducted. The guideline-based interviews are analysed using content analysis according to Kuckartz (2022). The results of the content analysis show that a tension between the female role and the role of a leader exists, although not all women perceive it. Even where the tension cannot be demonstrated in the individual perception of all female leaders, it becomes apparent in the deliberate behavioural changes of the participants. These behavioural changes aim to move away from stereotypically female characteristics and behaviours in order to be perceived and taken seriously as a leader. This adaptation consequently carries the risk of jeopardising the perceived femininity and the authenticity of the women concerned, which at the same time diminishes their leadership authority. For female leaders, this means that there is neither a resolution nor an ideal behavioural strategy for dealing with this tension. Rather, it is necessary to accept that it cannot be resolved and to develop an individual leadership style that preserves one's own authenticity as far as possible.
HDAC8 is an important target in several indication areas including childhood neuroblastoma. Several isozyme selective inhibitors of HDAC8 with L-shaped structures have been developed. A theoretical study has suggested that methionine 274 (M274) would act as a “switch” that controls a transient binding pocket, which is induced upon binding of L-shaped inhibitors. This hypothesis was experimentally examined in this study. The thermostability and functionality of HDAC8 wildtype and mutant variants with exchanged M274 were analyzed using biophysical methods. Furthermore, the binding kinetics of L-shaped and linear reference inhibitors of these HDAC8 variants were determined in order to elucidate the mode of interaction. Exchange of M274 has considerable impact on enzyme activity, but is not the decisive factor for selective recognition of HDAC8 by L-shaped inhibitors.
Biometric systems have experienced a large development in recent years since they are accurate, secure, and in many cases more user-convenient than traditional credential-based access control systems. In spite of their benefits, biometric systems are still vulnerable to attack presentations (APs), which can be launched easily by a fraudulent subject without wide expert knowledge. In this way, he or she can gain access to several applications, such as bank accounts and smartphone unlocking, where biometric systems are frequently deployed. In order to mitigate such threats and increase the security of biometric systems, the development of reliable Presentation Attack Detection (PAD) algorithms is of utmost importance to the research community.
In the context of PAD, this thesis explores different strategies and methods to improve the generalisation capability of PAD schemes. To that end, we propose the definition of a semantic common feature space which successfully discriminates bona fide presentations (BPs) from APs. In essence, this process seeks those significant features, extracted from known PAI species samples, that are also observed in unknown PAI species. In addition, we explore several handcrafted techniques in order to build a reliable description of features per biometric characteristic studied. The experimental evaluation shows that a common feature space can be computed through the fusion of generative models and discriminative approaches. Remarkable detection performance at high-security thresholds leads to a PAD subsystem that is both convenient (i.e., low BP rejection rates or Bona fide Presentation Classification Error Rate (BPCER)) and secure (i.e., low AP acceptance rates or Attack Presentation Classification Error Rate (APCER)).
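One generic way to fuse a generative and a discriminative model for PAD is to feed the log-likelihoods of class-specific generative models, alongside the raw features, into a discriminative classifier. The sketch below only illustrates such a fusion under assumed data shapes and hyperparameters; it is not the construction developed in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def train_fused_pad(X_bp, X_ap):
    """Fit per-class Gaussian mixtures (generative part) and feed their
    log-likelihoods, together with the raw features, into an SVM (discriminative part)."""
    gmm_bp = GaussianMixture(n_components=4, random_state=0).fit(X_bp)
    gmm_ap = GaussianMixture(n_components=4, random_state=0).fit(X_ap)
    X = np.vstack([X_bp, X_ap])
    y = np.concatenate([np.zeros(len(X_bp)), np.ones(len(X_ap))])  # 0 = bona fide, 1 = attack
    ll = np.column_stack([gmm_bp.score_samples(X), gmm_ap.score_samples(X)])
    clf = SVC(probability=True).fit(np.hstack([X, ll]), y)
    return gmm_bp, gmm_ap, clf
```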
The cesar Methodology
(2019)
The cesar methodology presented here describes a transdisciplinary procedure for researching and developing innovative solutions for different fields of application.
Beyond classical design disciplines such as design and architecture, areas of use for this methodology include development work in the engineering sciences as well as management consulting. The primary goal of the cesar methodology is, building on a precise and comprehensive understanding of the context in question, to identify a need and to meet it with innovative solutions in a moderated and structured way. Particular emphasis is placed on involving all relevant actors and departments of a context.
The name "cesar" is derived from the five central modules and perspectives of this process: corporate, experience, system, action and realisation. The overall process can be divided into two main phases. In the first phase, the modules c, e and s, the design context is primarily analysed from three different dimensions. In the first dimension, the model addresses the economic-entrepreneurial perspective (c = corporate), considering, for example, competitive factors and strategic positions. In the subject-related experience dimension (e), the model concentrates on the users and integrates, among other things, findings from psychology, ergonomics and user experience. The third dimension is the systemic perspective (s = system) and comprises, among other things, all functional and technical aspects of the project; new production processes and material properties also play a role here. The intermediate result of this research process is a summary of the conditions relevant to the subsequent development process. It is fixed in the form of so-called "focus points" and forms the basis for the second main phase of the process. In this phase, within the modules action (a) and realisation (r), solutions are conceived using classical creativity tools such as the design thinking approach, selected against the focus points and finally defined. The realisation phase represents the actual implementation of the designs generated in the action phase: website drafts, for example, are finalised and implemented, or consumer goods are brought to production readiness. The cesar methodology can be adapted flexibly to the design conditions and, depending on budget, schedule and methodological usefulness, can be scaled across three levels of intensity.
Proposal for the consortium for the historically oriented humanities in the German national research data infrastructure (Nationale Forschungsdateninfrastruktur, “NFDI”). The application contains the scope and objectives, the composition and governance of the consortium as well as the research data management strategy and the originally proposed work programme. NFDI4Memory aims to establish systematic, sustainable links between research, memory institutions and infrastructures; to integrate historical source criticism into data management; to build a network of the historically oriented research communities; to shape the knowledge order for the digital future of the past; to advance the analog / digital interface of historical source material and data; to generate standards for historical research data and sustainability; and to contribute to education and citizen participation.
Smart factories are complex, and the increasing complexity of the employed cyber-physical systems drives this complexity further. Cyber-physical systems produce large amounts of data that are hard to capture and challenging to analyze. Real-time recording of all data is not possible due to limited network capabilities, and these limited capabilities are the reason why active surveillance during fault diagnosis can itself introduce a chain of faults. Such introduced faults may slow down production or lead to an outage of the production line. Here, we present a novel approach to automatically select production-relevant shop floor parameters in order to decrease the number of surveyed variables while maintaining the quality of fault diagnosis without overloading the network. We were able to achieve higher throughput, mitigate communication losses and prevent the disruption of factory instructions. Our approach uses an autoencoder ensemble with minority voting to differentiate between normal, always-on variables and production variables that may yield a higher entropy. Our approach has been tested in a production-equal smart factory and was cross-validated by a domain expert.
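A rough sketch of one way to realise an autoencoder ensemble with per-variable reconstruction error and voting is given below. The bottleneck size, error quantile and vote threshold are illustrative assumptions, not the parameters of the presented approach.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def flag_variables(X, n_models=5, vote_threshold=2, err_quantile=0.75):
    """Train an ensemble of small autoencoders (input -> bottleneck -> input) and
    flag variables whose per-variable reconstruction error is high in at least
    `vote_threshold` models (a possibly minority share of the ensemble)."""
    votes = np.zeros(X.shape[1], dtype=int)
    for seed in range(n_models):
        ae = MLPRegressor(hidden_layer_sizes=(max(2, X.shape[1] // 4),),
                          max_iter=2000, random_state=seed)
        ae.fit(X, X)                                        # learn to reconstruct inputs
        err = ((ae.predict(X) - X) ** 2).mean(axis=0)       # per-variable error
        votes += (err > np.quantile(err, err_quantile)).astype(int)
    return np.where(votes >= vote_threshold)[0]             # indices of flagged variables

# toy usage: 200 samples of 8 shop-floor variables, one of them highly erratic
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, 3] += rng.normal(scale=5.0, size=200)
print(flag_variables(X))
```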
Lubrication with oils or lubricants can be used to minimise wear in plastics, but it is not possible or desirable in all applications. Structuring the surfaces of plastic components can help here. A new computational model shows a new way to develop tribologically optimised product surfaces.
This contribution highlights future perspectives for the Library Science programme at Hochschule Darmstadt. Starting from the particular characteristics of library science in Darmstadt, future focal points of study in the fields of open science and data science are outlined, and a look is taken at the changing labour market for libraries, which in Darmstadt has led to the development of a cooperative dual study programme in library science.
Cyber-physical systems are becoming more complex, and with them the production lines in the smart factory. Every employed system produces large amounts of data with unknown dependencies and relationships, making incident reasoning difficult. Context-aware fault diagnosis can unveil such relationships on different levels. A fault diagnosis application becomes context-aware when the current production situation is used in the reasoning process. We have already published TAOISM, a visual analytics model defining the context-aware fault diagnosis process for the Industry 4.0 domain. In this article, we propose the Flourish dashboard for context-aware fault diagnosis. The eponymous visualization Flourish is a first implementation of a context-displaying visualization for context-aware fault diagnosis in an Industry 4.0 setting. We conducted a questionnaire- and interview-based bilingual evaluation with two user groups based on contextual faults recorded in a production-equal smart factory. Both groups provided qualitative feedback after using the Flourish dashboard. We positively evaluate the Flourish dashboard as an essential part of context-aware fault diagnosis and discuss our findings, open gaps, and future research directions.
3‐Chloro‐5‐Substituted‐1,2,4‐Thiadiazoles (TDZs) as Selective and Efficient Protein Thiol Modifiers
(2022)
Abstract
The study of cysteine modifications has gained much attention in recent years. This includes detailed investigations in the field of redox biology with focus on numerous redox derivatives like nitrosothiols, sulfenic acids, sulfinic acids and sulfonic acids resulting from increasing oxidation, S‐lipidation, and perthiols. For these studies selective and rapid blocking of free protein thiols is required to prevent disulfide rearrangement. In our attempt to find new inhibitors of human histone deacetylase 8 (HDAC8) we discovered 5‐sulfonyl and 5‐sulfinyl substituted 1,2,4‐thiadiazoles (TDZ), which surprisingly show an outstanding reactivity against thiols in aqueous solution. Encouraged by these observations we investigated the mechanism of action in detail and show that these compounds react more specifically and faster than commonly used N‐ethyl maleimide, making them superior alternatives for efficient blocking of free thiols in proteins. We show that 5‐sulfonyl‐TDZ can be readily applied in commonly used biotin switch assays. Using the example of human HDAC8, we demonstrate that cysteine modification by a 5‐sulfonyl‐TDZ is easily measurable using quantitative HPLC/ESI‐QTOF‐MS/MS, and allows for the simultaneous measurement of the modification kinetics of seven solvent‐accessible cysteines in HDAC8.
This handbook aims at facilitating the design and development process of leather products and informing the people working within those processes. It is not intended as a formula to be simply followed in order to create “sustainable products”. Instead, the overall goal of this handbook is to broaden the scope of designers, marketers, product managers and all other parties involved in creating new leather products. Leather products are usually embedded in a complex and diffuse system that makes it very difficult to design them “more sustainably”. Doing so requires a systemic perspective that takes into account a wide variety of aspects and interrelations, which sometimes contradict each other. In the case of leather products, design for sustainable development may therefore often be an iterative process involving compromises and uncertainties, and it requires a constant re-evaluation of design decisions and systemic effects. This handbook is therefore intended to contribute to a capacity-building process through which design and development processes become more systemic, leading towards more sustainable products.
Default nudges successfully guide choices across multiple domains. Online use cases for defaults range from promoting sustainable purchases to inducing acceptance of behavior tracking scripts, or “cookies.” However, many scholars view defaults as unethical due to the covert ways in which they influence behavior. Hence, opt-outs and other digital decision aids are progressively being regulated in an attempt to make them more transparent. The current practice of transparency boils down to saturating the decision environment with convoluted legal information. This approach might be informed by researchers who hypothesized that nudges could become less effective once they are clearly laid out: people can retaliate against influence attempts if they are aware of them. A recent line of research has shown that such concerns are unfounded when the default-setter proactively discloses the purpose of the intervention. Yet it remained unclear whether the effect persists when defaults reflect the current practice of mandated transparency, which boils down to the inclusion of information disclosures containing convoluted legal information. In two empirical studies (N = 364), respondents clearly differentiated proactive from mandated transparency. Moreover, they chose the default option significantly more often when the transparency disclosure was voluntary rather than mandated. Policy implications and future research directions are discussed.
Magnetic Particle Imaging is an imaging modality that exploits the non-linear magnetization response of superparamagnetic nanoparticles to a dynamic magnetic field. In the multivariate case, measurement-based reconstruction approaches are common and involve a system matrix whose acquisition is time consuming and needs to be repeated whenever the scanning setup changes. Our approach relies on reconstruction formulae derived from a mathematical model of the MPI signal encoding. A particular feature of the reconstruction formulae and the corresponding algorithms is that these are independent of the particular scanning trajectories. In this paper, we present basic ways of leveraging this independence property to enhance the quality of the reconstruction by merging data from different scans. In particular, we show how to combine scans of the same specimen under different rotation angles. We demonstrate the potential of the proposed techniques with numerical experiments.
Automatic detection of political trends with Twitter – do we still need opinion polls?
(2017)
Opinion research institutes go to considerable effort to capture the population's opinion trends regarding politicians through telephone and street surveys. In the winter of 2015/16, together with a group of students, we asked whether it is possible to automate this process. The underlying idea is that the platform Twitter is widely used for political discussions. Since tweets are limited to 140 characters and the respective topic can usually be identified unambiguously via hashtags, Twitter data appear well suited for automatic sentiment analysis. Sentiment-analysis methods can automatically classify these tweets into positive and negative statements of opinion. For this purpose, we implemented a Twitter crawler and sentiment analysis in the Python programming language. We then collected tweets about politicians over a period of four weeks and visualised the results of the opinion analyses. Finally, we compared our results with the ZDF-Politbarometer.
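To illustrate the classification step, the sketch below shows a minimal lexicon-based polarity scorer. The word lists are invented for illustration and are not the lexicon or classifier actually used in the project.

```python
# tiny illustrative word lists (assumptions, not the project's lexicon)
POSITIVE = {"gut", "stark", "super", "überzeugend", "dankbar"}
NEGATIVE = {"schlecht", "schwach", "peinlich", "enttäuschend", "rücktritt"}

def sentiment(tweet: str) -> int:
    """Very small lexicon-based polarity score: +1 positive, -1 negative, 0 neutral."""
    words = {w.strip("#@.,!?").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

print(sentiment("Starke Rede von #Politikerin, sehr überzeugend!"))   # -> 1
print(sentiment("Peinlicher Auftritt, #Rücktritt wäre konsequent."))  # -> -1
```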
A sample of tourists (N = 780) responded to a survey addressing purchasing intentions and consumption motives in relation to buying sustainable groceries at a local food market. These intentions and motives were contrasted for two consumption contexts: on vacation vs. at home. An initial analysis of the data indicated that self-reported purchasing intentions were weaker for a vacation scenario than for a home scenario. Further analyses suggested that motives associated with purchasing intentions were not universal between contexts. At home, normative motives (i.e., good conscience) were positively associated with intentions, whereas other motives failed to explain significant variance (i.e., value for money, calm and safe, avoid boredom, pleasure, and good impression). On vacation, associations with intentions followed a similar pattern, except for the finding that hedonic motives (i.e., pleasure) added explanatory variance. Despite the increased importance of hedonic motives on vacation compared to at home, normative motives showed the strongest association with purchasing intentions in both consumption contexts. The findings are discussed with reference to the literature on contextual discrepancies in environmental behavior, while noting possible implications for promoting sustainable consumption among tourists.
Lessons Learned From Applications of the Stage Model of Self-Regulated Behavioral Change: A Review
(2019)
Stage models are becoming increasingly popular in explaining change from current behavior to more environmentally friendly alternatives. We review empirical applications of a recently introduced model, the stage model of self-regulated behavioral change (SSBC). In the SSBC, change toward pro-environmental behavior takes place in four qualitatively different stages (predecisional, preactional, actional, and postactional), which are each influenced by constructs taken from theories previously established to describe and predict pro-environmental behavior. We performed a systematic literature search to retrieve peer-reviewed SSBC-based studies. The review includes 10 studies published between 2013 and 2018, six of which employed a cross-sectional, three an interventional and one a correlational longitudinal design. The cross-sectional and longitudinal studies generally support the model, although there are some irregularities that warrant further investigation. The interventional studies found stage-tailored informational measures to be more effective than non-stage-tailored measures in promoting stage progression and behavioral change. Furthermore, we identified several challenges that researchers may face when applying the SSBC. These include whether and how to analyze multiple behavioral alternatives; how to address the challenge of measuring a comprehensive model while keeping questionnaire length manageable; selecting and defining the role of model constructs in a behavioral context while keeping results comparable; and establishing a validated and reliable tool to diagnose a person’s stage of change. Based on these insights, we develop recommendations for researchers designing SSBC studies, in order to support a founded and efficient advancement of the theory which will then serve both researchers and practitioners aiming to promote pro-environmental behavior.
This Bachelor's thesis examines common forms of catalog enrichment at academic libraries in Germany. Using examples from individual library institutions, the types of enrichment and possible development trends are presented. The enrichments concern the findability of holdings, the expansion of search spaces and the accessibility of data, usability for users, user involvement, and additional services. By observing the library consortia, a state of development and use as well as a trend regarding catalog enrichment could be identified. Notable is the increase, in the course of the digital transformation, of automated subject indexing and metadata reuse, which is slowly growing beyond the consortium level. The use of resource discovery systems and scanned tables of contents in library catalogs is now the rule rather than the exception. Further development is needed in collection indexing, relevance ranking, and the use of concordances.
Factory automation is becoming a subject of transformation through Industry 4.0 and the Industrial Internet of Things, which introduces a new set of requirements due to novel industrial use cases. Future industrial communication is based on a unified network infrastructure that serves diverse communication services ranging from time-critical to best-effort data. Driven by the industrial domain, Time-Sensitive Networking (TSN) is the communication technology for enabling full connectivity in the smart factory. In conjunction with Ethernet-based TSN, wireless technologies become a key enabler for future automation scenarios, supporting the mobility and modularity demanded by flexible manufacturing and advanced automation. The latest cellular networking standard, 5G, develops the Ultra-Reliable Low-Latency Communication profile, which supports these novel requirements through an advanced Quality of Service framework and an enhanced 5G New Radio physical layer. The convergence of TSN and 5G is a promising solution for future industrial networks. Time synchronization is a key aspect of industrial communication networks: it establishes a common sense of time among all network nodes and thereby enables the alignment of time-critical processes across distributed systems, such as time-triggered data transmission, to achieve ultra-reliable bounded low latency, i.e., deterministic communication.

This work researches time synchronization in converged TSN/5G networks. A comprehensive use case analysis identifies potential applications and their corresponding requirements for converged TSN/5G networks. Particular use cases are highlighted due to their stringent synchronization requirements. Corresponding network topologies are derived and later used to review practical aspects of the synchronization. In order to research synchronization in converged TSN/5G networks, the heterogeneous procedures involved are modeled. The individual models are then merged into a novel model that describes the joint synchronization in converged TSN/5G networks and thus allows investigating how the heterogeneous mechanisms interact. Departing from this joint synchronization model, potential improvements are derived. The discussed improvements are manifold and address the distribution of timing information, the reference clock selection, and the correction of timing information. For the evaluation, the joint synchronization model is applied to generic networks under worst-case parameterization derived from related specification and standardization work, which allows studying the general behavior of synchronization in converged TSN/5G networks. In addition, the joint synchronization model is applied to use-case-specific networks to obtain a practical perspective on the synchronization in converged TSN/5G networks. These analyses yield synchronization boundaries that indicate the worst synchronization quality to be expected in the given networks. Subsequent simulative experiments validate the previous evaluation of the synchronization in converged TSN/5G networks; they draw a more realistic picture of the synchronization, as probabilistic parameterization and random effects are considered.

This work shows that a synchronization accuracy well below 1 μs can be achieved in converged TSN/5G networks. At the same time, it is shown that the synchronization strongly depends on the actual network architecture and on the quality of the 5G Radio Access Network (RAN) synchronization. The size of the network affects the synchronization accuracy, since each intermediate device between the reference clock and the synchronization target introduces additional inaccuracy. The RAN synchronization, on the other hand, is the major challenge in converged TSN/5G networks, as it involves the radio link with its inherent uncertainty and varying transmission characteristics; yet it is precisely this radio link that enables the synchronization of distributed devices.
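As a rough illustration of the per-hop accumulation of synchronization inaccuracy mentioned above, the following minimal Python sketch sums illustrative per-hop error contributions for a converged TSN/5G chain; the function, parameter names, and numeric values are assumptions for demonstration only and do not reproduce the joint synchronization model developed in this work.

# Minimal sketch of a worst-case time-synchronization error budget for a
# converged TSN/5G chain. Per-hop contributions are illustrative placeholders,
# not values from the thesis or from the IEEE 802.1AS / 3GPP specifications.

def worst_case_sync_error_ns(tsn_hops: int, ran_error_ns: float,
                             per_hop_error_ns: float = 50.0) -> float:
    """Accumulate per-hop inaccuracies plus the 5G RAN contribution."""
    return tsn_hops * per_hop_error_ns + ran_error_ns

# Example: 6 TSN bridges between reference clock and target, plus one 5G link.
budget = worst_case_sync_error_ns(tsn_hops=6, ran_error_ns=300.0)
print(f"Worst-case error: {budget} ns (below 1000 ns target: {budget < 1000})")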
Within the last few decades, the need for subject authentication has grown steadily, and biometric recognition technology has been established as a reliable alternative to passwords and tokens, offering automatic decisions. However, as unsupervised processes, biometric systems are vulnerable to presentation attacks targeting the capture devices, where presentation attack instruments (PAI) instead of bona fide characteristics are presented. Due to the capture devices being exposed to the public, any person could potentially execute such attacks. In this work, a fingerprint capture device based on thin film transistor (TFT) technology has been modified to additionally acquire the impedances of the presented fingers. Since the conductance of human skin differs from artificial PAIs, those impedance values were used to train a presentation attack detection (PAD) algorithm. Based on a dataset comprising 42 different PAI species, the results showed remarkable performance in detecting most attack presentations with an APCER = 2.89% in a user-friendly scenario specified by a BPCER = 0.2%. However, additional experiments utilising unknown attacks revealed a weakness towards particular PAI species.
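For illustration, the sketch below shows how a PAD classifier could in principle be trained on impedance-derived features and evaluated in terms of APCER and BPCER; the synthetic data and the random-forest model are assumptions made for demonstration only and do not represent the algorithm, features, or dataset used in the study above.

# Minimal sketch of training a presentation attack detection (PAD) classifier
# on finger impedance measurements. Data and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy impedance feature vectors (e.g., magnitude/phase at several frequencies):
bona_fide = rng.normal(loc=1.0, scale=0.2, size=(500, 8))
attacks = rng.normal(loc=2.0, scale=0.5, size=(500, 8))
X = np.vstack([bona_fide, attacks])
y = np.array([0] * 500 + [1] * 500)  # 0 = bona fide, 1 = attack presentation

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
threshold = 0.5  # in practice chosen to meet a target BPCER
apcer = np.mean(scores[y_test == 1] < threshold)   # attacks wrongly accepted
bpcer = np.mean(scores[y_test == 0] >= threshold)  # bona fide wrongly rejected
print(f"APCER = {apcer:.2%}, BPCER = {bpcer:.2%}")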
Mobile Contactless Fingerprint Recognition: Implementation, Performance and Usability Aspects
(2022)
This work presents an automated contactless fingerprint recognition system for smartphones.We provide a comprehensive description of the entire recognition pipeline and discuss important requirements for a fully automated capturing system. In addition, our implementation
is made publicly available for research purposes. During a database acquisition, a total number of 1360 contactless and contact-based samples of 29 subjects are captured in two different environmental
situations. Experiments on the acquired database show a comparable performance of our contactless scheme and the contact-based baseline scheme under constrained environmental influences. A comparative usability study on both capturing device types indicates that the majority of subjects prefer the contactless capturing method. Based on our experimental results, we analyze the impact of the current COVID-19 pandemic on fingerprint recognition systems. Finally, implementation aspects of contactless fingerprint recognition are summarized.
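As an aside, the following minimal sketch illustrates how the biometric performance of two capture schemes could be compared via the Equal Error Rate (EER) computed from genuine and impostor comparison scores; the synthetic score distributions are placeholders and not results from this work.

# Minimal sketch of comparing biometric performance via the Equal Error Rate.
# The synthetic score distributions are illustrative only.
import numpy as np

def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Return the EER, i.e., the operating point where FNMR and FMR are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    fnmr = np.array([np.mean(genuine < t) for t in thresholds])   # false non-matches
    fmr = np.array([np.mean(impostor >= t) for t in thresholds])  # false matches
    idx = np.argmin(np.abs(fnmr - fmr))
    return (fnmr[idx] + fmr[idx]) / 2

rng = np.random.default_rng(1)
contactless = eer(rng.normal(0.8, 0.1, 1000), rng.normal(0.3, 0.1, 1000))
contact_based = eer(rng.normal(0.85, 0.1, 1000), rng.normal(0.3, 0.1, 1000))
print(f"EER contactless: {contactless:.2%}, contact-based: {contact_based:.2%}")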
Transglutaminases are protein cross‐linking and protein‐modifying enzymes that have attracted considerable interest due to their causal involvement in various diseases and versatility in industrial applications. In particular, microbial transglutaminases (MTG) from Streptomyces bacteria have managed in recent years to evolve from simple food additives to specialized enzymes for the site‐directed modification of therapeutic proteins. The review summarizes relevant studies from the beginning dealing with the occurrence, production, structure, catalysis, and substrate molecules of MTG enzymes. It also addresses biotechnological procedures with MTG from S. mobaraensis (SmMTG) as the most prominent representative in focus. Reassessment of the available data revealed unexpected insights into catalysis of SmMTG and other transglutaminases, suggesting selection of glutamine donor proteins by subsites at the front vestibule and the existence of distinct lysine pockets. Flexibility of the SmMTG‐accessible glutamine donor substrate regions seems to be more important than the glutamine environment. Nevertheless, residues in close vicinity to glutamines also determine interaction with the SmMTG subsites. The apparent lack of subsites for lysine donor proteins suggests self‐assembly of the substrate proteins prior to enzymatic cross‐linking. The study of natural substrate proteins, especially their mutual interaction, is proposed to further illuminate catalysis of SmMTG. To this end, structure and function of the characterized substrate proteins from S. mobaraensis are discussed in conclusion.
Lamins are intermediate filaments that assemble in a meshwork at the inner nuclear periphery of metazoan cells. The nuclear periphery fulfils important functions by providing stability to the nuclear membrane, connecting the cytoskeleton with chromatin, and participating in signal transduction. Mutations in lamins interfere with these functions and cause severe, phenotypically diverse diseases collectively referred to as laminopathies. The molecular consequences of these mutations are largely unclear but likely include alterations in lamin-protein and lamin-chromatin interactions. These interactions are challenging to study biochemically, mainly because the lamina is resistant to high salt and detergent concentrations and co-immunoprecipitations are susceptible to artefacts. Here, we used genetic code expansion to install photo-activated crosslinkers to capture direct lamin-protein interactions in vivo. Mapping the Ig-fold of laminC for interactions, we identified laminC crosslink products with laminB1, LAP2, and TRIM28. We observed significant changes in the crosslink intensities between laminC mutants mimicking different phosphorylation states. Similarly, we found variations in laminC crosslink product intensities when comparing asynchronous cells and cells synchronized in prophase. This method can be extended to other laminC domains or other lamins to reveal changes in their interactome as a result of mutations or cell cycle stages.