Aligning soft skills with established competency frameworks is essential for the automated matching of competences in workforce analytics. This paper presents a semantic mapping approach that uses Natural Language Processing (NLP) to align soft skills from the SkillsMatch framework with the contents of the e-Competence Framework (e-CF, European standard EN 16234-1), which does not explicitly describe those skills in its text. A cosine similarity index indicates the semantic closeness of each soft skill to the e-CF dimensions; skills are mapped according to score thresholds, with additional confirmation by experts. This approach provides an automated, scalable way to establish semantic links between soft and technical competences, a task otherwise unfeasible in terms of effort and time given the large amount of information to be analysed. The result is additional explicit information on soft skills that enriches and extends the standard EN 16234-1, offering pragmatic guidance to practitioners for application in real contexts. Moreover, the proposed method underscores the value of embedding-based semantic similarity modelling for working with competence frameworks and standards to extract and analyse information of practical value, overcoming the limitations of traditional manual methods.
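As a rough illustration of this kind of pipeline (not the paper's implementation), the sketch below embeds skill and framework texts, scores each pair by cosine similarity, and keeps pairs above a threshold for expert confirmation. The model name, threshold value, and example texts are illustrative assumptions.

```python
# Minimal sketch: embed texts, score pairs by cosine similarity,
# keep candidates above a threshold for expert review.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

soft_skills = {"active listening": "Attending to and interpreting what others say."}
ecf_dims = {"D.9 Personnel Development": "Diagnoses individual and group competence needs."}

skill_vecs = model.encode(list(soft_skills.values()), normalize_embeddings=True)
dim_vecs = model.encode(list(ecf_dims.values()), normalize_embeddings=True)

THRESHOLD = 0.4  # illustrative; matches above it go to expert confirmation
sims = skill_vecs @ dim_vecs.T  # cosine similarity (vectors are unit length)
for i, skill in enumerate(soft_skills):
    for j, dim in enumerate(ecf_dims):
        if sims[i, j] >= THRESHOLD:
            print(f"map '{skill}' -> '{dim}' (score {sims[i, j]:.2f})")
```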
This bachelor's thesis investigates how dynamic access control can be implemented within Zero Trust architectures. A context-sensitive, weighted scoring system is developed that decides, via a threshold value, whether access to a given resource is granted. In addition, justifications for access decisions are introduced.
The access decisions are implemented with the Open Policy Agent, an open-source policy engine that manages policies following the policy-as-code paradigm. The context data for the access decisions is likewise collected with open-source software: Wazuh is used for endpoint monitoring and Keycloak for user authentication. Additionally, a demonstrator is developed that shows the system in operation as well as an alternative approach to access control.
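A minimal sketch of the context-sensitive, weighted scoring idea described above, assuming hypothetical context signals, weights, and a threshold; in the thesis itself the decision logic is expressed as Open Policy Agent policies (policy-as-code), not Python.

```python
# Sketch of a weighted, threshold-based access decision with justifications.
# Signals, weights, and threshold are hypothetical.
SIGNAL_WEIGHTS = {
    "device_compliant": 0.4,   # e.g. reported by Wazuh
    "mfa_used": 0.3,           # e.g. asserted by Keycloak
    "known_network": 0.2,
    "working_hours": 0.1,
}
THRESHOLD = 0.7

def decide(context: dict) -> tuple[bool, list[str]]:
    """Return (allow, reasons) so every decision carries a justification."""
    score, reasons = 0.0, []
    for signal, weight in SIGNAL_WEIGHTS.items():
        if context.get(signal):
            score += weight
        else:
            reasons.append(f"missing signal: {signal}")
    allow = score >= THRESHOLD
    reasons.insert(0, f"score {score:.2f} vs threshold {THRESHOLD}")
    return allow, reasons

print(decide({"device_compliant": True, "mfa_used": True, "known_network": False}))
```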
In Virtual Reality (VR) learning applications, the integration of pedagogical agents is particularly promising, as they function as virtual social support. With recent developments in Large Language Models (LLMs), conversational language models are now available that enable natural-language interaction in a VR environment. We evaluated four possible designs for LLM-based agents (n = 21) in an aircraft-engine training scenario, focusing on the primary questions that arise when bringing LLM-based agents into VR: whether to use text or speech, and whether or not to embody the conversational agent in 3D. Pre- and post-tests were used to measure retention, and questionnaires were used to measure the User Experience (UX) of the different design variants. Contrary to our hypotheses, retention was higher with the non-embodied design than with the embodied design. The output modality did not have a significant impact on learning success, but it did have an impact on UX.
Setting the Stage for Collaboration: A Multi-View Table for Touch and Tangible Map Interaction
(2025)
We present a multi-user map application based on a novel multiview concept that enables simultaneous and independent interaction with shared geospatial content. Each user operates a personal View Finder and Focus View, color-coded for clarity, while a shared, immutable Context View provides a common reference frame. The system supports both touch-based and tangible interaction techniques, including gesture control, virtual joysticks, and physical objects. Users can flexibly arrange their workspaces on a multi-touch table, supporting both individual exploration and collaborative tasks.
In aerospace engineering, data-driven surrogate models are increasingly employed to mitigate the computational and temporal costs of simulations, numerical analyses, and experiments. Two major challenges accompany this trend. First, the training of surrogate models often requires a sufficient amount of data, the determination of which is inherently difficult. Second, these models often exhibit high complexity, limiting both the traceability of their outputs and the extraction of useful insights. Explainable Artificial Intelligence (XAI) methods have therefore emerged as promising tools to enhance the interpretability, explainability, and transparency of such models. In this work, a combination of the established Shapley Additive Explanations (SHAP) approach with a bootstrap-based method is investigated. The proposed framework provides insights into the contribution of individual features and enables an assessment of data sufficiency with respect to surrogate model performance. Building upon these findings, the Bootstrap-Informed Feature Importance (BIFI) method is proposed. BIFI offers a model-agnostic, robust identification of relevant features. The method is analyzed in the context of Design of Experiments (DOE) processes used for surrogate model construction. Evaluation on four synthetic datasets of increasing complexity, as well as a dataset from aero-engine development, demonstrates that BIFI-based DOEs can improve surrogate model quality, measured in terms of R and MSE, by up to 90%. Consequently, the proposed method enables more efficient utilization of simulations, computations, and experiments while reducing the required number of samples.
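The core idea of combining SHAP with bootstrapping can be sketched as below; this is a minimal illustration on synthetic data, not the BIFI implementation, and shap.TreeExplainer with a random forest is one possible choice of model and explainer.

```python
# Minimal sketch: bootstrap-resampled SHAP importances with uncertainty.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 4))
y = 3 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=300)  # features 2, 3 irrelevant

n_boot, importances = 20, []
for _ in range(n_boot):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    model = RandomForestRegressor(n_estimators=50).fit(X[idx], y[idx])
    sv = shap.TreeExplainer(model).shap_values(X)  # SHAP values per sample/feature
    importances.append(np.abs(sv).mean(axis=0))    # mean |SHAP| as importance

imp = np.array(importances)
for f in range(X.shape[1]):
    print(f"feature {f}: {imp[:, f].mean():.3f} +/- {imp[:, f].std():.3f}")
```

The spread of the importance estimates across bootstrap resamples is what signals whether the available data is sufficient for a stable surrogate.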
This study compares Tangible User Interfaces (TUIs) with conventional touch interfaces (CTIs) on multi-touch tables for exploratory multi-user map applications. We developed a prototype that enables multiple users to interact with map content simultaneously and independently. Four interaction methods were implemented and evaluated: gesture-based touch control, widget-based touch joystick control, and two tangible interaction variants (a joystick and car-steering metaphor). A user study with 15 participants was conducted in which users navigated predefined routes with varying difficulty. We collected both objective performance measurements and subjective user assessments. While touch-based methods yielded higher accuracy in objective metrics, TUI-based interactions were rated significantly better by participants in terms of user experience. Notably, the car-steering tangible control was particularly well-received, highlighting how physical, playful interaction can enhance usability despite lower accuracy. These findings contribute to our understanding of how different interaction paradigms support collaborative exploration on multi-touch surfaces.
Light is the primary cue driving zooplankton diel vertical migration (DVM), a strategy that balances predation risk with resource access. However, DVM is often oversimplified, with limited consideration of how light-driven risks and resource needs vary across taxa and life stages. This simplification is partly due to constraints on collecting high-resolution, size-resolved data, especially at night, when subtle shifts in illumination reshape nocturnal risk landscapes. To overcome these limitations, we deployed a high-resolution in situ modular Deep-focus Plankton Imager and an image-recognition approach to quantify fine-scale DVM and body sizes of cladocerans and copepods in Lake Stechlin, Germany. Data was collected from day into night and across moonrise, and compared with environmental data from vertical profiling sondes. Typical DVM patterns emerged, with deeper daytime distributions; however, moonlight introduced additional behavioural complexity: larger individuals avoided illuminated layers, likely managing predation risk, while smaller individuals moved into these layers, possibly exploiting foraging opportunities and reduced risk. These light-mediated shifts were further shaped by ecological conditions: copepods tracked food-rich layers regardless of light levels at night, while cladocerans showed light-dependent responses to both temperature and food, such that light caused them to avoid otherwise favourable (warm, food-rich) layers. Our approach provides new insight into how zooplankton navigate nocturnal lightscapes, revealing size- and taxon-specific strategies. By establishing size-dependent responses to natural moonlight, this work provides a crucial baseline for predicting how artificial light at night may restructure zooplankton communities and destabilize freshwater food webs.
How far can we see at day?
(2025)
We discuss the farthest objects on Earth observable to the healthy, unaided eye during the daytime, i.e., the maximum visual range for observers on Earth. Visual range depends first on the properties of the material between observer and object and its interaction with radiation, but also on our visual perception system. After a rough comparison of ranges in water, glass, and the atmosphere, we focus on the physical basis of visual range for the latter. As a contrast phenomenon, visual range refers to allowed light paths within the atmosphere. It results from the interplay of geometry, refraction, and light scattering. We present a concise overview of this field through qualitative descriptions, quantitative estimates, and classroom demonstration experiments. The starting point is the common geometrical visual range, followed by extensions due to refraction and limitations due to contrast, which depend on scattering and absorption processes within the atmosphere. The quantitative discussion of scattering is very helpful for understanding the huge range of values found in nature, from meters in dense fog to hundreds of kilometers in clear atmospheres. Extreme visual ranges from about 300 km to above 500 km require optimal atmospheric conditions, cleverly chosen locations and times, and a sophisticated topography analysis. Even longer visual ranges are possible when looking through the vertical atmosphere: from the ISS, daytime ranges well above 1000 km are possible.
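The geometric starting point mentioned above can be made concrete with the textbook horizon-distance formula for an eye at height h above a sphere of radius R (R ≈ 6371 km for Earth); the refraction-extended coefficient below is the value commonly quoted for standard atmospheric conditions:

```latex
d_{\text{geom}} = \sqrt{2Rh + h^{2}} \approx \sqrt{2Rh}
\approx 3.57\,\sqrt{h/\mathrm{m}}\;\mathrm{km},
\qquad
d_{\text{refr}} \approx 3.86\,\sqrt{h/\mathrm{m}}\;\mathrm{km}.
```

For an observer at h = 100 m this already gives roughly 36-39 km, so the extreme 300-500 km ranges quoted above require high vantage points and favourable topography in addition to exceptionally clear air.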
How far can we see with the naked eye at night? Many celestial objects like stars and galaxies, as well as transient phenomena such as comets and supernovae, can be observed in the night sky. We discuss the furthest distances of such objects and phenomena observable with the naked eye during the night-time for Earth-bound observers. The physics of night-time visual ranges differs from that of daytime observations because human vision shifts from cones to rods. In addition, mostly point sources are observed due to the large distances involved. Whether celestial objects and phenomena can be detected depends on the contrast between their radiation and the background sky luminance. We present a concise overview of how far we can see at night by first discussing the effects of the Earth's atmosphere. This includes attenuation of transmitted radiation as well as its role as a source of background radiation. Disregarding the attenuation of light due to interstellar and intergalactic dust, simple maximum night-time visual range estimates are based on the inverse square law, which can be easily verified by laboratory and demonstration experiments. From the respective calculations, we find that individual stars within the Milky Way galaxy at distances of up to 15,000 light years are observable. Even further away are observable galaxies with several billion stars: the Andromeda galaxy can be observed with the naked eye at a distance of around 2.5 million light years. Similarly, the observability of supernovae also allows a visual range beyond the Milky Way galaxy. Finally, gamma-ray bursts, the most energetic events in the universe, are discussed with regard to naked-eye observations.
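The inverse-square-law estimate mentioned above can be checked with a back-of-the-envelope calculation; the constants below are approximate textbook values, not taken from the paper:

```python
# How far away could a Sun-like star be and still reach the naked-eye limit?
import math

LUMINOUS_FLUX_SUN = 3.6e28     # lm, approximate total luminous flux of the Sun
E_LIMIT = 8e-9                 # lx, illuminance of a star of magnitude ~ +6

d = math.sqrt(LUMINOUS_FLUX_SUN / (4 * math.pi * E_LIMIT))  # E = F / (4 pi d^2)
LIGHT_YEAR = 9.46e15           # m
print(f"max distance for a Sun-like star: {d / LIGHT_YEAR:.0f} light years")
# ~60 light years; far more luminous giants and supernovae push the
# naked-eye range to the thousands of light years quoted above.
```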
Naked-eye studies of the clear night sky reveal that a certain percentage of all observable stars can be perceived as having color. Subjective estimates differ widely, ranging from just a few to a maximum of above two hundred. Explanations are based on the emission spectra of the stars, which are modified by interstellar dust clouds, the Earth's atmosphere, and mostly the inverse square law. Color changes occur not only with variation of a star's angular elevation above the horizon, but also with decreasing night-time sky brightness due to the transition from photopic via mesopic to scotopic vision. The maximum number of stars showing color to the naked eye depends on star illuminances on Earth and the background sky luminance. The limit of observing color is found to correspond to apparent visual magnitudes around , defining the number of colored stars. This also means that naked-eye perception of stars with color is only possible within a certain range of star distances, which is well below the maximum naked-eye visual range of stars.
We investigate the forces of flowing granular material on an obstacle. A sphere suspended in a discharging silo experiences both the weight of the overlying layers and the drag of the surrounding moving grains. In experiments with frictional hard glass beads, the force on the obstacle was found to be practically independent of the flow rate. In contrast, flow of nearly frictionless soft hydrogel spheres added drag forces, which increased with the flow rate until reaching saturation at high flow speeds. The total force grew quadratically with the obstacle diameter in the soft, low-friction material, while it grew much more weakly, nearly linearly with the obstacle diameter, in the bed of hard, frictional glass spheres. In addition to the drag, obstacles embedded in the flowing hydrogel spheres experience a weight force from the top, as if immersed in a hydrostatic pressure profile, but negligible counter-forces from below. In contrast, the frictional hard particles create a strong pressure gradient near the upper surface of the obstacle. Numerical simulations provide additional information that is difficult to access experimentally. They reproduce the experimental results and give hints about the origin of the different force contributions. The results have considerable practical importance for the discharge of storage containers with large objects suspended in flowing granular material.
Granular gases are not only of interest in fundamental physics; they can also serve as test ensembles for the validity of collision models employed in (loose) granular matter. The theoretical literature mainly addresses spheres under ideal conditions, and while simulations allow full access to all particle parameters, experiments cannot fulfill these idealizations. We investigate granular gases of soft, rough spheres by combining microgravity experiments and adjusted simulations. We introduce Smart Particles with embedded autarkic micro-sensors for in-situ measurements of rotation rates and accelerations. Additionally, we extract 3D positions, translations, and orientations of the particles from stereoscopic video data using machine-learning-based algorithms. We address the partition of kinetic energy between the degrees of freedom, the angular and translational velocities, as well as collision statistics. A simulation adjusted to the experiment parameters shows good agreement for translational motion, but qualitative differences in the decay of rotational kinetic energy.
Continuously excited dense granular gases in microgravity can develop spatial inhomogeneities of the particle distribution. Dynamical clustering is a phenomenon in which a significant share of particles concentrates in strongly overpopulated regions. It is caused by a complex interplay between the energy influx and dissipation in collisions. The overall packing fraction, container geometry, and excitation parameters influence the gas-cluster transition. We perform Discrete Element Method (DEM) simulations for frictional spheres in a cuboid container and apply statistical criteria to the packing fraction profiles. Machine learning (ML) methods are used to study the dependence of the gas-cluster transition on system parameters; they are a promising alternative for predicting the state of the system without the need for time-consuming DEM simulations. We identify the best models for predicting the dynamical clustering of frictional spheres in a specific experimental geometry.
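A minimal sketch of this ML step, with synthetic stand-in parameters and a toy labelling rule instead of DEM-derived data:

```python
# Sketch: predict gas vs. cluster state from system parameters
# without running a DEM simulation. Data is synthetic, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
packing_fraction = rng.uniform(0.05, 0.35, n)
excitation_amp = rng.uniform(0.5, 3.0, n)
aspect_ratio = rng.uniform(1.0, 3.0, n)
X = np.column_stack([packing_fraction, excitation_amp, aspect_ratio])
# toy rule standing in for DEM labels: dense, weakly driven systems cluster
y = (packing_fraction / excitation_amp > 0.08).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```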
When granular gases in microgravity are continuously excited mechanically, spatial inhomogeneities of the particle distribution can emerge. At a sufficiently large overall packing fraction, a significant share of particles tends to concentrate in strongly overpopulated regions, so-called clusters, far from the excitation sources. This dynamical clustering is caused by a complex balance between energy influx and dissipation. The mean number density of particles, the geometry of the container, and the excitation strength influence cluster formation. Quantifying clustering thresholds is not trivial. We generate 'synthetic' data sets by Discrete Element Method simulations of frictional spheres in a cuboid container and apply established criteria to classify the local packing fraction profiles. Machine learning approaches that predict dynamical clustering from known system parameters on the basis of classical test criteria are proposed and tested. This avoids the necessity of complex numerical simulations.
Microgravity experiments with three-dimensional (3D) granular gases, i.e., ensembles of free-moving macroscopic particles which collide inelastically, produce large amounts of stereo video footage that require processing and analysis. The main steps of data treatment are particle detection, 3D matching and tracking in stereoscopic views, and quantification of ensemble statistical properties such as mean kinetic energy or collision processes. Frequent overlapping or clustering of particles and their complicated movement patterns require smart software solutions. In recent years, Artificial Intelligence/Machine Learning (AI/ML) methods have been successfully used for the analysis of granular systems. We have applied such techniques to granular gases of rod-like particles and developed a software tool which enables a full cycle of semi-automatic experimental data analysis. The approach is now being tested on more complex, non-convex particles shaped as 3D crosses (hexapods). Another challenge is the optical analysis of dense granular gases, where individual particles cannot be tracked. We present a preliminary result of applying an ML method to extract number density profiles in the VIP-Gran experiment with a dense ensemble of rod-like particles.
Microgravity (µg)-generated three-dimensional (3D) multicellular aggregates can serve as models of tissue and disease development. They are relevant in the fields of cancer and in vitro metastasis research as well as regenerative medicine (tissue engineering). Driven by the 3R concept (replacement, reduction, and refinement of animal testing), µg-exposure of human cells represents a new alternative method that avoids animal experiments entirely. New Approach Methodologies (NAMs) are used in biomedical research, pharmacology, toxicology, cancer research, radiotherapy, and translational regenerative medicine. Various types of human cells grow as 3D spheroids or organoids when exposed to µg-conditions provided by µg-simulating instruments on Earth. Examples of such µg-simulators are the Rotating Wall Vessel, the Random Positioning Machine, and the 2D or 3D clinostat. This review summarizes the most recent literature on µg-engineered tissues. We discuss all reports examining different tumor cell types from breast, lung, thyroid, prostate, and gastrointestinal cancers. Moreover, we focus on µg-generated spheroids and organoids derived from healthy cells such as chondrocytes, stem cells, bone cells, endothelial cells, and cardiovascular cells. The data obtained from NAMs and µg-experiments clearly imply that they can support translational medicine on Earth.
This thesis compares the open-source platforms MISP and OpenCTI with regard to their suitability for use in a governmental Security Operations Center (SOC). The goal was to evaluate technical, organizational, and legal aspects in the context of a future network federation. The investigation was carried out using a criteria catalog in a containerized test environment with integration into IBM QRadar.
The results show that MISP stands out for its high interoperability and resource-efficient operation, while OpenCTI offers a semantically rich, STIX-based data model and advanced analysis functions. The two systems complement each other: MISP is suited to operational information exchange, OpenCTI to strategic and analytical threat assessment. Combined use enables sovereign, scalable CTI operations in a governmental environment.
This thesis develops and evaluates a hybrid Intrusion Detection System (IDS). The system combines the anomaly-based IDS Zeek with the signature-based IDS Suricata and a Python rule generator placed between them. Zeek analyzes recorded network traffic from the CIC-IDS2017 dataset and writes anomalies to log files, from which the rule generator automatically derives Suricata rules. Suricata uses these rules in addition to ET Open to detect recurring attacks more quickly.
The evaluation is performed offline on PCAP files and compares three configurations: a Zeek baseline, a Suricata baseline (ET Open), and the hybrid variant. Performance is assessed using confusion-matrix-based metrics (including recall, precision, F1-score, and false-positive rate). The results show that the hybrid approach significantly increases the detection rate compared to the baselines and reaches a recall close to 1.0 in several attack scenarios. At the same time, however, the false-positive rate rises considerably, especially for complex HTTP traffic.
The thesis therefore answers its research question with a mixed verdict: coupling Zeek anomalies with automatic Suricata rule generation is technically feasible and effective for attack detection, but in its current form not yet suitable for production use. The developed prototype serves as a reproducible reference setup and clearly shows where finer-grained rule models, feedback mechanisms, and more realistic operating environments are needed.
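The generator's job can be pictured mechanically: turn Zeek anomaly records into Suricata signatures. The following heavily simplified sketch illustrates the idea; the log fields and rule template are assumptions, not the thesis implementation.

```python
# Sketch: read Zeek notice-style entries and emit Suricata rules
# for the offending sources. Fields and template are simplified.
import json

zeek_notices = [
    '{"ts": 1690000000.0, "src": "192.0.2.10", "note": "Scan::Port_Scan"}',
]

def to_suricata_rule(notice: dict, sid: int) -> str:
    return (
        f'alert ip {notice["src"]} any -> $HOME_NET any '
        f'(msg:"AUTOGEN {notice["note"]}"; sid:{sid}; rev:1;)'
    )

for sid, line in enumerate(zeek_notices, start=1_000_000):
    print(to_suricata_rule(json.loads(line), sid))
```

Generating rules this broadly is exactly what drives the false-positive rate observed above, which is why finer-grained rule models are listed as future work.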
Effective communication skills are increasingly recognized as critical for leadership in digital transformation contexts. Recently, AI-Chatbots such as Talk to Transform (T2T) have been developed to enhance leadership competencies through interactive role-plays and feedback. This paper proposes their adaptation for cybersecurity training. We discuss the current landscape of cybersecurity training, highlight the importance of communication, and present T2T as an innovative approach to bridge this gap through chatbot-driven role-plays.
Genetic risk factor identification for common epilepsies guided by integrative omics data analysis
(2025)
Objective
Genetic generalized epilepsies (GGEs) comprise the most common genetically determined epilepsy syndromes, following a complex mode of inheritance. Although many important common and rare genetic factors causing or contributing to these epilepsies have been identified in the past decades, many features of the genetic architecture are still insufficiently understood. This study integrates genome-wide association study (GWAS) data from the International League Against Epilepsy Consortium on Complex Epilepsies with transcriptome-wide association studies to identify genes whose genetically regulated expression levels are associated with epilepsy.
Methods
To achieve this, we used multiple computational approaches, including MAGMA, a tool for gene analysis of GWAS data, and its derivatives E-MAGMA and H-MAGMA, to improve gene mapping accuracy by utilizing tissue-specific expression and chromatin interaction data. Furthermore, we developed ME-MAGMA to incorporate methylation quantitative trait loci data, providing insights into epigenetic factors.
Results
We identified a total of 897 candidates (false discovery rate-corrected, < .05). These include voltage-gated calcium channels, voltage-gated potassium channels, and other genes associated with epilepsy pathogenesis, such as NPRL2, CACNB2, and KCNT1, which act as key players in neuronal communication and signaling in the brain.
Significance
In this study, we propose new candidate genes to expand the dataset of potential epilepsy-causing genes. Further research on these genes may enhance our understanding of the complex regulatory mechanisms underlying GGE and other types of epilepsy, potentially revealing targets for therapeutic intervention.
The recent discovery of superconductivity in the bilayer Ruddlesden-Popper nickelate La3Ni2O7 under high pressure has generated much interest in the superconducting pairing mechanism of nickelates. Despite extensive work, the superconducting pairing symmetry in La3Ni2O7 remains unresolved, with conflicting results even for identical methods. We argue that different superconducting states in La3Ni2O7 are in close competition and highly sensitive to the choice of interaction parameters as well as pressure-induced changes in the electronic structure. Our study uses a multiorbital Hubbard model incorporating all Ni 3d and O 2p states. We analyze the superconducting pairing mechanism of La3Ni2O7 within the random phase approximation and find a transition between d-wave and sign-changing s-wave pairing states as a function of pressure and interaction parameters, which is driven by spin fluctuations with different wave vectors. These spin fluctuations with incommensurate wave vectors cooperatively stabilize a superconducting order parameter with d_{x^2-y^2} symmetry for realistic model parameters. Simultaneously, their competition may be responsible for the absence of magnetic order in La3Ni2O7, demonstrating that magnetic frustration and superconducting pairing can arise from the same set of incommensurate spin fluctuations.
This paper presents an approach to teaching and consolidating skills in the context of sustainability, "Prototyping Sustainability – Designing Sustainable IT" (ProS), using a workshop format for participatory and creative learning. The workshop integrates principles from Education for Sustainable Development (ESD), transformative and experiential learning, participatory design, and critical reflection on the digital age to engage participants in critically examining the environmental, economic, and social impacts of digital technologies in the context of the Sustainable Development Goals (SDGs). Structured in five modular phases, from self-reflection and knowledge activation to collaborative prototyping and peer evaluation, the workshop offers a hands-on, gamified learning experience centred on real-world sustainability challenges. Learners create user-centred paper-based prototypes for digital products using tactile materials, persona-driven scenarios, and knowledge of sustainable product characteristics gained in the workshop. Outcome measurement is supported through pre- and post-workshop surveys, peer voting templates, and paper-based prototype artefacts, enabling rich insight into behavioural intentions and learning gains. The paper discusses the educational value and sustainability relevance of the workshop in engaging young people in critical reflection on the environmental, economic, and social consequences of digitalization. Finally, it highlights challenges and limitations and proposes directions for future research.
Objectives
Human Papillomavirus (HPV) is a prevalent sexually transmitted infection and a leading cause of cervical cancer. In Georgia, cervical cancer ranks as the fifth most common cancer among women, with approximately 330 new cases and 200 deaths reported annually. Despite the availability of effective HPV vaccines, national vaccination coverage remains low. This study aimed to evaluate HPV vaccination coverage, analyze cervical cancer incidence trends, and model the potential impact of increased vaccination uptake on cancer prevention outcomes in Georgia.
Study design
A retrospective observational study was conducted using national health registry data and modeling projections to assess the burden of cervical cancer and estimate the effect of scaled vaccination coverage.
Methods
National health databases were used to analyze HPV vaccination rates and cervical cancer incidence. Descriptive statistics, chi-square tests, and linear regression were applied to identify trends and disparities. Additionally, a dynamic transmission model was developed to simulate the 10-year impact of increasing HPV vaccination coverage on cervical cancer incidence.
Results
The crude cervical cancer incidence rate was 15.7 per 100,000 women, with an age-standardized rate of 10.6 per 100,000. In 2022, only 38 % of eligible girls aged 13–18 received the first HPV vaccine dose, and 26 % completed the second dose. Regional disparities in vaccination and screening were noted, and overall screening coverage declined to 13,890 women screened in 2022. Modeling showed that increasing vaccine coverage to 60 % could reduce cervical cancer incidence by 50 % (preventing ∼163 cases); coverage of 80 % and 90 % could reduce incidence by 70 % and 85 %, preventing 228 and 276 cases, respectively.
Conclusion
Low HPV vaccination uptake in Georgia (38 % first-dose and 26 % full coverage) and declining screening limit cervical cancer prevention. Modeling shows that increasing vaccination coverage to 60–90 % could prevent 163–276 cases over the next decade. Strengthening vaccination and screening strategies is essential to advance elimination.
This bachelor's thesis explores the use of a forward-looking sonar for underwater obstacle detection on the "Sonobot 5" by EvoLogics. Two classical approaches, one probabilistic approach, and one deep-learning-based approach are compared. The goal is to enable real-time obstacle detection that can be used for collision avoidance.
This thesis investigates the integration of Cyber Threat Intelligence (CTI) into IT and healthcare environments. The goal was to evaluate whether CTI data can be integrated into existing security solutions and to determine what added value the data provides for detecting and defending against threats. To this end, a laboratory environment was set up, consisting of OpenCTI for collecting and managing threat information, Logstash for data forwarding, and Wazuh for analysis and alerting. Over a period of eight weeks, data from various open-source feeds (including CVE, MITRE, AlienVault, and ThreatFox) was collected, processed, and checked for its relevance to the healthcare sector. To assess the quality of the CTI data, an example scenario for a hospital infrastructure was developed. This scenario covered typical systems and vendors in the healthcare sector, including hospital information systems, medical devices, and network infrastructure. The collected data was mapped onto this infrastructure to determine the extent to which specific threats are covered. The results show that technical integration is feasible in principle and that reliable indicators exist for generic IT threats. Specific information for the healthcare sector, however, could only be identified to a limited extent. It follows that specialized feeds, for example from Health-ISAC, are necessary to reliably cover sector-specific threats.
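The mapping step described above can be pictured as a simple coverage check of indicators against an asset inventory; the asset list and indicator fields below are illustrative stand-ins, not the thesis data.

```python
# Sketch: check which assets of a model hospital infrastructure
# are covered by collected CTI indicators. All records illustrative.
hospital_assets = [
    {"name": "HIS server", "vendor": "ExampleMed", "product": "HIS Suite"},
    {"name": "Infusion pump", "vendor": "ExampleDevice", "product": "Pump X"},
]
cti_indicators = [
    {"cve": "CVE-2099-0001", "vendor": "ExampleMed", "product": "HIS Suite"},
]

for asset in hospital_assets:
    hits = [i["cve"] for i in cti_indicators
            if i["vendor"] == asset["vendor"] and i["product"] == asset["product"]]
    print(asset["name"], "->", hits or "no sector-specific coverage")
```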
In an era where AI-powered chatbots are increasingly being integrated into education and corporate learning, it is critical to determine whether these approaches benefit all learners or primarily cater to those with specific preferences. This study explores the interplay between learning preferences and learning outcomes in communication training using an AI-powered chatbot. In a field experiment with 17 participants, systematic thinkers and intrinsically motivated learners reported higher satisfaction and greater skill improvement, while those who preferred model learning and direct feedback benefited less. These findings suggest that AI-powered chatbots should be carefully designed to accommodate diverse learners and mitigate potential negative effects.
Future-proofing Education: A Prototype for Simulating Oral Examinations Using Large Language Models
(2023)
This study explores the impact of Large Language Models (LLMs) in higher education, focusing on an automated oral examination simulation using a prototype. The design considerations of the prototype are described, and the system is evaluated with a select group of educators and students. Technical and pedagogical observations are discussed. The prototype proved to be effective in simulating oral exams, providing personalized feedback, and streamlining educators' workloads. The promising results of the prototype show the potential for LLMs in democratizing education, inclusion of diverse student populations, and improvement of teaching quality and efficiency.
The growing number of connected medical devices in hospitals poses serious operational technology (OT) security challenges. Effective countermeasures require a structured analysis of the communication interfaces and security configurations of individual devices. State of the art: Although Manufacturer Disclosure Statements for Medical Device Security (MDS2, Version 2019) offer relevant information, they are rarely integrated into cybersecurity workflows. Existing studies are limited in scope and lack scalable methodologies for systematic evaluation. Concept: This study analyzed 209 MDS2 documents and 161 security white papers to extract structured information on ports, protocols, and protective measures. Over 52,000 question–answer pairs were converted into a machine-readable format using customized parsing and validation routines. The aim was to establish whether this dataset could inform risk assessments and future applications involving Large Language Models (LLMs). Implementation: The analysis revealed 367 distinct ports, including common protocols such as HTTPS (443), DICOM (104), and RDP (3389), as well as vendor-specific proprietary ports. Approximately 40% of the devices used over 20 ports, indicating a broad attack surface. OCR errors and inconsistent formatting required manual corrections. A consolidated dataset was developed to support clustering, comparison across vendors and versions, and preparation for downstream LLM use, particularly via structured SBOM and configuration data. Lessons learned: Although no model training was conducted, the structured dataset can support AI-based OT security workflows. The findings highlight the critical need for up-to-date, machine-readable manufacturer data in standardized formats and schemas. Such information could greatly enhance the automation, comparability, and scalability of hospital cybersecurity measures.
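As a toy illustration of the conversion into a machine-readable format, the sketch below parses MDS2-style question-answer rows into a structured record; real MDS2 documents require OCR and vendor-specific parsing, and the field names here are assumptions.

```python
# Sketch: turn MDS2-style question/answer rows into a structured record.
import csv, io

raw = io.StringIO(
    "question_id,question,answer\n"
    "NWAD-1,Can the device be configured to use HTTPS (port 443)?,Yes\n"
)

device_profile = {}
for row in csv.DictReader(raw):
    device_profile[row["question_id"]] = {
        "question": row["question"],
        "answer": row["answer"].strip().lower() in ("yes", "y"),
    }
print(device_profile)
```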
The threat posed by cyber attacks on hospitals is growing, and patient lives are at risk. One significant source of potential vulnerabilities is medical cyber-physical systems (MCPS). Detecting intrusions in this environment faces challenges different from those in other domains, mainly due to the heterogeneity of devices, the diversity of connectivity types, and the variety of terminology. To summarize existing results, we conducted a structured literature review (SLR) following the guidelines of Kitchenham et al. for SLRs in software engineering. We developed six research questions regarding detection approach, detection location, included features, adversarial focus, utilized datasets, and intrusion prevention. We identified that most researchers focused on an anomaly-based detection approach at the network layer. The primary focus was on the detection of malicious insiders. While several researchers used publicly available datasets for training and testing their algorithms, the lack of suitable datasets resulted in the development of testbeds consisting of various medical devices. Based on the results, we formulated five future research topics. First, the special conditions of hospital networks, the MCPS deployed within them, and the contrasts to other IT and OT environments should be examined. Thereupon, MCPS-specific datasets should be created that allow researchers to address the health domain's unique requirements and possibilities. At the same time, endeavors aimed at standardization in this area should be supported and expanded. Moreover, the use of medical context for attack detection should be further explored. Last but not least, efforts for MCPS-tailored intrusion prevention should be intensified. This way, the emerging threat landscape can be addressed, IT security in hospitals can be improved, and patient health can be protected.
Digital platforms can grow by motivating users to explore new ways to use a wider range of affiliated products and services. This work explores the power of IT Identity to motivate such innovative use, through identity's ability to intrinsically motivate behavior. Data from 209 Amazon.com users indicates that IT Identity may cause Trying to Innovate with an IT, mediated by Self-Esteem.
Assessing Cybersecurity of Internet-Facing Medical IT Systems in Germany & Spain Using OSINT Tools
(2025)
This paper investigates cybersecurity threats in medical IT (Information Technology) systems exposed to the Internet. To that end, we develop a methodology and build a data processing pipeline that gathers data from different OSINT (Open Source Intelligence) sources and processes it to obtain relevant cybersecurity metrics. To validate its operation and usefulness, we apply it to two countries, Germany and Spain, allowing us to study the main threats that affect medical IT systems in these countries. Our initial findings reveal that 20% of German hosts and 15% of Spanish hosts tagged as medical devices have at least one CVE (Common Vulnerabilities and Exposures) with a CVSS (Common Vulnerability Scoring System) score graded as critical (i.e., value 8 or greater). Moreover, we found that 74% of CVEs found in German hosts date from earlier than 2020, whereas for Spanish hosts the percentage is 60%. This indicates that medical IT systems exposed to the Internet are seldom updated, which further increases their exposure to cyberthreats. Based on these initial findings, we conclude the paper by providing some insights on how to improve the cybersecurity of these systems.
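The critical-CVE metric can be reproduced in miniature as below; the host records are illustrative stand-ins for Shodan/Censys-style OSINT results, using the paper's CVSS >= 8 threshold.

```python
# Sketch: share of hosts with at least one critical CVE (CVSS >= 8).
hosts = [
    {"ip": "198.51.100.1", "cves": [{"id": "CVE-2019-0708", "cvss": 9.8}]},
    {"ip": "198.51.100.2", "cves": [{"id": "CVE-2021-0001", "cvss": 5.0}]},
]

critical = [h for h in hosts if any(c["cvss"] >= 8 for c in h["cves"])]
print(f"{len(critical) / len(hosts):.0%} of hosts have a critical CVE")
```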
Environmental monitoring systems often operate continuously, measuring various parameters, including carbon dioxide levels (CO2), relative humidity (RH), temperature (T), and other factors that affect environmental conditions. Such systems are often referred to as smart systems because they can autonomously monitor and respond to environmental conditions and can be integrated both indoors and outdoors to detect, for example, structural anomalies. However, these systems typically have high energy consumption, data overload, and large equipment sizes, which makes them difficult to install in constrained spaces. Therefore, three challenges remain unresolved: efficient energy use, accurate data measurement, and compact installation. To address these limitations, this study proposes a two-to-one threshold sampling approach, in which the CO2 measurement is activated when the specified T and RH change thresholds are exceeded. This event-driven method avoids redundant data collection, minimizes power consumption, and is suitable for resource-constrained embedded systems. The proposed approach was implemented on a low-power, small-form-factor, custom-built multivariate sensor based on the PIC16LF19156 microcontroller. A commercial monitoring system and sensor modules based on the Arduino Uno were used for comparison. By activating the CO2 sensor only at key points in the T and RH signals, the number of CO2 measurements was significantly reduced without loss of essential signal characteristics. Signal reconstruction from the reduced points demonstrated high accuracy, with a mean absolute error (MAE) of 0.0089 and a root mean squared error (RMSE) of 0.0117. Despite reducing the number of CO2 measurements by approximately 41.9%, the essential characteristics of the signal were preserved, highlighting the efficiency of the proposed approach. Despite its effectiveness in controlled conditions (in buildings, indoors), environmental factors such as the presence of people, ventilation systems, and room layout can significantly alter the dynamics of CO2 concentrations, which may limit the applicability of this approach. Future work will focus on adaptive threshold mechanisms and context-dependent models that can adjust to changing conditions. This will expand the scope of application of the proposed two-to-one sampling technique to various practical situations.
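A minimal sketch of the two-to-one threshold sampling logic, with assumed (not the paper's) threshold values:

```python
# Sketch: CO2 is measured only when the change in T or RH since the
# last trigger exceeds its threshold. Threshold values are assumptions.
T_THRESH, RH_THRESH = 0.5, 2.0   # degC, %RH

def event_driven_sampler(samples):
    """samples: iterable of (t, rh, read_co2) tuples."""
    last_t = last_rh = None
    for t, rh, read_co2 in samples:
        if (last_t is None or abs(t - last_t) > T_THRESH
                or abs(rh - last_rh) > RH_THRESH):
            last_t, last_rh = t, rh
            yield read_co2()     # only now is the power-hungry CO2 sensor used

readings = [(21.0, 40.0, lambda: 420), (21.1, 40.5, lambda: 421),
            (21.8, 40.6, lambda: 430)]
print(list(event_driven_sampler(readings)))  # first and third sample trigger
```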
This study investigated the use of a semi-automated, Retrieval-Augmented Generation (RAG)-based multi-agent architecture to analyze security-relevant data and assemble specialized exploitation paths targeting medical devices. The input dataset comprised device-specific sources, namely, the Manufacturer Disclosure Statement for Medical Device Security (MDS2) documents and Software Bills of Materials (SBOMs), enriched with public vulnerability databases, including Common Vulnerabilities and Exposures (CVE), Known Exploited Vulnerabilities (KEV), and Metasploit exploit records. The objective was to assess whether a modular, Large Language Model (LLM)-driven agent system could autonomously correlate device metadata with known vulnerabilities and existing exploit information to support structured threat modeling. The architecture follows a static RAG design based on predefined prompts and fixed retrieval logic, without autonomous agent planning or dynamic query adaptation. The developed Vulnerability Intelligence for Threat Analysis in Medical Security (VITAMedSec) system operates under human-prompted supervision and successfully synthesizes actionable insights and exploitation paths without requiring manual step-by-step input during execution. Although technically coherent results were obtained under controlled conditions, real-world validation remains a critical avenue for future research. This study further discusses the dual-use implications of such an agent-based framework, its relevance to patient safety in medical device cybersecurity, and the broader applicability of the proposed architecture to other critical infrastructure sectors. These findings emphasize both the technical potential and ethical responsibility for applying semi-automated AI workflows in medical cybersecurity contexts.
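The correlation step at the heart of the system can be sketched as a join between SBOM components and vulnerability records; all records below are illustrative, and the actual VITAMedSec system wraps such lookups in an LLM-driven RAG pipeline.

```python
# Sketch: match SBOM components against CVE and KEV records to flag
# candidate exploitation paths. All identifiers are illustrative.
sbom = [{"name": "examplelib", "version": "1.0.2"}]
cves = [{"id": "CVE-2099-0001", "product": "examplelib",
         "affected": ["1.0.1", "1.0.2"]}]
kev = {"CVE-2099-0001"}  # known exploited vulnerabilities

for comp in sbom:
    for cve in cves:
        if cve["product"] == comp["name"] and comp["version"] in cve["affected"]:
            status = "known-exploited" if cve["id"] in kev else "theoretical"
            print(f'{comp["name"]} {comp["version"]}: {cve["id"]} ({status})')
```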
In recent years, many studies have shown that light pollution adversely affects wildlife, ecosystems, and human
well-being. To assess and mitigate these impacts, it is crucial that measurements of night sky quality are
reliable and comparable across sites and instruments. However, the lack of standardised night sky brightness
metrology and the use of a wide variety of measurement instruments with varying spectral responsivity and
field-specific measurement units hinder meaningful comparison. We collected night sky spectra from 44 nights
at dark locations (existing and proposed dark sky parks). Based on this observational dataset, we created
a larger random set of spectra. These data served to fit conversion parameters for a wide variety of units.
We demonstrate that RGB cameras, when used as multichannel measuring devices, enable the retrieval of
measurements that facilitate conversions between different units. Furthermore, even airglow can be quantified
from a given measurement, enabling the determination of oxygen and sodium emission line contributions.
Since this contribution is not negligible, quantitative measurements of its magnitude are crucial for accurately
assessing light pollution at dark-sky sites. Using our spectral measurement database, we constructed the
most probable transformation from the cameras' R, G, and B channel dsu values to other units, such as the astronomical Bessel V-band magnitudes. The unit conversion formulas provided in this paper are valid for mildly polluted sites (existing and proposed dark sky places), in the 21–22 mag_V/arcsec² range.
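A toy version of the calibration idea, fitting channel weights by least squares on synthetic data that stands in for the spectra-derived calibration set:

```python
# Sketch: fit a conversion from camera R, G, B values (dsu) to V-band
# surface brightness via least squares. Data and weights are synthetic.
import numpy as np

rng = np.random.default_rng(2)
rgb = rng.uniform(0.1, 1.0, size=(200, 3))                 # dsu per channel
true_coeff = np.array([0.2, 0.7, 0.1])                     # assumed mixing
v_mag = -2.5 * np.log10(rgb @ true_coeff) + 12.0 + 0.02 * rng.normal(size=200)

# linear model in flux space: flux_V ~ a*R + b*G + c*B
flux_v = 10 ** (-0.4 * (v_mag - 12.0))
coeff, *_ = np.linalg.lstsq(rgb, flux_v, rcond=None)
print("fitted channel weights:", np.round(coeff, 3))  # recovers ~[0.2, 0.7, 0.1]
```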
Large Language Models (LLMs) demonstrate strong performance on different language tasks but tend to hallucinate, i.e., generate plausible but factually incorrect outputs. Recently, several approaches that integrate Knowledge Graphs (KGs) into LLM inference have been published to reduce hallucinations. This paper presents a systematic literature review (SLR) of such approaches. Following established SLR methodology, we identified relevant work by systematically searching several academic online libraries and applying a selection process. Nine publications were chosen for in-depth analysis.
Our synthesis reveals differences and similarities in how the KG is accessed and traversed and how the context is finally assembled. KG integration can significantly improve LLM performance on benchmark datasets and, in addition to mitigating hallucinations, can enhance reasoning capabilities, explainability, and access to domain-specific knowledge. We also point out current limitations and outline directions for future work.
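The common retrieve-then-assemble pattern identified across the reviewed systems can be sketched as follows; the toy KG and string matching are illustrative, and the surveyed approaches differ precisely in how access, traversal, and context assembly are done.

```python
# Sketch: retrieve KG facts for entities mentioned in the question,
# then assemble them into the LLM prompt context.
knowledge_graph = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
}

def retrieve_facts(question: str):
    return [t for t in knowledge_graph if t[0].lower() in question.lower()]

def build_prompt(question: str) -> str:
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in retrieve_facts(question))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("Where was Marie Curie born?"))
```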
This thesis develops and implements automated response mechanisms for the network isolation of compromised endpoints using the open-source platform Wazuh. Two approaches are implemented and compared as proofs of concept: host-based isolation by disabling the local network interface (Windows/Linux), and a network-based method that centrally shuts down the device's switch port. Validation in a test environment shows that the host-based method enables a fast response, while the network-based approach is more robust against manipulation on the host.
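A sketch of the host-based isolation action of the kind such an active response could trigger is shown below; the interface names are assumptions, the OS commands are standard Windows/Linux networking commands, and a real deployment must preserve a management channel.

```python
# Sketch: disable the local network interface, as a Wazuh active
# response might do. Interface names are assumptions.
import platform, subprocess

def isolate_host(interface: str = "eth0") -> None:
    if platform.system() == "Windows":
        cmd = ["netsh", "interface", "set", "interface", interface, "admin=disable"]
    else:
        cmd = ["ip", "link", "set", "dev", interface, "down"]
    subprocess.run(cmd, check=True)  # requires admin/root privileges

# isolate_host("Ethernet")  # uncomment on a test machine only
```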
The aim of this thesis is to confirm the hypothesis that a mobile incident-response network sensor can be installed in a previously unknown network environment within a defined time frame and can decisively support forensic analysis. To this end, requirements for the mobile network sensor were gathered through two expert interviews; confirming the hypothesis is tied to meeting these requirements. To implement the requirements, software and hardware components were selected that together form the network sensor. The software evaluation showed that the platform solution Security Onion, owing to its large number of integrated software components, extensive documentation, and architecture, is better suited than Clear NDR to meet the project requirements and thus confirm the hypothesis. The hypothesis was successfully confirmed in three trials. The thesis thus provides an important foundation for implementing a mobile network sensor. With further trials and the hardware recommendations presented in this thesis, a production-ready mobile network sensor can be put into operation in the future.
Recently, an increasing number of IT security incidents involving malware that makes use of hidden and steganographic channels for malicious communication (a.k.a. "stegomalware") can be observed in the wild. In particular, the use of images to hide malicious code is rising. In light of this shift, this paper proposes a new model that aims to help security professionals identify and analyze incidents involving steganographic malware. The model focuses on practical aspects of the steganalysis of communication data in order to establish links to previous code analysis knowledge. The model features two distinct roles that interact with a knowledge base, which stores malware features and helps build a context for the incident. For evaluation, two image steganography malware types are chosen from popular databases (Malpedia and MITRE ATT&CK®) and analyzed in multiple steps, including steganalysis and code analysis. It is shown conceptually how the extracted features can be stored in a knowledge base for later use to identify stegomalware from communication data without the need for a thorough code analysis. This makes it possible to uncover previously hidden meta-information about the examined malicious programs and to enrich the incident's forensic context traces, enabling thorough forensic insights, including attribution and improved preventive security measures in the future.
Light pollution is an emerging ecological threat. To mitigate its negative consequences, creative inter- and transdisciplinary solutions and societal interactions are needed. To this end, we introduce nocturnal umbrella species representative of light-sensitive biodiversity whose protection will safeguard vital ecosystem services and a wide range of co-occurring species.
Dementia, marked by cognitive decline, significantly impacts daily life. With global prevalence rising, traditional treatments manage symptoms but have side effects and offer no cure. Non-pharmacological interventions, like serious games, are gaining importance. This study assesses the feasibility and benefits of serious games for people with mild to moderate dementia over a 10-week intervention. Sixty-one patients were recruited, with 35 completing the study. The intervention included six games focusing on physical and cognitive training. Outcome measures were motor function, cognitive assessments, quality of life, and depression. Results showed significant improvements in dynamic balance (p = .013) but no significant changes in other measures. The findings suggest that serious games are feasible and can improve motor functions like balance. However, short intervention periods may limit their impact on cognitive function and quality of life. Longer interventions and personalized game designs are recommended for greater benefits.
Light pollution poses significant ecological challenges for nocturnal animals reliant on natural light for migration, orientation, and circadian rhythms. The physiological effects of abrupt exposure to artificial light at night (ALAN) on migratory fish, such as the light experienced passing near illuminated infrastructures, remain poorly understood. This study investigates the physiological responses of brown trout (Salmo trutta) smolts to low-intensity (0.02 lx) and short-term (30 s) ALAN, simulating nocturnal migration light conditions near illuminated bridges. To evaluate the influence of social dynamics, trout were tested individually (solitary) or in groups of six. Using continuous cardiac monitoring with data storage tags, alongside analyses of oxidative stress markers and adenylate kinase (AK) activity in the heart, we identified distinct patterns of physiological responses. Solitary fish exhibited significant heart rate variability (HRV) increases following repeated ALAN exposure, suggesting impaired physiological regulation under repeated ALAN exposure. In contrast, trout in groups displayed consistently lower HRV over the entire 90-min experiment, implying that social dynamics likely influenced a sustained oxidative stress response, corroborated by increased AK activity. Oxidative stress markers further reflected social effects, with significant upregulation of key antioxidant enzymes (sod1, sod2, gpx1, gpx4) and elevated lipid peroxidation, identifying lipids as primary oxidative targets. The observed divergence between superoxide dismutase (SOD) activity and sod gene expression suggests adaptive post-transcriptional regulation to maintain redox balance under combined environmental and social stress. These findings reveal that social dynamics under ALAN can amplify physiological stress, potentially affecting migratory outcomes.
We introduce STAResNet, a ResNet architecture in Spacetime Algebra (STA), to solve Maxwell's partial differential equations (PDEs). Recently, networks in Geometric Algebra (GA) have been demonstrated to be an asset for truly geometric machine learning. In [1], GA networks were employed for the first time to solve PDEs, demonstrating increased accuracy over real-valued networks. In this work we solve Maxwell's PDEs both in GA and STA, employing the same ResNet architecture and dataset, to discuss the impact that the choice of the right algebra has on the accuracy of GA networks. Our study of STAResNet shows how the correct geometric embedding in Clifford networks gives a mean square error (MSE) between ground truth and estimated fields up to 2.6 times lower than that obtained with a standard Clifford ResNet, with 6 times fewer trainable parameters. STAResNet demonstrates consistently lower MSE and higher correlation regardless of scenario. The scenarios tested vary in: the sampling period of the dataset; the presence of obstacles in either seen or unseen configurations; the number of channels in the ResNet architecture; the number of rollout steps; and whether the field is in 2D or 3D space. This demonstrates that choosing the right algebra in Clifford networks is a crucial factor for more compact, accurate, descriptive, and better generalising pipelines.
CGAPoseNet+GCAN: A Geometric Clifford Algebra Network for Geometry-aware Camera Pose Regression
(2024)
We introduce CGAPoseNet+GCAN, which enhances CGAPoseNet, an architecture for camera pose regression, with a Geometric Clifford Algebra Network (GCAN). With the addition of the GCAN we obtain a geometry-aware pipeline for camera pose regression from RGB images only. CGAPoseNet employs Clifford Geometric Algebra to unify quaternions and translation vectors into a single mathematical object, the motor, which can be used to uniquely describe camera poses. CGAPoseNet solves the issue of balancing rotation and translation components in the loss function and obtains results comparable to other approaches without the need for expensive tuning of the loss function or additional information about the scene, such as 3D point clouds, which might not always be available. CGAPoseNet, however, like several approaches in the literature, only learns to predict motor coefficients; it is unaware of the mathematical space in which its predictions sit and of their geometrical meaning. Leveraging recent advances in Geometric Deep Learning, we modify CGAPoseNet with a GCAN: proposals of possible motor coefficients associated with a camera frame are obtained from the InceptionV3 backbone, and the GCAN downsamples them to a single motor through a sequence of layers that work in G(4,0). The network is hence geometry-aware: it has multivector-valued inputs, weights, and biases, and it preserves the grade of the objects it receives as input. CGAPoseNet+GCAN has almost 4 million fewer trainable parameters, reduces the average rotation error by 41%, and reduces the average translation error by 8.8% compared to CGAPoseNet. Similarly, it reduces rotation and translation errors by 32.6% and 19.9%, respectively, compared to the best-performing PoseNet strategy. CGAPoseNet+GCAN reaches state-of-the-art results on 13 commonly employed datasets. To the best of our knowledge, this is the first experiment applying GCANs to the problem of camera pose regression.
Cortical actomyosin flows, among other mechanisms, scale up spontaneous symmetry breaking and thus play pivotal roles in cell differentiation, division, and motility. According to many model systems, myosin motor-induced local contractions of initially isotropic actomyosin cortices are nucleation points for generating cortical flows. However, the positive feedback mechanisms by which spontaneous contractions can be amplified towards large-scale directed flows remain mostly speculative. To investigate such a process on spherical surfaces, we reconstituted and confined initially isotropic minimal actomyosin cortices to the interfaces of emulsion droplets. The presence of ATP leads to myosin-induced local contractions that self-organize and amplify into directed large-scale actomyosin flows. By combining our experiments with theory, we found that the feedback mechanism leading to a coordinated directional motion of actomyosin clusters can be described as asymmetric cluster vibrations, caused by intrinsic non-isotropic ATP consumption with spatial confinement. We identified fingerprints of vibrational states as the basis of directed motions by tracking individual actomyosin clusters. These vibrations may represent a generic key driver of directed actomyosin flows under spatial confinement in vitro and in living systems.
We employ Clifford Group Equivariant Neural Network (CGENN) layers to predict protein coordinates in a Protein Structure Prediction (PSP) pipeline. PSP is the estimation of the 3D structure of a protein, generally through deep learning architectures. Information about the geometry of the protein chain has been proven to be crucial for accurate predictions of 3D structures. However, this information is usually flattened into machine learning features that are not representative of the geometric nature of the problem. Leveraging recent advances in geometric deep learning, we redesign the 3D projector part of a PSP architecture with the addition of CGENN layers. CGENNs can achieve better generalization and robustness when dealing with data that show rotational or translational invariance, such as protein coordinates, which are independent of the chosen reference frame. CGENN inputs, outputs, weights and biases are objects in the Geometric Algebra of 3D Euclidean space, i.e. G(3,0,0), and hence are interpretable from a geometrical perspective. We test 6 approaches to PSP and show that CGENN layers increase the accuracy in terms of GDT scores by up to 2.1%, with fewer trainable parameters compared to linear layers, and give a clear geometric interpretation of their outputs.
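The key property such layers guarantee can be checked in a few lines: a layer that mixes geometric inputs using only scalar weights commutes with rotations of the reference frame. The sketch below is a toy demonstration of this equivariance principle, not the CGENN architecture itself.

```python
import numpy as np

# Toy check of the equivariance property CGENN layers are built around:
# a layer that mixes geometric objects only with scalar weights commutes
# with rotations, f(R x) = R f(x).

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 input vectors (e.g. atom coordinates)
W = rng.normal(size=(2, 5))          # scalar mixing weights -> 2 output vectors

def layer(x):
    return W @ x                     # linear mix of vectors, scalars only

# Random proper rotation via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                    # ensure det(Q) = +1

lhs = layer(X @ Q.T)                 # rotate inputs, then apply layer
rhs = layer(X) @ Q.T                 # apply layer, then rotate outputs
print(np.allclose(lhs, rhs))         # True: the layer is equivariant
```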
GA-ReLU: an activation function for Geometric Algebra Networks applied to 2D Navier-Stokes PDEs
(2024)
Many differential equations describing physical phenomena are intrinsically geometric in nature. It has been demonstrated that this geometric structure of data can be captured effectively by networks sitting in Geometric Algebra (GA) that work with multivectors, making them suitable candidates for solving differential equations. GA networks, however, are still largely uncharted territory. In this paper we focus on non-linearities, since applying them to multivectors is not a trivial task: they are generally applied in a point-wise fashion over each real-valued component of a multivector. This approach discards interactions between different elements of the multivector input and compromises the geometric nature of GA networks. To bridge this gap, we propose GA-ReLU, a GA approach to the rectified linear unit (ReLU), and show how it can improve the solution of Navier-Stokes PDEs.
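To make the issue concrete, the sketch below contrasts component-wise ReLU on a multivector's coefficients with a variant that gates the multivector as a whole. The gated version is one plausible multivector-aware construction shown for illustration; it is not necessarily the GA-ReLU defined in the paper.

```python
import numpy as np

# A multivector in G(2,0) as coefficients over the basis (1, e1, e2, e12).
# Component-wise ReLU treats each coefficient independently and destroys
# interactions between blades; a multivector-aware variant instead gates
# the whole multivector by a scalar function of it. The gated version is
# an illustrative construction, not the paper's definition.

def relu_componentwise(mv):
    return np.maximum(mv, 0.0)             # standard, grade-agnostic

def relu_gated(mv):
    scalar_part = mv[..., 0]               # coefficient of the blade "1"
    gate = (scalar_part > 0).astype(mv.dtype)
    return mv * gate[..., None]            # keep or drop the multivector whole

mv = np.array([[-0.3, 1.2, -0.7, 0.5],
               [ 0.8, -0.4, 0.6, -0.9]])
print(relu_componentwise(mv))
print(relu_gated(mv))                      # preserves relative blade structure
```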
Climate change, but also geopolitical circumstances, are moving topics such as energy efficiency and renewable energies more and more into the focus of the population, economy, and politics. As a result, the will to optimize new and existing energy systems extends from private individuals to companies and even entire communities. This work describes the development and usage of a new software called FINEconcepts, which creates a digital twin of an energy system. This virtual model can then be used to optimize the energy system based on annual costs, CO2 emissions or other relevant criteria such as self-sufficiency. Because all system components, including renewable technologies, can be added as building blocks with chosen but changeable parameters, the software allows users to explore technologies that were previously considered too costly, irrelevant, or unrealistic, and to develop interest in and understanding of them. Projects implemented in small and large companies as well as in residential areas have shown that using FINEconcepts leads not only to more efficient energy systems through an increased use of renewable energy, but also to greater knowledge and understanding of energy matters. Besides economics, ecology and security, understanding is an equally important factor in achieving a sustainable energy supply.
The products of digital entrepreneurs are highly innovative, and their business models contribute to the prosperity and further development of the economy and society. However, studies indicate that most startups fail, particularly during the early stages of their business journey. Prototyping, as part of Lean Startup or Business Model Testing approaches, can assist digital early-stage startups in navigating uncertainty and achieving successful product launches. However, these methods are applied very individually, and there is little empirical research on best practices. We therefore conducted 65 explorative expert interviews and asked successful startups about their prototyping practices. Our results include learnings on the prototyping process and the testing format, the role of the founding team during prototyping, the customer focus and the role of networks. Our study adds important details to the theory and practice of the innovation and prototyping processes of digital early-stage startups. Our results offer actionable advice and guidance to any current or potential entrepreneur, but especially to first-time founders and less experienced executives in early-stage startups. Additionally, our contribution enhances the theoretical understanding of the Lean Startup approach and of prototyping practices.
The project semPart – Semantics of the Musical Score tries to clarify the semantics of musical notation by applying methods from informatics. This is done by remodelling, i. e., constructing small mathematical models that mimic the mentally and culturally determined processes taking place when decoding isolated parameters in musical notation. The first important result is that there can never be one single ‘correct’ definition of semantics in any area but rather a catalogue of many theoretically possible variants encountered in practice. Through their remodelling, these can be precisely identified and named; their totality constitutes a classification grid applicable to styles, corpora, single scores and digital tools such as editors or encoding standards. This grid and the remodels are of particular political importance in promoting a public discussion on notation systems and their characteristics, given that the increasing use of digital processing systems threatens to over-shape and narrow down notational practice.
Research data management (RDM) is becoming increasingly important in research. In the project IN-FDM-BB, eight higher education institutions are therefore working on institutionalising and sustainably consolidating research data management (RDM) in Brandenburg.
As part of the local competence building and the institutionalisation of RDM within the project, work package AP 1 covers, among other things, the "development of institution-specific information materials (e.g. flyers, RDM guidelines, FAQs)" and the "creation and/or updating of a local RDM website".
The workshop report Concept for Information Materials and RDM Website (W 1.1.1) is the first report submitted jointly by all eight higher education institutions participating in the IN-FDM-BB project, and the first to deal specifically with the local institutionalisation of RDM at these institutions.
In industrialised countries, one in ten patients suffers harm during hospitalization. Critical Incident Reporting Systems (CIRS) aim to minimize this by learning from errors and identifying potential risks. However, a lack of interoperability among the 16 CIRS in Germany hampers their effectiveness.
Competing with dominant players in the digital platform (DP) economy is an increasingly complex and expensive endeavor for alternative digital platform (ADP) providers trying to disrupt established platform ecosystems. Nevertheless, ADPs position themselves deliberately in the competitive space of large competitors, targeting customer groups whose values and mindsets differ from those of mainstream DPs. Although many examples can be found in practice, the variety of ADPs across different DP ecosystems and industries, especially with regard to strategic objectives, directions and characteristics, has not yet been researched in depth. Based on a systematic, practice-based artifact review, this paper analyses and compares 105 ADPs in the competitive space of dominant DP ecosystem players. The study utilizes the theory of Corporate Aikido as an analytical framework, mapping the strategic elements of overthrowing established value propositions and turning competitors' strengths into weaknesses. Our findings indicate that privacy and security awareness, empowerment and inclusivity, premium quality and curation, open source and free access, uncensored, transparent and decentralized platforms, as well as ecological and climate-friendly offers shape the landscape of alternative approaches in today's DP economy. The results are subsequently discussed, yielding managerial implications and future opportunities for DP research.
This paper presents a coaching assistant for network operator processes based on a Retrieval-Augmented Generation (RAG) system leveraging open-source Large Language Models (LLMs) as well as embedding models. The system addresses challenges in employee onboarding and training, particularly in the context of increased customer contact due to more complex and extensive processes. Our approach incorporates domain-specific knowledge bases to generate precise, context-aware recommendations while mitigating LLM hallucination. We introduce our system architecture, which runs all components on-premise in our own data center, ensuring data security and control over process knowledge. We also describe requirements for the underlying knowledge documents and their impact on the quality of the assistant's answers. Our system aims to improve onboarding accuracy and speed while reducing the workload of senior employees.
The results of our study show that realizing a coaching assistant for German network operators is feasible when performance, correctness, integration and locality are addressed. However, current results regarding accuracy do not yet meet the requirements for productive use.
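As a minimal sketch of the retrieval step such a RAG system performs (the model name and documents below are placeholders, not the components used in this work):

```python
from sentence_transformers import SentenceTransformer, util

# Minimal RAG retrieval sketch: embed the process documents once, embed
# the employee's question, and hand the best-matching passage to the LLM
# as grounding context. Model and documents are illustrative placeholders.

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "To open a trouble ticket, first verify the customer line status.",
    "Escalate outages affecting more than 50 customers to tier 2.",
    "Port activations require a signed work order in the provisioning tool.",
]
doc_emb = model.encode(docs, convert_to_tensor=True)

question = "When do I escalate an outage?"
q_emb = model.encode(question, convert_to_tensor=True)

scores = util.cos_sim(q_emb, doc_emb)[0]     # cosine similarity to each doc
best = int(scores.argmax())
context = docs[best]                         # passed to the LLM prompt as context
print(best, float(scores[best]), context)
```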
Effectiveness of massage chair and classic massage in recovery from physical exertion: a pilot study
(2023)
Quick and cost-effective recovery is foundational to high-quality training and good competition results in today’s sports.
The aim of the research was to elucidate the effects of manual massage and massage-chair treatment on the biomechanical parameters of the lower-limb and back muscles, on Pain Pressure Thresholds (PPT), and on subjectively perceived fatigue.
A total of 32 female recreational athletes (18–50 years old) were assigned to a hand-massage, massage-chair, or lying-down group. Muscle biomechanical properties (MyotonPro), PPT (Wagner Instruments) and subjectively perceived fatigue (VAS scale) were measured before and after fatigue tests and treatment. The recovery procedure and subjective satisfaction with the treatment were rated on a Likert scale.
Changes in the median value of m. rectus femoris and m. gastrocnemius stiffness with treatment showed that hand massage could be more effective in reducing stiffness, as compared to chair massage.
Hand massage may have benefits for recovery from physical exertion, but due to the individuality of subjects, detailed methodological studies are needed to evaluate the effects of massage chair vs. hand massage.
The study examines the effectiveness of microlearning in developing the capacity to address climate change and adapt to environmental challenges. Conducted as an online field experiment with 140 participants, the study focused on the impacts of smartphone use as an illustrative learning domain for education in climate change-related areas. Using a pre–post research design and simultaneous equation models, the study found improvements in knowledge retention, with a median increase of 38%. The results indicated that the brief microlearning units also promoted sustainability action competencies, such as confidence in one's influence and willingness to act sustainably. This suggests that higher-order learning processes were also triggered, although to a much lesser extent. Furthermore, learner satisfaction was identified as a mediating variable for these positive outcomes. The study concludes that short bursts of knowledge delivered through microlearning activities can be used alongside traditional training methods to build the critical skills needed for initiatives such as the EU Green Deal.
Artificial light at night (ALAN) contributes to the globally observed insect decline. ALAN attracts nocturnal insects from their native ecosystems and disturbs their functions in the food web. Road lights in this context are ubiquitous and relevant ALAN sources that are often not considered in conservation approaches. In a previous study we showed that shielded LED road lights are suited to be part of conservation measures by effectively reducing the attraction of nocturnal insects. Here we show that this positive effect holds true for parasitoid wasps in an experimental BACI design (Before-After-Control-Impact). Combining morphological with molecular and phylogenetic analyses, we identified 106 individuals (62 morphotypes) of a minimum of 45 genera out of 13 Hymenoptera families. We were able to identify 21 species, 11 of which are newly reported in Southern Germany (Baden-Württemberg). Further combining knowledge on life history and host appearance from our data and the literature, we discuss potential impacts of ALAN ranging from an influence on nocturnal pollination via parasitoid pressure on moth species and biological control of invasive pest species to tritrophic interactions between primary and secondary parasitoids. We conclude that shielded LED road lights will reduce the ecological impact of ALAN on parasitoid wasps in a large and undescribed number of taxa with different host associations, likely affecting associated ecosystem functions such as biological control.
This paper presents a low-cost hardware platform for outdoor robots that is suitable for education, industrial prototyping and private use. The choice of components is discussed, including the platform, sensors and controller as well as GPS and image-processing hardware. Furthermore, a software approach is proposed that allows students and researchers to easily implement their own algorithms for localization, navigation and the tasks to be fulfilled. Several robots can be integrated into a framework, called the BOSPORUS network, which connects various hardware platforms. Together with other components, they form an intelligent network for gathering sensor and image data, sensor data fusion, navigation and control of mobile platforms. The architecture of a reference platform on the campus of the Brandenburg University of Applied Sciences is presented and evaluated.
As cyber threats evolve continuously, defensive measures must evolve with them. One concept concerned with such measures is known as Zero Trust. This thesis analyses the Zero Trust concept and its practical implementation. The goal is to summarise the current state of development of Zero Trust environments and, from the insights gained, to design and ultimately implement a demonstrator, using open-source components exclusively.
To this end, current sources on Zero Trust, such as NIST, CISA, the BSI and scientific publications, were analysed and their contents summarised. From these contents, a concept for building a Zero Trust demonstrator was then developed. The practical implementation of the demonstrator is described, including the problems encountered along the way. The demonstrator could not be implemented as originally designed; however, it was shown by example what the complete implementation would have looked like.
Advising researchers plays a central role in implementing and establishing research data management (RDM) in everyday research. Many researchers still need support in handling their research data. On the one hand, this may be due to a rather generic level of knowledge about RDM; on the other hand, the RDM landscape is a highly fluid field, with continuously updated or entirely new tools and infrastructures as well as changing requirements placed on researchers, not least by research funders. A central RDM contact point for those seeking advice is indispensable at higher education institutions and is therefore also among the measures of the joint project "IN-FDM-BB – Institutionalisiertes und nachhaltiges Forschungsdatenmanagement in Brandenburg" (institutionalised and sustainable research data management in Brandenburg).
Since the start of the project in October 2022, corresponding RDM contact points have been set up or expanded at the participating Brandenburg higher education institutions in order to provide (initial) advice to researchers. In addition, supporting services, for example for legal and ethical questions, are being designed and established to enable comprehensive advice on specific aspects of RDM as well. Needs-oriented materials and guidelines, likewise part of the project plan, complement the individual advisory services.
This workshop report presents findings and challenges from the pilot phase and outlines solution-oriented approaches to meeting these challenges. It is based on valuable insights into the advisory practice of all higher education institutions participating in the project.
IN-FDM-BB workshop report W 1.2.2: Evaluation of the needs assessment and resulting activities
(2023)
Research data management (RDM) is becoming increasingly important at the higher education institutions in Brandenburg. Building and institutionalising sustainable RDM at these institutions requires as precise an assessment as possible of researchers' current level of knowledge and of their needs regarding the handling of research data. This was the purpose of the needs assessment conducted within the project IN-FDM-BB, funded by the BMBF and the MWFK. All eight state research-performing higher education institutions took part in the Brandenburg-wide survey: four universities (Brandenburgische Technische Universität Cottbus – Senftenberg (BTU), Filmuniversität Babelsberg KONRAD WOLF (FB), Europa-Universität Viadrina Frankfurt (Oder) (EUV) and Universität Potsdam (UP)) and four universities of applied sciences (Fachhochschule Potsdam (FHP), Hochschule für nachhaltige Entwicklung Eberswalde (HNEE), Technische Hochschule Brandenburg (THB) and Technische Hochschule Wildau (THW)). The institutions differ considerably in size and disciplinary profile: the UP has well over 20,000 students and more than 1,900 professors and academic staff, whereas the FB has fewer than 1,000 students and around 140 professors and academic staff. Almost all disciplines are represented across the state, though with different focal points at each institution.
The aim of the needs assessment was to determine researchers' level of knowledge regarding the professional handling of research data at the individual institutions, to identify RDM needs, and to derive possible RDM activities for each institution. The needs assessment was therefore aimed primarily at the researchers of the respective institution.
Digital platform (DP) enterprises have risen to the top of the global economy by inverting traditional business models. They earn money through matchmaking, transaction facilitation, and efficient orchestration of other stakeholders' resources. Small- and medium-sized enterprises (SMEs) play a decisive role in this success story: they offer products and services on many leading DPs and thereby feed platform networks as sellers and suppliers. On the other hand, while DPs enable SMEs to reach new customers and outsource costly managerial tasks, many SMEs struggle with the negative consequences of asymmetric power dynamics and dependency risks in platform business. This paper analyzes 14 cases of SMEs that face power asymmetry as DP value providers, unraveling challenges encountered and coping practices employed in this situation. The findings reveal that power asymmetry between dominant DPs and SMEs creates significant challenges, including exploitative relationships, loss of autonomy, surveillance pressures, and devaluation of SME contributions, while SMEs cope through disintermediation, multi-homing, individual resistance tactics, and striving for more equitable DP models. Moreover, three relationality types are identified: DPs being SMEs' “partners,” “tools,” and “necessary evil.” This article offers a new perspective on boundaries of SME strategies in the DP economy and contributes with empirical insights to the advancing field of entrepreneurial platform research.
The integrated Berry curvature is a geometric property that has dramatic implications for material properties. This study investigates the integrated Berry curvature and other contributions to the anomalous Hall effect in CrGeTe3 as a function of pressure. The anomalous Hall effect is absent in the insulating phase of CrGeTe3 and evolves with pressure in a domelike fashion as pressure is applied. The dome's edges are characterized by Fermi surface deformations, manifested as mixed electron and hole transport. We corroborate the presence of bipolar transport using ab initio calculations, which also predict a nonmonotonic behavior of the Berry curvature as a function of pressure. Quantitative discrepancies between our calculations and experimental results indicate that additional scattering mechanisms, which are also strongly tuned by pressure, contribute to the anomalous Hall effect in CrGeTe3.
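For context, the intrinsic, Berry-curvature contribution to the anomalous Hall conductivity that such ab initio calculations evaluate is the standard Brillouin-zone integral (textbook form, not a formula specific to this study):

```latex
% Intrinsic (Berry-curvature) contribution to the anomalous Hall
% conductivity: f_n is the band occupation, \Omega_n^z the Berry
% curvature of band n.
\sigma_{xy}^{\mathrm{int}} =
  -\frac{e^{2}}{\hbar}\sum_{n}\int_{\mathrm{BZ}}
  \frac{d^{3}k}{(2\pi)^{3}}\, f_{n}(\mathbf{k})\,\Omega_{n}^{z}(\mathbf{k})
```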
Artificial light propagating towards the night sky can be scattered back to Earth and reach ecosystems tens of kilometres away from the original light source. This phenomenon is known as artificial skyglow. Its consequences on freshwaters are largely unknown. In a large-scale lake enclosure experiment, we found that skyglow at levels of 0.06 and 6 lux increased the abundance of anoxygenic aerobic phototrophs and cyanobacteria by 32 (+/- 22) times. An ecosystem metabolome analysis revealed that skyglow increased the production of algal-derived metabolites, which appeared to stimulate heterotrophic activities as well. Furthermore, we found evidence that skyglow decreased the number of bacteria-bacteria interactions. Effects of skyglow were more pronounced at night, suggesting that responses to skyglow can occur on short time scales. Overall, our results call for considering skyglow as a reality of increasing importance for microbial communities and carbon cycling in lake ecosystems.
We present what is perhaps the simplest possible model for computing time-dependent CO2 concentrations c(t) in the atmosphere from various global CO2 emission scenarios. To this end, a single inhomogeneous linear first-order differential equation is derived, whose parameters are computed from the quantitative data of the Global Carbon Project and from Mauna Loa CO2 concentration records. The model is first tested against the period 1960 to 2020, showing comparatively good quantitative agreement with measured data. Second, for two typical IPCC emission scenarios, the model predictions are compared with those of the complex IPCC Earth system climate models, with qualitative agreement in the temporal evolution. Third, results for a selection of new emission scenarios are presented. Notwithstanding some deviations from more complex climate models, our model offers two important advantages for teaching. First, it is very easy for students and talented pupils to use, since the required solution of the differential equation can be programmed with standard spreadsheet software such as Excel. This in turn makes it very easy to modify the temporal course of emission scenarios and to compute the effects of changed inputs within seconds. The model is therefore well suited as an introduction to climate modelling in introductory university lectures on the carbon cycle and climate change. At school, it can be used towards the end of upper secondary education, for example in project-based lessons on sustainability in physics and/or mathematics.
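A forward-Euler sketch of a generic model of this type is shown below; the equation form and all parameter values are illustrative assumptions, not the paper's calibrated Global Carbon Project and Mauna Loa values. The same loop is exactly what a spreadsheet column would compute.

```python
import numpy as np

# Forward-Euler sketch of a generic first-order linear box model:
#   dc/dt = alpha * E(t) - (c - c_eq) / tau
# All values below are illustrative assumptions, not calibrated data.

alpha = 0.47       # ppm per GtC emitted if fully airborne (assumed)
tau   = 50.0       # effective uptake time scale in years (assumed)
c_eq  = 280.0      # pre-industrial equilibrium concentration in ppm

def emissions(year):
    """Toy emission scenario in GtC/yr: constant, then linear phase-out."""
    return 10.0 if year < 2030 else max(0.0, 10.0 - 0.25 * (year - 2030))

c, dt = 420.0, 1.0                     # start value (ppm) and step (years)
for year in np.arange(2024, 2101, dt):
    dcdt = alpha * emissions(year) - (c - c_eq) / tau
    c += dt * dcdt
print(round(c, 1))                     # concentration in 2100, ppm
```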
Coarsening of quasi two-dimensional emulsions formed by islands in free-standing smectic films
(2024)
We study the coarsening behavior of assemblies of islands on smectic A freely suspended films in ISS microgravity experiments. The islands can be regarded as liquid inclusions in a two-dimensional fluid, in analogy to liquid droplets of the discontinuous phase of an emulsion. The coarsening is driven by two processes: predominantly by island coalescence, but to some extent also by Ostwald ripening, whereby large islands grow at the expense of surrounding smaller ones. A peculiarity of this system is that the continuous and the discontinuous phases consist of the same material. We determine the dynamics, analyze the self-similar aging of the island size distribution and discuss characteristic exponents of the mean island growth.
Granular gases are fascinating non-equilibrium systems with interesting features such as spontaneous clustering and non-Gaussian velocity distributions. Mixtures of different components represent a much more natural composition than monodisperse ensembles but have attracted comparatively little attention so far. We present the observation and characterization of a mixture of rod-like particles with different sizes and masses in a drop tower experiment. Kinetic energy decay rates during granular cooling and collision rates were determined, and Haff's law for homogeneous granular cooling was confirmed. In the process, energy equipartition between the mixture components and between individual degrees of freedom is violated. Heavier particles keep a slightly higher average kinetic energy than lighter ones. Experimental results are supported by numerical simulations.
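For reference, Haff's law for homogeneous granular cooling, which the experiment confirms, has the standard form (E_0 the initial kinetic energy, \tau_H the Haff time set by the initial collision rate and the restitution coefficient):

```latex
E(t) = \frac{E_{0}}{\left(1 + t/\tau_{H}\right)^{2}}
```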
A granular gas composed of monodisperse spherical particles was studied in microgravity experiments in a drop tower. Translations and rotations of the particles were extracted from optical video data. Equipartition is violated, the rotational degrees of freedom were excited only to roughly 2/3 of the translational ones. After stopping the mechanical excitation, we observed granular cooling of the ensemble for a period of three times the Haff time, where the kinetic energy dropped to about 5% of its initial value. The cooling rates of all observable degrees of freedom were comparable, and the ratio of rotational and translational kinetic energies fluctuated around a constant value. The distributions of translational and rotational velocity components showed slight but systematic deviations from Gaussians at the start of cooling.
Within the project "SzA4Hosp – Systems for Attack Detection in Medical Care", a survey of German hospitals was conducted to identify the cyber threat intelligence sources they use. This paper presents the 41 sources recorded, characterising them by attributes such as format, content and provisioning model. Finally, the sources are evaluated collectively with respect to the formats in which they provide information. It is shown that the use of natural language in the examined sources is considerably more widespread than the use of structured formats.
The public sector faces considerable challenges that stem from increasing external and internal demands, the need for diverse and complex services, and citizens' lack of satisfaction and trust in public sector organisations (PSOs). An alternative to traditional public service delivery is the co-creation of public services. Data analytics, fueled by the availability of vast amounts of data, including textual data, and by techniques for analyzing them, has immense potential to foster data-driven solutions for the public sector. In this paper, we systematically review the existing literature on the application of Text Analytics (TA) techniques to textual data that can support public service co-creation. In this review, we identify the TA techniques, the public services and the co-creation phase they support, as well as envisioned public values for the stakeholder groups. On the basis of the analysis, we develop a Research Framework that helps to structure the TA-enabled co-creation process in PSOs, increases awareness among public sector organizations and stakeholders of the significant potential of TA in creating value, and provides scholars with some avenues for further research.
When it comes to human resource management (HRM), small and medium-sized enterprises (SMEs) primarily rely on informal approaches. HR practitioners often listen to their gut feeling instead of using more systematic HR practices that are, as research evidence shows, more effective for attracting, motivating and retaining talented staff. This paper elaborates on the question of how SMEs may profit from the design and implementation of evidence-based, systematic HRM instruments. The analysis focuses on the antecedents of the so-called science-practitioner gap in HRM and why it is even more pronounced in SMEs than in larger companies. We conclude that SMEs need HRM instruments that are systematic and evidence-based in their approach but can be used by HR practitioners intuitively, without ex ante training or cumbersome formal procedures. However, designing HRM instruments based on such a "walk-up-and-use" approach means beating new paths. Given the scarce resources for people management in SMEs, such innovative HRM instruments cannot be developed by a single SME alone. We therefore evaluate whether the paradigm of open cooperation, as successfully applied in open source software (OSS) development, can be transferred to the design of HRM instruments that take into account the specific needs of SMEs. First, we conceptualize a general framework for how HR practitioners from SMEs and HR experts from business and academia can jointly develop Open HRM instruments. Then we discuss the social side of Open HRM, in particular why someone should contribute to Open HRM without receiving immediate profits, how to ensure the quality of openly developed HRM instruments, and other governance issues. We conclude that, despite a number of technical and social challenges that have to be overcome, Open HRM has great potential to open up evidence-based HRM for SMEs.
This is the Mission Description Document (MDD) for a future Earth observation satellite mission addressing visible band observations of nighttime lights. The mission driving science applications are for the remote sensing of electricity, energy, and ecological impacts via the observation of artificial light. The MDD covers the mission requirements justification from the high-level scientific objectives and societal benefits to measurement requirements at product level 1b, i.e. radiances at the top of the atmosphere. In providing justification for the individual requirements and their traceability to scientific objectives and societal challenges, we address Scientific Readiness Levels 1 to 3. The MDD is the outcome of the European Space Agency New Earth Observation Mission Ideas project "Night Watch". The Task Reports (TR) from the project are appended to the MDD, and are authored by the same group as the MDD itself.
In modern societies, technology has a high value, as nearly all systems are controlled by information technology. This thesis aims to develop a technical representation of the MITRE ATT&CK matrix to enable faster and more effective responses to cyber attacks. Based on an analysis of common cyber attack models, such as the Diamond Model of Intrusion Analysis and the Lockheed Martin Cyber Kill Chain, the MITRE ATT&CK matrix is used to develop an extended kill chain model. In this new model, MITRE tactics are grouped into five sequential attack phases, allowing for a clearer differentiation between actions taken during an attack. Additionally, one group of MITRE tactics that runs throughout an attack was identified.
Building on this model, two graph-based representations are developed: a simple graph structure and a hierarchical, tree-shaped graph of MITRE tactics and techniques. The following discussion addresses the practical application of the new model, as well as potential further developments, such as integrating AI to analyse and predict existing and potential attack patterns.
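A minimal sketch of such a graph representation is given below; the phase names and groupings are placeholders for illustration and do not reproduce the five-phase mapping developed in the thesis.

```python
import networkx as nx

# Hierarchical, tree-shaped representation: phases group MITRE tactics,
# tactics group techniques, all as one directed graph that tooling can
# traverse. Phase grouping below is illustrative only; the tactic and
# technique IDs are real MITRE ATT&CK identifiers.

G = nx.DiGraph()
G.add_edge("Phase: Gain Foothold", "TA0001 Initial Access")
G.add_edge("Phase: Gain Foothold", "TA0002 Execution")
G.add_edge("TA0001 Initial Access", "T1566 Phishing")
G.add_edge("TA0002 Execution", "T1059 Command and Scripting Interpreter")
G.add_edge("Phase: Expand", "TA0008 Lateral Movement")
G.add_edge("TA0008 Lateral Movement", "T1021 Remote Services")

# All tactics and techniques reachable from one phase, useful for mapping
# observed behaviour back to the stage of an attack:
print(nx.descendants(G, "Phase: Gain Foothold"))
```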
The importance of monitoring the security of IT systems continues to grow as digitalisation advances. Monitoring systems are used to simplify this task: they collect and process data to determine the status of an IT system. One such monitoring system is Wazuh, which is able to monitor "integrity" and "confidentiality" with the help of agents. Wazuh thus covers only two of the three protection goals of IT security. For this reason, this bachelor's thesis investigates how the SIEM and XDR system can be extended with heartbeat monitoring, enabling Wazuh to monitor the third protection goal of IT security: availability. To this end, the Wazuh installation is extended with the Heartbeat daemon of the Elastic Stack. The implementation should be able to receive, store and display the Heartbeat daemon's data and to trigger notifications as soon as the status of a monitored system component changes.
To verify the functionality of this proof of concept, heartbeat monitoring is also implemented in a reference system: the daemon's native environment, the Elastic Stack. The functionality of the Elastic Stack could be reproduced in different ways. One option is to have Wazuh process the Heartbeat output directly; alternatively, the Heartbeat data can be written directly into Wazuh's database. Depending on the version of the Heartbeat daemon, Logstash is required as middleware. With both methods, the data can be displayed in Wazuh's web interface.
Extending Wazuh with heartbeat monitoring enables the software to cover the most relevant protection goals of IT security. The thesis shows which methods can be used to extend the open-source software.
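As a minimal sketch of the "write heartbeat data directly into the database" variant described above: the Wazuh indexer exposes an Elasticsearch-compatible REST API, so a status document can be indexed with a plain HTTP call. The URL, credentials and index name below are placeholders for illustration.

```python
import datetime
import json
import requests

# Index one heartbeat-style status document into an Elasticsearch-
# compatible indexer. Endpoint, credentials and index name are
# placeholders, not the thesis's actual configuration.

doc = {
    "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "monitor": {"name": "web-frontend", "type": "http",
                "status": "down", "duration_ms": 5000},
}

resp = requests.post(
    "https://wazuh-indexer.example:9200/heartbeat-status/_doc",
    auth=("admin", "changeme"),        # placeholder credentials
    headers={"Content-Type": "application/json"},
    data=json.dumps(doc),
    verify=False,                      # lab setup only; use proper TLS in production
)
print(resp.status_code)
```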
This thesis falls within the domain of cybersecurity and computer science, specifically focusing on the vulnerability CVE-2024-3094, which has a CVSS score of 10 and was discovered in late March 2024. It integrates methodologies from network security, reverse engineering, and OSINT to thoroughly investigate this vulnerability.
One aspect of this thesis is the exploration of the incident and the social engineering campaign that led to a threat actor becoming a maintainer of XZ Utils. Another part analyses how the vulnerability in XZ Utils affects SSH and whether SSH authentication is compromised by it. The objective is to conduct a Proof of Concept (PoC) to demonstrate the presence of the backdoor.
This thesis investigates the integration of external threat intelligence into cybersecurity platforms to improve real-time threat detection and incident response. By combining data-scraping techniques on social media (X) with a Python-based processing pipeline and Elasticsearch for data storage and querying, a robust system for analysing and visualising threat-relevant information is developed. The system integrates Natural Language Processing (NLP) techniques, such as the Stanford CoreNLP framework and the Stucco Entity Extractor, to enrich the collected data with cybersecurity entities. In addition, machine learning models, in particular sentence transformers, are used to embed texts and compute the cosine similarity between potential threats and alerts. This demonstrates that correlating external data sources with threat intelligence considerably improves alert precision and provides security-relevant insights for analysts. The final system offers scalable, efficient data processing as well as real-time visualisations through Elasticsearch and Kibana dashboards to support decision-making in cybersecurity.
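A minimal sketch of the embedding-and-correlation step is shown below; the model name, example texts and threshold are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Embed scraped posts and internal alert descriptions, then flag pairs
# whose cosine similarity exceeds a threshold. All inputs are placeholders.

model = SentenceTransformer("all-MiniLM-L6-v2")

posts  = ["Mass exploitation of CVE-2024-3094 observed in the wild",
          "New phishing kit targets banking customers"]
alerts = ["sshd anomaly on bastion host, possible supply-chain backdoor"]

P = model.encode(posts)                      # shape (n_posts, dim)
A = model.encode(alerts)                     # shape (n_alerts, dim)

# Cosine similarity matrix between every post and every alert
sim = (P @ A.T) / (np.linalg.norm(P, axis=1, keepdims=True)
                   * np.linalg.norm(A, axis=1))
for i, j in zip(*np.where(sim > 0.3)):       # threshold is an assumption
    print(posts[i], "<->", alerts[j], round(float(sim[i, j]), 2))
```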
In recent decades, inland water remote sensing has seen growing interest and very strong development. This includes improved spatial resolution, increased revisiting times, advanced multispectral sensors and, recently, even hyperspectral sensors. However, inland waters are more challenging than oceanic waters due to the higher complexity of their optically active constituents and stronger adjacency effects caused by their small size and nearby vegetation and built structures. Thus, bio-optical modeling of inland waters requires greater ground-truthing effort. Large-scale ground-based sensor networks that are robust, self-sufficient, low-maintenance and low-cost could assist this otherwise labor-intensive task. However, most existing sensor systems are rather expensive, which precludes their widespread deployment. Recently, low-cost mini-spectrometers have become widely available and could potentially solve this issue. In this study, we analyze the characteristics of such a mini-spectrometer, the Hamamatsu C12880MA, and test its suitability for measuring water-leaving radiance near the surface. Overall, the measurements performed in the laboratory and in the field show that the system is well suited to the targeted application.
This final report presents the results of a project on the implementation of attack detection systems (SzA) in German hospitals in the context of the German IT Security Act 2.0 and the sector-specific security standard (B3S) for medical care. The aim of the project was to analyse the current state of SzA implementation in German hospitals and to develop recommendations for the further development of the B3S. The analysis is based on an extensive survey of hospital operators, expert interviews, and the evaluation of relevant national and international standards and good practices. The results show clear differences in the maturity of SzA implementation between different areas, with information technology being the most advanced across the sector. The report offers concrete proposals for improving the IT security posture of hospitals and emphasises the need for continuous further development of SzA systems in order to meet the growing IT security requirements of inpatient care.
This bachelor's thesis examines the performance and resource utilisation of platform and virtual threads in Java 21. In the context of the e-commerce platform of Telefónica Germany GmbH & Co. OHG, the question arises whether the newly introduced virtual threads can be used to optimise the company's current software development process. For this purpose, proof-of-concept applications based on the web frameworks Spring MVC and Spring WebFlux are developed. These applications are then subjected to a stress test to analyse the performance differences between the threading models.
The results suggest that using virtual threads instead of platform threads in an asynchronous Spring MVC architecture reduces the average response time by about 20% and increases throughput by almost 20%. Furthermore, CPU and RAM utilisation under maximum load decreased by 11% and 13%, respectively. Although the Spring WebFlux architecture using platform threads and a Netty server achieved better results in most test metrics, the results of the asynchronous Spring MVC architecture with virtual threads are promising. Adopting this technology could therefore noticeably improve the company's current software development process.
In modern IT infrastructure, container technologies have gained greatly in importance and are widely used in software development. The increasing complexity of contemporary applications requires effective security measures to detect and remediate potential vulnerabilities early.
This bachelor's thesis evaluates several vulnerability scanners developed specifically for container images. As part of the work, a simple web application with a frontend and a backend is built and then containerised. This sample application serves as the basis for a systematic examination and assessment of several vulnerability scanners. The goal is to analyse the results of the different tools and to determine the most effective tool for identifying security vulnerabilities in containerised environments.
The analysis and comparison of the results show which tool delivers the best results in terms of detection rate and accuracy. In addition, the thesis provides insight into the use of the scanners, from which the usability of each scanner can be derived. This work contributes to the security and reliability of containerised applications in practice and offers practical recommendations for improving container security.
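The comparison logic reduces to set arithmetic over CVE identifiers; a minimal sketch (with placeholder scanner names and CVE IDs, not results from the thesis) looks like this:

```python
# Compare each scanner's findings for the containerised sample app against
# a known ground-truth CVE set, ranking tools by detection rate and
# precision. All identifiers below are illustrative placeholders.

ground_truth = {"CVE-2023-0001", "CVE-2023-0002", "CVE-2023-0003"}
findings = {
    "scanner_a": {"CVE-2023-0001", "CVE-2023-0002", "CVE-2099-9999"},
    "scanner_b": {"CVE-2023-0001"},
}

for name, found in findings.items():
    true_pos = found & ground_truth
    detection_rate = len(true_pos) / len(ground_truth)
    precision = len(true_pos) / len(found) if found else 0.0
    print(f"{name}: detection={detection_rate:.0%} precision={precision:.0%}")
```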
Programs in embedded domain-specific languages are realized as graphs of objects of the host language rather than as static input texts. This property enables dynamic meta-programming, but also makes it harder to attach location information to diagnostic messages that arise at a later stage, after the program graph construction. Thus, EDSL-generating expressions and algorithms can be difficult to debug. Here, we present a technique for transparently capturing and replaying location information about the origin of EDSL program objects. It has been implemented in the context of the LLJava-live EDSL-to-bytecode compiler framework on the JVM. The basic idea can be generalized to other contexts and to any managed runtime environment with reified stack traces.
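The technique translates directly to any runtime with reified stack traces; the Python sketch below (illustrative names, not the LLJava-live API) captures the user-code frame when an EDSL node is built and replays it in a later diagnostic.

```python
import traceback

# Capture the construction site of each EDSL node, and replay it when a
# diagnostic is raised long after graph construction. Names are
# illustrative; the paper's implementation targets the JVM.

class Node:
    def __init__(self, op, *children):
        self.op, self.children = op, children
        # origin: the innermost frame outside this constructor
        self.origin = traceback.extract_stack(limit=2)[0]

    def check(self):
        if self.op == "div" and getattr(self.children[1], "op", None) == "lit0":
            o = self.origin
            raise ValueError(f"division by zero literal, "
                             f"constructed at {o.filename}:{o.lineno}")

expr = Node("div", Node("lit1"), Node("lit0"))
try:
    expr.check()
except ValueError as e:
    print(e)       # the message points at the EDSL construction site
```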
Units of measure with prefixes and conversion rules are given a formal semantic model in terms of categorial group theory. Basic structures and both natural and contingent semantic operations are defined. Conversion rules are represented as a class of ternary relations with both group-like and category-like properties. A hierarchy of subclasses is explored, each satisfying stronger useful algebraic properties than the preceding, culminating in a direct efficient conversion-by-rewriting algorithm.
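The conversion-by-rewriting idea can be sketched operationally: conversion rules are invertible labelled edges between units, and a chain of rewrites composes multiplicatively. A minimal Python illustration follows (everyday exact rules for demonstration, not the paper's formal semantics):

```python
from collections import deque

# Conversion rules as invertible labelled edges between units; a chain of
# rewrites composes multiplicatively, and a breadth-first search finds the
# factor between any two connected units.

rules = {("km", "m"): 1000.0, ("m", "cm"): 100.0, ("h", "s"): 3600.0}

graph = {}
for (a, b), f in rules.items():
    graph.setdefault(a, []).append((b, f))
    graph.setdefault(b, []).append((a, 1.0 / f))   # rules are invertible

def convert(value, src, dst):
    """Search over rewrite rules, composing conversion factors."""
    queue, seen = deque([(src, 1.0)]), {src}
    while queue:
        unit, factor = queue.popleft()
        if unit == dst:
            return value * factor
        for nxt, f in graph.get(unit, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, factor * f))
    raise ValueError(f"no conversion chain from {src} to {dst}")

print(convert(2.5, "km", "cm"))    # 250000.0
```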
This demo explores an innovative artistic installation that creatively visualizes global temperature data using graphical visualization and motion capture technologies. By combining video-based posture capturing of nearby individuals with a dynamically rendered 3D model of the planet Earth, this installation offers an interactive and immersive experience. The goal is to transform climate change data into an engaging visual format, making it more accessible and impactful for a wide range of audiences.
The attraction of insects to artificial light is a global environmental problem with far-reaching implications for ecosystems. Since light pollution is rarely integrated into conservation approaches, effective mitigation strategies towards environmentally friendly lighting that drastically reduce insect attraction are urgently needed. Here, we tested novel luminaires in two experiments (i) at a controlled experimental field site and (ii) on streets within three municipalities. The luminaires are individually tailored to only emit light onto the target area and to reduce spill light. In addition, a customized shielding renders the light source nearly invisible beyond the lit area. We show that these novel luminaires significantly reduce the attraction effect on flying insects compared to different conventional luminaires with the same illuminance on the ground. This underlines the huge potential of spatially optimized lighting to help to bend the curve of global insect decline without compromising human safety aspects. A customized light distribution should therefore be part of sustainable future lighting concepts, most relevant in the vicinity of protected areas.
The expected value of a random variable can, in general, be characterised at the graph of its distribution function by a natural equality of two areas. Equivalently, it can be expressed by two improper Riemann integrals. Starting from this, the Chebyshev inequality and its modification with strict inequality signs are derived.
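In standard notation (F the distribution function, \mu the mean, \sigma^2 the variance), the two results referred to are:

```latex
% Layer-cake representation of the expected value via the CDF, and the
% Chebyshev inequality derived from it (standard forms):
\mathbb{E}[X] = \int_{0}^{\infty}\bigl(1 - F(x)\bigr)\,dx
              - \int_{-\infty}^{0} F(x)\,dx,
\qquad
P\bigl(|X - \mu| \ge \varepsilon\bigr) \le \frac{\sigma^{2}}{\varepsilon^{2}}
```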
This research examines how secure and user-friendly passkey/FIDO2 authentication at the web-browser level is in comparison with authentication using a strong password, or with a combination of a password and two-factor authentication.
Investigating the rotor strength of permanent-magnet synchronous machines with buried magnets
(2023)
This project report deals with the mechanical simulation of rotors of permanent-magnet synchronous machines for strength verification. The finite element method is used with the program CalculiX. The electric motor of the Toyota Prius II is examined in detail as an example. The different loads are modelled, including, in addition to the centrifugal force, the internal pressure from an interference fit as well as the electromagnetic forces. For modelling forces on the rotor surface, a methodology for implementation in CalculiX is developed and demonstrated on an example model. The intended strength verification is achieved within the scope of this work. Furthermore, it is shown that the influence of the electromagnetic forces at the operating point with the highest rotational speed is very small. Overall, this work provides the foundations for mechanical modelling of rotors considering all forces, which can be applied directly to other electric motors.
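For context, the dominant rotor load scales with the square of the rotational speed; the standard centrifugal body-force density is:

```latex
% Centrifugal body-force density on rotor material of density \rho at
% radius r, rotating at angular speed \omega (directed radially outward):
f_{c} = \rho\,\omega^{2}\,r
```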
Gaining insight into the behaviour and performance of today's software and distributed systems is increasingly hampered by ever-growing complexity. To mitigate this problem, the company Peterson Technologies GmbH implemented an internal telemetry system. The goal of this thesis is, at the company's request, to replace that system with a new one in order to improve monitoring and to use more up-to-date software. After an examination of the previous system, priorities are defined to ease the later use, maintenance and extension of the new system. This is followed by the design of its architecture, the selection and implementation of the software, and finally the creation of basic documentation. The resulting telemetry stack is functional and meets the company's expectations. Nevertheless, it exhibits some problems, such as high CPU load, which are discussed in conclusion together with options for improvement and extension.
Single dielectric microspheres can manipulate light focusing and collection to enhance optical interaction with surfaces. To demonstrate this principle, we experimentally investigate the enhancement of the Raman signal collected by a single dielectric microsphere, with a radius much larger than the exciting laser spot size, residing on the sample surface. The absolute microsphere-assisted Raman signal from a single graphene layer measured in air is more than a factor of two higher than that obtained with a high numerical aperture objective. Results from Mie’s theory are used to benchmark numerical simulations and an analytical model to describe the isolated microsphere focusing properties. The analytical model and the numerical simulations justify the Raman signal enhancement measured in the microsphere-assisted Raman spectroscopy experiments.
Biomass gasification is recognized as a viable avenue to accelerate the sustainable production of hydrogen. In this work, a numerical simulation model of air gasification of rice husks is developed using Aspen Plus to investigate the feasibility of producing hydrogen-rich syngas. The model is experimentally validated with rice husk gasification results and other published studies. The influence of temperature and equivalence ratio (ER) on the syngas composition, H2 yield, syngas LHV, H2/CO ratio, CGE, and PCG was studied. Furthermore, the combined effects of temperature and ER are studied using response surface methodology (RSM) to determine the operating point that maximizes H2 yield and PCG. The RSM analysis results show optimum performance at temperatures between 820 °C and 1090 °C and ER in the range of 0.06–0.10. The findings show that optimal operating conditions of the gasification system can be identified with refined precision through simulations coupled with advanced optimization techniques.
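The RSM step amounts to fitting a full quadratic response surface in temperature and equivalence ratio and locating its maximiser; the sketch below uses synthetic placeholder data, not output from the Aspen Plus model.

```python
import numpy as np

# Fit y = b0 + b1*T + b2*ER + b3*T^2 + b4*ER^2 + b5*T*ER to simulation
# points, then locate the maximiser on a grid. Data are synthetic.

rng = np.random.default_rng(1)
T  = rng.uniform(700, 1100, 30)            # temperature, deg C
ER = rng.uniform(0.05, 0.35, 30)           # equivalence ratio
y  = 40 - 0.00005*(T - 950)**2 - 800*(ER - 0.08)**2 + rng.normal(0, 0.2, 30)

X = np.column_stack([np.ones_like(T), T, ER, T**2, ER**2, T*ER])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares quadratic fit

Tg, Eg = np.meshgrid(np.linspace(700, 1100, 200), np.linspace(0.05, 0.35, 200))
Yg = (beta[0] + beta[1]*Tg + beta[2]*Eg + beta[3]*Tg**2
      + beta[4]*Eg**2 + beta[5]*Tg*Eg)
i = np.unravel_index(Yg.argmax(), Yg.shape)
print(f"optimum near T={Tg[i]:.0f} degC, ER={Eg[i]:.2f}")
```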
Glass-ceramic composites consisting of potassium-iron-silicate glass and barium titanate mixed in various proportions were successfully synthesized by low-temperature sintering. The crystal structure, porosity and microhardness of the obtained composite samples were studied by X-ray diffraction, electron microscopy, the weight method, and the Vickers method. The electrical characteristics (dielectric permittivity, tunability and losses) of as-prepared samples and of samples annealed in an oxygen atmosphere were investigated at microwave frequencies. According to the structural analysis, the synthesized samples are a mixture of KFeSi glass, ferroelectric BaTiO3, and dielectric barium polytitanates; the ratio of the latter determines the electrical properties of the composites. Depending on the barium titanate content, the studied composite samples show a permittivity from 50 to 270 with a dielectric loss level of 0.1–0.02 in the frequency range from 3 to 10 GHz. Annealing the composite samples in an oxygen-containing environment increases their dielectric permittivity and tunability by 10–25% and halves the dielectric loss.