Hochschulbibliografie
The effective alignment of soft skills with established competency frameworks is essential for the automated alignment of competences within workforce analytics. This paper presents a semantic mapping approach that uses Natural Language Processing (NLP) to align soft skills from the SkillsMatch framework with the contents of the e-Competence Framework (e-CF, European standard EN 16234-1), which does not make those skills explicit in its text. A cosine similarity index indicates the semantic closeness of each soft skill to the e-CF dimensions; skills are then mapped according to score thresholds, with additional confirmation by experts. This approach provides an automated, scalable solution for semantically linking soft and technical competences, a task otherwise unfeasible in effort and time given the large amount of information to be analysed. The result is additional explicit information on soft skills that enriches and extends the standard EN 16234-1, offering pragmatic guidance to practitioners for application in real contexts. Moreover, the proposed method underscores the value of embedding-based semantic similarity modelling for working with competence frameworks and standards, extracting and analysing information for additional practical value and overcoming the limitations of traditional manual methods.
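As an illustration of the embedding-plus-threshold mapping described above (a minimal sketch; the encoder, threshold value, and skill/competence texts are placeholders, not the authors' actual setup):

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

# Placeholder inputs -- not the actual SkillsMatch / e-CF texts.
soft_skills = ["active listening", "conflict resolution"]
ecf_competences = ["B.1 Application Development", "E.4 Relationship Management"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
skill_vecs = model.encode(soft_skills, normalize_embeddings=True)
comp_vecs = model.encode(ecf_competences, normalize_embeddings=True)

# Cosine similarity reduces to a dot product on normalized vectors.
similarity = skill_vecs @ comp_vecs.T

THRESHOLD = 0.45  # illustrative score threshold, to be confirmed by experts
for i, skill in enumerate(soft_skills):
    for j, comp in enumerate(ecf_competences):
        if similarity[i, j] >= THRESHOLD:
            print(f"map '{skill}' -> '{comp}' (score {similarity[i, j]:.2f})")
```

Candidate mappings above the threshold would then go to expert review, mirroring the semi-automated workflow the abstract describes.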
In Virtual Reality (VR) learning applications, the integration of pedagogical agents is particularly promising because they can act as virtual social support. With recent developments in Large Language Models (LLMs), conversational language models are now available that enable natural-language interaction in a VR environment. We evaluated four possible designs for LLM-based agents (n = 21) in an aircraft-engine training scenario, focusing on the primary questions that arise when bringing LLM-based agents into VR: whether to use text or speech, and whether or not to embody the conversational agent in 3D. Pre- and post-tests were used to measure retention, and questionnaires were used to measure the User Experience (UX) of the different design variants. Contrary to our hypotheses, retention was higher with a non-embodied design than with an embodied design. The output modality had no significant impact on learning success, but it did affect the UX.
Setting the Stage for Collaboration: A Multi-View Table for Touch and Tangible Map Interaction
(2025)
We present a multi-user map application based on a novel multiview concept that enables simultaneous and independent interaction with shared geospatial content. Each user operates a personal View Finder and Focus View, color-coded for clarity, while a shared, immutable Context View provides a common reference frame. The system supports both touch-based and tangible interaction techniques, including gesture control, virtual joysticks, and physical objects. Users can flexibly arrange their workspaces on a multi-touch table, supporting both individual exploration and collaborative tasks.
In aerospace engineering, data-driven surrogate models are increasingly employed to mitigate the computational and temporal costs of simulations, numerical analyses, and experiments. Two major challenges accompany this trend. First, the training of surrogate models often requires a sufficient amount of data, the determination of which is inherently difficult. Second, these models often exhibit high complexity, limiting both the traceability of their outputs and the extraction of useful insights. Explainable Artificial Intelligence (XAI) methods have therefore emerged as promising tools to enhance the interpretability, explainability, and transparency of such models. In this work, a combination of the established Shapley Additive Explanations (SHAP) approach with a bootstrap-based method is investigated. The proposed framework provides insights into the contribution of individual features and enables an assessment of data sufficiency with respect to surrogate model performance. Building upon these findings, the Bootstrap-Informed Feature Importance (BIFI) method is proposed. BIFI offers a model-agnostic, robust identification of relevant features. The method is analyzed in the context of Design of Experiments (DOE) processes used for surrogate model construction. Evaluation on four synthetic datasets of increasing complexity, as well as a dataset from aero-engine development, demonstrates that BIFI-based DOEs can improve surrogate model quality, measured in terms of R² and MSE, by up to 90%. Consequently, the proposed method enables more efficient utilization of simulations, computations, and experiments while reducing the required number of samples.
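A minimal sketch of the generic bootstrap-plus-SHAP idea that BIFI builds on (not the authors' implementation; the dataset and model below are illustrative stand-ins):

```python
import numpy as np
import shap  # SHAP library (Lundberg & Lee)
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_friedman1

# Synthetic regression data standing in for surrogate-model training data.
X, y = make_friedman1(n_samples=300, n_features=8, random_state=0)

n_boot = 20
importances = []
rng = np.random.default_rng(0)
for _ in range(n_boot):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[idx], y[idx])
    sv = shap.TreeExplainer(model).shap_values(X)  # per-feature attributions
    importances.append(np.abs(sv).mean(axis=0))    # mean |SHAP| per feature

imp = np.array(importances)
# The spread across bootstrap replicates hints at whether the data sufficed:
# wide intervals suggest more samples are needed before trusting the ranking.
for f, (m, s) in enumerate(zip(imp.mean(axis=0), imp.std(axis=0))):
    print(f"feature {f}: importance {m:.3f} +/- {s:.3f}")
```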
This study compares Tangible User Interfaces (TUIs) with conventional touch interfaces (CTIs) on multi-touch tables for exploratory multi-user map applications. We developed a prototype that enables multiple users to interact with map content simultaneously and independently. Four interaction methods were implemented and evaluated: gesture-based touch control, widget-based touch joystick control, and two tangible interaction variants (a joystick and car-steering metaphor). A user study with 15 participants was conducted in which users navigated predefined routes with varying difficulty. We collected both objective performance measurements and subjective user assessments. While touch-based methods yielded higher accuracy in objective metrics, TUI-based interactions were rated significantly better by participants in terms of user experience. Notably, the car-steering tangible control was particularly well-received, highlighting how physical, playful interaction can enhance usability despite lower accuracy. These findings contribute to our understanding of how different interaction paradigms support collaborative exploration on multi-touch surfaces.
In times of rapid technological change, the question arises of what influence rebranding has on brand equity, particularly in the context of artificial intelligence. While rebranding has traditionally been understood as a strategic measure for adapting to changing market conditions, the integration of AI opens up new possibilities for automation, personalization, and data-driven decision-making. This paper examines the role of AI in the rebranding process and investigates how AI-supported and human-led strategies influence brand equity. A bibliometric and systematic literature review is used to survey the current state of research, identify central mechanisms, and derive future research needs. Finally, implications for research and practice are discussed in order to provide a sound basis for the strategic use of AI in brand management.
Light is the primary cue driving zooplankton diel vertical migration (DVM), a strategy that balances predation risk with resource access. However, DVM is often oversimplified, with limited consideration of how light-driven risks and resource needs vary across taxa and life stages. This simplification is partly due to constraints on collecting high-resolution, size-resolved data, especially at night, when subtle shifts in illumination reshape nocturnal risk landscapes. To overcome these limitations, we deployed a high-resolution in situ modular Deep-focus Plankton Imager and an image-recognition approach to quantify fine-scale DVM and body sizes of cladocerans and copepods in Lake Stechlin, Germany. Data were collected from day into night and across moonrise and compared with environmental data from vertical profiling sondes. Typical DVM patterns emerged, with deeper daytime distributions; however, moonlight introduced additional behavioural complexity: larger individuals avoided illuminated layers, likely managing predation risk, while smaller individuals moved into these layers, possibly exploiting foraging opportunities and reduced risk. These light-mediated shifts were further shaped by ecological conditions; copepods tracked food-rich layers regardless of light levels at night, while cladocerans showed light-dependent responses to both temperature and food, such that light caused them to avoid otherwise favourable (warm, food-rich) layers. Our approach provides new insight into how zooplankton navigate nocturnal lightscapes, revealing size- and taxon-specific strategies. By establishing size-dependent responses to natural moonlight, this work provides a crucial baseline for predicting how artificial light at night may restructure zooplankton communities and destabilize freshwater food webs.
Using examples from engineering mechanics, this textbook introduces the use of the mathematics software SMath Studio for performing and documenting technical calculations. SMath, which is free for private users, is a powerful alternative to paper, pencil, and calculator, or to the familiar spreadsheet programs, similar to Mathcad. The book takes beginners by the hand but also covers extensions such as numerical methods and computer algebra. With examples from statics, strength of materials, and dynamics, it covers the typical range of engineering mechanics topics in technical degree programmes, particularly in mechanical engineering.
Environmental monitoring plays a crucial role in analyzing environmental parameters and detecting anomalies. However, sensor systems work continuously, which results in constant energy consumption and data redundancy, especially for sensors with limited computing power and memory. In addition, installing and maintaining sensors in remote places creates additional challenges. An adaptive environmental sensing approach is developed to reduce data redundancy and energy consumption. A custom-designed sensor based on a PIC16LF19156 microcontroller measures CO2, humidity, and temperature simultaneously. The sensor is connected to a Raspberry Pi, where a signal processing algorithm is executed, aimed at reducing data redundancy and thereby increasing the energy efficiency of the system. The algorithm uses the discrete wavelet transform (DWT) to extract spectral features from the signals. A machine learning model trained on previous data estimates the daily variability of the signal and labels intervals as SAVE (deviations) or SKIP (consistency). As a result, only the relevant intervals showing significant fluctuations are retained. The effectiveness of the proposed approach was evaluated by reconstructing the compressed signals and comparing them with the original data using the RMSE and MAE metrics, which confirms that the loss of information is negligible.
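A minimal sketch of the DWT-based SAVE/SKIP decision (the wavelet, window length, and deviation rule below are illustrative assumptions; the paper uses a trained ML model rather than this simple z-score rule):

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(window: np.ndarray, wavelet: str = "db4", level: int = 3):
    """Energy of each DWT sub-band -- a compact spectral fingerprint."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])

def label_window(window, baseline, tol=2.0):
    """SAVE if the window's spectral energy deviates from the learned daily
    baseline by more than `tol` standard deviations, else SKIP."""
    mu, sigma = baseline
    z = np.abs(dwt_features(window) - mu) / (sigma + 1e-12)
    return "SAVE" if np.any(z > tol) else "SKIP"

# Example: learn a baseline from historical windows, then classify new ones.
rng = np.random.default_rng(1)
history = [rng.normal(400, 5, 256) for _ in range(50)]   # CO2-like signals
feats = np.array([dwt_features(w) for w in history])
baseline = (feats.mean(axis=0), feats.std(axis=0))
print(label_window(rng.normal(400, 5, 256), baseline))   # likely SKIP
print(label_window(rng.normal(430, 25, 256), baseline))  # likely SAVE
```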
Following a brief description of the atmosphere and the spectra of the Sun as the dominant daytime light source, the most common optical phenomena within the troposphere are discussed, which are due to scattering of radiation by the constituents of the atmosphere. First, mirages, rainbows, coronas, iridescence, glories, and halos are explained. Then light-scattering phenomena that give rise to sunset colors and blue and colorful skies are presented, as well as related phenomena such as blue mountains, white clouds, green flashes, and visual ranges. The review ends with a short survey of other, less easily observable optical phenomena of the atmosphere and a very detailed bibliography.
Optik und ihre Phänomene
(2024)
This textbook, reference work, and popular science book presents the foundations of optics in theory and in carefully described experiments, together with a wide range of fascinating optical phenomena. Whether for lectures, seminars, project work, school teaching, or self-study, this book is a valuable resource for anyone interested in optics. Through the large number of cited original papers, it builds a bridge not only to teaching but also to research.
Oscillations and waves appear in many everyday physical phenomena, that is, in the lived experience of school students. In mechanics, examples include swings, waves on ropes, or water waves at the beach; in acoustics, sound waves from arbitrary noises or standing waves in musical instruments; and in electromagnetism, the ubiquitous electromagnetic waves. The latter have a huge variety of applications, e.g., heating with microwave ovens, communicating with smartphones, data transmission with optical fibres, or photography with cameras, not to mention medical applications such as endoscopy, X-ray imaging, or laser-based surgical procedures. Many of these applications carry enormous motivational potential for teaching, which is why the topic is firmly anchored in secondary school curricula. In the following, general foundations and commonalities in the description of arbitrary waves are discussed first, before the focus turns to electromagnetic waves and selected applications.
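As a compact reminder of the shared formalism alluded to above, a one-dimensional harmonic wave and the standard relations between its quantities can be written as (textbook relations, not specific to this article):

```latex
u(x,t) = A\,\sin(kx - \omega t + \varphi),\qquad
k = \frac{2\pi}{\lambda},\quad \omega = 2\pi f,\quad c = \lambda f = \frac{\omega}{k}
```

These relations hold for mechanical, acoustic, and electromagnetic waves alike, which is precisely the commonality the article exploits before specializing to the electromagnetic case.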
How far can we see at day?
(2025)
We discuss the farthest objects on Earth observable for the unaided, healthy naked eye during the daytime, i.e., the maximum visual range for observers on Earth. Visual range depends first on the properties of the material between observer and object and its interaction processes with radiation, but second also on our visual perception system. After a rough comparison of ranges in water, glass, and the atmosphere, we focus on the physical basis of visual range for the latter. As a contrast phenomenon, visual range refers to allowed light paths within the atmosphere. It results from the interplay of geometry, refraction, and light scattering. We present a concise overview of this field by qualitative descriptions and quantitative estimates as well as classroom demonstration experiments. The starting point is the common geometrical visual ranges, followed by extensions due to refraction and limitations due to contrast, which depend on scattering and absorption processes within the atmosphere. The quantitative discussion of scattering is very helpful to easily understand the huge ranges in nature from meters in dense fog to hundreds of kilometers in clear atmospheres. Extreme visual ranges from about 300 km to above 500 km require optimal atmospheric conditions, cleverly chosen locations and times, and a sophisticated topography analysis. Even longer visual ranges are possible when looking through the vertical atmosphere. From the ISS, daytime ranges well above 1000 km are possible.
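Two standard relations underlie the estimates sketched above (textbook formulas, not reproduced from the paper): the geometric visual range for an observer at height h above a sphere of radius R, and Koschmieder's law relating the meteorological visual range V to the atmospheric extinction coefficient β at the conventional 2% contrast threshold:

```latex
d_{\text{geom}} \approx \sqrt{2Rh}
\qquad (h = 1.7\,\mathrm{m},\ R = 6371\,\mathrm{km}\ \Rightarrow\ d_{\text{geom}} \approx 4.7\,\mathrm{km}),
\qquad
V = \frac{1}{\beta}\,\ln\frac{1}{0.02} \approx \frac{3.912}{\beta}
```

With β ranging from roughly 10 m⁻¹ in dense fog to below 0.01 km⁻¹ in very clear air, Koschmieder's law spans the metres-to-hundreds-of-kilometres range quoted in the abstract.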
How far can we see with the naked eye at night? Many celestial objects like stars and galaxies, as well as transient phenomena such as comets and supernovae, can be observed in the night sky. We discuss the furthest distances of such objects and phenomena observable with the naked eye during the night-time for Earth-bound observers. The physics of night-time visual ranges differs from that of daytime observations because human vision shifts from cones to rods. In addition, mostly point sources are observed due to the large distances involved. Whether celestial objects and phenomena can be detected depends on the contrast between their radiation and the background sky luminance. We present a concise overview of how far we can see at night by first discussing the effects of the Earth's atmosphere. This includes attenuation of transmitted radiation as well as its role as a source of background radiation. Disregarding the attenuation of light due to interstellar and intergalactic dust, simple maximum night-time visual range estimates are based on the inverse square law, which can be easily verified by laboratory and demonstration experiments. From the respective calculations, we find that individual stars within the Milky Way galaxy at distances of up to 15,000 light years are observable. Even further away are observable galaxies with several billion stars. The Andromeda galaxy can be observed with the naked eye at a distance of around 2.5 million light years. Similarly, the observability of supernovae also allows a visual range beyond the Milky Way galaxy. Finally, gamma-ray bursts, as the most energetic events in the universe, are discussed concerning naked-eye observations.
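The inverse-square-law estimate can be made concrete with the standard distance modulus (an illustrative order-of-magnitude check, not the paper's own calculation): taking a naked-eye limiting magnitude of m ≈ +6 and a very luminous supergiant with absolute magnitude M ≈ −8 gives

```latex
m - M = 5\log_{10}\!\frac{d}{10\,\mathrm{pc}}
\;\Rightarrow\;
d = 10^{(m-M+5)/5}\,\mathrm{pc} = 10^{19/5}\,\mathrm{pc} \approx 6.3\,\mathrm{kpc} \approx 2\times 10^{4}\ \text{light years},
```

consistent in order of magnitude with the roughly 15,000-light-year figure quoted above.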
Naked-eye studies of the clear night sky reveal that a certain percentage of all observable stars can be perceived as having color. Subjective estimates differ widely, ranging from just a few to a maximum of above two hundred. Explanations are based on the emission spectra of the stars, which are modified by interstellar dust clouds, the Earth's atmosphere, and mostly the inverse square law. Color changes occur not only with variation of the star's angular elevation above the horizon, but also with decreasing night-time sky brightness due to the transition from photopic via mesopic to scotopic vision. The maximum number of stars showing color to the naked eye depends on star illuminances on Earth and the background sky luminance. The limit of observing color is found to correspond to a limiting apparent visual magnitude, which defines the number of colored stars. This also means that naked-eye perception of stars with color is only possible for a certain range of star distances, which is well below the maximum naked-eye visual range of stars.
We investigate the forces of flowing granular material on an obstacle. A sphere suspended in a discharging silo experiences both the weight of the overlying layers and the drag of the surrounding moving grains. In experiments with frictional hard glass beads, the force on the obstacle was found to be practically flow-rate independent. In contrast, flow of nearly frictionless soft hydrogel spheres added drag forces which increased with the flow rate until reaching saturation at high flow speeds. The total force grew quadratically with the obstacle diameter in the soft, low-friction material, while it grew much more weakly, nearly linearly with the obstacle diameter, in the bed of hard, frictional glass spheres. In addition to the drag, obstacles embedded in the flowing hydrogel spheres experience a weight force from the top as if immersed in a hydrostatic pressure profile, but negligible counter-forces from below. In contrast, the frictional hard particles create a strong pressure gradient near the upper surface of the obstacle. Numerical simulations provide additional information that is difficult to access experimentally. They reproduce the experimental results and give hints about the origin of the different force contributions. The results have considerable practical importance for the discharge of storage containers with large objects suspended in flowing granular material.
Granular gases are not only of interest in fundamental physics; they can also serve as test ensembles for the validity of collision models employed in (loose) granular matter. The theoretical literature mainly addresses spheres under ideal conditions, and while simulations allow full access to all particle parameters, experiments cannot fulfill these idealizations. We investigate granular gases of soft, rough spheres by combining microgravity experiments and adjusted simulations. We introduce Smart Particles with embedded autarkic micro-sensors for in-situ measurements of rotation rates and accelerations. Additionally, we extract 3D positions, translations, and orientations of the particles from stereoscopic video data using machine-learning-based algorithms. We address the partition of kinetic energy between the degrees of freedom, the angular and translational velocities, as well as collision statistics. A simulation adjusted to the experiment parameters shows good agreement for translational motion, but qualitative differences in the decay of rotational kinetic energy.
One popular application domain for Augmented Reality (AR) technology is facilitating human-building interactions (HBI). Such applications help users to retrieve and transact information through localized interaction points displayed on their (mobile) devices. Growing adoption of AR technology underlines the importance of incorporating suitable AR components for HBI, which is important for effective implementation and testing of HBI-AR use cases, increasing user satisfaction, and long-term viability. In this context, our short paper presents key experiences and lessons learned from an applied science project targeting the development and evaluation of an HBI-AR university campus app. Drawing on an ex-post inquiry of project collaborator experiences, we prescriptively highlight relevant AR solution requirements for this specific design context. Our findings amplify the importance of (1) balancing trade-offs between different spatial embedding computation methods, (2) modularity, customizability, and testability for efficient software development, and (3) taking building complexity and prototype maturity requirements into account when taking AR solution make-or-buy decisions. Our shared insights may inform future best practices and design principles for convergent HBI and AR projects in research and practice.
Continuously excited dense granular gases in microgravity can develop spatial inhomogeneities of the particle distribution. Dynamical clustering is a phenomenon where a significant share of particles concentrates in strongly overpopulated regions. It is caused by a complex interplay between the energy influx and dissipation in collisions. The overall packing fraction, container geometry, and excitation parameters influence the gas-cluster transition. We perform Discrete Element Method (DEM) simulations for frictional spheres in a cuboid container and apply statistical criteria to the packing fraction profiles. Machine learning (ML) methods are used to study the dependence of the gas-cluster transition on system parameters. This is a promising alternative for predicting the state of the system without the need for time-consuming DEM simulations. We identify the best models for predicting the dynamical clustering of frictional spheres in a specific experimental geometry.
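A minimal sketch of the ML surrogate idea (the feature set, labelling rule, and model choice below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature table: one row per DEM simulation.
# Columns: overall packing fraction, excitation amplitude, excitation
# frequency, container aspect ratio. Label: 1 = clustered, 0 = gas-like.
rng = np.random.default_rng(42)
X = rng.uniform([0.05, 0.5, 10.0, 1.0], [0.35, 3.0, 60.0, 3.0], size=(200, 4))
# Toy labelling rule standing in for the statistical clustering criterion.
y = ((X[:, 0] > 0.15) & (X[:, 1] < 2.0)).astype(int)

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Once trained, the model predicts the regime of a new parameter set
# without running a DEM simulation.
clf.fit(X, y)
print(clf.predict([[0.2, 1.0, 30.0, 2.0]]))  # 1 -> clustering expected
```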
When granular gases in microgravity are continuously excited mechanically, spatial inhomogeneities of the particle distribution can emerge. At a sufficiently large overall packing fraction, a significant share of particles tends to concentrate in strongly overpopulated regions, so-called clusters, far from the excitation sources. This dynamical clustering is caused by a complex balance between energy influx and dissipation. The mean number density of particles, the geometry of the container, and the excitation strength influence cluster formation. A quantification of clustering thresholds is not trivial. We generate 'synthetic' data sets by Discrete Element Method simulations of frictional spheres in a cuboid container and apply established criteria to classify the local packing fraction profiles. Machine learning approaches that predict dynamical clustering from known system parameters on the basis of classical test criteria are proposed and tested. This avoids the necessity of complex numerical simulations.
Microgravity experiments with three-dimensional (3D) granular gases, i.e., ensembles of free-moving macroscopic particles which collide inelastically, produce large amounts of stereo video footage that require processing and analysis. The main steps of data treatment are particle detection, 3D matching and tracking in stereoscopic views, and quantification of ensemble statistical properties such as mean kinetic energy or collision processes. Frequent overlapping or clustering of particles and their complicated movement patterns require smart software solutions. In recent years, Artificial Intelligence/Machine Learning (AI/ML) methods have been successfully used for the analysis of granular systems. We have applied such techniques to granular gases of rod-like particles and developed a software tool which enables a full cycle of semi-automatic experimental data analysis. The approach is now being tested on more complex, non-convex particles shaped as 3D crosses (hexapods). Another challenge is the optical analysis of dense granular gases, where individual particles cannot be tracked. We present a preliminary result of applying an ML method for number-density profile extraction in the VIP-Gran experiment with a dense ensemble of rod-like particles.
Microgravity (µg)-generated three-dimensional (3D) multicellular aggregates can serve as models of tissue and disease development. They are relevant in the fields of cancer and in vitro metastasis or regenerative medicine (tissue engineering). Driven by the 3R concept (replacement, reduction, and refinement of animal testing), µg-exposure of human cells represents a new alternative method that avoids animal experiments entirely. New Approach Methodologies (NAMs) are used in biomedical research, pharmacology, toxicology, cancer research, radiotherapy, and translational regenerative medicine. Various types of human cells grow as 3D spheroids or organoids when exposed to µg-conditions provided by µg-simulating instruments on Earth. Examples of such µg-simulators are the Rotating Wall Vessel, the Random Positioning Machine, and the 2D or 3D clinostat. This review summarizes the most recent literature focusing on µg-engineered tissues. We discuss all reports examining different tumor cell types from breast, lung, thyroid, prostate, and gastrointestinal cancers. Moreover, we focus on µg-generated spheroids and organoids derived from healthy cells like chondrocytes, stem cells, bone cells, endothelial cells, and cardiovascular cells. The data obtained from NAMs and µg-experiments clearly imply that they can support translational medicine on Earth.
Effective communication skills are increasingly recognized as critical for leadership in digital transformation contexts. Recently, AI-Chatbots such as Talk to Transform (T2T) have been developed to enhance leadership competencies through interactive role-plays and feedback. This paper proposes their adaptation for cybersecurity training. We discuss the current landscape of cybersecurity training, highlight the importance of communication, and present T2T as an innovative approach to bridge this gap through chatbot-driven role-plays.
Genetic risk factor identification for common epilepsies guided by integrative omics data analysis
(2025)
Objective
Genetic generalized epilepsies (GGEs) comprise the most common genetically determined epilepsy syndromes, following a complex mode of inheritance. Although many important common and rare genetic factors causing or contributing to these epilepsies have been identified in the past decades, many features of the genetic architecture are still insufficiently understood. This study integrates genome-wide association study (GWAS) data from the International League Against Epilepsy Consortium on Complex Epilepsies with transcriptome-wide association studies to identify genes whose genetically regulated expression levels are associated with epilepsy.
Methods
To achieve this, we used multiple computational approaches, including MAGMA, a tool for gene analysis of GWAS data, and its derivatives E-MAGMA and H-MAGMA, to improve gene mapping accuracy by utilizing tissue-specific expression and chromatin interaction data. Furthermore, we developed ME-MAGMA to incorporate methylation quantitative trait loci data, providing insights into epigenetic factors.
Results
We identified a total of 897 candidate genes after false discovery rate correction (FDR < .05). These include voltage-gated calcium channels, voltage-gated potassium channels, and other genes such as NPRL2, CACNB2, and KCNT1 that are associated with epilepsy pathogenesis and act as key players in neuronal communication and signaling in the brain.
Significance
In this study, we propose new candidate genes to expand the dataset of potential epilepsy-causing genes. Further research on these genes may enhance our understanding of the complex regulatory mechanisms underlying GGE and other types of epilepsy, potentially revealing targets for therapeutic intervention.
The recent discovery of superconductivity in the bilayer Ruddlesden-Popper nickelate La3Ni2O7 under high pressure has generated much interest in the superconducting pairing mechanism of nickelates. Despite extensive work, the superconducting pairing symmetry in La3Ni2O7 remains unresolved, with conflicting results even for identical methods. We argue that different superconducting states in La3Ni2O7 are in close competition and highly sensitive to the choice of interaction parameters as well as pressure-induced changes in the electronic structure. Our study uses a multiorbital Hubbard model, incorporating all Ni 3d and O 2p states. We analyze the superconducting pairing mechanism of La3Ni2O7 within the random phase approximation and find a transition between d-wave and sign-changing s-wave pairing states as a function of pressure and interaction parameters, which is driven by spin fluctuations with different wave vectors. These spin fluctuations with incommensurate wave vectors cooperatively stabilize a superconducting order parameter with d_{x²−y²} symmetry for realistic model parameters. Simultaneously, their competition may be responsible for the absence of magnetic order in La3Ni2O7, demonstrating that magnetic frustration and superconducting pairing can arise from the same set of incommensurate spin fluctuations.
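For orientation, multiorbital Hubbard models of the kind referred to above are typically of Hubbard-Kanamori form (a schematic, standard expression from the literature; the paper's specific hoppings and interaction values are not reproduced here):

```latex
H = \sum_{ij,\mu\nu,\sigma} t^{\mu\nu}_{ij}\, c^{\dagger}_{i\mu\sigma} c_{j\nu\sigma}
  \;+\; U \sum_{i,\mu} n_{i\mu\uparrow}\, n_{i\mu\downarrow}
  \;+\; U' \sum_{i,\,\mu<\nu,\,\sigma\sigma'} n_{i\mu\sigma}\, n_{i\nu\sigma'}
  \;+\; \big(\text{Hund's coupling } J \text{ and pair-hopping terms}\big),
```

where i, j label lattice sites, μ, ν run over the orbitals (here the Ni 3d and O 2p states), and σ is the spin index.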
It is now widely accepted that cyber attacks cannot be prevented completely. Instead, the potential impact of incidents should be reduced in a sensible way. This increases the importance of detecting security incidents, so that organizations can react quickly and limit the extent of damage. For advances in detection to be applicable in practice in the future, they must be anchored in the corresponding procedures. This is illustrated below using the example of IT-Grundschutz, whose methods are widespread in the German-speaking IT security community.
How can future viability be achieved in a complex world? Our world is changing: technologically, globally, ecologically. Even policy concepts such as "green growth" appear increasingly uncertain, especially on a planet with limited resources and a growing world population. This textbook invites readers to gain new perspectives: How can companies and regions position themselves for the future? And what do we need to learn about complex adaptive systems from the perspective of economics, both within the discipline and across disciplines? The required understanding of complexity is developed step by step in three parts and illustrated with application-oriented examples relating to sustainability.
Part 1 offers inspiring insights into the diversity of human behaviour, from the rational to the multifaceted human being who is influenced by narratives, emotions, and cognitive biases.
Part 2 views humans as system elements or actors and examines systemic interactions at increasing degrees of complexity. Notions of a "good life" are changing, from material living standards and social justice towards crisis resilience and transformative capability.
Part 3 presents ideas on how the governance of human-made systems can succeed through regulation, cultural development, and nudging, assuming participation in knowledge and learning networks and potential self-efficacy.
The book offers stimuli for teaching, research, and practice. It is suitable not only for students and lecturers of economics. It also opens up new perspectives for teachers in secondary and adult education, as well as for those who work in an advisory capacity in business, associations, politics, public administration, or civil society organizations. In short: a book for everyone who wants not merely to understand systemic change but to help shape it.
Unlocking the secrets of award-winning retail store design – The case of Deutsche Telekom in Germany
(2025)
Despite the rise of digital sales channels, Deutsche Telekom continues to invest heavily in its physical retail operations. This case study demonstrates how the company leverages stores as strategic innovation hubs that extend far beyond their traditional sales function. The analysis highlights the role of store design, technological integration, and symbolic branding as key drivers of customer experience and brand equity. Flagship stores serve as a testing ground for new concepts, while local adaptations strengthen cultural relevance within communities. Findings suggest that in highly digital industries, physical retail remains strategically valuable – not as a relic of the past, but as a platform for differentiation, customer engagement, and innovation. Properly designed, physical stores complement digital channels and reinforce the overall brand experience.
This paper presents "Prototyping Sustainability – Designing Sustainable IT" (ProS), an approach to teaching and consolidating sustainability-related skills that uses a workshop format for participatory and creative learning. The workshop integrates principles from Education for Sustainable Development (ESD), transformative and experiential learning, participatory design, and critical reflection on the digital age to engage participants in critically examining the environmental, economic, and social impacts of digital technologies in the context of the Sustainable Development Goals (SDGs). Structured in five modular phases, from self-reflection and knowledge activation to collaborative prototyping and peer evaluation, the workshop offers a hands-on, gamified learning experience centred on real-world sustainability challenges. Learners create user-centred paper-based prototypes for digital products using tactile materials, persona-driven scenarios, and knowledge of sustainable product characteristics gained in the workshop. Outcome measurement is supported through pre- and post-workshop surveys, peer voting templates, and paper-based prototype artefacts, enabling rich insight into behavioural intentions and learning gains. The paper discusses the educational value and sustainability relevance of the workshop in engaging young people in critically reflecting on the environmental, economic, and social consequences of digitalization. Finally, it highlights challenges and limitations and proposes directions for future research.
Objectives
Human Papillomavirus (HPV) is a prevalent sexually transmitted infection and a leading cause of cervical cancer. In Georgia, cervical cancer ranks as the fifth most common cancer among women, with approximately 330 new cases and 200 deaths reported annually. Despite the availability of effective HPV vaccines, national vaccination coverage remains low. This study aimed to evaluate HPV vaccination coverage, analyze cervical cancer incidence trends, and model the potential impact of increased vaccination uptake on cancer prevention outcomes in Georgia.
Study design
A retrospective observational study was conducted using national health registry data and modeling projections to assess the burden of cervical cancer and estimate the effect of scaled vaccination coverage.
Methods
National health databases were used to analyze HPV vaccination rates and cervical cancer incidence. Descriptive statistics, chi-square tests, and linear regression were applied to identify trends and disparities. Additionally, a dynamic transmission model was developed to simulate the 10-year impact of increasing HPV vaccination coverage on cervical cancer incidence.
Results
The crude cervical cancer incidence rate was 15.7 per 100,000 women, with an age-standardized rate of 10.6 per 100,000. In 2022, only 38 % of eligible girls aged 13–18 received the first HPV vaccine dose, and 26 % completed the second dose. Regional disparities in vaccination and screening were noted, and overall screening coverage declined to 13,890 women screened in 2022. Modeling showed that increasing vaccine coverage to 60 % could reduce cervical cancer incidence by 50 % (preventing ∼163 cases); coverage of 80 % and 90 % could reduce incidence by 70 % and 85 %, preventing 228 and 276 cases, respectively.
Conclusion
Low HPV vaccination uptake in Georgia (38 % first dose and 26 % full coverage) and declining screening limit cervical cancer prevention. Modeling shows that increasing vaccination coverage to 60–90 % could prevent 163–276 cases over the next decade. Strengthening vaccination and screening strategies is essential to advance elimination efforts.
Anecdotal reports in angling media suggest that using fluorescent lures may increase catch rates in dim light or at high turbidity. We conducted a controlled angling experiment comprising 501 30-min experimental fishing trials in three meso- to eutrophic waterbodies and assessed catch rates and sizes of European perch (Perca fluviatilis) caught when offered two soft plastic lures (fluorescent vs. non-fluorescent) with similar reflective spectra. We also examined the fluorescent properties of a range of market-available lures and modelled the experimental lure's fluorescing effects under natural lake light. Considering the specific light environment of the study waters, the experimental fluorescent lure could be excited by downwelling visible daylight and fluoresce at depths of up to 3 metres. Based on a sample catch of 331 perch, and after controlling for interactions with illuminance, cloud cover, water depth, and time of day, the fluorescence of the experimental lure did not, however, affect either the catch rate or the size of perch caught. Lure fluorescence may be less important than many anglers believe, but further studies in different lake conditions are needed.
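The 3 m depth estimate rests on the standard exponential attenuation of downwelling irradiance in natural waters (generic limnological optics, not the paper's exact parameterization): with a diffuse attenuation coefficient K_d, the irradiance at depth z is

```latex
E(z) = E(0)\,e^{-K_d z}.
```

For meso- to eutrophic lakes, K_d is typically on the order of 1 m⁻¹, so at z ≈ 3 m only about e⁻³ ≈ 5% of the surface light remains, which bounds the depth at which a lure can still absorb enough excitation light to fluoresce visibly.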
Artificial light at night (ALAN) disrupts ecosystems by altering natural light cycles and affecting the physiology and behaviour of species, and represents a widespread and increasingly recognised global ecological threat. This meta‐analysis investigates the effects of varying wavelengths and illuminance levels of ALAN on organisms. Broad‐spectrum ‘cool’ light, enriched with blue and ultraviolet radiation, strongly disrupts circadian rhythms, melatonin production and nocturnal activity. However, contrary to common assumptions, broad‐spectrum ‘warm’ light can be nearly as impactful as broad‐spectrum ‘cool’ light lacking ultraviolet radiation, despite the predominant influence of short wavelengths on physiological processes. The impact of ALAN is not consistently dose‐dependent, as even low light levels (< 5 lx) can cause substantial biological disruptions. Thus, effective mitigation strategies require tailored solutions to specific ecological contexts and should generally avoid nocturnal illumination unless clearly needed, as there is no single ‘safe dose’ and ‘safe spectrum’ of ALAN.
PIN-a-Boo: Revealing Smartphone PINs via Segmentation and Hand Skeleton Tracking from Video Feeds
(2025)
It is crucial to improve smartphone security, given the prevalence of sensitive information stored on them. This study presents an attack strategy that reveals smartphone PIN entries using computer vision and pattern recognition techniques. By leveraging modern segmentation and hand skeleton tracking, our method accurately identifies and analyzes finger movement patterns, even when partially obscured. We can reliably infer the entered PIN by combining these movement patterns with the smartphone’s position and the on-screen keypad layout. This approach significantly enhances shoulder-surfing attacks, requiring only a video recording of the entry process. Our attack requires much less specialized expertise, making it more accessible. We conclude by analyzing the method’s potential impact and its implications for public safety.
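A minimal sketch of the hand-skeleton-tracking ingredient using MediaPipe Hands (an assumed stack for illustration; the paper's exact segmentation and tracking pipeline is not reproduced here):

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
INDEX_TIP = mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP

trajectory = []  # normalized (x, y) of the index fingertip per frame
cap = cv2.VideoCapture("pin_entry.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[INDEX_TIP]
        trajectory.append((tip.x, tip.y))
cap.release()

# Downstream (not shown): map fingertip dwell points onto the phone's
# keypad layout, inferred from the segmented screen position, to rank
# candidate PINs.
print(f"tracked {len(trajectory)} fingertip positions")
```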
The design of fan-blisks is a multi-criterion optimisation challenge primarily involving the aerodynamic shape optimisation of blade profiles and subsequent blade balancing, i.e., shifting profiles in axial and circumferential directions to avoid stress hotspots. Although it is already known that blade balancing affects aerodynamic properties, this correlation is often not taken into account, which is why it is usually performed as an independent step after solving the aerodynamic design problem. However, this assumption is questionable because such shifts should be used to control both secondary flow effects and resulting stress. Therefore, the paper proposes problem formulations combining both aspects.
Since optimisation requires costly numerical evaluations, machine learning methods are investigated to predict aerodynamic performance and stress constraints more efficiently. This enables a global optimisation process by reducing computational costs. Various surrogate types are investigated, with stress constraints formulated either as a regression task predicting stress maxima or as a classification problem directly assessing design feasibility.
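A minimal sketch contrasting the two constraint formulations on a toy DoE table (parameters, stress model, and limit value are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

# Hypothetical DoE table: blade-shift parameters -> max von Mises stress.
rng = np.random.default_rng(7)
X = rng.uniform(-1.0, 1.0, size=(300, 6))     # axial/circumferential shifts
stress_max = 400 + 120 * np.abs(X[:, 0]) + 40 * X[:, 1] ** 2 + rng.normal(0, 5, 300)
STRESS_LIMIT = 480.0                           # illustrative limit in MPa

# Option A: regression surrogate predicting the stress maximum ...
reg = RandomForestRegressor(random_state=0).fit(X, stress_max)
x_new = rng.uniform(-1.0, 1.0, size=(1, 6))
print("predicted max stress:", reg.predict(x_new))

# ... Option B: classification surrogate assessing feasibility directly.
feasible = (stress_max <= STRESS_LIMIT).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, feasible)
print("predicted feasible:", bool(clf.predict(x_new)[0]))
```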
Belief-Desire-Intention (BDI) agents offer a unique approach for engineering complex behavior for individual agents in multi-agent systems. A developer can define goals for each agent, specifying the desired outcomes in various contexts, and implement plans as the means to reach those goals. The BDI reasoning engine can then proceed to automatically select goals to pursue (goal deliberation) and choose one or more of the provided plans to attempt to achieve them. However, usually plans, or at least parts of them, have to be provided by the agent's developer before the system is deployed. In this paper we present an approach for BDI agents to generate their own plans using large language models (LLMs), based solely on the available context information such as goal descriptions, available beliefs, and their structural information. We show that the approach is viable in principle, explore its reliability, and discuss further uses of LLMs in the context of automating BDI-based agents.
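A minimal sketch of this idea using an OpenAI-style chat API (the prompt format, model name, and plan schema are assumptions for illustration, not the paper's implementation):

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

goal = "Deliver package P1 to room B204"
beliefs = {"position": "storage", "carrying": None,
           "map": ["storage", "hallway", "B204"]}

prompt = (
    "You are the plan library of a BDI agent. Reply with only a JSON list "
    "of primitive actions that achieves the goal, given the agent's beliefs.\n"
    f"Goal: {goal}\nBeliefs: {json.dumps(beliefs)}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
# A real agent would validate this output before adopting it as a plan;
# LLM replies are not guaranteed to be well-formed JSON.
plan = json.loads(response.choices[0].message.content)
print(plan)  # e.g. [{"action": "pickup", ...}, {"action": "move", ...}]
```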
Securing End-to-End Encrypted File Sharing Services with the Messaging Layer Security Protocol
(2025)
To protect data on the servers of cloud service providers, file-sharing services rely on End-to-End Encryption (E2EE). However, existing solutions have weaknesses that allow attackers to bypass E2EE permanently after stealing a client's keys once. In this paper, a concept for an E2EE file-sharing service is proposed which does not have this vulnerability. It is based on Messaging Layer Security (MLS) groups for key distribution, an authentication system based on asymmetric cryptography, access rights based on Attribute-Based Access Control (ABAC), and a tamper-proof versioned storage system for synchronising sensitive data. The applicability of the concept is demonstrated by a prototype implementation and an evaluation based on benchmarks and a security analysis. Overall, the concept can fulfil the requirements of a basic file-sharing service while providing stronger security guarantees than existing solutions.
In an era where AI-powered chatbots are increasingly being integrated into education and corporate learning, it is critical to determine whether these approaches benefit all learners or primarily cater to those with specific preferences. This study explores the interplay between learning preferences and learning outcomes in communication training using an AI-powered chatbot. In a field experiment with 17 participants, systematic thinkers and intrinsically motivated learners reported higher satisfaction and greater skill improvement, while those who preferred model learning and direct feedback benefited less. These findings suggest that AI-powered chatbots should be carefully designed to accommodate diverse learners and mitigate potential negative effects.
In turbomachinery blade design, rapid and accurate performance prediction is essential to accelerate optimization and reduce reliance on costly high-fidelity simulations. Traditional data-driven approaches often use dense surface point-cloud representations as input features, requiring extensive training datasets and computational resources. This work presents a more efficient methodology leveraging a compact B-Spline-based surface representation, where control points serve as input features, significantly reducing geometric dimensionality and computational overhead.
A systematic Design of Experiments (DoE) is performed to generate a diverse set of blade geometries for NASA Rotor 67. Each design is evaluated via computational fluid dynamics (CFD) simulations in ANSYS CFX, providing key aerodynamic performance metrics such as isentropic efficiency. We train and compare Graph Convolutional Neural Networks (GCNN) and Random Forest Regression (RFR) models to predict blade performance directly from the reduced control-point parameterization. Incorporating first- and second-order geometric derivatives (gradients and Laplacians) into the feature set significantly enhances predictive accuracy and stability, capturing essential curvature-related flow physics.
Results demonstrate that this B-Spline-based, CAD-centric methodology can achieve competitive accuracy in predictions with as few as 150–200 training simulations—comparable to other GCNN-based approaches. Consequently, the proposed framework reduces training overhead from days to minutes, enabling faster, more cost-effective turbomachinery design workflows and guiding optimization toward high-performing blade geometries.
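A minimal sketch of the reduced control-point parameterization with derivative-augmented features feeding a Random Forest surrogate (the grid shapes, toy efficiency target, and finite-difference features are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical data: each blade is a small grid of B-spline control points
# (here 6 x 4 points with 3 coordinates), flattened into a feature vector.
rng = np.random.default_rng(3)
n_designs, nu, nv = 180, 6, 4
ctrl = rng.normal(0, 1, size=(n_designs, nu, nv, 3))

def geometric_features(cp):
    """Control points plus finite-difference gradients and Laplacians,
    mimicking the first/second-order geometric features described above."""
    grad_u = np.diff(cp, axis=0)               # first-order along u
    grad_v = np.diff(cp, axis=1)               # first-order along v
    lap = np.diff(cp, 2, axis=0)[:, 1:-1] + np.diff(cp, 2, axis=1)[1:-1]
    return np.concatenate([cp.ravel(), grad_u.ravel(),
                           grad_v.ravel(), lap.ravel()])

X = np.array([geometric_features(c) for c in ctrl])
# Toy stand-in for the CFD-computed isentropic efficiency.
y = 0.9 - 0.001 * np.linalg.norm(ctrl.reshape(n_designs, -1), axis=1)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```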
Numerical Study of Ice Accretion on Fan Blades: Implications for the Design of Blade Geometries
(2025)
Ice formation on aircraft components due to the impact of supercooled droplets poses a severe safety risk. In particular, the formation of ice on the fan blades can lead to vibrations that affect the entire engine. While numerous studies have examined the effects of environmental conditions on ice accumulation, the influence of blade geometry has received little attention. This study investigates how variations in blade geometry affect ice accretion in a low-pressure compressor using a numerical approach. A Design of Experiments (DoE) is conducted on the NASA Rotor67, focusing on the sensitivity of ice formation to geometric modifications. The workflow includes geometry generation (ParaBlade), flow simulation (ANSYS CFX), and ice accretion modeling (ANSYS FENSAP-ICE) under rime ice conditions. The results reveal a strong correlation between the inlet metal angle and both accreted ice mass and maximum ice thickness. Furthermore, designs with good aerodynamic performance tend to exhibit higher ice accumulation. These findings enhance the understanding of icing behavior in low-pressure compressors and offer valuable insights for optimizing blade design in adverse environmental conditions.
This paper introduces the LoRaWAN Collaboration Framework (LCF), a strategic blueprint for deploying and managing LoRaWAN infrastructures in smart cities with an emphasis on rural and small municipalities. LoRaWAN technology is distinguished by its capability to support long-range, low-power IoT applications, making it ideal for extensive and sparsely populated areas. The LCF aims to address common challenges in these settings, such as limited technical expertise, financial constraints, and the need for cross-municipal cooperation. It outlines roles and responsibilities across various stakeholders including municipal authorities, IT service providers, application developers, and end-users. The framework emphasizes the balance of technological, economic, ecological, and social sustainability in line with the United Nations' Sustainable Development Goals. In this paper we describe the experiences from several LoRaWAN projects in small towns and municipalities, the derived framework, and future research directions towards ensuring the economic viability of the proposed model.
Future-proofing Education: A Prototype for Simulating Oral Examinations Using Large Language Models
(2023)
This study explores the impact of Large Language Models (LLMs) in higher education, focusing on an automated oral examination simulation using a prototype. The design considerations of the prototype are described, and the system is evaluated with a select group of educators and students. Technical and pedagogical observations are discussed. The prototype proved to be effective in simulating oral exams, providing personalized feedback, and streamlining educators' workloads. The promising results of the prototype show the potential for LLMs in democratizing education, inclusion of diverse student populations, and improvement of teaching quality and efficiency.
On-demand transport refers to mobility services that are available on request. The aim is to use flexible mobility services to close gaps in rural public transport, particularly in sparsely populated areas and at off-peak times of day. However, introducing such services is extremely difficult for many municipalities because of the high licence fees for proprietary software, especially in structurally weak regions that need such services most. The project "Open-Source-Software für ländlichen On-Demand-Verkehr" (OSLO) examined the feasibility of a low-threshold open-source solution. In the long term, this solution is to be integrated into the open-source mobility platform bbnavi in order to facilitate the use of existing mobility data, ensure interoperability, and simplify adaptation to other regions. The concept comprises a software architecture, an intermodal routing algorithm developed specifically for rural public transport, and an operating and organizational model ([1]). This paper describes how an efficient software architecture was designed for the OSLO project. The central topics include the structuring of the components, the application of GTFS-Flex v2, the integration of the adapted routing algorithm, and the connection of external interfaces.
This paper introduces the LoRaWAN Collaboration Framework (LCF), a strategic blueprint for deploying and managing LoRaWAN infrastructures in smart cities with an emphasis on rural and small municipalities. LoRaWAN technology distinguishes itself by its capability to support long-range, low-power IoT applications, making it ideal for extensive and sparsely populated areas. The LCF aims to address common challenges in these settings, such as limited technical expertise, financial constraints, and the need for cross-municipal cooperation. It outlines roles and responsibilities across various stakeholders including municipal authorities, IT service providers, application developers, and end-users. The framework emphasizes the balance of technological, economic, social and ecological sustainability in line with the United Nations' Sustainable Development Goals. In this paper, we describe the experiences from several LoRaWAN projects in small towns and municipalities in Germany and give some insights into these use cases, the derived collaboration framework, and other considerations to weigh before implementing LoRaWAN infrastructures.
Hospital IT faces a steadily growing threat to the security of patient data and hospital operations. The healthcare sector is one of the critical infrastructures that are subject to increasingly strict regulation. The most recent regulation is the German IT Security Act 2.0 (IT-Sicherheitsgesetz 2.0), which, among other things, requires measures for detecting attacks. This article presents what a sector-specific implementation of this requirement could look like.
The growing number of connected medical devices in hospitals poses serious operational technology (OT) security challenges. Effective countermeasures require a structured analysis of the communication interfaces and security configurations of individual devices.
State of the art
Although Manufacturer Disclosure Statements for Medical Device Security (MDS2, Version 2019) offer relevant information, they are rarely integrated into cybersecurity workflows. Existing studies are limited in scope and lack scalable methodologies for systematic evaluation.
Concept
This study analyzed 209 MDS2 documents and 161 security white papers to extract structured information on ports, protocols, and protective measures. Over 52,000 question–answer pairs were converted into a machine-readable format using customized parsing and validation routines. The aim was to establish whether this dataset could inform risk assessments and future applications involving Large Language Models (LLMs).
Implementation
The analysis revealed 367 distinct ports, including common protocols such as HTTPS (443), DICOM (104), and RDP (3389), as well as vendor-specific proprietary ports. Approximately 40% of the devices used over 20 ports, indicating a broad attack surface. OCR errors and inconsistent formatting required manual corrections. A consolidated dataset was developed to support clustering, comparison across vendors and versions, and preparation for downstream LLM use, particularly via structured SBOM and configuration data.
Lessons learned
Although no model training was conducted, the structured dataset can support AI-based OT security workflows. The findings highlight the critical need for up-to-date, machine-readable manufacturer data in standardized formats and schemas. Such information could greatly enhance the automation, comparability, and scalability of hospital cybersecurity measures.
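A minimal sketch of the kind of parsing-and-validation routine described above (the question IDs, wording, and line layout are placeholders, not the real MDS2 2019 schema):

```python
import re
import csv

# Hypothetical raw MDS2 lines after OCR, e.g.:
#   "DTSEC-1 Does the device transmit PHI over the network? Yes"
QA_PATTERN = re.compile(r"^(?P<qid>[A-Z]+-\d+)\s+(?P<question>.*?)\s+"
                        r"(?P<answer>Yes|No|N/A)\s*$")

def parse_mds2(lines):
    """Turn raw statement lines into structured question-answer records;
    lines that fail validation (e.g. OCR garbage) are silently dropped."""
    records = []
    for line in lines:
        m = QA_PATTERN.match(line.strip())
        if m:
            records.append(m.groupdict())
    return records

raw = [
    "DTSEC-1 Does the device transmit PHI over the network? Yes",
    "DTSEC-2 Can default passwords be changed? No",
    "garbled OCR fragment ###",                 # dropped by validation
]
rows = parse_mds2(raw)
with open("mds2_structured.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["qid", "question", "answer"])
    writer.writeheader()
    writer.writerows(rows)
print(f"parsed {len(rows)} of {len(raw)} lines")
```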
The threat situation due to cyber attacks in hospitals is escalating, and patient lives are at risk. One significant source of potential vulnerabilities is medical cyber-physical systems (MCPS). Detecting intrusions in this environment faces challenges different from other domains, mainly due to the heterogeneity of devices, the diversity of connectivity types, and the variety of terminology. To summarize existing results, we conducted a structured literature review (SLR) following the guidelines of Kitchenham et al. for SLRs in software engineering. We developed six research questions regarding detection approach, detection location, included features, adversarial focus, utilized datasets, and intrusion prevention. We identified that most researchers focused on an anomaly-based detection approach at the network layer. The primary focus was on the detection of malicious insiders. While several researchers used publicly available datasets for training and testing their algorithms, the lack of suitable datasets resulted in the development of testbeds consisting of various medical devices. Based on the results, we formulated five future research topics. First, the special conditions of hospital networks, the MCPS deployed within them, and the contrasts to other IT and OT environments should be examined. Thereupon, MCPS-specific datasets should be created that allow researchers to address the health domain's unique requirements and possibilities. At the same time, endeavors aimed at standardization in this area should be supported and expanded. Moreover, the use of medical context for attack detection should be further explored. Last but not least, efforts for MCPS-tailored intrusion prevention should be intensified. This way, the emerging threat landscape can be addressed, IT security in hospitals can be improved, and patient health can be protected.
Advancing Image Spam Detection: Evaluating Machine Learning Models Through Comparative Analysis
(2025)
Image-based spam poses a significant challenge for traditional text-based filters, as malicious content is often embedded within images to bypass keyword detection techniques. This study investigates and compares the performance of six machine learning models—ResNet50, XGBoost, Logistic Regression, LightGBM, Support Vector Machine (SVM), and VGG16—using a curated dataset containing 678 legitimate (ham) and 520 spam images. The novelty of this research lies in its comprehensive side-by-side evaluation of diverse models on the same dataset, using standardized dataset preprocessing, balanced data splits, and validation techniques. Model performance was assessed using evaluation metrics such as accuracy, receiver operating characteristic (ROC) curve, precision, recall, and area under the curve (AUC). The results indicate that ResNet50 achieved the highest classification performance, followed closely by XGBoost and Logistic Regression. This work provides practical insights into the strengths and limitations of traditional, ensemble-based, and deep learning models for image-based spam detection. The findings can support the development of more effective and generalizable spam filtering solutions in multimedia-rich communication platforms.
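A minimal sketch of the comparative evaluation protocol on a standardized split (synthetic stand-in features; the study's image dataset and full six-model lineup are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# Placeholder features standing in for embedded spam-image data,
# with the paper's class sizes: 678 ham (0) and 520 spam (1).
rng = np.random.default_rng(0)
X = rng.normal(size=(1198, 128))
y = np.concatenate([np.zeros(678), np.ones(520)]).astype(int)
X[y == 1] += 0.4                      # give spam a separable signal

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: acc={accuracy_score(y_te, model.predict(X_te)):.3f} "
          f"auc={roc_auc_score(y_te, proba):.3f}")
```

The same stratified split and metric set applied to every model is what makes the side-by-side comparison the abstract emphasizes meaningful.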
Digital platforms can grow by motivating users to explore new ways to use a wider range of affiliated products and services. This work explores the power of IT Identity to motivate such innovative use, through identity's ability to intrinsically motivate behavior. Data from 209 Amazon.com users indicates that IT Identity may cause Trying to Innovate with an IT, mediated by Self-Esteem.
Assessing Cybersecurity of Internet-Facing Medical IT Systems in Germany & Spain Using OSINT Tools
(2025)
This paper investigates cybersecurity threats in medical IT (Information Technology) systems exposed to the Internet. To that end, we develop a methodology and build a data processing pipeline that gathers data from different OSINT (Open Source Intelligence) sources and processes it to obtain relevant cybersecurity metrics. To validate its operation and usefulness, we apply it to two countries, Germany and Spain, allowing us to study the main threats that affect medical IT systems in these countries. Our initial findings reveal that 20% of German hosts and 15% of Spanish hosts tagged as medical devices have at least one CVE (Common Vulnerabilities and Exposures) with a CVSS (Common Vulnerability Scoring System) score graded as critical (i.e., a value of 8 or greater). Moreover, we found that 74% of the CVEs found on German hosts date from before 2020, whereas for Spanish hosts the percentage is 60%. This indicates that medical IT systems exposed to the Internet are seldom updated, which further increases their exposure to cyberthreats. Based on these initial findings, we conclude the paper by providing some insights on how to improve the cybersecurity of these systems.
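A minimal sketch of the metric-computation stage of such a pipeline is shown below, applied to already-harvested OSINT host records. The record structure and field names are illustrative assumptions, and the thresholds follow the figures quoted above.

    hosts = [
        {"country": "DE", "tags": ["medical"],
         "cves": [{"id": "CVE-2017-0144", "cvss": 8.1}]},
        {"country": "DE", "tags": ["medical"], "cves": []},
        {"country": "ES", "tags": ["medical"],
         "cves": [{"id": "CVE-2021-34527", "cvss": 9.0}]},
    ]

    def share_with_critical_cve(hosts, country, threshold=8.0):
        """Fraction of a country's medical hosts with at least one critical CVE."""
        medical = [h for h in hosts
                   if h["country"] == country and "medical" in h["tags"]]
        critical = [h for h in medical
                    if any(c["cvss"] >= threshold for c in h["cves"])]
        return len(critical) / len(medical) if medical else 0.0

    def share_of_old_cves(hosts, country, cutoff_year=2020):
        """Fraction of CVEs on a country's hosts published before the cutoff."""
        cves = [c for h in hosts if h["country"] == country for c in h["cves"]]
        old = [c for c in cves if int(c["id"].split("-")[1]) < cutoff_year]
        return len(old) / len(cves) if cves else 0.0

    print("DE critical share:", share_with_critical_cve(hosts, "DE"))
    print("DE pre-2020 CVE share:", share_of_old_cves(hosts, "DE"))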
Environmental monitoring systems often operate continuously, measuring various parameters, including carbon dioxide (CO2) levels, relative humidity (RH), temperature (T), and other factors that affect environmental conditions. Such systems are often referred to as smart systems because they can autonomously monitor and respond to environmental conditions and can be integrated both indoors and outdoors to detect, for example, structural anomalies. However, these systems typically suffer from high energy consumption, data overload, and large equipment sizes, which make them difficult to install in constrained spaces. Three challenges therefore remain unresolved: efficient energy use, accurate data measurement, and compact installation. To address these limitations, this study proposes a two-to-one threshold sampling approach, in which a CO2 measurement is triggered only when the specified T and RH change thresholds are exceeded. This event-driven method avoids redundant data collection, minimizes power consumption, and is suitable for resource-constrained embedded systems. The proposed approach was implemented on a low-power, small-form-factor, custom-built multivariate sensor based on the PIC16LF19156 microcontroller. For comparison, a commercial monitoring system and sensor modules based on the Arduino Uno were used. By sampling only at key points in the T and RH signals, the number of CO2 measurements was significantly reduced without loss of essential signal characteristics. Signal reconstruction from the reduced points demonstrated high accuracy, with a mean absolute error (MAE) of 0.0089 and a root mean squared error (RMSE) of 0.0117. Despite reducing the number of CO2 measurements by approximately 41.9%, the essential characteristics of the signal were preserved, highlighting the efficiency of the proposed approach. Despite its effectiveness under controlled conditions (indoors, in buildings), environmental factors such as the presence of people, ventilation systems, and room layout can significantly alter the dynamics of CO2 concentrations, which may limit the applicability of this approach. Future studies will focus on adaptive threshold mechanisms and context-dependent models that can adjust to changing conditions, broadening the range of practical situations in which the proposed two-to-one sampling technique can be applied.
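A minimal sketch of the two-to-one threshold sampling logic might look as follows; the threshold values and the sensor stub are illustrative assumptions, not the firmware of the PIC16LF19156 implementation.

    # T and RH are sampled continuously (cheap); the CO2 sensor is triggered
    # only when either has drifted past a threshold since the last CO2 reading.
    T_THRESHOLD = 0.5   # degrees Celsius (assumed value)
    RH_THRESHOLD = 2.0  # percentage points (assumed value)

    def should_sample_co2(t_now, rh_now, t_ref, rh_ref):
        """Event-driven trigger: sample CO2 only on significant T or RH change."""
        return (abs(t_now - t_ref) >= T_THRESHOLD or
                abs(rh_now - rh_ref) >= RH_THRESHOLD)

    def read_co2():
        return 420.0  # stub for the actual CO2 sensor driver

    def run(readings):
        t_ref, rh_ref = None, None
        samples = []
        for t, rh in readings:
            if t_ref is None or should_sample_co2(t, rh, t_ref, rh_ref):
                samples.append((t, rh, read_co2()))  # expensive measurement
                t_ref, rh_ref = t, rh                # reset the reference point
        return samples

    # Only the first, third, and fourth readings trigger a CO2 measurement.
    print(run([(21.0, 45.0), (21.1, 45.2), (21.8, 45.1), (21.9, 48.0)]))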
This study investigated the use of a semi-automated, Retrieval-Augmented Generation (RAG)-based multi-agent architecture to analyze security-relevant data and assemble specialized exploitation paths targeting medical devices. The input dataset comprised device-specific sources, namely Manufacturer Disclosure Statement for Medical Device Security (MDS2) documents and Software Bills of Materials (SBOMs), enriched with public vulnerability databases, including Common Vulnerabilities and Exposures (CVE), Known Exploited Vulnerabilities (KEV), and Metasploit exploit records. The objective was to assess whether a modular, Large Language Model (LLM)-driven agent system could autonomously correlate device metadata with known vulnerabilities and existing exploit information to support structured threat modeling. The architecture follows a static RAG design based on predefined prompts and fixed retrieval logic, without autonomous agent planning or dynamic query adaptation. The developed Vulnerability Intelligence for Threat Analysis in Medical Security (VITAMedSec) system operates under human-prompted supervision and successfully synthesizes actionable insights and exploitation paths without requiring manual step-by-step input during execution. Although technically coherent results were obtained under controlled conditions, real-world validation remains a critical avenue for future research. This study further discusses the dual-use implications of such an agent-based framework, its relevance to patient safety in medical device cybersecurity, and the broader applicability of the proposed architecture to other critical infrastructure sectors. These findings emphasize both the technical potential and the ethical responsibility of applying semi-automated AI workflows in medical cybersecurity contexts.
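The correlation step that such a pipeline performs can be sketched as follows: SBOM components are matched against vulnerability records before an LLM summarizes candidate exploitation paths. All records, field names, and the prompt wording are illustrative assumptions rather than the VITAMedSec implementation.

    # Toy device SBOM and vulnerability records (illustrative data only).
    sbom = [{"name": "openssl", "version": "1.0.2"},
            {"name": "busybox", "version": "1.30.1"}]

    cve_db = {("openssl", "1.0.2"): ["CVE-2016-2107"],
              ("log4j", "2.14.1"): ["CVE-2021-44228"]}
    kev = {"CVE-2016-2107"}  # known exploited vulnerabilities
    metasploit = {"CVE-2016-2107": "auxiliary/scanner/ssl/openssl_aesni"}

    findings = []
    for comp in sbom:
        for cve in cve_db.get((comp["name"], comp["version"]), []):
            findings.append({
                "component": comp["name"],
                "cve": cve,
                "actively_exploited": cve in kev,
                "exploit_module": metasploit.get(cve),
            })

    # The structured findings would then be embedded into a fixed RAG prompt:
    prompt = ("Given these device vulnerabilities, outline plausible "
              f"exploitation paths for threat modeling:\n{findings}")
    print(prompt)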
In recent years, many studies have shown that light pollution adversely affects wildlife, ecosystems, and human well-being. To assess and mitigate these impacts, it is crucial that measurements of night sky quality are reliable and comparable across sites and instruments. However, the lack of standardised night sky brightness metrology and the use of a wide variety of measurement instruments with varying spectral responsivity and field-specific measurement units hinder meaningful comparison. We collected night sky spectra from 44 nights at dark locations (existing and proposed dark sky parks). Based on this observational dataset, we created a larger random set of spectra. These data served to fit conversion parameters for a wide variety of units. We demonstrate that RGB cameras, when used as multichannel measuring devices, enable the retrieval of measurements that facilitate conversions between different units. Furthermore, even airglow can be quantified from a given measurement, enabling the determination of oxygen and sodium emission line contributions. Since this contribution is not negligible, quantitative measurements of its magnitude are crucial for accurately assessing light pollution at dark-sky sites. Using our spectral measurement database, we constructed the most probable transformation from the cameras' R, G, and B channel dsu values to other units, such as the astronomical Bessel V band magnitudes. The unit conversion formulas provided in this paper are valid for mildly polluted sites (existing and proposed dark sky places), in the 21–22 mag_V/arcsec² range.
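A toy version of fitting such a unit conversion is sketched below: a least-squares combination of R, G, and B channel values (in dsu) approximates the V-band flux, which is then converted to magnitudes. The synthetic data, the purely linear flux model, and the zero point are assumptions; the paper fits its transformation on measured night-sky spectra.

    import numpy as np

    rng = np.random.default_rng(1)
    rgb = rng.uniform(0.5, 2.0, size=(200, 3))  # dsu values per channel
    true_coeff = np.array([0.15, 0.70, 0.15])   # hidden "ground truth" weights
    v_flux = rgb @ true_coeff * (1 + 0.01 * rng.standard_normal(200))

    # Ordinary least squares for the channel weights.
    coeff, *_ = np.linalg.lstsq(rgb, v_flux, rcond=None)

    def to_v_magnitude(r, g, b, zero_point=0.0):
        """Convert channel dsu values to a V-band magnitude per arcsec^2."""
        flux = np.dot(coeff, [r, g, b])
        return -2.5 * np.log10(flux) + zero_point

    print("fitted weights:", coeff.round(3))
    print("example V mag:", round(to_v_magnitude(1.0, 1.2, 0.9), 3))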
Belief-Desire-Intention (BDI) agents enable complex behavior in multi-agent systems by defining goals and implementing plans to achieve them. Typically, developers pre-define plans, which limits adaptability. This paper presents an approach for BDI agents to dynamically generate plans using large language models (LLMs), exploiting contextual information such as goals, beliefs, and structural data. We evaluate the feasibility, reliability, and limitations of this method, and discuss its implications for the automation of BDI-based agents. Our results suggest that LLMs can enhance agent autonomy by reducing the need for manual plan definition, while maintaining goal-oriented reasoning.
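A minimal sketch of the plan-generation step might look as follows: the agent's goal, beliefs, and available actions are serialized into a prompt, and the LLM's reply is parsed and validated into an executable plan. The call_llm placeholder, the action set, and the JSON response format are assumptions, not a specific API.

    import json

    def call_llm(prompt: str) -> str:
        # Placeholder: would invoke an actual LLM endpoint here.
        return '["move_to(warehouse)", "pick_up(box)", "move_to(dock)"]'

    def generate_plan(goal, beliefs, actions):
        prompt = (
            "You are a BDI agent. Respond with a JSON list of action calls.\n"
            f"Goal: {goal}\n"
            f"Beliefs: {json.dumps(beliefs)}\n"
            f"Available actions: {actions}\n"
        )
        steps = json.loads(call_llm(prompt))
        # Reject plans that reference actions the agent cannot execute.
        if not all(step.split("(")[0] in actions for step in steps):
            raise ValueError("LLM proposed an unknown action")
        return steps

    plan = generate_plan(
        goal="deliver(box, dock)",
        beliefs={"at": "warehouse", "holding": None},
        actions=["move_to", "pick_up", "put_down"],
    )
    print(plan)

The validation step matters in practice: it is one simple way to keep the LLM's free-form output within the agent's actual capabilities before the plan is adopted as an intention.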
Large Language Models (LLMs) demonstrate strong performance on different language tasks, but tend to hallucinate, i.e., generate plausible but factually incorrect outputs. Recently, several approaches that integrate Knowledge Graphs (KGs) into LLM inference have been published to reduce hallucinations. This paper presents a systematic literature review (SLR) of such approaches. Following established SLR methodology, we identified relevant work by systematically searching different academic online libraries and applying a selection process. Nine publications were chosen for in-depth analysis. Our synthesis reveals differences and similarities in how the KG is accessed and traversed and how the context is finally assembled. KG integration can significantly improve LLM performance on benchmark datasets and, in addition to mitigating hallucinations, enhance reasoning capabilities, explainability, and access to domain-specific knowledge. We also point out current limitations and outline directions for future work.
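One common integration pattern of this kind can be sketched briefly: triples around the entities mentioned in the question are retrieved from the KG and assembled into the LLM context before generation. The toy graph, the fixed traversal depth, and the prompt template are illustrative assumptions.

    # Toy KG: subject -> list of (predicate, object) edges.
    kg = {
        "Aspirin": [("treats", "Headache"), ("interactsWith", "Warfarin")],
        "Warfarin": [("isA", "Anticoagulant")],
    }

    def retrieve_triples(entities, depth=1):
        """Collect (subject, predicate, object) facts up to a fixed depth."""
        triples, frontier = [], list(entities)
        for _ in range(depth):
            next_frontier = []
            for subject in frontier:
                for predicate, obj in kg.get(subject, []):
                    triples.append((subject, predicate, obj))
                    next_frontier.append(obj)
            frontier = next_frontier
        return triples

    question = "Can Aspirin be taken together with Warfarin?"
    facts = retrieve_triples(["Aspirin", "Warfarin"])
    context = "\n".join(f"{s} {p} {o}" for s, p, o in facts)
    prompt = f"Use only these facts:\n{context}\n\nQuestion: {question}"
    print(prompt)  # would be sent to the LLM for grounded generation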
This research examines the importance of an open error culture for the success of organizations. The aim is to analyze the effects of a Learning-from-Failure Culture (LFFC) on innovativeness and resilience. Through literature review, surveys, and case studies, recommendations for action are to be developed for HR managers and executives. It is expected that a positive error culture fosters learning and collaboration and increases the capacity to innovate. The results are intended to serve as a basis for training programs and HR strategies to strengthen the error culture in companies.
For more than a decade, it has been well known that Industrial Control Systems (ICS) are under attack, and attackers nowadays increasingly use stealthy malware (i.e., stegomalware) implemented by steganographic embedding methods to infiltrate and exfiltrate hidden information. Unfortunately, current mechanisms to distinguish between network steganographic embedding methods and embedded message types need improvement for a potential attribution of attackers. For the analysis of steganographic embedding methods utilized in stealthy malware, the work presented in this paper builds upon a state-of-the-art analysis testbed proposed earlier, which is recapitulated here. It offers the opportunity to analyze network steganographic embedding methods in ICS and to elaborate methods to detect and distinguish between them, in order to gain forensic information for the attribution of potential attackers and their methods. In this work, we introduce a novel machine-learning-based approach to distinguish between five selected embedding methods and two embedded message types. We use the analysis testbed to evaluate and determine the accuracy of the novel approach compared to a state-of-the-art approach. In our extensive evaluation, the novel approach was able to distinguish between network steganographic embedding methods with an average accuracy of 85.7%, an improvement of +5.9% over the state of the art, enabling a more accurate attribution of attackers. Additionally, the novel approach improves the accuracy of distinguishing between embedding method and embedded message type by +9.3% in comparison to the evaluated state-of-the-art approach.
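The classification task can be illustrated with a small sketch in which statistical per-flow features (here purely synthetic) are used to distinguish between five embedding methods. The feature choice and the random-forest classifier are assumptions; the paper's own feature set and learning algorithm may differ.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n_per_class, n_features, n_methods = 200, 12, 5

    # Synthetic per-flow feature vectors (e.g. inter-arrival timing statistics,
    # header-field entropies), shifted per embedding method.
    X = np.vstack([rng.normal(loc=k, size=(n_per_class, n_features))
                   for k in range(n_methods)])
    y = np.repeat(np.arange(n_methods), n_per_class)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean accuracy over 5 folds: %.3f" % scores.mean())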
Recently, an increasing number of IT security incidents involving malware that makes use of hidden and steganographic channels for malicious communication (a.k.a. "stegomalware") can be observed in the wild. In particular, the use of images to hide malicious code is rising. In consideration of this shift, a new model is proposed in this paper, which aims to help security professionals identify and analyze incidents revolving around steganographic malware in the future. The model focuses on practical aspects of steganalysis of communication data to elaborate properties that link back to previous code analysis knowledge. The model features two distinct roles that interact with a knowledge base which stores malware features and helps build a context for the incident. For evaluation, two image steganography malware types are chosen from popular databases (Malpedia and MITRE ATT&CK®) and analyzed in multiple steps, including steganalysis and code analysis. It is conceptually shown how the extracted features can be stored in a knowledge base for later use to identify stegomalware from communication data without the need for a thorough code analysis. This makes it possible to uncover previously hidden meta-information about the examined malicious programs and enrich the incident's forensic context traces, and thus enables thorough forensic insights, including attribution and improved preventive security measures in the future.
With the growing popularity of social networks and online brand engagement, small and medium-sized enterprises (SME) are facing challenges in effectively leveraging business value from interactions with their online communities. Social listening tools, enhanced by artificial intelligence (AI), help enterprises better understand customer needs and trends by synthesizing engagement and interaction data from various channels into actionable insights. However, prevalent tools often appear too costly, inaccessible, and complex for SME managers to handle. In response, we present this article on the design of "aiSnoop", an AI-assisted social listening tool for SME managers. The article describes how the tool was developed, demonstrated, and evaluated as a design artifact with an SME-contextualized design science research approach. We find that automation, usability and transparency, and advanced AI integration are important social listening tool capabilities for SMEs, contributing to the broader discussion on AI-assisted decision-support system design from an SME- and practice-oriented research perspective.
The unexpected discovery of intrinsic ferromagnetism in layered van der Waals materials has sparked interest in both their fundamental properties and their potential for novel applications. Recent studies suggest near room-temperature ferromagnetism in the pressurized van der Waals crystal CrGeTe3. We perform a comprehensive experimental and theoretical investigation of magnetism and electronic correlations in CrGeTe3, combining broad-frequency reflectivity measurements with density functional theory and dynamical mean-field theory calculations. Our experimental optical conductivity spectra trace the signatures of developing ferromagnetic order and of the insulator-to-metal transition (IMT) as a function of temperature and hydrostatic pressure. With increasing pressure, we observe the emergence of a mid-infrared feature in the optical conductivity, indicating the development of strong orbital-selective correlations in the high-pressure ferromagnetic phase. We find a distinct relationship between the plasma frequency and Curie temperature of CrGeTe3, which strongly suggests that a double-exchange mechanism is responsible for the observed near room-temperature ferromagnetism. Our results clearly demonstrate the existence of a charge-transfer gap in the metallic phase, ruling out its previously conjectured collapse under pressure.
Successful Augmented Reality (AR) solutions require high technology acceptance and desirability among users. Applying appropriate design methodologies can enhance the development of desirable AR systems and help streamline design research projects. This article presents a methodological literature review of 58 pertinent articles, exploring the representation of methodological design frameworks in current applied AR research practice. As a result, User-Centered Design was identified most frequently by a large margin. However, it could be observed that most articles from the field of applied AR describe their design processes only marginally. Moreover, Human-Centered Design, Participatory Design, and Value Sensitive Design were only scantily represented; less than half of the papers followed a Design Science Research or Action Design Research approach. Several implications for applied research and practice are derived from these findings and discussed together with future research opportunities. Overall, this study contributes to the refinement of methodological design practice in the field of applied AR research.
Light pollution is an emerging ecological threat. To mitigate its negative consequences, creative inter- and transdisciplinary solutions and societal interactions are needed. To this end, we introduce nocturnal umbrella species representative of light-sensitive biodiversity whose protection will safeguard vital ecosystem services and a wide range of co-occurring species.
Dementia, marked by cognitive decline, significantly impacts daily life. With global prevalence rising, traditional treatments manage symptoms but have side effects and offer no cure. Non-pharmacological interventions, like serious games, are gaining importance. This study assesses the feasibility and benefits of serious games for people with mild to moderate dementia over a 10-week intervention. Sixty-one patients were recruited, with 35 completing the study. The intervention included six games focusing on physical and cognitive training. Outcome measures were motor function, cognitive assessments, quality of life, and depression. Results showed significant improvements in dynamic balance (p = .013) but no significant changes in other measures. The findings suggest that serious games are feasible and can improve motor functions like balance. However, short intervention periods may limit their impact on cognitive function and quality of life. Longer interventions and personalized game designs are recommended for greater benefits.
Light pollution poses significant ecological challenges for nocturnal animals reliant on natural light for migration, orientation, and circadian rhythms. The physiological effects of abrupt exposure to artificial light at night (ALAN) on migratory fish, such as the light experienced when passing near illuminated infrastructure, remain poorly understood. This study investigates the physiological responses of brown trout (Salmo trutta) smolts to low-intensity (0.02 lx) and short-term (30 s) ALAN, simulating nocturnal migration light conditions near illuminated bridges. To evaluate the influence of social dynamics, trout were tested individually (solitary) or in groups of six. Using continuous cardiac monitoring with data storage tags, alongside analyses of oxidative stress markers and adenylate kinase (AK) activity in the heart, we identified distinct patterns of physiological responses. Solitary fish exhibited significant increases in heart rate variability (HRV) following repeated ALAN exposure, suggesting impaired physiological regulation. In contrast, trout in groups displayed consistently lower HRV over the entire 90-min experiment, implying that social dynamics likely influenced a sustained oxidative stress response, corroborated by increased AK activity. Oxidative stress markers further reflected social effects, with significant upregulation of key antioxidant enzymes (sod1, sod2, gpx1, gpx4) and elevated lipid peroxidation, identifying lipids as primary oxidative targets. The observed divergence between superoxide dismutase (SOD) activity and sod gene expression suggests adaptive post-transcriptional regulation to maintain redox balance under combined environmental and social stress. These findings reveal that social dynamics under ALAN can amplify physiological stress, potentially affecting migratory outcomes.
People in organizations make decisions. As a rule, a goal is pursued, and the decision is intended to set measures in motion that promote the achievement of that goal. Final decisions are preceded by processes that prepare the decision-making. Within this framework, various constructs, such as actions, technologies, products, procedures, and control systems, are evaluated with regard to their contribution to achieving the goal. It is not new to call for a holistic evaluation here rather than looking at a single effect. Nevertheless, one often gets the impression that decision-making in politics, society, and public institutions, but also in companies, pays too little attention to possible further side effects and consequences of decisions. This raises the question of whether the basic idea of technology assessment offers helpful instruments that can enrich decision-making processes in the operational fields of innovation, technology development, and management.
Lernen mit Neuronalen Netzen
(2024)
The seat of human intelligence is the brain. What could be more natural, then, than to use findings from brain research to create artificial intelligence? Artificial neural networks simulate, in a simplified and modified form, the basic structure of the brain and, as a powerful variant of machine learning, can be used in many ways to learn complex functions, e.g., in image and speech processing, from large amounts of data. Many current and headline-making applications of artificial intelligence, such as powerful chatbots and image generators, image recognition in driver assistance systems, the detection of disease patterns in diagnostic imaging, forecasts in finance, and many more, are based on artificial neural networks. This chapter covers the basic structure of a single artificial neuron and of a multi-layer neural network. It explains how a neural network processes inputs and how it can be trained with data to perform a specific function.
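As a minimal illustration of the concepts covered in the chapter, the following sketch trains a single-hidden-layer network with gradient descent to learn the XOR function, which a single neuron cannot represent; the architecture and hyperparameters are arbitrary choices for demonstration.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

    for _ in range(5000):
        # Forward pass: each layer computes a weighted sum plus activation.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error back through the layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0]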
Ensuring the reliability, safety, and efficiency of railway systems is increasingly critical in global transportation networks. This paper addresses the necessity for advanced monitoring systems by introducing a multivariate energy-efficient wireless sensor node designed for proactive maintenance in rail applications.
Ethical considerations in software requirements engineering are a critical but often overlooked aspect of the software development process. However, demands for transparency and autonomy in the way IT artefacts are designed, described, used, applied, and integrated into everyday life are becoming more pressing within society. In this research, the process of software engineering is taken as an illustrative model for the proactive incorporation of ethical principles into the design of a software artefact. Specifically, the phases of requirements elicitation and analysis are expanded with ethical aspects, since this is where the first steps of constructing the software as an artefact are initiated and the first common ground of understanding is achieved. Rooted in security engineering, the SQUARE process is extended to provide a basic structure for incorporating ethical aspects into software design. In this way, social and moral values become central to the design and development of new technologies.
We introduce STAResNet, a ResNet architecture in Spacetime Algebra (STA) to solve Maxwell's partial differential equations (PDEs). Recently, networks in Geometric Algebra (GA) have been demonstrated to be an asset for truly geometric machine learning. In [1], GA networks were employed for the first time to solve PDEs, demonstrating increased accuracy over real-valued networks. In this work we solve Maxwell's PDEs both in GA and STA, employing the same ResNet architecture and dataset, to discuss the impact that the choice of the right algebra has on the accuracy of GA networks. Our study of STAResNet shows how the correct geometric embedding in Clifford networks gives a mean squared error (MSE) between ground truth and estimated fields up to 2.6 times lower than that obtained with a standard Clifford ResNet, with 6 times fewer trainable parameters. STAResNet demonstrates consistently lower MSE and higher correlation regardless of scenario. The scenarios tested are: the sampling period of the dataset; the presence of obstacles with either seen or unseen configurations; the number of channels in the ResNet architecture; the number of rollout steps; and whether the field is in 2D or 3D space. This demonstrates that choosing the right algebra in Clifford networks is a crucial factor for more compact, accurate, descriptive, and better generalising pipelines.
CGAPoseNet+GCAN: A Geometric Clifford Algebra Network for Geometry-aware Camera Pose Regression
(2024)
We introduce CGAPoseNet+GCAN, which enhances CGAPoseNet, an architecture for camera pose regression, with a Geometric Clifford Algebra Network (GCAN). With the addition of the GCAN we obtain a geometry-aware pipeline for camera pose regression from RGB images only. CGAPoseNet employs Clifford Geometric Algebra to unify quaternions and translation vectors into a single mathematical object, the motor, which can be used to uniquely describe camera poses. CGAPoseNet solves the issue of balancing rotation and translation components in the loss function, and can obtain results comparable to other approaches without the need for expensive tuning of the loss function or additional information about the scene, such as 3D point clouds, which might not always be available. CGAPoseNet, however, like several approaches in the literature, only learns to predict motor coefficients; it is unaware of the mathematical space in which predictions sit and of their geometrical meaning. By leveraging recent advances in Geometric Deep Learning, we modify CGAPoseNet with a GCAN: proposals of possible motor coefficients associated with a camera frame are obtained from the InceptionV3 backbone, and the GCAN downsamples them to a single motor through a sequence of layers that work in G_{4,0}. The network is hence geometry-aware, has multivector-valued inputs, weights, and biases, and preserves the grade of the objects it receives as input. CGAPoseNet+GCAN has almost 4 million fewer trainable parameters, and it reduces the average rotation error by 41% and the average translation error by 8.8% compared to CGAPoseNet. Similarly, it reduces rotation and translation errors by 32.6% and 19.9%, respectively, compared to the best performing PoseNet strategy. CGAPoseNet+GCAN reaches state-of-the-art results on 13 commonly employed datasets. To the best of our knowledge, it is the first experiment with GCANs applied to the problem of camera pose regression.
Cortical actomyosin flows, among other mechanisms, scale up spontaneous symmetry breaking and thus play pivotal roles in cell differentiation, division, and motility. According to many model systems, myosin motor-induced local contractions of initially isotropic actomyosin cortices are nucleation points for generating cortical flows. However, the positive feedback mechanisms by which spontaneous contractions can be amplified towards large-scale directed flows remain mostly speculative. To investigate such a process on spherical surfaces, we reconstituted and confined initially isotropic minimal actomyosin cortices to the interfaces of emulsion droplets. The presence of ATP leads to myosin-induced local contractions that self-organize and amplify into directed large-scale actomyosin flows. By combining our experiments with theory, we found that the feedback mechanism leading to a coordinated directional motion of actomyosin clusters can be described as asymmetric cluster vibrations, caused by intrinsic non-isotropic ATP consumption with spatial confinement. We identified fingerprints of vibrational states as the basis of directed motions by tracking individual actomyosin clusters. These vibrations may represent a generic key driver of directed actomyosin flows under spatial confinement in vitro and in living systems.
We employ Clifford Group Equivariant Neural Network (CGENN) layers to predict protein coordinates in a Protein Structure Prediction (PSP) pipeline. PSP is the estimation of the 3D structure of a protein, generally through deep learning architectures. Information about the geometry of the protein chain has been proven to be crucial for accurate predictions of 3D structures. However, this information is usually flattened into machine learning features that are not representative of the geometric nature of the problem. Leveraging recent advances in geometric deep learning, we redesign the 3D projector part of a PSP architecture with the addition of CGENN layers. CGENNs can achieve better generalization and robustness when dealing with data that show rotational or translational invariance, such as protein coordinates, which are independent of the chosen reference frame. CGENN inputs, outputs, weights, and biases are objects in the Geometric Algebra of 3D Euclidean space, i.e. G_{3,0,0}, and hence are interpretable from a geometrical perspective. We test 6 approaches to PSP and show that CGENN layers increase the accuracy in terms of GDT scores by up to 2.1%, with fewer trainable parameters compared to linear layers, and give a clear geometric interpretation of their outputs.
Lernen aus Daten
(2024)
This chapter explains the fundamentals of machine learning and distinguishes them from knowledge-based approaches to artificial intelligence. Different approaches to machine learning are differentiated and illustrated with examples. Building on this, societal connections and implications are examined.
GA-ReLU: an activation function for Geometric Algebra Networks applied to 2D Navier-Stokes PDEs
(2024)
Many differential equations describing physical phenomena are intrinsically geometric in nature. It has been demonstrated how this geometric structure of data can be captured effectively through networks sitting in Geometric Algebra (GA) that work with multivectors, making them suitable candidates for solving differential equations. GA networks, however, are still mostly uncharted territory. In this paper we focus on non-linearities, since applying them to multivectors is not a trivial task: they are generally applied in a point-wise fashion over each real-valued component of a multivector. This approach discards interactions between different elements of the multivector input and compromises the geometric nature of GA networks. To bridge this gap, we propose GA-ReLU, a GA approach to the rectified linear unit (ReLU), and show how it can improve the solution of Navier-Stokes PDEs.
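The contrast described above can be made concrete in a toy form: a component-wise ReLU treats the eight coefficients of a G(3) multivector as unrelated reals, while a multivector-aware variant gates the object as a whole. The scalar-part gating shown here is only one plausible construction for illustration and is not claimed to be the paper's GA-ReLU definition.

    import numpy as np

    # A multivector in G(3) as 8 coefficients:
    # [scalar, e1, e2, e3, e12, e13, e23, e123]
    mv = np.array([-0.2, 1.5, -0.7, 0.3, 0.9, -1.1, 0.4, 0.6])

    def relu_componentwise(m):
        """Standard approach: clip each coefficient independently."""
        return np.maximum(m, 0.0)  # breaks relations between components

    def relu_scalar_gated(m):
        """Multivector-aware variant: keep or suppress the multivector as one
        geometric object, depending on its scalar part."""
        return m if m[0] > 0 else np.zeros_like(m)

    print(relu_componentwise(mv))  # sign pattern of the multivector destroyed
    print(relu_scalar_gated(mv))   # multivector kept intact or zeroed whole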
Climate change, but also geopolitical circumstances, are moving topics such as energy efficiency and renewable energies more and more into the focus of the population, economy, and politics. As a result, the will to optimize new and existing energy systems extends from private individuals to companies and even entire communities. This work describes the development and usage of a new software tool called FINEconcepts, which creates a digital twin of an energy system. This virtual model can then be used to optimize the energy system based on annual costs, CO2 emissions, or other relevant criteria such as self-sufficiency. Because all system components, including renewable technologies, can be added as building blocks with chosen but changeable parameters, the software allows users to explore technologies that were previously considered too costly, irrelevant, or unrealistic, and to develop interest in and understanding of them. Projects implemented in small and large companies as well as in residential areas proved that the usage of FINEconcepts leads not only to more efficient energy systems through increased use of renewable energy, but also to increased knowledge and understanding in terms of energy. Besides economics, ecology, and security, understanding is an equally important factor in achieving a sustainable energy supply.
Human settlements on the Moon, crewed missions to Mars and space tourism will become a reality in the next few decades. Human presence in space, especially for extended periods of time, will therefore steeply increase. However, despite more than 60 years of spaceflight, the mechanisms underlying the effects of the space environment on human physiology are still not fully understood. Animals, ranging in complexity from flies to monkeys, have played a pioneering role in understanding the (patho)physiological outcome of critical environmental factors in space, in particular altered gravity and cosmic radiation. The use of animals in biomedical research is increasingly being criticized because of ethical reasons and limited human relevance. Driven by the 3Rs concept, calling for replacement, reduction and refinement of animal experimentation, major efforts have been focused in the past decades on the development of alternative methods that fully bypass animal testing or so-called new approach methodologies. These new approach methodologies range from simple monolayer cultures of individual primary or stem cells all up to bioprinted 3D organoids and microfluidic chips that recapitulate the complex cellular architecture of organs. Other approaches applied in life sciences in space research contribute to the reduction of animal experimentation. These include methods to mimic space conditions on Earth, such as microgravity and radiation simulators, as well as tools to support the processing, analysis or application of testing results obtained in life sciences in space research, including systems biology, live-cell, high-content and real-time analysis, high-throughput analysis, artificial intelligence and digital twins. The present paper provides an in-depth overview of such methods to replace or reduce animal testing in life sciences in space research.
The products of digital entrepreneurs are highly innovative, and their business models contribute to the prosperity and further development of the economy and society. However, studies indicate that most startups fail, particularly during the early stages of their business journey. Prototyping, as part of Lean Startup or Business Model Testing approaches, can assist digital early-stage startups in navigating uncertainty and achieving successful product launches. However, these methods are applied very individually, and there is little empirical research on best practices. We therefore conducted 65 explorative expert interviews and asked successful startups about their prototyping practices. Our results include learnings on the prototyping process and the testing format, the role of the founding team during prototyping practices, the customer focus, and the role of networks. Our study adds important details to the theory and practice of the innovation and prototyping processes of digital early-stage startups. Our results offer actionable advice and guidance to any current and potential entrepreneur, but especially to first-time founders and less experienced executives in early-stage startups. Additionally, our contribution enhances the theoretical understanding of the Lean Startup approach and prototyping practices.
Open-source intelligence is gaining popularity due to the rapid development of social networks, and more and more information is available in the public domain. One of the most popular social networks is Twitter. It was chosen in order to analyze how changes in the number of likes, reposts, quotes, and retweets depend on the aggressiveness of the post text for an individual profile, as this information can be important not only for the owner of a channel in the social network, but also for other studies that in some way influence user accounts and their behavior in the social network. Furthermore, this work includes a detailed analysis and evaluation of the capabilities of the Tweety library and the situations in which it can be effectively applied. Lastly, this work includes the creation and description of a neural network whose purpose is to predict changes in the number of likes, reposts, quotes, and retweets from the aggressiveness of the post text for an individual profile.
This paper presents a practical Open Source Intelligence (OSINT) use case for user similarity measurements using open profile data from the Reddit social network. This proof-of-concept work combines open data from Reddit with part of the state-of-the-art BERT model. Using the PRAW Python library, the project fetches the comments and posts of users. These texts are then converted into a feature vector representing all of a user's posts and comments. The main idea is to create a comparable pairwise similarity score for users based on their comments and posts. For example, if we fix one user and calculate scores for all pairs with other users, we produce a total order on the set of all pairs involving that user. This total order can be described as a degree of written similarity with the chosen user. A set of "similar" users for one particular user can be used to recommend people who may be of interest to that user. The similarity score also has a "transitive property": if user_1 is "similar" to user_2 and user_2 is similar to user_3, then the inner properties of our model guarantee that user_1 and user_3 are quite "similar" as well. In this way, the score can be used to cluster a set of users into sets of "similar" users. This could be used in recommendation algorithms, or to tune already existing algorithms to consider a cluster's peculiarities. We can also extend our model and calculate feature vectors for subreddits; in that way, we can find subreddits similar to a user's interests and recommend them.
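A minimal sketch of this scoring could look as follows; the PRAW credentials and usernames are placeholders, a sentence-transformers model stands in for the part of the BERT model used in the paper, and averaging comment embeddings into one user vector is a simplifying assumption.

    import numpy as np
    import praw
    from sentence_transformers import SentenceTransformer

    reddit = praw.Reddit(client_id="...", client_secret="...",
                         user_agent="similarity-poc")  # placeholder credentials
    model = SentenceTransformer("all-MiniLM-L6-v2")

    def user_vector(username, limit=100):
        """One feature vector per user: mean embedding of recent comments."""
        texts = [c.body
                 for c in reddit.redditor(username).comments.new(limit=limit)]
        return model.encode(texts).mean(axis=0)

    def similarity(u, v):
        """Cosine similarity between two user vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    vec_a, vec_b = user_vector("user_a"), user_vector("user_b")  # placeholders
    print("pair similarity:", similarity(vec_a, vec_b))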
Open Source Intelligence (OSINT) has come a long way, and it is still developing; many investigations are yet to happen in the near future. The essential requirement for all OSINT investigations is valuable data from a good source. This paper discusses various tools and methodologies related to Facebook data collection and analyzes part of the collected data. By the end of the paper, the reader will have a deep and clear insight into the available techniques and tools for scraping data from the Facebook platform, and into the types of investigations and analyses that can be performed with the gathered data.
The project semPart – Semantics of the Musical Score tries to clarify the semantics of musical notation by applying methods from informatics. This is done by remodelling, i.e., constructing small mathematical models that mimic the mentally and culturally determined processes taking place when decoding isolated parameters in musical notation. The first important result is that there can never be one single 'correct' definition of semantics in any area but rather a catalogue of many theoretically possible variants encountered in practice. Through their remodelling, these can be precisely identified and named; their totality constitutes a classification grid applicable to styles, corpora, single scores and digital tools such as editors or encoding standards. This grid and the remodels are of particular political importance in promoting a public discussion on notation systems and their characteristics, given that the increasing use of digital processing systems threatens to over-shape and narrow down notational practice.
Research data management is becoming increasingly important in research. In the project IN-FDM-BB, eight universities are therefore working on the institutionalization and sustainable establishment of research data management (FDM) in Brandenburg.
As part of the local competence building and institutionalization of FDM within the project, work package AP 1 covers, among other things, the "development of university-specific information materials (e.g., flyers, FDM guidelines, FAQs)" and the "creation and/or updating of a local FDM website".
The workshop report Konzept für Informationsmaterialien und FDM-Webseite (W 1.1.1) is the first report submitted by all eight universities participating in the IN-FDM-BB project, and it specifically addresses the local institutionalization of FDM at the institutions.
Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan
(2023)
Open Source Intelligence (OSINT) and Social Media Intelligence (SOCMINT) are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. OSINT and SOCMINT technologies use sophisticated techniques and special tools to efficiently analyze the continually growing sources of information. There is a great need for training and further education in the OSINT field worldwide. This report describes the importance of open source and social media intelligence for evaluating disaster management. It also gives an overview of government disaster management work in Australia, Haiti, and Japan using various OSINT tools and platforms. Thus, decision support for the use of OSINT and SOCMINT tools is provided, and the training needs of investigators can be better estimated.