Digitalisierung
Document Type
- conference proceeding (article) (345)
- Article (182)
- conference proceeding (presentation, abstract) (45)
- Part of a Book (36)
- Book (13)
- Preprint (11)
- Working Paper (5)
- conference proceeding (volume) (4)
- Report (4)
- conference talk (3)
Is part of the Bibliography
- no (660)
Keywords
- Offshoring (13)
- Betriebliches Informationssystem (12)
- Informationstechnik (11)
- Datenschutz (10)
- Digitalisierung (10)
- Datensicherung (8)
- Elektronische Gesundheitskarte (8)
- Information systems (8)
- Internet of Things (8)
- Literaturbericht (8)
Institute
- Fakultät Informatik und Mathematik (372)
- Fakultät Elektro- und Informationstechnik (222)
- Laboratory for Safe and Secure Systems (LAS3) (207)
- Labor für Digitalisierung (LFD) (84)
- Regensburg Strategic IT Management (ReSITM) (59)
- Labor eHealth (eH) (36)
- Fakultät Maschinenbau (32)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (25)
- Labor Parallele und Verteilte Systeme (23)
- Fakultät Sozial- und Gesundheitswissenschaften (18)
Review status
- peer-reviewed (264)
- reviewed (7)
For manufacturing firms, success in innovating IT-enabled services is a critical antecedent to benefiting from the digital servitization of their business models. The digital servitization literature has explored mechanisms for success in innovating IT-enabled services, indicating that the phenomenon is multifaceted and needs to be explained from multiple theoretical perspectives. We derive a conceptual model for success in innovating IT-enabled services that covers its multifaceted nature by drawing on knowledge-based theory and organizational control theory. We test this model using qualitative cases of IT-enabled service innovation initiatives in manufacturing firms and use set-theoretic analyses to account for the multifaceted nature of the phenomenon. The necessary condition analysis yields that a certain degree of service innovation capabilities is a prerequisite for success. With the results of a qualitative comparative analysis, we obtain five solution terms as causal recipes for success in innovating IT-enabled services. Our results contribute to research by offering a theory-based approach that explains the multiplicity of success in IT-enabled service innovation. Practitioners benefit from our results by understanding prerequisites and causal recipes for success while learning from unsuccessful initiatives in innovating IT-enabled services of manufacturing firms. Our study is also an example of how to rigorously calibrate qualitative data using a structured approach.
In higher education, improving learning and learning success is a general goal: lecturers teach content, and students should acquire that content efficiently. To structure content, learning element categories are evaluated from the students' point of view in the higher education context. The aim is to validate given definitions of ten learning element categories within a Learning Management System (LMS).
This paper evaluates a categorization of learning elements for organizing learning content in online education within LMSs. Ten categories of learning elements and corresponding definitions were defined in previous work as the basis for this paper. The learning elements examined are manuscript, exercise, quiz, brief overview, learning goal, summary, collaboration tool, auditory additional material, textual additional material, and visual additional material. To validate the definitions and to collect suggestions for improving each learning element, a survey was conducted. Besides demographic questions, the survey contains two questions on the acceptance of the definitions and asks for improvements. 148 students between the ages of 19 and 35 participated in the survey in the summer term of 2023. The education level of the participants ranges from undergraduates to Ph.D. students.
The results show that more than 80% of the participants accept the given definitions. Some definitions of the learning elements were changed, but the changes are restricted to additions of at most four words. This categorization of learning elements could improve learning by giving the content more structure. With this structure, students gain the possibility to learn with their preferred learning elements, which could lead to more success in learning and to a decreasing dropout rate at universities. In the future, the learning elements will allow content within LMSs to be classified with the goal of generating individual learning paths. Furthermore, our project will integrate these learning elements, use them to generate learning paths, and could set a new standard for personalized learning.
Context:
Causal probabilistic graph-based models have gained widespread utility, enabling the modeling of cause-and-effect relationships across diverse domains. With their rising adoption in new areas, such as safety analysis of complex systems, software engineering, and machine learning, the need for an integrated lifecycle framework akin to DevOps and MLOps has emerged. Currently, such a reference for organizations interested in employing causal engineering is missing. This lack of guidance hinders the incorporation and maturation of causal methods in the context of real-life applications.
Objective:
This work contextualizes causal model usage across different stages and stakeholders and outlines a holistic view of creating and maintaining them within the process landscape of an organization.
Method:
A novel lifecycle framework for causal model development and application called CausalOps is proposed. By defining key entities, dependencies, and intermediate artifacts generated during causal engineering, a consistent vocabulary and workflow model to guide organizations in adopting causal methods are established.
Results:
Based on the early adoption of the discussed methodology to a real-life problem within the automotive domain, an experience report underlining the practicability and challenges of the proposed approach is discussed.
Conclusion:
It is concluded that, besides current technical advancements in various aspects of causal engineering, an overarching lifecycle framework that integrates these methods into organizational practices is missing. Although diverse skills from adjacent disciplines are widely available, guidance on how to transfer these assets into causality-driven practices still needs to be addressed in the published literature. CausalOps' aim is to set a baseline for the adoption of causal methods in practical applications within interested organizations and the causality community.
Interpretable Machine Learning for Mode Choice Modeling on Tracking-Based Revealed Preference Data
(2024)
Mode choice modeling is imperative for predicting and understanding travel behavior. For this purpose, machine learning (ML) models have increasingly been applied to stated preference and traditional self-recorded revealed preference data with promising results, particularly for extreme gradient boosting (XGBoost) and random forest (RF) models. Because of the rise in the use of tracking-based smartphone applications for recording travel behavior, we address the important and unprecedented task of testing these ML models for mode choice modeling on such data. Furthermore, as ML approaches are still criticized for leading to results that are hard to understand, we consider it essential to provide an in-depth interpretability analysis of the best-performing model. Our results show that the XGBoost and RF models far outperform a conventional multinomial logit model, both overall and for each mode. The interpretability analysis using the Shapley additive explanations approach reveals that the XGBoost model can be explained well at the overall and mode level. In addition, we demonstrate how to analyze individual predictions. Lastly, a sensitivity analysis gives insight into the relative importance of different data sources, sample size, and user involvement. We conclude that the XGBoost model performs best, while also being explainable. Insights generated by such models can be used, for instance, to predict mode choice decisions for arbitrary origin–destination pairs to see what impact infrastructural changes would have on the mode share.
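As an illustration of the modeling and interpretability pipeline described above, the following hedged Python sketch fits a gradient-boosted mode choice classifier and inspects it with SHAP; the file name, feature columns, and hyperparameters are invented placeholders, not the study's data or settings.

```python
# Hypothetical sketch (not the authors' code): fit an XGBoost mode choice
# classifier and explain it with SHAP values, assuming a pandas DataFrame
# with made-up feature columns and a categorical `mode` label.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

trips = pd.read_csv("trips.csv")                      # tracking-based RP data (placeholder file)
X = trips[["distance_km", "travel_time_min", "age"]]  # illustrative features only
y = trips["mode"].astype("category").cat.codes        # e.g. walk/bike/car/transit -> 0..3

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# SHAP (Shapley additive explanations): global and per-prediction attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)                # overall feature importance
```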
In a number of tomographic applications, data cannot be fully acquired, resulting in severely underdetermined image reconstruction. Conventional methods in such cases lead to reconstructions with significant artifacts. To overcome these artifacts, regularization methods are applied that incorporate additional information. An important example is TV reconstruction, which is known to be efficient in compensating for missing data and reducing reconstruction artifacts. On the other hand, tomographic data are also contaminated by noise, which poses an additional challenge. The use of a single regularizer must therefore account for both the missing data and the noise. A particular regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over multiple scales, in which case ℓ1 curvelet regularization methods are well suited. To address this issue, in this paper, we present a novel variational regularization framework that combines the advantages of different regularizers. The basic idea of our framework is to perform reconstruction in two stages. The first stage is mainly aimed at accurate reconstruction in the presence of noise, and the second stage is aimed at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet–TV approach. We define and implement a curvelet transform adapted to the limited-view problem and illustrate the advantages of our approach in numerical experiments.
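One plausible reading of the two-stage framework, written in generic notation (forward operator A, data y, curvelet transform W, proximity parameter delta) rather than the paper's exact formulation:

```latex
% Schematic two-stage reconstruction (notation assumed, not taken from the paper):
% Stage 1: noise-adapted curvelet-sparse reconstruction
x_1 \in \arg\min_x \; \tfrac{1}{2}\,\| \mathcal{A}x - y \|_2^2 \;+\; \alpha \,\| W x \|_1 ,
% Stage 2: artifact reduction by TV, tied to stage 1 via a data-proximity condition
x_2 \in \arg\min_x \; \mathrm{TV}(x) \quad \text{subject to} \quad \| \mathcal{A}x - \mathcal{A}x_1 \|_2 \le \delta .
```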
Background and research question
Non-governmental organizations (NGOs) are an important part of civil society and also interact with governments, companies, and other societal actors. Given the increasingly complex work of NGOs, artificial intelligence (AI) appears to offer ways to meet current and future challenges. However, little is known about how NGOs work with AI.
Methods
Five exploratory interviews with representatives of non-governmental organizations (NGOs) on knowledge, acceptance, needs, and risk assessments were conducted and analyzed. NGOs from different fields of activity and of different sizes were interviewed. Informal preliminary talks were held to gain an initial orientation in the research field. Based on the insights from these preliminary talks
and the results of the scoping review, an interview guide was developed that serves as orientation for the expert interviews. Subsequently, five exploratory guideline-based expert interviews were conducted and analyzed qualitatively.
Results
A central finding is that the topic of AI is only now arriving in NGOs and that there are not yet any consolidated structures or conceptions for the use of AI. AI is used in individual, specific projects without being comprehensively integrated into work processes. The acceptance of AI is generally positive; the technology is seen as a potential solution to structural challenges and as support in everyday work. However, with the exception of large language models, the use of AI applications is limited to pilot projects. Younger age and affinity for technology are associated with higher acceptance. AI applications in the social or health sector as a replacement for human interaction are viewed particularly critically. Ethical concerns and the high importance of data protection are also emphasized.
Conclusion
Artificial intelligence in non-governmental organizations is an emerging and evolving research topic. The interviews underline the need for more knowledge, ethical guidelines, and financial resources for the effective use of AI in NGOs. A comprehensive understanding of AI and a deeper, systematic integration into the work processes of these organizations have yet to be developed.
Background and objective:
Due to the high prevalence of dental caries, fixed dental restorations are regularly required to restore compromised teeth or replace missing teeth while retaining function and aesthetic appearance. The fabrication of dental restorations, however, remains challenging due to the complexity of the human masticatory system as well as the unique morphology of each individual dentition. Adaptation and reworking are frequently required during the insertion of fixed dental prostheses (FDPs), which increase cost and treatment time. This article proposes a data-driven approach for the partial reconstruction of occlusal surfaces based on a data set that comprises 92 3D mesh files of full dental crown restorations.
Methods:
A Generative Adversarial Network (GAN) is considered for the given task in view of its ability to represent extensive data sets in an unsupervised manner with a wide variety of applications. Having demonstrated good capabilities in terms of image quality and training stability, StyleGAN-2 has been chosen as the main network for generating the occlusal surfaces. A 2D projection method is proposed in order to generate 2D representations of the provided 3D tooth data set for integration with the StyleGAN architecture. The reconstruction capabilities of the trained network are demonstrated by means of 4 common inlay types using a Bayesian Image Reconstruction method. This involves pre-processing the data in order to extract the necessary information of the tooth preparations required for the used method as well as the modification of the initial reconstruction loss.
Results:
The reconstruction process yields satisfactory visual and quantitative results for all preparations with a root mean square error (RMSE) ranging from 0.02 mm to 0.18 mm. When compared against a clinical procedure for CAD inlay fabrication, the group of dentists preferred the GAN-based restorations for 3 of the total 4 inlay geometries.
Conclusions:
This article shows the effectiveness of the StyleGAN architecture with a downstream optimization process for the reconstruction of 4 different inlay geometries. The independence of the reconstruction process and the initial training of the GAN enables the application of the method for arbitrary inlay geometries without time-consuming retraining of the GAN.
Background and research question
Non-governmental organizations (NGOs) are an important part of civil society and also interact with governments, companies, and other societal actors. Given the increasingly complex work of NGOs, artificial intelligence (AI) appears to offer ways to meet current and future challenges. However, little is known about how NGOs work with AI. Therefore, in this first working paper, the KINiro project addresses the question of which (non-)scientific findings on the topic of NGOs and AI are already available.
Methods
A scoping review was conducted to identify (non-)scientific texts on NGOs and AI. The systematic literature search was carried out in the databases Web of Science, Science Gate, and WISO. In the end, 14 titles were included in the review and analyzed qualitatively.
Results
The majority of the identified records are press releases. Among them are only two (scientific) studies. NGOs engage with the topic of AI on various levels, and two approaches can be distinguished. Some NGOs take part in the societal discourse on the use of AI and advance it theoretically, without using the technology themselves. Other NGOs integrate AI systems into their work processes in practice or carry out projects serving the NGO's purpose with AI support. For the development of AI applications, NGOs cooperate with for-profit companies, combining the companies' expertise with the NGOs' data. Through the use of AI, NGOs hope for a more targeted use of resources. It becomes apparent that an interdisciplinary consensus on standards for data collection and storage is considered necessary for use in AI systems.
Conclusion
From the small number of texts found, especially (scientific) studies, and from the publication period, which lies within the past seven years, it can be concluded that this is a young field of research. The excluded titles show that NGOs are currently still more often concerned with digitalization in general and that their engagement with AI is only just beginning.
With the ongoing miniaturization of wireless devices, the importance of wearable textiles in the antenna segment has increased significantly in recent years. Due to the widespread utilization of wireless body sensor networks for healthcare and ubiquitous applications, the design of wearable antennas offers the possibility of comprehensive monitoring, communication, and energy harvesting and storage. This article reviews a number of properties and benefits to provide comprehensive background information and application ideas for the development of lightweight, compact, and low-cost wearable patch antennas. Furthermore, problems and challenges that arise are addressed. Since both electromagnetic and mechanical specifications must be fulfilled, textile and flexible antennas require an appropriate trade-off between materials, antenna topologies, and fabrication methods, depending on the intended application and environmental factors. This overview covers each of the above issues, highlighting research to date while correlating antenna topology, feeding techniques, textile materials, and contacting options for the defined application of wearable planar patch antennas.
Generative deep learning approaches for the design of dental restorations: A narrative review
(2024)
Objectives:
This study aims to explore and discuss recent advancements in tooth reconstruction utilizing deep learning (DL) techniques. A review on new DL methodologies in partial and full tooth reconstruction is conducted.
Data/Sources:
PubMed, Google Scholar, and IEEE Xplore databases were searched for articles from 2003 to 2023.
Study selection:
The review includes 9 articles published from 2018 to 2023. The selected articles showcase novel DL approaches for tooth reconstruction, while those concentrating solely on the application or review of DL methods are excluded. The review shows that data is acquired via intraoral scans or laboratory scans of dental plaster models. Common data representations are depth maps, point clouds, and voxelized point clouds. Reconstructions focus on single teeth, using data from adjacent teeth or the entire jaw. Some articles include antagonist teeth data and features like occlusal grooves and gap distance. Primary network architectures include Generative Adversarial Networks (GANs) and Transformers. Compared to conventional digital methods, DL-based tooth reconstruction reports error rates approximately two times lower.
Conclusions:
Generative DL models analyze dental datasets to reconstruct missing teeth by extracting insights into patterns and structures. Through specialized application, these models reconstruct morphologically and functionally sound dental structures, leveraging information from the existing teeth. The reported advancements facilitate the feasibility of DL-based dental crown reconstruction. Beyond GANs and Transformers with point clouds or voxels, recent studies indicate promising outcomes with diffusion-based architectures and innovative data representations like wavelets for 3D shape completion and inference problems.
Clinical significance:
Generative network architectures employed in the analysis and reconstruction of dental structures demonstrate notable proficiency. The enhanced accuracy and efficiency of DL-based frameworks hold the potential to enhance clinical outcomes and increase patient satisfaction. The reduced reconstruction times and diminished requirement for manual intervention may lead to cost savings and improved accessibility of dental services.
To address the increasing demands on students of all disciplines to master digital techniques and to collaborate across disciplines, the interdisciplinary teaching format "Digitalisierungskolleg" for students was developed. It permanently expands the offering of digital topics in higher education teaching, which still has room to grow in many departments. A Digitalisierungskolleg consists of a lecture series with an accompanying seminar in which students develop interdisciplinary solutions to questions of digital transformation. The projects are led by established researchers and actively supervised and shaped by one or two coaches. The core element of both the Kolleg and the individual projects is interdisciplinarity. One of the participating project leaders has a direct relation to technology and comes from computer science, information systems, electrical engineering, or a comparable discipline. The target group of the projects are students of various disciplines in a master's program or in the final semesters of a bachelor's program. Through participation, students from subjects with little connection to digitalization also acquire substantial IT skills at an early stage. As a side effect of the extensive networking between the Digitalisierungskollegs (students, coaches, and project leaders), a large digital community emerges at the very beginning of an academic career. All participants learn interdisciplinary collaboration early on and considerably improve their career opportunities within and outside academia.
Case study research is one of the most widely used research methods in Information Systems (IS). In recent years, an increasing number of publications have used case studies with few sources of evidence, such as single interviews per case. While there is much methodological guidance on rigorously conducting multiple case studies, it remains unclear how researchers can achieve an acceptable level of rigour for this emerging type of multiple case study with few sources of evidence, i.e., multiple mini case studies. In this context, we synthesise methodological guidance for multiple case study research from a cross-disciplinary perspective to develop an analytical framework. Furthermore, we calibrate this analytical framework to multiple mini case studies by reviewing previous IS publications that use multiple mini case studies to provide guidelines to conduct multiple mini case studies rigorously. We also offer a conceptual definition of multiple mini case studies, distinguish them from other research approaches, and position multiple mini case studies as a pragmatic and rigorous approach to research emerging and innovative phenomena in IS.
It remains difficult to segregate pelagic habitats since structuring processes are dynamic on a wide range of scales and clear boundaries in the open ocean are non-existent. However, to improve our knowledge about existing ecological niches and the processes shaping the enormous diversity of marine plankton, we need a better understanding of the driving forces behind plankton patchiness. Here we describe a new machine-learning method to detect and quantify pelagic habitats based on hydrographic measurements. An Autoencoder learns two-dimensional, meaningful representations of higher-dimensional micro-habitats, which are characterized by a variety of biotic and abiotic measurements from a high-speed ROTV. Subsequently, we apply a density-based clustering algorithm to group similar micro-habitats into associated pelagic macro-habitats in the German Bight of the North Sea. Three distinct macro-habitats, a “surface mixed layer,” a “bottom layer,” and an exceptionally “productive layer” are consistently identified, each with its distinct plankton community. We provide evidence that the model detects relevant features like the doming of the thermocline within an Offshore Wind Farm or the presence of a tidal mixing front.
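A minimal sketch of the general pattern described above (a small autoencoder producing a 2D embedding, followed by density-based clustering); the file name, layer sizes, and clustering parameters are assumptions for illustration, and DBSCAN stands in for whatever density-based algorithm the study uses.

```python
# Illustrative sketch only (not the study's pipeline): compress multivariate
# micro-habitat measurements to a 2D embedding with a small autoencoder, then
# group similar micro-habitats with a density-based clustering algorithm.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from tensorflow.keras import Model, layers

X = np.load("microhabitats.npy")          # placeholder: rows = micro-habitats, cols = sensor variables
X = StandardScaler().fit_transform(X)

inp = layers.Input(shape=(X.shape[1],))
h = layers.Dense(16, activation="relu")(inp)
z = layers.Dense(2, activation="linear", name="embedding")(h)    # 2D representation
h2 = layers.Dense(16, activation="relu")(z)
out = layers.Dense(X.shape[1], activation="linear")(h2)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=50, batch_size=64, verbose=0)

encoder = Model(inp, z)
embedding = encoder.predict(X)

labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(embedding)  # macro-habitat labels (-1 = noise)
print("macro-habitats found:", len(set(labels)) - (1 if -1 in labels else 0))
```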
Control and automation of services of the urban infrastructure offered to citizens and tourists are elementary parts of a smart city. But both rely on a stable supply of data from sensors spread across the whole city, e.g., the fill level sensors of waste bins needed for a waste management tool which we developed in a collaboration with the Regensburg city council for the on-demand collection of waste bins. Europe has many historic cities like Regensburg with narrow streets and massive building walls, some made from granite and fieldstones, which often represent an insurmountable obstacle to wireless data transmission. The reduction of the road traffic volume poses an additional challenge for city planners. By means of networked planning and simulation software, the situation, state, and efficiency of citywide logistic services can be monitored and optimized. In the course of such optimizations, we propose the combination of digital and logistic services. As an example, we show that monitoring state information, such as the waste bin fill levels, can be accomplished using the same vehicles and the same planning software that is used for luggage transportation. Moreover, we describe how we adapted a solver for a variant of the TSP, namely the prize-collecting traveling salesman problem, to optimize the route planning dynamically.
In the engineering domain, representing real-world objects using a body of data, called a digital twin, which is frequently updated by “live” measurements, has shown various advantages over traditional modelling and simulation techniques. Consequently, urban planners have a strong interest in digital twin technology, since it provides them with a laboratory for experimenting with data before making far-reaching decisions. Realizing these decisions involves the work of professionals in the architecture, engineering and construction (AEC) domain who nowadays collaborate via the methodology of building information modeling (BIM). At the same time, the citizen plays an integral role in the data acquisition phase, while also being a beneficiary of the improved resource management strategies. In this paper, we present a prototype for a “digital energy twin” platform we designed in cooperation with the city of Regensburg. We show how our extensible platform design can satisfy the various requirements of multiple user groups through a series of data processing solutions and visualizations, indicating valuable design and implementation guidelines for future projects. In particular, we focus on two example use cases concerning building electricity monitoring and BIM. By implementing a flexible data processing architecture we can involve citizens in the data acquisition process, meeting the demands of modern users regarding maximum transparency in the handling of their data.
In the realm of parallel computing, optimization plays a pivotal role in achieving efficient and scalable solutions. In this work, we present the parallelization of a hybrid genetic search for solving the Capacitated Vehicle Routing Problem with Pickup and Delivery (CVRPPD). It leverages the synergy between genetic algorithms and parallel computing to address the complex optimization problem. This hybrid algorithm combines a customized version of local search with a genetic algorithm to compute an effective solution. Our implementation makes use of the Message Passing Interface (MPI) for data distribution and parallel execution. In addition, we run multi-threaded processes on NVIDIA graphics processors using the CUDA technology, which further increases the computation speed and consequently minimizes the runtime. Parallelization also allows the best-improvement strategy to be used instead of the first-improvement strategy while maintaining the same runtime. We store the resulting routes in a bus route database which we created as the basis of an extensive library of optimal routes for our specific use case of optimizing bus routes in a rural area. The experimental results on real road data show that the parallel implementation of the Hybrid Genetic Search (HGS) achieves significant improvements in runtime over the sequential implementation above a certain problem size. We believe that our implementation of the parallel hybrid genetic search method can have a great influence on optimization strategies in parallel computing and can also be applied to other subproblems of the VRP.
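A stripped-down sketch of the MPI-level parallel pattern (independent searches per process, results gathered on one rank) is shown below; the objective function and local-search move are placeholders and do not reproduce the authors' HGS, CUDA kernels, or CVRPPD model.

```python
# Simplified mpi4py sketch: several MPI processes act as islands that each run a
# local search and report their best solution to rank 0.
# Run with: mpiexec -n 4 python hgs_mpi.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
random.seed(rank)                         # each process explores a different region

def cost(route):                          # placeholder objective (stand-in for route length)
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

def local_search(route, iters=2000):
    best, best_cost = route[:], cost(route)
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(best)), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]   # 2-opt style segment reversal
        c = cost(cand)
        if c < best_cost:                 # accept the improving move
            best, best_cost = cand, c
    return best_cost, best

route = list(range(50))
random.shuffle(route)
local_best = local_search(route)

# Collect every island's best solution on rank 0 and keep the overall winner.
results = comm.gather(local_best, root=0)
if rank == 0:
    best_cost, best_route = min(results)
    print("best cost over", comm.Get_size(), "islands:", best_cost)
```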
In educational research, non-personalized learning content increases learners' cognitive load, causing them to lower their performance and sometimes drop out of the course. Personalizing learning content based on learners' unique characteristics, like learning styles, personality traits, and learning strategies, has been suggested to improve learners' success. Several theories exist for assessing learners' unique characteristics. By the end of 2020, 71 learning style theories had been formulated, and research has shown that combining multiple learning style theories to recommend learning paths yields better results. As of the end of 2022, no single study demonstrates a relationship between the Index of Learning Styles (ILS) based Felder-Silverman learning style model (FSLSM) dimensions, Big Five (BFI-10) based personality traits, and the Learning strategies in studying (LIST-K) based learning strategies factors for personalizing learning content.
In this paper, an innovative approach is proposed to estimate the relationship between these theories and map the corresponding learning elements to create personalized learning paths. Respective questionnaires were distributed to 297 higher education students for data collection. A three-step approach was formulated to estimate the relationship between the models. First, a literature search was conducted to find existing studies. Then, an expert interview was carried out with a group of one software engineering education research professor, three doctoral students, and two master’s students. Finally, the correlations between the students' questionnaire responses were calculated. To achieve this, a Bayesian Network was built with expert knowledge from the three-step approach, and the weights were learned from collected data. The probability of individual FSLSM learning style dimensions was estimated for a new test sample. Based on the literature, the learning elements were mapped to the respective FSLSM learning style dimensions and were initiated as learning paths to the learners.
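To make the modeling step more concrete, here is a heavily hedged sketch using pgmpy; the network structure, variable names, and data file are invented for illustration and are not the structure elicited from the experts in the study.

```python
# Hedged illustration (not the study's model): a small discrete Bayesian Network
# whose structure encodes assumed expert knowledge and whose CPDs are learned
# from questionnaire data; variable names and states are made up for the example.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Assumed expert structure: personality and strategy indicators point to an FSLSM dimension.
model = BayesianNetwork([
    ("extraversion", "fslsm_active_reflective"),
    ("elaboration_strategy", "fslsm_active_reflective"),
])

df = pd.read_csv("survey_responses.csv")   # placeholder: discretised BFI-10 / LIST-K / ILS answers
model.fit(df, estimator=MaximumLikelihoodEstimator)

infer = VariableElimination(model)
posterior = infer.query(["fslsm_active_reflective"],
                        evidence={"extraversion": "high", "elaboration_strategy": "low"})
print(posterior)                           # probability of each learning style dimension state
```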
The next steps are proposed to extend this framework and dynamically recommend learning paths in real time. In addition, the individual levels of learning style dimensions, personality traits, and learning strategies can be considered to improve the recommendations. Further, using probabilities for mapping learning elements to learning styles can increase the chance of initiating multiple learning paths for an individual learner.
This paper presents the results of a data collection with the LIST-K questionnaire. This questionnaire measures students’ learning strategies and shows which strategies are particularly dominant or rather weak.
Learning strategies have long been a major area of research in educational science and psychology. In these disciplines, learning strategies are understood as intentional behaviors and cognitive skills that learners employ to effectively complete learning tasks, by selecting, acquiring, organizing, and integrating information into their existing knowledge for long-term retention.
The LIST-K, developed by Klingsieck in 2018, was chosen for assessing learning strategies due to its thematic suitability, widespread use, and test economy. It covers a total of four main categories (i.e., cognitive strategies, metacognitive strategies, management of internal resources, and management of external resources), each of which is subdivided into further subscales. With a total of 39 items answered on a five-point Likert scale, the LIST-K covers the topic relatively comprehensively and can at the same time be completed in a reasonable amount of time of approximately 10 minutes.
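For concreteness, a minimal scoring sketch follows; the item-to-subscale mapping shown is invented for illustration, as the real LIST-K assigns its 39 items differently.

```python
# Minimal subscale scoring sketch: average the Likert answers belonging to each
# subscale per student, then report the per-subscale mean and standard deviation.
import pandas as pd

answers = pd.read_csv("listk_answers.csv")       # one row per student, columns item_1 .. item_39, values 1-5

subscales = {                                     # hypothetical mapping, a few items per subscale
    "organisation":    ["item_1", "item_2", "item_3"],
    "elaboration":     ["item_4", "item_5", "item_6"],
    "time_management": ["item_7", "item_8", "item_9"],
}

scores = pd.DataFrame({name: answers[items].mean(axis=1) for name, items in subscales.items()})
print(scores.describe().loc[["mean", "std"]])     # per-subscale mean and SD, as reported in the text
```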
The LIST-K was used as part of a combined data collection along with other questionnaires on the students' personal data, their preferences regarding certain learning elements, their learning style (i.e., the ILS), and their personality (i.e., the BFI-10). A total of 207 students from different study programs participated via an online survey created using the survey tool "LimeSurvey". Participation in the study was voluntary, anonymous, and in compliance with the GDPR.
Overall, the results of the LIST-K show that students are willing to work intensively on relevant topics and to go beyond the requirements of the course by seeking additional learning material. At the same time, however, it is apparent that the organization of their own learning process could still be improved. For example, students start repeating content too late (mean=2.70; SD=0.92), do not set goals for themselves, and do not create a learning plan (mean=3.19; SD=0.90). They also learn without a schedule (mean=2.23; SD=0.97) and miss opportunities to learn together with other students (mean=3.17; SD=0.94).
The findings of the data collection will be used to create an AI-based adaptive learning management system that will create individualized learning paths for students in their respective courses. From the results of the LIST-K, it appears that the adaptive learning management system should primarily support organizational aspects of student learning. Even small impulses (an individual schedule of when to learn what or a hierarchical structuring of the learning material) could help students to complete their courses more successfully and improve their learning.
Eye tracking has proven to be a powerful tool in a variety of empirical research areas; hence, it is steadily gaining attention. Driven by the expanding frontiers of Artificial Intelligence and its potential for data analysis, eye tracking technology offers promising applications in diverse fields, from usability research to cognitive research. The education sector in particular can benefit from the increased use of eye tracking technology - both indirectly, for example by studying the differences in gaze patterns between experts and novices to identify promising strategies, and directly by using the technology itself to teach in future classrooms.
As with any empirical method, the results depend directly on the quality of the data collected. That raises the question of which parameters educators or researchers can influence to maximize the data quality of an eye tracker. This is the starting point of the present work: In an empirical study of eye tracking as an (educational) technology, we systematically examine factors that influence the data quality, such as illumination, sampling frequency, and head orientation - parameters that can be varied without much additional effort in everyday classroom or research use - using two human subjects, an artificial face, and the Tobii Pro Spectrum.
We rely on metrics derived from the raw gaze data, such as accuracy or precision, to measure data quality. From the obtained results, we derive practical advice for educators and researchers, such as using the lowest sampling frequency appropriate for a certain purpose. Thereby, this research fills a gap in the current understanding of eye tracker performance and, by offering best practices, enables researchers and teachers to produce data of the highest quality possible and therefore the best results when using eye trackers in laboratories or future classrooms.
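The two quality metrics mentioned above can be computed from raw gaze samples roughly as follows; the array shapes and the target position are assumptions for illustration, with accuracy taken as the mean angular offset from the target and precision as the RMS of sample-to-sample distances.

```python
# Sketch of standard gaze-quality metrics computed from raw gaze samples given
# in degrees of visual angle (toy data, not the study's recordings).
import numpy as np

gaze = np.array([[0.2, -0.1], [0.3, 0.0], [0.1, -0.2], [0.25, -0.05]])  # (n, 2) gaze samples, deg
target = np.array([0.0, 0.0])                                            # known stimulus position, deg

# Accuracy: mean angular offset between gaze samples and the target.
accuracy = np.linalg.norm(gaze - target, axis=1).mean()

# Precision: RMS of the angular distances between successive samples.
inter_sample = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
precision_rms = np.sqrt((inter_sample ** 2).mean())

print(f"accuracy = {accuracy:.3f} deg, precision (RMS sample-to-sample) = {precision_rms:.3f} deg")
```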
Universities are faced with a rising number of dropouts in recent years. This is largely due to students' limited capability of finding individual learning paths through various course materials. However, a possible solution to this problem is the introduction of adaptive learning management systems, which recommend tailored learning paths to students – based on their individual learning styles. For the classification of learning styles, the most commonly used methods are questionnaires and learning analytics. Nevertheless, both methods are prone to errors: questionnaires may give superficial answers due to lack of time or motivation, while learning analytics do not reflect offline learning behavior. This paper proposes an alternative approach to classify students' learning styles by integrating eye tracking in combination with Machine Learning (ML) algorithms.
Incorporating eye tracking technology into the classification process eliminates the potential problems arising from questionnaires or learning analytics by providing a more objective and detailed analysis of the subject's behavior. Moreover, this approach allows for a deeper understanding of subconscious processes and provides valuable insights into the individualized learning preferences of students.
In order to demonstrate this approach, an eye tracking study is conducted with 117 participants using the Tobii Pro Fusion. Using qualitative and quantitative analyses, certain patterns in the subjects' gaze behavior are assigned to their learning styles given by the validated Index of Learning Styles (ILS) questionnaire.
In short, this paper presents an innovative solution to the challenges associated with classifying students' learning styles. By combining eye tracking data with ML algorithms, an accurate and insightful understanding of students' individual learning paths can be achieved, ultimately leading to improved educational outcomes and reduced dropout rates.
Moving Object Databases are designed to store and process database objects with attributes that can change over time. Simple examples are moving points, which change position over time; somewhat more complex are moving regions, which can also change shape. The spatial and spatiotemporal object types in current moving objects databases are limited to two dimensions. This work strives to extend the set of spatial moving object types into the third and even higher dimensions while preserving a consistent family of operations for it. A robust algorithm for the interpolation of two regions to a moving region of any dimensionality is developed, as well as the fundamental ideas for several other operations.
Finding the optimal join order (JO) is one of the most important problems in query optimisation, and has been extensively considered in research and practice. As it involves huge search spaces, approximation approaches and heuristics are commonly used, which explore a reduced solution space at the cost of solution quality. To explore even large JO search spaces, we may consider special-purpose software, such as mixed-integer linear programming (MILP) solvers, which have successfully solved JO problems. However, even mature solvers cannot overcome the limitations of conventional hardware prompted by the end of Moore’s law. We consider quantum-inspired digital annealing hardware, which takes inspiration from quantum processing units (QPUs). Unlike QPUs, which likely remain limited in size and reliability in the near and mid-term future, the digital annealer (DA) can solve large instances of mathematically encoded optimisation problems today. We derive a novel, native encoding for the JO problem tailored to this class of machines that substantially improves over known MILP and quantum-based encodings, and reduces encoding size over the state-of-the-art. By augmenting the computation with a novel readout method, we derive valid join orders for each solution obtained by the (probabilistically operating) DA. Most importantly and despite an extremely large solution space, our approach scales to practically relevant dimensions of around 50 relations and improves result quality over conventionally employed approaches, adding a novel alternative to solving the long-standing JO problem.
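For readers unfamiliar with annealing hardware, the generic problem shape such machines accept is a quadratic unconstrained binary optimisation (QUBO); the concrete join-ordering encoding proposed in the paper assigns its own meaning to the binary variables and penalty terms, which the schematic below does not attempt to reproduce.

```latex
% Generic QUBO shape solved by (digital) annealers; Q_cost carries the objective
% and Q_constraints the penalty terms that enforce validity, weighted by lambda.
\min_{x \in \{0,1\}^n} \; x^{\top} Q\, x
\;=\; \min_{x \in \{0,1\}^n} \; \sum_{i} Q_{ii}\, x_i \;+\; \sum_{i < j} Q_{ij}\, x_i x_j ,
\qquad
Q \;=\; Q_{\text{cost}} \;+\; \lambda\, Q_{\text{constraints}} .
```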
Control Oriented Mathematical Modeling of a Bidirectional DC-DC Converter - Part 1: Buck Mode
(2023)
Parallel connection of different batteries equipped with bidirectional DC-DC converters offers an increase of the total storage capacity, the provision of higher currents, and an improvement of reliability and system availability. To share the load current among the DC-DC converters while maintaining the safe operating range of the batteries, appropriate controllers are needed. The basis for the design of these control approaches requires knowledge of both the static and dynamic characteristics of the DC-DC converter used. In this paper, the small signal analysis of a DC-DC converter in buck mode is shown using the circuit averaging technique. The paper gives an overview of all required transfer functions: the control-to-output and line-to-output transfer functions for continuous and discontinuous conduction mode (CCM and DCM), relevant for average current mode control as well as for voltage control, are derived and their poles and zeros are determined. This provides the basis for stability considerations, analysis of the overall control structure, and controller design.
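For orientation, the standard CCM small-signal results for an ideal buck converter obtained by circuit averaging are reproduced below in textbook form; the paper's notation, parasitics, and DCM expressions may differ.

```latex
% Standard CCM small-signal transfer functions of an ideal buck converter
% (V_g input voltage, D duty cycle, L, C, R the output filter and load).
% Control-to-output:
G_{vd}(s) \;=\; \frac{\hat v_o(s)}{\hat d(s)}
          \;=\; \frac{V_g}{1 + s\,\dfrac{L}{R} + s^{2} L C} ,
\qquad
% Line-to-output:
G_{vg}(s) \;=\; \frac{\hat v_o(s)}{\hat v_g(s)}
          \;=\; \frac{D}{1 + s\,\dfrac{L}{R} + s^{2} L C} .
```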
This paper examines the conceptualization of sustainability in the context of information and communication technology (ICT) research. Through an inductive text analysis of sixteen literature reviews spanning from 2014 to 2023, key themes and concepts are identified, highlighting the complex relationship between ICT and sustainability. ICT is perceived both as an enabler and a problem for sustainability. Furthermore, the terminology and concept of sustainability in the context of ICT remain unclear. The emergence of digitalization as a novel socio-technical phenomenon poses additional challenges for conceptual alignment. While a holistic view of sustainability in ICT is desired, business and social implications receive less attention. The paper summarizes and discusses the developments in research on this topic over the past decade.
In modern vehicles, system complexity and technical capabilities are constantly growing. As a result, manufacturers and regulators are both increasingly challenged to ensure the reliability, safety, and intended behavior of these systems. With current methodologies, it is difficult to address the various interactions between vehicle components and environmental factors. However, model-based engineering offers a solution by allowing engineers to abstract reality and by enhancing communication among engineers and stakeholders. Applying this method requires a model format that is machine-processable, human-understandable, and mathematically sound. In addition, the model format needs to support probabilistic reasoning to account for incomplete data and knowledge about a problem domain. We propose structural causal models as a suitable framework for addressing these demands. In this article, we show how to combine data from different sources into an inferable causal model for an advanced driver-assistance system. We then consider the developed causal model for scenario-based testing to illustrate how a model-based approach can improve industrial system development processes. We conclude this paper by discussing the ongoing challenges to our approach and provide pointers for future work.
In the field of software engineering, graph-based models are used for a variety of applications. Usually, the layout of those graphs is determined at the discretion of the user. This article empirically investigates whether different layouts affect the comprehensibility or popularity of a graph and whether one can predict the perception of certain aspects in the graph using basic graphical laws from psychology (i.e., Gestalt principles). Data on three distinct layouts of one causal graph is collected from 29 subjects using eye tracking and a print questionnaire. The evaluation of the collected data suggests that the layout of a graph does matter and that the Gestalt principles are a valuable tool for assessing partial aspects of a layout.
In a distributed system, functionally equivalent nodes work together to form a system with improved availability, reliability and fault tolerance. Thereby, the purpose is to achieve a common control objective. As multiple components cooperate to accomplish tasks, coordination between them is required. Electing a node as the temporary leader can be a possible solution to perform coordination. This work presents a self-stabilizing algorithm for the election of a leader in dynamically reconfigurable bus topology-based broadcast systems with a message and time complexity of O(1). The election is performed dynamically, i.e., not only when the leader node fails, and is criterion-based. The criterion used is a performance related value which evaluates the properties of the node regarding the ability to perform the tasks of the leader. The increased demands on the leader are taken into account and a re-election is started when the criterion value drops below a predefined level. The goal here is to distribute the load more evenly and to reduce the probability of failure due to overload of individual nodes. For improved system availability and reduced fault rates, a management level consisting of leader, assistant and co-assistant is introduced. This reduces the number of required messages and the duration in case of non-initial election. For further reduction of required messages to uniquely determine a leader, the CAN protocol is exploited. The proposed algorithm selects a node with an improved failure rate and a reduced message and hence time complexity while satisfying the safety and termination constraints. The operation of the algorithm is validated using a hardware test setup.
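To illustrate only the core idea of letting CAN arbitration decide an election (not the algorithm proposed in the paper), consider this toy sketch in which each candidate encodes an inverted fitness criterion into its message identifier, so that bitwise arbitration automatically lets the fittest node win.

```python
# Toy illustration: each candidate broadcasts an identifier whose high bits encode
# an inverted fitness criterion; since CAN arbitration favours the numerically
# smallest identifier, the node with the highest criterion prevails.
ELECTION_BASE_ID = 0x100          # assumed message-ID block reserved for elections

def election_id(criterion: int, node_id: int) -> int:
    """Map a 0-255 fitness criterion and an 8-bit node id to a CAN identifier.
    Higher criterion -> numerically smaller ID -> wins CAN arbitration."""
    inverted = 0xFF - (criterion & 0xFF)
    return (ELECTION_BASE_ID << 16) | (inverted << 8) | (node_id & 0xFF)

candidates = {0x01: 180, 0x02: 220, 0x03: 140}       # node_id -> criterion value
winner_frame = min(election_id(c, n) for n, c in candidates.items())
winner_node = winner_frame & 0xFF
print(f"elected leader: node 0x{winner_node:02X}")   # node 0x02 (highest criterion)
```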
Small and medium-sized enterprises (SMEs) also increasingly need effective information technology (IT) management to remain competitive. Compared to large companies, however, SMEs often lack the resources, the employer attractiveness, or the need to employ a full-time Chief Information Officer (CIO). To close this gap, a growing number of experts worldwide have begun to offer CIO services on a part-time basis. In this way, SMEs gain access to experienced and competent IT executives at a fraction of the cost and without long-term commitments. While these so-called "fractional CIOs" already create value in practice, there is hardly any scientific research on this new phenomenon. In a larger research project with a total of 62 fractional CIOs from 10 countries, a definition, types of engagements, and success factors were therefore derived. The present study summarizes these results and relates them to the German market by interviewing three fractional CIOs/CTOs from Germany. It turns out that the following four engagement types of fractional CIOs are useful for SMEs in different situations: strategic IT management, restructuring, scaling, and hands-on support. Furthermore, the study shows that trust, the support of the top management team, and the integrity of the fractional CIO are key factors for the success of fractional CIO engagements. For the German market, the results are largely confirmed by the three interviewed fractional CIOs/CTOs. While the fractional CIOs/CTOs cannot give precise reasons for the low adoption of the role, they emphasize its value potential for the German market.
The average tenure of Chief Information Officers (CIOs) has increased over the past few years. Nevertheless, the average tenure of CIOs is shorter than that of Chief Executive Officers (CEOs). While most studies on tenure and background are based on data from US IT executives, studies on German CIOs are missing. This study analyzes the tenure of German CIOs as a proxy for management effectiveness and how certain factors influence it. An original and unique dataset of 384 IT executives from German companies is examined. The data include the size and industry sector of the companies, the educational and professional backgrounds of the CIOs, and the CIOs' reporting lines. Data were analyzed using the chi-square test and Fisher's exact test. The German CIOs had a median tenure of 4.0 years. However, if we examine executives who are currently in office and executives with a completed term of office separately, the median tenure differs. The results also show that German CIOs do not have shorter tenures than German CEOs. When compared with US CIOs, the results depend on the values selected for comparison. In addition, the analysis shows that neither the size and industry sector of the companies nor the educational and professional backgrounds of the CIOs and the managers to whom the CIOs report have a statistically significant influence on the tenure of IT executives. The factors examined in this study can be considered as preconditions for the CIO position. In the future, factors that play a role during the tenure should be examined.
As part of a research project, a platform for arranging teleconsultations and providing a consultation record is to be connected to the German telematics infrastructure (TI). To achieve both the best possible scalability and optimal integration into existing systems and applications, HL7 FHIR was chosen as the syntactic standard for the rehabilitation consultation record (Reha-Konsil). This document provides a systematic overview of the necessary steps and prerequisites to accomplish this connection.
An immense diversity in bottle types requires high accuracy during sorting for recycling purposes by breweries. This extremely complex and time-consuming procedure can result in enormous additional costs for them. This paper presents transfer learning-based algorithms for classifying beer bottle brands using camera images, applicable in individual sorting solutions for different use cases. The problem is tackled using customised EfficientNet, InceptionResNet and VGG models along with an augmented dataset. In addition, a detailed analysis of different model and parameter combinations is performed, enabling tailor-made technologies for specific conditions and resource limitations. In accompanying validations and subsequent tests, a test accuracy of 100% in the recognition of beer brands could be achieved, proving the proposed method fully contributes to the solution of the problem.
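A hedged sketch of a comparable transfer-learning setup with a Keras EfficientNet backbone is shown below; the class count, image size, and directory layout are placeholders rather than the paper's configuration.

```python
# Illustrative transfer-learning sketch (not the authors' exact models): reuse an
# ImageNet-pretrained EfficientNetB0 as a frozen feature extractor and train a
# small classification head on bottle-brand images.
import tensorflow as tf

NUM_BRANDS = 10                                            # assumed number of beer brands
base = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3))
base.trainable = False                                     # keep ImageNet features frozen at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),  # custom classification head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "bottles/train", image_size=(224, 224), batch_size=32)    # augmented dataset goes here
val_ds = tf.keras.utils.image_dataset_from_directory(
    "bottles/val", image_size=(224, 224), batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```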
Preface QDSM
(2023)
The first international workshop on Quantum Data Science and Management (QDSM), co-located with VLDB 2023, is centered around addressing the possibilities of quantum computing for data science and data management. Quantum computing is a relatively new and emerging field that is believed to have huge computational potential in the future. In the QDSM workshop, we want to provide a venue for discussing and publishing novel results of applying quantum computing to hard data science and data management problems. These problems include join order optimization, designing efficient quantum feature maps, studying possibilities of solving linear programs with quantum algorithms, and divergent index tuning with quantum machine learning. Besides, we include a short and visionary survey on quantum computing for databases. The workshop provides a platform for active discussion on these and related topics.
Recent advances in the manufacture of quantum computers attract much attention over a wide range of fields, as early-stage quantum processing units (QPU) have become accessible. While contemporary quantum machines are very limited in size and capabilities, mature QPUs are speculated to eventually excel at optimisation problems. This makes them an attractive technology for database problems, many of which are based on complex optimisation problems with large solution spaces. Yet, the use of quantum approaches on database problems remains largely unexplored. In this paper, we address the long-standing join ordering problem, one of the most extensively researched database problems. Rather than running arbitrary code, QPUs require specific mathematical problem encodings. An encoding for the join ordering problem was recently proposed, allowing first small-scale queries to be optimised on quantum hardware. However, it is based on a faithful transformation of a mixed integer linear programming (MILP) formulation for JO, and inherits all limitations of the MILP method. Most strikingly, the existing encoding only considers a solution space with left-deep join trees, which tend to yield larger costs than general, bushy join trees. We propose a novel QUBO encoding for the join ordering problem. Rather than transforming existing formulations, we construct a native encoding tailored to quantum systems, which allows us to process general bushy join trees. This makes the full potential of QPUs available for solving join order optimisation problems.
Inverse problems are inherently ill-posed and therefore require regularization techniques to achieve a stable solution. While traditional variational methods have well-established theoretical foundations, recent advances in machine learning-based approaches have shown remarkable practical performance. However, the theoretical foundations of learning-based methods in the context of regularization are still underexplored. In this paper, we propose a general framework that addresses the current gap between learning-based methods and regularization strategies. In particular, our approach emphasizes the crucial role of data consistency in the solution of inverse problems and introduces the concept of data-proximal null-space networks as a key component for their solution. We provide a complete convergence analysis by extending the concept of regularizing null-space networks with data proximity in the visible part. We present numerical results for limited-view computed tomography to illustrate the validity of our framework.
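Assuming the standard null-space network construction from the related literature (generic notation: forward operator A, pseudoinverse A^+, trained network N_theta), the reconstruction has the following schematic form; the data-proximal variant studied in the paper relaxes the exact consistency shown here.

```latex
% Schematic null-space network: the learned correction only acts on the null space
% of A, so the measured (visible) part of the reconstruction stays data-consistent.
x_{\mathrm{rec}}
  \;=\; A^{+} y \;+\; \bigl(\mathrm{Id} - A^{+}A\bigr)\, N_{\theta}\!\bigl(A^{+} y\bigr),
\qquad
A\,x_{\mathrm{rec}} \;=\; A A^{+} y \quad \text{(data consistency on the visible part).}
```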
Given the large number of structures in Germany in need of retrofitting and the simultaneously increasing shortage of skilled workers in the construction industry, it is important to create efficient planning and construction processes. Digital and automated working methods based on digital building models are therefore beginning to establish themselves. For many existing structures, however, these methods are adopted only hesitantly, because the digital building model required as a basis would first have to be created at great expense. Modern artificial intelligence (AI) techniques such as machine learning and deep learning are currently driving many areas of research and industry. Their use can also offer solutions for complex or repetitive tasks in the construction industry.
In this work, a tool is designed and evaluated that is able to reconstruct three-dimensional (3D) building models from existing drawings using AI techniques. During the design phase, one of the major challenges of using machine learning became apparent: to train AI systems in a supervised manner, so-called labeled training data are required, in this specific case general arrangement drawings in which it is stored in machine-readable form which types of building components are shown at which positions of the drawing. With such data, an AI can learn to analyze unknown drawings independently and to classify the building components in them. These data can either be created manually at great expense, or synthetic data can be used instead. Synthetic data are artificially created data that imitate real data in their structure and properties. Since the generation process is controlled, the required labels can be produced at the same time. This yields so-called ground truth data, i.e., data that are guaranteed to be labeled correctly. For this work, a data pipeline was implemented that is able to generate synthetic data specifically for the purposes of the construction industry. Using the application programming interface of Autodesk Revit, a C# application was programmed that generates models in a randomized manner and derives realistic drawings from these models. In parallel, the labeled data required for the AI are generated using the component information stored in the model. Initial test runs with the object detection framework YOLOv5 were promising. The chosen approach has thus proven to be flexible and scalable for training AI systems for construction-specific tasks such as component detection on drawings.
Aim of the study:
The aim of the study is to measure the state of digitalization in rehabilitation facilities and the opportunities and challenges associated with connecting them to the telematics infrastructure.
Methods:
Semi-standardized online survey among operators of rehabilitation facilities in Bavaria (n=33). The questionnaire comprises 36 questions and includes a slightly modified scale based on the Electronic Medical Record Adoption Model (EMRAM).
Results:
In 70 percent of the rehabilitation facilities, the degree of digitalization was reported as level 0 (on a scale of up to 7 levels). Incoming and outgoing patient-related data are frequently transmitted in analog form, whereas processing within the facility is in many cases already predominantly digital. Connecting to the telematics infrastructure is expected to require considerable effort for installation as well as for staff training and the adaptation of work organization.
Conclusion:
Changes in the legal and financial situation in Germany open up new opportunities for rehabilitation facilities to advance digitalization. Obstacles relate to IT security requirements, staff training, and the equally low level of digitalization among hospitals, physicians, and patients, which complicates digital data transmission.
The Internet of Things (IoT) is an emerging computing paradigm providing new approaches to collect and analyze environmental data. However, as specific challenges arose, the paradigm of Edge Computing emerged with the potential to address them. The combination of both paradigms is currently widely discussed in industry and research. This paper aims to contribute to this field by conducting a systematic literature review to examine the differences and relation between IoT and Edge Computing on a meta-level. It first investigates conceptual backgrounds, use cases, and implementation types. After that, the differences between the paradigms are highlighted. It becomes clear that the most significant distinction lies in the architectural composition. However, the scientific consensus reveals that both paradigms share a common historical background and that Edge Computing is perceived as the next step in the evolution of IoT. Furthermore, Edge Computing-based systems can address common IoT challenges identified in the two paradigms’ problem-solution space. Ultimately, there is a need for further research on security, edge intelligence, and standardization, and on Edge Computing frameworks able to address these issues in practice.
Lean IT
(2023)
Companies have applied Lean Management and its methods in their production functions for several decades. They also increasingly use Lean Management to improve service delivery, for example in their IT organizations, which is referred to as “Lean IT”. Lean IT finds widespread recognition in business practice, but corresponding academic research is still scarce. The paper at hand intends to shed light on the current perspectives of Lean IT from both an academic and a practitioner point of view. The paper applies an innovative quantitative approach to literature analysis, using a semantic entity annotator and a keyword analysis to systematically identify and compare the topics academics and practitioners deem relevant in the context of Lean IT. We analyze practitioner media and scholarly articles published from January 2014 to June 2019. The analysis shows that research does not seem to adequately address the topics that are highly relevant for practitioners when it comes to Lean IT; for example, issues pertinent to automation, DevOps, the role of the CIO, IT Service Management, or Scrum in the context of Lean IT are under-researched. Our analysis further shows that interest in Lean IT as a field is rising in both groups. Our study can help to guide further research activities.
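In its simplest form, the kind of keyword comparison described could look like the following sketch (illustrative only; the study’s semantic entity annotator, keyword list, and corpora are not reproduced here):

# Minimal sketch: compare keyword frequencies between practitioner and academic corpora.
# Illustrative only; keyword list and example texts are hypothetical.

import re
from collections import Counter

def keyword_counts(documents, keywords):
    """Count how often each keyword occurs across a list of text documents."""
    counts = Counter()
    for text in documents:
        tokens = re.findall(r"[a-z][a-z\-]+", text.lower())
        for kw in keywords:
            counts[kw] += tokens.count(kw)
    return counts

keywords = ["devops", "automation", "scrum", "cio", "kanban"]
practitioner_docs = ["DevOps and automation dominate Lean IT discussions ..."]
academic_docs = ["Lean IT research focuses on value stream mapping ..."]

practitioner = keyword_counts(practitioner_docs, keywords)
academic = keyword_counts(academic_docs, keywords)
for kw in keywords:
    print(f"{kw:12s} practitioner={practitioner[kw]:3d} academic={academic[kw]:3d}")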
Business process improvement (BPI) is of high priority for practitioners. However, the most value-adding phase of a BPI project, the actual “act of improvement”, is insufficiently supported despite the many existing methods and techniques. To date, it remains largely unclear to what degree existing BPI techniques support each other and are interrelated. The purpose of this paper is therefore to investigate the functional interdependencies between BPI techniques in order to better understand the beneficial synergies between them and to provide a basis for purposefully combining them within projects. Based on the functional interdependencies, a graphical “Functional Interdependency Map” is developed and its usability is demonstrated in an experiment. The paper is valuable for academics and practitioners alike because the impact of BPI on organizational performance is high.
MongoDB kompakt
(2023)
The document database MongoDB is the most widely used NoSQL database system. This compact book presents MongoDB from installation through to administration in large compute clusters. You will learn how to insert, search, modify, and delete JSON documents in collections, how to use replication and sharding effectively, and how to perform complex analyses with the aggregation pipeline.
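The operations listed above can be illustrated with a short, hypothetical snippet using the Python driver pymongo (connection string, database, and field names are placeholders, not taken from the book):

# Illustrative sketch of the operations covered (insert, query, update, delete,
# aggregation) using the pymongo driver; connection details are placeholders.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
books = client["library"]["books"]

# Insert a JSON-like document into a collection
books.insert_one({"title": "MongoDB kompakt", "year": 2023, "tags": ["NoSQL"]})

# Query, update, and delete documents
print(books.find_one({"year": 2023}))
books.update_one({"title": "MongoDB kompakt"}, {"$set": {"pages": 120}})
books.delete_many({"year": {"$lt": 2000}})

# Simple aggregation pipeline: count titles per year
pipeline = [{"$group": {"_id": "$year", "count": {"$sum": 1}}}]
for row in books.aggregate(pipeline):
    print(row)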
This study uses holistic models of image perception to analyze and interpret eye movements during a code review. 23 participants (15 novices and 8 experts) take part in the experiment. The subjects’ task is to review six short code examples written in the C programming language and to identify possible errors. During the experiment, their eye movements are recorded by an SMI 250 REDmobile. Additional data is collected through questionnaires and retrospective interviews. The results indicate that holistic models of image perception provide a suitable theoretical background for the analysis and interpretation of eye movements during code reviews. The assumptions of these models are particularly evident for expert programmers. Their approach can be divided into different phases with characteristic eye movement patterns. It is best described as switching between scans of the code example (global viewing) and the detailed examination of errors (focal viewing).
Nowadays, learning management systems are widely employed across educational institutions to instruct students as a result of the increase in online usage. However, today’s learning management systems provide learning paths without personalizing them to the characteristics of the learner. Research therefore currently concentrates on employing AI-based strategies to personalize these systems. However, there are many different AI algorithms, making it challenging to determine which ones are best suited to account for the many different features of learner data and learning contents. This paper conducts a systematic literature review to discuss the AI-based methods that are frequently used to identify learner characteristics, organize learning contents, and recommend learning paths, and to highlight their advantages and disadvantages.