Fakultät Elektro- und Informationstechnik
Context:
Causal probabilistic graph-based models have gained widespread utility, enabling the modeling of cause-and-effect relationships across diverse domains. With their rising adoption in new areas, such as safety analysis of complex systems, software engineering, and machine learning, the need for an integrated lifecycle framework akin to DevOps and MLOps has emerged. Currently, such a reference for organizations interested in employing causal engineering is missing. This lack of guidance hinders the incorporation and maturation of causal methods in the context of real-life applications.
Objective:
This work contextualizes causal model usage across different stages and stakeholders and outlines a holistic view of creating and maintaining them within the process landscape of an organization.
Method:
A novel lifecycle framework for causal model development and application called CausalOps is proposed. By defining key entities, dependencies, and intermediate artifacts generated during causal engineering, a consistent vocabulary and workflow model to guide organizations in adopting causal methods are established.
Results:
Based on the early adoption of the discussed methodology to a real-life problem within the automotive domain, an experience report underlining the practicability and challenges of the proposed approach is discussed.
Conclusion:
It is concluded that besides current technical advancements in various aspects of causal engineering, an overarching lifecycle framework that integrates these methods into organizational practices is missing. Although diverse skills from adjacent disciplines are widely available, guidance on how to transfer these assets into causality-driven practices still needs to be addressed in the published literature. CausalOps’ aim is to set a baseline for the adoption of causal methods in practical applications within interested organizations and the causality community.
In higher education, improving learning and learning success is a general goal: lecturers teach content, and students should acquire that content efficiently. To structure such content, learning element categories are evaluated from the students' point of view in the higher education area. The aim is to validate given definitions of ten learning element categories within a Learning Management System (LMS).
This paper evaluates a categorization of learning elements for organizing learning content in online education within LMSs. To this end, ten categories of learning elements and corresponding definitions were defined in previous work as the basis for this paper. The learning elements to examine are manuscript, exercise, quiz, brief overview, learning goal, summary, collaboration tool, auditory additional material, textual additional material, and visual additional material. To validate the definitions and to gather improvements for each learning element, a survey was conducted. Besides demographic questions, the survey contains two questions on the acceptance of the definitions and asks for suggested improvements. 148 students between the ages of 19 and 35 participated in the survey in the summer term of 2023. The education level of the participants ranges from undergraduates to Ph.D. students.
The results show that more than 80% of the participants accept the given definitions. Some definitions of the learning elements were changed, but the changes are restricted to additions of at most four words. This categorization of learning elements could lead to improvements in learning by giving the content more structure. With this structure, students get the possibility to learn with their preferred learning elements, which could lead to more learning success and a decreasing dropout rate at universities. In the future, the learning elements will allow content within LMSs to be classified with the goal of generating individual learning paths. Furthermore, our project will integrate these learning elements, use them to generate learning paths, and could thereby set a new standard in personalized learning.
In the field of software engineering, graph-based models are used for a variety of applications. Usually, the layout of those graphs is determined at the discretion of the user. This article empirically investigates whether different layouts affect the comprehensibility or popularity of a graph and whether one can predict the perception of certain aspects in the graph using basic graphical laws from psychology (i.e., Gestalt principles). Data on three distinct layouts of one causal graph is collected from 29 subjects using eye tracking and a print questionnaire. The evaluation of the collected data suggests that the layout of a graph does matter and that the Gestalt principles are a valuable tool for assessing partial aspects of a layout.
C is one of the most widely used programming languages; MISRA C is one of the best-known sets of coding guidelines for C. This paper examines the usefulness and comprehensibility of the MISRA C:2012 guidelines in an eye tracking study. There, subjects encounter non-compliant code in four different code review settings: with no additional reference, with an actual MISRA C guideline, with a case-specific interpretation of a MISRA C guideline, and with a compliant version of the code. The data collected was analyzed not only in terms of the four presentation styles, but also by dividing the subjects into experience levels based on their semesters of study or years of work experience. Regarding the difference between actual and interpreted guidelines, we found that for interpreted guidelines the error detection rate is higher, whereas the duration and frequency of visits to the guideline itself are mainly lower. This suggests that the actual guidelines are less useful and more difficult to understand. The former is contradicted by the subjects’ opinions: when surveyed, they rated the usefulness of the actual guidelines higher.
As humans, we tend to use models to describe reality. Modeling languages provide the formal frameworks for creating such models. Usually, the graphical design of individual model elements is based on subjective decisions; their suitability is determined at most by the prevalence of the modeling language. In other words: there is no objective way to compare different designs of model elements. The present paper addresses this issue: it introduces a systematic approach for evaluating the elements of graph-based modeling languages comprising 14 criteria – derived from standards, usability analyses, or the design theories ‘Physics of Notations’ and ‘Cognitive Dimensions of Notations’. The criteria come with measurement procedures and evaluation schemes based on reasoning, eye tracking, and questioning. The developed approach is demonstrated with a specific use case: three distinct sets of node elements for causal graphs are evaluated in an eye tracking study with 41 subjects.
The dropout rate at universities has been very high for years. A decisive factor is students' inexperience and lack of knowledge in finding individual learning paths through various courses of study. Adaptive learning management systems are suitable countermeasures: learners' learning styles are classified using questionnaires or computationally intensive algorithms before a learning path is suggested accordingly. In this paper, a study design for student learning style classification using eye tracking is presented. Furthermore, qualitative and quantitative analyses clarify certain relationships between students' eye movements and learning styles. With the help of classification based on eye tracking, filling out questionnaires or integrating computationally or cost-intensive algorithms can be made redundant in the future.
This study examines how Klingsieck’s LIST-K questionnaire [22] can be shortened and adapted to the requirements of an online learning management system. In a study with 213 participants, the questionnaire is subjected to an exploratory factor analysis. In the next step, the results are evaluated in terms of their reliability. This process creates a modified factor structure for the LIST-K, comprising a total of eight factors. The reliability of the modified questionnaire is at an α of .770. The shortened version of the LIST-K questionnaire is currently being used on an experimental basis in different courses.
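As an illustration of this kind of statistical workflow, the following Python sketch runs an exploratory factor analysis and computes Cronbach's alpha on Likert-style item responses. The eight-factor solution mirrors the abstract, while the file name, column layout, and the use of scikit-learn are assumptions for demonstration, not the authors' actual pipeline.

```python
# Hypothetical sketch: exploratory factor analysis of Likert item responses
# followed by a Cronbach's alpha reliability check.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sum)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

responses = pd.read_csv("list_k_responses.csv")          # one row per participant
fa = FactorAnalysis(n_components=8, rotation="varimax")  # eight-factor solution
fa.fit(responses.values)
loadings = pd.DataFrame(fa.components_.T, index=responses.columns)

print(loadings.round(2))                           # which items load on which factor
print(f"alpha = {cronbach_alpha(responses):.3f}")  # the paper reports .770
```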
Over the past decade, cars have gradually turned into true cyber-physical systems. The collaboration of services between service-oriented, cross-enterprise vehicle application frameworks has increased to generate novel, smart and complicated vehicle services. Consequently, from an interoperability perspective, the semantic mapping of vehicle service components' interface ontological models has emerged as a major research interest in the automotive application domain, which involves several cross-enterprise synergy knowledge application frameworks. Ontological metamodeling lays the foundation for building semantic bridges and exploring semantic associations between service components' interface models based on the domain knowledge for semantic interoperability. Also, several semantic quality metrics have been defined over time for vehicle service interface ontological metamodels. The empirically evaluated values of these metrics can be used to assess progress in cross-enterprise interoperability between the service and client APIs' ontological models in the vehicle domain. Despite the potential benefits of semantic alignment quality metrics, their effective use for vehicle service interface ontologies has proven elusive. Such metrics can be used successfully for quantification, but they mostly fail to provide adequate annotations for subsequent decision-making in the direction of semantic interoperability and reusability. The effective use of ontology semantic alignment quality metrics is essentially hindered by the absence of meaningful thresholds. In fact, the absence of an effective and meaningful threshold for the semantic similarity measure between various vehicle service interface ontological metamodels motivates this research work, which not only proposes a design approach to an optimized threshold for the semantic similarity metrics but also applies this threshold to a few defined semantic alignment quality metrics. This paper also uses a real-world vehicle domain industrial case study to illustrate the design approach.
This study investigates the impact of eye movement modeling examples in Software Engineering education. Software Engineering is a highly visual domain. The daily tasks of a software engineer (e.g., formulating requirements, creating UML diagrams, or conducting a code review) require in many cases the use of certain visual strategies. Although these strategies can be found for experts, it has been observed in different eye tracking studies that students have difficulties in learning and applying them. To familiarize students with these visual strategies and to provide them with a better understanding of the cognitive processes involved, a total of seven eye movement modeling examples were created. The seven eye movement modeling examples cover relevant parts of an introductory Software Engineering lecture; they are focused on typical situations in which visual strategies are applied. The results of a questionnaire-based evaluation show that students consider the eye movement modeling examples useful, feel supported in their learning process, and would like to see more use of them in the Software Engineering lecture. Furthermore, the students suggested that eye movement modeling examples should also be used in other lectures.
Nowadays, learning management systems are widely employed in educational institutions to instruct students as a result of the increase in online usage. Today’s learning management systems provide learning paths without personalizing them to the characteristics of the learner. Therefore, current research concentrates on employing AI-based strategies to personalize these systems. However, there are many different AI algorithms, making it challenging to determine which ones are best suited for taking into account the many different features of learner data and learning contents. This paper conducts a systematic literature review in order to discuss the AI-based methods that are frequently used to identify learner characteristics, organize learning contents, and recommend learning paths, and to highlight their advantages and disadvantages.
This paper assesses the relation between personality, demographics, and learning style. Hence, data is collected from 200 participants using 1) the BFI-10 to obtain the participant’s expression of personality traits according to the five-factor model, 2) the ILS to determine the participant’s learning style according to Felder and Silverman, and 3) a demographic questionnaire. From the obtained data, we train and evaluate a Bayesian network. Using Bayesian statistics, we show that age and gender slightly influence personality and that demographics as well as personality have at least a minor effect on learning styles. We also discuss the limitations and future work of the presented approach.
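A minimal sketch of such a network using the pgmpy library is shown below; the node names, states, edge structure, and data file are illustrative assumptions, not the authors' actual model.

```python
# Illustrative Bayesian network: demographics influence personality,
# both influence learning style. Fit from discretized questionnaire data.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

model = BayesianNetwork([
    ("Age", "Extraversion"),
    ("Gender", "Extraversion"),
    ("Age", "LearningStyle"),
    ("Extraversion", "LearningStyle"),
])

data = pd.read_csv("survey.csv")  # hypothetical discretized answers
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Query the learning-style distribution for a given demographic profile.
infer = VariableElimination(model)
posterior = infer.query(["LearningStyle"], evidence={"Age": "18-25", "Gender": "f"})
print(posterior)
```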
An immense diversity in bottle types requires high accuracy when breweries sort bottles for recycling purposes. This extremely complex and time-consuming procedure can result in enormous additional costs for them. This paper presents transfer learning-based algorithms for classifying beer bottle brands using camera images, applicable in individual sorting solutions for different use cases. The problem is tackled using customised EfficientNet, InceptionResNet and VGG models along with an augmented dataset. In addition, a detailed analysis of different model and parameter combinations is performed, enabling tailor-made technologies for specific conditions and resource limitations. In accompanying validations and subsequent tests, a test accuracy of 100% in the recognition of beer brands could be achieved, proving that the proposed method fully contributes to the solution of the problem.
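The following Keras sketch shows what a transfer-learning setup of this kind can look like; the backbone variant (EfficientNetB0), image size, class count, and dataset path are assumptions for illustration, not the paper's exact configuration.

```python
# Transfer learning sketch: frozen ImageNet backbone plus a small new head.
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(12, activation="softmax"),  # e.g. twelve bottle brands
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train = tf.keras.utils.image_dataset_from_directory(
    "bottles/train", image_size=(224, 224), batch_size=32)
model.fit(train, epochs=10)
```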
In modern vehicles, system complexity and technical capabilities are constantly growing. As a result, manufacturers and regulators are both increasingly challenged to ensure the reliability, safety, and intended behavior of these systems. With current methodologies, it is difficult to address the various interactions between vehicle components and environmental factors. However, model-based engineering offers a solution by abstracting reality and enhancing communication among engineers and stakeholders. Applying this method requires a model format that is machine-processable, human-understandable, and mathematically sound. In addition, the model format needs to support probabilistic reasoning to account for incomplete data and knowledge about a problem domain. We propose structural causal models as a suitable framework for addressing these demands. In this article, we show how to combine data from different sources into an inferable causal model for an advanced driver-assistance system. We then consider the developed causal model for scenario-based testing to illustrate how a model-based approach can improve industrial system development processes. We conclude this paper by discussing the ongoing challenges to our approach and providing pointers for future work.
In educational research, non-personalized learning content increases learners' cognitive load, causing them to lower their performance and sometimes drop out of the course. Personalizing learning content with learners’ unique characteristics, like learning styles, personality traits, and learning strategies, is suggested to improve learners’ success. Several theories exist for assessing learners’ unique characteristics. By the end of 2020, 71 learning style theories had been formulated, and research has shown that combining multiple learning style theories to recommend learning paths yields better results. As of the end of 2022, no single study demonstrates a relationship between the Index of Learning Styles (ILS) based Felder-Silverman learning style model (FSLSM) dimensions, Big Five (BFI-10) based personality traits, and the Learning strategies in studying (LIST-K) based learning strategies factors for personalizing learning content.
In this paper, an innovative approach is proposed to estimate the relationship between these theories and map the corresponding learning elements to create personalized learning paths. Respective questionnaires were distributed to 297 higher education students for data collection. A three-step approach was formulated to estimate the relationship between the models. First, a literature search was conducted to find existing studies. Then, an expert interview was carried out with a group consisting of a software engineering education research professor, three doctoral students, and two master’s students. Finally, the correlations between the students' questionnaire responses were calculated. To achieve this, a Bayesian Network was built with expert knowledge from the three-step approach, and the weights were learned from the collected data. The probability of individual FSLSM learning style dimensions was estimated for a new test sample. Based on the literature, the learning elements were mapped to the respective FSLSM learning style dimensions and initiated as learning paths for the learners.
As next steps, we propose to extend this framework and dynamically recommend learning paths in real time. In addition, the individual levels of learning style dimensions, personality traits, and learning strategies can be considered to improve the recommendations. Further, using probabilities for mapping learning elements to learning styles can increase the chance of initiating multiple learning paths for an individual learner.
In recent years, the semantic mapping of application software components’ ontologies has emerged as a major research challenge in the automotive application domain, which manipulates several cross-enterprise synergy knowledge application frameworks. The same knowledge formalized by different experts in different vehicle application frameworks leads to heterogeneous representations of components’ interface data. Consequently, this causes the most daunting impediment to semantic interoperability between the service components in cooperative automotive systems. From a modeling perspective, in the absence of standardized domain-based unified modeling techniques, the orchestration and resolution of semantic data interoperability between various vehicle application frameworks’ components’ interface models remain a challenge. However, this challenge can be addressed using ontological metamodeling by specifying semantic associations between components’ interface model concepts based on the domain knowledge. Apart from the semantic mapping of interface ontological metamodels, this work also defines quality metrics to determine the degree of semantic alignment achieved between the various interface ontologies. Additionally, to reduce development time and cost towards semantic interoperability, this work proposes a semi-automated plugin tool that applies the evaluated quality metrics to the semantic mapping of real-world components’ interface models.
This study uses holistic models of image perception to analyze and interpret eye movements during a code review. 23 participants (15 novices and 8 experts) take part in the experiment. The subjects’ task is to review six short code examples in the C programming language and identify possible errors. During the experiment, their eye movements are recorded by an SMI 250 REDmobile. Additional data is collected through questionnaires and retrospective interviews. The results indicate that holistic models of image perception provide a suitable theoretical background for the analysis and interpretation of eye movements during code reviews. The assumptions of these models are particularly evident for expert programmers. Their approach can be divided into different phases with characteristic eye movement patterns. It is best described as switching between scans of the code example (global viewing) and the detailed examination of errors (focal viewing).
This paper presents the results of a data collection with the LIST-K questionnaire. This questionnaire measures students’ learning strategies and shows which strategies are particularly dominant or rather weak.
Learning strategies have long been a major area of research in educational science and psychology. In these disciplines, learning strategies are understood as intentional behaviors and cognitive skills that learners employ to effectively complete learning tasks, by selecting, acquiring, organizing, and integrating information into their existing knowledge for long-term retention.
The LIST-K, developed by Klingsieck in 2018, was chosen for assessing learning strategies due to its thematic suitability, widespread use, and test economy. It covers a total of four main categories (i.e., cognitive strategies, metacognitive strategies, management of internal resources, and management of external resources), each of which is subdivided into further subscales. With a total of 39 items answered via a 5-step Likert scale, the LIST-K covers the topic relatively comprehensively while still being completed in a reasonable amount of time of approximately 10 minutes.
The LIST-K was used as part of a combined data collection along with further questionnaires on participants' personal data, their preferences regarding certain learning elements, their learning style (i.e., the ILS), and their personality (i.e., the BFI-10). A total of 207 students from different study programs participated via an online survey created using the survey tool "LimeSurvey". Participation in the study was voluntary, anonymous, and in compliance with the GDPR.
Overall, the results of the LIST-K show that students are willing to work intensively on relevant topics and to perform beyond the requirements of the course by seeking additional learning material. At the same time, however, it is apparent that the organization of their own learning process could still be improved. For example, students start repeating content too late (mean=2.70; SD=0.92), do not set goals for themselves, and do not create a learning plan (mean=3.19; SD=0.90). They also learn without a schedule (mean=2.23; SD=0.97) and miss opportunities to learn together with other students (mean=3.17; SD=0.94).
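For illustration, subscale statistics of this kind can be computed from raw 5-point Likert answers as in the following short sketch; the item-to-subscale mapping and file name are hypothetical.

```python
# Sketch: per-participant subscale scores and their mean/SD from Likert data.
import pandas as pd

answers = pd.read_csv("list_k_answers.csv")  # 39 items, values 1..5
subscales = {
    "goal_setting": ["item_07", "item_08", "item_09"],  # hypothetical items
    "time_management": ["item_21", "item_22"],
}
for name, items in subscales.items():
    scores = answers[items].mean(axis=1)  # per-participant subscale score
    print(f"{name}: mean={scores.mean():.2f}, SD={scores.std(ddof=1):.2f}")
```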
The findings of the data collection will be used to create an AI-based adaptive learning management system that will create individualized learning paths for students in their respective courses. From the results of the LIST-K, it appears that the adaptive learning management system should primarily support organizational aspects of student learning. Even small impulses (an individual schedule of when to learn what or a hierarchical structuring of the learning material) could help students to complete their courses more successfully and improve their learning.
Universities have been faced with a rising number of dropouts in recent years. This is largely due to students' limited capability of finding individual learning paths through various course materials. However, a possible solution to this problem is the introduction of adaptive learning management systems, which recommend tailored learning paths to students based on their individual learning styles. For the classification of learning styles, the most commonly used methods are questionnaires and learning analytics. Nevertheless, both methods are prone to errors: questionnaires may elicit superficial answers due to a lack of time or motivation, while learning analytics do not reflect offline learning behavior. This paper proposes an alternative approach to classify students' learning styles by integrating eye tracking in combination with Machine Learning (ML) algorithms.
Incorporating eye tracking technology into the classification process eliminates the potential problems arising from questionnaires or learning analytics by providing a more objective and detailed analysis of the subject's behavior. Moreover, this approach allows for a deeper understanding of subconscious processes and provides valuable insights into the individualized learning preferences of students.
In order to demonstrate this approach, an eye tracking study is conducted with 117 participants using the Tobii Pro Fusion. Using qualitative and quantitative analyses, certain patterns in the subjects' gaze behavior are assigned to their learning styles given by the validated Index of Learning Styles (ILS) questionnaire.
In short, this paper presents an innovative solution to the challenges associated with classifying students' learning styles. By combining eye tracking data with ML algorithms, an accurate and insightful understanding of students' individual learning paths can be achieved, ultimately leading to improved educational outcomes and reduced dropout rates.
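A minimal sketch of the proposed combination, assuming hypothetical aggregated gaze features and scikit-learn as the ML toolkit, could look as follows:

```python
# Sketch: classify one ILS dimension from aggregated per-participant gaze
# features. Feature names and the data file are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("gaze_features.csv")  # one row per participant
features = data[["mean_fixation_ms", "fixation_count",
                 "saccade_len_px", "revisit_rate"]]
labels = data["ils_visual_verbal"]       # e.g. 'visual' vs. 'verbal'

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, labels, cv=5).mean())  # CV accuracy
```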
Eye tracking has proven to be a powerful tool in a variety of empirical research areas; hence, it is steadily gaining attention. Driven by the expanding frontiers of Artificial Intelligence and its potential for data analysis, eye tracking technology offers promising applications in diverse fields, from usability research to cognitive research. The education sector in particular can benefit from the increased use of eye tracking technology - both indirectly, for example by studying the differences in gaze patterns between experts and novices to identify promising strategies, and directly by using the technology itself to teach in future classrooms.
As with any empirical method, the results depend directly on the quality of the data collected. That raises the question of which parameters educators or researchers can influence to maximize the data quality of an eye tracker. This is the starting point of the present work: In an empirical study of eye tracking as an (educational) technology, we systematically examine factors that influence the data quality, such as illumination, sampling frequency, and head orientation - parameters that can be varied without much additional effort in everyday classroom or research use - using two human subjects, an artificial face, and the Tobii Pro Spectrum.
We rely on metrics derived from the raw gaze data, such as accuracy or precision, to measure data quality. From the obtained results, we derive practical advice for educators and researchers, such as using the lowest sampling frequency appropriate for a certain purpose. Thereby, this research fills a gap in the current understanding of eye tracker performance and, by offering best practices, enables researchers or teachers to produce data of the highest quality possible and therefore the best results when using eye trackers in laboratories or future classrooms.
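The two metrics mentioned have standard definitions that are easy to state in code: accuracy as the mean offset of gaze samples from a known target, and precision as the root mean square of sample-to-sample distances (RMS-S2S). The sketch below assumes gaze coordinates in degrees of visual angle and toy sample values.

```python
# Standard eye tracking data-quality metrics for a fixation on a known target.
import numpy as np

def accuracy(gaze: np.ndarray, target: np.ndarray) -> float:
    """Mean Euclidean offset of gaze samples from the true target position."""
    return np.linalg.norm(gaze - target, axis=1).mean()

def precision_rms(gaze: np.ndarray) -> float:
    """RMS of distances between successive samples (RMS-S2S)."""
    d = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    return np.sqrt(np.mean(d ** 2))

gaze = np.array([[10.1, 5.2], [10.0, 5.1], [10.2, 5.3]])  # toy samples (deg)
target = np.array([10.0, 5.0])
print(accuracy(gaze, target), precision_rms(gaze))
```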
The Logical Execution Time (LET) paradigm has recently been integrated into multi-core automotive systems to ensure timing and dataflow determinism. Although buffering mechanisms are introduced to incorporate LET semantics, they do not guarantee that tasks are executed within their LET frames. In fact, LET and buffering semantics are violated if scheduling is not designed to execute all tasks within their LET frames and in a specific order. In this paper, we describe a scheduling synthesis technique for Fixed-Priority Scheduling (FPS) to achieve resource-efficient execution of LET systems. The proposed approach considers LET semantics, scheduling overheads, and delays caused by operating system operations, and provides the possibility to optimize the schedule with respect to aspects like scheduling overheads. Our performance and feasibility evaluation shows that the proposed algorithm provides results in a reasonable amount of time for models of complex industrial applications. Thus, the integration of the proposed algorithm into an automated process is of high benefit to accelerate the development of vehicle applications.
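A much-simplified sketch of the feasibility question behind such a synthesis (does every task's worst-case response time fit inside its LET frame under fixed-priority scheduling?) is shown below. It uses the classic uniprocessor response-time recurrence and ignores the overheads, OS delays, and ordering constraints the paper's technique accounts for; all task parameters are toy values.

```python
# Toy check: worst-case response time vs. LET frame length under FPS.
import math

def response_time(C, T, i):
    """Classic FPS recurrence: R = C_i + sum over higher-priority j of ceil(R/T_j)*C_j."""
    R = C[i]
    while True:
        nxt = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if nxt == R:
            return R
        if nxt > T[i]:  # does not converge within the period: infeasible
            return nxt
        R = nxt

C = [1, 2, 3]      # WCETs, index 0 = highest priority
T = [5, 10, 20]    # activation periods
LET = [5, 10, 20]  # LET frame lengths (here equal to the periods)
for i in range(len(C)):
    R = response_time(C, T, i)
    print(f"task {i}: R={R}, fits its LET frame: {R <= LET[i]}")
```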
Today's cyberphysical systems are increasingly prone to misuse. To secure existing and future software systems, introducing concepts of IT-Security and Secure Software Engineering (SecSE) in Software Engineering (SE) courses is essential for academic education of future software engineers. This is not only important for computer science students, but also for engineering students studying topics of computing and SE. However, only little research exists on integrating these topics into traditional SE courses, especially for engineering students in non-computer science majors. To narrow this gap, this paper contributes with the design and evaluation of an exercise on modeling misuse cases alongside use cases, based on the inductive teaching method problem-based learning (PBL). The exercise is part of an educational design research investigating which learning content and teaching methods are suitable for integrating IT-Security and SecSE topics into traditional SE education of engineering students to convey factual knowledge as well as raise awareness and interest for both topics during software development. We present the integration of the exercise design into a traditional SE course for engineering students and its evaluation to examine its suitability. We evaluated the exercise design regarding the suitability of the design components, the learning content of misuse cases and the intended learning goals as well as its impact on students' motivation, and their interest in IT-security. The paper then presents indications on the feasibility and success of the exercise design for teaching misuse cases to engineering students and sparking their interest in IT-Security.
Lately, parallel task models have received much attention in the development of real-time multiprocessor systems, as they allow highly compute-intensive tasks to have shorter deadlines, which is very much required in modern reactive systems. However, missing modularity and portability can make parallel programming a cumbersome endeavor. As a consequence, compute-intensive sectors in the desktop and server segment have relied on parallelism frameworks such as Intel Threading Building Blocks, Cilk and OpenMP. These parallelism frameworks, however, are optimized for decent average-case performance and consequently do not meet the strict requirements imposed by real-time systems.
In this paper, we present a proof-of-concept parallelism framework which was implemented particularly for soft real-time systems, with the tight timing and safety requirements of such critical systems in mind. The proposed runtime system implements static memory allocation in a work-stealing environment that conforms to the strict space and tight probabilistic time bounds of work-stealing schedulers. Furthermore, we evaluate the performance of this framework by conducting multiprogrammed benchmarks on a real-time embedded multicore architecture.
Supervisory Control and Data Acquisition (SCADA) systems are used to control and monitor components within the energy grid, playing a significant role in the stability of the system. As a part of critical infrastructures, components in these systems have to fulfill a variety of different requirements regarding their dependability and must also undergo strict audit procedures in order to comply with all relevant standards. This results in a slow adoption of new functionalities. Due to the emerged threat of cyberattacks against critical infrastructures, extensive security measures are needed within these systems to protect them from adversaries and ensure a stable operation. In this work, a solution is proposed to integrate extensive security measures into current systems. By deploying additional security-gateways into the communication path between two nodes, security features can be integrated transparently for the existing components. The developed security-gateway is compliant to all regulatory requirements and features an internal architecture based on the separation-of-concerns principle to increase its security and longevity. The viability of the proposed solution has been verified in different scenarios, consisting of realistic field tests, security penetration tests and various performance evaluations.
The modular addition is used as a non-linear operation in ARX ciphers because it achieves the requirement of introducing non-linearity in a cryptographic primitive while only taking one clock cycle to execute on most modern architectures. This makes ARX ciphers especially fast in software implementations, but comes at the cost of making it harder to protect against side-channel information leakages using Boolean masking: the best known 2-shares masked adder for ARM Thumb micro-controllers takes 83 instructions to add two 32-bit numbers together. Our approach is to operate in bitsliced mode, performing 32 additions in parallel on a 32-bit microcontroller. We show that, even after taking into account the cost of bitslicing before and after the encryption, it is possible to achieve a higher throughput on the tested ciphers (CRAX and ChaCha20) when operating in bitsliced mode. Furthermore, we prove that no first-order information leakage is happening in either simulated power traces or power traces acquired from real hardware, after sufficient countermeasures are put into place to guard against pipeline leakages.
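The bitsliced mode of operation can be illustrated in a few lines: with operands stored bit-plane-wise, a software ripple-carry adder built only from XOR/AND/OR performs 32 independent additions in parallel, which is the representation that Boolean masking protects more cheaply than a native ADD. The Python sketch below demonstrates the principle, not the masked ARM implementation itself.

```python
# Bitsliced addition: plane k holds bit k of all 32 lane values, so each
# bitwise operation below processes 32 additions at once.
MASK = 0xFFFFFFFF

def bitsliced_add(a_planes, b_planes):
    """Ripple-carry adder over bit planes, LSB plane first."""
    carry, out = 0, []
    for a, b in zip(a_planes, b_planes):
        out.append(a ^ b ^ carry)
        carry = ((a & b) | (carry & (a ^ b))) & MASK
        # in a masked implementation, each AND/OR here is a masked gadget
    return out

def to_planes(values):
    """Transpose 32 lane values into 32 bit planes (the bitslicing step)."""
    return [sum(((v >> k) & 1) << lane for lane, v in enumerate(values))
            for k in range(32)]

def from_planes(planes):
    return [sum(((p >> lane) & 1) << k for k, p in enumerate(planes)) & MASK
            for lane in range(32)]

vals_a = list(range(32))
vals_b = [v * 3 + 1 for v in range(32)]
result = from_planes(bitsliced_add(to_planes(vals_a), to_planes(vals_b)))
assert result == [(x + y) & MASK for x, y in zip(vals_a, vals_b)]
```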
In the real-time systems sector, various task models and corresponding tests exist to model and verify the schedulability of task sets on the system at hand. While those models and schedulability tests have been studied intensively from a theoretical point of view, it is hard to make use of them to compare the actual execution behavior of scheduling algorithms on a real system. In contrast to schedulability tests, simulators can help to investigate the performance of specific scheduling algorithms. One of the most generalized task models to describe parallel tasks is the Directed Acyclic Graph model, which represents tasks as a series of subtasks that depict the potentially parallel computations, together with precedence constraints that denote the order in which the subtasks are allowed to execute.
In this paper, we investigate various scheduling algorithms for the Directed Acyclic Graph model. For that, we first recapitulate the examined scheduling algorithms in detail and point out relevant differences. Subsequently, we present the evaluation of different global and federated scheduling algorithms using fine-grained parallel tasks. To this end, we generate random Directed Acyclic Graph tasks and simulate their execution on multiprocessor systems using scheduling algorithms such as global rate-monotonic and semi-federated scheduling as well as global scheduling policies using the thread pool model.
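To make the task model concrete, the sketch below (using networkx for brevity) represents a DAG task and derives the two quantities most schedulability analyses build on: the total work and the critical-path length. The node weights and edges are arbitrary examples.

```python
# DAG task: subtasks with WCETs plus precedence edges.
import networkx as nx

g = nx.DiGraph()
g.add_nodes_from([(0, {"wcet": 2}), (1, {"wcet": 4}),
                  (2, {"wcet": 3}), (3, {"wcet": 1})])
g.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])  # precedence constraints

work = sum(g.nodes[v]["wcet"] for v in g)  # C: total execution time

length = {}                                # L: critical-path length
for v in nx.topological_sort(g):
    preds = [length[u] for u in g.predecessors(v)]
    length[v] = g.nodes[v]["wcet"] + max(preds, default=0)
print(work, max(length.values()))          # C = 10, L = 7
```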
The discipline of causal inference uses so-called causal graphs to model cause and effect relations of random variables. As those graphs only encode a relation structure, there is no hard rule concerning their alignment. The present paper presents a study with the aim of working out the optimal alignment of causal graphs with respect to comprehensibility and interestingness. In addition, the study examines whether the central gestalt principles of psychology apply to causal graphs. Data from 29 participants is acquired by triangulating eye tracking with a questionnaire. The results of the study suggest that causal graphs should be aligned downwards. Moreover, the gestalt principles proximity, similarity and closure are shown to hold true for causal graphs.
The paper presents a penetration testing framework for automotive IT security education and evaluates its realization. The automotive sector is changing due to automated driving functions, connected vehicles, and electric vehicles. This development also creates new and more critical vulnerabilities. This paper addresses a possible countermeasure, automotive IT security education. Some existing solutions are evaluated and compared with the created Automotive Penetration Testing Education Platform (APTEP) framework. In addition, the APTEP architecture is described. It consists of three layers representing different attack points of a vehicle. The realization of the APTEP is a hardware case and a virtual platform referred to as the Automotive Network Security Case (ANSKo). The hardware case contains emulated control units and different communication protocols. The virtual platform uses Docker containers to provide a similar experience over the internet. Both offer two kinds of challenges.
The first introduces users to a specific interface, while the second combines multiple interfaces, to a complex and realistic challenge. This concept is based on modern didactic theories, such as constructivism and problem-based/challenge-based learning.
Computer Science students from the Ostbayerische Technische Hochschule (OTH) Regensburg experienced the challenges as part of an elective subject. In an online survey evaluated in this paper, they gave positive feedback. The evaluation also includes a mapping of the ANSKo to the maturity levels of the Software Assurance Maturity Model (SAMM) practices Education & Guidance and Security Testing. The scientific contribution of this paper is to present the APTEP, a corresponding learning concept, and an evaluation method.
The modular addition is a popular building block when designing lightweight ciphers. While algorithms mainly based on the addition can reach very high performance, masking their implementations results in a huge penalty. Since efficient protection against side-channel attacks is a requirement in lots of use cases, we focus on optimizing the Boolean masking of the modular addition. Contrary to recent related work, we target evolving a masked full adder instead of parts of a parallel prefix adder. We study how techniques typically found in neural network evolution and genetic algorithms can be adapted in order to help in evolving an efficiently masked adder. We customize a well-known neuroevolution algorithm, develop an optimized masked adder with our new approach and implement the ChaCha20 cipher on an ARM Cortex-M3 controller. We compare the performance of the protected neuroevolved implementation to solutions found by traditional search methods. Moreover, the leakage of our new solution is validated by a t-test conducted with a leakage simulator. We present under which circumstances our masked implementation outperforms related work and prove the feasibility of successfully using neuroevolution when searching for complex Boolean networks.
Development and verification of modern, dependable automotive systems require appropriate modelling approaches. Classic automotive safety is described by the normative regulations ISO 26262, its relative ISO/PAS 21448, and their respective methodologies. In recent publications, an emerging demand to combine environmental influences, machine learning, or reasoning under uncertainty with standard-compliant analysis techniques can be noticed. Therefore, adapting established methods like FTA and proper tool support is necessary. We argue that Bayesian Networks (BNs) can be used as a central component to address and merge these demands. In this paper, we present our Open-Source Python package BayesianSafety. First, we review how BNs relate to data-driven methods, model-to-model transformations, and causal reasoning. Together with FTA and ETA, these models form the core functionality of our software. After describing currently implemented features and possibilities of combining individual modelling approaches, we provide an informal view of the tool’s architecture and of the resulting software ecosystem. By comparing selected publicly available safety and reliability analysis libraries, we outline that many relevant methodologies yield specialized implementations. Finally, we show that there is a demand for a flexible, unifying analysis tool that allows researching system safety by using multi-model and multi-domain approaches.
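To illustrate the core idea the package builds on, rather than reproducing the BayesianSafety API itself, the following pgmpy sketch expresses an OR gate of a fault tree as a deterministic CPD inside a Bayesian network:

```python
# Fault-tree OR gate as a Bayesian network node (states: 0 = ok, 1 = failed).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("BasicEvent1", "TopEvent"), ("BasicEvent2", "TopEvent")])
cpd_b1 = TabularCPD("BasicEvent1", 2, [[0.99], [0.01]])  # P(failure) = 0.01
cpd_b2 = TabularCPD("BasicEvent2", 2, [[0.95], [0.05]])
cpd_top = TabularCPD(                      # OR gate: top fails iff any input fails
    "TopEvent", 2,
    [[1, 0, 0, 0],   # P(top=ok | b1, b2)
     [0, 1, 1, 1]],  # P(top=failed | b1, b2)
    evidence=["BasicEvent1", "BasicEvent2"], evidence_card=[2, 2])
model.add_cpds(cpd_b1, cpd_b2, cpd_top)

print(VariableElimination(model).query(["TopEvent"]))  # ~0.0595 failure probability
```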
Engineering is based on the understanding of causes and effects. Thus, causality should also guide the safety assessment of complex systems such as autonomous driving cars. To ensure the safety of the intended functionality of these systems, normative regulations like ISO 21448 recommend scenario-based testing. An important task here is to identify critical scenarios, so-called edge and corner cases. Data-driven approaches to this task (e.g. based on machine learning) cannot adequately address a constantly changing operational design domain. Model-based approaches offer a remedy – they allow including different sources of knowledge (e.g. data, human experts) into safety considerations. With this paper, we outline a novel approach for ensuring automotive system safety. We propose to use structural causal models as a probabilistic modelling language to combine knowledge about an open-context environment from different sources. Based on these models, we investigate parameter configurations that are candidates for critical scenarios. In this paper, we first discuss some aspects of scenario-based testing. We then provide an informal introduction to causal models and relate their development lifecycle to the established V-model. Finally, we outline a generic workflow for using causal models to identify critical scenarios and highlight some challenges that arise in the process.
In this work, we present our benchmarking results for the ten finalist ciphers of the Lightweight Cryptography (LWC) project initiated by the National Institute of Standards and Technology (NIST). We evaluate the speed and code size of various software implementations on five different platforms featuring four different architectures. Moreover, we benchmark the dynamic memory utilization of the remaining NIST LWC algorithms on one 32-bit ARM controller. We describe our test cases and methodology and provide some information regarding the design and properties of the finalists before showing and discussing our results. Altogether, we evaluated almost 300 implementations of the 3rd round candidates and picked the most appropriate and best (primary) implementation of each cipher for our comparisons. We include a variant of AES-GCM in our benchmarking in order to be able to compare the state-of-the-art to the novel LWC ciphers. Our research gives an overview of the performance of the latest software implementations of the NIST LWC finalists and shows under which circumstances which candidate performs best in our individual test cases. Additionally, we make all benchmarking results, the code for our test framework and every tested implementation available to the public to ensure a transparent testing process.
With autonomous driving, the system complexity of vehicles will increase drastically. This requires new approaches to ensure system safety. Looking at standards like ISO 26262 or ISO/PAS 21448 and their suggested methodologies, an increasing trend in the recent literature can be noticed to incorporate uncertainty. Often this is done by using Bayesian Networks as a framework to enable probabilistic reasoning. These models can also be used to represent causal relationships. Many publications claim to model cause-effect relations, yet rarely give a formal introduction of the implications and resulting possibilities such an approach may have. This paper aims to link the domains of causal reasoning and automotive system safety by investigating relations between causal models and approaches like FMEA, FTA, or GSN. First, the famous “Ladder of Causation” and its implications on causality are reviewed. Next, we give an informal overview of common hazard and reliability analysis techniques and associate them with probabilistic models. Finally, we analyse a mixed-model methodology called Hybrid Causal Logic, extend its idea, and build the concept of a causal shell model of automotive system safety.
Diagnostic protocols in automotive systems can offer a huge attack surface with devastating impacts if vulnerabilities are present. This paper shows the application of active automata learning techniques for reverse engineering system state machines of automotive systems. The developed black-box testing strategy is based on diagnostic protocol communication. Through this approach, it is possible to automatically investigate a highly increased attack surface. Based on a new metric introduced in this paper, we are able to rate the possible attack surface of an entire vehicle or a single Electronic Control Unit (ECU). The novel attack surface metric allows comparisons of different ECUs from different Original Equipment Manufacturers (OEMs), even between different diagnostic protocols. Additionally, we demonstrate the analysis capabilities of our graph-based model to evaluate an ECU's possible attack surface over its lifetime.
In the last few years, the collaboration of services between service-oriented, cross-enterprise vehicle application frameworks has gradually increased to generate novel and more complicated vehicle services for the automotive industry. In these service collaboration scenarios, where heterogeneous application frameworks participate to realize complex vehicle services, a source of discord is that service providers must always check, before service deployment, whether the clients or service consumers on the other side of the communication link are compatible with a given service's API (Application Program Interface). While using standardized templates like ontologies for an API's semantic specification is crucial for service discovery and semantic interoperability, accessing these service APIs' semantic data through a standardized and understandable syntactical specification template is equally substantial to ease service interoperability. Such complex service collaboration scenarios motivate this research work, which proposes a design approach towards a standardized, domain-specific, platform-agnostic semantic and syntactic specification of vehicle service API models. This paper also uses a typical vehicle domain case study to illustrate the design approach and a reference mapping between the platform-agnostic semantic specification of a vehicle service API ontological model and its corresponding language-neutral, syntactic representation using the OpenAPI standard.
Modern high-end embedded systems nowadays have to process enormous amounts of data. In order to speed up the computations and fully exploit the resources of the underlying hardware architectures, software developers can make use of parallelism frameworks such as Intel Threading Building Blocks or compiler extensions such as OpenMP. They ease the development of parallel applications by providing interfaces for common parallel design patterns and by internally distributing the work among the workers of a thread pool. However, such frameworks and compiler extensions do not yet support the stringent timing requirements of real-time systems, and therefore an adaptation of their computation model to the sector of real-time systems needs to be conducted.
In this paper, we address the problem of scheduling parallel real-time directed acyclic graph tasks on multiprocessor architectures where the subtasks are dispatched among and executed by the workers of a thread pool. In contrast to existing work in the state-of-the-art, we limit the maximum parallelism of real-time tasks not by the number of processors in the system, but by the number of worker threads used in the thread pool of each real-time application. For this model, we derive a worst-case response time analysis for task sets scheduled by a preemptive global fixed-priority scheduler. In order to evaluate the performance of our response time analysis, we further perform schedulability tests on generated task sets and compare the results to existing feasibility analyses in the current state-of-the-art.
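Random task-set generation for such schedulability experiments is commonly done with the UUniFast procedure of Bini and Buttazzo; a sketch is given below, with the task count and total utilization chosen arbitrarily.

```python
# UUniFast: draw n task utilizations summing to total_u, uniformly distributed.
import random

def uunifast(n: int, total_u: float) -> list:
    utils, remaining = [], total_u
    for i in range(n - 1, 0, -1):
        next_remaining = remaining * random.random() ** (1 / i)
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

taskset = uunifast(8, 3.0)  # e.g. 8 tasks targeting a 4-core system
print(taskset, sum(taskset))
```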
The semantic alignment of application software components’ ontologies is of great interest in vehicle application domains that manipulate heterogeneous overlapping knowledge application frameworks. In the past few years, with the growth in novel vehicle service requirements such as autonomous driving, V2X (Vehicle-to-Everything) communication and many others, automotive application software component models are becoming increasingly collaborative with other qualified cross-enterprise industrial partners to accomplish these complex service requirements. The most daunting impediment to this cross-enterprise collaboration is semantic interoperability. For efficient service collaboration through cross-enterprise semantic interoperability between the vehicle application frameworks’ software components, aligning the interface ontologies of these components by identifying the depth of semantic alignment relationships between the concepts of the interface ontologies is the major focus of this paper. In contrast to several existing ontology structural metrics, this work defines, evaluates and validates ontology metrics to measure the depth of semantic alignment between the vehicle domain software component frameworks’ interface ontological models. To emphasize the substantial role of semantic alignment of software component frameworks’ interface ontologies in semantic interoperability, a typical vehicle domain case study involving vehicle applications is considered for demonstration.
In recent years, our society has faced a massive interconnection of computer-based everyday objects, which opens these items up for cyber-attacks. Depending on the physical capabilities, successful attacks can vary from data exposure or a loss of functionality to a threat to life and limb. Connected and autonomous vehicles are extremely safety-critical systems with a huge damage potential. This global trend, together with existing and upcoming regulations (ISO 270xx, ISO 21434, UNECE WP.29, UNECE R155), and the lack of qualified professionals create a tension field for the entire automotive industry.
Hence, new education concepts for engineers of safety-critical and connected systems are necessary to secure our daily and future systems against cyber-attacks and raise awareness and knowledge of the topic of IT-Security. Existing automotive security education systems have one common problem: All systems are hardware-based and therefore have very steep learning curves for beginners. Hardware-based systems, in general, are expensive in their initial costs, require regular maintenance, and add diverse operational difficulties independent of the aspired education goal. Additionally, the global pandemic increased the necessity of virtual education concepts for security training in cyber-physical systems.
Therefore, we present a novel concept for the education of cyber-security professionals for automotive systems based on discovery and problem-based learning in a virtual learning environment (VLE). Our concept contains individual exercises focusing on the topics of vulnerabilities and attacks in automotive networks and systems. Each exercise relates those topics to the corresponding security goals and countermeasures for mitigation. The learners work collaboratively in a self-contained manner within the VLE to acquire the necessary information to answer questions or find a solution to the given problem. To consider the heterogeneous background (e.g. knowledge, experience, preconceptions) of the learners, the topics can be presented in different difficulties, enabling an adaptable learning environment and different learning trajectories within the exercises.
The concept is based on a VLE, consisting of automotive networks and components, which simulate the behavior of a vehicle. This environment provides a hands-on, "real-life" scenario, which allows discovery and problem-based learning in a realistic, but cheap and scalable education environment. Furthermore, virtualization removes common difficulties, always present in training on real hardware. This aims to decrease complexity, prevent learning obstacles related to hardware handling, and enables a location-independent learning environment.
The target group of our education concept is Bachelor and Master students of computer science, engineering (e.g. electrical engineering, mechatronics), or similar studies, and (experienced) engineers from the industry.
In summary, our publication contains two contributions. We present an adaptable virtual learning environment for automotive security education, combined with an educational concept based on discovery and problem-based learning techniques. The goals of our concept are the education of cyber-security professionals for safety-critical, connected automotive systems and the support of life-long learning reaching from academic education to training in the industry.
Car manufacturers define proprietary protocols to be used inside their vehicular networks, which are kept an industrial secret, therefore impeding independent researchers from extracting information from these networks. This article describes a statistical and a neural network approach that allows reverse engineering proprietary controller area network (CAN) protocols assuming they were designed using the database CAN (DBC) file format. The proposed algorithms are tested with CAN traces taken from a real car. We show that our approaches can correctly reverse engineer CAN messages in an automated manner.
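One building block of such a statistical approach can be sketched compactly: per-bit toggle rates across a recorded trace tend to reveal signal boundaries, because the low bits of a changing physical value flip far more often than its high bits. The trace format below is an illustrative assumption, not the paper's exact algorithm.

```python
# Per-bit flip-rate analysis of a CAN trace to locate signal boundaries.
from collections import defaultdict

def bit_flip_rates(trace):
    """trace: list of (can_id, payload bytes); returns id -> per-bit flip rate."""
    last, flips, counts = {}, defaultdict(lambda: [0] * 64), defaultdict(int)
    for can_id, data in trace:
        bits = int.from_bytes(data.ljust(8, b"\x00"), "big")
        if can_id in last:
            diff = bits ^ last[can_id]
            for pos in range(64):  # MSB-first bit positions
                flips[can_id][pos] += (diff >> (63 - pos)) & 1
            counts[can_id] += 1
        last[can_id] = bits
    return {cid: [f / counts[cid] for f in flips[cid]] for cid in counts}

# Toy trace: byte 0 is a fast-changing counter, byte 1 is constant.
trace = [(0x2A1, bytes([i % 256, 0x10, 0, 0, 0, 0, 0, 0])) for i in range(100)]
print(bit_flip_rates(trace)[0x2A1][:8])  # high rates mark the counter byte
```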
Due to the ever-increasing interconnection of power grids, the communication between the control center of the energy supplier and the infrastructure components within a substation is becoming more and more important. Both control commands and data for monitoring functions are transmitted. In current network architectures, this communication takes place without cryptographic protection, which constitutes a point of attack for targeted attacks and thus a potential threat to the energy supply. To counter such attacks in the future, the ES³M security module is being developed. It is to be deployed in the network between the two communication partners in order to secure the data traffic. With the help of a threat analysis, requirements were derived that cover not only cryptographic measures but also topics such as functional safety and longevity. To fulfill these requirements, a special system architecture based on a division of tasks was designed. This architecture and the corresponding design decisions are presented.
Modern compute architectures often consist of multiple CPU cores to achieve their performance, as physical properties put a limit on the execution speed of a single processor. This trend is also visible in the embedded and real-time domain, where programmers are forced to parallelize their software to meet deadlines. Additionally, embedded systems rely increasingly on modular applications that can easily be adapted to different system loads and hardware configurations.
To parallelize applications under these dynamic conditions, dispatching frameworks such as Threading Building Blocks (TBB) are often used in the desktop and server segment. More recently, Embedded Multicore Building Blocks (EMB2) was developed as a task-based programming solution designed with the constraints of embedded systems in mind.
In this paper, we discuss how task-based programming fits such systems by analyzing scheduler implementation variants, with a focus on classic work-stealing and the libraries TBB and EMB2. Building on the state of the art, we introduce a novel resource-trading concept that allows static memory allocation in a work-stealing runtime while holding strict space and time bounds. We conduct benchmarks between an early prototype of the concept, TBB, and EMB2, showing that resource-trading does not introduce additional runtime overheads, while unfortunately also not improving on execution time variances.
The number of IoT devices in SCADA and ICS systems is rising quickly, especially in the domain of critical infrastructures. These systems perform mission-critical tasks such as controlling devices in industrial facilities or substations in the smart grid, and are therefore subject to numerous regulatory standards. To provide remote access over the internet, special architectures are developed to integrate a network interface into these devices without interfering with their actual functionality. However, these architectures either lack security measures against cyber-attacks or do not offer the necessary performance for time-critical communication interfaces. To solve this, an architecture consisting of three units is introduced in this paper to provide a network interface with extensive security measures and high performance. Its main feature is the isolation of the cryptographic functionality onto an additional MCU. After proposing the basic concept, the paper presents many implementation details. Based on the current state of implementation, a concept validation of the realized architecture is described.
At the beginning of every security analysis or penetration test of a system, information about the target has to be gathered. On IT systems, a port scan is usually performed as a first step of an investigation. Since the communication protocols in automotive systems differ, generic port scanning tools cannot be used for a security analysis of CANs.
More complex protocols have a higher likelihood of implementation errors and bugs. On CAN networks, such payloads are transferred through International Standard Transport Protocol (ISO-TP) communication. We designed a new methodology to identify ISO-TP endpoints in automotive networks. Each of these endpoints can expose exploitable application layer protocols and therefore has to be considered during penetration testing and security analysis.
We contribute a new scan approach for the automated evaluation of possible attack surfaces in automotive CAN networks which achieves higher coverage and offers multiple advantages over state-of-the-art approaches.
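The core of such a scan can be pictured with Linux SocketCAN: send an ISO-TP First Frame (PCI type 0x1) to a candidate arbitration id and check whether any node answers with a Flow Control frame (PCI type 0x3), which reveals a listening ISO-TP endpoint. This is a hedged sketch of the general technique, not the paper's tool; timeouts, the id sweep, and error handling are omitted, and all names are illustrative.

    // Probe one candidate id for an ISO-TP endpoint via Linux SocketCAN.
    #include <cstring>
    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int openCanSocket(const char* ifname) {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        ifreq ifr{};
        std::strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ioctl(s, SIOCGIFINDEX, &ifr);            // resolve interface index
        sockaddr_can addr{};
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
        return s;
    }

    bool probeIsoTpEndpoint(int sock, canid_t txId) {
        can_frame ff{};
        ff.can_id  = txId;
        ff.can_dlc = 8;
        ff.data[0] = 0x10;   // PCI: First Frame, length high nibble = 0
        ff.data[1] = 0x20;   // announced payload length: 32 bytes
        write(sock, &ff, sizeof ff);

        can_frame rx{};      // a real scanner polls with a timeout here
        if (read(sock, &rx, sizeof rx) != sizeof rx) return false;
        return (rx.data[0] & 0xF0) == 0x30;   // Flow Control => endpoint found
    }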
In the age of the Internet of Things, it is becoming increasingly important to integrate knowledge about the development of secure systems (Secure Software Engineering) into academic teaching. However, IT Security and Secure Software Engineering are rarely taught to non-computer scientists. Therefore, we focus our research on the integration of IT Security into the software engineering education of non-computer scientists, particularly electrical engineers, by means of inductive teaching and learning arrangements. After collecting students' preconceptions of IT Security and Secure Software Engineering in prior work, this paper contributes a first mapping of these preconceptions to corresponding learning content as well as suitable inductive teaching methods, enabling the creation of new lecture and exercise units and improving academic learning and teaching in both areas.
Code reviews are an essential part of quality assurance in modern software projects. Despite their great importance, they are still carried out in a way that relies on human skills and decisions. During the last decade, there have been several publications on code reviews using eye tracking as a method, but only a few studies have focused on the performance differences between experts and novices. To get a deeper understanding of these differences, the following experiment was developed: this study surveys expertise-related differences in experts', advanced programmers', and novices' eye movements during the review of eight short C++ code examples, including correct and erroneous code. A sample of 35 participants (21 novices, 14 advanced and expert programmers) was recruited. A Tobii Spectrum 600 was used for the data collection. Measures included participants' eye movements during the code review, demographic background data, and cued retrospective verbal comments on replays of their own eye movement recordings. Preliminary results provide evidence of experience-related differences between participants. Advanced and expert programmers performed significantly better in error detection, and the eye tracking data implies a more efficient reviewing strategy.
Quantum computing is considered the “next big thing” when it comes to solving computational problems impossible to tackle using conventional computers. However, a major concern is that quantum computers could be used to crack current cryptographic schemes designed to withstand traditional cyberattacks. This threat also impacts future automated vehicles as they become embedded in a vehicle-to-everything (V2X) ecosystem. In this scenario, encrypted data is transmitted between a complex network of cloud-based data servers, vehicle-based data servers, and vehicle sensors and controllers. While the vehicle hardware ages, the software enabling V2X interactions will be updated multiple times. It is essential to make the V2X ecosystem quantum-safe through the use of “post-quantum cryptography” as well as other applicable quantum technologies. This SAE EDGE™ Research Report considers the following three areas to be unsettled questions in the V2X ecosystem: How soon will quantum computing pose a threat to connected and automated vehicle technologies? What steps and measures are needed to make a V2X ecosystem “quantum-safe?” What standardization is needed to ensure that quantum technologies do not pose an unacceptable risk from an automotive cybersecurity perspective?
The National Institute of Standards and Technology (NIST) started the standardization process for lightweight cryptography algorithms in 2018. By the end of the first round, 32 submissions were selected as 2nd round candidates. NIST allowed designers of 2nd round submissions to provide small updates on both their specifications and implementation packages. In this work, we introduce a benchmarking framework for evaluating the performance of NIST Lightweight Cryptography (LWC) candidates on embedded platforms. We show the features and application of the framework and explain its design rationale. Moreover, we provide information on how we aim to present up-to-date performance figures throughout the NIST LWC competition. In this paper, we present an excerpt of our software benchmarking results regarding speed and memory requirements of selected ciphers. All up-to-date results, including benchmarks of different test cases for multiple variants of each 2nd round algorithm on five different microcontrollers, are periodically published to a public website. While initially only the reference implementations were available, the ability to automatically test the performance of the candidate algorithms on multiple platforms becomes especially relevant as more optimized implementations are developed. Finally, we show how the framework can be extended in different directions: support for more target platforms can easily be added, different kinds of algorithms can be tested, and other test metrics can be acquired. The focus of this paper should lie on the framework design and testing methodology rather than on the current results, especially for reference code.
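The speed-measurement core of such a framework can be pictured as a small harness like the one below. The function signature follows the C API used by the NIST LWC submissions for AEAD encryption (crypto_aead_encrypt); the harness itself is an illustrative assumption, and on a microcontroller a hardware cycle counter would replace std::chrono.

    // Time one AEAD variant over repeated encryptions of a fixed message.
    #include <chrono>
    #include <cstddef>
    #include <vector>

    using AeadEncrypt = int (*)(unsigned char* c, unsigned long long* clen,
                                const unsigned char* m, unsigned long long mlen,
                                const unsigned char* ad, unsigned long long adlen,
                                const unsigned char* nsec, const unsigned char* npub,
                                const unsigned char* k);

    // Returns the mean time per encryption in microseconds.
    double benchmarkAead(AeadEncrypt enc, std::size_t mlen, int iterations) {
        std::vector<unsigned char> m(mlen), c(mlen + 32), ad(16), npub(16), k(32);
        unsigned long long clen = 0;
        const auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i)
            enc(c.data(), &clen, m.data(), mlen, ad.data(), ad.size(),
                nullptr, npub.data(), k.data());
        const auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::micro>(t1 - t0).count()
               / iterations;
    }

Code size and RAM figures, as reported in the paper, would come from the linker map and stack watermarking rather than from such a timing loop.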
Experts predict that more IT security specialists will be needed in the coming years, but current higher education in engineering disciplines hardly addresses this topic. Newer learning methods such as game-based learning (GBL) enjoy increasing popularity, as a variety of studies have demonstrated their benefits for teaching subject-specific topics. We chose Educational Escape Rooms (EduER) as a GBL tool to impart IT security in the higher education of engineers. In our Escape Room (ER), the students try to solve puzzles and riddles with learned knowledge, with an emphasis on cryptography. This paper first gives a brief introduction to GBL and EduERs, followed by the design of our ER concept, containing different tasks focused on the topic of cryptography. The tasks cover different cryptographic methods and hash algorithms, e.g. AES, RSA, SHA3. Afterwards, the experimental study is presented. The study of the EduER was carried out with students from the bachelor's program in Electrical Engineering and Information Technology at OTH Regensburg. The participants were divided into three groups of 5 to 8 persons each. They received a briefing with important information, followed by the ER execution, a debriefing, and an exam-like evaluation sheet to test their learned knowledge. Finally, first basic results are presented.
Software process models are important in software projects in order to give the work of a project guidelines or a framework. However, teaching process models in higher education seems to be quite challenging. This is due to the fact that undergraduates have no experience with projects in which process models are used, so the theoretical presentation of process models initially remains at a very abstract level. For this reason, we chose to combine two didactic approaches, namely problem-based learning and project work.
In Software Engineering courses, various traditional plan-driven process models have been supplemented with agile process models. The Scrum Framework is the focus of consideration of this paper.
Three Universities of Applied Sciences which cooperate in the EVELIN project focused on Scrum as a process model and integrated it into their teaching. Since the respective implementation concepts differ, they are presented and compared in this article to illustrate several practical approaches.
The goal of this presentation is a uniform evaluation in order to obtain insights from different perspectives. From this comparison, conclusions can be drawn for possible improvements of the respective concepts.
With the rising complexity and processing power of modern computer systems, the number of MCUs on a single PCB is also rising. These microcontrollers often need to communicate with each other to exchange payload and control information in a bidirectional manner. Today's well-established communication protocols for MCUs either do not meet modern transmission speed requirements or impose a master-slave hierarchy, which does not allow the communication partners to have equal bus access rights. Therefore, this paper introduces an extension of the Serial Peripheral Interface (SPI) that allows equally distributed access rights on the communication interface between two microcontrollers. At the same time, it meets the modern transmission speed requirements of a common network interface, so that message transmission does not constitute a bottleneck in data processing. Besides the protocol design, we also provide a first prototype implementation, which constitutes a proof of concept.
UML (Unified Modeling Language) is the current de facto as well as de jure standard (ISO/IEC 19505:2012) notation to visualize models in software development. UML provides essential guidelines and rules to visualize and understand complex software systems. This is the reason why it has become part of curricula for software engineering courses at many universities worldwide. It is well known, however, that UML is hard to grasp for novices, mainly due to its complexity. In order to tackle the problem of teaching UML to novice students appropriately, it is essential to understand their needs and problems much better than we do now. This paper presents empirical insights into students' problems when developing common UML diagrams. Identified problems are generalized, giving rise to a problem catalogue that is derived from our empirical findings, thus establishing a basis for addressing these problems through focused learning arrangements.
This work-in-progress study examines through which activities programmers perform deliberate practice to improve their own coding and programming skills. For this purpose, a qualitative questionnaire was developed and administered to a sample of 22 participants. The results indicate that programmers engage in formal and informal forms of training and learning. Typically, classical programming training in the context of a university course or for work-related reasons is the first step in the acquisition of expertise. Building on these basic skills, non-formal and informal learning activities are carried out by the learners. Especially the social interaction and collaborative work with other programmers is of great importance in this context. The activities mentioned by the participants fulfil the characteristics of deliberate practice and will be examined more closely in a further study.
Tutorial on Software Engineering Education in Co-Located Multi-User Eye-Tracking-Environments (2020)
We briefly describe a tutorial on the application of Eye-Tracking technology for Software Engineering Education. We will showcase our setup of a large-scale Eye-Tracking-Classroom and its usage for real-time improvement of traditional learning scenarios in Software Engineering Education. We will focus on the integration of gaze data into modern integrated development environments (IDEs) and demonstrate a complete workflow for its usage in co-located multi-user Eye-Tracking-Environments.
Over the past few years, ontology merging and ontology semantic alignment have gained significant interest as research topics in the automotive application domain for finding solutions to semantic data heterogeneity. To accomplish complex and novel vehicle service requirements such as autonomous driving, V2X (Vehicle-to-Everything) communication, etc., automotive applications involve the collaboration of platform-specific data from heterogeneous enterprise component frameworks, and consequently there has been an increase in data interoperability issues. At the application component level, data interoperability relies on the semantic alignment or mapping between the data models of the various component framework interfaces, represented as XML schemas (XSD). Although XML schemas are the preferred standard for exchanging interface descriptions between most components in the automotive application domain, the data interoperability between semantically equivalent but structurally different data constructs of multiple heterogeneous XSDs remains a challenge in the absence of an ontology-based approach. To confront this crucial requirement for data interoperability, and to increase the reuse of existing components through their interfaces, we propose an approach to semantically map the various component framework interface data models when expressed as ontology schemas, based on the exploration of semantic synergies. The transformation between XSD and RDF (Resource Description Framework) schema representations and the use of queries over the ontology schemas for semantic mapping are demonstrated, including a real-world case study.
Confidence in the results of Artificial Neural Networks (ANNs) is increased by rejecting data that is not trustworthy instead of risking a misclassification. For this purpose, a model is proposed that is able to recognize, during inference, data which differs significantly from the training data. The proposed model observes all activations of the hidden layers, as well as the input and output layers of an ANN, in a grey-box view. To make ANNs more robust in safety-critical applications, this model can be used to reject flawed data that is suspected to decrease the accuracy of the model. If this information is logged during inference, it can be used to improve the model by training it specifically with the missing information. An experiment on the MNIST dataset is conducted and its results are discussed.
The enormous amounts of data modern real-time systems have to process lead to expensive, long-lasting calculations. In order to manage those computations in a timely manner, parallel task models have gained a lot of popularity lately. However, parallel programming can be very cumbersome and verbose. Other computationally intensive sectors have dealt with parallel computing for decades and have accumulated their experience in the development of parallel frameworks. Examples of well-known parallel runtime systems are OpenMP, Intel Threading Building Blocks (TBB) and Microsoft Parallel Pattern Library (PPL). These runtime systems allow developers to enhance parallelism in their applications in a straightforward fashion. However, those parallel frameworks and the pattern-based interfaces they provide might not be easily applicable in real-time systems. In this paper, we investigate the use of parallel programming frameworks in time-critical systems. On that account, we discuss considerations for the design of real-time applications that make use of such parallel runtime systems. Furthermore, we evaluate three library-based frameworks from different computing sectors, namely Intel Threading Building Blocks, Embedded Multicore Building Blocks (EMBB) and High Performance ParalleX (HPX), by conducting benchmarks of various parallel algorithms on an embedded multicore architecture.
The number of safety-critical embedded systems in automotive development is growing heavily. Ensuring their reliability not only increases the complexity of functions but also requires determinism at design and execution time, which is considerably challenging to fulfill and verify for multi-core processors. The Logical Execution Time (LET) paradigm has recently been recognized in the automotive industry as an approach for ensuring deterministic functional behavior. However, to decrease the manual design effort and time for deploying such complex systems to multi-core platforms, and for ensuring their strict timing and safety requirements, automatic solutions are needed. This work presents a solution for allocating tasks to multi-core processors and generating a time-triggered schedule for embedded systems considering safety, timing, and LET semantics. The approach we propose solves both challenges by defining them as a Constraint Satisfaction Problem (CSP). To examine our CSP formulation, we use MiniZinc, a solver-independent constraint modeling language that can employ a variety of solvers. In a case study, we explore optimizations of an industrial system that are enabled by scheduling and task allocation design decisions. Further, the performance of the proposed solutions is evaluated based on a large set of synthetically generated system models.
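The flavor of such a CSP formulation can be illustrated with a few core constraints (a hedged sketch in LaTeX notation, not the paper's exact model): with a_i the core assigned to task i, s_i its start time, C_i its worst-case execution time, and [R_i, D_i] its LET interval on m cores,

    a_i \in \{1, \dots, m\}                                 % one core per task
    R_i \le s_i, \quad s_i + C_i \le D_i                    % run inside the LET interval
    a_i = a_j \;\Rightarrow\; (s_i + C_i \le s_j) \lor (s_j + C_j \le s_i)
        \quad \text{for all } i \ne j                       % no overlap on a shared core

A solver such as MiniZinc then searches for values of the a_i and s_i satisfying all constraints, optionally optimizing an objective such as load balance or end-to-end latency.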
Although there has been much speculation about the potential of Augmented Reality (AR) for teaching and learning material, there is a significant lack of empirical proof of its effectiveness and implementation in higher education. We describe a software tool that integrates AR, using the Microsoft HoloLens, into UML (Unified Modeling Language) teaching. Its user interface is laid out to overcome problems of existing software. We discuss the design of the tool and report a first evaluation study. The study uses effectiveness as a metric for students' performance, together with components of motivation. It was designed as a control-group experiment with two groups: the experimental group had to solve tasks with the help of the AR modeling tool, and the control group used classic PC software. We identified tendencies that participants of the experimental group showed more motivation than the control group. Both groups performed equally well.
In fault-tolerant systems, applications are replicated and executed to enable error detection and recovery. If one replica application fails, another is able to take its place and provide the correct results. This concept can benefit from parallel execution on separate execution units. The rise of multicore platforms supports the development of parallel software by providing the adequate hardware. However, this raises challenges regarding the synchronization of the redundant strands of execution. Replica determinism means that, given the same input, identical programs provide the same output. To ensure replica determinism, requirements regarding synchronization can be split into two domains: data and time. This paper examines the state of the art of synchronization techniques for parallel replicated execution in the context of fault-tolerant systems. We analyze the requirements regarding synchronization within the time and data domains and compare different concepts of hardware (multicore, multiprocessor and multi-PCB) and software (processes, threads).
We investigated the potential of augmented reality (AR) to enable the visualization of abstract concepts and present the first iteration of a teaching experiment that evaluates the use of AR as support for abstraction skills. Students were confronted with the task of presenting and explaining information to different groups of stakeholders, using the example of a coffee machine. Results show that students find it helpful to have a visual app prototype, especially one that can be disassembled at different levels. The main goal was to sensitize students to the need to think about and abstract information for certain roles and perspectives.
The automotive industry is evolving into a complex network of services. The heterogeneous and distributed nature of automotive software systems demands flexible software components which can operate in different environments. Because of heterogeneous automotive development environments, domain experts must cope with many diversities, adaptation layers, and incompatibilities to design applications for the current generation of autonomous driving vehicles. In this context, interface adaptation is a promising approach to achieve flexibility without directly changing the respective components. AUTOSAR, the de-facto standard for describing automotive system architectures, is a hugely comprehensive standard allowing designers full control from abstract system description to bare-metal deployment. However, vehicle subsystems have evolved to include multifarious high-level domains not covered by AUTOSAR, e.g. infotainment, telematics, etc. Therefore, it seems beneficial to bridge the semantic gaps between AUTOSAR applications and other standards of automotive application domains. The goal of this paper is to investigate interface semantic mapping and achieve transparent integration of domain-specific applications using the translation of semantics between the AUTOSAR platform software component models and the software component models of open-source development platforms, e.g. GENIVI. A key goal of such a modelling approach is the reuse of existing interface description languages and the respective code generators. This will enhance future interoperability and decrease incompatibility between these platforms.
As the world is getting more connected, the demand for services in the automotive industry is increasing, with requirements such as IoT (Internet of Things) in cars, automated driving, etc. Eventually, the automotive industry has evolved into a complex network of services, where each organization depends on other organizations to satisfy its service requirements in different phases of the vehicle life cycle. Because of these heterogeneous and complex development environments, most vehicle component interface models need to be specified in various manifestations to satisfy the semantic and syntactic requirements specific to different application development environments or frameworks. This paper describes an approach to the semantic analysis of component interface description models of the heterogeneous frameworks that are used for vehicle applications. The proposed approach intends to ensure that interface description models of different service-based vehicle frameworks can be compared, correlated and re-used based on semantic synergies, across different vehicle platforms, development environments and organizations. The approach to semantic synergy exploration could further provide the knowledge base for an increase in interoperability and overall efficiency and for the development of automotive domain-specific general software solutions, by facilitating the coexistence of components of heterogeneous frameworks in the same high-performance ECU for future vehicle software.
To investigate the role of heuristics in the domain of software engineering, an eye tracking study was conducted in which experts and novices were compared. The study focused on one of the most challenging parts in this domain: the generation of an object model for a software product based on a requirements specification. During their training, software engineers are taught different techniques to solve this task. One of these techniques is the noun/verb analysis.
However, it is still unclear to what extent novice and expert programmers make use of it. Ideally, the noun/verb analysis works as a heuristic and helps programmers to make fast and accurate decisions. Participants in the study were 40 software programmers at four levels of expertise (novices, intermediates, experienced programmers, experts). They were presented with ten decision tasks. In each task, participants read a requirements specification and then had to choose one out of three presented class diagrams that they considered the best solution. During the task, their eye movements were recorded. Results show that all participants used the noun/verb analysis as a heuristic. Programmers with higher levels of expertise, however, outperformed programmers with lower levels of expertise. Interestingly, the more experienced programmers did not follow the noun/verb analysis blindly. They realised that the noun/verb analysis would produce diagrams that a skilled software architect would not model this way. Instead, they created their models in a way that they perceived as more logical and realistic.
New computing-intensive applications such as assisted or highly automated driving are rapidly expanding the domain of safety-critical embedded systems, driven by the vision of the driverless car. This development makes it necessary to use commercially available high-performance multi-core systems, which provide more parallelism in terms of redundant execution units, however at the cost of being less reliable. With the continuous down-scaling of semiconductor technology, computing hardware exhibits an increasing vulnerability against random hardware faults. Since these high-performance controllers provide little or no hardware redundancy to ensure a safe execution of the application, software-only fault tolerance approaches are under current investigation. Our Scalable Software Support for Dependable Embedded Systems (S3DES) approach achieves fault tolerance by utilizing software-based triple modular redundancy for computational processes and optimized arithmetic-encoded voter processes to ensure fault detection and error handling at the application level. In S3DES, voters are replicated to allow the compensation of voting failures. However, this extension introduces new challenges with regard to error propagation and multiple voting result outputs. We describe how mutual voter monitoring and threshold value checks can be used to establish a hierarchy among the replicated voters without re-introducing a reliability bottleneck in the sense of a single point of failure, thereby resolving the aforementioned challenges.
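The voting step at the heart of such a scheme can be illustrated with a plain 2-out-of-3 majority voter (an illustrative C++ sketch, not the S3DES implementation, which additionally arithmetic-encodes the voter itself):

    // 2-out-of-3 majority vote over the outputs of three replicas.
    #include <optional>

    template <typename T>
    std::optional<T> majorityVote(const T& r0, const T& r1, const T& r2) {
        if (r0 == r1 || r0 == r2) return r0;  // r0 agrees with a second replica
        if (r1 == r2) return r1;              // r0 is the odd one out
        return std::nullopt;                  // no majority: detected error
    }

In a replicated-voter arrangement, several instances of such a voter run on distinct cores and compare each other's outputs, so that a faulty voter can be outvoted just like a faulty computation replica.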
Among the available dependability assessment techniques, fault injection (FI) is widely adopted and strongly recommended by safety standards to validate that functional and technical safety mechanisms are implemented correctly and effectively. The main challenge in fault injection assessments is the increasing complexity of system-on-chips as well as the increasing size of memory, which leads to enormous efforts when testing every possible fault introduced to the system. Therefore, a number of publicly available fault injection frameworks utilize fault space pruning techniques to reduce the overall fault space and consequently the overall experiment duration. Most fault space pruning techniques mainly address reducing the number of data errors which have to be injected into registers and memory locations. However, control flow errors represent a further domain of possible errors at the application level. Usually, for evaluating the effectiveness of fault tolerance mechanisms against data errors, a single-fault assumption at the microarchitectural level (e.g. bit flips) is made. In most cases, this assumption is applied equivalently to the program counter to investigate possible control flow errors. As a result, the error space is consciously or unconsciously reduced to the erroneous jump targets that can be reached by a specific set of bit flips in the program counter at a specified time during program execution. This approach is valid with respect to the corresponding fault assumption, but it negatively affects the significance of the injection and the resulting effectiveness of the tested fault tolerance mechanism. In this paper, we discuss different strategies for the analysis and injection of control flow errors and the resulting differences when considering the single-fault assumption at the microarchitectural and the application level.
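The microarchitectural error model discussed here can be made concrete: under a single bit-flip assumption in a 32-bit program counter, the erroneous jump targets reachable at a given instant are exactly the 32 addresses at Hamming distance 1 from the current PC value, as the illustrative C++ sketch below enumerates. An application-level control-flow error model instead considers arbitrary illegal transitions between basic blocks, a far larger space.

    // Enumerate all jump targets reachable by flipping one PC bit.
    #include <cstdint>
    #include <vector>

    std::vector<std::uint32_t> bitFlipJumpTargets(std::uint32_t pc) {
        std::vector<std::uint32_t> targets;
        targets.reserve(32);
        for (int bit = 0; bit < 32; ++bit)
            targets.push_back(pc ^ (1u << bit));  // flip exactly one bit
        return targets;
    }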
Sharing data across multiple tasks in multiprocessor systems has been studied intensively in the past decades. Various synchronization protocols, the most well-known being the Priority Inheritance Protocol and the Priority Ceiling Protocol, have been established and analyzed so that the blocking times of tasks waiting to access a shared resource can be upper-bounded. To the best of our knowledge, all of these protocols share one commonality: tasks that want to enter a critical section that is already being executed by another task immediately get blocked. In this paper, we introduce the Asynchronous Priority Ceiling Protocol (A-PCP), which makes use of aperiodic servers to execute the critical sections asynchronously, while the calling task can continue its work on non-critical-section code. For this protocol, we provide a worst-case response time analysis of the asynchronous computations, as well as necessary and sufficient conditions for a feasibility analysis of a set of periodic tasks using the proposed synchronization model on a system that preemptively schedules the tasks under rate-monotonic priority assignment.
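For orientation, such response-time analyses typically build on the classical fixed-priority recurrence, stated here in its generic form (the paper's specific contribution lies in how the aperiodic servers and the blocking term enter the analysis):

    R_i^{(k+1)} = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j

where C_i is the worst-case execution time of task i, B_i its blocking time, hp(i) the set of higher-priority tasks with periods T_j, and the iteration, started at R_i^{(0)} = C_i, converges to the worst-case response time whenever it stays below the deadline.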
Scalable Software Support for Dependable Embedded Systems (S3DES) achieves fault tolerance by utilizing spatial software-based triple modular redundancy for computational and voter processes on application level. Due to the parallel execution of the replicas on distinct CPU cores it makes a step towards software-based fault tolerance against transient and permanent random hardware errors. Additionally, the compliance with real-time requirements in terms of response time is enhanced compared to similar approaches. The replicated voters, the introduced mutual voter monitoring and the optimized arithmetic encoding allow the detection and compensation of voter failures without the utilization of backward recovery. Fault injection experiments on real hardware reveal that S3DES can detect and mask all injected data and program flow errors under a single fault assumption, whereas an uncoded voting scheme yields approx. 12% silent data corruptions in a similar experiment.
This paper introduces a custom framework for benchmarking software implementations from the National Institute of Standards and Technology (NIST) Lightweight Cryptography (LWC) project on embedded devices. We present the design and core functions of the framework and apply it to various NIST LWC authenticated encryption with associated data (AEAD) ciphers. Altogether, we evaluate the speed of 213 submitted algorithm variants on four different microcontroller units (MCUs), including 32-bit ARM and 8-bit AVR architectures. To allow a more meaningful comparison, we also conduct code size tests on all four boards and RAM utilization tests on one test platform.
The technology used in the power system is subject to great change. Through the use of smart devices, the systems in the power grid become interconnected with each other and with remote networks. Especially the remote access to the critical smart grid environment involves new challenges in the area of security. To ensure that the transmitted data and the access to the system are secured, cryptographic mechanisms have to be implemented. One critical part of this task is the management of cryptographic keys. This paper explains and defines a set of basic requirements for cryptographic key management in the smart grid. These requirements are derived from challenges present in common corporate environments. Then, basic approaches in the field of key management are stated and evaluated for their applicability in the power system. Different protocols for the implementation of key management strategies are shown and an assessment regarding their suitability for the defined requirements is conducted.
The vehicle is evolving into a complex network of heterogeneous subsystems of ECUs, sensors and actuators, each with different computational requirements. These subsystems are connected via bus systems following different communication paradigms, e.g. signal-based or service-oriented communication. This has led to heterogeneous syntaxes for describing interfaces even though the semantics of the interfaces are similar. The wide variety of Interface Description Languages (IDLs) in the automotive industry partly hinders efficient collaboration between the different suppliers and the OEMs. Given this wide variety of automotive IDLs, what could be more beneficial, from a software engineering point of view, is a generic automotive domain-specific IDL that can satisfy all the fundamental requirements of the heterogeneous subsystems. This paper describes an approach to compare and correlate IDLs based on semantic similarities of the languages, considering two aspects: the application description and the underlying message frameworks used in the different domains of the given automotive subsystems. With the exploration of semantic synergies between the IDLs, various domain-specific and domain-agnostic frameworks can be compared and correlated. The results can be generalized and abstracted to define a generic Meta IDL which could support use cases such as domain-agnostic functional models and the migration of software components between different kinds of automotive subsystems.
The most widespread autonomous systems of the future will, in all likelihood, be intelligent vehicles that navigate road traffic autonomously and interact with their environment. These new functions require the use of high-performance multi-core processors as well as complex (POSIX-compatible) operating systems. At the same time, automotive use demands a high level of functional safety (ASIL levels), which among other things presupposes robust real-time properties of the hardware and software employed. These real-time properties are confronted with increased complexity and new sources of non-deterministic latencies. In this paper we present an overview of these new influencing factors, and then measure container runtime environments and their latency behavior. We show that network bridges under load can exert a considerable influence (a factor of 4 to 5) on network latency.
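The wake-up-latency part of such measurements can be reproduced in spirit with a cyclictest-style loop: sleep until an absolute deadline and record by how much the wake-up overshoots it, once natively and once inside a container (for the network-bridge effect, the same idea applies to packet round-trip times instead). A minimal C++ sketch, not the paper's measurement setup:

    // Measure worst-case wake-up latency of a 1 ms periodic loop.
    #include <cstdint>
    #include <cstdio>
    #include <time.h>

    int main() {
        constexpr long periodNs = 1'000'000;   // 1 ms cycle
        timespec next{};
        clock_gettime(CLOCK_MONOTONIC, &next);
        std::int64_t worst = 0;
        for (int i = 0; i < 10'000; ++i) {
            next.tv_nsec += periodNs;
            while (next.tv_nsec >= 1'000'000'000) {
                next.tv_nsec -= 1'000'000'000;
                ++next.tv_sec;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
            timespec now{};
            clock_gettime(CLOCK_MONOTONIC, &now);
            const std::int64_t lateNs =
                (now.tv_sec - next.tv_sec) * 1'000'000'000LL
                + (now.tv_nsec - next.tv_nsec);
            if (lateNs > worst) worst = lateNs;
        }
        std::printf("worst-case wake-up latency: %lld ns\n",
                    static_cast<long long>(worst));
    }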
With the increasing demand for services in the automotive industry, automotive enterprises prefer to collaborate with qualified cross-domain partners to provide complex automotive functions (or services), such as autonomous driving, OTA (Over The Air) vehicle updates, V2X (Vehicle-to-Everything) communication, etc. One key element in cross-domain enterprise collaboration is the mutual agreement between the interfaces of software components. In this context, model-to-model mappings of the software component models of heterogeneous frameworks for automotive services, and the exploration of synergies in their interface semantics, have become essential factors in improving interoperability among automotive and other cross-domain enterprises. However, one of the challenges in achieving cross-domain component interface model-to-model mappings at the application level lies in detecting the interface semantics and the semantic relations that are conveyed in different component models in different frameworks. This paper addresses this challenge using a Model Driven Architecture (MDA) based analytical approach to explore interface semantic synergies in the cross-domain component meta-models that are used for automotive services. The approach applies manual semantic checking measurements at the application interface level to understand the meanings of and relations between the different meta-model entities of cross-domain framework software components. In this research, we attempt to ensure that interface description models of software components from heterogeneous frameworks can be compared, correlated and re-used for automotive services based on semantic synergies. We demonstrate our approach using component meta-models from cross-domain enterprises that are used in the automotive application domain.
Development trends for computing platforms moved from increasing the frequency of a single processor to increasing the parallelism with multiple cores on the same die. Multiple cores have strong potential to support cost-efficient fault tolerance due to their inherent spatial redundancy. This work makes a step towards software-only fault tolerance in the presence of permanent and transient hardware faults. Our approach utilizes software-based spatial triple modular redundancy and coded processing on a shared memory multi-core controller. We evaluate our approach on an Infineon AURIX TriBoard TC277 and provide experimental evidence for error resistance by fault injection campaigns with an iSystem iC5000 On-chip Analyzer.
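The coded-processing component can be illustrated with a minimal AN-code sketch: a value x is carried as A*x, arithmetic stays within the code, and a result not divisible by A exposes a corrupted computation. The constant and the C++ names below are illustrative choices inspired by the coded-processing literature, not the internals of the approach evaluated above.

    // Minimal AN-code: encode, checked decode, and code-preserving addition.
    #include <cstdint>
    #include <stdexcept>

    constexpr std::int64_t A = 58659;   // an often-cited "super-A" constant

    std::int64_t encode(std::int32_t x) {
        return static_cast<std::int64_t>(x) * A;
    }

    std::int32_t decodeChecked(std::int64_t xc) {
        if (xc % A != 0)                // code check: a fault hit this value
            throw std::runtime_error("AN-code check failed");
        return static_cast<std::int32_t>(xc / A);
    }

    // Addition is closed under the code: A*x + A*y == A*(x + y).
    std::int64_t addEncoded(std::int64_t xc, std::int64_t yc) {
        return xc + yc;
    }

For example, decodeChecked(addEncoded(encode(2), encode(3))) yields 5, while a random bit flip in the encoded sum is detected with high probability because the corrupted value is almost never a multiple of A.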
Writing is, without doubt, besides reading a core competency which allows us to “exploit” knowledge in general. It also makes possible the exploration of software engineering's core issues. Especially within this context it is necessary to master the reading of complex texts as well as to be able to write in an appropriate academic style. With regard to studies in software engineering this seems obvious, but in fact the opposite is the reality. Therefore, measures to improve these skills needed to be applied. At the Ostbayerische Technische Hochschule (OTH, University of Applied Sciences, Regensburg), a new format, the so-called c*lab, was installed during the winter semester 2017. This was a course which followed the principle of “Writing Across the Curriculum” (WAC). Organized parallel to a lecture on learning to program in the language C, and addressing students of the first semester, the course was a completely voluntary offering in addition to the general standard courses and lectures of the faculty. Students who participated not only reflected on C and its principles and on writing as an end in itself, but also learned to express technical thoughts and ideas through the use of didactic methods. It was also planned to convey basic LaTeX concepts for writing a paper based on the IEEE bare_conf.tex template. The course followed the idea of student-centred learning. This paper presents the main structure, goals, and means of the c*lab, and the theory behind it. It also embeds the course within the horizon of experiences of teaching writing skills at the Laboratory for Safe and Secure Systems (LaS3) at the Faculty of Electrical Engineering and Information Technology at OTH Regensburg. First experiences have shown that participants increased their writing skills and their awareness of the importance of writing.
With parallel applications becoming more and more popular even in real-time systems, the demand for safe and easy-to-use software libraries and frameworks for parallel and concurrent computations is growing immensely. These frameworks usually provide implementations of different sets of software patterns. A very well-known software pattern for concurrency is the Active Object pattern, which allows various threads to have synchronized access to an object in question. This paper presents the Parallel Active Object pattern, which extends the common Active Object pattern to support the use of objects whose computations are profoundly enhanced by a parallel execution. Furthermore, a C++ software framework is introduced which implements the Parallel Active Object pattern and thus provides the possibility of using task- or data-parallel patterns, for example Map, Reduce and Divide-and-Conquer, in the active object's calculations. The proposed framework is evaluated against two other popular libraries, namely OpenMP and Intel Threading Building Blocks. Through utilization of the C++11 standard and template classes, a simple user interface is provided which abstracts the distribution of workloads among the worker threads. By making use of the C++ Standard Template Library, the framework can easily be ported to embedded systems; extending the pattern with real-time capabilities, which ensure a timely and reliable execution of the method requests, in order to provide the framework for time-critical environments is targeted for the future.
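For readers unfamiliar with the base pattern, the following is a compact C++11 sketch of a conventional Active Object (illustrative, not the proposed framework): callers enqueue method requests and receive a std::future, while a single worker thread executes the requests, serializing access to the object's state. The Parallel Active Object extends exactly this scheme by letting each request fan out into task- or data-parallel patterns.

    // Conventional Active Object: one worker thread drains a request queue.
    #include <condition_variable>
    #include <functional>
    #include <future>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <thread>

    class ActiveObject {
        std::queue<std::function<void()>> requests_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
        std::thread worker_{[this] { run(); }};  // declared last: starts after
                                                 // the members above exist
        void run() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return done_ || !requests_.empty(); });
                    if (done_ && requests_.empty()) return;
                    job = std::move(requests_.front());
                    requests_.pop();
                }
                job();  // only the worker touches the object's state
            }
        }
    public:
        ~ActiveObject() {
            { std::lock_guard<std::mutex> g(m_); done_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        template <typename F>
        auto invoke(F f) -> std::future<decltype(f())> {
            auto task = std::make_shared<std::packaged_task<decltype(f())()>>(
                std::move(f));
            auto fut = task->get_future();
            { std::lock_guard<std::mutex> g(m_); requests_.push([task] { (*task)(); }); }
            cv_.notify_one();
            return fut;  // caller can continue and collect the result later
        }
    };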
Learning tasks play an important role in education, especially in higher education. However, there is a significant lack of research on them in higher education. A learning task has several characteristics, of which the didactic function is most often considered. This paper focuses on two characteristics of learning tasks: the didactic function and the type of knowledge. Existing types of learning tasks are presented, as well as a proposal for learning tasks in software engineering education that considers didactic functions, like elaboration, training or application, and types of knowledge, i.e. factual, conceptual, procedural and metacognitive knowledge. This paper aims to serve as guidance for lecturers who want to create learning tasks that address both characteristics: the didactic function and the type of knowledge.
This paper deals with the identification of learning obstacles using the questionnaire method. Two iterations were carried out. The first one was part of a survey conducted at four locations at universities of applied sciences. We asked students about obstructive facts in general, providing items for five learning obstacle dimensions that were set up beforehand: emotional/motivational, epistemological/cognitive, didactical, resource-related and metacognitive learning obstacle dimensions. After the general part, we asked them to answer the same question, but in relation to what they considered the most difficult learning content. With this question, we aim to get indications regarding epistemological obstacles. In a second step, we used the “Motivated Strategies for Learning Questionnaire”, developed by Pintrich [1], as a basis to develop a questionnaire that extracts learning obstacles. In its original version, the “Motivated Strategies for Learning Questionnaire” was intended to measure students' learning strategies, but, as the obstacle dimensions were partly derived from learning strategy classification, we chose this already validated questionnaire [2]. Within this iteration, we could confirm a five-factor structure of the questionnaire that could be mapped to the five previously set learning obstacle dimensions.
Today, due to the rapidly evolving technology within the automotive industry, the automation level of cars is continuously increasing. As a consequence, the software code base implementing the automated driving functionality is growing in both complexity and size. Simultaneously, the semiconductor industry continues structure and voltage downscaling due to diminishing design margins and stringent power constraints. This trend leads to highly integrated hardware on the one hand, whilst provoking an increased sensitivity to external causes of hardware faults, e.g., radiation effects or electromagnetic interference, on the other. Among the available dependability assessment techniques, fault injection (FI) is widely adopted, and ISO 26262 strongly recommends applying it to validate that functional and technical safety mechanisms are implemented correctly and effectively. We present PyFI (Python backend for Fault Injection), a fault injection backend for the Infineon AURIX TriCore which utilizes an iSystem On-chip Analyzer to inject faults into the application data or instructions that are visible at the assembly level. PyFI allows the injection of bit flips and stuck-at faults in memory and register cells of the hardware, which trigger error symptoms at the application level. Furthermore, it implements fault collapsing algorithms to reduce the number of faults and the duration of single experiments by gathering statistics about the static and dynamic application execution.
Mastering complexity is one of the greatest engineering challenges of the 21st century. Topics such as the “Internet of Things” (IoT) and “Industrie 4.0” are accelerating this trend. Model-driven development makes a decisive contribution to meeting these challenges successfully.
The authors give a well-founded introduction and a practice-oriented overview of modeling software for embedded systems, from requirements through architecture to design, code generation and testing. For each phase, paradigms, methods, techniques and tools are described, with their practical application placed in the foreground. In addition, the integration of tools, functional safety and metamodeling are covered, as well as the introduction of a model-based approach in an organization and the necessity of life-long learning.
In this book, the reader learns how a model-based approach can be employed profitably in practical software development. The approach is presented independently of specific modeling tools. Numerous examples, some based on concrete tools, help with the practical implementation.
This study is based on the work of Uwano, Nakamura, Monden and Matsumoto (2006), who tried to identify programmers' eye movements in source code reviews by using eye tracking technology. The researchers were able to identify certain eye movement patterns, but due to the technical limitations of earlier eye tracking systems and a small sample, they could not find valid proof of their existence. Twelve years later, eye tracking technology has improved significantly and is able to capture programmers' reading behavior in an unobtrusive and precise way. The goal now is to verify the described patterns using eye tracking data from expert and novice programmers. In the experiment, participants have to detect errors in six different code examples and take part in a retrospective interview. At the moment, data collection is ongoing. At the time of the conference, we will present the results of our analyses.
This paper aims to provide an overview of the interdisciplinary combination of educational science, psychology, software engineering and the eye tracking methodology. The domain of software engineering offers great potential for applied eye tracking research, and in turn it can benefit from the possibilities of this emerging technology as well. Nevertheless, software engineering has to struggle with some obstacles, namely differing terms, missing guidelines for experimental setups and a lack of common, standardized metrics. If eye tracking is to be used in a broader way, these problems must be solved. The main purpose of this paper is to list all eye tracking metrics which are relevant for software engineering and to give guidelines that help beginners avoid possible pitfalls.
This paper deals with the validation and modification of the German questionnaire “F-Komp”. In its original version, it was intended to measure university students' research competences. At the beginning of this study, only a few reliable tools were available, and for the purposes of this study they were not suitable. At the same time, there was no validated version of the F-Komp available, which made a validation for further usage necessary. This questionnaire is based on a structure which consists of different skills and knowledge and is focused on measuring research competence in general. The validation and modification of the F-Komp is therefore the aim of our contribution, as is a revised version of the questionnaire. We performed an exploratory factor analysis and a reliability analysis for a general evaluation of the tool. Some modifications were made to the questionnaire to make it more suitable to the requirements of technically oriented universities of applied sciences [5]. Our revised version is slightly longer and contains several items to gather data about the participants' demographics. The modified questionnaire is based on a more appropriate factor structure. This structure is more practically oriented and pays attention to ethical issues. In future cases, this questionnaire will be used in research-oriented courses to measure students' progress in acquiring the knowledge and methods which are necessary to perform as a scientist in different research areas.
With regard to current automotive safety norms, the use of artificial intelligence (AI) in safety-critical environments like autonomous driving is not possible. This paper introduces a new conceptual safety modelling approach and a safety argumentation to certify AI algorithms in a safety-related context. To this end, a model of an AI system is presented first. Afterwards, methods and safety argumentation are applied to the model, which is limited to a specific subset of AI systems, i.e. off-board learning deterministic neural networks in this case. Other cases are left for future research. The result is a consistent safety analysis approach that applies state-of-the-art safety argumentations from other domains to the automotive domain. This will support the adaptation of the functional safety norm ISO 26262 to enable general AI methods in safety-critical systems in the future.