Learning-centered teaching is becoming an important factor in a global perspective on learning software engineering. The Just-in-Time Teaching approach is used in a Chinese-German empirical case study. In a one-year project we will analyze the performance of our students in an active learning scenario with Just-in-Time Teaching and Peer Instruction. We will contribute an intercultural comparison of achieved competencies based on students' self-assessment and teachers' observation.
Professional software development is a complex task with many inputs and a complex output. To handle complex artifacts such as software or complex engineering projects, structured processes such as the V-model or iterative development processes exist. Similarly, the development of a software engineering lecture is a task with many inputs and a complex output. A structured and methodological approach to the development of a lecture is presented, which applies the same principles as used in the development of software.
Safety has the highest priority because it contributes to customer confidence and thereby ensures further growth of new markets like electromobility. Therefore, in series production, redundant hardware concepts like dual-core microcontrollers running in lock-step mode are used to reach, for example, the ASIL D safety requirements given by ISO 26262. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software, e.g. by specific coding of data and instructions. A system with two coded processing channels is considered. Both channels are active. When one channel fails, the service can be continued with the other channel. It is conceivable that the two channels with implemented coded processing run with time redundancy on a single core, or on a multi-core system where, for example, different ASIL levels are partitioned onto different cores. In this paper, a redundancy concept based on coded processing is taken into account. The improvement of the Mean Time To Failure achieved by safeguarding the system with coded processing is computed for fail-safe as well as for fail-operational systems. The use of the coded processing approach in safeguarding fail-safe systems is demonstrated.
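For orientation, a standard reliability result (textbook theory, not a figure quoted from this abstract): with two independent, active channels of constant failure rate λ, a fail-operational 1-out-of-2 system survives as long as at least one channel works, so

```latex
R_{1oo2}(t) = 1-\left(1-e^{-\lambda t}\right)^{2} = 2e^{-\lambda t}-e^{-2\lambda t},
\qquad
\mathrm{MTTF}_{1oo2} = \int_{0}^{\infty} R_{1oo2}(t)\,dt = \frac{3}{2\lambda}
\quad\text{vs.}\quad
\mathrm{MTTF}_{\mathrm{single}} = \frac{1}{\lambda}.
```

This idealized formula ignores detection coverage and common-cause failures, which an analysis like the one described would have to address.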
The safety of electric vehicles has the highest priority because it contributes to customer confidence and thereby ensures further growth of the electromobility market. Therefore, in series production, redundant hardware concepts like dual-core microcontrollers running in lock-step mode are used to reach the ASIL D safety requirements given by ISO 26262. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software, e.g. by specific coding of data and instructions. A system with two coded processing channels is considered. One channel is active and one is in cold standby. When the active channel fails, the service is switched from the active channel to the standby channel. It is conceivable that the two channels with implemented coded processing run with time redundancy on a single core, or on a multi-core system where, for example, different ASIL levels are partitioned onto different cores. In this paper, a redundancy concept based on coded processing and software rejuvenation is taken into account.
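Under the same textbook assumptions, an ideal cold standby (perfect failure detection and switching, no aging in standby) stacks two exponential lifetimes:

```latex
R_{\mathrm{cold}}(t) = e^{-\lambda t}\left(1+\lambda t\right),
\qquad
\mathrm{MTTF}_{\mathrm{cold}} = \frac{2}{\lambda}.
```

This is slightly better than the hot 1oo2 figure of 3/(2λ); imperfect switching, which an architecture like the one described must handle, reduces that advantage.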
Capability of single hardware channel for automotive safety applications according to ISO 26262
(2012)
Safety of embedded systems has the highest priority because it contributes to customer confidence and thereby ensures growth of new markets like electromobility. In series production, fail-safe systems as well as fault-tolerant systems are realized with redundant hardware concepts like dual-core microcontrollers running in lock-step mode to reach the highest safety requirements given by standards like ISO 26262 or IEC 61508. In contrast to the hardware redundancy approach, approaches based on information, time, and/or software redundancy have also been available for several years. One of them is known as coded processing or AN codes. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software. But the breakthrough of coded processing never took place. One reason for this seems to be the myths that are widely propagated on this subject and the associated uncertainties. In this paper some myths are busted, like the usage of prime numbers as transformation factor A, the myth that greater transformation factors are better, or the myth about the residual error probability defined as 1/A. Some of them have been propagated since 1989. The aim of this paper is to provide more clarity and understanding of this technique, perhaps to pave the way for further functional safety concepts based on coded processing approaches.
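For readers unfamiliar with AN codes, these are the textbook relations behind the quantities the abstract mentions (our summary, not the paper's results): a value x is represented as x_c = A·x, every valid code word is a multiple of the transformation factor A, and a corrupting error e goes undetected exactly when A divides e. Under the common, and per the paper questionable, assumption of uniformly distributed error words over n bits, this is the origin of the 1/A figure:

```latex
x_c = A \cdot x, \qquad x_c \bmod A = 0, \qquad
P_{\mathrm{undetected}} = \frac{\#\{\,e \neq 0 : A \mid e\,\}}{2^{n}-1} \approx \frac{1}{A}.
```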
Mut zu Fehlern, um die Qualität zu steigern – Fault-Injection zur Steigerung der Zuverlässigkeit
(2013)
How can functional safety in vehicles be guaranteed effectively and in a future-proof way? And how can this be achieved specifically in electrified powertrains? AVL, in cooperation with LaS³ and the Universität der Bundeswehr München, addressed this question in a research project. The answer: automatic memory tests, in combination with program flow monitoring and redundant hardware, can be replaced particularly effectively by "coded processing", because it increases diversity in software in order to reduce the more elaborate and costly redundancy in hardware.
Diagnostic protocols in automotive systems can offer a huge attack surface with devastating impacts if vulnerabilities are present. This paper shows the application of active automata learning techniques for reverse engineering system state machines of automotive systems. The developed black-box testing strategy is based on diagnostic protocol communication. Through this approach, it is possible to automatically investigate a highly increased attack surface. Based on a new metric introduced in this paper, we are able to rate the possible attack surface of an entire vehicle or a single Electronic Control Unit (ECU). This novel attack surface metric allows comparisons of different ECUs from different Original Equipment Manufacturers (OEMs), even between different diagnostic protocols. Additionally, we demonstrate the analysis capabilities of our graph-based model to evaluate an ECU's possible attack surface over its lifetime.
Forschungsbericht 2013
(2014)
Forschungsbericht 2017
(2017)
Due to the ever-increasing interconnection of power grids, communication between the utility's control center and the infrastructure components within a substation is becoming more and more important. Both control commands and data for monitoring functions are transmitted. In current network architectures this communication takes place without cryptographic protection, which presents a target for directed attacks and thus a potential threat to the energy supply. To counter such attacks in the future, the ES³M security module is being developed. It is to be inserted into the network between the two communication partners to secure the data traffic. Using a threat analysis, requirements were derived that, in addition to cryptographic measures, also cover topics such as functional safety and longevity. To fulfill them, a special system architecture based on a division of tasks was designed. This architecture and the corresponding design decisions are presented.
In the field of software engineering, graph-based models are used for a variety of applications. Usually, the layout of those graphs is determined at the discretion of the user. This article empirically investigates whether different layouts affect the comprehensibility or popularity of a graph and whether one can predict the perception of certain aspects in the graph using basic graphical laws from psychology (i.e., Gestalt principles). Data on three distinct layouts of one causal graph is collected from 29 subjects using eye tracking and a print questionnaire. The evaluation of the collected data suggests that the layout of a graph does matter and that the Gestalt principles are a valuable tool for assessing partial aspects of a layout.
Development trends for computing platforms have moved from increasing the frequency of a single processor to increasing parallelism with multiple cores on the same die. Multiple cores have strong potential to support cost-efficient fault tolerance due to their inherent spatial redundancy. This work takes a step toward software-only fault tolerance in the presence of permanent and transient hardware faults. Our approach utilizes software-based spatial triple modular redundancy and coded processing on a shared-memory multi-core controller. We evaluate our approach on an Infineon AURIX TriBoard TC277 and provide experimental evidence of error resistance through fault injection campaigns with an iSystem iC5000 On-chip Analyzer.
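The core of software-based spatial TMR can be pictured in a few lines. This is a minimal sketch under our own assumptions (plain std::thread standing in for statically mapped cores, a toy workload, single-fault hypothesis), not the paper's implementation:

```cpp
// Minimal sketch of software-based spatial TMR: the same computation runs in
// three threads, standing in for three cores, and a majority vote masks a
// single faulty result.
#include <cstdint>
#include <thread>

int32_t compute(int32_t x) { return 2 * x + 1; }   // toy workload

int32_t vote(int32_t a, int32_t b, int32_t c) {
    if (a == b || a == c) return a;  // a agrees with at least one replica
    return b;                        // single-fault hypothesis: b == c here
}

int32_t tmrCompute(int32_t x) {
    int32_t r[3];
    std::thread t0([&] { r[0] = compute(x); });
    std::thread t1([&] { r[1] = compute(x); });
    std::thread t2([&] { r[2] = compute(x); });
    t0.join(); t1.join(); t2.join();
    return vote(r[0], r[1], r[2]);
}
```

Combining such replication with coded processing, as the paper does, additionally catches faults that corrupt all replicas identically.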
The requirements for safety-related software systems increase rapidly. To detect arbitrary hardware faults, there are applicable coding mechanisms that add redundancy to the software. In this way it is possible to replace conventional multi-channel hardware and thus reduce costs. Arithmetic codes are one possibility for coded processing and are used in this approach. A further approach to increase fault tolerance is the multiple execution of certain critical parts of software. This kind of time redundancy is easily realized by parallel processing in an operating system. Faults in the program flow can be monitored. No special compilers that insert additional generated code into the existing program are required. The usage of multi-core processors would further increase the performance of such multi-channel software systems. In this paper we present the approach of program flow monitoring combined with coded processing, which is encapsulated in a library of coded data types. The program flow monitoring is indirectly realized by means of an operating system.
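As an illustration of what a library of coded data types can look like, here is a hedged sketch of an AN-coded integer in C++ (our own toy, not the authors' library; the factor A = 661 is an arbitrary example): addition works directly on code words because A·x + A·y = A·(x + y), and a residue check detects corrupted values.

```cpp
// Sketch of an AN-coded data type as it might appear in such a library.
// Values are stored multiplied by a transformation factor A.
#include <cstdint>
#include <stdexcept>

template <int32_t A = 661>            // example factor, arbitrarily chosen
class Coded {
    int64_t v_;                       // stores A * value
    Coded(int64_t raw, int) : v_(raw) {}
public:
    explicit Coded(int32_t value) : v_(int64_t{A} * value) {}
    Coded operator+(const Coded& o) const { return Coded(v_ + o.v_, 0); }
    void check() const {              // residue test: valid words are multiples of A
        if (v_ % A != 0) throw std::runtime_error("coded value corrupted");
    }
    int32_t decode() const { check(); return static_cast<int32_t>(v_ / A); }
};

// Usage: Coded<> a(20), b(22); int32_t sum = (a + b).decode();  // 42
```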
The Logical Execution Time (LET) has recently been integrated in multi-core automotive systems to ensure timing and dataflow determinism. Although buffering mechanisms are introduced to incorporate LET semantics, they do not guarantee that tasks are executed within their LET frames. In fact, LET and buffering semantics are violated if scheduling is not designed to execute all tasks within their LET frames and in a specific order. In this paper, we describe a scheduling synthesis technique for Fixed-Priority Scheduling (FPS) to achieve resource-efficient execution of LET systems. The proposed approach considers LET semantics, scheduling overheads, and delays caused by operating system operations and provides the possibility to optimize the schedule with respect to aspects like scheduling overheads. Our performance and feasibility evaluation shows that the proposed algorithm provides results in a reasonable amount of time for models of complex industrial applications. Thus, the integration of the proposed algorithm into an automated process is of high benefit to accelerate the development of vehicle applications.
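The LET semantics the abstract builds on can be sketched compactly (our illustration of the general LET model, not the paper's scheduler): inputs are latched at the start of the LET interval and outputs are published at its end, so the dataflow other tasks observe is independent of when the task body actually executes inside its LET frame.

```cpp
// Sketch of LET-conformant communication with task-local buffers.
#include <cstdint>

struct Signals { int32_t speed = 0; int32_t torque = 0; };

Signals g_in;    // written by producer tasks
Signals g_out;   // read by consumer tasks

struct LetTask {
    Signals in, out;                        // task-local working copies
    void letStart() { in = g_in; }          // read point: LET interval begin
    void body()     { out.torque = in.speed / 2; }  // may run anywhere in the frame
    void letEnd()   { g_out = out; }        // write point: LET interval end
};
```

The scheduling synthesis described above must guarantee that body() always completes between letStart() and letEnd(); the buffering alone does not.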
Today's cyberphysical systems are increasingly prone to misuse. To secure existing and future software systems, introducing concepts of IT-Security and Secure Software Engineering (SecSE) in Software Engineering (SE) courses is essential for academic education of future software engineers. This is not only important for computer science students, but also for engineering students studying topics of computing and SE. However, only little research exists on integrating these topics into traditional SE courses, especially for engineering students in non-computer science majors. To narrow this gap, this paper contributes with the design and evaluation of an exercise on modeling misuse cases alongside use cases, based on the inductive teaching method problem-based learning (PBL). The exercise is part of an educational design research investigating which learning content and teaching methods are suitable for integrating IT-Security and SecSE topics into traditional SE education of engineering students to convey factual knowledge as well as raise awareness and interest for both topics during software development. We present the integration of the exercise design into a traditional SE course for engineering students and its evaluation to examine its suitability. We evaluated the exercise design regarding the suitability of the design components, the learning content of misuse cases and the intended learning goals as well as its impact on students' motivation, and their interest in IT-security. The paper then presents indications on the feasibility and success of the exercise design for teaching misuse cases to engineering students and sparking their interest in IT-Security.
We address a novel probabilistic approach to estimate the Worst Case Response Time bounds of tasks. Multi-core real-time systems process tasks in parallel on two or more cores. Tasks in our contribution may preempt other tasks, block tasks with semaphores to access global shared resources, or migrate to another core. The resulting task behavior is random, and the distribution of a task's response times collected within a processing interval is multimodal. Extreme value approaches need unimodal response time distributions to estimate the Worst Case Response Time of tasks. The newly proposed method derives a set of three task set shapes from the source task set; it is used to minimize the uncertainty of random task behavior by maximizing the coverage of possible Worst Case Response Times. The case study evaluates the newly proposed estimation method using dynamically generated random tasks with varying task properties.
Modern compute architectures often consist of multiple CPU cores to achieve their performance, as physical properties put a limit on the execution speed of a single processor. This trend is also visible in the embedded and real-time domain, where programmers are forced to parallelize their software to keep deadlines. Additionally, embedded systems rely increasingly on modular applications that can easily be adapted to different system loads and hardware configurations.
To parallelize applications under these dynamic conditions, dispatching frameworks like Threading Building Blocks (TBB) are often used in the desktop and server segment. More recently, Embedded Multicore Building Blocks (EMB2) was developed as a task-based programming solution designed with the constraints of embedded systems in mind.
In this paper, we discuss how task-based programming fits such systems by analyzing scheduler implementation variants, with a focus on classic work-stealing and the libraries TBB and EMB2. Based on the state of the art, we introduce a novel resource-trading concept that allows static memory allocation in a work-stealing runtime while holding strict space and time bounds. We conduct benchmarks between an early prototype of the concept, TBB, and EMB2, showing that resource-trading does not introduce additional runtime overheads, while unfortunately also not improving on execution time variances.
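For readers unfamiliar with the programming style under discussion, here is a minimal example using TBB's parallel_for; the work-stealing scheduler behind it is what the paper analyzes. This is generic TBB usage, not the resource-trading prototype described above.

```cpp
// Data-parallel loop: TBB splits the range into chunks that idle worker
// threads can steal, balancing load dynamically.
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

void scale(std::vector<float>& data, float factor) {
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= factor;   // sub-ranges are stolen by idle workers
        });
}
```

The dynamic memory allocation hidden inside such runtimes is exactly what the proposed resource-trading concept seeks to replace with static allocation.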
The number of IoT devices in SCADA and ICS systems is rising quickly, especially in the domain of critical infrastructures. These kinds of systems perform mission-critical tasks like controlling devices in industrial facilities or substations in the smart grid. Therefore, they are subject to numerous regulatory standards. Yet, to provide remote access over the internet, special architectures are developed to integrate a network interface into these devices without interfering with the actual functionality. However, these architectures either lack security measures against cyber-attacks or do not offer the necessary performance for time-critical communication interfaces. To solve this, an architecture consisting of three units is introduced in this paper to provide a network interface with extensive security measures and high performance. The main feature is the isolation of the cryptographic functionality onto an additional MCU. After proposing the basic concept, the paper presents numerous implementation details. Based on the current state of implementation, a concept validation of the realized architecture is described.
Confidence in the results of Artificial Neural Networks (ANNs) is increased by preferring to reject data that is not trustworthy instead of risking a misclassification. For this purpose a model is proposed that is able to recognize, during inference, data which differs significantly from the training data. The proposed model observes all activations of the hidden layers, as well as the input and output layers of an ANN, in a grey-box view. To make ANNs more robust in safety-critical applications, this model can be used to reject flawed data that is suspected to decrease the accuracy of the model. If this information is logged during inference, it can be used to improve the model by training it specifically with the missing information. An experiment on the MNIST dataset is conducted and its results are discussed.
At the beginning of every security analysis or penetration test of a system, information about the target has to be gathered. On IT systems, a port scan is usually performed as a first step of an investigation. Since the communication protocols differ in automotive systems, generic port scanning tools can't be used for a security analysis of CANs.
More complex protocols have a higher likelihood of implementation errors and bugs. On CAN networks, such payloads are transferred through International Standard Transport Protocol (ISO-TP) communication. We designed a new methodology to identify ISO-TP endpoints in automotive networks. Each of these endpoints can provide exploitable application layer protocols and therefore has to be considered during penetration testing and security analysis.
We contribute a new scan approach for the automated evaluation of possible attack surfaces in automotive CAN networks which achieves higher coverage than state-of-the-art approaches and offers multiple further advantages.
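The probing idea behind such a scan can be sketched as follows (a hedged illustration of ours, not the paper's tool): an ISO-TP First Frame is sent to a candidate CAN ID, and an answering Flow Control frame (PCI high nibble 0x3) is taken as evidence of a listening ISO-TP endpoint. The sketch assumes Linux SocketCAN and the interface name can0.

```cpp
// Probe one CAN ID for an ISO-TP endpoint via SocketCAN.
#include <cstring>
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int openCan(const char* ifname) {                 // e.g. "can0" (assumed)
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    ifreq ifr{};
    std::strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ioctl(s, SIOCGIFINDEX, &ifr);
    sockaddr_can addr{};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    return s;
}

bool probeIsoTp(int sock, canid_t txId) {
    can_frame ff{};
    ff.can_id = txId;
    ff.can_dlc = 8;
    ff.data[0] = 0x10;               // ISO-TP First Frame, length high nibble
    ff.data[1] = 0x20;               // announce a 32-byte payload
    write(sock, &ff, sizeof ff);
    can_frame rx{};
    if (read(sock, &rx, sizeof rx) == sizeof rx)  // blocking read; a real
        return (rx.data[0] & 0xF0) == 0x30;       // scanner would use a timeout
    return false;
}
```

A full scan would iterate this probe over the CAN ID range and match response IDs to sender IDs; that bookkeeping is omitted here.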
In the age of the Internet of Things, it is becoming increasingly important to integrate knowledge about the development of secure systems (Secure Software Engineering) into academic teaching. However, teaching IT Security and Secure Software Engineering to non-computer scientists is rare. Therefore, we focus our research on the integration of IT Security into the software engineering education of non-computer scientists, particularly electrical engineers, by means of inductive teaching and learning arrangements. After collecting students' preconceptions of IT Security and Secure Software Engineering in prior work, this paper now contributes a first mapping of these preconceptions to corresponding learning content as well as suitable inductive teaching methods, in order to create new lecture and exercise units and improve academic learning and teaching in both areas.
Code reviews are an essential part of quality assurance in modern software projects. But despite their great importance, they are still carried out in a way that relies on human skills and decisions. During the last decade, there have been several publications on code reviews using eye tracking as a method, but only a few studies have focused on the performance differences between experts and novices. To get a deeper understanding of these differences, the following experiment was developed: this study surveys expertise-related differences in experts', advanced programmers', and novices' eye movements during the review of eight short C++ code examples, including correct and erroneous code. A sample of 35 participants (21 novices, 14 advanced and expert programmers) was recruited. A Tobii Spectrum 600 was used for data collection. Measures included participants' eye movements during the code review, demographic background data, and cued retrospective verbal comments on replays of their own eye movement recordings. Preliminary results provide evidence of experience-related differences between participants. Advanced and expert programmers performed significantly better in error detection, and the eye tracking data implies a more efficient reviewing strategy.
The enormous amounts of data modern real-time systems have to process lead to expensive, long-lasting calculations. In order to manage those computations in a timely manner, parallel task models have gained a lot of popularity lately. However, parallel programming can be very cumbersome and verbose. Other computationally intensive sectors have dealt with parallel computing for decades and have accumulated their experience in the development of parallel frameworks. Examples of well-known parallel runtime systems are OpenMP, Intel Threading Building Blocks (TBB), and Microsoft Parallel Pattern Library (PPL). These runtime systems allow developers to enhance parallelism in their applications in a straightforward fashion. However, those parallel frameworks and the pattern-based interfaces they provide might not be easily applicable in real-time systems. In this paper, we investigate the use of parallel programming frameworks in time-critical systems. On that account, we discuss considerations for the design of real-time applications that make use of such parallel runtime systems. Furthermore, we evaluate three library-based frameworks from different computing sectors, namely Intel Threading Building Blocks, Embedded Multicore Building Blocks (EMBB), and High Performance ParalleX (HPX), by conducting benchmarks of various parallel algorithms on an embedded multicore architecture.
Currently, both fail-safe and fail-operational architectures are based on hardware redundancy in automotive embedded systems. In contrast to this approach, within the framework of Safely Embedded Software, safety is a result either of diverse software channels or of one channel of specifically coded software. Product costs are reduced and flexibility is increased. The overall concept is inspired by the well-known Vital Coded Processor approach. Since Mealy state machines are frequently used in embedded automotive systems, the realization of application software containing a general Mealy state machine with Safely Embedded Software is shown, starting from the high-level programming language C, together with corresponding measurements.
This paper presents an overview of the qualification and certification of tools used in the phases of the safety lifecycle for safety-critical applications, either for development or for verification and validation. Software development tools are widely used in the development of safety-critical software systems. More verification and validation procedures will be automated by software tools to reduce time-consuming manual testing. The impact of software tools on functional safety is discussed. Based on normative regulations like IEC 61508 and ISO DIS 26262, different approaches for tool qualification and certification are presented.
Safely embedded software
(2009)
Partly Proportionate fair (Partly-Pfair) scheduling, which allows task migration at runtime and assigns each task processing time with regard to its weight, makes it possible to build highly efficient embedded multi-core systems. Due to its non-work-conserving behavior, which might leave the CPU idle even when tasks are ready to execute, tasks finish only shortly before their deadlines are reached. Benefits are lower task jitter, but additional workload, e.g. through interrupts, can lead to deadline violations. In this paper we present a work-conserving extension of Partly-Pfair scheduling, called P-ERfair scheduling, and the algorithm P-ERfair-PD2, which applies the Pfair modifications used for Partly-Pfair to the concepts of ERfairness and the PD2 policy. With a simulation-based schedulability examination we show for multiple time base (MTB) task sets that P-ERfair-PD2 has the same performance as Partly-Pfair-PD2. Additionally, we show that P-ERfair-PD2 has a much higher robustness against perturbations, and therefore it is well suited for embedded domains, especially the automotive domain.
The advantages of component-based systems include reuse of generic components as well as adaption through variants. However, they bare a high risk of containing incompatibilities between components, due to the lack of control over the integration-relevant aspects of their components. Current development processes are able to detect incompatibilities between components only at very late stages of system development. The Virtual Integration methodology is an approach to detect and to solve compatibility issues during early stages of system design. The methodology supports developers with a set of measures to reduce the risk of incompatibilities to a minimum at each abstraction layer of their system architecture. Realtime requirements of embedded systems make it necessary to support the methodology with a formal model, which can describe dynamic properties of these systems. In our approach, we use interface automata because they offer a lightweight formalism to describe the behavior of components and to verify their compatibility based on these descriptions. In a feasibility study we show, to which extend interface automata are adequate for the foresaid purpose in the automotive application field.
In addition to functional requirements, embedded systems are subject in particular to non-functional quality requirements such as efficiency, reliability, and real-time capability. With the growing demand for computing capacity, previous concepts for increasing the performance of single-core systems can no longer be applied; the transition to multi-core systems becomes necessary. In the second part of this work, a simulation-based approach for comparing multi-core scheduling algorithms is presented, with which algorithms for multi-core systems with full migration and dynamic task priority are examined. We extend this approach with a method for examining a set of task sets with stochastically described properties and compare it with the algorithms BinPacking-EDF and P-ERfair-PD² described in Part 1 for a group of automotive powertrain systems.
Proportionate fair (Pfair) scheduling, which allows task migration at runtime and assigns each task processing time with regard to its weight, is one of the most efficient groups of SMP multiprocessor scheduling algorithms known up to now. Drawbacks are tight requirements on the task system, namely the restriction to periodic task systems with synchronized task activation, quantized task execution time, and implicit task deadlines. Most likely, a typical embedded real-time system does not fulfill these requirements. In this paper we address violations of these requirements. For heterogeneous task systems, we define the multiple time base (MTB) task system, which is a less pessimistic model than sporadic task systems and is used for automotive systems. We apply the concept of Pfair scheduling to MTB task systems, called partly proportionate fair (Partly-Pfair) scheduling. The restrictions on MTB task systems required for Partly-Pfairness are weaker than the restrictions on periodic task systems required for Pfairness. In a simulation-based study we examined the performance of Partly-Pfair-PD and found it capable of scheduling feasible MTB task sets causing a load of up to 100% of the system capacity.
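For context, the standard Pfair notions referenced here (textbook definitions, not the paper's MTB generalization): a periodic task T with execution requirement C_T and period T_T has weight w_T, and its lag measures the deviation of the actual allocation from an ideal fluid schedule:

```latex
w_T = \frac{C_T}{T_T}, \qquad
\mathrm{lag}(T,t) = w_T \cdot t - \mathrm{alloc}(T,0,t), \qquad
\text{Pfair} \iff -1 < \mathrm{lag}(T,t) < 1 \ \ \forall\, T,\, t.
```

The ERfair variant mentioned in the companion paper drops the lower bound, allowing tasks to run ahead of the fluid schedule and making the scheduler work-conserving.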
Embedded real-time systems are often used in harsh environments, for example engine control systems in automotive vehicles. In such ECUs (Engine Control Units) faults can lead to serious accidents. In this paper we propose a safe embedded architecture based on coded processing. This framework only needs two channels to provide fault tolerance and allows the detection and identification of permanent and transient faults. Once a fault is detected by an observer unit, the SES guard makes it visible and initiates a suitable failure reaction.
The shift from single-core to multi-core processors in real-time embedded systems leads to communication-based effects on timing, such as inter-core communication delays and blocking times. Moreover, the complexity of the scheduling problem increases when multi-core processors are used. In priority-based scheduling, a fixed priority assignment is used in order to enable predictable behavior of the system. Predictability means that the system has to be analyzable, which allows the detection of problems coming from scheduling decisions. For fixed-priority scheduling in multi-core real-time embedded systems, a proper task priority assignment has to be chosen such that the effects on timing are minimal. In this paper, we present an approach for finding near-optimal solutions for task priority assignment and the preemption/cooperation problem. A genetic algorithm is hereby used to create priority assignment solutions. A timing simulator is used for the evaluation of each solution regarding real-time properties, memory consumption, and communication overhead. In a case study we demonstrate that the proposed approach performs better than well-known, single-core-optimal heuristics for relatively complex systems.
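The search loop of such a genetic approach can be outlined compactly. The sketch below is our own minimal illustration, not the paper's implementation: candidates are permutations of unique priorities, fitness comes from a hypothetical simulateSchedule() stub standing in for the timing simulator, and variation is a simple priority swap.

```cpp
// Minimal genetic search over task priority assignments.
#include <algorithm>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

using Priorities = std::vector<int>;   // priorities[i] = priority of task i

std::mt19937 rng{42};

double simulateSchedule(const Priorities& p) {
    return p[0];                       // placeholder fitness for the sketch
}

Priorities randomAssignment(std::size_t n) {
    Priorities p(n);
    std::iota(p.begin(), p.end(), 0);  // unique priorities 0..n-1
    std::shuffle(p.begin(), p.end(), rng);
    return p;
}

Priorities mutate(Priorities p) {
    std::uniform_int_distribution<std::size_t> pick(0, p.size() - 1);
    std::swap(p[pick(rng)], p[pick(rng)]);  // swap two tasks' priorities
    return p;
}

Priorities evolve(std::size_t nTasks, int generations, std::size_t popSize) {
    std::vector<Priorities> pop;
    for (std::size_t i = 0; i < popSize; ++i)
        pop.push_back(randomAssignment(nTasks));
    for (int g = 0; g < generations; ++g) {
        // Score every candidate with the (simulated) timing analysis.
        std::vector<std::pair<double, Priorities>> scored;
        for (const auto& p : pop)
            scored.emplace_back(simulateSchedule(p), p);
        std::sort(scored.begin(), scored.end());
        // Elitism: keep the better half, refill with mutated elite copies.
        for (std::size_t i = 0; i < popSize; ++i)
            pop[i] = (i < popSize / 2) ? scored[i].second
                                       : mutate(scored[i - popSize / 2].second);
    }
    return pop.front();                // best assignment found
}
```

In the paper's setting the fitness function additionally encodes memory consumption and communication overhead, and the genome carries per-task preemption/cooperation flags.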
We present in this paper a new lock-based resource sharing protocol, PWLP (Preemptable Waiting Locking Protocol), for embedded multi-core processors. It is based on the busy-wait model and works with non-preemptive critical sections, while tasks may be preempted by tasks with a higher priority when waiting for resources. Our protocol can be applied in partitioned as well as global scheduling scenarios, in which task-fixed, job-fixed, or dynamically assigned priorities may be used. Furthermore, the PWLP permits nested requests to shared resources. Finally, we present a case study based on event-based simulations in which the FMLP (Flexible Multiprocessor Locking Protocol) and the proposed PWLP are compared.
With multi-core controllers entering the area of automotive control ECUs, strategies for parallelizing the control algorithms come into focus. This paper deals with a special part of automotive powertrain software, called state transitions. Since dependencies between the runnables executed there are weak, the transitions provide a good basis for parallelization. We present a strategy for efficiently distributing the execution of runnables to different cores while taking care of inner and outer dependencies. The strategy is accompanied by two case studies demonstrating the performance of the concept. The first one is carried out to find the most efficient strategies for parallelizing state transitions, based on randomly generated, simulated state transitions. In the second one, the developed partitioning strategies are applied to a real software project for an automotive powertrain system.
In this paper we present simulation- and model-based approaches for evaluating and validating the temporal and safety-relevant properties of software-intensive, safety-critical real-time embedded systems. A high-level reliability model of safe task execution is described by a continuous-time Markov process, enhanced by the modeling of execution times. It is shown that the behavior of this theoretical model, regarding real-time and safety metrics, can be transferred into an abstract system timing model, which can then be analyzed by a discrete event simulation approach. The verification of the discrete event simulation against the Markov models enables a holistic approach that combines reliability analysis with schedulability analysis of complex safety-critical multicore real-time systems by means of discrete event simulation.
Extended Task Priority and Preemptability Optimization in Real-Time Multi-Core Embedded Systems
(2014)
We present a model-based optimization approach for the task allocation problem in embedded multi-core systems. The required information is obtained from a system description in AUTOSAR and runtime measurements of the runnables in hardware traces. Based on this, an initial software partitioning of runnables to tasks is created. We then use a genetic algorithm to create and evaluate solutions to the task allocation problem. Each solution is hereby evaluated using a discrete event-based simulation, which allows the evaluation with regard to real-time properties, resource consumption, and data-communication overhead. The significance of our approach is then shown in a case study. There, we optimize the task allocation of an embedded system, whose complexity is comparable to that of an actual system, on a multi-core processor. Finally, the results of the optimization are transferred to an ECU Configuration Description to enable further development in compliance with the AUTOSAR methodology.
Global scheduling algorithms are very promising for application in embedded real-time systems using multi-core controllers. In this paper we take a first step toward applying such scheduling methods to real existing systems. In particular, a new resource model is necessary to avoid deadlocks, as this goal cannot be achieved by using the standard OSEK Priority Ceiling Protocol when shared global resources are in use. We also introduce the new metric mean Normalized Blocking Time in order to be able to compare locking mechanisms according to the timing effects of their blocking behavior. Finally, we give a simulative application example of the new metric using two different kinds of semaphore models and an example task set typical of existing embedded real-time systems in the automotive powertrain environment.
Error detecting and correcting codes are widely used in data transmission, storage systems, and also for data processing. In logical circuits like arithmetic operations, arbitrary faults can cause errors in the result. In safety-critical applications, however, it is important to avoid those errors which would lead to system failures. Several approaches are known to protect the result of operations during software processing. Like transmission systems, coded processing uses codes for fault detection. But in contrast to transmission systems, there is no adequate channel model available which makes it possible to evaluate the residual error probability of an arithmetic operation in an analytical way. This paper aims to close this gap in arithmetic error models by developing a model for an ordinary addition in a computer system. Thus, the reliability of an addition's result can be analytically evaluated.
The data flow is a crucial part of software execution in recent applications. It depends on the concrete implementation of the realized algorithm, and it influences the correctness of a result in case of hardware faults during the calculation. In logical circuits, like arithmetic operations in a processor system, arbitrary faults are becoming an increasingly serious concern. With modern manufacturing processes, the probability of such faults will increase and the result of a software's data flow will become more vulnerable. This paper shows a basic evaluation method for the reliability of a software's data flow under arbitrary soft errors, including the concept of fault compensation. This evaluation is discussed by means of a simple example based on an addition.
The professional requirements in software engineering have become highly volatile due to the complexities of project development and the rapid and innovative changes occurring in the field. Therefore, the development of interpersonal and social competences has gained central importance in the training of software developers. The following text presents a concept for acquiring competences using Pair Programming as an instrument. Moreover, arrangements for learning and teaching are presented that facilitate the acquisition of these competences. By approaching the issue of competence acquisition on a technical as well as on an educational and social level, lifelong learning is facilitated and supported.
We present a simulation-based approach to reliability analysis combined with a schedulability analysis of software-intensive embedded real-time systems. In such a system, not only does the software execution have to be hardened against soft errors, e.g., by means of coded processing or diverse execution, but the real-time requirements also still have to be met in the presence of such errors to guarantee a safe operation of the system. For that reason, the influence of a given sporadic error with a certain error rate on the real-time characteristics is analyzed by means of a Monte Carlo simulation. Different safety design patterns are introduced and compared. Furthermore, the impact on the schedulability of an embedded system is discussed.
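The flavor of such a Monte Carlo analysis can be shown in a few lines. The following sketch uses numbers we made up (one task with C = 2 ms and D = 10 ms, Poisson soft errors at rate λ, each detected error triggering one re-execution as a simple time-redundancy pattern) and is not the paper's model:

```cpp
// Monte Carlo estimation of the deadline miss ratio under sporadic errors.
#include <cmath>
#include <iostream>
#include <random>

int main() {
    const double C = 2.0, D = 10.0;    // execution time, deadline [ms]
    const double lambda = 0.05;        // assumed soft error rate [1/ms]
    const long jobs = 1'000'000;
    std::mt19937 rng{1};
    // Probability that at least one error hits an execution of length C.
    std::bernoulli_distribution errorInRun(1.0 - std::exp(-lambda * C));
    long misses = 0;
    for (long i = 0; i < jobs; ++i) {
        double response = C;
        while (errorInRun(rng))        // each faulty run is repeated once more
            response += C;
        if (response > D) ++misses;
    }
    std::cout << "deadline miss ratio: "
              << static_cast<double>(misses) / jobs << '\n';
}
```

Comparing safety design patterns then amounts to swapping the recovery model inside the loop (e.g., re-execution vs. a diverse second channel) and observing the miss ratio.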
This paper presents the reliability evaluation of task execution during safe software processing. The standard method of duplication in a safety-critical application can also be applied to tasks in a software system. In addition, coded task processing offers a further possibility to increase the reliability and availability of software. The presented analysis covers the reliability analysis of a single, a duplicated, and a coded task by the technique of continuous-time Markov processes. Markov processes are often used for the reliability evaluation of safety-critical systems. We introduce a method to describe the execution time of tasks by means of enhanced Markov models and their solution by numerical methods.
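For reference, the numerical machinery behind such continuous-time Markov models (standard theory, not specific to this paper): the state probability vector π(t) follows the Kolmogorov forward equation driven by the generator matrix Q, and reliability is the probability mass remaining in the up states:

```latex
\frac{d\pi(t)}{dt} = \pi(t)\,Q, \qquad
R(t) = \sum_{i \in \mathrm{UP}} \pi_i(t), \qquad
\mathrm{MTTF} = \int_0^\infty R(t)\,dt.
```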
Embedded trends
(2012)
In logical circuits, like arithmetic operations in a processor system, arbitrary faults are becoming an increasingly serious concern. Modern manufacturing processes lead to less reliability and a higher vulnerability of software execution to soft errors. The correctness of certain results is important especially for safety-critical applications, whose reliability depends on the fault-free execution of each single instruction and the dependencies between them. The more complex a piece of software is, the more unreliable the outcome is. But there is a contrary effect: if the probability of multiple faults increases, there is also the chance that two faults compensate each other and the result is correct again. This paper presents the basic ideas for such a reliability evaluation of a software's data flow with arbitrary soft errors and the effect of fault compensation. Further, this evaluation provides a possibility to compare different implementations of a data flow with respect to reliability. This is shown by the comparison of two different error codes as alternatives for coded data processing.
In this paper we present a scheduling approach for safety-critical, fault-tolerant, multicore real-time embedded systems. For this kind of system, not only the correctness of a computed result but also the strict adherence to timing requirements of the computation is essential to avoid any kind of damage. To react to unpredictable, arbitrary hardware faults, suitable error detection mechanisms have to be applied. The caused error itself and its detection and correction have a great impact on the system's timing behavior. To still keep the real-time requirements, the scheduling algorithm used has to ensure maximum flexibility with respect to disturbances of the timing. The group of Proportionate Fair (Pfair) multicore scheduling algorithms has been proven to create an optimal schedule in polynomial time. The contribution of this paper is a Pfair-based algorithm that uses tight coupling between the error detection mechanisms and the scheduler of the real-time operating system to establish a loop-back connection.
Software engineering is a very volatile profession that requires a variety of theoretical as well as practical skills. In addition to technical expertise, graduates must have a variety of social, methodical, and personal competences. The acquisition of these non-functional competences is becoming more and more important for a successful software engineer. To fulfill these requirements, it is essential to prepare future professionals already during their college education. This paper presents exercises for a software engineering lecture with the goal of strengthening the students' practical experience and supporting the development of their non-functional competences. The developed exercises impart technical knowledge and encourage the students to improve their self-organized and lifelong learning. Thereby they face practical issues in all steps of the software engineering process while working on an inter-semester project.
Traditional methods rely on static timing analysis techniques to compute the Worst Case Response Time of tasks in real-time systems. Multi-core real-time systems are faced with concurrent task executions, semaphore accesses, and task migrations where it may be difficult to obtain the worst-case upper bound. A new three-staged probabilistic estimation concept is presented. Worst Case Response Times are estimated for task sets which consist of tasks with multiple time bases. The concept involves data generation with sample classification and sample size equalization, model fitting, and Worst Case Response Time estimation on the basis of extreme value distribution models. A Generalized Pareto Distribution model fit method which includes threshold detection and parameter estimation is also presented. Sample classification in combination with the new Generalized Pareto Distribution model fit method allows the estimation of Worst Case Response Times with low pessimism ranges compared to estimation methods that use the Generalized Pareto or the Gumbel max distribution without sample classification.
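For context, the standard peaks-over-threshold model underlying such estimates (textbook extreme value theory, not this paper's contribution): exceedances x over a threshold u are fitted with a Generalized Pareto Distribution,

```latex
F_u(x) = 1-\left(1+\frac{\xi x}{\sigma}\right)^{-1/\xi}, \qquad x \ge 0,\ \sigma > 0,
```

with the exponential tail F_u(x) = 1 - e^{-x/\sigma} as the limiting case ξ → 0; a probabilistic WCRT bound is then read off as u plus a sufficiently high quantile of F_u.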
Measuring competencies may serve as a feedback mechanism as well as a judgment device for a lecturer. As measuring every competency from a catalogue of competencies is not very viable, the to-be-measured competencies are grouped in competency profiles. Further, assessment practices are shown and applied to a course in a study program. A discussion of useful practices concludes this contribution.
Verbundprojekt EVELIN
(2012)
Entdeckendes Lernen
(2012)
Teaching software testing is a challenging task, especially if you want to impart more in-depth and practical knowledge to the students. Nevertheless, most lectures still teach in a classic lecture format, despite the fact that this way of instruction is no longer optimal for today's requirements. In this paper we present our implementation of an active learning method to deepen knowledge in academic software testing education. We describe a card game for advanced learning that promotes students' collaboration and knowledge exchange in a playful and competitive manner. The design of the game is based on constructivist and cooperative theories. A subsequent evaluation shows that the use of this card game for teaching software testing is a suitable method.
With the availability of the AUTOSAR standard, model-driven methodologies are becoming established in the automotive domain. However, the process of creating models of existing system components is often difficult and time consuming, especially when legacy code has to be re-used or information about the exact timing behavior is needed. In order to tackle this reverse engineering problem, we present CoreTAna, a novel tool that derives an AUTOSAR compliant model of a real-time system from a dynamic analysis of its trace recordings. This paper gives an overview of CoreTAna's current features and discusses its benefits for reverse engineering.
Modelling approaches have to satisfy certain criteria in order to sufficiently encompass the characteristics of dependable heterogeneous multi- and many-core system architectures. This work-in-progress paper gives an overview of modern modelling approaches and their related research projects, particularly those regarding domain-specific architecture description languages, as well as of the specific challenges of dependable systems and heterogeneous multi- and many-core designs, i.e. scheduling techniques for real-time requirements and concerns regarding functional safety. Furthermore, an ongoing research effort to identify a set of criteria for evaluating the eligibility of modelling approaches for the task of adequately representing these systems and their specific characteristics is presented.
A cultural change in the university eco-system is possible with diverse learning approaches in faculties. Diverse learning offers can cope with the diversity of students regarding their value systems. Currently, teaching at universities is dominated by "teacher-centered teaching", although there are approaches that use different methods to accelerate and intensify the teaching and learning process. Nevertheless, these approaches often do not show the desired impact with all students. This paper offers insights into why this is the case, using the Graves value systems model, and proposes a set of methods which fit the different value systems of students.
Research-oriented learning provides students the opportunity to develop their research competences by experiencing research practice; this often happens within research associations involving different universities and companies. This paper introduces a two-step approach to evaluate research-oriented learning within a research association. First, we evaluate the research environment with the instrument of the adapted Collaboration Maturity Model (Col-MM) to see whether the collaboration network and its management are able to support the students in their learning process. Additionally, we take into account the evaluation of the students' research competence. This approach targets the assessment of the students' starting conditions and the comparison of their performance level until the end of the research association's project phase. These two evaluation phases provide the potential to create an ideal research environment and consequently enable the students to develop and improve their research competence.
In this research, we investigate the possibility of applying ranking task activities in teaching and learning software engineering courses. We introduce three types of ranking tasks, conceptual, contextual, and sequential ranking questions, which cover most core topics in the course, such as requirements analysis, architecture design, and quality validation. We also conducted experiments with a group of students to see whether ranking tasks could increase their conceptual knowledge in specific areas. Assessments were given in order to evaluate the effectiveness of this activity, showing an obvious increase in complex conceptual understanding.
For many students, learning to program is a crucial task. In this research, we took a glance at the current literature and identified the most frequently occurring cognitive deficits. Thereby we identified three elements that appear most significant throughout the study. We think that these deficits concern fundamental basics for learning how to program. The students' education in cognitive abilities for programming needs revision and should receive far more training. In future research we want to establish a guideline for addressing these deficits through adequate teaching and learning arrangements in pre-programming education.
In this paper we present our first steps in defining the type, scope, and relevance of writing in higher education in software engineering. We aim to identify gaps in scientific research and raise a new and necessary research interest in this area. First we clarify the relevance of writing in higher education in general. In a second step we highlight the relevance of writing in the domain of software engineering in particular. Soft skills to be taught to students of engineering professions, and especially to software engineering students, are widely discussed. We discuss the skill of writing from a theoretical view as well as the reasons for the high relevance of this skill for future engineers. An obligation to teach writing in higher education is formulated.
Writing, without doubt, is besides reading a core competency which allows us to "exploit" knowledge in general. It also makes possible the exploration of software engineering's core issues. Especially within this context it is necessary to master the reading of complex texts as well as to be able to write in an appropriate academic style. With regard to studies in software engineering this seems obvious, but in fact the opposite is the reality. Therefore, measures to improve these skills need to be applied. At the Ostbayerische Technische Hochschule (OTH, University of Applied Sciences, Regensburg) a new format, the so-called c*lab, was installed during the winter semester 2017. This was a course which followed the principle of "Writing Across the Curriculum" (WAC). Organized parallel to a lecture on learning to program in the language C, and addressing students of the first semester, the course was a completely voluntary offer in addition to the general standard courses and lectures of the faculty. Students who participated not only reflected on C and its principles and on writing as an end in itself, but they also learned to express technical thoughts and ideas through the use of didactic methods. Teaching basic LaTeX concepts for writing a paper based on the IEEE bare_conf.tex template was also planned. The course followed the idea of student-centred learning. This paper presents the main structure, goals, and means of the c*lab, and the theory behind it. It also embeds the course within the horizon of experiences of teaching writing skills at the Laboratory for Safe and Secure Systems (LaS³) at the Faculty of Electrical and Information Engineering at OTH Regensburg. First experiences have shown that participants increased their writing skills and their awareness of the importance of writing.
The number of safety-critical embedded systems in automotive development is growing rapidly. Ensuring their reliability not only increases the complexity of functions but also requires determinism at design and execution time, which is considerably challenging to fulfill and verify for multi-core processors. The Logical Execution Time (LET) has recently been recognized in the automotive industry as an approach for ensuring deterministic functional behavior. However, to decrease the manual design effort and time for deploying such complex systems to multi-core platforms, and to ensure their strict timing and safety requirements, automatic solutions are needed. This work presents a solution for allocating tasks to multi-core processors and generating a time-triggered schedule for embedded systems considering safety, timing, and LET semantics. The approach we propose solves both challenges by defining them as a Constraint Satisfaction Problem (CSP). To examine our CSP formulation, we use MiniZinc, which is a solver-independent constraint modeling language that can employ a variety of solvers. In a case study, we explore optimizations of an industrial system that are enabled by scheduling and task allocation design decisions. Further, the performance of the proposed solutions is evaluated based on a large set of synthetically generated system models.
With parallel applications becoming more and more popular even in real-time systems, the demand for safe and easy-to-use software libraries and frameworks for parallel and concurrent computations is growing immensely. These frameworks usually provide an implementation for different sets of software patterns. A very well known software pattern for concurrency is the Active Object pattern, which allows various threads to have synchronized access to an object in question. This paper presents the Parallel Active Object pattern, which extends the common Active Object pattern to support the use of objects whose computations are profoundly enhanced by parallel execution. Furthermore, a C++ software framework is introduced which implements the Parallel Active Object pattern and thus provides the possibility of using task- or data-parallel patterns, for example Map, Reduce, and Divide-and-Conquer, on the active object's calculations. The proposed framework is evaluated against two other popular libraries, namely OpenMP and Intel Threading Building Blocks. Through utilization of the C++11 standard and template classes, a simple user interface is provided which abstracts the distribution of workloads among the worker threads. By making use of the C++ Standard Template Library, the framework can easily be ported to embedded systems; by extending the pattern with real-time capabilities that ensure a timely and reliable execution of the method requests, providing the framework for time-critical environments is also targeted in the future.
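For readers unfamiliar with the base pattern, here is a minimal classic Active Object in C++ (our own sketch; the paper's framework adds a parallel execution layer on top of this idea): method requests are enqueued as packaged tasks, a single worker thread executes them, and callers receive results through futures.

```cpp
// Classic Active Object: serialized execution of method requests on a
// dedicated worker thread, with results delivered via std::future.
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

class ActiveObject {
    std::queue<std::function<void()>> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_{[this] {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
            if (done_ && queue_.empty()) return;
            auto task = std::move(queue_.front());
            queue_.pop();
            lk.unlock();
            task();                    // runs outside the lock
        }
    }};
public:
    template <class F>
    auto call(F f) -> std::future<decltype(f())> {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
        auto fut = task->get_future();
        { std::lock_guard<std::mutex> lk(m_); queue_.push([task] { (*task)(); }); }
        cv_.notify_one();
        return fut;
    }
    ~ActiveObject() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
};

// Usage: ActiveObject ao; auto f = ao.call([] { return 42; }); f.get();
```

The extension the paper proposes replaces the single worker with a pool that applies task- or data-parallel patterns to each request while keeping the caller-facing future interface unchanged.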