The data flow is a crucial part of software execution in modern applications. It depends on the concrete implementation of the chosen algorithm, and it determines whether a result remains correct when hardware faults occur during the computation. In logic circuits, such as the arithmetic units of a processor system, arbitrary faults will become an increasingly serious concern: with modern manufacturing processes, the probability of such faults increases, and the result of a software's data flow becomes more vulnerable. This paper presents a principal evaluation method for the reliability of a software's data flow under arbitrary soft errors, including the concept of fault compensation. The evaluation is discussed by means of a simple example based on an addition.
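The following is a minimal sketch of such an evaluation, assuming a toy 8-bit addition and a fault model in which exactly one bit flips in each operand; the word width, fault model, and function names are illustrative assumptions, not taken from the paper:

```python
import itertools

WIDTH = 8                     # assumed word width of the toy datapath
MASK = (1 << WIDTH) - 1

def flip(value, bit):
    """Inject a soft error by flipping a single bit of a value."""
    return value ^ (1 << bit)

def double_fault_stats(a, b):
    """Enumerate every pair of single-bit flips (one on each operand)
    and count how often the faulty sum still matches the golden sum,
    i.e. how often the two faults compensate each other."""
    golden = (a + b) & MASK
    compensated = wrong = 0
    for i, j in itertools.product(range(WIDTH), repeat=2):
        faulty = (flip(a, i) + flip(b, j)) & MASK
        if faulty == golden:
            compensated += 1   # e.g. +2^i on a cancels -2^i on b
        else:
            wrong += 1
    return compensated, wrong

comp, wrong = double_fault_stats(a=0b0101_1010, b=0b0011_1100)
print(f"compensated: {comp}, wrong: {wrong}, "
      f"P(wrong result | double fault) = {wrong / (comp + wrong):.3f}")
```

Exhaustively enumerating the fault pairs for one operand pair already exposes the compensation effect: a bit flipped upward in one operand can be cancelled by the same bit flipped downward in the other.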
We present a simulation-based approach to reliability analysis combined with a schedulability analysis of software-intensive embedded real-time systems. In such a system, not only must the software execution be hardened against soft errors, e.g., by means of coded processing or diverse execution, but the real-time requirements must also still be met in the presence of such errors to guarantee safe operation of the system. For that reason, the influence of sporadic errors with a given error rate on the real-time characteristics is analyzed by means of a Monte Carlo simulation. Different safety design patterns are introduced and compared, and the impact on the schedulability of an embedded system is discussed.
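A minimal sketch of the simulation idea, assuming a single periodic task whose deadline equals its period, a Poisson error-arrival model, and a re-execution safety pattern; all parameters and names are illustrative assumptions, not figures from the paper:

```python
import random

def deadline_miss_ratio(wcet, period, error_rate, recovery, runs=100_000):
    """Monte Carlo estimate of the probability that one job of a periodic
    task misses its deadline (= period) when sporadic soft errors force
    re-execution costing `recovery` extra time units.

    Errors are assumed to arrive as a Poisson process with rate
    `error_rate` (errors per time unit) -- an illustrative model."""
    misses = 0
    for _ in range(runs):
        response = wcet
        t = random.expovariate(error_rate)   # first error arrival time
        while t < response:                  # error strikes while job runs
            response += recovery             # re-execution overhead
            t += random.expovariate(error_rate)
        if response > period:
            misses += 1
    return misses / runs

print(deadline_miss_ratio(wcet=3.0, period=10.0, error_rate=0.05, recovery=3.0))
```

Other safety patterns, such as diverse execution or coded processing, would simply change how `response` grows per detected error, so variants can be compared by swapping that one line.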
This paper presents the reliability evaluation of task execution during safe software processing. Duplication, the standard method in safety-critical applications, can also be applied to the tasks of a software system. In addition, coded task processing offers a further way to increase the reliability and availability of software. The presented analysis covers a single task, a duplicated task, and a coded task using continuous-time Markov processes, a technique that is widely used for the reliability evaluation of safety-critical systems. We introduce a method to describe the execution time of tasks by means of enhanced Markov models and to solve these models numerically.
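A minimal sketch of such a Markov analysis, assuming a three-state continuous-time model of a duplicated task (both replicas working, one failed, both failed) with illustrative failure and repair rates; the enhanced models for execution times mentioned in the abstract would add further states, which are omitted here:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state CTMC for a duplicated task:
# state 0: both replicas ok, state 1: one failed, state 2: both failed.
lam, mu = 1e-4, 1e-2          # assumed failure and repair rates (1/s)

# Generator matrix Q: off-diagonal entries are transition rates;
# each diagonal entry makes its row sum to zero.
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,         0.0,   0.0],   # state 2 is absorbing (system failed)
])

p0 = np.array([1.0, 0.0, 0.0])        # start with both replicas working

for t in (1e2, 1e3, 1e4):
    p = p0 @ expm(Q * t)              # transient solution p(t) = p0 * e^(Qt)
    print(f"t = {t:8.0f} s  P(system failed) = {p[2]:.3e}")
```

The matrix exponential is one of several numerical solution methods; for larger state spaces, ODE integration or uniformization is typically used instead.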
In logic circuits, such as the arithmetic units of a processor system, arbitrary faults will become an increasingly serious concern. Modern manufacturing processes reduce reliability and make software execution more vulnerable to soft errors. The correctness of results is especially important for safety-critical applications, whose reliability depends on the fault-free execution of every single instruction and on the dependencies between instructions. The more complex a piece of software is, the more unreliable its outcome becomes. There is, however, a contrary effect: as the probability of multiple faults increases, so does the chance that two faults compensate each other and the result is correct again. This paper presents the basic ideas for such a reliability evaluation of a software's data flow under arbitrary soft errors, including the effect of fault compensation. Furthermore, the evaluation makes it possible to compare different implementations of a data flow with respect to their reliability. This is demonstrated by comparing two different error codes as alternatives for coded data processing.
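A minimal sketch of how two error codes could be compared, assuming AN codes (a value n is encoded as A*n and checked by divisibility by A) and a double-bit-flip fault model; the paper does not name the codes it compares, so the choice of A = 3 versus A = 7 is purely illustrative:

```python
import itertools

def undetected_ratio(A, width=16, n_max=256):
    """Fraction of double bit flips on AN-coded words (code word = A * n)
    that again yield a multiple of A, i.e. a valid but wrong code word
    that the divisibility check cannot detect (residual error)."""
    undetected = total = 0
    for n in range(1, n_max):
        word = A * n
        for i, j in itertools.combinations(range(width), 2):
            faulty = word ^ (1 << i) ^ (1 << j)
            total += 1
            if faulty % A == 0:
                undetected += 1   # two flips "compensated" into a code word
    return undetected / total

for A in (3, 7):
    print(f"A = {A}: undetected double-flip ratio = {undetected_ratio(A):.4f}")
```

The undetected double flips are exactly the cases where two faults compensate each other with respect to the code check, so such an enumeration ranks candidate codes by their residual error probability.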
In this paper we present a scheduling approach for safety-critical, fault-tolerant multicore real-time embedded systems. For this kind of system, not only the correctness of a computed result but also strict adherence to the timing requirements of the computation is essential to avoid any kind of damage. To react to unpredictable, arbitrary hardware faults, suitable error detection mechanisms have to be applied. The error itself, as well as its detection and correction, has a great impact on the system's timing behavior. To still meet the real-time requirements, the scheduling algorithm has to provide maximum flexibility against such timing disturbances. The family of Proportionate Fair (Pfair) multicore scheduling algorithms has been proven to produce an optimal schedule in polynomial time. The contribution of this paper is a Pfair-based algorithm that tightly couples the error detection mechanisms with the scheduler of the real-time operating system to establish a feedback loop.
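A minimal sketch of the Pfair notion of lag that such algorithms build on; this largest-lag-first toy is a simplification for illustration, not the PF/PD^2 algorithms from the Pfair literature and not the paper's algorithm:

```python
def pfair_lag_schedule(weights, cores, horizon):
    """Toy largest-lag-first allocator illustrating the Pfair lag
    lag_i(t) = w_i * t - allocated_i, i.e. how far task i has fallen
    behind the ideal fluid schedule that runs it at rate w_i."""
    allocated = [0] * len(weights)
    schedule = []
    for t in range(1, horizon + 1):
        # Lag of each task at the end of quantum t if it were not scheduled.
        lags = [w * t - a for w, a in zip(weights, allocated)]
        # Run the `cores` tasks that are furthest behind their fluid rate.
        chosen = sorted(range(len(weights)), key=lambda i: -lags[i])[:cores]
        for i in chosen:
            allocated[i] += 1
        schedule.append(chosen)
    return schedule

# Three tasks with utilizations 0.5, 0.5 and 1.0 on two cores.
for t, running in enumerate(pfair_lag_schedule([0.5, 0.5, 1.0],
                                               cores=2, horizon=6), 1):
    print(f"quantum {t}: tasks {running}")
```

An error-detection feedback as described in the abstract would, in such a model, raise the remaining demand of an affected task, and the lag-based selection would then automatically prioritize its recovery.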