Electrical energy is the single most important operating resource of computer systems. Although the energy demand of a computer is not directly visible as a system property, its impact is omnipresent and manifests in manifold ways: sudden system failures (i.e., system breakdowns) and recurrent standard operations (i.e., system charging) are practical examples. The energy demand of hardware components is a physical property of the integrated transistor circuits that make up today's computers. Dynamic energy demand at the hardware level, however, is caused by system activities (i.e., processes) at the software level. This causal relationship puts the analysis and improvement of system software into focus: system software yields challenges and opportunities in equal measure for reducing the energy demand of the system at the hardware level. In particular, fine-tuning of system components offers distinct measures to improve the energy efficiency of computer systems. Such improvements concern the coherent design of application and system software under consideration of hardware aspects.
This thesis presents, implements, and evaluates unique concepts for proactive energy-aware computing on energy-efficient systems-on-a-chip. In particular, it contributes a development method for energy-aware programming that builds on static and dynamic program analysis to support programmers in designing energy-aware programs. To assist programmers in reducing the energy demand of their programs, the thesis proposes a software-hardware tooling infrastructure that combines energy-aware programming techniques with automated energy-demand analysis at the system level. To further reduce the energy demand of computer systems, the thesis implements a process executive at the operating-system level that exploits a priori information at run time to reduce the energy demand of processes. This cross-layer approach enables the transfer of programmers' knowledge to the operating system so that the energy demand can be reduced at run time.
The thesis is the first to combine dynamic program analysis techniques with the automatic creation of program variants to support energy-aware programming at the operating-system level. The distinct combination of application knowledge to identify and set the key tuning parameters for the energy-efficient operation of a computing system, bound to an operating system, is claimed to be novel.
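To make the cross-layer idea concrete, the following C sketch shows how a programmer-supplied energy hint could be handed down to such a process executive. All identifiers (energy_hint_begin, PHASE_CPU_BOUND, and so on) are hypothetical and not taken from the thesis; the stubs only print what a real implementation would forward to the operating system.

```c
/* Hypothetical sketch; the identifiers are illustrative, not from the
 * thesis. A code region is annotated with a priori knowledge about its
 * behaviour; a real implementation would forward the hint to the process
 * executive in the kernel (e.g., via a dedicated system call) so that it
 * can choose an energy-efficient core/frequency configuration at run time. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum phase_kind { PHASE_CPU_BOUND, PHASE_MEMORY_BOUND, PHASE_IO_BOUND };

static void energy_hint_begin(enum phase_kind kind, uint64_t est_work) {
    /* Stub: print instead of crossing the user/kernel boundary. */
    printf("hint: phase=%d, estimated work=%llu\n",
           (int)kind, (unsigned long long)est_work);
}

static void energy_hint_end(void) {
    printf("hint: phase finished\n");
}

/* Placeholder for a CPU-bound program phase identified by program analysis. */
static void compress_chunk(const uint8_t *in, size_t n, uint8_t *out) {
    for (size_t i = 0; i < n; i++)
        out[i] = (uint8_t)(in[i] ^ 0x55u);
}

int main(void) {
    uint8_t in[64] = {0}, out[64];

    energy_hint_begin(PHASE_CPU_BOUND, sizeof in); /* knowledge goes to the OS */
    compress_chunk(in, sizeof in, out);
    energy_hint_end();
    return 0;
}
```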
Today's generation of sensor networks reflects several changes in system characteristics compared to traditional sensor networks. These changes affect three key aspects: energy as a fundamental resource, stream-based data processing, and the inherently dynamic structure of the overall system.
In this paper, we extract and present eight distinct challenges, aligned with these key aspects, that need to be addressed in the future. We use data from ongoing science and research projects to extract the most important challenges. These challenges must be tackled in order to provide the basis for a flexible and adaptive system design that supports data-stream processing in energy-constrained ad-hoc networks.
Invasive Computing
(2022)
Invasive computing is a paradigm for designing and programming future parallel computing systems. For systems with 1,000 or more cores on a chip, resource-aware programming is of utmost importance to obtain high utilisation as well as computational, energy, and power efficiency. Invasive computing gives the programmer explicit handles to specify and reason about the resource requirements desired or required in different phases of execution: in an invade phase, an application asks the operating system to allocate a set of processor, memory, and communication resources to be claimed. In a subsequent infect phase, the parallel workload is spread and executed on the obtained claim of resources. Finally, if the degree of parallelism should become lower again, a retreat operation frees the claim, and the application resumes sequential execution. To support this idea of self-adaptive and resource-aware programming, not only did new programming concepts, languages, compilers, and operating systems need to be developed, but also revolutionary architectural changes in the design of MPSoCs (multiprocessor systems-on-a-chip) to efficiently support invasion, infection, and retreat operations. This book gives a comprehensive overview of all aspects of invasive computing.
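The invade-infect-retreat cycle can be illustrated with a small, self-contained C sketch. The identifiers below are hypothetical stand-ins that merely mirror the three phases described above; the project's actual programming interfaces (e.g., in the InvadeX10 language) differ.

```c
/* Illustrative sketch of the three invasive-computing phases. */
#include <stdio.h>

typedef struct { int cores; } claim_t;

/* invade: ask the OS for a claim of resources matching the constraints. */
static claim_t invade(int min_cores, int max_cores) {
    (void)min_cores;                  /* assume the OS grants the maximum */
    claim_t c = { .cores = max_cores };
    printf("invade: claimed %d cores\n", c.cores);
    return c;
}

/* infect: spread the parallel workload onto the claimed resources. */
static void infect(const claim_t *c, void (*kernel)(int)) {
    for (int i = 0; i < c->cores; i++)
        kernel(i);                    /* sequential stand-in for parallel dispatch */
}

/* retreat: free the claim; the application resumes sequential execution. */
static void retreat(claim_t *c) {
    printf("retreat: freed %d cores\n", c->cores);
    c->cores = 0;
}

static void worker(int core) { printf("  core %d: parallel work\n", core); }

int main(void) {
    claim_t c = invade(2, 4);   /* invade phase  */
    infect(&c, worker);         /* infect phase  */
    retreat(&c);                /* retreat phase */
    return 0;
}
```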
Writing well-maintainable parallel programs that efficiently utilize many processor cores is still a significant challenge. Threads are hard to use, and so are event-based schemes. Furthermore, threads are affected by the blocking anomaly, that is, the loss of parallelism when threads execute a blocking system call, often resulting in low core utilization and unnecessarily high response times. This paper introduces pseudo-blocking system calls, built upon modern asynchronous queue-based system-call techniques (like Linux's io_uring), which circumvent the blocking anomaly. They are similar to Go's programming model, where one develops against a blocking interface to keep the code structure clean. However, instead of using synchronous non-blocking system calls as the underlying technique, our approach internally uses an asynchronous queue-based interface. We further present a novel architecture for concurrency platforms, like Cilk and Go, enabling low latencies and high throughput via pseudo-blocking system calls. Finally, we discuss future OS enhancements that would improve our proposed architecture. We implemented and evaluated a concurrency platform based on the concept of pseudo-blocking system calls. Our platform can outperform state-of-the-art systems like Go by 1.17x in a file-content search benchmark. It is able to increase the throughput of an echo-server benchmark by 4% when compared to Go, and by 17.8% when compared to Rust's Tokio, while improving the tail latency.
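A minimal C sketch of the core idea, assuming Linux's liburing: the caller sees an ordinary blocking read interface, while the implementation submits the request to an asynchronous submission queue and waits for its completion entry. In the paper's architecture, a concurrency platform would suspend only the calling green thread at the wait and schedule other work on the same core; the stand-alone wait here is merely the simplest way to show the mechanism.

```c
/* Pseudo-blocking read, sketched with liburing (not the paper's platform). */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

static struct io_uring ring;

/* Blocking interface for the caller, asynchronous implementation inside. */
static ssize_t pb_read(int fd, void *buf, unsigned len, off_t off) {
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, len, off);   /* enqueue, don't block */
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;                     /* a runtime would switch */
    io_uring_wait_cqe(&ring, &cqe);               /* to another fiber here  */
    ssize_t res = cqe->res;
    io_uring_cqe_seen(&ring, cqe);
    return res;
}

int main(void) {
    if (io_uring_queue_init(8, &ring, 0) < 0)
        return 1;

    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0)
        return 1;

    char buf[128];
    ssize_t n = pb_read(fd, buf, sizeof buf, 0);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```

The design point is that the clean, blocking code structure is preserved for the programmer, while the scheduler underneath is free to keep the core busy during the operation.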
Classical core memory was entirely non-volatile and could keep at least part of the operating system (OS) in main memory even across power cycles. These days we can have terabytes of NVRAM to repeat this approach, albeit on an entirely different scale and with large parts of the OS state still kept in the volatile CPU caches. In this paper, we discuss our experiences of running large modern operating systems, including their applications, entirely in NVRAM. We adapted stock Linux and FreeBSD kernels to work exclusively with NVRAM by hiding all DRAM from the kernels at boot time, in order to establish a realistic performance baseline without changing anything else. Following this entirely NVRAM-agnostic approach, we observed an effective performance penalty of a factor of about four, but only negligible increases in whole-system power draw. For our system with two CPU sockets and 56 cores in total, we even observed a reduction in power draw in several scenarios. Due to the prolonged execution times, however, the energy consumption increased for these measured workloads. While this might be discouraging at first sight, this result was achieved without any performance tuning for the specific characteristics of today's NVRAM technology. Therefore, we also discuss means to mitigate the observed shortcomings by integrating NVRAM appropriately into the memory hierarchy of future robust persistent systems.
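As a small, self-contained illustration of the byte-addressable access model that makes such NVRAM-resident systems possible, the following C sketch maps a persistent region into user space via PMDK's libpmem. This is not code from the paper; the path is hypothetical, and the paper itself runs entire kernels in NVRAM rather than individual applications.

```c
/* Hedged sketch: direct, byte-addressable NVRAM access with libpmem,
 * assuming a DAX-capable filesystem mounted at /mnt/pmem. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* Map (and create, if needed) a 4 KiB persistent region. */
    char *p = pmem_map_file("/mnt/pmem/demo", 4096, PMEM_FILE_CREATE,
                            0600, &mapped_len, &is_pmem);
    if (p == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(p, "state that survives a power cycle");

    if (is_pmem)
        pmem_persist(p, mapped_len);  /* cache-line flush + fence, no syscall */
    else
        pmem_msync(p, mapped_len);    /* fall back to msync(2) */

    pmem_unmap(p, mapped_len);
    return 0;
}
```

The contrast with the paper's whole-system approach is the need for explicit flushes: as the abstract notes, large parts of the state otherwise linger in the volatile CPU caches.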