Modularity is a widely used quality measure for graph clusterings. Its exact maximization is prohibitively expensive for large graphs. Popular heuristics progressively merge clusters starting from singletons (coarsening), and optionally improve the resulting clustering by moving vertices between clusters (refinement). This paper experimentally compares existing and new heuristics of this type with respect to their effectiveness (achieved modularity) and runtime. For coarsening, it turns out that the most widely used criterion for merging clusters (modularity increase) is outperformed by other simple criteria, and that a recent multi-step algorithm is no improvement over simple single-step coarsening for these criteria. For refinement, a new multi-level algorithm produces significantly better clusterings than conventional single-level algorithms. A comparison with published benchmark results and algorithm implementations shows that combinations of coarsening and multi-level refinement are competitive with the best algorithms in the literature.
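For reference, modularity in its standard Newman–Girvan form (standard background, not quoted from the abstract above): given adjacency matrix A, vertex degrees k_i, edge count m, and cluster assignments c_i,

    Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j),

where \delta(c_i, c_j) = 1 if vertices i and j lie in the same cluster and 0 otherwise. A coarsening step merges two clusters according to some criterion, for example the largest resulting increase in Q; refinement then moves individual vertices between clusters as long as Q improves.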
Many-core processors combine fast on-chip communication with access to large amounts of shared memory. This makes it possible to exploit the benefits of distributed as well as shared memory programming models within single parallel algorithms. While large amounts of data can be shared in memory and caches, coordinating the activities of hundreds of cores relies on cross-core communication mechanisms with ultra-low latency for very small messages. In this paper we discuss two communication protocols for the Intel SCC and compare them to the MPI implementation for the SCC. Our micro-benchmark results underline that special-purpose protocols for small messages make much finer levels of parallelism possible than general-purpose message passing systems.
Index Terms—many-core, message passing, shared memory
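To give a concrete feel for the kind of special-purpose small-message protocol the abstract refers to, here is a minimal generic sketch: a single-slot mailbox that sender and receiver poll. It is not the SCC-specific implementation (the SCC's on-chip message passing buffers are not cache-coherent); the slot layout, MSG_BYTES, and the function names are assumptions for illustration.

#include <stdatomic.h>
#include <string.h>

/* Hypothetical single-slot mailbox for very small messages.
 * One writer (sender core), one reader (receiver core). */
#define MSG_BYTES 32

typedef struct {
    _Atomic int full;            /* 0 = empty, 1 = message present */
    char payload[MSG_BYTES];     /* fixed-size small message       */
} slot_t;

/* Sender: busy-wait until the slot is free, then publish the message. */
static void slot_send(slot_t *s, const void *msg, size_t len)
{
    while (atomic_load_explicit(&s->full, memory_order_acquire))
        ;                                   /* spin: slot still occupied */
    memcpy(s->payload, msg, len);
    atomic_store_explicit(&s->full, 1, memory_order_release);
}

/* Receiver: busy-wait until a message arrives, then consume it. */
static void slot_recv(slot_t *s, void *msg, size_t len)
{
    while (!atomic_load_explicit(&s->full, memory_order_acquire))
        ;                                   /* spin: nothing to read yet */
    memcpy(msg, s->payload, len);
    atomic_store_explicit(&s->full, 0, memory_order_release);
}

Polling a single flag per sender/receiver pair avoids the buffering, matching, and progress machinery of a general-purpose MPI path, which is where the latency advantage for tiny messages comes from.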
On many-core processors, both operating system kernels and bare-metal applications need efficient cross-core coordination and communication. Although explicit shared-memory programming and message passing might provide the best performance, they also limit the system’s control over scheduling. In contrast, interrupt-driven cross-core invocations provide universal coordination mechanisms that also enable preemptive operations across cores. This paper surveys cross-core invocation mechanisms and their usability with respect to prevalent coordination scenarios. We integrated some of these mechanisms into a bare-metal environment for the Intel SCC processor and will discuss implementation aspects of the interrupt-driven invocations. In conclusion, such invocation mechanisms provide an expressive platform for future operating system kernels and bare-metal applications.
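As an illustration of the interrupt-driven invocation idea, the sketch below posts a function pointer and argument into a per-core slot and then raises an inter-processor interrupt. This is a sketch under assumptions, not the paper's bare-metal SCC implementation: NUM_CORES, send_ipi(), and the slot layout are hypothetical, and a real system would need a queue rather than a single slot per core.

#include <stdatomic.h>

#define NUM_CORES 48                      /* e.g. the 48 cores of the Intel SCC */

typedef void (*xcall_fn)(void *);

/* Hypothetical per-core invocation slot: a remote core posts work and
 * raises an inter-processor interrupt (IPI) on the target core. */
typedef struct {
    _Atomic(xcall_fn) func;               /* work to run on the target core */
    void *arg;                            /* argument passed to func        */
} invocation_t;

static invocation_t slots[NUM_CORES];

extern void send_ipi(int core);           /* platform-specific IPI trigger, assumed */

/* Caller side: post the invocation, then interrupt the target core. */
void cross_core_invoke(int core, xcall_fn func, void *arg)
{
    slots[core].arg = arg;
    atomic_store_explicit(&slots[core].func, func, memory_order_release);
    send_ipi(core);
}

/* Target side: executed from the IPI handler on the interrupted core. */
void handle_ipi(int my_core)
{
    xcall_fn f = atomic_exchange_explicit(&slots[my_core].func, (xcall_fn)0,
                                          memory_order_acquire);
    if (f)
        f(slots[my_core].arg);            /* runs preemptively on the target core */
}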
Today's multi-cores and future many-cores are NUMA architectures with complex cache hierarchies and multiple memory channels. Depending on the topologies of these memory networks, we find everything from true data sharing with shared caches to distributed memory architectures that merely pretend to be physically shared memory systems. In fact, most many-cores are hybrid systems that exhibit the characteristics of both distributed systems and SMPs. In this paper we argue in favor of middleware platforms for many-cores. We discuss the functionality needed, in contrast to common distributed-system middleware, and present micro-benchmarks on several architectures to substantiate our claims.
With the evolution toward fast networks of many-core processors, the design assumptions underlying software-level distributed shared memory (DSM) systems change considerably. Yet efficient DSMs are still needed because they can significantly simplify the implementation of complex distributed algorithms. This paper discusses implications of the many-core evolution and derives a set of reusable elementary operations for future software DSMs. These elementary operations will help in exploring and evaluating new memory models and consistency protocols.
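The paper's concrete set of elementary operations is not reproduced here; purely as an illustration of what such a reusable DSM interface could look like, the sketch below uses a flat byte array as a stand-in for the shared address space. Every name is hypothetical and the actual operation set derived in the paper may differ.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DSM_SIZE (1u << 20)
static uint8_t dsm_space[DSM_SIZE];       /* placeholder for the shared space */

typedef size_t dsm_addr_t;                /* offset into the shared space     */

/* Elementary data movement between local buffers and the shared space. */
static void dsm_read(dsm_addr_t src, void *dst, size_t len)
{
    memcpy(dst, dsm_space + src, len);
}

static void dsm_write(dsm_addr_t dst, const void *src, size_t len)
{
    memcpy(dsm_space + dst, src, len);
}

/* Synchronization hooks on which memory models and consistency
 * protocols could be built; no-ops in this local stand-in. */
static void dsm_acquire(dsm_addr_t lock) { (void)lock; /* enter critical region  */ }
static void dsm_release(dsm_addr_t lock) { (void)lock; /* publish writes, leave  */ }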
Hardware and software consistency protocols rely on global observability of consistency events. Acknowledged broadcast is an obvious choice to propagate these events. This paper presents a generalized ring topology, the diamond ring, for parallel event propagation with acknowledged delivery. Implementations for various many-core architectures show increased performance over conventional approaches. Diamond rings are therefore a prime candidate for implementations of distributed memory models.
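To make the underlying pattern concrete, here is a minimal sketch of acknowledged event propagation on a plain unidirectional ring; the paper's diamond-ring topology parallelizes and generalizes this baseline. The messaging primitives and message layout below are assumptions for illustration.

/* Acknowledged broadcast on a plain unidirectional ring: the originator
 * injects an event, every core forwards it to its ring successor, and the
 * originator treats the event's return as the acknowledgment that all
 * cores have observed it.  send_to()/recv_from() stand in for the
 * platform's core-to-core messaging primitives. */
#define NUM_CORES 48

typedef struct { int origin; int payload; } event_t;

extern void send_to(int core, const event_t *ev);    /* assumed primitive */
extern void recv_from(int core, event_t *ev);        /* assumed primitive */

static int next(int core) { return (core + 1) % NUM_CORES; }
static int prev(int core) { return (core + NUM_CORES - 1) % NUM_CORES; }

/* Originator: inject the event and wait for it to come back around.
 * (Forwarding of other cores' events is omitted in this sketch.) */
void ring_broadcast(int my_core, int payload)
{
    event_t ev = { .origin = my_core, .payload = payload };
    send_to(next(my_core), &ev);
    do {
        recv_from(prev(my_core), &ev);
    } while (ev.origin != my_core);        /* our event returned: acknowledged */
}

/* Every other core: apply the event locally, then forward it. */
void ring_forward(int my_core)
{
    event_t ev;
    recv_from(prev(my_core), &ev);
    /* ... apply the consistency event locally ... */
    send_to(next(my_core), &ev);
}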