Subdivision surfaces, especially with displacement, are one of the key modeling primitives used in high-quality rendering environments such as movie production. While their use maps easily to rasterization-based frameworks, they pose a significant challenge for ray tracing environments, because incoherent access patterns require storing or caching fully tessellated and displaced meshes for efficient intersection computations. In this paper we use a two-tier hierarchy built on a scene's patches. It relies on compressed and quantized bounding volumes on the second tier to reduce the size of the BVH itself. Based on this acceleration structure, we propose a quantized, compact approximation for leaf nodes that remains faithful to the underlying patch geometry. We build on recent advances and present a system with competitive run-time performance, close to full-resolution pre-tessellation methods as well as to previous compression approaches. Ultimately, we provide strong compression of up to a factor of 5:1 compared to state-of-the-art methods while maintaining high geometric fidelity, surpassing similarly compact approximations and getting close to uncompressed geometry.
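As a rough illustration of what compressed, quantized bounding volumes can look like (the layout and names below are illustrative assumptions, not the paper's actual encoding), a child box can be stored with a few bits per coordinate relative to its parent box:

```cuda
// Illustrative sketch: quantize a child AABB to 8 bits per coordinate
// relative to its parent. Rounding lo down and hi up keeps the decoded
// box conservative, so intersection tests never miss geometry.
#include <cstdint>
#include <cmath>
#include <algorithm>

struct AABB { float lo[3], hi[3]; };

struct QuantizedChild { uint8_t lo[3], hi[3]; };   // 6 bytes instead of 24

inline QuantizedChild quantize(const AABB& parent, const AABB& child) {
    QuantizedChild q;
    for (int a = 0; a < 3; ++a) {
        float ext = parent.hi[a] - parent.lo[a];
        float s   = ext > 0.f ? 255.f / ext : 0.f;
        q.lo[a] = (uint8_t)std::max(0.f,   std::floor((child.lo[a] - parent.lo[a]) * s));
        q.hi[a] = (uint8_t)std::min(255.f, std::ceil ((child.hi[a] - parent.lo[a]) * s));
    }
    return q;
}

inline AABB dequantize(const AABB& parent, const QuantizedChild& q) {
    AABB b;
    for (int a = 0; a < 3; ++a) {
        float step = (parent.hi[a] - parent.lo[a]) / 255.f;
        b.lo[a] = parent.lo[a] + q.lo[a] * step;
        b.hi[a] = parent.lo[a] + q.hi[a] * step;
    }
    return b;
}
```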
In this paper we describe and evaluate an implementation of CPU-style SIMD ray traversal on the GPU. We show how spreading moderately wide BVHs (up to a branching factor of eight) across multiple threads in a warp can improve performance without requiring expensive pre-processing. The presented ray-traversal method exhibits improved traversal performance, especially for increasingly incoherent rays.
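A minimal CUDA sketch of the underlying idea, assuming eight consecutive lanes of a warp share one ray and each lane slab-tests one child of an 8-wide node (names and data layout are assumptions, not the paper's code):

```cuda
// Each lane tests one child; a ballot over the eight cooperating lanes
// yields the node's hit mask. Assumes all eight lanes reach this call.
__device__ bool slab_test(const float* box /* lo.xyz, hi.xyz */,
                          const float o[3], const float inv_d[3], float t_max)
{
    float t0 = 0.f, t1 = t_max;
    for (int a = 0; a < 3; ++a) {
        float ta = (box[a]     - o[a]) * inv_d[a];
        float tb = (box[a + 3] - o[a]) * inv_d[a];
        t0 = fmaxf(t0, fminf(ta, tb));
        t1 = fminf(t1, fmaxf(ta, tb));
    }
    return t0 <= t1;
}

// Returns an 8-bit mask of intersected children, identical on all 8 lanes.
__device__ unsigned intersect_node8(const float* child_boxes,   // 8 x 6 floats
                                    const float o[3], const float inv_d[3],
                                    float t_max)
{
    unsigned lane  = threadIdx.x & 31u;        // lane id within the warp
    unsigned slot  = lane & 7u;                // which child this lane tests
    unsigned base  = lane & ~7u;               // first lane of the 8-lane group
    unsigned group = 0xFFu << base;            // participation mask for the ballot
    bool hit = slab_test(child_boxes + 6 * slot, o, inv_d, t_max);
    return (__ballot_sync(group, hit) >> base) & 0xFFu;
}
```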
In this paper we present a method to efficiently cull large parts of a scene prior to shadow map computation in many-lights settings. Our method is agnostic to how the light sources are generated and thus works with any method of light distribution. Our approach is based on previous work on culling for ray traversal to speed up area light sampling. Applied to shadow mapping, our method works for high- and low-resolution shadow maps and, in contrast to previous work on many-lights rendering, neither entails scene approximations nor imposes limits on light range, while still providing significant gains in performance. In contrast to standard culling methods, shadow map rendering itself is sped up by a factor of 1.5 to 8.6, while the speedup of shadow map rendering, lookup and shading together ranges from 1.1 to 4.2.
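The following sketch shows only the general flavor of per-light culling, not the paper's actual test: an occluder is kept for a light's shadow-map pass only if it overlaps a conservative bound of the region between the light and the visible receivers (here, simply the joint AABB of both, which is safe but intentionally loose):

```cuda
// Conservative CPU-side rejection test; anything culled here cannot cast a
// shadow onto the visible receivers, so the shadow-map pass may skip it.
#include <cmath>

struct Box { float lo[3], hi[3]; };

inline bool overlaps(const Box& a, const Box& b) {
    for (int i = 0; i < 3; ++i)
        if (a.hi[i] < b.lo[i] || b.hi[i] < a.lo[i]) return false;
    return true;
}

// receivers: bounds of the geometry visible from the camera.
inline bool may_cast_visible_shadow(const Box& occluder, const Box& receivers,
                                    const float light_pos[3]) {
    Box hull = receivers;                       // AABB of light + receivers
    for (int i = 0; i < 3; ++i) {
        hull.lo[i] = fminf(hull.lo[i], light_pos[i]);
        hull.hi[i] = fmaxf(hull.hi[i], light_pos[i]);
    }
    return overlaps(occluder, hull);            // false => cull for this light
}
```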
In this paper we show how a feature-oriented development methodology can be exploited to investigate a large set of possible implementations for a real-time rendering algorithm. We rely on previously published work to explore potential dimensions of the implementation space of an algorithm to be run on a graphics processing unit (GPU) using CUDA. The main contribution of our paper is to provide a clear example of the benefit to be gained from existing methods in a domain that only slowly moves toward higher-level abstractions. Our method employs a generative approach and makes heavy use of Common Lisp macros before the code is ultimately transformed to CUDA.
Rendering performance is an everlasting goal of computer graphics and a significant driver for advances in both hardware architecture and algorithms. As a result, it has become possible to apply advanced computer graphics technology even in low-cost embedded appliances, such as car instruments. Yet, to come up with an efficient implementation, developers have to put enormous effort into hardware- and problem-specific tailoring, fine-tuning, and domain exploration, which requires profound expert knowledge. If a good solution has been found, there is a high probability that it does not work as well with other architectures or even the next hardware generation. Generative DSL-based approaches could mitigate these efforts and provide for an efficient exploration of algorithmic variants and hardware-specific tuning ideas. However, in vertically organized industries, such as automotive, suppliers are reluctant to introduce these techniques, as they fear loss of control, high introduction costs, and additional constraints imposed by the OEM with respect to software and tool-chain certification. Moreover, suppliers do not want to share their generic solutions with the OEM, but only concrete instances. To this end, we propose a lightweight and incremental approach for meta programming of graphics applications. Our approach relies on an existing formulation of C-like languages that is amenable to meta programming, which we extend to become a lightweight language for combining algorithmic features. Our method provides a concise notation for meta programs and generates easily sharable output in the appropriate C-style target language.
Over the last decade a number of high-performance domain-specific languages (DSLs) have emerged to help tackle the problem of ever-diversifying hardware and software employed in fields such as HPC (high-performance computing), medical imaging, and computer vision. Most of these approaches rely on frameworks such as LLVM for efficient code generation and, to reach a broader audience, take input in C-like form. In this paper we present a DSL for image processing that is on par with competing methods, yet its design principles are in strong contrast to previous approaches. Our tool chain is much simpler, easing the burden on implementors and maintainers, while our output, C-family code, is both adaptable and shows high performance. We believe that our methodology provides a faster evaluation of language features and abstractions in the domains above.
In this paper we present a method for fast screen-space ray tracing. Single-layer screen-space ray marching is an established tool in high-performance applications, such as games, where plausible and appealing results are more important than strictly correct ones. However, even in such tightly controlled environments, missing scene information can cause visible artifacts. This can be tackled by keeping multiple layers of screen-space information, but might not be affordable within severely limited time budgets. Traversal speed of single-layer ray marching is commonly improved by multi-resolution schemes, from sub-sampling to stepping through mip-maps, to achieve faster frame rates. We show that by combining these approaches, keeping multiple layers and tracing on multiple resolutions, images of higher quality can be computed rapidly. Figure 1 shows this for two scenes with multi-bounce reflections that would exhibit strong artifacts when using only a single layer.
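A highly simplified sketch of how the two ingredients can be combined (the texture set-up, step logic, and constants are assumptions, not the paper's implementation): march with a step size tied to the current mip level and test each sample against a front and a back depth layer, so geometry hidden behind the first layer can still produce a hit.

```cuda
// Coarse-to-fine, two-layer screen-space march; refine the mip level when the
// ray passes the front layer, and accept a hit once it lies between layers.
struct MarchResult { bool hit; float u, v; };

__device__ MarchResult march_two_layer(float u, float v, float du, float dv,
                                       float z, float dz,
                                       cudaTextureObject_t depth_front,  // mip-mapped
                                       cudaTextureObject_t depth_back,   // mip-mapped
                                       int max_steps)
{
    int level = 2;                                   // start at a coarse mip
    for (int i = 0; i < max_steps; ++i) {
        float s = (float)(1 << level);               // larger steps at coarser mips
        u += du * s;  v += dv * s;  z += dz * s;
        float zf = tex2DLod<float>(depth_front, u, v, (float)level);
        float zb = tex2DLod<float>(depth_back,  u, v, (float)level);
        if (z >= zf) {                               // ray passed the front layer
            if (level > 0) {                         // step back and refine
                u -= du * s;  v -= dv * s;  z -= dz * s;  --level;  continue;
            }
            if (z <= zb) return { true, u, v };      // between layers: surface hit
        }
    }
    return { false, u, v };
}
```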
We present a novel approach that simulates human vision, including visual defects such as glaucoma, by temporal composition of human vision in real time on the GPU. To this end, we determine the eye's focus point at every time step and adapt the lens accommodation of our virtual eye model accordingly. The focal distance is then used to determine the blurriness of observed scene regions; i.e., we compute defocus for all visible pixels. In order to simulate the visual memory, we introduce a sharpness field in which we integrate defocus values temporally. This allows sharply perceived scene points to be memorized. For visualization, we ray trace the virtual scene environment while incorporating depth of field based on the sharpness field data. Thus, our algorithm facilitates the simulation of human vision mimicking the visual memory. We consider this to be particularly useful for illustration purposes for patients with visual defects such as glaucoma. In order to run our algorithm in real time, we employ massively parallel graphics hardware.
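A minimal sketch of the temporal integration step, assuming a thin-lens circle-of-confusion model and an exponential decay; the constants and names are illustrative, not taken from the paper:

```cuda
// Per-pixel defocus from the current focal distance, turned into a sharpness
// value and integrated over time so briefly focused regions stay memorized.
__device__ float circle_of_confusion(float depth, float focal_dist,
                                     float focal_len, float aperture)
{
    return fabsf(aperture * focal_len * (depth - focal_dist)
                 / (depth * (focal_dist - focal_len)));
}

__global__ void update_sharpness_field(float* sharpness, const float* depth, int n,
                                       float focal_dist, float focal_len,
                                       float aperture, float decay /* e.g. 0.05 */)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float coc   = circle_of_confusion(depth[i], focal_dist, focal_len, aperture);
    float sharp = 1.f / (1.f + coc);                 // 1 = perfectly in focus
    // keep the sharpest value seen recently, slowly forgetting older ones
    sharpness[i] = fmaxf(sharp, sharpness[i] * (1.f - decay));
}
```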
We present a novel technique for rendering depth of field that addresses difficult overlap cases, such as close, but out-of-focus, geometry in the near field. Such scene configurations are not handled well by state-of-the-art post-processing approaches since essential information is missing due to occlusion. Our proposed algorithm renders the scene from a single camera position and computes a layered image in a single pass by constructing per-pixel lists. These lists can be filtered progressively to generate differently blurred representations of the scene. We show how this structure can be exploited to generate depth of field in real time, even in complicated scene configurations.
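The per-pixel lists can be pictured as an A-buffer-style linked list; the fragment layout and buffer names below are assumptions, not the paper's data structure:

```cuda
// Insert one fragment into the list of its pixel: grab a slot from a global
// pool and atomically prepend it to the pixel's head pointer.
struct Fragment { float depth; unsigned color; int next; };

__device__ void insert_fragment(Fragment* pool, int pool_capacity, int* pool_counter,
                                int* heads, int pixel, float depth, unsigned color)
{
    int slot = atomicAdd(pool_counter, 1);
    if (slot >= pool_capacity) return;                // pool overflow: drop fragment
    pool[slot].depth = depth;
    pool[slot].color = color;
    pool[slot].next  = atomicExch(&heads[pixel], slot);
}
```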
With ever-increasing ray traversal and hierarchy construction performance, the application of ray tracing to problems often tackled by rasterization-based algorithms is becoming a viable alternative. This is especially desirable as the ground truth for these algorithms is often determined using ray tracing, and thus applying it directly is the simplest way to generate images matching the reference. In this paper we propose a very efficient pre-process to speed up the construction and traversal of sub-optimal, but fast-to-build hierarchies used for interactive ray tracing, and show how it can be applied to shadow rays in a hybrid environment, where ray tracing is used to sample area lights for scene positions found and shaded via rasterization.
In earlier work we described C-Mera, an S-expression to C-style code transformer, and how it can be used to provide high-level abstractions to the C family of programming languages. In this paper we provide an in-depth description of its internals that would have been out of scope for the earlier presentations. These implementation details are presented as a toolkit of general techniques for implementing similar meta languages on top of Common Lisp and are illustrated using the example of C-Mera, with the goal of making our experience in implementing them more broadly available.
We describe the design and implementation of CGen, a C code generator with support for Common Lisp-style macro expansion. Our code generator supports the simple and efficient management of variants, ad hoc code generation to capture recurring patterns, and composable abstractions, as well as the implementation of embedded domain-specific languages by using the Common Lisp macro system. We demonstrate the applicability of our approach with numerous examples, ranging from small-scale convenience macros through embedded languages to real-world applications in high-performance computing.
Parametric surfaces are an essential modeling tool in computer-aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this paper, we present a novel solution to this problem. We propose a compression scheme for a priori Bounding Volume Hierarchies (BVHs) on parametric patches that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression incurs additional cost during traversal, we can manage very complex scenes even on the memory-restricted GPU at competitive render times.
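For intuition only, and not the paper's actual approximation: the simplest form of evaluating a leaf without stored geometry is to reconstruct a surface point from the patch's four corner points by bilinear interpolation in the patch parameters:

```cuda
// Bilinear reconstruction of a point on a patch from its corner points.
__host__ __device__ inline float3 lerp3(float3 a, float3 b, float t) {
    return make_float3(a.x + (b.x - a.x) * t,
                       a.y + (b.y - a.y) * t,
                       a.z + (b.z - a.z) * t);
}

// Corners ordered p00 (u=0,v=0), p10 (u=1,v=0), p01 (u=0,v=1), p11 (u=1,v=1).
__host__ __device__ inline float3 bilinear_patch_point(const float3 corners[4],
                                                       float u, float v) {
    float3 bottom = lerp3(corners[0], corners[1], u);
    float3 top    = lerp3(corners[2], corners[3], u);
    return lerp3(bottom, top, v);
}
```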
In this paper we present a scattering-based method to compute high-quality depth of field in real time. Relying on multiple layers of scene data, our method naturally supports settings with partial occlusion, an important effect that is often disregarded by real-time approaches. Using well-founded layer-reduction techniques and efficient mapping to the GPU, our approach outperforms established approaches with a similar high-quality feature set. Our proposed algorithm works by collecting a multi-layer image, which is then directly reduced to keep only hidden fragments close to discontinuities. Fragments are further reduced by merging and then splatted to screen-space tiles. The per-tile information is then sorted and accumulated in order, yielding an overall approach that supports partial occlusion as well as properly ordered blending of the out-of-focus fragments.
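A sketch of the splat-to-tile binning step; tile size, buffer layout, and names are assumptions rather than the paper's implementation:

```cuda
// A blurred fragment is referenced by every screen-space tile its circle of
// confusion touches; each tile later sorts and composites its splat list.
#define TILE 16

__device__ void bin_splat_to_tiles(float px, float py, float coc_radius,
                                   int tiles_x, int tiles_y,
                                   int* tile_counts, int* tile_refs,
                                   int refs_per_tile, int splat_id)
{
    int tx0 = max(0,           (int)floorf((px - coc_radius) / TILE));
    int ty0 = max(0,           (int)floorf((py - coc_radius) / TILE));
    int tx1 = min(tiles_x - 1, (int)floorf((px + coc_radius) / TILE));
    int ty1 = min(tiles_y - 1, (int)floorf((py + coc_radius) / TILE));
    for (int ty = ty0; ty <= ty1; ++ty)
        for (int tx = tx0; tx <= tx1; ++tx) {
            int t    = ty * tiles_x + tx;
            int slot = atomicAdd(&tile_counts[t], 1);
            if (slot < refs_per_tile)                 // drop silently on overflow
                tile_refs[t * refs_per_tile + slot] = splat_id;
        }
}
```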
We present a method to compute post-processing depth of field (DOF) that produces more accurate results than previous approaches. Our method builds on existing approaches, namely DOF rendering by splatting and fast, tile-based particle accumulation. Using tile-based accumulation allows us to correctly sort out-of-focus pixels and apply proper alpha blending to avoid artifacts commonly encountered with filter-based depth of field methods.
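The properly ordered blending can be pictured as standard back-to-front "over" compositing per pixel once the splats affecting that pixel are depth-sorted; the sketch below is illustrative, not the paper's exact accumulation:

```cuda
// Composite depth-sorted splats (farthest first) with alpha blending.
struct Splat { float depth, alpha; float3 color; };

__device__ float3 composite_sorted(const Splat* s, int n)   // s sorted far -> near
{
    float3 out = make_float3(0.f, 0.f, 0.f);
    for (int i = 0; i < n; ++i) {
        out.x = s[i].color.x * s[i].alpha + out.x * (1.f - s[i].alpha);
        out.y = s[i].color.y * s[i].alpha + out.y * (1.f - s[i].alpha);
        out.z = s[i].color.z * s[i].alpha + out.z * (1.f - s[i].alpha);
    }
    return out;
}
```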
Rendering in real time for virtual reality headsets with high user immersion is challenging due to strict framerate constraints as well as a low tolerance for artifacts. Eye-tracking-based foveated rendering presents an opportunity to strongly increase performance without loss of perceived visual quality. To this end, we propose a novel foveated rendering method for virtual reality headsets with integrated eye tracking hardware. Our method recycles pixels in the periphery by spatio-temporally reprojecting them from previous frames. Artifacts and disocclusions caused by this reprojection are detected and re-evaluated according to a confidence value that is determined by a newly introduced, formalized perception-based metric, referred to as the confidence function. The foveal region, as well as areas with low confidence values, are redrawn efficiently, as the confidence value allows for fine-grained control of hierarchical geometry and pixel culling. Hence, the average primitive processing and shading costs are lowered dramatically. Evaluated against regular rendering as well as established foveated rendering methods, our approach shows increased performance in both cases. Furthermore, our method is not restricted to static scenes and provides an acceleration structure for post-processing passes.
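A hedged sketch of what such a confidence term could look like; the functional form and constants are assumptions, not the paper's formalized perception-based metric:

```cuda
// Confidence drops with angular distance from the gaze point and with the
// screen-space error of the reprojected pixel; low values trigger a redraw.
__device__ float confidence(float eccentricity_deg,     // angle from gaze direction
                            float reprojection_err_px,  // reprojection error in pixels
                            bool  disoccluded)
{
    if (disoccluded) return 0.f;                         // must be re-rendered
    float acuity    = expf(-eccentricity_deg / 15.f);    // falls off toward the periphery
    float stability = 1.f / (1.f + reprojection_err_px); // penalize large errors
    return acuity * stability;                           // in [0, 1]
}
```

A pixel (or a coarser tile) would then be redrawn whenever its confidence falls below a threshold and reused from the reprojected previous frame otherwise.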