Conference Program

All times are specified in UTC-6 (MDT, Denver local time).

The conference was live streamed on YouTube:

Day 1 | Day 2 | Day 3

Friday, July 26

8:00-9:00 Registration & Breakfast Buffet
9:00-9:15 Opening Remarks
9:15-10:30 Papers Session 1: Acceleration Structures
Session Chair: Eric Brunvand
H-PLOC Hierarchical Parallel Locally-Ordered Clustering for Bounding Volume Hierarchy Construction
Carsten Benthin, Daniel Meister, Joshua Barczak, Rohan Mehalwal, John Tsakok, Andrew Kensler
Concurrent Binary Trees for Large-Scale Game Components
Anis Benyoub, Jonathan Dupuy
SAH-Optimized k-DOP Hierarchies for Ray Tracing
Martin Káčerik, Jiří Bittner
10:30-11:00 Coffee Break / Poster Session
Work Graph based Denoising for Real-Time Ray Tracing
Felix Kawala, Michael Wittmann, Fabian Wildgrube, Paul Trojahn, Dominik Baumeister, Matthäus Chajdas, Richard Membarth
Hybrid Voxel Formats for Efficient Ray Tracing
Russel Arbore, Jeffrey Liu, Aidan Wefel, Steven Gao, Eric Shaffer
Real-Time Multi-Gigapixel Light Field Ray Traced Rendering with JPEG Compression
Nicholas Wells, Matthew Hamilton
Fast Local Neural Regression for Low-Cost Global Illumination Denoising
Arturo Salmi, Szabolcs Cséfalvay, James Imber
Spectral Monte Carlo Denoiser
Mathieu Noizet, Robin Rouphael, Hervé Deleau, Stéphanie Prévost, Luiz-Angelo Steffenel, Laurent Lucas
11:00-12:00 Hot3D Session 1: Products
Qualcomm® Adreno™ Ray Tracing Use Case: Generating Primary Visibility for ML-Created Meshes
Aleksandra Krstic, Qualcomm
Revolutionizing Ray Tracing with DLSS 3.5: AI-Powered Ray Reconstruction
Edward Liu, NVIDIA
12:00-13:30 Lunch buffet
13:30-13:35 Sponsor Session
13:35-13:50 Student Competition
13:50-15:05 Papers Session 2: Light Transport & Primitives
Session Chair: Daniel Meister
Optimizing Path Termination for Radiance Caching Through Explicit Variance Trading
Lukas Kandlbinder, Addis Dittebrandt, Alexander Schipek, Carsten Dachsbacher
Photon-Driven Manifold Sampling
Fei Lee, Jia-Wun Jhang, Chun-Fa Chang
GPU-friendly Stroke Expansion
Raph Levien, Arman Uguray
15:05-15:35 Coffee Break
15:35-16:50 Papers Session 3: Real-Time GI Methods
Session Chair: Josef Spjut
ReSTIR Subsurface Scattering for Real-Time Path Tracing
Mirco Werner, Vincent Schüßler, Carsten Dachsbacher
Light Path Guided Culling for Hybrid Real-Time Path Tracing
Jan Kelling, Daniel Ströter, Arjan Kuijper
Radiance Caching with On-Surface Caches for Real-Time Global Illumination
Wolfgang Tatzgern, Alexander Weinrauch, Pascal Stadlbauer, Joerg H. Mueller, Martin Winter, Markus Steinberger
16:50-17:00 Short Break
17:00-18:00 Keynote 1
The Vulkan Developer Tools Ecosystem
— From the Vulkan API Launch to Today
Karen Ghavam, LunarG
Saturday, July 27

8:00-9:00 Registration & Breakfast Buffet
9:00-10:15 Papers Session 4: AI & Denoising
Session Chair: Anton Kaplanyan
Frustum Volume Caching for Accelerated NeRF Rendering
Michael Steiner, Thomas Köhler, Lukas Radl, Markus Steinberger
Converging Algorithm-Agnostic Denoising for Monte Carlo Rendering
Elena Denisova, Leonardo Bocchi
Aliasing Detection in Rendered Images via a Multi-Task Learning
Shu-Ho Fan, Kai-Wen Hsiao, Kai Yi Tan, Chih-Yuan Yao, Hung-Kuo Chu
10:15-10:45 Coffee Break
10:45-12:00 Papers Session 5: Volume Rendering & Systems
Session Chair: Kenny Gruchalla
GigaVoxels DP: Starvation-less asynchronous render and production for large and detailed volumetric scenes walkthrough
Antoine Richermoz, Fabrice Neyret
Interval Shading: using Mesh Shaders to generate shading intervals for volume rendering
Thibault Tricard
HIPRT: A Ray Tracing Framework in HIP
Daniel Meister, Paritosh Kulkarni, Aaryaman Vasishta, Takahiro Harada
12:00-13:30 Lunch buffet / HPG Town Hall
13:30-14:45 Papers Session 6: Compression, Micromaps & Procedural Geometry
Session Chair: David McAllister
Succinct Opacity Micromaps
Gustaf Waldemarson, Michael Doggett
DGF: A Dense, Hardware Friendly Geometry Format for Lossy Compressing Meshlets with Arbitrary Topologies
Joshua Barczak, Carsten Benthin, David McAllister
Real-Time Procedural Generation with GPU Work Graphs
Bastian Kuth, Max Oberberger, Carsten Faber, Dominik Baumeister, Matthäus Chajdas, Quirin Meyer
14:45-15:15 Coffee Break
15:15-16:15 Hot3D Session 2: Standards
Work Graphs: Hands-On with the Future of Graphics Programming
Max Oberberger, AMD
Portable and Scalable 3D Rendering Using ANARI
Jefferson Amstutz, NVIDIA
16:15-16:30 Short Break
16:30-17:30 Keynote 2
Realistic Rendering at 60 Frames Per Second
— Past, Present and Future
Peter Shirley, Activision
17:30-18:30 Break
18:30 Banquet Dinner in the Chambers Grant Salon
(downstairs from the Studio Loft where talks are held)
Drinks and hors d'oeuvres: 6:30pm
Dinner buffet: 7:30pm
Closing: 11pm
Sunday, July 28

8:00-9:00 Breakfast Buffet
9:00-10:15 Papers Session 7: Graphics Applications
Session Chair: Jefferson Amstutz
Real-Time Decompression and Rasterization of Massive Point Clouds
Rahul Goel, Markus Schütz, Bernhard Kerbl, P. J. Narayanan
Fast orientable aperiodic ocean synthesis using tiling and blending
Nicolas Lutz, Arnaud Schoentgen, Guillaume Gilet
Parallel spatiotemporally adaptive DEM-based snow simulation
Simon Andreasson, Linus Östergaard, Prashant Goswami
10:15-10:45 Coffee Break
10:45-11:45 Keynote 3
The Acceleration of AI Graphics in NVIDIA GPUs
John Burgess, NVIDIA
11:45-12:00 Awards & Wrap-Up
12:00 End of Conference

Keynote Speakers

Day 1 17:00-18:00

The Vulkan Developer Tools Ecosystem
— From the Vulkan API Launch to Today

Karen Ghavam, LunarG

Abstract: Khronos released the Vulkan 3D graphics standard in February 2016. Although Khronos defined a comprehensive 3D graphics API specification, the specification alone was not sufficient to enable Vulkan application developers. Developers also needed ecosystem tools to aid their application development, such as the Vulkan Loader, the Vulkan Validation Layer, and the Vulkan SDK. This talk will tell the story of how the Vulkan ecosystem tools were created and how they are being developed today. Key technical challenges will be highlighted for many of the tools, such as the Vulkan Validation Layer.

Karen Ghavam

After a successful 35-year career at Hewlett Packard, Karen Ghavam decided to “retire” and take the helm at LunarG as CEO and Engineering Director. LunarG is a graphics software consultancy that delivers many of the Vulkan ecosystem developer tools for the Vulkan API. Karen leads the company’s engineering staff and fosters a collaborative, energized culture that has produced a very talented 3D graphics software team. Driven by a strong desire to have a lasting impact on the success of the Vulkan API, Karen has been championing the Vulkan ecosystem developer tools since before the API’s launch in February 2016. Karen has been in the computer industry professionally since December 1981.

Day 2 16:30-17:30

Realistic Rendering at 60 Frames Per Second
— Past, Present, and Future

Peter Shirley, Activision

Abstract: Rendering realistic images at 60fps has always been a technical challenge. The last few decades have seen a co-evolution of graphics hardware and graphics algorithms, but the future direction of 60fps rendering is as unclear now as it has ever been; a graphics programmer is faced with mobile device rendering, cloud rendering, AI rendering, VR rendering, and ray traced rendering. This talk will survey where we came from, where we are, and where we might go.

Peter Shirley

Peter Shirley is a Vice President of Computer Graphics at Activision. He is a former researcher at NVIDIA, a cofounder of two software companies, and a former professor at Indiana University, Cornell University, and the University of Utah. He received a BS in Physics from Reed College and a Ph.D. in Computer Science from the University of Illinois. He is the coauthor of several books on computer graphics and a variety of technical articles. He is a member of the SIGGRAPH Academy.

Day 3 10:45-11:45

The Acceleration of AI Graphics in NVIDIA GPUs

John Burgess, NVIDIA

Abstract: NVIDIA developed GPU hardware acceleration for AI in parallel with hardware acceleration for ray tracing, and these investments aligned to make a greater impact on real-time graphics than they might have otherwise. Alongside performance and image quality enhancements, AI can now be used to improve geometry and texture compression, to replace complex shader code, and to provide wholly new representations of interactive content. Looking forward, generative AI will upend how we think of rendering altogether, so how do we consider leveraging (and evolving) all the architectural tools in the GPU to enable that innovation? This talk will discuss the development of AI acceleration in NVIDIA GPUs and emerging trends in AI, especially for real-time rendering.

John Burgess

John Burgess is a Vice President of GPU Architecture at NVIDIA, where he has been contributing to GPU design for 20 years. His experience ranges from memory system design to streaming processor cores, including tensor cores for AI and Deep Learning and ray tracing cores for professional visualization and gaming. He received his Ph.D. in Physics from The University of Texas at Austin, where he studied non-linear phenomena such as fluid dynamics, pattern formation, and chaos.

Hot3D Talks

Day 1 11:00-12:00 (first speaker)

Qualcomm® Adreno™ Ray Tracing Use Case:
Generating Primary Visibility for ML-Created Meshes

Aleksandra Krstic, Qualcomm

Abstract: In the last few years, hybrid ray tracing has emerged as a powerful, real-time rendering technique even on mobile devices. In addition to enabling more realistic lighting effects, ray tracing has another potential advantage over rasterization: the ability to quickly cull hundreds or even thousands of triangles without fully evaluating them. This capability comes from the logarithmic nature of bounding volume hierarchies (BVHs), tree structures built over the scene geometry, which are used during ray traversal. Using BVHs to quickly traverse the scene becomes important when the GPU is asked to render meshes that are not optimized for triangle adjacency, post-vertex shader caching, or the typical number of pixels covered by a triangle. Interestingly, unoptimized meshes are exactly what GPUs encounter in various machine learning applications, such as MobileNeRF. MobileNeRFs use textured polygons to represent neural radiance fields. The polygon meshes are created during the training process, gradually moving from a classical NeRF-like continuous representation towards a discrete one, and they have somewhat different properties than typical game geometry. In this talk, we present our work on ray tracing primary rays for rendering ML-created meshes. We explore the challenges encountered during this process and highlight performance differences between traditional rendering of ML-created meshes and ray tracing these meshes on the latest Adreno GPUs. Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries.
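
As a generic illustration of the BVH-based culling described above, the sketch below shows a minimal stack-based traversal in C++: any subtree whose bounding box the ray misses is skipped with a single box test. This is a textbook-style example, not the Adreno hardware traversal; the node layout, the Ray/AABB structures, and the intersectTriangle callback are assumptions made purely for illustration.

    // Minimal BVH traversal sketch (generic, not Adreno-specific).
    #include <cstdint>
    #include <vector>
    #include <algorithm>

    struct Vec3 { float v[3]; };                       // x, y, z
    struct Ray  { Vec3 origin, invDir; float tMax; };  // invDir.v[i] = 1 / direction.v[i]
    struct AABB { Vec3 lo, hi; };

    // Slab test: does the ray overlap the box within [0, tMax]?
    static bool hitAABB(const AABB& b, const Ray& r) {
        float t0 = 0.0f, t1 = r.tMax;
        for (int a = 0; a < 3; ++a) {
            float tNear = (b.lo.v[a] - r.origin.v[a]) * r.invDir.v[a];
            float tFar  = (b.hi.v[a] - r.origin.v[a]) * r.invDir.v[a];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
            if (t0 > t1) return false;                 // miss: cull the entire subtree
        }
        return true;
    }

    struct BVHNode {
        AABB    bounds;
        int32_t left = -1, right = -1;                 // child indices (internal nodes)
        int32_t firstTri = 0, triCount = 0;            // triangle range (leaves); 0 = internal
    };

    // Returns the index of the closest hit triangle, or -1 on a miss.
    // intersectTriangle() is assumed to shrink r.tMax and return true on a closer hit.
    int traverse(const std::vector<BVHNode>& nodes, Ray& r,
                 bool (*intersectTriangle)(int triIndex, Ray& r)) {
        int closest = -1;
        int stack[64];
        int sp = 0;
        stack[sp++] = 0;                               // start at the root
        while (sp > 0) {
            const BVHNode& n = nodes[stack[--sp]];
            if (!hitAABB(n.bounds, r))
                continue;                              // skips many triangles with one test
            if (n.triCount > 0) {                      // leaf: test its handful of triangles
                for (int i = 0; i < n.triCount; ++i)
                    if (intersectTriangle(n.firstTri + i, r))
                        closest = n.firstTri + i;
            } else {                                   // internal node: visit both children
                stack[sp++] = n.left;
                stack[sp++] = n.right;
            }
        }
        return closest;
    }

Because each missed box prunes its whole subtree, the number of box tests per ray grows roughly logarithmically with triangle count, which is the property the abstract relies on for unoptimized ML-created meshes.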

Day 1 11:00-12:00 (second speaker)

Revolutionizing Ray Tracing with DLSS 3.5:
AI-Powered Ray Reconstruction

Edward Liu, NVIDIA

Abstract: NVIDIA’s DLSS 3.5 introduces a groundbreaking advancement in ray tracing with its new Ray Reconstruction technology. This AI-powered neural renderer replaces traditional hand-tuned denoisers, significantly enhancing the quality and performance of ray-traced images. By leveraging NVIDIA’s supercomputer-trained AI network, DLSS 3.5 generates higher-quality pixels, improving image sharpness, lighting accuracy, and overall visual fidelity. In this talk, we will delve into the technical intricacies of DLSS 3.5, showcasing how Ray Reconstruction works to deliver superior-quality ray-traced visuals across a variety of applications and games, including Cyberpunk 2077: Phantom Liberty and Portal with RTX. Attendees will learn about the benefits of integrating DLSS 3.5 into their systems, focusing on real-world performance gains and visual improvements.

Day 2 15:15-16:15 (first speaker)

Work Graphs: Hands-On with the Future of Graphics Programming

Max Oberberger, AMD

Abstract: AMD has recently positioned itself as an innovator in the graphics programming model space by partnering with Microsoft and releasing day-0 support for the novel GPU work graphs programming model. GPU work graphs represent a paradigm shift in GPU programmability by allowing developers to schedule work directly from the GPU. This talk will give a hands-on introduction to GPU work graphs and show how this new programming model can be used to drive high-performance graphics directly from the GPU. This includes GPU-driven rendering with the experimental mesh nodes feature, which was previewed at GDC 2024 earlier this year. After the talk, audience members should have a good understanding of work graphs and should be able to decide if and how they can use them in their own research.

Day 2 15:15-16:15 (second speaker)

Portable and Scalable 3D Rendering Using ANARI

Jefferson Amstutz, NVIDIA

Abstract: Countless 3D rendering engines have been built to bring state-of-the-art rendering to applications. However, interfacing an application with a particular rendering engine requires unique software maintenance, limiting the number of engines that an application can consider. ANARI, a new API specification from Khronos, solves this problem by defining a common front-end interface library that takes in-memory scene data and dispatches it to an underlying 3D renderer. The API gives vendors the semantics to expose innovation without implementation-specific APIs: asynchronous scene updates, zero-copy data arrays, minimized frame latency, and ultimately beautifully rendered images. This talk will showcase our experiences using ANARI inside mainstream visualization applications, and motivate the benefits that ANARI brings to the graphics research community.
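
As background on what driving such a common front end looks like from the application side, below is a minimal host-side sketch against the ANARI C API (written as C++). It is based on the public ANARI-SDK headers rather than on the talk itself; entry points and parameter names can differ between SDK versions, and "helide" (the SDK's reference device) is an assumed back end for which any vendor library could be substituted.

    // Minimal sketch: render one frame through ANARI and map the color channel.
    #include <anari/anari.h>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // Load a back-end library and create a device from it.
        ANARILibrary lib = anariLoadLibrary("helide", nullptr, nullptr);  // "helide" is an assumption
        ANARIDevice  dev = anariNewDevice(lib, "default");

        // Scene container; surfaces, volumes, and lights would be attached here.
        ANARIWorld world = anariNewWorld(dev);
        anariCommitParameters(dev, world);

        ANARICamera camera = anariNewCamera(dev, "perspective");
        float aspect = 16.0f / 9.0f;
        anariSetParameter(dev, camera, "aspect", ANARI_FLOAT32, &aspect);
        anariCommitParameters(dev, camera);

        ANARIRenderer renderer = anariNewRenderer(dev, "default");
        anariCommitParameters(dev, renderer);

        // The frame ties world, camera, and renderer together and owns the output image.
        ANARIFrame frame = anariNewFrame(dev);
        uint32_t size[2] = {1920, 1080};
        ANARIDataType colorFormat = ANARI_UFIXED8_RGBA_SRGB;
        anariSetParameter(dev, frame, "size", ANARI_UINT32_VEC2, size);
        anariSetParameter(dev, frame, "channel.color", ANARI_DATA_TYPE, &colorFormat);
        anariSetParameter(dev, frame, "world", ANARI_WORLD, &world);
        anariSetParameter(dev, frame, "camera", ANARI_CAMERA, &camera);
        anariSetParameter(dev, frame, "renderer", ANARI_RENDERER, &renderer);
        anariCommitParameters(dev, frame);

        // Rendering is asynchronous: kick off the frame, then block until it is ready.
        anariRenderFrame(dev, frame);
        anariFrameReady(dev, frame, ANARI_WAIT);

        // Map the color channel (zero-copy where the implementation allows it).
        uint32_t w = 0, h = 0;
        ANARIDataType pixelType = ANARI_UNKNOWN;
        const void* pixels = anariMapFrame(dev, frame, "channel.color", &w, &h, &pixelType);
        std::printf("rendered %u x %u pixels\n", w, h);
        (void)pixels;  // an application would copy or display the image here
        anariUnmapFrame(dev, frame, "channel.color");

        // Release handles; the device itself is released last.
        anariRelease(dev, frame);
        anariRelease(dev, renderer);
        anariRelease(dev, camera);
        anariRelease(dev, world);
        anariRelease(dev, dev);
        anariUnloadLibrary(lib);
        return 0;
    }

The same application code runs against any conforming ANARI implementation, which is the portability argument the abstract makes.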