All times are specified in UTC-6 (MDT, Denver local time).

This schedule outline is expected to be final, but we reserve the right to make changes if needed; please check back as we fill in more detail.

Friday, July 26

8:00-9:00 Registration & Breakfast Buffet
9:00-9:15 Opening Remarks
9:15-10:30 Paper Session 1: Acceleration Structures
10:30-11:00 Coffee Break / Poster Session
Work Graph based Denoising for Real-Time Ray Tracing
Felix Kawala, Michael Wittmann, Fabian Wildgrube, Paul Trojahn, Dominik Baumeister, Matthäus Chajdas, Richard Membarth
Hybrid Voxel Formats for Efficient Ray Tracing
Russel Arbore, Jeffrey Liu, Aidan Wefel, Steven Gao, Eric Shaffer
Real-Time Multi-Gigapixel Light Field Ray Traced Rendering with JPEG Compression
Nicholas Wells, Matthew Hamilton
Fast Local Neural Regression for Low-Cost Global Illumination Denoising
Arturo Salmi, Szabolcs Cséfalvay, James Imber
Spectral Monte Carlo Denoiser
Mathieu Noizet, Robin Rouphael, Hervé Deleau, Stéphanie Prévost, Luiz-Angelo Steffenel, Laurent Lucas
11:00-12:00 Hot3D Session 1
12:00-13:30 Lunch Buffet
13:30-13:35 Sponsor Session
13:35-13:50 Student Competition
13:50-15:05 Paper Session 2: Light Transport & Primitives
15:05-15:35 Coffee Break
15:35-16:50 Paper Session 3: Real-Time GI Methods
16:50-17:00 Short Break
17:00-18:00 Keynote 1

Saturday, July 27

8:00-9:00 Registration & Breakfast Buffet
9:00-10:15 Paper Session 4: AI & Denoising
10:15-10:45 Coffee Break
10:45-12:00 Paper Session 5: Volume Rendering & Systems
12:00-13:30 Lunch Buffet / HPG Town Hall
13:30-14:45 Paper Session 6: Compression, Micromaps & Procedural Geometry
14:45-15:15 Coffee Break
15:15-16:15 Hot3D Session 2
16:15-16:30 Short Break
16:30-17:30 Keynote 2
17:30-18:30 Break
18:30 Banquet Dinner

Sunday, July 28

8:00-9:00 Breakfast Buffet
9:00-10:15 Paper Session 7: Graphics Applications
10:15-10:45 Coffee Break
10:45-11:45 Keynote 3
11:45-12:00 Awards & Wrap-Up
12:00 End of Conference

Keynote Speakers

The Vulkan Developer Tools Ecosystem
— From the Vulkan API Launch to Today

Karen Ghavam
LunarG

Abstract: Khronos released the Vulkan 3D graphics standard in February 2016. Although the API specification Khronos defined was comprehensive, it alone wasn’t enough to enable Vulkan application developers: they also needed ecosystem tools to aid their application development, such as the Vulkan Loader, the Vulkan Validation Layers, and the Vulkan SDK. This talk will tell the story of how the Vulkan ecosystem tools were created and how they are being developed today, highlighting key and interesting technical challenges behind many of the tools, such as the Validation Layers.
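To make the tooling concrete, here is a minimal, hedged sketch of how an application opts into the Khronos validation layer when creating a Vulkan instance. This snippet is illustrative and not from the talk; it assumes a Vulkan loader and the validation layer (e.g. installed via the Vulkan SDK) are present on the system.

```c
#include <vulkan/vulkan.h>
#include <stdio.h>

/* Enable the Khronos validation layer at instance creation.
   Alternatively, the loader can inject it without touching the app:
   VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation ./app */
int main(void)
{
    const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo app = {0};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "validation-demo";
    app.apiVersion = VK_API_VERSION_1_3;

    VkInstanceCreateInfo info = {0};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;
    info.enabledLayerCount = 1;
    info.ppEnabledLayerNames = layers;

    VkInstance instance;
    VkResult r = vkCreateInstance(&info, NULL, &instance);
    if (r != VK_SUCCESS) {
        /* e.g. VK_ERROR_LAYER_NOT_PRESENT if the layer isn't installed */
        fprintf(stderr, "vkCreateInstance failed: %d\n", (int)r);
        return 1;
    }
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

With the layer enabled, API misuse is reported through the layer's messenger rather than silently producing undefined behavior.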

Karen Ghavam

After a successful 35-year career at Hewlett Packard, Karen Ghavam decided to “retire” to take the helm at LunarG as CEO and Engineering Director. LunarG is a graphics software consultancy that delivers many of the ecosystem developer tools for the Vulkan API. Karen leads the company’s engineering staff and drives a collaborative, energized culture that has produced a very talented team building 3D graphics software solutions. Driven by a strong desire to have a lasting impact on the success of the Vulkan API, Karen has been championing the Vulkan ecosystem developer tools since before the API launched in February 2016. She has worked in the computer industry since December 1981.

Realistic Rendering at 60 Frames Per Second
— Past, Present, and Future

Peter Shirley
Activision

Abstract: Rendering realistic images at 60fps has always been a technical challenge. The last few decades have seen a co-evolution of graphics hardware and graphics algorithms, but the future direction of 60fps rendering is as unclear now as it has ever been; a graphics programmer is faced with mobile device rendering, cloud rendering, AI rendering, VR rendering, and ray traced rendering. This talk will survey where we came from, where we are, and where we might go.

Peter Shirley

Peter Shirley is a Vice President of Computer Graphics at Activision. He is a former researcher at NVIDIA, a cofounder of two software companies, and a former professor at Indiana University, Cornell University, and the University of Utah. He received a BS in Physics from Reed College and a Ph.D. in Computer Science from the University of Illinois. He is the coauthor of several books on computer graphics and a variety of technical articles, and a member of the SIGGRAPH Academy.

The Acceleration of AI Graphics in NVIDIA GPUs

John Burgess
NVIDIA

Abstract: NVIDIA developed GPU hardware acceleration for AI in parallel with hardware acceleration for ray tracing, and these investments aligned to make a greater impact on real-time graphics than they might have otherwise. Alongside performance and image quality enhancements, AI can now be used to improve geometry and texture compression, to replace complex shader code, and to provide wholly new representations of interactive content. Looking forward, generative AI will upend how we think of rendering altogether, so how do we consider leveraging (and evolving) all the architectural tools in the GPU to enable that innovation? This talk will discuss the development of AI acceleration in NVIDIA GPUs and emerging trends in AI, especially for real-time rendering.

John Burgess

John Burgess is a Vice President of GPU Architecture at NVIDIA, where he has been contributing to GPU design for 20 years. His experience spans memory system design to streaming processor cores, including tensor cores for AI and Deep Learning and ray tracing cores for professional visualization and gaming. He received his Ph.D. in Physics from The University of Texas at Austin, where he studied non-linear phenomena such as fluid dynamics, pattern formation, and chaos.

Hot3D Talks

Qualcomm® Adreno™ Ray Tracing Use Case:
Generating Primary Visibility for ML-Created Meshes

Aleksandra Krstic
Qualcomm

Abstract: In the last few years, hybrid ray tracing has emerged as a powerful real-time rendering technique even on mobile devices. In addition to enabling more realistic lighting effects, ray tracing has another potential advantage over rasterization: the ability to quickly cull hundreds or even thousands of triangles without fully evaluating them. This capability comes from the logarithmic nature of bounding volume hierarchies (BVHs), tree structures built over the scene geometry that are used during ray traversal. Using BVHs to quickly traverse the scene becomes important when the GPU is asked to render meshes that are not optimized for triangle adjacency, post-vertex-shader caching, or the typical number of pixels covered by a triangle. Interestingly, unoptimized meshes are exactly what GPUs encounter in various machine learning applications, such as MobileNeRF. MobileNeRFs use textured polygons to represent neural radiance fields. The polygon meshes are created during the training process, gradually moving from a classical NeRF-like continuous representation towards a discrete one, and they have somewhat different properties than typical game geometry. In this talk, we present our work on ray tracing primary rays to render ML-created meshes. We explore the challenges encountered during this process and highlight performance differences between traditional rendering of ML-created meshes and ray tracing them on the latest Adreno GPUs. Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries.
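The culling behavior the abstract describes can be illustrated with a toy example: a ray-box slab test plus a hand-built two-leaf BVH, where traversal only touches nodes whose boxes the ray can hit. Everything below (the `Node` layout, `demo`, the node counts) is a hypothetical sketch for illustration, not Qualcomm's implementation.

```c
/* Axis-aligned bounding box. */
typedef struct { float lo[3], hi[3]; } AABB;

/* A BVH node: left < 0 marks a leaf, whose primitive id is 'prim'. */
typedef struct { AABB box; int left, right, prim; } Node;

/* Slab test: does the ray o + t*d (d passed as 1/d) hit 'b' for t in [0, tmax]? */
static int hit_aabb(const AABB *b, const float o[3], const float inv_d[3], float tmax)
{
    float t0 = 0.0f, t1 = tmax;
    for (int a = 0; a < 3; ++a) {
        float ta = (b->lo[a] - o[a]) * inv_d[a];
        float tb = (b->hi[a] - o[a]) * inv_d[a];
        if (ta > tb) { float s = ta; ta = tb; tb = s; }
        if (ta > t0) t0 = ta;
        if (tb < t1) t1 = tb;
        if (t0 > t1) return 0;   /* slabs don't overlap: miss */
    }
    return 1;
}

/* Depth-first traversal; counts visited nodes so culling is observable.
   Returns the first hit leaf's prim (a real tracer would keep the nearest hit). */
static int traverse(const Node *n, int i, const float o[3],
                    const float inv_d[3], float tmax, int *visited)
{
    ++*visited;
    if (!hit_aabb(&n[i].box, o, inv_d, tmax)) return -1;
    if (n[i].left < 0) return n[i].prim;
    int h = traverse(n, n[i].left, o, inv_d, tmax, visited);
    return h >= 0 ? h : traverse(n, n[i].right, o, inv_d, tmax, visited);
}

/* Two well-separated leaves under one root; a ray aimed at leaf 0
   resolves visibility without ever evaluating leaf 1's primitive. */
int demo(int *visited)
{
    const Node nodes[3] = {
        { {{0,0,0},{10,1,1}},  1,  2, -1 },  /* root   */
        { {{0,0,0},{ 1,1,1}}, -1, -1,  0 },  /* leaf 0 */
        { {{9,0,0},{10,1,1}}, -1, -1,  1 },  /* leaf 1 */
    };
    const float o[3]     = {0.5f, 0.5f, -1.0f};
    const float inv_d[3] = {1e30f, 1e30f, 1.0f}; /* direction ~ (0,0,1) */
    *visited = 0;
    return traverse(nodes, 0, o, inv_d, 100.0f, visited);
}
```

For this scene, the ray visits only the root and leaf 0 (two nodes); the far leaf's primitive is culled, which is the effect that grows logarithmic rather than linear in scene size.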

Revolutionizing Ray Tracing with DLSS 3.5:
AI-Powered Ray Reconstruction

Edward Liu
NVIDIA

Abstract: NVIDIA's DLSS 3.5 introduces a groundbreaking advancement in ray tracing with its new Ray Reconstruction technology. This AI-powered neural renderer replaces traditional hand-tuned denoisers, significantly enhancing the quality and performance of ray-traced images. By leveraging NVIDIA’s supercomputer-trained AI network, DLSS 3.5 generates higher-quality pixels, improving image sharpness, lighting accuracy, and overall visual fidelity. In this talk, we will delve into the technical intricacies of DLSS 3.5, showcasing how Ray Reconstruction works to deliver superior-quality ray-traced visuals across a variety of applications and games, including Cyberpunk 2077: Phantom Liberty and Portal with RTX. Attendees will learn about the benefits of integrating DLSS 3.5 into their systems, focusing on real-world performance gains and visual improvements.

Portable and Scalable 3D Rendering Using ANARI

Jefferson Amstutz
NVIDIA

Abstract: Countless 3D rendering engines have been built to bring state-of-the-art rendering to applications. However, interfacing an application with a particular rendering engine requires unique software maintenance, limiting the number of engines that an application can consider. ANARI, a new API specification from Khronos, solves this problem by defining a common front-end interface library that takes in-memory scene data and dispatches it to an underlying 3D renderer. The API provides vendors the semantics to be able to expose innovation without implementation-specific APIs: asynchronous scene updates, zero-copy data arrays, minimized frame latency, and ultimately beautifully rendered images. This talk will showcase our experiences using ANARI inside mainstream visualization applications, as well as motivate the benefits that ANARI brings to the graphics research community.
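To give a sense of what that front-end looks like, here is a hedged sketch of rendering a single (empty) frame through the ANARI C API. It follows the ANARI 1.0 headers as the editor understands them; "helide" is the example device shipped with the Khronos ANARI-SDK, and all details should be treated as illustrative rather than as the speaker's code.

```c
#include <anari/anari.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Load a back-end implementation and create a device from it. */
    ANARILibrary lib = anariLoadLibrary("helide", NULL, NULL);
    ANARIDevice dev = anariNewDevice(lib, "default");

    /* Scene objects: an (empty) world, a camera, and a renderer. */
    ANARIWorld world = anariNewWorld(dev);
    anariCommitParameters(dev, world);

    ANARICamera cam = anariNewCamera(dev, "perspective");
    anariCommitParameters(dev, cam);

    ANARIRenderer ren = anariNewRenderer(dev, "default");
    anariCommitParameters(dev, ren);

    /* A frame ties them together and owns the output channels. */
    uint32_t size[2] = {640, 480};
    ANARIDataType color = ANARI_UFIXED8_RGBA_SRGB;
    ANARIFrame frame = anariNewFrame(dev);
    anariSetParameter(dev, frame, "size", ANARI_UINT32_VEC2, size);
    anariSetParameter(dev, frame, "channel.color", ANARI_DATA_TYPE, &color);
    anariSetParameter(dev, frame, "world", ANARI_WORLD, &world);
    anariSetParameter(dev, frame, "camera", ANARI_CAMERA, &cam);
    anariSetParameter(dev, frame, "renderer", ANARI_RENDERER, &ren);
    anariCommitParameters(dev, frame);

    /* Rendering is asynchronous; wait, then map the pixels zero-copy. */
    anariRenderFrame(dev, frame);
    anariFrameReady(dev, frame, ANARI_WAIT);
    uint32_t w, h; ANARIDataType type;
    const void *pixels = anariMapFrame(dev, frame, "channel.color", &w, &h, &type);
    printf("rendered %ux%u pixels\n", w, h);
    (void)pixels;
    anariUnmapFrame(dev, frame, "channel.color");

    anariRelease(dev, frame);
    anariRelease(dev, ren);
    anariRelease(dev, cam);
    anariRelease(dev, world);
    anariRelease(dev, dev);
    anariUnloadLibrary(lib);
    return 0;
}
```

Because only the device name selects the back end, swapping renderers is a one-string change rather than a porting effort, which is the portability argument the talk makes.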

Work Graphs: Hands-On with the Future of Graphics Programming

Max Oberberger
AMD

Abstract: AMD has recently positioned itself as an innovator in the graphics programming-model space by partnering with Microsoft and releasing day-0 support for the novel GPU work graphs programming model. GPU work graphs represent a paradigm shift in GPU programmability by allowing developers to schedule work directly from the GPU. This talk will give a hands-on introduction to GPU work graphs and show how this new programming model can be used to drive high-performance graphics directly from the GPU, including GPU-driven rendering with the experimental mesh nodes feature previewed at GDC 2024 earlier this year. After the talk, audience members should have a good understanding of work graphs and be able to decide if and how they can use them in their own research.