Abstract
Real-time global illumination is challenging due to the high computational cost of simulating complex light transport. Existing learning-based methods that rely on precomputed radiance caches can generate high-quality images under changing camera viewpoints, but they assume static scene geometry. In this thesis, we extend such a framework to support dynamic scenes with object motion. Radiance caches with limited angular resolution are stored at triangle barycenters and generated offline using a physically based renderer. At runtime, a deep radiance reconstruction network predicts full global illumination from these imperfect caches and auxiliary G-buffer information, enabling real-time rendering. To handle object motion, we introduce an online cache update mechanism that selectively refreshes the cache entries affected by geometric changes during rendering. This allows the method to maintain high-quality global illumination under both camera motion and object motion while preserving real-time performance.