Deferred rendering methodologies analysis and comparison
DOI: https://doi.org/10.17721/1812-5409.2025/1.19

Keywords: computer graphics, real-time rendering, forward rendering, deferred rendering, G-buffer, visibility buffer

Abstract
Real-time photorealistic rendering has long been one of the central goals of computer graphics. In practice, this objective often comes down to low-latency generation of a photorealistic two-dimensional image from an analytic description of a three-dimensional environment and the observer's position and view parameters, on a typical personal computer with consumer graphics hardware of relatively limited computing capability. The fundamental problems to solve when rendering are main-view visibility determination and shading, i.e., finding, for every pixel of the resulting image, the environment surface element observed in that pixel and computing the amount of light this surface element reflects toward the observer. Given the computing constraints of the target hardware, it is crucial to perform these operations as efficiently as possible.
This work analyzes and experimentally compares modern methodologies for main-view visibility determination and shading. Due to the complexity of modern lighting calculations and the high geometric detail of virtual environments, forward rendering has become impractical as a general-purpose rendering approach and remains in use only when specific geometry or material types are involved. Deferred rendering with G-buffer generation has become the state-of-the-art solution for products demanding high visual quality and fidelity. The G-buffer can be produced in different ways, including direct rasterized G-buffer generation and the visibility buffer method. This work presents a theoretical overview and comparison of these techniques. We also use a demonstration application we implemented to compare G-buffer generation techniques experimentally under various conditions. Our experimental results can serve as guidance when designing production rendering solutions.
Pages of the article in the issue: 144–147
Language of the article: English
References
Burley, B. (2012). Physically-based shading at Disney. Disney. https://media.disneyanimation.com/uploads/production/publication_asset/48/asset/s2012_pbs_disney_brdf_notes_v3.pdf
Burns, C. A., & Hunt, W. A. (2013). The Visibility Buffer: A Cache-Friendly Approach to Deferred Shading. Journal of Computer Graphics Techniques, 2(2), 55–69. http://jcgt.org/published/0002/02/04/
Crassin, C., McGuire, M., Fatahalian, K., & Lefohn, A. (2016). Aggregate G-Buffer Anti-Aliasing. IEEE Transactions on Visualization and Computer Graphics, 22(10), 2215–2228. https://doi.org/10.1109/TVCG.2016.2586073
Deering, M., Winner, S., Schediwy, B., Duffy, C., & Hunt, N. (1988). The triangle processor and normal vector shader: a VLSI system for high performance graphics. SIGGRAPH Computer Graphics, 22(4), 21–30. https://doi.org/10.1145/378456.378468
Karis, B. (2013). Real Shading in Unreal Engine 4. Epic Games. https://cdn2.unrealengine.com/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
Kerzner, E., & Salvi, M. (2014). Streaming G-Buffer compression for multi-sample anti-aliasing. In HPG '14: Proceedings of High Performance Graphics (pp. 1–7). Eurographics Association.
Liktor, G., & Dachsbacher, C. (2012). Decoupled deferred shading for hardware rasterization. In Spencer, S. N. (Ed.), Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (pp. 143–150). Association for Computing Machinery. https://doi.org/10.1145/2159616.2159640
Mara, M., McGuire, M., & Luebke, D. (2013). Lighting Deep G-Buffers: Single-pass, layered depth images with minimum separation applied to indirect illumination. NVIDIA Corporation. https://research.nvidia.com/sites/default/files/pubs/2013-12_Lighting-Deep-G-Buffers/Mara2013DeepGBuffer.pdf
Pharr, M., & Fernando, R. (2005). GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation (Gpu Gems). Addison-Wesley Professional.
Pharr, M., Jakob, W., & Humphreys, G. (2016). Physically Based Rendering: From Theory to Implementation (3rd ed.). Morgan Kaufmann Publishers Inc.
Walter, B., Marschner, S. R., Li, H., & Torrance, K. E. (2007). Microfacet models for refraction through rough surfaces. In J. Kautz, & S. Pattanaik (Eds.), Proceedings of the 18th Eurographics Conference on Rendering Techniques (pp. 195–206). Eurographics Association.
License
Copyright (c) 2025 Rostyslav Pikulsky

This work is licensed under a Creative Commons Attribution 4.0 International License.
