
Thread: deferred rendering and framebuffer bandwith

  1. #1
    adamce · Junior Member, Newbie · Join Date: Oct 2014 · Posts: 2

    deferred rendering and framebuffer bandwith

    Hi,
    I want to implement deferred rendering for a university course.

    Quote Originally Posted by OpenGL SuperBible 6th edition page 549
    The first stage of a deferred renderer is to create the G-buffer, which is
    implemented using a framebuffer object with several attachments.
    OpenGL can support framebuffers with up to eight attachments, and each
    attachment can have up to four 32-bit channels (using the GL_RGBA32F
    internal format, for example). However, each channel of each attachment
    consumes some memory bandwidth, and if we don't pay attention to the
    amount of data we write to the framebuffer, we can start to outweigh the
    savings of deferring shading with the added cost of the memory
    bandwidth required to save all of this information.
    They continue with a somewhat elaborate attachment scheme where, for instance, they spread the normal across two attachments.

    I understand that bandwidth can be valuable, but does it really matter whether I have two 32-bit × 4-channel attachments or, say, eight 8-bit × 4-channel attachments? Both add up to 256 bits per fragment output.

    Or do graphics card vendors implement the attachments in a way that they aren't packed tightly, so that 8 bits would be padded out to 32? Or maybe it's only about the number of channels per attachment, and a vec3 would be padded to a vec4?

    thanks,
    adam

  2. #2
    Senior Member, OpenGL Guru · Join Date: Oct 2004 · Posts: 4,661
    Quote Originally Posted by adamce View Post
    I want to implement deferred rendering for a university course.

    Quote Originally Posted by OpenGL Superbible
    G-buffer... several attachments...each channel of each attachment consumes some memory bandwidth, and if we don’t pay attention to the amount of data we write to the framebuffer, we can start to outweigh the savings of deferring shading with the added cost of the memory bandwidth ...
    They continue with a somewhat elaborated attachment scheme, where they for instance span the normal over two attachments.

    I understand that bandwidth can be valuable, but does it really matter whether I have two 32-bit × 4-channel attachments or, say, eight 8-bit × 4-channel attachments? Both add up to 256 bits per fragment output.
    What you have to remember is that what's actually stored in the G-buffer is "not" necessarily the format output at the tail end of your fragment shader. The GPU does run-time format conversion, mapping the float/vec* outputs of the frag shader in your G-buffer rasterization pass to the format(s) of your FBO attachments. That converted data is what actually gets written to memory, and, just as importantly, it's the format that gets "read" back from memory later when you go to apply your lighting pass(es). So reducing how many bits you use for each component in your G-buffer saves you both GPU write and read bandwidth.
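    As a concrete illustration of that point, here is a minimal, untested sketch of a G-buffer setup where the shader still writes plain vec4 outputs but the attachments use compact internal formats (it assumes a GL 4.2+ context is current and that `width`/`height` are defined; the specific format choices are just one reasonable example, not the SuperBible's exact scheme):

```c
/* G-buffer with compact internal formats. The fragment shader writes
 * vec4s; the GPU converts them to these formats on write. */
GLuint tex[3], fbo;
glGenTextures(3, tex);

/* Albedo: 8 bits per channel is usually enough for color. */
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

/* Normals: 10 bits per channel instead of 32-bit floats. */
glBindTexture(GL_TEXTURE_2D, tex[1]);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB10_A2, width, height);

/* Material parameters: half-float precision. */
glBindTexture(GL_TEXTURE_2D, tex[2]);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int i = 0; i < 3; ++i)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                           GL_TEXTURE_2D, tex[i], 0);

const GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                         GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs);
```

    The matching fragment shader just declares `layout(location = 0) out vec4 albedo;` and so on; nothing in GLSL mentions RGB10_A2 or RGBA16F. That is exactly the conversion described above: you pay memory bandwidth for the attachment's internal format, not for the shader's float outputs.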

