
Thread: How fast is point rendering?

  1. #1
    Member Newbie
    Join Date
    Sep 2014
    Posts
    35

    How fast is point rendering?

    Specifically, if I have a decently fast shader (e.g. one that just does linear interpolation from a texture), can I sanely run it on every pixel on the screen?

    Thanks

  2. #2
    Senior Member Regular Contributor
    Join Date
    Dec 2010
    Location
    Oakville, ON, CA
    Posts
    165
    Yeah, why not? Deferred shading works similarly: first, the framebuffer is filled with rasterized raw input values, then a screen-aligned quad is drawn a few times to perform different kinds of calculations on each pixel to shade it. At the end, the final colors are copied into the default framebuffer.
    It depends on how many times you want to run through your framebuffer's contents and how many pixels it contains, though.

  3. #3
    Member Newbie
    Join Date
    Sep 2014
    Posts
    35
    Well I'd *expect* it to be fast, since you should be able to run a separate GPU thread for each pixel, but when I say "run the shader on every pixel" it sounds scary.

    What I want to do is render to a framebuffer object, then render to the screen by interpolating every pixel from the neighbors it has in the FBO. So how can I say "run this shader for every pixel on the screen" without creating a buffer that contains the coordinates of every pixel on the screen?

  4. #4
    Senior Member Regular Contributor
    Join Date
    Dec 2010
    Location
    Oakville, ON, CA
    Posts
    165
    Well, if you just want to copy contents from the custom FBO to the window's default framebuffer, you may consider using glBlitFramebuffer. That should be the fastest way to copy, since it exists exactly for that purpose.
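
    A minimal sketch of that blit (the `fbo`, `width`, and `height` names are assumptions; it presumes the FBO's color attachment and the window are the same size, so the filter argument doesn't matter for a 1:1 copy):
    Code :
    /* Copy the color contents of a custom FBO into the default (window) framebuffer. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  /* 0 = default framebuffer */
    glBlitFramebuffer(0, 0, width, height,      /* source rectangle      */
                      0, 0, width, height,      /* destination rectangle */
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);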

  5. #5
    Member Newbie
    Join Date
    Sep 2014
    Posts
    35
    Well I specifically want the step of "run my arbitrary shader on every pixel" in there, e.g. for whatever kind of interpolation I want.

    Can I just create some polygons that span the entire screen and render them? Would that cause the shader to run for every pixel?

  6. #6
    Member Newbie
    Join Date
    Apr 2014
    Posts
    47
    Quote Originally Posted by BenFoppa View Post
    Specifically, if I have a decently fast shader (e.g. one that just does linear interpolation from a texture), can I sanely run it on every pixel on the screen?
    Just think, any immersive 3D world must draw something in every pixel, and hence must run a shader at least once for every pixel.

    Quote Originally Posted by BenFoppa View Post
    Well I'd *expect* it to be fast, since you should be able to run a separate GPU thread for each pixel, but when I say "run the shader on every pixel" it sounds scary.

    What I want to do is render to a framebuffer object, then render to the screen by interpolating every pixel from the neighbors it has in the FBO. So how can I say "run this shader for every pixel on the screen" without creating a buffer that contains the coordinates of every pixel on the screen?
    Not every fragment gets its own GPU thread; that is determined by how the hardware is set up and how many pixel pipelines it has. But there are many hardware units dedicated to fragment (pixel) shading.

    What I want to do is render to a framebuffer object, then render to the screen by interpolating every pixel from the neighbors it has in the FBO. So how can I say "run this shader for every pixel on the screen" without creating a buffer that contains the coordinates of every pixel on the screen?
    This is just normal texture mapping with "linear interpolation" for minification or magnification.

    Here's how you set it up:
    Code :
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name); // note: the texture name itself, not &name
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // << HERE (linear interpolation)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // << HERE
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    When you draw your textured quad to fill the whole screen, the interpolation happens as part of the texture lookup.

    This is the quad data I use for full-screen quads. The first two numbers in each line are positions; the last two are UV texture coordinates. You don't need to give coordinates for every pixel, because the UV coordinates of the vertices are interpolated across the quad and used to look up values in your texture. In effect, every pixel in your texture is sampled at a UV of (x / imageWidth, y / imageHeight).
    Code :
        GLfloat quadData[16] = {
            -1,-1,    0,0,
            1,-1,     1,0,
            -1,1,     0,1,
            1,1,      1,1
        };

    Then in your shader, you don't do any transform:
    Code :
    attribute mediump vec2 position;
    attribute mediump vec2 texCoord2;
     
    varying mediump vec2 texCoord2Varying;
     
    // This assumes that the quad position coordinates are being input as in the
    // range [-1,1] in both X and Y directions, so no transformation is necessary.
    void main() {
        texCoord2Varying = texCoord2;
        gl_Position      = vec4( position.x, position.y, 0, 1 );
    }
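
    For completeness, a matching fragment shader might look like this (a sketch; the sampler uniform name is an assumption, and it should be bound to the texture unit holding your FBO's color texture):
    Code :
    precision mediump float;
     
    uniform sampler2D sourceTexture;  // assumed name for the FBO's color attachment
     
    varying mediump vec2 texCoord2Varying;
     
    void main() {
        // The linear interpolation happens inside this lookup, because of the
        // GL_LINEAR filters set on the texture.
        gl_FragColor = texture2D(sourceTexture, texCoord2Varying);
    }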

  7. #7
    Member Newbie
    Join Date
    Sep 2014
    Posts
    35
    Quote Originally Posted by MtRoad View Post
    Just think, any immersive 3D world must draw something in every pixel, and hence must run a shader at least once for every pixel.
    Makes sense. Again, I'd *expect* it to be fast, since there's no hardware reason it wouldn't be. But saying the hardware can do it and saying it's straightforward and performant in OpenGL are two different things, which was why I asked.

    Quote Originally Posted by MtRoad View Post
    Every fragment doesn't get its own GPU thread, it is determined by how the hardware is set up and the number of pixel pipelines it has, but there are many hardware components which will just handle fragment (pixel) shading.
    Makes sense that there are other constraints, but am I right in thinking that on "ideal" hardware (e.g. infinite pixel pipelines), OpenGL would be able to run a separate GPU thread for each pixel? Is there a reason you can't parallelize to that degree?

    So if I create two triangles that span the size of the screen and draw them using my custom shader, the fragment shader will be called for every pixel in the shape? If my shader called on pixel (x,y) were to just read the pixel (x, y) from some texture, would this be comparably performant to blitting from the texture to the screen?

    Thanks

  8. #8
    Senior Member Regular Contributor
    Join Date
    Sep 2013
    Posts
    186
    The fragment shader will run for every fragment (not pixel; they're not 100% the same thing). The vertex shader will run for every vertex.
    If you have 2 triangles spanning the whole screen, your vertex shader will run 2 * 3 = 6 times. Your fragment shader will run for as many fragments as are needed to fill your whole screen.
    If you instead have a lot of points, your vertex shader will run for each of them, but your fragment shader will most likely run just as often as before. So the work for the fragment shader will be almost the same, but the vertex shader will run much more often.

    Testing on my hardware, it is much faster to draw 6 triangles filling the screen than to draw 1 point for every pixel on my monitor.

  9. #9
    Member Newbie
    Join Date
    Sep 2014
    Posts
    35
    Quote Originally Posted by Cornix View Post
    The fragment shader will run for every fragment (not pixel; they're not 100% the same thing). The vertex shader will run for every vertex.
    If you have 2 triangles spanning the whole screen, your vertex shader will run 2 * 3 = 6 times. Your fragment shader will run for as many fragments as are needed to fill your whole screen.
    If you instead have a lot of points, your vertex shader will run for each of them, but your fragment shader will most likely run just as often as before. So the work for the fragment shader will be almost the same, but the vertex shader will run much more often.

    Testing on my hardware, it is much faster to draw 6 triangles filling the screen than to draw 1 point for every pixel on my monitor.
    Okay, good to know. I've been wondering about the difference.

    To draw one point for every pixel, did you have to create a buffer to hold the coordinates of every pixel? How can I run a shader on every pixel without having to do that?

  10. #10
    Senior Member Regular Contributor
    Join Date
    Sep 2013
    Posts
    186
    I tested with VBOs. You can simply test for yourself and compare the performance on your own hardware; this shouldn't be too hard to do.
    If you just want the shader output, you can also use offscreen rendering; read up on FBOs. This might even be faster.
    If you just want the fragment shader output (and don't care about the vertex shader), you can also just draw a giant quad (2 triangles) across the screen. That makes no difference to the fragment shader.
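
    For reference, the per-pixel point buffer discussed above can be built like this on the host side (a sketch; the function name is made up). Each pixel center in a width x height window is mapped to normalized device coordinates in [-1, 1], giving an array you could upload to a VBO and draw with GL_POINTS:
    Code :
    #include <stdio.h>
    #include <stdlib.h>
     
    /* Returns a malloc'd array of 2 * width * height floats: one (x, y) NDC
       coordinate per pixel, at the pixel's center. Caller must free() it. */
    float *make_pixel_grid(int width, int height)
    {
        float *pts = malloc(sizeof(float) * 2 * (size_t)width * (size_t)height);
        if (!pts) return NULL;
        size_t k = 0;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                pts[k++] = (2.0f * (x + 0.5f)) / width  - 1.0f; /* NDC x */
                pts[k++] = (2.0f * (y + 0.5f)) / height - 1.0f; /* NDC y */
            }
        }
        return pts;
    }
     
    int main(void)
    {
        /* For a 2x2 "screen", the four pixel centers land at +/-0.5 in NDC. */
        float *pts = make_pixel_grid(2, 2);
        printf("%.2f %.2f\n", pts[0], pts[1]); /* prints -0.50 -0.50 */
        free(pts);
        return 0;
    }
    For a full-screen-sized window this is millions of vertices, which is exactly why the vertex shader ends up doing so much more work than with the two-triangle quad.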
