Page 1 of 2
Results 1 to 10 of 14

Thread: color channel: float and 8 bit

  1. #1
    Member Newbie
    Join Date
    Sep 2013
    Posts
    31

    color channel: float and 8 bit

    Hello there.
    I've seen that the color to go into the shaders was specified as a vec3, i.e. 3 floats, one for each RGB channel.

    1. Isn't this a waste of memory? Is the hardware driver really using the float resolution (precision) for pixel LED output?
    If the final output has 24-bit precision:

    2. Can someone tell me who converts the color, and at what moment it is converted, from 3 x float to 3 x 8 bit for display on the screen?

    3. In order to save memory isn't it worth:
    a. specifying the color using a 1 pixel 1D texture?
    b. using 3 x GLubyte and using glVertexAttribPointer() with GL_UNSIGNED_BYTE as type and GL_TRUE for normalized (or manually dividing each by 256 in the shader)?
    c. another way to directly parse from 3 x 8 bit (RGB space of the input color) to... 3 x 8 bit (output)?

    I suppose it would be useful to work with larger precision when interpolating color from one vertex to another, but I don't need that since I use one color per primitive.

  2. #2
    Senior Member OpenGL Lord
    Join Date
    Mar 2015
    Posts
    6,675
    I've seen that the color to go into the shaders was specified as a vec3, i.e. 3 floats, one for each RGB channel.
    Where?

    There are many ways to get a color into a shader, and from the shader's side, pretty much all of them will look like 3 floats. That doesn't mean the source data actually are 3 IEEE-754 32-bit floating-point numbers.

    It is very common to use normalized integers, either in texture image formats or in vertex formats. While many tutorials will provide colors in vertex formats with 3 floats, this is usually for the sake of simplicity, not a suggestion of how things should actually be handled in serious applications.

    Isn't this a waste of memory?
    In the general sense, no. See below.

    Is the hardware driver really using the float resolution (precision) for pixel LED output?
    No. Generally, you'll get 8 bits per color channel of precision.

    However, this does not mean that you don't want more bits for intermediate computations. Just because your final value will be squashed to 8 bits per color channel doesn't mean you don't need more bits in the middle.

    I'm not going to get into the details, but high dynamic range rendering basically requires more than 8 bits of precision. Not at the end, but it requires light intensities (at the very least) to have absolute magnitudes larger than 1.0.

    specifying the color using a 1 pixel 1D texture?
    No. If you're talking about some form of palette, that's always a losing proposition in terms of compression. S3TC, BPTC, or ASTC will generally beat paletting in image quality, data size, and performance simultaneously.

    using 3 x GLubyte and using glVertexAttribPointer() with GL_UNSIGNED_BYTE as type and GL_TRUE for normalized (or manually dividing each by 256 in the shader)?
    You should not manually divide it in the shader. But yes, this is a common means of specifying a color via the vertex format.

    Note however, that the shader will still see the value as a vec3. The whole point of the vertex format is that the system will decide on its own how to convert the data for you (using dedicated hardware or shader logic, depending on the hardware's capabilities).

  3. #3
    Member Newbie
    Join Date
    Sep 2013
    Posts
    31
    Thank you for the detailed explanation and references!
    From what you're saying and from what I've read in the articles, I came to the conclusion that it's best for me to use the GL_RGB8UI constant, but I don't know where to specify it, in what function? Can you please give me an example?

  4. #4
    Senior Member OpenGL Lord
    Join Date
    Mar 2015
    Posts
    6,675
    From what you're saying and from what I've read in the articles, I came to the conclusion that it's best for me to use GL_RGB8UI constant
    No; those are plain unsigned integers, not normalized unsigned integers.

    Furthermore, that's a texture image format, not something you use for a vertex format. Granted, you haven't made it clear which one you're trying to use at the moment. Image formats are specified for textures when you are creating storage for them.

  5. #5
    Member Newbie
    Join Date
    Sep 2013
    Posts
    31
    What I'm trying to do is draw some (upscaled) points of different colors at random coords on the screen, and I thought of specifying each point's color through vertex attributes, i.e. triplets of unsigned integers for a 24-bit color. This "project" is a bit tedious for me as I'm a beginner still struggling with which gl* functions to call, and when.
    I thought I'd use glVertexAttribIPointer() as I said above but I don't know how to use GL_RGB8UI to tell it that I need non-normalized color information i.e. {128, 128, 255}.
    How can I go about it?

  6. #6
    Senior Member OpenGL Lord
    Join Date
    Mar 2015
    Posts
    6,675
    I thought I'd use glVertexAttribIPointer() as I said above but I don't know how to use GL_RGB8UI to tell it that I need non-normalized color information i.e. {128, 128, 255}.
    First, as I explained, GL_RGB8UI is a texture image format. It has nothing to do with vertex formats; those are completely different things specified by completely different APIs.

    Second, why do you need non-normalized colors? Colors are a good 80% of the reason why we use integer normalization to begin with. Your use case doesn't seem to merit the use of integers here.

    I think you've become confused. Your use case strongly suggests that you want normalized color values, but you claim to not want them normalized. If your use case truly needs these colors to not be normalized, then you should be able to explain why.

    Third, glVertexAttribIPointer cannot perform integer normalization. As stated on the previously linked OpenGL wiki page, integer normalization is for when data is stored as an integer but seen as if it were a float. glVertexAttribIPointer is for data that is stored as an integer and that you want the shader to see as an integer. That's why there's no normalized parameter: you're feeding integer data as integers, so the concept of normalization simply doesn't apply.

    Fourth, if you are going to use glVertexAttribIPointer, you must use ivec or uvec in your shader as the corresponding input value.

  7. #7
    Member Newbie
    Join Date
    Sep 2013
    Posts
    31
    Thank you for staying with me so far. Yes, I'm confused indeed. I want nothing more and nothing less than to draw a point and use a simple way to input its color into the shaders (varying/uniform?) in the form of RGB where each channel has a value in [0, 255], just as colors are commonly expressed.
    So basically I want to draw a few points on the screen, for instance a green point, and I want to pass its color into the shaders as (0, 190, 0) and not as (0f, 0.745098f, 0f), for reasons of readability (obvious), memory (I only need 24 bits instead of 96 bits) and (possibly, though doubtful) precision (due to rounding). But you said:
    the shader will still see the value as a vec3. The whole point of the vertex format is that the system will decide on its own how to convert the data for you
    So I want GLSL to treat (0, 190, 0) as a color, i.e. as channel values (a green value of 190), and not alter the "shade" in the slightest, instead of the undesired case of treating the triplet as general numbers, possibly converting it to float and making it lossy.
    That's why I came up with the idea of using a 1-pixel 1D texture, thinking that the color (0, 190, 0) would occupy 24 bits while being preserved exactly. But you don't recommend this and it is, I admit, bad practice.

    Here is where I'm puzzled:
    What's the most straightforward way of inputting a color into the shaders without wasting memory, and losslessly? How do you advise me to draw the points?
    I suspect you'll say (and if you do then I'll do it like that):
    glVertexAttribPointer(ind, 3, GL_UNSIGNED_SHORT, GL_TRUE, str, off)

  8. #8
    Senior Member OpenGL Lord
    Join Date
    Mar 2015
    Posts
    6,675
    RGB where each channel has a value of [0,255] just as any color is commonly expressed.
    And that's the source of your confusion. See, you've learned that colors range from [0, 255].

    They don't. That's just what colors look like when stored as normalized integer bytes. Colors actually range over the floating-point interval [0, 1] (for the purposes of this conversation), which is exactly what the normalized byte range [0, 255] maps to.

    The shader will see floating-point values on the range [0, 1]. That's the whole point of normalized integers.

    So I want GLSL to treat (0, 190, 0) as a color, as channel values i.e. green value of 190 and not alter the "shade" in the slightest. This, instead of the undesired case of treating the triplet as general numbers and possibly convert it to float and make it lossy.
    Converting from normalized integers to floats is not lossy.

    glVertexAttribPointer(ind, 3, GL_UNSIGNED_SHORT, GL_TRUE, str, off)
    Your data is 8-bit bytes, not 16-bit shorts. Also, you should pad out your data structure to be 4-byte aligned; pass 4 values instead of 3. The shader can still use a vec3 if you want.

  9. #9
    Member Newbie
    Join Date
    Sep 2013
    Posts
    31
    Thank you again.
    Yes, I meant 8-bit unsigned (from your link, GL_UNSIGNED_BYTE), but I wrote GL_UNSIGNED_SHORT.

    About padding, I speculate: the shader works with power-of-2-sized memory blocks (maybe even a minimum of 4 bytes), so a 4th value comes for free.
    But still, I can't find a use for a 4th value. In addition, I'm afraid that the memory on the client side, i.e. the buffer in RAM, would still be occupied (not freed) after the data has been transferred to video RAM...

  10. #10
    Senior Member OpenGL Lord
    Join Date
    Mar 2015
    Posts
    6,675
    But still, I can't find use for a 4th value.
    You don't have to. Padding represents wasted space.

    In addition I'm afraid that the memory on the client side i.e. the buffer in RAM would still be occupied (not freed) after the data has been transferred to video RAM...
    I'm not really sure what you mean here. The "memory on the client side" is your memory. You allocated it, so you need to delete it when you're finished.


