OpenGL color space

Hello,

so far I have never used the sRGB color space in OpenGL (neither sRGB textures nor sRGB framebuffers).
Maybe that's because sRGB framebuffers were introduced relatively late (in OpenGL 3.0, I believe).

So I did not really care about the whole color space topic at all (shame on me).

My final linear-color framebuffer has always looked fine on every sRGB monitor I have used.

But I would like to understand a few things better:

1. With my current setup (not using any of OpenGL's sRGB-related features), do I actually need to care
whether a monitor is using sRGB or something else? I guess not, because my application is not
aware of the color profile(s) of the connected monitor(s).
2. So what actually happens when the linear color space framebuffer is blitted to the default
framebuffer provided by the windowing system and then SwapBuffers() is executed?
Is the windowing system doing any color space conversion? Or the graphics driver?
Or the monitor? I don't really know how the pipeline works, but since I work in linear color
space and the monitor uses an sRGB color profile, I guess at some stage (software or hardware)
there is probably a conversion?
3. What changes in “2.” if I would use an sRGB framebuffer but keep the linear default framebuffer
provided by the windowing system? I guess glBlitFramebuffer would convert from sRGB to linear
and then the pipeline continues the same way as for “2.”?
4. What changes in “2.” if I would keep the linear framebuffer but would use an sRGB
default framebuffer provided by the windowing system (WGL_COLORSPACE_SRGB_EXT)?
I guess glBlitFramebuffer would convert from linear to sRGB, and then the rest of the
pipeline to the monitor should be a bit different?

Help is very much appreciated!

Your monitor will never use linear intensity. It may use sRGB or it may use a constant gamma value (typically around 2.2, although older Apple systems used 1.8 as that’s closer to the typical gamma law used by printers). sRGB was designed as an approximation to the gamma curves of typical CRT monitors at the time, but with a linear slope near black.

Any conversion is performed by the GPU as part of the blit operation. sRGB won’t be supported if the GPU can’t perform the conversion (either via dedicated hardware or by inserting instructions into the shader program).

The relationship between the brightness values received by the monitor (either analogue or digital) and the intensity of the emitted light is non-linear; a brightness of 0.5 will result in significantly less than 0.5 intensity, i.e. the curve is steeper at higher brightness.

Historically, monitors (and televisions) had roughly constant gamma, i.e. I = B^γ with γ (gamma) typically around 2.45 (this is a consequence of the relationship between grid voltage and light intensity for a CRT). The problem with this is that it provides excessive density near black; e.g. with a gamma of 2.2, the lowest 10% of brightness values [0.0, 0.1] correspond to the lowest 0.6% of intensity values [0.0, 0.006]. sRGB “fixes” this by using a linear slope close to black (brightness below 0.04) and then transitioning to something closer to a gamma law.
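
For concreteness, here is a small C sketch of the two sRGB transfer functions being described (the constants come from the sRGB specification; the function names are mine):

```c
#include <math.h>

/* Encode a linear intensity value in [0,1] to an sRGB brightness value.
   Linear segment below 0.0031308, power-law segment above. */
float linear_to_srgb(float lin)
{
    if (lin <= 0.0031308f)
        return 12.92f * lin;
    return 1.055f * powf(lin, 1.0f / 2.4f) - 0.055f;
}

/* Decode an sRGB brightness value in [0,1] back to linear intensity.
   Linear segment below 0.04045 (the "brightness below 0.04" above). */
float srgb_to_linear(float srgb)
{
    if (srgb <= 0.04045f)
        return srgb / 12.92f;
    return powf((srgb + 0.055f) / 1.055f, 2.4f);
}
```

For example, srgb_to_linear(0.5f) gives roughly 0.214, which matches the “significantly less than 0.5 intensity” point above.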

Correct, provided that GL_FRAMEBUFFER_SRGB is enabled.
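
A minimal sketch of that case (question 3), assuming fbo is a framebuffer object whose color attachment has an sRGB internal format, and width/height are placeholders:

```c
/* Blit from an sRGB FBO to the (linear) default framebuffer.
   With GL_FRAMEBUFFER_SRGB enabled, the sRGB source is decoded to
   linear during the read; the linear destination is written as-is. */
glEnable(GL_FRAMEBUFFER_SRGB);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```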

Note that the hardware framebuffer is always sRGB/gamma-encoded in the sense that those values will be sent directly to the monitor, which will then apply a non-linear transformation to obtain the intensity. Whether OpenGL considers the hardware framebuffer to be sRGB or linear simply affects whether conversions are performed. The historical approach was to treat both as linear when in fact both were sRGB (or something close to it). Similarly, textures were historically treated as linear in spite of the fact that any image loaded from a file would almost certainly use a gamma law.

Provided that GL_FRAMEBUFFER_SRGB is enabled, the blit should perform linear-to-sRGB conversion.

In short: if GL_FRAMEBUFFER_SRGB is enabled, reads from sRGB framebuffers perform sRGB-to-linear conversion, writes to sRGB framebuffers perform linear-to-sRGB conversion. Otherwise, reads and writes aren’t converted regardless of whether the framebuffer is sRGB or linear. Note that this also applies to blending; if GL_FRAMEBUFFER_SRGB is enabled and the framebuffer is sRGB, sRGB-to-linear conversion is performed on values read from the framebuffer, these are blended with the fragment shader outputs, then linear-to-sRGB conversion is performed on the result before it is written to the framebuffer. If GL_FRAMEBUFFER_SRGB is disabled, all framebuffers are assumed to be linear regardless of the sRGB status of their colour attachments (this is necessary to allow for shaders which aren’t “sRGB-aware”, i.e. which are simply using gamma-law values from textures without linearising them).
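
To make the blending case concrete, here is a sketch of such a setup (the 256×256 size and the names are placeholders; error checking omitted):

```c
/* Create an FBO whose color attachment is sRGB-encoded. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* With both of these enabled, blending happens in linear space:
   the destination value is decoded sRGB-to-linear, blended with the
   (linear) fragment output, and the result is re-encoded to sRGB. */
glEnable(GL_FRAMEBUFFER_SRGB);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```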

Thanks for your reply!

While I understand some aspects, I still don't know whether I should care about the whole sRGB topic, because everything looks just fine (maybe on a monitor with a color space different from sRGB it doesn't; I don't know…).

If I were to start caring, I would probably need to check the textures first.

Right now I use CxImage::GetBits() to load textures and pass the RGB data directly to OpenGL without any sRGB format.
Here I already have no clue whether CxImage gives me linear or non-linear color values, depending on the color profile embedded in the JPG/PNG/whatever that is loaded.
I googled for an answer, but without luck.
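
If the data does turn out to be gamma-encoded (typical for JPG/PNG files without an unusual embedded profile), the fix on the OpenGL side is only the internal format. A sketch, where img is a placeholder and the GL_BGR layout and 4-byte row alignment are assumptions about what CxImage hands back:

```c
/* Declare the texel data as sRGB-encoded so that texture fetches
   return linearised values; the uploaded bytes are unchanged. */
BYTE *pixels = img.GetBits();
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);   /* DIB rows are typically 4-byte aligned */
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_SRGB8,                   /* instead of GL_RGB8 */
             img.GetWidth(), img.GetHeight(), 0,
             GL_BGR, GL_UNSIGNED_BYTE, pixels);
```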

Then I would need to know when to use a normal RGB framebuffer and when to use an sRGB framebuffer for the final output, but I have no clue.

What is the game industry doing on PC? Do they use an sRGB framebuffer or a normal RGB framebuffer? And how do you decide which one is needed?

You should always use an sRGB framebuffer for display. Indeed, Vulkan as an API doesn’t even really give you a choice. Presentable images have precisely one value for VkColorSpaceKHR: non-linear sRGB. This is regardless of the Vulkan image format. That is, if it’s an unsigned normalized image, the presentation system will interpret it as if it is sRGB.
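
For illustration, the relevant fragment of swapchain creation (not a complete setup; surface_format would come from vkGetPhysicalDeviceSurfaceFormatsKHR):

```c
/* Without the VK_EXT_swapchain_colorspace extension, every surface
   format reported by the implementation uses this one color space. */
VkSwapchainCreateInfoKHR info = {0};
info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
info.imageFormat = surface_format.format;  /* e.g. VK_FORMAT_B8G8R8A8_SRGB */
info.imageColorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
/* ... remaining fields omitted ... */
```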

everything looks just fine

Of course it “looks just fine”. You designed it to look fine. You built the colors, lighting scheme, and shaders so that the output will look fine. If something didn’t look fine, you tweaked it until it did.

The question is, does it look correct? Because that’s not a matter of what you’re seeing; it’s a matter of the physics of light.

If you want to do physically correct rendering, gamma correcting for the display device (which sRGB is kind of a shorthand for) is not optional. Lighting computations only make sense when done in a linear colorspace (though even then, it’s still an approximation). So any colors provided to the computations need to be in a linear colorspace. And the results in linear RGB need to be converted into a colorspace suitable for display on the display device.
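
A small numeric illustration, reusing the conversion functions sketched earlier: doubling the light falling on a surface should double its intensity, and that arithmetic only works in linear space.

```c
/* Hypothetical: a surface lit by one light reads 0.5 from an sRGB texture. */
float one_light  = srgb_to_linear(0.5f);             /* ~0.214 linear */
float two_lights = linear_to_srgb(2.0f * one_light); /* ~0.686 encoded */

/* Doing the addition on the encoded values instead (0.5 + 0.5 = 1.0)
   would drive the pixel to full intensity, i.e. far too bright. */
```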

Not doing this leads to problems.

Textures intended to be used as diffuse reflectance maps are often created in the sRGB colorspace, so simply pretending that those values are linear RGB values leads to unfortunate effects. This is why it’s important to have sRGB colorspace images.

Now, that doesn’t mean everything you do will be in a linear RGB colorspace. UI rendering often does no computations on the colors it gets from textures. So if the texture or vertex color is already in the sRGB colorspace, it makes no sense to convert it to linear just to write it to an image that converts it back to sRGB. This is why you can turn off sRGB colorspace conversion when writing to the framebuffer.
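
In OpenGL terms, that corresponds to something like the following sketch around the two passes (draw_scene/draw_ui are hypothetical):

```c
/* Scene pass: shaders work in linear space; writes are encoded to sRGB. */
glEnable(GL_FRAMEBUFFER_SRGB);
draw_scene();

/* UI pass: the colors are already sRGB-encoded, so write them through
   unconverted instead of decoding and re-encoding them. */
glDisable(GL_FRAMEBUFFER_SRGB);
draw_ui();
```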