There are a couple of threads on nearly this topic, but they either diverged or died out.
I have a framebuffer that needs a meaningful alpha plane. The image gets rendered and then blended based on the resulting per-pixel alpha in the framebuffer (hardware combiner).
So I do:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
then draw the geometry, which has some transparent components, with
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Since I have cleared with an opaque black clear color, I expect the result to be opaque. But the framebuffer alpha ends up being the alpha of the geometry rendered over it, the SRC_ALPHA.
Huh?
In other cases I will NEED to render the SRC_ALPHA into the framebuffer - to “cut a hole” in the scene.
So what little dance do I need to do to control when the SRC_ALPHA goes straight into the framebuffer (or FBO) to “cut through” the scene, and when it blends with the existing background so the result gets “more opaque” as layers are added?
It has been a long day on this one.
Any thoughts are greatly appreciated.
Try to formulate what you want to achieve in terms of addition and multiplication of the RGB and alpha values, and it should quickly become clear what you need to do.
Note that OpenGL ES 2.0 has glBlendFuncSeparate, which lets you specify different factors for RGB and alpha.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) does not guarantee that the destination alpha value remains one. Assuming the default blend equation GL_FUNC_ADD, it means: dest.rgba = src.rgba * src.a + dest.rgba * (1-src.a)
i.e. for alpha dest.a = src.a * src.a + dest.a * (1-src.a)
This value will only be one if (src.a == 1) or (src.a == 0 and dest.a == 1).