Rasterization of transparent objects

Hello all!

Hope you have a good day!

How do I rasterize transparent objects such as grass and bushes?

The objects' textures have transparent areas, and the result depends on the order of rendering.
If a more distant object is drawn before a closer one, I can see it through the transparent areas; otherwise it is hidden.
I understand this is because of the depth buffer.

I tried writing to it from the shader, but didn't get the expected result.
It seems the problem is not as easy as I thought.

How can I rasterize such objects?

[screenshot of the result]

Thank you for any help!

Render the translucent objects in back-to-front order, or use an order-independent transparency technique (such as GL_SAMPLE_ALPHA_TO_COVERAGE). NOTE: this requires a multisample framebuffer. Link to a Humus demo that uses it: Alpha to coverage.
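
For the sorted route, a minimal sketch of the draw order might look like this (RenderItem, cameraPos and drawItem are placeholder names for this sketch, not something from your code):

#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct RenderItem {
    glm::vec3 position;   // world-space position used for sorting
    // ... mesh, texture, etc.
};

void drawItem(const RenderItem& item);   // placeholder for your own draw call

void drawTranslucent(std::vector<RenderItem>& items, const glm::vec3& cameraPos)
{
    // Farther objects first, so closer ones blend over them.
    std::sort(items.begin(), items.end(),
              [&](const RenderItem& a, const RenderItem& b) {
                  glm::vec3 da = a.position - cameraPos;
                  glm::vec3 db = b.position - cameraPos;
                  return glm::dot(da, da) > glm::dot(db, db);
              });
    for (const RenderItem& item : items)
        drawItem(item);
}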

This might help you:

https://learnopengl.com/Advanced-OpenGL/Blending

Also, looking at your image, it appears you may not have blending enabled (GL_BLEND). You need either that or GL_SAMPLE_ALPHA_TO_COVERAGE to see through your sprites in the translucent areas.
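
If blending is off, the usual state for straight (non-premultiplied) alpha is just this; a minimal sketch:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// With this state, translucent geometry still has to be drawn back-to-front,
// and you may also want glDepthMask(GL_FALSE) while drawing it.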

Alpha testing (GL_ALPHA_TEST) can be used to suppress fragment writes where sprite fragments are sufficiently transparent (as can doing a manual discard in the shader). Depending on your scene, this may or may not improve performance.
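
For the manual-discard variant, a fragment shader along these lines is typical (a GLSL 3.30 sketch; uDiffuse, vTexCoord and the 0.1 threshold are just illustrative values):

#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uDiffuse;

void main()
{
    vec4 texel = texture(uDiffuse, vTexCoord);
    if (texel.a < 0.1)   // threshold chosen arbitrarily for this sketch
        discard;         // behaves like classic alpha testing
    fragColor = texel;
}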

Thank you, guys!

You've helped me a lot.


best regards!

My first implementation was just rejecting fragments with alpha = 0.

But it looks ugly for objects with partially transparent areas.

Now I'm going to implement the 'order-independent transparency' technique.
I'm trying to put all the pieces together.

So, I need to set state:

glEnable(GL_BLEND);
glEnable(GL_MULTISAMPLE);
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
glfwWindowHint(GLFW_SAMPLES, 4); // 4 samples per pixel used to compute the final fragment (set before creating the window)

Is that enough, or should I implement some algorithm in the shaders?

No, you probably don’t need any shader magic for this.

In addition to the above state, I would either add:

glEnable( GL_SAMPLE_ALPHA_TO_ONE );

or disable GL_BLEND. You might also consider:

  • Adding glEnable( GL_ALPHA_TEST ) and setting an alpha test threshold with glAlphaFunc, and
  • cranking up your sample count for increased transparency quality with the multisample coverage dither masks (see the combined sketch below).
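
Putting those suggestions together, a combined setup might look roughly like this (just a sketch; the 8-sample count is an arbitrary example):

glfwWindowHint(GLFW_SAMPLES, 8);          // must be set before glfwCreateWindow()
// ... create the window and make its context current ...

glEnable(GL_MULTISAMPLE);
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
glEnable(GL_SAMPLE_ALPHA_TO_ONE);         // or omit this and keep GL_BLEND instead
glDisable(GL_BLEND);
// In a core profile, do the alpha test with `discard` in the fragment shader
// rather than glEnable(GL_ALPHA_TEST)/glAlphaFunc.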

There is something wrong with it.
If I disable GL_BLEND, then all transparent fragments are black (transparency is not working).
If I enable GL_BLEND, then the depth buffer keeps the depth of the nearest sample, even if the fragment has alpha = 0, i.e. the order of rendering still matters.

PS: glAlphaFunc is not working for me. I read that this functionality is deprecated, so I discard fragments in the shader instead.

Then alpha-testing isn’t working.

glEnable(GL_ALPHA_TEST) is only available in the compatibility profile.

From your description, it looks like the discard statement isn't actually being executed. The depth-buffer behaviour could be explained by early fragment tests being enabled, but that wouldn't affect the colour.

The discarding is working fine; all fragments with alpha = 0 are gone.
But I don't discard fragments with partial transparency, where alpha is not 0.
And that is the problem: objects are not rendered behind fragments with partial transparency (because of the depth buffer).
You can see the exact result in the picture I posted before.
Yes, it doesn't affect the colour, but all objects rendered after the semi-transparent ones aren't visible.
So I can't call that 'order-independent transparency'.

I know that I don’t understand important details. :slight_smile:

This calls into question whether you are activating MSAA rendering properly (and/or whether your driver implements it properly). I would suggest that you read this wiki page: Multisampling

With normal single-sample alpha-blend transparency, what you are describing makes sense, if the closer translucent object renders first (with depth writes and depth test enabled). When it does, it blends with the background, and sets the pixel depth value. Then the more-distant object renders, fails the depth test, and never writes (or blends) its color with the current pixel color.

However, this is not how MSAA rasterization works.

With MSAA rasterization, depth (and stencil) values are stored per sample, not per pixel, so depth compares happen per sample. Further, the coverage mask (determined from GL_SAMPLE_ALPHA_TO_COVERAGE in this case) determines which subsamples within a pixel are or are not written, in order to approximate that degree of transparency.

What this means is that, with MSAA rasterization and GL_SAMPLE_ALPHA_TO_COVERAGE, even if the closer translucent object renders first, its depth value is only written to the pixel subsamples in the coverage mask. Therefore, when the more-distant object renders, there are still samples within that pixel whose depth values reflect the background depth (not the depth of the closer translucent object), so the more-distant object will win the depth test against those.

Example: if your closer, translucent object was only 25% opaque, then only ~25% of the subsamples should have been written, so the farther object should win the depth test for, and write/blend color into, the remaining 75% of the subsamples (assuming our simple example with just these two fragments involved).

Based on this, let’s see the code that you are using for allocating an MSAA framebuffer, binding it, enabling MSAA rasterization, and downsampling (resolving) it to produce a renderable single-sample result.

Verify that you are actually enabling MSAA rasterization (glEnable( GL_MULTISAMPLE )), not just creating and binding an MSAA render target.
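
For reference, a minimal sketch of such a setup (assuming a GL 3.3+ context, a 4-sample target and a window of width x height; no error checking):

GLuint msFbo, msColor, msDepth;
glGenFramebuffers(1, &msFbo);
glGenRenderbuffers(1, &msColor);
glGenRenderbuffers(1, &msDepth);

glBindRenderbuffer(GL_RENDERBUFFER, msColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, msDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msColor);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msDepth);

glEnable(GL_MULTISAMPLE);                 // enable MSAA rasterization
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
// ... draw the scene into msFbo ...

// Resolve (downsample) the MSAA buffer into the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);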

Nope, I hadn't seen that article.

I thought that calling glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE) allocates the required sample storage for the color and depth buffers according to glfwWindowHint(GLFW_SAMPLES, 4), i.e. I thought those two instructions would allocate four samples per pixel for each buffer directly in the default framebuffer.

Now I see that I must allocate a multisample framebuffer manually.

Photon, GC, thank you a lot, guys!

Here’s a guide for GLFW. Sounds like you’re not far off:

Yes, I used that article for my implementation.
But it isn't working, for some reason. :confused: