Confusing points in the spec (blending, clip paths, etc.)

I am confused regarding the actual implementation of certain OpenVG functionality. A few of the definitions in the spec seem very strange to me.

1) VG_BLEND_SRC

I've browsed dozens of pages on the web, and the reference images for SRC or DST with semi-transparent paint always show the src (or dst, respectively) pixels actually mixed with the background (e.g. SVG Open). However, the way the SRC function is defined in the OpenVG specification is:

C’r = C’s * 1 + C’d * 0

Now, since C’ is premultiplied we can write this in non-premultiplied space like this:

Cr = (Cs * As * 1) / (As * 1)

which obviously simplifies to

Cr = Cs

But this means that the source would never be mixed with the background as the reference images usually show. Instead, source alpha is completely ignored and the pixel color is copied directly onto the surface. The function on the page I linked is different, though: it multiplies the premultiplied source color by the source alpha one more time, which gives the right result.

I think the big problem with the current specification is that it doesn't include any images of the results of the different blending operations. So I wonder… what's the right function to use? If you ask me, Cr = Cs is not of much use anyway.
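
To make the difference concrete, here is a small sketch in plain C (the struct and function names are made up; the "weighted" variant is just my reading of the formula on that page, not anything from the spec):

```c
#include <stdio.h>

typedef struct { float r, g, b, a; } Color;   /* premultiplied RGBA */

/* VG_BLEND_SRC as the spec defines it: C'r = C's * 1 + C'd * 0 */
static Color blend_src_spec(Color src, Color dst)
{
    (void)dst;      /* the destination is ignored entirely */
    return src;     /* color AND alpha are copied through unmodified */
}

/* What the reference images seem to show: the (premultiplied) source is
 * weighted by its own alpha once more and mixed with the destination.
 * This is NOT what the spec says. */
static Color blend_src_weighted(Color src, Color dst)
{
    Color out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a;
    return out;
}

int main(void)
{
    Color src = { 0.5f, 0.0f, 0.0f, 0.5f };   /* half-transparent red, premultiplied */
    Color dst = { 1.0f, 1.0f, 1.0f, 1.0f };   /* opaque white background */
    Color s = blend_src_spec(src, dst);       /* -> (0.50, 0.00, 0.00, 0.50) */
    Color w = blend_src_weighted(src, dst);   /* -> (0.75, 0.50, 0.50, 0.50) */
    printf("spec SRC:     %.2f %.2f %.2f %.2f\n", s.r, s.g, s.b, s.a);
    printf("weighted SRC: %.2f %.2f %.2f %.2f\n", w.r, w.g, w.b, w.a);
    return 0;
}
```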

2) Clip paths

The specification says OpenVG is intended to be used for SVG viewers and the like. Then how is one supposed to implement clipping paths with it? There is no blending mode that changes only the target alpha, which we could use to draw the clipping shape into the alpha plane and then clip via the SRC_IN operator.

The only way to do it is to actually make an image the render target, draw the path onto it, then revert to the old render target and compose that image into the existing mask on the mask surface. Why such a limitation, when it would be so easy to integrate this functionality into the spec? The first and easiest thing would be to provide a way of blocking output to certain color planes, like the OpenGL glColorMask() function does.
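
For reference, a rough sketch of that workaround, assuming an EGL 1.2 implementation that supports EGL_OPENVG_IMAGE client buffers; the handles (dpy, config, ctx, mainSurface, clipPath) are placeholders and error checking is omitted:

```c
#include <EGL/egl.h>
#include <VG/openvg.h>

/* Render clipPath into an alpha-only VGImage via a pbuffer, then intersect
 * that image with the drawing-surface mask. */
void apply_clip_path(EGLDisplay dpy, EGLConfig config, EGLContext ctx,
                     EGLSurface mainSurface, VGPath clipPath,
                     int width, int height)
{
    /* 1. An alpha-only image to hold the rasterized clip shape. */
    VGImage clipImage = vgCreateImage(VG_A_8, width, height,
                                      VG_IMAGE_QUALITY_NONANTIALIASED);

    /* 2. Wrap the image in a pbuffer so it can be used as a render target. */
    EGLSurface clipSurface = eglCreatePbufferFromClientBuffer(
        dpy, EGL_OPENVG_IMAGE, (EGLClientBuffer)clipImage, config, NULL);

    /* 3. Render the clip path into the image (fill paint alpha = coverage). */
    eglMakeCurrent(dpy, clipSurface, clipSurface, ctx);
    VGfloat clear[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    vgSetfv(VG_CLEAR_COLOR, 4, clear);
    vgClear(0, 0, width, height);
    vgDrawPath(clipPath, VG_FILL_PATH);

    /* 4. Back to the real surface: fold the image into the existing mask. */
    eglMakeCurrent(dpy, mainSurface, mainSurface, ctx);
    eglDestroySurface(dpy, clipSurface);
    vgMask(clipImage, VG_INTERSECT_MASK, 0, 0, width, height);
    vgSeti(VG_MASKING, VG_TRUE);

    vgDestroyImage(clipImage);
}
```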

3) Color interpolation

In section 3.4.3 of the spec it says:

“In OpenVG, color interpolation takes place in premultiplied format in order to obtain correct results for translucent pixels.”

However, I see no reason why interpolating in premultiplied format would give better results compared to non-premultiplied interpolation. On the homepage of the AmanithVG implementation, they even state that:

“Interpolating premultiplied color stops is not equal, nor correct, to produce premultiplied colors from the interpolation output. The example below shows a linear gradient made of three stops in the format (t,r,g,b,a): (0.0, 1.0, 1.0, 1.0, 1.0) - (0.5, 0.0, 1.0, 1.0, 0.0) - (1.0, 1.0, 1.0, 1.0, 1.0)”

And I think it is more than obvious why this is not the right thing to do. If you premultiply a color with alpha = 0, it produces the RGBA color (0,0,0,0), which discards all the actual color information and prevents it from affecting the interpolated values. So what's up with this?

[edit: a typo]

VG_BLEND_SRC is the closest thing we have to a way of turning blending off (although there is that annoying "division by 0 = 0" rule to make life interesting). The color you paint with is the color you get. As such, in any software implementation this mode should be the fastest (blending due to anti-aliasing is independent of this). If you want a regular blending mode, just leave the default setting, VG_BLEND_SRC_OVER.

Rendering into an image (pbuffer or suchlike) and then using it as an alpha mask is indeed the only way I too see to implement clip paths. Clip paths, though, if I am not mistaken, are part of the full SVG spec, which includes many other things that are also difficult with OpenVG (group opacity, for example [and, pre OpenVG 1.0.1, dashing]). In addition, full SVG viewers have to be able to parse things like style sheets and JavaScript, making them rather bulky.
Because of all this, SVG Tiny was developed, which doesn't have all the features of full SVG. I believe clip paths are one of the features of the full spec that did not make it into Tiny.

I don't know why Khronos made the choices it did (some of the color conversion/format choices are even more questionable). It's the way it is.
I guess the thinking was that translucent is translucent, and interpolating some gray scale with something translucent and getting, say, blue must have seemed weird to them. shrug

But this means that the source would never be mixed with the background as the reference images usually show.

That’s true… It never blends with the background, and the alpha is ignored, or better said: The alpha is copied onto the drawing surface unmodified. If you want blending use VG_BLEND_SRC instead.

what's the right function to use? If you ask me, Cr = Cs is not of much use anyway.

It's useful if you want to draw images or opaque paths. Rendering without alpha blending (i.e. a straight pixel copy) is faster since there is no need to read the backbuffer.

However, I see no reason why interpolating in premultiplied format would give better results compared to non-premultiplied interpolation.

If you interpolate between two colors, a green with 100% transparency and a 100% opaque white, you will get all shades of semi-transparent light greens. This is not correct, since the green shouldn't bleed into the interpolated result. The green pixel is fully transparent after all, and 100% transparent pixels don't have a color.

Premultiplication prior to interpolation gets rid of these artifacts.
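
To make this concrete, here is a small sketch (plain arithmetic, not OpenVG calls; the names are made up) that interpolates halfway between a fully transparent green and an opaque white in both spaces:

```c
#include <stdio.h>

typedef struct { float r, g, b, a; } Color;

static float lerp(float x, float y, float t) { return x + (y - x) * t; }

/* Interpolate the raw (non-premultiplied) channels directly. */
static Color lerp_nonpre(Color c0, Color c1, float t)
{
    Color out = { lerp(c0.r, c1.r, t), lerp(c0.g, c1.g, t),
                  lerp(c0.b, c1.b, t), lerp(c0.a, c1.a, t) };
    return out;
}

/* Premultiply, interpolate, then un-premultiply for display. */
static Color lerp_premul(Color c0, Color c1, float t)
{
    Color p0 = { c0.r * c0.a, c0.g * c0.a, c0.b * c0.a, c0.a };
    Color p1 = { c1.r * c1.a, c1.g * c1.a, c1.b * c1.a, c1.a };
    Color p  = lerp_nonpre(p0, p1, t);          /* lerp the premultiplied values */
    if (p.a > 0.0f) { p.r /= p.a; p.g /= p.a; p.b /= p.a; }
    return p;
}

int main(void)
{
    Color green = { 0.0f, 1.0f, 0.0f, 0.0f };   /* fully transparent green */
    Color white = { 1.0f, 1.0f, 1.0f, 1.0f };   /* opaque white */
    Color a = lerp_nonpre(green, white, 0.5f);  /* -> (0.5, 1.0, 0.5, 0.5): green bleeds in */
    Color b = lerp_premul(green, white, 0.5f);  /* -> (1.0, 1.0, 1.0, 0.5): no green bleed */
    printf("non-premultiplied: %.2f %.2f %.2f %.2f\n", a.r, a.g, a.b, a.a);
    printf("premultiplied:     %.2f %.2f %.2f %.2f\n", b.r, b.g, b.b, b.a);
    return 0;
}
```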

But this means that the source would never be mixed with the background as the reference images usually show.

That's true… It never blends with the background, and the alpha is ignored, or better said: The alpha is copied onto the drawing surface unmodified. If you want blending use VG_BLEND_SRC instead.

err… did you mean VG_BLEND_SRC_OVER?

However, I see no reason why interpolating in premultiplied format would give better results compared to non-premultiplied interpolation.

If you interpolate between two colors, a green with 100% transparency and a 100% opaque white, you will get all shades of semi-transparent light greens. This is not correct, since the green shouldn't bleed into the interpolated result. The green pixel is fully transparent after all, and 100% transparent pixels don't have a color.

Well, it just depends on how you look at things. But interpolating in premultiplied space makes some things even harder. Consider this example:
you want to add a fade-out effect to an image by drawing it with VG_DRAW_IMAGE_MULTIPLY and a linear gradient which extends from its bottom to its top. Since you only want to affect the image alpha (transparency), you set one end of the linear gradient to (1,1,1,1) and the other to (1,1,1,0). When interpolating in non-premultiplied color space this yields the wanted result: all the colors of the image are preserved and only their alpha value changes gradually. But if you interpolate in premultiplied space, the end color becomes (0,0,0,0), and this mixes some unwanted black shades into the image at the end of the gradient. Why make life simple when it can be difficult, eh?
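
For illustration, roughly what that setup looks like in code (a sketch only; the VGImage `image` and its height are assumed to exist, and the function name is made up):

```c
#include <VG/openvg.h>

/* Fade an image out towards its top edge: multiply it by a white gradient
 * whose alpha goes from 1 at the bottom to 0 at the top. */
void draw_faded_image(VGImage image, VGfloat imgHeight)
{
    VGPaint fade = vgCreatePaint();
    vgSetParameteri(fade, VG_PAINT_TYPE, VG_PAINT_TYPE_LINEAR_GRADIENT);

    /* Gradient line from the bottom of the image to its top: (x0,y0,x1,y1). */
    VGfloat line[4] = { 0.0f, 0.0f, 0.0f, imgHeight };
    vgSetParameterfv(fade, VG_PAINT_LINEAR_GRADIENT, 4, line);

    /* Two stops, each {offset, R, G, B, A}: opaque white -> transparent white. */
    VGfloat stops[10] = {
        0.0f, 1.0f, 1.0f, 1.0f, 1.0f,
        1.0f, 1.0f, 1.0f, 1.0f, 0.0f
    };
    vgSetParameterfv(fade, VG_PAINT_COLOR_RAMP_STOPS, 10, stops);

    /* Use the gradient as fill paint and multiply the image by it. */
    vgSetPaint(fade, VG_FILL_PATH);
    vgSeti(VG_IMAGE_MODE, VG_DRAW_IMAGE_MULTIPLY);
    vgDrawImage(image);

    vgDestroyPaint(fade);
}
```

(If the implementation exposes the VG_PAINT_COLOR_RAMP_PREMULTIPLIED paint parameter from the later spec revisions, setting it to VG_FALSE should give the non-premultiplied stop interpolation wanted here.)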

Premultiplication prior to interpolation gets rid of these artifacts.

In this case these are clearly not "artifacts" but the normal way one would expect things to work. It just seems to me that this API is full of tiny details that make it so much harder and more limiting for the user, instead of giving a broad range of possibilities.

Hi Ileben,

err… did you mean VG_BLEND_SRC_OVER?

Yep. That's the one.

you want to add a fade-out effect to an image by drawing it with VG_DRAW_IMAGE_MULTIPLY and a linear gradient which extends from its bottom to its top. Since you only want to affect the image alpha (transparency), you set one end of the linear gradient to (1,1,1,1) and the other to (1,1,1,0). When interpolating in non-premultiplied color space this yields the wanted result: all the colors of the image are preserved and only their alpha value changes gradually. But if you interpolate in premultiplied space, the end color becomes (0,0,0,0), and this mixes some unwanted black shades into the image at the end of the gradient. Why make life simple when it can be difficult, eh?

When I started to dig into OpenVG I was confused and didn't understand the whys and hows of Porter-Duff blending either. In the end, it works as well as good old alpha blending, and even better in some cases.

Here is an alpha-blending scenario done the non-premultiplied way:

Texel: a single pixel of the image; opaque red (1, 0, 0, 1)
Color: a single color from the gradient; half-transparent white (1, 1, 1, 0.5)
Dest: a single pixel on the drawing surface; opaque white (1, 1, 1, 1)

1. Multiply Texel and Color to get the Source-Color:
Src = Texel * Color = (1, 0, 0, 0.5)

2. Plug into the alpha-blend equation
Output = Src * Src.Alpha + Dest * (1-Src.Alpha) 

3. 
Output = (1, 0.5, 0.5), just what you expected.

Here’s the same with premultiplied alpha blending:


1. We need all colors in premultiplied format. This affects only Color, since all other colors are opaque. During rendering this step is not necessary, since the colors are premultiplied prior to blending.

Color = (1,1,1,0.5) -> Premultiply -> (0.5, 0.5, 0.5, 0.5)

2. Multiply Texel and Color to get the Source-Color:
Src = Texel * Color = (0.5,0,0,0.5)

3. Plug into the alpha-blend equation
Output = Src + Dest * (1-Src.Alpha) 

4.
Output = (1, 0.5, 0.5), the same as if done the old way

Mathematically it's the same; the only difference is that the multiplication with src-alpha is done prior to the alpha blending. However, some things that are not possible with good old alpha blending are now doable. A lot of composition tricks rely on this.
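
To double-check, here are the two calculations written out in plain C (just the arithmetic, nothing OpenVG-specific; the struct and function names are made up):

```c
#include <stdio.h>

typedef struct { float r, g, b, a; } Color;

/* Classic non-premultiplied blend:
 * RGB = Src*Src.a + Dst*(1 - Src.a), A = Src.a + Dst.a*(1 - Src.a) */
static Color blend_nonpre(Color src, Color dst)
{
    Color out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}

/* Porter-Duff SRC_OVER on premultiplied colors: Out = Src + Dst*(1 - Src.a) */
static Color blend_premul(Color src_pre, Color dst_pre)
{
    Color out;
    out.r = src_pre.r + dst_pre.r * (1.0f - src_pre.a);
    out.g = src_pre.g + dst_pre.g * (1.0f - src_pre.a);
    out.b = src_pre.b + dst_pre.b * (1.0f - src_pre.a);
    out.a = src_pre.a + dst_pre.a * (1.0f - src_pre.a);
    return out;
}

int main(void)
{
    Color src     = { 1.0f, 0.0f, 0.0f, 0.5f };  /* Texel * Color, non-premultiplied */
    Color src_pre = { 0.5f, 0.0f, 0.0f, 0.5f };  /* the same, premultiplied          */
    Color dst     = { 1.0f, 1.0f, 1.0f, 1.0f };  /* opaque white: premul == non-premul */

    Color a = blend_nonpre(src, dst);            /* -> (1.0, 0.5, 0.5, 1.0) */
    Color b = blend_premul(src_pre, dst);        /* -> (1.0, 0.5, 0.5, 1.0) */
    printf("non-premultiplied: %.2f %.2f %.2f %.2f\n", a.r, a.g, a.b, a.a);
    printf("premultiplied:     %.2f %.2f %.2f %.2f\n", b.r, b.g, b.b, b.a);
    return 0;
}
```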

For example, think about an application with a very complex, transparent, multi-layer GUI drawn over a movie. With normal alpha blending you have to draw the GUI every frame, layer by layer, even if the GUI itself does not change.

With Porter-Duff blending you can draw the GUI into an offscreen image and only update it (or a part of it) when something changes. The offscreen image can then be blended onto the movie at any time with VG_BLEND_SRC_OVER. The result will be the same, but with much less work. With Porter-Duff blending you're able to store the composition of multiple transparent layers in an offscreen buffer and apply it to any image later. With normal alpha blending you can't do that.
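
The reason this works is that SRC_OVER on premultiplied colors is associative, so composing the layers first and blending the cached result later gives the same pixels. A small numeric check (the layer colors are arbitrary made-up values):

```c
#include <stdio.h>

typedef struct { float r, g, b, a; } Color;   /* premultiplied */

static Color over(Color src, Color dst)       /* Porter-Duff SRC_OVER */
{
    Color out;
    out.r = src.r + dst.r * (1.0f - src.a);
    out.g = src.g + dst.g * (1.0f - src.a);
    out.b = src.b + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}

int main(void)
{
    Color guiTop  = { 0.2f, 0.0f, 0.0f, 0.2f };  /* two translucent GUI layers */
    Color guiBack = { 0.0f, 0.3f, 0.0f, 0.5f };
    Color movie   = { 0.1f, 0.1f, 0.8f, 1.0f };  /* opaque background frame */

    /* Draw everything over the movie, layer by layer, every frame ... */
    Color direct = over(guiTop, over(guiBack, movie));

    /* ... or pre-compose the GUI once and blend the cached result later. */
    Color gui    = over(guiTop, guiBack);
    Color cached = over(gui, movie);

    /* Both print the same result. */
    printf("direct: %.3f %.3f %.3f %.3f\n", direct.r, direct.g, direct.b, direct.a);
    printf("cached: %.3f %.3f %.3f %.3f\n", cached.r, cached.g, cached.b, cached.a);
    return 0;
}
```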

One drawback of premultiplied blending is that you can't make an image less transparent once the alpha has been multiplied in. For your gradient example that's no problem, since the alpha comes from the gradient paint, not the image.

Another drawback is that it's nearly impossible to combine fogging and transparency with Porter-Duff blending, but normal alpha blending isn't that good in this discipline either.

I suggest you take a look at this blog entry. That guy has written up some more on why Porter-Duff is superior to good old alpha blending:

http://home.comcast.net/~tom_forsyth/bl … lpha%5D%5D

Nils

Here’s the same with premultiplied alpha blending:

  1. We need all colors in premultiplied format. This affects only Color, since all other colors are opaque. During rendering this step is not necessary, since the colors are premultiplied prior to blending.

Color = (1,1,1,0.5) → Premultiply → (0.5, 0.5, 0.5, 0.5)

  2. Multiply Texel and Color to get the Source-Color:
    Src = Texel * Color = (0.5,0,0,0.5)
  3. Plug into the alpha-blend equation
    Output = Src + Dest * (1-Src.Alpha)

Output = (1, 0.5, 0.5), the same as if done the old way

So you say (only RGB written):
Output = Src (0.5, 0, 0) + Dest (1, 1, 1) * (1 - Src.Alpha (0.5))
  = (0.5, 0, 0) + (0.5, 0.5, 0.5) = (1, 0.5, 0.5)

But now, that's a premultiplied color, isn't it? What about un-premultiplying it back? I find that last step (un-premultiplying the result of the rendering again) quite hard to do on graphics cards with no shaders…

Thanks for that link! I will have a look at it later in the evening.

Ivan

I played around with the calculation a little more and found an exact case that exposes what was actually confusing me (where premultiplied and non-premultiplied blending differ). In this case I take the same gradient stops as before (white-opaque -> white-transparent) and the same image color you used (red-opaque), but the background this time has the color (0,0,0,0). What changes everything is the 0 alpha in the background. Let's see:


Gradient first stop: (1,1,1,1)
Gradient last stop: (1,1,1,0)

-> Premultiply gradient ->

Gradient first stop premul: (1,1,1,1)
Gradient last stop premul: (0,0,0,0)

-> Gradient lerped at half-way ->

Grad: (0.5, 0.5, 0.5, 0.5)
Tex: (1, 0, 0, 1)
Dst: (0, 0, 0, 0)

1) Multiply texel and color:

Src = Grad * Tex = (0.5, 0, 0, 0.5)

2) Apply BLEND_SRC_OVER:

Out.RGB = Src + Dst * (1 - Src.A)
Out.A = Src.A + Dst.A * (1 - Src.A)
(1 - Src.A) = 0.5

Out.RGB = (0.5, 0, 0) + (0, 0, 0) * 0.5 = (0.5, 0, 0)
Out.A = 0.5 + 0.0 * 0.5 = 0.5

3) Output:

Out.RGBA = (0.5, 0, 0, 0.5) (premultiplied!)
Out.RGBA = (1, 0, 0, 0.5) (non-premultiplied)

Now here's the big difference: the example you gave, with a background of alpha 1, obviously gives output alpha 1, and this doesn't change the un-premultiplied value of the output color.

However, when the background has alpha 0, the old alpha blending would give the same numbers, (0.5, 0, 0, 0.5), but they would be final (non-premultiplied), while the Porter-Duff equation gives that result as a premultiplied color, so the un-premultiplied color is pure red again. In the end this does kind of make sense if you think of "alpha" as "how much does this pixel influence the blended color". So despite the background being black, it still can't influence the partially transparent pixels in the source, because the background alpha is 0, which means "my influence on blending is 0".

Now let's do a quick check with background alpha = 1 (but still a black color):
Out.A = 0.5 + 1 * 0.5 = 1
Out.RGBA = (0.5, 0, 0, 1) (premultiplied), which equals the non-premultiplied value since alpha is 1

After this, I managed to understand what was wrong with my thinking before: people used to old alpha blending (including me) typically don't take the destination alpha into account at all when blending, so we are not used to thinking of it as an influential factor. However, the most important trick of Porter-Duff blending is that even in the SRC_OVER blending mode the background alpha does matter, and this is the key to understanding why it still works just as well as, and sometimes better than, the old alpha blending.

The only thing that bothers me now (regarding my implementation on top of OpenGL) is how to un-premultiply the result once it's in the framebuffer, without custom shaders. To do this, I would have to divide the color in the buffer by its alpha, i.e. multiply by the reciprocal of the alpha. However, with OpenGL's fixed-function blending I can only multiply, and only by the alpha itself, not by its reciprocal. Has anyone got a solution for this?
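
One possible approach without shaders, just as a sketch (the function name is made up, an 8-bit RGBA framebuffer is assumed, and fully transparent pixels are simply left at zero): read the pixels back and do the division on the CPU.

```c
#include <GL/gl.h>
#include <stdlib.h>

/* Un-premultiply one channel with rounding, clamped to 255. */
static unsigned char unpremul(unsigned c, unsigned a)
{
    unsigned v = (c * 255u + a / 2u) / a;
    return (unsigned char)(v > 255u ? 255u : v);
}

/* Read back a premultiplied RGBA8 framebuffer region and un-premultiply it
 * on the CPU. Fully transparent pixels stay (0,0,0,0) since their color
 * cannot be recovered. The caller frees the returned buffer. */
unsigned char *read_unpremultiplied(int x, int y, int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * (size_t)height * 4);
    if (!pixels)
        return NULL;

    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    for (int i = 0; i < width * height; ++i) {
        unsigned char *p = pixels + i * 4;
        if (p[3] == 0)
            continue;
        p[0] = unpremul(p[0], p[3]);
        p[1] = unpremul(p[1], p[3]);
        p[2] = unpremul(p[2], p[3]);
    }
    return pixels;
}
```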

Hi Ileben,

Read up on the post-divide thing in the original Porter-Duff paper. They weren't sure what to do in this case themselves. If you think about it: you are trying to display a half-transparent pixel, and half of the information that you need to display it simply does not exist.

Post-dividing is one way to get around it; filling the transparent pixels with some background color is another.

Or just ignore the problem altogether. That's what I did. Fortunately EGL offers a way to report back to the application what the alpha format looks like. I simply refuse to create render surfaces with EGL_ALPHA_FORMAT = EGL_ALPHA_FORMAT_NONPRE. That way all my rendering surfaces are premultiplied, and the user has to deal with it.
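
For illustration, requesting a premultiplied window surface with that EGL 1.2 attribute looks roughly like this (dpy, config and nativeWindow are assumed to come from the usual EGL setup; error handling omitted):

```c
#include <EGL/egl.h>

/* Request a window surface whose alpha is stored premultiplied. */
EGLSurface create_premultiplied_surface(EGLDisplay dpy, EGLConfig config,
                                        EGLNativeWindowType nativeWindow)
{
    static const EGLint attribs[] = {
        EGL_ALPHA_FORMAT, EGL_ALPHA_FORMAT_PRE,   /* premultiplied alpha (EGL 1.2) */
        EGL_NONE
    };
    return eglCreateWindowSurface(dpy, config, nativeWindow, attribs);
}
```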

Most of the time you don't have half-transparent pixels in a final composition anyway.

Nils