Combine texture data with OpenGL

Hello everyone, I am curious whether there is a standard way to combine the actual pixel data of two textures on the GPU (using standard blending modes).

Let me explain.
Right now, I have a pretty simple Paint-like program where the mouse click acts as a paint brush.
When I draw something, I completely overwrite the previous data for the part of the picture I “painted” over, using glTexSubImage2D.

glTexSubImage2D(
	GL_TEXTURE_2D, // target
	0,             // level
	x_pos,         // xoffset
	y_pos,         // yoffset
	               // note: the internal format cannot be changed by a sub-image update
	brush_width,   // width
	brush_height,  // height
	GL_RGBA,              // format (of the pixel data)
	GL_UNSIGNED_BYTE,     // type
	brush.subimage.data() // rectangular block of data
);

This works OK for rectangular brushes with 100% opacity, but it does not work for a circular brush, because it doesn’t blend the RGBA components when drawing; it simply overwrites them. (E.g. if you draw a 50% alpha image over a 90% alpha image, it overwrites it and all that remains is the 50% alpha image.) I can make a picture if that isn’t clear…

Anyway, so what I’m wondering is the following scenario:
- I have texture A (the actual raster image)
- I have texture B (the brush image)
I want to be able to make some OpenGL API call that adds texture B on top of texture A, as if it were rendered with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); set, but sends the newly computed texture data (let’s call it texture C) back into texture A.

Is such a thing possible? (The alternative is to combine the texture pixels on the CPU side, using the equivalent alpha-blending math. That honestly isn’t the end of the world, and might be what I end up doing anyway if transferring data from the GPU back to the CPU is too slow, but I’m really just curious what my options here are.)

Thank you.

Bind texture A to a framebuffer object, render a quad using texture B with blending enabled.

Thank you. I think I see what you mean!

A few things I’m still not sure about.
Before, my texture B (the brush) wasn’t truly a texture but just an array of pixels.
If I actually want to draw the brush image now instead of just using glTexSubImage2D, do I have to make the brush image an actual texture? Or is there a way to send pixel data without it being a texture? Does that make sense?

Second question is,
Am I supposed to still call
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
before rendering to texture, or is that redundant?
Or is that bad because we don’t want to be clearing a texture?

Thanks again.

PS:

So once I correctly set up the framebuffer object, and bind my drawing image texture A to it, I then do

glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

before calling everything that I’d normally call to be able to draw something.
e.g. I still call glUseProgram, glUniform*, glBindBuffer(/* vertex buffer */), glBindBuffer(/* uv buffer */), etc.
Finally, call glDrawArrays(/* draw quad */);
and it will send whatever I draw to the texture I specified (which is the drawing image texture A).

After I do that, I can then call
glBindFramebuffer(GL_FRAMEBUFFER, 0);
to render to the actual screen, and bind my newly rendered texture,
and call glDrawArrays(/* draw quad */);

Does that sound right (in theory)?

I just tried this out… I’m getting some bugs but I think it’s just me forgetting a few things so I’m not asking for help there. Once I experiment more I’ll make a minimum example if I’m still having issues.

Yes.

In legacy OpenGL (prior to 3.1, or later versions using the compatibility profile), you can use glDrawPixels() to render pixel data directly to the framebuffer. But that won’t work with 3.1+ core profile, and is likely to be slower than using a texture.

You can use glClearColor()/glClear() to clear the texture to a solid colour (although you wouldn’t use GL_DEPTH_BUFFER_BIT). How you render the finished texture to the window doesn’t change.

Yes. Framebuffer objects (FBOs) allow you to render to a texture in the same way as to a window. All rendering operations are supported.

I got it working after some more tinkering, again thank you for the help GClements.

I have one more question for whoever can help, as a direct result of this.
Because I’m no longer specifying the destination as texture pixel offsets, I now have to do the coordinate mapping myself (from texture pixel coordinates [0, N] to OpenGL coordinates [-1, 1]).

The underlying texture I’m writing to is always the same size (say it’s N pixels in the x-direction). But right now, for the brush overlay texture’s quad (2 tris), I’m specifying the vertex buffer coordinates manually, e.g. if my underlying texture is N = 4 pixels wide, then each pixel is 0.5 wide in OpenGL coordinates (so the first pixel spans x-coordinates -1.0 to -0.5).

Is this “manual” conversion of pixel coordinates to OpenGL float coordinates the best way to get pixel-accurate quad sizes (calling glBufferSubData with the new size/position each frame), or is it better to call glViewport(brush_x, brush_y, brush_width, brush_height); each frame before drawing my brush texture quad, and then change it back with glViewport(0, 0, canvas_width, canvas_height);?

Are both about the same in terms of efficiency? Or are there other pitfalls to using glViewport instead of glBufferSubData (talking about modern OpenGL)? Thank you.

Right now, my call logic is like this:

one-time initialization:


    glGenBuffers(1, &brush_vertex_buffer_id);
    glBindBuffer(GL_ARRAY_BUFFER, brush_vertex_buffer_id);
    brush_vertex_buffer = {
            // BL tri
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
            -1.0f,  1.0f, 0.0f,

            // TR tri
             1.0f, -1.0f, 0.0f,
             1.0f,  1.0f, 0.0f,
            -1.0f,  1.0f, 0.0f,
        };
    glBufferData(GL_ARRAY_BUFFER,
                 sizeof(GLfloat) * brush_vertex_buffer.size(),
                 brush_vertex_buffer.data(),
                 GL_STATIC_DRAW);

per-frame logic:



glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
// ...
glViewport(brush_pos_x, brush_pos_y, brush_width, brush_height);
// ...
// Note: No call to glBufferSubData needed here
//
glDrawArrays(GL_TRIANGLES, 0, 3 * 2);
glViewport(0, 0, 512, 512);

// Stop rendering to texture,
// continue rendering to main screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);

My question is: instead of making the two glViewport calls, is it better to keep the viewport the same, but do something like this each frame instead:



glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

for (std::size_t i = 0; i < brush_vertex_buffer.size(); ++i) // 18 floats
{
    // Rescale the coordinates so the quad matches brush_width and
    // brush_height in OpenGL coordinates.
    brush_vertex_buffer[i] = brush_vertex_buffer[i] * some_factor + offset; // THIS
}

// ...
// AND THIS, glBufferSubData is now needed each frame because we're making the vertex buffer itself the size of the brush
glBufferSubData(
    GL_ARRAY_BUFFER,   // target,
    0,                 // offset
    sizeof(GLfloat) * 3 * 3 * 2, // size (3 floats x 3 verts x 2 tris)
    brush_vertex_buffer.data()   // data
);
// ...
glDrawArrays(GL_TRIANGLES, 0, 3 * 2);

// Stop rendering to texture,
// continue rendering to main screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);

Right now I’ve switched to calling glViewport instead of glBufferSubData + coordinate conversion, and the pixels still look accurate, but this feels wrong.

Edit: I was explaining things wrong before. The post now accurately reflects that when I use the two glViewport calls, I avoid glBufferSubData and the coordinate-conversion loop (because OpenGL does the converting for me).

A third option would be to avoid both glViewport and glBufferSubData, and instead use uniforms (for the brush’s x, y, width, height) to push the conversion logic into the shader. I don’t know which way is best… is making a uniform call (with an int array of size 4) comparable to the two glViewport calls?
i.e. glUniform4i(GLint location, GLint x, GLint y, GLint width, GLint height); [and you have to do the conversion in the shader] vs glViewport(x, y, width, height); [and OpenGL does the conversion for you].

If I operate under “KISS”, it seems like glViewport is the more beneficial option here.

I wouldn’t change the viewport, but apply a transformation to the vertex coordinates either using the vertex shader or (for the fixed-function pipeline) the model-view and/or projection matrix. With the fixed-function pipeline, you’d typically use glOrtho() to set the projection matrix so that the eye-space coordinate system uses pixel coordinates, then use glTranslate() and glScale() to set the model-view matrix before rendering a unit quad ((0,0) to (1,1)).

Efficiency isn’t an issue when you’re dealing with 4 or 6 vertices (a typical 3D game has thousands of vertices).

I see, thank you GClements. Doing the transformation of vertex coordinates in the shader sounds like a good plan then. So this means I use my own uniforms instead of glViewport. I’ll try out both ways. Thanks.

Hi everyone.

I have a follow-up issue, so I figure it would be more helpful to just continue this thread.
Previously, I was helped in setting up a framebuffer to render to texture. I got this to work great, for 100% opaque colors.

But, when working with opacities (alpha channel) less than 100%, the image’s pixels “decay” after every time I re-render it to texture (i.e. the RGB[A] components degrade towards 0).

Now, my question is: is there a blending option that lets the texture pixels be written without being multiplied by alpha, but otherwise still follows standard glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); blending as you would expect?

In other words, if I render to texture via a framebuffer object, and then do:

glBindTexture(GL_TEXTURE_2D, texture_id);
glGetTexImage(GL_TEXTURE_2D,
              0,
              GL_RGBA,
              GL_UNSIGNED_BYTE,
              pixels.data());

saveImageToFile(filename, pixels, width, height);

I want the pixels of that texture to be their un-multiplied (NOT multiplied by alpha) values.

Example:

Let’s say I’m drawing onto a texture pixel with a blank background, RGBA(0, 0, 0, 0), to keep things simple. (Although I should be able to blend with any kind of underlying RGBA pixel.)

If I load a PNG image with RGBA(0, 200, 255, 254) color, the first time it is rendered to texture every RGBA component gets multiplied by (254/255) and rounded, giving me this sequence each time I repeat the render-to-texture:


Original brush texture:
	RGBA(0, 200, 255, 254)

Texture output after the i'th click (brush stroke) renders:
click 1: RGBA(0, 199, 254, 253)
							  0 = round(  0 * 254/255)
							199 = round(200 * 254/255)
							254 = round(255 * 254/255)
							253 = round(254 * 254/255)
click 2: RGBA(0, 197, 252, 251)
							  0 = round(  0 * 253/255) # uses RGBA of
							197 = round(199 * 253/255) # previous render,
							252 = round(254 * 253/255) # including alpha.
							251 = round(253 * 253/255) # 
click 3: RGBA(0, 194, 248, 247)
							  0 = round(  0 * 251/255) # see the pattern?
							194 = round(197 * 251/255)
							248 = round(252 * 251/255)
							247 = round(251 * 251/255)
click 4: RGBA(0, 188, 240, 239)
							etc.
click 5: RGBA(0, 176, 225, 224)
click 6: RGBA(0, 155, 198, 197)
click 7: RGBA(0, 120, 153, 152)
click 8: RGBA(0, 72, 91, 91)
click 9: RGBA(0, 26, 32, 32)
click 10: RGBA(0, 3, 4, 4)
click 11: RGBA(0, 0, 0, 0)

After 10 iterations, it’s decayed so much that I no longer see the image. I want to stop this from happening at all.
I want the texture’s pixel to be obtained back as (0, 200, 255, 254) when given back to me after being rendered to texture.

Note that after the first click, which colors the original pixel, I zero out the brush texture to prevent further clicks from re-applying the color:


            glBindTexture(GL_TEXTURE_2D, brush_textureID);
            glTexSubImage2D(
                GL_TEXTURE_2D,
                0,        // level (base image is 0)
                0,        // x offset (we're overwriting
                0,        // y offset   every pixel)
                diameter, // width
                diameter, // height
                GL_RGBA,
                GL_UNSIGNED_BYTE,
                blank.data()
            );

Here is my function that renders to texture. I’ve tried using different values in glBlendFunc to no avail.

void MyGLCanvas::render_brush_to_frame()
{
    glUseProgram(shader.getProgram());

    // Set our "myTextureSampler" sampler to use Texture Unit 0
    glUniform1i(uniformTextureID, 0);

    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Render to texture instead
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, brush_vertex_buffer_id);
    glVertexAttribPointer(
        0,          // attribute 0. Matches layout of shader.
        3,          // size (X+Y+Z = 3)
        GL_FLOAT,   // type
        GL_FALSE,   // normalized?
        0,          // stride
        (void*)0    // array buffer offset
    );

    // 2nd attribute buffer : UVs
    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, uvbuffer_unflipped);
    glVertexAttribPointer(
        1,          // attribute. No particular reason for 1, but must match the layout in the shader.
        2,          // size : U+V => 2
        GL_FLOAT,   // type
        GL_FALSE,   // normalized?
        0,          // stride
        (void*)0    // array buffer offset
    );
    glBufferData(GL_ARRAY_BUFFER, sizeof(g_uv_buffer_unflipped_data), g_uv_buffer_unflipped_data, GL_STATIC_DRAW);

    // Draw underlying frame:
    glBindTexture(GL_TEXTURE_2D, textureID);
    glDrawArrays(GL_TRIANGLES, 0, 3 * 2);

    // Draw brush to frame:
    glBindTexture(GL_TEXTURE_2D, brush_textureID);
    glDrawArrays(GL_TRIANGLES, 0, 3 * 2);

    // Stop rendering to texture, continue rendering to main screen:
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);

    std::swap(textureID, renderedTexture);
}

If I change the Blend Func when rendering to texture to be:


    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                        GL_ONE_MINUS_DST_ALPHA, GL_ONE);

it makes it degrade A LOT slower, but it still degrades:

0: RGBA(0, 199, 254, 254)
1: RGBA(0, 198, 253, 254)
2: RGBA(0, 197, 252, 254)
3: RGBA(0, 196, 251, 254)
4: RGBA(0, 195, 250, 254)
5: RGBA(0, 194, 249, 254)
6: RGBA(0, 193, 248, 254)
7: RGBA(0, 192, 247, 254)
8: RGBA(0, 191, 246, 254)
9: RGBA(0, 190, 245, 254)
10: RGBA(0, 189, 244, 254)
11: RGBA(0, 188, 243, 254)

121: RGBA(0, 127, 133, 254)
122: RGBA(0, 127, 132, 254)
123: RGBA(0, 127, 131, 254)
124: RGBA(0, 127, 130, 254)
125: RGBA(0, 127, 129, 254)
126: RGBA(0, 127, 128, 254)
127: RGBA(0, 127, 127, 254)
128: RGBA(0, 127, 127, 254) [steady state]

I can show more code if necessary… I’ve whittled down my code a lot to only show this behavior.

You should probably disable blending before drawing the “canvas”, otherwise the clear colour will “bleed” through portions with partial alpha. I’m assuming that this is what’s causing the decay you’re describing. The canvas doesn’t even need an alpha channel, it could just be GL_RGB (although the implementation may pad it out to 32-bpp for efficiency).

Actually, why do you even need to draw the underlying frame? You’re using 3 textures (brush_textureID, textureID and renderedTexture), swapping textureID and renderedTexture at the end of each frame, and starting off each frame by clearing the new frame and rendering the previous frame over the top of it, right? But why? The “obvious” approach for a paint program is to just create and clear the “canvas” texture when the user selects “New Image”, then draw onto it without ever clearing it again.

For a slightly more advanced approach to dragging a brush around, you might want to accumulate the brush stroke onto an intermediate (monochrome) texture, then use that as a brush to apply the colour (or pattern or whatever) to the canvas. Rendering the brush onto the intermediate texture would use glBlendEquation(GL_MAX) so that overlapping applications of the brush onto a given pixel would record the maximum alpha value rather than accumulating or blending alpha values.

In that case, the canvas would retain its original contents throughout the stroke, only the intermediate texture would be modified after each mouse event. The window would be updated by rendering the canvas and the stroke directly to the window. When the user releases the mouse button, the stroke would be rendered onto the canvas.

Actually, why do you even need to draw the underlying frame?

The underlying frame is what the previous brush strokes are rendered to, in the actual (currently buggy) program. That is, the underlying frame texture already has all the previously painted colors on it. Yeah, the swapping of the textures is probably me over-complicating things at first in a futile attempt to fix them.

Rendering the brush onto the intermediate texture would use glBlendEquation(GL_MAX) so that overlapping applications of the brush onto a given pixel would record the maximum alpha value rather than accumulating or blending alpha values.

Well, actually, I wouldn’t want just GL_MAX, because with “standard” blending, a 50% alpha layer (~128 alpha) put on top of another 50% alpha layer results in an alpha of about ~192. So if I just did max(128, 128), I wouldn’t get 192. Edit: Oh, never mind, you’re talking about within one “brush stroke” (before lifting the mouse); yes, I agree GL_MAX would be beneficial there.

For a slightly more advanced approach to dragging a brush around, you might want to accumulate the brush stroke onto an intermediate texture … When the user releases the mouse button, the stroke would be rendered onto the canvas.

Yep, that’s one part I have planned :). Though I won’t get to that until I solve this first issue.

Okay, thanks for all the help so far, seriously. Perhaps I made my last post too hastily. I’ll further simplify what I have (removing the texture swapping), keep trying things out, and keep doing research, to see if I can get an RGBA(0, 200, 255, 254) pixel to actually render as RGBA(0, 200, 255, 254) to texture, and hopefully not bang my head too much (jk). If I’m truly stumped, I’ll come back and will try to be as clear as possible.

Thank you for your time.

Hello! I am reporting back my progress.

I have come to the conclusion that what I am trying to do cannot be done with the glBlend functions alone. It requires a bit more math in the shaders to do it the correct way, as described on Wikipedia.

The following webpage is very helpful in understanding this:
http://apoorvaj.io/alpha-compositing-opengl-blending-and-premultiplied-alpha.html

Note the complete lack of the A_d in the incorrect [OpenGL] formula. One critical case in which the incorrect approach falls apart is when the destination image is translucent, i.e. when A_d < 1.

Pre-multiplied alpha isn’t just needed for correct bilinear (or similar) filtering, as many articles seem to suggest; it is needed simply for correct calculations whenever Alpha_dst < 1.

Final result:

	Brush RGBA         (  0, 255,   0,  64)
	Clear RGBA         (  0,   0, 255, 128)
	Expected OVER RGBA (  0, 102, 153, 160)
	Actual   OVER RGBA (  0,  64,  96, 160) // 1st pass
	Actual   OVER RGBA (  0, 102, 153, 160) // 2nd pass, with de-multiplication

Here’s an excerpt from the code, maybe it will help someone.
Note: some boilerplate code is also adapted from LearnOpenGL - Framebuffers

void MyGLCanvas::render_brush_to_frame()
{
    //
    // 1. Render destination (bottom) image onto frame buffer using pre-multiplication shader
    //
    glDisable(GL_BLEND);
    glUseProgram(premult_shader.getProgram());

    // Set our "myTextureSampler" sampler to use Texture Unit 0
    glUniform1i(premult_shader_uniformTextureID, 0);

    // Render to texture instead
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, temp_textureID, 0);

    double clear_r =   0.0 / 255.0; // 50% opacity blue
    double clear_g =   0.0 / 255.0;
    double clear_b = 255.0 / 255.0;
    double clear_a = 128.0 / 255.0;
    glClearColor(clear_r * clear_a, // manually pre-multiplying here
                 clear_g * clear_a, // because glClear doesn't use the
                 clear_b * clear_a, // pre-multiply logic in the shader
                 clear_a );         // normally, we'd be using an actual texture + glDraw call here
    glClear(GL_COLOR_BUFFER_BIT);

    print_rgba("Clear RGBA         ", clear_r, clear_g, clear_b, clear_a);

    double r_o, g_o, b_o, alpha_o;

    a_over_b(brush_red/255.0, brush_green/255.0, brush_blue/255.0, brush_alpha/255.0,
             clear_r,         clear_g,          clear_b,           clear_a,
             r_o,             g_o,              b_o,               alpha_o);

    print_rgba("Expected OVER RGBA ", r_o, g_o, b_o, alpha_o);

    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, brush_vertex_buffer_id);
    glVertexAttribPointer(
        0,          // attribute 0. Matches layout of shader.
        3,          // size (X+Y+Z = 3)
        GL_FLOAT,   // type
        GL_FALSE,   // normalized?
        0,          // stride
        (void*)0    // array buffer offset
    );

    // 2nd attribute buffer : UVs
    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, uvbuffer_unflipped);
    glVertexAttribPointer(
        1,          // attribute. No particular reason for 1, but must match the layout in the shader.
        2,          // size : U+V => 2
        GL_FLOAT,   // type
        GL_FALSE,   // normalized?
        0,          // stride
        (void*)0    // array buffer offset
    );
    glBufferData(GL_ARRAY_BUFFER, sizeof(g_uv_buffer_unflipped_data), g_uv_buffer_unflipped_data, GL_STATIC_DRAW);

    //
    // 2. Render source (top) image onto frame buffer, again using pre-multiplication shader,
    //    but now with blending enabled
    //
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    // Draw brush to frame:
    glBindTexture(GL_TEXTURE_2D, brush_textureID);
    glDrawArrays(GL_TRIANGLES, 0, 3 * 2);

    //
    // 3. Reverse the pre-multiplication process, using the de-multiplication shader
    //

    // The texture we're going to render to.
    // (For brevity, it's (re)created on every call here; in a real
    // program you'd create it once at initialization.)
    glGenTextures(1, &temp2_textureID);

    // "Bind" the newly created texture: all future texture functions will modify this texture
    glBindTexture(GL_TEXTURE_2D, temp2_textureID);

    // Give OpenGL empty image...
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    // Poor filtering. Needed!
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

    // Set "temp_textureID" as our color attachment #0
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, temp2_textureID, 0);

    // Set the list of draw buffers.
    GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
    glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers

    // Always check that our framebuffer is ok
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        std::cout << "ERROR 2: glCheckFramebufferStatus
";
    }

    glDisable(GL_BLEND);

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Blending is disabled above: the de-multiply pass is a straight copy,
    // so it simply overwrites the cleared target.
    glUseProgram(demult_shader.getProgram());

    // Set our "myTextureSampler" sampler to use Texture Unit 0
    glUniform1i(demult_shader_uniformTextureID, 0);

    // Draw the first temp texture to the second temp texture
    glBindTexture(GL_TEXTURE_2D, temp_textureID);
    glDrawArrays(GL_TRIANGLES, 0, 3 * 2);

    // Stop rendering to texture, continue rendering to main screen:
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);

    //writeTextureToFile(temp_textureID, "temp");
    //print_pixel(temp_textureID, 100, 100);
    print_pixel(temp2_textureID, 100, 100);
}

Pre-multiply shader excerpt

in vec2 UV;
out vec4 color;

uniform sampler2D myTextureSampler;

// Pre-multiply alpha shader
void main()
{
	// Scale RGB by alpha; the alpha channel itself is left as-is.
	vec4 col = texture(myTextureSampler, UV).rgba;
	color = vec4(col.r, col.g, col.b, 1.0) * col.a;
}

De-multiply shader excerpt

in vec2 UV;
out vec4 color;

uniform sampler2D myTextureSampler;

// De-multiply alpha shader
void main()
{
	// Undo the pre-multiplication; guard against dividing by zero
	// for fully transparent pixels.
	vec4 col = texture(myTextureSampler, UV).rgba;
	color = (col.a > 0.0) ? vec4(col.rgb / col.a, col.a) : vec4(0.0);
}