Composing Alpha only texture images

Hello,

I want to compose a larger GL_ALPHA-only texture from many smaller GL_ALPHA-only textures at run time.

I tried to do this with a pbuffer but have hit a stumbling block: I can’t share textures between a context whose surface is alpha-only and a context whose surface is RGB565.

The following code fails with an EGL_BAD_MATCH error at eglCreateContext if I set the share context. It works fine if I don’t share the contexts, but then the original context can’t access the newly created texture because it resides in a different context.


int width = 256;
int height = 256;

EGLDisplay current_display = eglGetCurrentDisplay();
if(current_display == EGL_NO_DISPLAY)
{
	printf("NO DISPLAY\n");
	return;
}

EGLContext current_context = eglGetCurrentContext();
if (current_context == EGL_NO_CONTEXT)
{
	printf("NO CONTEXT\n");
	return;
}

EGLSurface current_surface = eglGetCurrentSurface(EGL_DRAW);
if (current_surface == EGL_NO_SURFACE)
{
	printf("NO SURFACE\n");
	return;
}

EGLint num_config;
EGLint conflist[] = {
    EGL_ALPHA_SIZE, 8,
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_NONE
};

EGLConfig  config;
if (!eglChooseConfig(current_display, conflist, &config, 1, &num_config) || num_config != 1) {
    printf("Failed to get configuration: %x\n", eglGetError());
    return;
}

EGLint pbuf_attribs[] = { 
    EGL_WIDTH,          width,
    EGL_HEIGHT,         height,
    EGL_NONE
};

EGLSurface surface = eglCreatePbufferSurface( current_display, config, pbuf_attribs ); 
if( surface == EGL_NO_SURFACE ) {
    printf("Failed to create surface: %x\n", eglGetError());
    return;
}

EGLContext context = eglCreateContext( current_display, config, current_context, NULL );
if( context == EGL_NO_CONTEXT ) {
	printf("Context creation error: %x\n", eglGetError());
	return;
}

if (!eglMakeCurrent(current_display, surface, surface, context))
{
	printf("Unable to make the pbuffer context current: %x\n", eglGetError());
	return;
}

// DO SOME RENDERING HERE

GLuint texture;
glGenTextures(1, &texture);

glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);

glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 0, 0, width, height, 0);

if (!eglMakeCurrent(current_display, current_surface, current_surface, current_context))
{
	printf("Unable to make the main context current: %x\n", eglGetError());
	return;
}

eglDestroyContext(current_display, context);
eglDestroySurface(current_display, surface);

Does anybody know a way I can share textures between contexts, or can you suggest another way to do this? My platform specs are as follows:

EGL Vendor: Imagination Technologies
EGL Version: 1.2 build 1
GLES Vendor: Imagination Technologies
GLES Renderer: PowerVR MBXLite with VGPLite
GLES Version: OpenGL ES-CM 1.1

Thank you,
Ian

Hi Ian,

The first thing to note is that this platform doesn’t support alpha-only pbuffers. You will actually get an RGBA8 config when asking for 8 alpha bits. Due to the added space and bandwidth consumption this may not be what you want.
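If you do want to keep pursuing the pbuffer route, one thing worth trying (a sketch only, not verified on this platform): eglCreateContext can return EGL_BAD_MATCH when the share context belongs to an incompatible config, so requesting a pbuffer config whose color channels match your RGB565 window surface, rather than an alpha-only one, may allow the two contexts to share:

```c
/* Sketch: ask for a pbuffer config matching the RGB565 window surface
 * instead of an alpha-only one; matching configs are more likely to be
 * share-compatible and to avoid EGL_BAD_MATCH at eglCreateContext. */
EGLint conflist[] = {
    EGL_RED_SIZE,     5,
    EGL_GREEN_SIZE,   6,
    EGL_BLUE_SIZE,    5,
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_NONE
};
```

The trade-off is the same as with the RGBA8 config: you pay for color channels you don’t need.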

Do you just want to blit those smaller textures to the larger one (i.e. just 1:1 mapping) or do you need scaling/rotation/filtering? Do you have access to the alpha image data or just the texture objects? How often do you need to perform this operation?

Sounds very much like rendering static text to a texture… :)

You hit the nail on the head. It is for static text rendering, although I also need a similar technique to compose static image textures by layering other RGBA textures.

Initially this will be for blitting smaller textures into a larger one with a 1:1 mapping and I have direct access to the alpha pixels themselves. This operation will be sparse but needs to happen at run time. For example, when scrolling through a large list of text it would be good to cache each line while it is on screen and then possibly discard it when it is no longer visible.

I’d prefer to do it in as little space and time as possible, but I’d have to profile it to see if it is acceptable.

Thank you,
Ian

Have you tried blitting the glyphs to an image in memory using the CPU, then uploading this to OpenGL ES? Though the upload operation itself may be costly.
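The CPU-side blit itself is just row-wise copies into an 8-bit line image; a minimal sketch (the `Glyph` struct and `blit_glyph` name are invented for illustration, and clipping is omitted):

```c
#include <string.h>

/* Hypothetical glyph: an 8-bit alpha bitmap plus its dimensions. */
typedef struct {
    const unsigned char *pixels; /* w * h alpha bytes, row-major */
    int w, h;
} Glyph;

/* Copy a glyph into an 8-bit line image at (x, y).  Once the whole
 * line is composed, it can be uploaded in one call, e.g.
 * glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, line_w, line_h, 0,
 *              GL_ALPHA, GL_UNSIGNED_BYTE, line). */
static void blit_glyph(unsigned char *line, int line_w,
                       const Glyph *g, int x, int y)
{
    for (int row = 0; row < g->h; ++row)
        memcpy(line + (y + row) * line_w + x,
               g->pixels + row * g->w, (size_t)g->w);
}
```

One upload per line should be much cheaper than one upload per glyph, since the per-call driver overhead is paid once.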

How many glyphs can fit on the screen at once? If your character set fits into one texture it may be fast enough to render each as two textured triangles, with all triangles cached for a single draw call.
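To sketch the two-triangles-per-glyph idea (the `GlyphInfo` metrics and names here are invented for illustration): each character appends six interleaved x/y/u/v vertices to one array, which can then be drawn in a single glDrawArrays call:

```c
/* Hypothetical per-glyph metrics for a glyph in an atlas texture. */
typedef struct {
    float u0, v0, u1, v1; /* texture coordinates in the atlas */
    float w, h;           /* glyph size in pixels */
    float advance;        /* pen advance in pixels */
} GlyphInfo;

/* Append one glyph quad as two triangles (6 vertices, x/y/u/v each)
 * at pen position (x, y); returns the advanced pen x.  The filled
 * array can be fed to glVertexPointer/glTexCoordPointer and drawn
 * once with glDrawArrays(GL_TRIANGLES, ...). */
static float emit_quad(float *v, const GlyphInfo *g, float x, float y)
{
    float x1 = x + g->w, y1 = y + g->h;
    float quad[6][4] = {
        { x,  y,  g->u0, g->v0 }, { x1, y,  g->u1, g->v0 },
        { x,  y1, g->u0, g->v1 }, { x1, y,  g->u1, g->v0 },
        { x1, y1, g->u1, g->v1 }, { x,  y1, g->u0, g->v1 },
    };
    for (int i = 0; i < 24; ++i)
        v[i] = ((float *)quad)[i];
    return x + g->advance;
}
```

Batching all glyphs of a string into one array avoids a state change per character, which is usually where the per-glyph cost comes from.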

Unfortunately our character set is huge, as we have to support all CJK plus Western languages, so it is unlikely to fit into one texture. I’ve tried caching a few megabytes of the most-used glyphs as textures on the GPU with good results. By having each character as its own texture, the upload is fairly cheap every time I add a new character, as long as not too many new characters are introduced in a single frame. I then render each character as a rectangle (two triangles).

The problem with this method is that performance dips considerably when rendering many characters at once, so I’d like to prerender lines of text and save those as one texture while they’re on the screen. This should theoretically cost one frame for each new line that appears on the screen.

I’ve tried blitting the glyphs to an image and uploading them as a line but have found the upload performance to be quite bad.
