OpenGL ES internal buffers (MX31)

OpenGL ES maintains multiple internal color buffers that are ultimately drawn on screen in sequence. Is there any way to access these buffers manually?

Why do I want to access them? Texture upload performance seems to be good for short animations, but not for regular screen repainting, at least when compared to a direct buffer memcpy to the framebuffer (correct me if I am wrong).

Therefore, I would like to use OpenGL ES textures for my animations, but direct memcpys for regular screen repainting/refreshing. Is there ANY way I can make this happen? Thanks for your help.

No, and there are very good reasons for it.


Could you please explain in more detail what you are trying to do? What do you mean by “regular screen repainting”, and what do texture uploads have to do with it?

OK, here’s what I am trying to do:

The application
I am building a UI application for an MX31 device. My application will ALWAYS cover the entire screen. All objects in my application (i.e., buttons, text boxes, etc.) draw themselves as and when required onto a buffer.
Whenever this “screen buffer” has been modified, it is redrawn onto the screen (this happens often). This is what I meant by regular repainting.

Using OpenGLES
Now if I use OpenGL ES, this repainting (i.e., drawing a buffer onto the screen) can happen most efficiently by (and correct me if I am wrong here):

1) glTexSubImage2D’ing the buffer onto a texture
2) glDrawArrays/glDrawTexiOES’ing the quad
3) eglSwapBuffers

Or in other words - uploading a texture onto the screen.
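The three steps above can be sketched roughly as follows. This is a hypothetical helper, assuming a GL ES 1.x context is current and the OES_draw_texture extension is available; the GL/EGL calls are guarded so the pure-C part compiles anywhere. Note that GL ES 1.x commonly requires power-of-two texture dimensions, so a 240x320 screen buffer would be uploaded into a 256x512 texture, with only the visible sub-rectangle updated and drawn each frame:

```c
/* Power-of-two sizing helper for the backing texture. */
static unsigned next_pow2(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

#ifdef HAVE_GLES
#include <GLES/gl.h>
#include <GLES/glext.h>
#include <EGL/egl.h>

/* Per-frame repaint: upload, draw, present (names are hypothetical). */
void repaint(EGLDisplay dpy, EGLSurface surf,
             GLuint tex, const void *pixels, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    /* 1) upload the dirty screen buffer into the texture */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);

    /* 2) draw the w x h sub-rectangle with the draw-texture extension */
    GLint crop[4] = { 0, 0, w, h };
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop);
    glDrawTexiOES(0, 0, 0, w, h);

    /* 3) present the back buffer */
    eglSwapBuffers(dpy, surf);
}
#endif
```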

Without textures
Another way of doing it would be, if I had access to the framebuffer, a direct memcpy from my buffer to the framebuffer. This is a whopping 3 times faster than a texture upload. In this situation, I cannot be using OpenGL ES.
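For comparison, the direct path being described is essentially this (a minimal sketch with hypothetical names):

```c
#include <stdint.h>
#include <string.h>

/* Copy a w x h block of RGB565 pixels row by row, respecting each
 * buffer's line stride (given here in pixels), since the framebuffer's
 * row pitch is often wider than the visible width. */
void blit565(uint16_t *dst, int dst_stride,
             const uint16_t *src, int src_stride,
             int w, int h)
{
    for (int y = 0; y < h; y++)
        memcpy(dst + (size_t)y * dst_stride,
               src + (size_t)y * src_stride,
               (size_t)w * sizeof(uint16_t));
}
```

On the device, `dst` would be the pointer obtained by mmap’ing `/dev/fb0`, and `dst_stride` would come from the `line_length` reported by the `FBIOGET_FSCREENINFO` ioctl (that value is in bytes, so divide by 2 for a 16-bit format).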

Qn: Why do I want to use OpenGL ES?
Ans: My application has some complicated animations, which run smoothly only if implemented with OpenGL ES.

Goal
The problem I am facing now is that when I use opengles, my entire application slows down dramatically.

Therefore, I am looking for a way to draw my screen buffer using a direct memcpy to the framebuffer, but use OpenGL ES textures, etc., only when I need to display an animation.

I hope this makes my problem more understandable. Please do let me know your comments/suggestions.

OpenGL ES is not particularly well suited for “copy buffer to screen” tasks. I’d recommend drawing the whole UI with OpenGL ES. The UI widgets could be textured quads, and when e.g. a button is pressed you simply switch from the “button normal” to the “button pressed” texture. That way you can minimize texture uploads to e.g. text that changes (although you could draw that with OpenGL ES, too). That should be significantly faster, and you could use compressed textures for the static elements, too.

If that is not possible, you could try to create a texture bindable pbuffer that matches your buffer pixel format (what is it, by the way?). Then bind it and upload the texture data. This should reduce the amount of reformatting performed at the expense of rendering performance.

OpenGL ES is not particularly well suited for “copy buffer to screen” tasks.

Thanks for clearing that for me. Is there a particular reason why?

you could use compressed textures for the static elements, too

I have tried to find some info on compressed textures, but haven’t been able to understand it clearly. In what way are compressed textures different from normal textures? Sorry, I know this question is not exactly in line with the subject.

If that is not possible, you could try to create a texture bindable pbuffer that matches your buffer pixel format (what is it, by the way?).
Sounds interesting. I’m using a 5-6-5 16-bit RGB buffer. Why would a “texture bindable pbuffer” reduce reformatting as compared to a normal buffer? I’ll try it anyway, thanks for this suggestion. Really do appreciate it.

If you just want to copy a buffer to the screen, there can be nothing better than a direct copy to the framebuffer.

In OpenGL ES the data needs to be copied to texture memory first. Textures are assumed to be mostly static, thus the GL implementation may reformat the data to optimize the spatial layout for filtering. Simple scanline order is not very good for that.
And to render a quad, GL performs a lot of additional operations (whether in hardware or software) compared to a simple copy, which means increased power draw.
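The “spatial layout” point can be pictured with a toy example: many GPUs store texels in a tiled or swizzled order, such as Z-order (Morton order), so that a 2D filtering footprint touches nearby memory. Whether the MX31’s GPU uses exactly this layout is an assumption; the principle is the same:

```c
#include <stdint.h>

/* Interleave the low 16 bits of x (even bit positions) and y (odd bit
 * positions) to get a Z-order (Morton) index. Texels that are
 * neighbours in 2D end up close in memory, which plain scanline order
 * does not provide -- hence the reformatting on texture upload. */
uint32_t morton2(uint32_t x, uint32_t y)
{
    uint32_t out = 0;
    for (int i = 0; i < 16; i++) {
        out |= ((x >> i) & 1u) << (2 * i);
        out |= ((y >> i) & 1u) << (2 * i + 1);
    }
    return out;
}
```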

I have tried to find some info on compressed textures, but haven’t been able to understand it clearly. In what way are compressed textures different from normal textures? Sorry, I know this question is not exactly in line with the subject.

The image data is just compressed using a lossy block-based encoding. Compressed textures need less space and thus less bandwidth, but depending on the texture contents there may be a reduction of image quality.
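As a rough sense of the bandwidth saving: a block-based format such as ETC1 packs each 4x4 texel block into 8 bytes, i.e. 4 bits per pixel versus 16 for RGB565. Which compressed formats are actually available depends on the GPU and driver, so this is an illustration only:

```c
#include <stddef.h>

/* Size in bytes of a texture compressed with a 4x4-block, 8-bytes-per-
 * block scheme (e.g. ETC1); dimensions are rounded up to whole blocks. */
size_t etc1_size(int w, int h)
{
    return (size_t)((w + 3) / 4) * (size_t)((h + 3) / 4) * 8;
}
```

For a 256x256 texture that is 32 KB instead of the 128 KB an uncompressed RGB565 copy needs, a 4x reduction in storage and bandwidth.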

Sounds interesting. I’m using a 5-6-5 16-bit RGB buffer. Why would a “texture bindable pbuffer” reduce reformatting as compared to a normal buffer? I’ll try it anyway, thanks for this suggestion. Really do appreciate it.

Because, as opposed to normal textures, pbuffers are assumed to contain dynamic data. The reformatting can be expensive and only pays off in an “update once, use many times” scenario.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.