OpenGL and video compatibility

Is it possible to display video in an OpenGL context or, if not, is it possible to display video in a program that uses OpenGL but in a different region of the screen? I am most interested in H.264 playback.

Thanks,

-Qu0ll

Hi Qu0ll,

You need two separate things here. OpenGL can be used to draw video frames.

However, OpenGL cannot decode video formats such as H.264.

So first of all, you will need an H.264 decoder that converts the video frames into RGBA arrays.

Then you need to create a texture from the decoded pixel array.

Finally you draw the texture on a quad inside your OpenGL program.
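
To make the steps above concrete, here is a rough C sketch of the texture and quad parts (the function names `video_init` and `video_draw_frame` are mine; it assumes an existing GL context and uses the old fixed-function pipeline, which matches the era of this thread):

```c
/* Sketch: upload one decoded RGBA frame into a texture and draw it on a
 * quad. `pixels`, `width` and `height` come from your decoder. */
#include <GL/gl.h>

static GLuint video_tex;

void video_init(int width, int height)
{
    glGenTextures(1, &video_tex);
    glBindTexture(GL_TEXTURE_2D, video_tex);
    /* No mipmaps for streaming video, so use plain linear filtering. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Allocate storage once; frames are streamed in with glTexSubImage2D. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

void video_draw_frame(const unsigned char *pixels, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```

This needs a live GL context to run, so treat it as a shape to follow rather than a drop-in implementation.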

Thanks again matchStickMan.

Do you know if there would be any performance issues using OpenGL to display video in this manner?

There are no performance issues using OpenGL to display video. On the contrary: by using OpenGL for this task you benefit from hardware acceleration.
I advise you to take a look at "pixel buffer objects", which are at the moment the fastest way to update texture data frequently.
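
As a sketch of that PBO approach (the function names are mine, and it assumes the buffer-object entry points are available, e.g. through an extension loader, plus a live GL context):

```c
/* Sketch: streaming texture updates through two pixel buffer objects.
 * While the GPU copies out of one PBO, the CPU writes the next frame
 * into the other, so the upload overlaps with decoding. */
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static GLuint pbo[2];
static int frame_index;

void pbo_init(size_t frame_bytes)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, frame_bytes, NULL,
                     GL_STREAM_DRAW);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

void pbo_upload(GLuint tex, int w, int h,
                const unsigned char *frame, size_t frame_bytes)
{
    int cur = frame_index, next = 1 - frame_index;
    frame_index = next;

    /* Start the texture copy from the PBO filled last time... */
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[cur]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0); /* 0 = offset into PBO */

    /* ...and meanwhile fill the other PBO with the new frame. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, frame, frame_bytes);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```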

Quite a few applications are already doing what you want to achieve.

Anyway, I can tell you that it is possible to:

  1. decode pretty much any video format,
  2. texturize the frames,
  3. display them, and
  4. move them around the screen, twist and turn them, use shaders on them, or whatever else you wish to do.

All of the above in real-time.

Once again, I know that for a fact. :)

But be warned, this is some serious work you’re looking at.

Thanks to you both for the answers. You have been most helpful.

Yes, it’s a lot of work ahead but I plan to enjoy it!

-Qu0ll

I implemented video presentation using OpenGL some time ago and
it works quite well. You will have to take care of a few things
though.

Decoding the video: This can be done using e.g. the ffmpeg
libraries or any other means you have available. The main thing
is that you want access to the pixel data - preferably in both
YCbCr and RGB formats (RGB as a fallback).

Using threads to decode the next frame while displaying the
current is usually a good idea but requires some extra work.
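
That decode-ahead idea can be sketched as a small bounded queue between a decoder thread and the display thread. This is my own illustration, with an int standing in for a decoded frame:

```c
/* Sketch: a bounded frame queue. The decoder thread pushes frames while
 * the display thread pops them; mutex + condition variables provide the
 * blocking hand-off in both directions. */
#include <pthread.h>

#define QUEUE_CAP 4

typedef struct {
    int frames[QUEUE_CAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
} FrameQueue;

void fq_init(FrameQueue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void fq_push(FrameQueue *q, int frame)   /* decoder thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_CAP)        /* queue full: wait for display */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->frames[q->tail] = frame;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

int fq_pop(FrameQueue *q)                /* display thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                /* queue empty: wait for decoder */
        pthread_cond_wait(&q->not_empty, &q->lock);
    int frame = q->frames[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return frame;
}

static FrameQueue queue;

static void *decoder_main(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100; i++)
        fq_push(&queue, i);              /* real decoding would go here */
    return NULL;
}
```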

Displaying the video: If you have the data in YCbCr format, you
can usually use a fragment shader (or similar) to save quite a
bit of bandwidth and some processing capacity. If you only have
access to RGB data the display will be much simpler (just update
a texture and draw a quad) but performance will be a bit worse.
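
To put a rough number on the bandwidth saving: 4:2:0 YCbCr carries 1.5 bytes per pixel against 4 for RGBA, so the per-frame upload shrinks by a factor of about 2.7. A quick sketch of the arithmetic (helper names are mine):

```c
/* Upload-size comparison: YCbCr 4:2:0 stores one full-resolution luma
 * plane plus two half-resolution chroma planes, while RGBA needs four
 * bytes per pixel. */
#include <stddef.h>

size_t yuv420_bytes(size_t w, size_t h)
{
    return w * h                  /* Y plane */
         + 2 * (w / 2) * (h / 2); /* Cb + Cr planes */
}

size_t rgba_bytes(size_t w, size_t h)
{
    return w * h * 4;
}

/* For 1920x1080: 3,110,400 vs 8,294,400 bytes per frame. */
```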

Sample conversion shader (using rectangular textures):

/*
 * Conversion shader for YCbCr->RGB (YUV->RGB)
 */

#version 110
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect Ytex, Cbtex, Crtex;

void main(void)
{
    vec3 YCbCr, RGB;
    vec2 xy = gl_TexCoord[0].st;
    YCbCr.r = texture2DRect(Ytex, xy).r - 0.0625;
    YCbCr.g = texture2DRect(Cbtex, 0.5 * xy).r - 0.5;
    YCbCr.b = texture2DRect(Crtex, 0.5 * xy).r - 0.5;
    RGB = YCbCr * mat3(1.1643,  0.0000,  1.5958,
                       1.1643, -0.3917, -0.8129,
                       1.1643,  2.0170,  0.0000);
    gl_FragColor = vec4(RGB, 1.0);
}
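
If you want to sanity-check decoder output on the CPU, the same conversion the shader performs looks roughly like this in C. This is my own reference sketch of the BT.601 "video range" coefficients from the shader; inputs are normalized texel values in [0,1], exactly as the shader samples them:

```c
/* CPU reference for the shader's YCbCr->RGB conversion. */
typedef struct { double r, g, b; } RGBd;

RGBd ycbcr_to_rgb(double Y, double Cb, double Cr)
{
    double y = Y  - 0.0625;  /* remove the 16/255 luma offset */
    double b = Cb - 0.5;     /* center chroma around zero     */
    double r = Cr - 0.5;
    RGBd out;
    out.r = 1.1643 * y              + 1.5958 * r;
    out.g = 1.1643 * y - 0.3917 * b - 0.8129 * r;
    out.b = 1.1643 * y + 2.0170 * b;
    return out;
}
```

Note that the shader's `YCbCr * mat3(...)` is a row-vector-times-matrix product, and GLSL matrix constructors are column-major, so the coefficients written row by row there end up applied exactly as in the formulas above.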

Great, thanks tesla. I am sure that will help me.

-Qu0ll

Adding to what everyone else has said, I don’t know how much you already know, but NVidia released a really useful library a few months back called VDPAU, which provides GPU-accelerated H.264 decoding and playback support.

Projects such as MythTV, MPlayer, Xine, etc. provide support for real-time playback of H.264 via this library.

You might check it out.