Capture live video and render the images with OpenGL

Hi,

I want to capture live video and render the images with OpenGL, in this way:

  1. Capture live video images.
  2. Apply a function to each frame, which applies a color filter, for example.
  3. Render each frame (with the filter applied) in an OpenGL rectangle; a rough sketch of this loop follows after the list.
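To make the question concrete, here is a minimal sketch of the loop I have in mind. The library choices are only assumptions on my part: I use OpenCV's `cv::VideoCapture` for the capture (instead of DirectShow) and GLFW for the window/context, simply because those are the simplest cross-platform options I know of, and the filter shown (a color inversion) is just a placeholder for step 2.

```cpp
// Minimal sketch: capture -> filter -> upload as a GL texture.
// OpenCV and GLFW are example choices here, not requirements.
#include <GLFW/glfw3.h>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // open the default camera
    if (!cap.isOpened()) return 1;

    glfwInit();
    GLFWwindow* win = glfwCreateWindow(640, 480, "live video", nullptr, nullptr);
    glfwMakeContextCurrent(win);

    // One texture, allocated once and re-uploaded every frame.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // OpenCV rows aren't 4-byte padded

    cv::Mat frame;
    cap.read(frame);                      // first frame fixes width/height
    // GL_BGR matches OpenCV's channel order (GL_BGR_EXT on some old headers)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame.cols, frame.rows, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, frame.data);

    while (!glfwWindowShouldClose(win)) {
        if (!cap.read(frame)) break;      // 1. capture

        frame = ~frame;                   // 2. placeholder filter: invert colors

        // 3. re-upload; glTexSubImage2D reuses the existing storage
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.cols, frame.rows,
                        GL_BGR, GL_UNSIGNED_BYTE, frame.data);

        // ... draw the textured rectangle here (the part I already have) ...

        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    return 0;
}
```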

I have heard some terms like “DirectShow” or “buffer”, but does someone here know more specific details about this process, and how to make it work as fast as possible?
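On the speed question: the copy from system memory into the texture seems to be the usual bottleneck, and one technique I keep seeing mentioned is a pixel buffer object (PBO), which lets the driver perform the transfer asynchronously. Here is a rough sketch of what I understand that to look like (I haven't measured it myself); it assumes a desktop GL 2.1+ context and an extension loader such as GLEW, and the function names `createUploadPbo`/`uploadFrame` are mine, not from any library.

```cpp
// Sketch of a PBO-based upload path. Assumes GLEW (or similar) has been
// initialized so glGenBuffers/glMapBuffer resolve, and that the target
// texture from the previous sketch is bound when uploadFrame is called.
#include <GL/glew.h>
#include <cstring>
#include <opencv2/opencv.hpp>

// Allocate a PBO big enough for one frame (call once).
GLuint createUploadPbo(const cv::Mat& frame) {
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER,
                 frame.total() * frame.elemSize(), nullptr,
                 GL_STREAM_DRAW);         // driver-managed streaming memory
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    return pbo;
}

// Call once per frame, after the filter has been applied.
void uploadFrame(GLuint pbo, const cv::Mat& frame) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    // Copy the pixels into driver-owned memory...
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        std::memcpy(dst, frame.data, frame.total() * frame.elemSize());
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

        // ...then kick off the texture update. With a PBO bound, the last
        // argument is a byte offset into the buffer rather than a client
        // pointer, so the transfer can overlap with later CPU work.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.cols, frame.rows,
                        GL_BGR, GL_UNSIGNED_BYTE, nullptr);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```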

I don't need information or comments about applying a texture to a rectangle; it's just everything before that.

PS: anyone who wants information about the OpenGL part of an augmented reality project, send me a private message. I know a bit about it and could help you.