Increasing Frame Rate (Frames per Second)

Is it possible to take each 24-bit color frame of video and display each bit sequentially as separate frames, thus splitting each frame into 24 frames?

Note: I just want to know if OpenGL can do this. If yes, how?

I am definitely a beginner and really appreciate the help from the ‘gurus’ out there.

Can you take a 24-bit image and create 24 images containing the bits of the original? Yes. Would this have any particular meaning? Not really.

I’m not sure what it is you’re intending to accomplish here.

I have a senior design project coming up, and I am planning to create a hologram. I read a few papers about them, and most of them mentioned really, really high frame rates, like 4000 fps. I know that's not useful for an ordinary display device, but for 3D holographic devices we need a very high fps.

Will you please tell me how to do it?

Decomposing a 24-bit image into 24 1-bit images will not magically improve your framerate, because no individual frame will contain any actually useful information; you'll only get a complete image out of every 24 frames.

If you need 4000 fps (which I’m fairly sure is beyond the range of human perception, even if there are display devices that can show that), then you will need to render very little. Or have some incredibly beefy hardware and render very little.

I think if you would read the following pdf, you would better understand my goal…

http://gl.ict.usc.edu/research/3DDisplay/3DDisplay_USCICT_SIGGRAPH2007.pdf

Please read page 2, the paragraph “High-Speed Projector”, and also see Fig. 3.

So, what you’re saying is that you have 24 binary images, and you want to use OpenGL to package them into a single RGB image for “display”? Yes, you can do that with shaders (though you’ll need at least two passes).

I want exactly the opposite of what you are saying.
I have a 24-bit image/frame that I want to decompose into 24 separate frames, displayed sequentially.

OR as the paper states
“Instead of
rendering a color image, the FPGA takes each 24-bit color frame of
video and displays each bit sequentially as separate frames”

But that happens in the FPGA array; that’s not where the paper was using OpenGL.

Yes, you can use shaders to do this decomposition, but it’s not exactly the most efficient use of your GPU. If you had that image on the CPU, it’d probably be faster for you to do the decomposition yourself.

I am trying to bypass the FPGA hardware, as I am trying to build this system to be as cheap and portable as possible. That's why I asked if there is a command in OpenGL that just takes a 24-bit color image and shreds it into 24 fundamental layers.

Also, will you please explain this further
“If you had that image on the CPU, it’d probably be faster for you to do the decomposition yourself.”

Thanks

“That’s why I asked if there is a command in OpenGL that just takes a 24-bit color image and shreds it into 24 fundamental layers.”

There’s not a “command” that will do it; you have to write shader code for it.

But if you’re making a cheap, portable system, why are you relying on GPUs at all? Particularly shader-capable ones. That requires at least modestly advanced hardware. Any decent ARM chip should be able to handle what you need.

“Also, will you please explain this further”

I don’t know what part of that statement is unclear. 24-bit images are a sequence of 24-bit integers. You want to break them up into 24 images, where each pixel in the destination image represents a bit from the original. You can write that code in C/C++. Or any language for that matter; it’s just some bit manipulation.

Thanks for your reply. I know how to do image manipulation in C. So, I will just write a function and send 24 fps to my C program in real time.

Moreover, will you please briefly explain what ‘shaders’ are?

There is the Wiki
http://www.opengl.org/wiki/Shading_languages:_General

and
http://www.opengl.org/wiki/Shading_languages