glReadPixels() alternative solution?

Hello Everybody,

is there an alternative way to get the pixels back from a rendered image? I have read some info about using framebuffer objects, but I already use framebuffer objects, and I haven’t found any way other than reading the pixels back with glReadPixels. Is there a faster equivalent for getting the data back?

Thanks for your help.
krikit

It depends on what you mean by ‘back’.

The framebuffer extension is, for example, very handy when you want to render images into textures (or renderbuffers). You can then use those textures like any other texture: in a shader for post-processing effects, for regular texturing, for deferred shading, etc. But if you want the data back on the CPU, that would just be an additional step.
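In case it helps, the render-to-texture setup looks roughly like the sketch below. It assumes GL 3.0-style entry points (with the EXT extension the calls just carry an EXT suffix) and hypothetical width/height variables.

```c
GLuint fbo, colorTex;

/* Create the texture that will receive the rendered image. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* no initial data */

/* Attach it to a framebuffer object as the color buffer. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* incomplete attachment, unsupported format, etc. */
}

/* ... draw the scene here; it ends up in colorTex ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the window */
```

After this, colorTex can be bound like any other texture for a post-processing pass.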

Hello gardin,

by “read back” I mean reading the image data into an unsigned char array and working on that result on the CPU. I want to run a Sobel filter over an image and process the result set on the CPU. It works fine with glReadPixels(), but it isn’t really fast, so I’m looking for another solution.
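What I do is essentially the following (a sketch; width, height and the RGBA format stand in for whatever the real setup uses):

```c
#include <stdlib.h>

/* One byte per channel, RGBA. */
unsigned char *pixels = malloc((size_t)width * height * 4);

glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
glReadPixels(0, 0, width, height,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* glReadPixels blocks until the GPU has finished rendering,
 * which is a large part of why this path feels slow. */
```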

krikit

Well, such a filter should be possible to do directly on the GPU using a shader. That could increase performance a lot, not only because downloading the image to the CPU is slow, but also because the GPU is much faster at this kind of work.

The main idea is usually (I haven’t implemented a Sobel operator myself, but it should be similar) that you draw your whole scene with a framebuffer object bound, thus rendering everything to a texture. After this, you draw a single quad that covers the whole screen (google for “screen-aligned quad” or similar and you should find tons of examples; look at AMD RenderMonkey too, it has some GLSL examples like that you could try out). Then, in the fragment shader, the texture coordinates work as screen coordinates, so by sampling the texture you got from rendering the scene, you have access to any pixel you need.
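To make that concrete, a minimal Sobel fragment shader could look roughly like the sketch below, written as a C string for glShaderSource(). It is untested; sceneTex and texelSize (= 1/width, 1/height) are made-up uniform names, and it assumes the quad’s texture coordinates arrive in gl_TexCoord[0], as with old-style fixed-function texcoords.

```c
static const char *sobelFragSrc =
    "uniform sampler2D sceneTex;\n"
    "uniform vec2 texelSize;\n"
    /* Sample the scene texture at a texel offset, reduce to luminance. */
    "float lum(vec2 off) {\n"
    "    vec3 c = texture2D(sceneTex, gl_TexCoord[0].st + off * texelSize).rgb;\n"
    "    return dot(c, vec3(0.299, 0.587, 0.114));\n"
    "}\n"
    "void main() {\n"
    "    float tl = lum(vec2(-1.0,  1.0));\n"
    "    float tc = lum(vec2( 0.0,  1.0));\n"
    "    float tr = lum(vec2( 1.0,  1.0));\n"
    "    float ml = lum(vec2(-1.0,  0.0));\n"
    "    float mr = lum(vec2( 1.0,  0.0));\n"
    "    float bl = lum(vec2(-1.0, -1.0));\n"
    "    float bc = lum(vec2( 0.0, -1.0));\n"
    "    float br = lum(vec2( 1.0, -1.0));\n"
    /* The two 3x3 Sobel kernels: horizontal and vertical gradient. */
    "    float gx = (tr + 2.0*mr + br) - (tl + 2.0*ml + bl);\n"
    "    float gy = (tl + 2.0*tc + tr) - (bl + 2.0*bc + br);\n"
    "    float edge = length(vec2(gx, gy));\n"
    "    gl_FragColor = vec4(vec3(edge), 1.0);\n"
    "}\n";
```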

After some googling I found http://www.geeks3d.com/20110201/sobel-a … demo-glsl/ ; they say the GLSL source is attached, though I haven’t looked at it.

There are also some ideas in the book GPU Gems 3; the chapter at http://developer.nvidia.com/node/176 is actually about deferred shading, but it also does some edge detection. It uses Direct3D, but the general idea should be the same.

Hope that helps, or at least gives you some ideas!

Hello gardin,

thank you for the information. You are completely right in what you say, and I do run the Sobel, a blur, or any other simple convolution on the GPU in the fragment shader. The idea is to start the whole workflow with, for example, a blur (median or Gaussian) and then run the Sobel convolution on the blurred result. But after the whole workflow I need the picture on the CPU, because I don’t want to show the result on the screen; I want to analyse the data on the CPU.
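To spell out the workflow (everything here is a placeholder sketch: fboA/fboB with attached textures, the two shader programs, drawFullscreenQuad(), width/height and pixels are all made-up names, not my actual code):

```c
/* Pass 1: blur the input image into the texture attached to fboA. */
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glUseProgram(blurProgram);
glBindTexture(GL_TEXTURE_2D, inputTex);
drawFullscreenQuad();

/* Pass 2: run the Sobel over the blurred result, into fboB. */
glBindFramebuffer(GL_FRAMEBUFFER, fboB);
glUseProgram(sobelProgram);
glBindTexture(GL_TEXTURE_2D, texA);   /* output of pass 1 */
drawFullscreenQuad();

/* Only one read-back, at the very end of the GPU workflow. */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```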

And by the way, it doesn’t have to be a Sobel; that was just an example. It would also be interesting to process each camera frame on the GPU, for instance to rotate the picture, because on the GPU that is just a simple texture mapping. But when I read the rotated picture back with glReadPixels it isn’t fast enough, so for now I rotate the image on the CPU instead. (This matters because of the iPhone camera: the captured image is always rotated by -90°, I think.)
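For illustration, the rotation pass can be as little as a quad with the texture coordinates shifted one corner over (a sketch using OpenGL ES 1.x-style client arrays, since that is what the iPhone supported at the time; which corner order you need depends on the rotation direction):

```c
/* Quad corners as a triangle strip: BL, BR, TL, TR. */
static const GLfloat verts[] = {
    -1.0f, -1.0f,    1.0f, -1.0f,
    -1.0f,  1.0f,    1.0f,  1.0f
};
/* Texcoords rotated 90 degrees relative to the usual layout. */
static const GLfloat texcoords[] = {
    1.0f, 0.0f,    1.0f, 1.0f,
    0.0f, 0.0f,    0.0f, 1.0f
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```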

Thank you for your time.

krikit

Hello, I want to know if you ever found a solution for this problem?
