Half suggestion, half reality check

Here it goes…

How hard would it be to have OpenGL output a 3d pixel array as a signal array, in a cube or sphere shape? Instead of rendering a camera view for display, it would capture the surfaces of a set area’s models and terrain, and send them out through some signal or other, preferably one that another program could pick up.

The reason I ask is this: I am working on a laser-based 3d projector that will actually create such a picture, without the need for any special eyewear. All it requires is an LCD, a red, green and blue laser, and one huge interpreting program to send a signal through the DVI port.

Any information would be appreciated!!! (please don’t flame me…)

Not sure what you want exactly, but perhaps you can render to different layers that show the pixels at different depths of your scene. This way you could obtain “voxels” (3d pixels) that you can show with your laser device.

Rendering multiple layers of pixels will be costly, though. But perhaps you are working with scenes that don’t change much, so you can do the rendering once and then just use the stored image for display.

Such an approach would need a 3d texture to which you should render (instead of rendering to the framebuffer). You could also take a series of 2d textures. Render your scene with the appropriate modelview matrix for each layer of the 3d texture. Now you have 3d pixels stored in your 3d texture and you should find an appropriate way to hand them over to your device.
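To make the per-layer idea concrete, here is a minimal sketch of how the depth range could be split into slices, one render pass per slice. The function name, resolution, and depth values are all illustrative assumptions, not anything from this thread:

```python
def slice_planes(near, far, num_slices):
    """Return (slice_near, slice_far) clip-plane pairs for each layer,
    splitting the view volume's depth range into equal slabs."""
    step = (far - near) / num_slices
    return [(near + i * step, near + (i + 1) * step)
            for i in range(num_slices)]

# Each (n, f) pair would be fed into the projection matrix of one pass,
# so only geometry inside that depth slab lands in that layer of the
# 3d texture.
planes = slice_planes(1.0, 9.0, 4)
```

Each pass would then render the full scene with those near/far planes and write the result into the corresponding layer of the 3d texture.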

Would sending the RGB framebuffer on DVI port 1 and the depth buffer on DVI port 2 be enough?

Anyway, I fail to see an actual “suggestion for OpenGL”…
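One wrinkle with the RGB-plus-depth idea: a perspective depth buffer is nonlinear, so the receiving program would have to undo that encoding before placing voxels. A sketch, assuming a standard OpenGL perspective projection (the function name and plane values are illustrative):

```python
def linearize_depth(d, near, far):
    """Convert a [0, 1] depth-buffer sample back to eye-space distance,
    assuming a standard OpenGL perspective projection."""
    z_ndc = 2.0 * d - 1.0  # window-space depth -> NDC depth in [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# A sample of 0.0 maps back to the near plane, 1.0 to the far plane.
```

The near and far plane values would have to be communicated to the interpreting program alongside the two video streams.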

@ZbuffeR
I was hoping that if it were fairly simple to integrate, it would be added, so I wouldn’t be left behind with an older version for this project. Also, the DVI stage comes only after a program runs the data through a big polynomial and sends the proper picture to the LCD. The DVI port is pretty much unrelated to OpenGL at this point.

@Heiko
Another thing to mention is that this will almost definitely leave out most of the background, and it will be very low resolution at first. And yes, the initial images would be premade, but I was hoping for OpenGL compatibility so that you could play any OpenGL game on it.

I can’t see how any OpenGL game could run on such hardware. Games are optimized to avoid processing things that can’t be seen on screen, but with your 3d output device, things are visible that otherwise wouldn’t be.

So to be able to play games on such an output device, the game must be programmed specifically with 3d output in mind (instead of a single 2d view, or two 2d views with slightly different perspectives in the case of shutter glasses and the like).

In theory one could program games for a 3d output device using OpenGL, perhaps with an approach similar to the one I described in my previous post. But as I explained, rendering a 3d output is much more costly than rendering a 2d output (the total surface area of all visible objects is much larger than on a 2d screen).
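To put a rough number on that cost, here is a back-of-the-envelope comparison; the resolution and layer count are made-up figures, not anything from this thread:

```python
# Rough cost comparison, with illustrative (assumed) numbers: filling a
# 2d framebuffer versus filling the same resolution with depth layers.
width, height, depth_layers = 640, 480, 64   # hypothetical device

pixels_per_frame = width * height                  # ordinary 2d render
voxels_per_frame = width * height * depth_layers   # one pass per layer

ratio = voxels_per_frame // pixels_per_frame       # 64x the fill work
```

And that is just fill cost; it ignores the extra geometry processing for surfaces that a 2d renderer would simply cull.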

Long term I’m betting technology will go straight for the brain.

Each sensory apparatus we have (or don’t have) will ultimately be successfully emulated and eventually improved upon, leaving us with only a brain (or two).

But of course that’s before we evolve beyond the need for physical form. I imagine OpenGL will be an altogether different API by then, perhaps a plug-in module of sorts that you install, if you have the lobes for it.

I think people won’t opt for something that invasive due to “Matrix” Paranoia.