I have read that a compute shader can render, but I'm wondering how this works.
Do I have to create the image data for the framebuffer somewhere else, such as in a shader storage buffer, and later copy that data into a framebuffer texture, or can I somehow access a (framebuffer) texture directly in the compute shader?
[QUOTE=john_connor;1284053]but how can i access an image for the output ??
input is clear:
uniform sampler2D mytex;
[/QUOTE]
(I assume that you’ve figured this much out from the other replies, but I’ll add it here for anyone who stumbles across the thread).
Not “texture”, but “image”.
In the shader, use the image2D (etc.) types, accessed via imageLoad(), imageStore(), etc. If you might write the same pixel from different shader invocations, there are also the imageAtomic* functions for atomic read-modify-write operations.
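As a rough sketch, a minimal compute shader writing to an image could look like this (the binding point, format, and work-group size here are arbitrary example choices, not anything mandated):

```glsl
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

// Written via imageStore(); rgba8 is just an example format,
// and binding = 0 must match the unit used in glBindImageTexture().
layout(binding = 0, rgba8) uniform writeonly image2D destTex;

void main() {
    // Integer pixel coordinates: no interpolation, filtering or wrapping.
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    imageStore(destTex, p, vec4(1.0, 0.0, 0.0, 1.0));
}
```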
In the client code, glBindImageTexture() binds a texture (or rather, a single mipmap level of a texture) to an image unit.
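The client-side setup might look roughly like the following sketch. Here `tex` is assumed to be an existing immutable-format 2D texture (e.g. created with glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, w, h)), and `computeProgram` a linked compute shader program; this is a fragment, not a complete program:

```c
glUseProgram(computeProgram);

/* Bind mipmap level 0 of the texture to image unit 0,
   matching layout(binding = 0) in the shader. */
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);

/* One work group per 16x16 tile, matching the shader's local_size. */
glDispatchCompute((w + 15) / 16, (h + 15) / 16, 1);

/* Make the image writes visible before sampling or blitting the texture. */
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);
```

Note the glMemoryBarrier() call: image stores are incoherent, so you need an explicit barrier before reading the results through another mechanism.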
Images are basically textures used as “raw” arrays. Unlike textures, they can be written as well as read. Access uses integer array indices, with no interpolation, filtering, wrapping, etc. Also, only formats with power-of-two pixel sizes are supported (the only supported three-component format is GL_R11F_G11F_B10F, but normally you’d just use a 4-component format even if you don’t need an alpha channel).
This looks weird to me. There might be interesting underlying reasons, for sure. But it looks like going backward to the days when textures (I know we're talking about images here) had to be power-of-two.
It’s possibly due to the fact that a non-power-of-two-sized pixel may span word boundaries, which would be problematic for atomic operations.
Although, it’s probably a moot point, as I believe that most hardware only actually supports power-of-two pixel sizes (anything else just gets padded out).