At which point is OpenGL free to upload pixel data to VRAM when using pixel buffers?

Hello,

I would like to use pixel buffer objects to continuously stream video data to the GPU:


glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, DATA_SIZE, 0, GL_STREAM_DRAW); // Avoid stalls by requesting "fresh" backing memory
GLubyte* ptr = (GLubyte*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
ptr[..] = ...

glBindTexture(GL_TEXTURE_2D, texture_id);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, IMAGE_WIDTH, IMAGE_HEIGHT, PIXEL_FORMAT, GL_UNSIGNED_BYTE, 0);

What I wonder is: which API call tells the driver that the data won’t be modified any longer, so that it can begin the asynchronous copy operation?

And instead of requesting a buffer, is it possible to upload ordinary malloc-allocated memory regions asynchronously too, without a dedicated copy operation?

Thank you in advance, Clemens

Which API call tells the driver that the data won’t be modified any longer, so that it can begin the asynchronous copy operation?

None.

Unless you persistently mapped that buffer (which you did not), it’s illegal to perform any OpenGL operation on that buffer which writes to or reads from it while it is mapped. You must use glUnmapBuffer before attempting to perform a pixel transfer operation with it.

After unmapping it, you can treat the buffer as though it stored the information you wrote through the mapped pointer.
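
For illustration, a minimal sketch of the corrected upload sequence, reusing the identifiers from your snippet:

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, DATA_SIZE, 0, GL_STREAM_DRAW); // orphan the old store to avoid stalling on the previous frame
GLubyte* ptr = (GLubyte*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
// ... write the new frame through ptr ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); // the buffer must not be mapped while GL reads from it

glBindTexture(GL_TEXTURE_2D, texture_id);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, IMAGE_WIDTH, IMAGE_HEIGHT, PIXEL_FORMAT, GL_UNSIGNED_BYTE, 0); // sources pixels from the bound PBO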

If you did persistently map it, then you have to follow the various rules of persistent mapping, based on the manner in which you mapped it. If you didn’t use coherent mapping, then you have to explicitly flush the ranges you modified before performing operations that read from those ranges of memory. If you use coherent mapping, then you don’t have to care at all.
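
A rough sketch of the non-coherent case, assuming the buffer’s storage was created with glBufferStorage and the matching flags (texture_id, pboId etc. as in your snippet):

glBufferStorage(GL_PIXEL_UNPACK_BUFFER, DATA_SIZE, NULL, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT);
GLubyte* ptr = (GLubyte*) glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, DATA_SIZE,
    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_FLUSH_EXPLICIT_BIT);

// ... write a frame through ptr; the mapping stays valid across GL calls ...
glFlushMappedBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, DATA_SIZE); // make the client writes visible to GL
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, IMAGE_WIDTH, IMAGE_HEIGHT, PIXEL_FORMAT, GL_UNSIGNED_BYTE, 0);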

And instead of requesting a buffer, is it possible to upload ordinary malloc-allocated memory regions asynchronously too, without a dedicated copy operation?

No. Even Vulkan doesn’t let you do a DMA like that from arbitrary memory. The closest it gets is with vkCmdUpdateBuffer, which has a hard limit of 64KB.
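
For reference, that Vulkan path is just an inline copy recorded into a command buffer (cmd, dstBuffer, dataSize and pData are assumed to exist):

// At most 65536 bytes per call; dstOffset and dataSize must be multiples of 4.
vkCmdUpdateBuffer(cmd, dstBuffer, 0, dataSize, pData);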

If you’re using a single thread, then the commands will be executed in sequence. That includes subsequent commands which modify the buffer. So even though the glTexSubImage2D() call won’t have been executed by the time that the function returns, it will be executed before any subsequent command.

A subsequent glMapBuffer() (or similar) call will either wait for pending commands to complete before returning a pointer, or at least will make any alternative behaviour transparent; e.g. if you map the buffer write-only, it may allocate a new data store and map that while queued commands refer to the previous data store. The only exception is if you use glMapBufferRange() with GL_MAP_UNSYNCHRONIZED_BIT.
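
A rough sketch of that unsynchronized path (where all synchronisation becomes your problem, typically handled with fences):

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
// No implicit wait: the application must guarantee that pending commands
// reading from this range have completed before writing through ptr.
GLubyte* ptr = (GLubyte*) glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, DATA_SIZE,
    GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);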

An implementation could perform uploads from client memory asynchronously, by write-protecting the region so that the client blocks if it tries to modify it before the copy occurs.

But why use malloc()d regions rather than allocating OpenGL buffer objects and mapping those?

Actually, it can’t. All OpenGL functions (except those that end in “Pointer”) are required to be finished using the pointers you pass them by the time the function returns. So either they do the upload from your memory and hold up the CPU (and thus aren’t asynchronous), or they copy it into their own memory and DMA from that copy asynchronously on their own time.
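
Concretely, the client-pointer path looks like this (clientPtr is just a placeholder for a malloc()d region):

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // no PBO bound, so the last argument is a client pointer
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, IMAGE_WIDTH, IMAGE_HEIGHT, PIXEL_FORMAT, GL_UNSIGNED_BYTE, clientPtr);
// By the time this returns, GL is finished with clientPtr; it may be freed or overwritten immediately.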

Hi Alfonse,

Unless you persistently mapped that buffer (which you did not), it’s illegal to perform any OpenGL operation on that buffer which writes to or reads from it while it is mapped. You must use glUnmapBuffer before attempting to perform a pixel transfer operation with it.

Thanks a lot for taking a look at my code and the explanation :)
Now I think I understand it a bit better … as soon as I unmap the buffer, the DMA unit can kick in and OpenGL internally makes sure the DMA operation is finished before any subsequent operation touches this data.

Thanks a lot & best regards, Clemens