Descriptor Pool maxSets is troublesome

Having to specify the max sets is a problem because you may not know how many you need, due to the dynamic nature of what you're doing. All you can do is make your best guess and hope some edge case doesn't cause you to run out. I don't know if there is a reason why this pool needs its max specified upfront whereas the others don't. For me this is a huge speed bump, where I'm trying to come up with different solutions to work around this limitation.

I could get around this if I could bind the same descriptor set when rendering the same thing, but that doesn't seem to work. Say, for example, I want to render the same texture multiple times. If I bind the same descriptor set in other command buffers, only one will show, and no error information is displayed.

If you don’t know how many sets you need, how could you know how many descriptors you need? Whatever you use to determine the latter can determine the former as well.

For example, if you have some descriptor set layout, that single set uses X descriptors. If you’re creating a pool from which you want to allocate 10 such sets, then “maxSets” would be 10.
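To make that arithmetic concrete, here is a small sketch (plain C++, no actual Vulkan calls; the descriptor types and counts are hypothetical stand-ins for a real layout) showing how the pool sizes follow directly from one set layout and the chosen maxSets:

```cpp
#include <cstdint>
#include <map>

// Hypothetical stand-ins for VkDescriptorType values.
enum class DescType { UniformBuffer, CombinedImageSampler };

// Given the descriptor counts of ONE set layout and the number of sets you
// plan to allocate from the pool (maxSets), compute the per-type pool sizes.
std::map<DescType, uint32_t> computePoolSizes(
    const std::map<DescType, uint32_t>& perSetCounts, uint32_t maxSets) {
    std::map<DescType, uint32_t> poolSizes;
    for (const auto& [type, count] : perSetCounts)
        poolSizes[type] = count * maxSets; // total descriptors of this type
    return poolSizes;
}
```

So a layout with one uniform buffer and one sampler, allocated 10 times, needs a pool with 10 of each type: whatever logic decides the descriptor counts has already decided maxSets.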

I could get around this if I could bind the same descriptor set when rendering the same thing but that doesn’t seem to work. Say for example I want to render the same texture multiple times.

This sounds like you’re doing something wrong, like manipulating the descriptor set while it is in use.

If you don’t know how many sets you need, how could you know how many descriptors you need? Whatever you use to determine the latter can determine the former as well.

Just like a C++ class is a blueprint for what can be allocated numerous times, I know how many blueprints I have but not how many of each I’ll need. For example, I reuse a handful of graphics to construct a complex menu system of numerous buttons, menu windows, etc. I would have to allocate the whole menu system and then go back and count everything, to then go back and allocate all the descriptor sets. Talk about jumping through hoops.

Here's an even bigger example. Say you have a spaceship firing projectiles via player button presses. Here, all you can do is make your best guess as to how many will be in play at one time; otherwise you could run out.

Talk about jumping through hoops.

Welcome to low-level programming. Be thankful you only have to jump through Vulkan’s hoops rather than having to directly write against dozens of different hardware.

Say you have a spaceship firing projectiles via player button presses. Here, all you can do is make your best guess as to how many will be in play at one time; otherwise you could run out.

If each projectile is the same, why would different projectiles need different descriptor sets? You’re rendering the same thing in different locations; why do you need to assign each rendered object a different descriptor set?

If you believe each projectile needs to have its own descriptor set, then you have come to a misunderstanding of how Vulkan works.

The same goes for your menu suggestion. The number of descriptor sets you need is not based on the number of menu items visible. It’s based on the number of unique combinations of resources you will be using. And that number is almost certainly fixed. Your GUI needs to render (from a resource-consumption perspective):

  • Text items.
  • Untextured items.
  • Textured items.

So that’s 2 descriptor sets plus 1 for each kind of texture you might use in your GUI. And you can avoid the latter in many cases through array textures and/or arrays of textures.
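The counting rule above (unique resource combinations, not on-screen items) can be sketched like this; the widget kinds and texture names are hypothetical:

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// Count descriptor sets a GUI needs: one per UNIQUE (kind, texture)
// combination, regardless of how many widgets share that combination.
// An empty texture string means an untextured or text item.
size_t countDescriptorSets(
    const std::vector<std::pair<std::string, std::string>>& widgets) {
    std::set<std::pair<std::string, std::string>> unique(widgets.begin(),
                                                         widgets.end());
    return unique.size();
}
```

A hundred buttons sharing one button texture still contribute only one combination, so the count stays fixed no matter how the menu grows.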

If each projectile is the same, why would different projectiles need different descriptor sets? You’re rendering the same thing in different locations; why do you need to assign each rendered object a different descriptor set?

So if you are saying that I can render the same thing twice at different places on the screen using the same descriptor set and ubo buffer, then I must be doing something wrong thinking this doesn’t work. I’m talking about recording two different command buffers using the same descriptor set and ubo buffer, rendering in two different places due to differences in the matrices passed to the shader via the ubo buffer. Is that what you are suggesting?

Yes. Because what you just described very much can work.

However, there are ways to try to do what you just described that don’t work. For example, if the UBO is what contains the data that defines the position of a particular projectile, and you want to change this value between projectiles, that’s not going to work.

In order to make this work, you have to provide the per-object data in a way that is distinct from descriptor resources. No buffer object updates, no descriptor set changes. Period.

There are plenty of possibilities here. For example, Push Constants; indeed, this is precisely what Push Constants are for. You can pass the position as a push constant. Or better yet, pass an index via Push Constants, which you use to index into a UBO/SSBO to fetch the particular per-object data for these projectiles. So you would have an array of your per-object data in the buffer resource used by a descriptor.
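A CPU-side model of that indexing scheme (plain C++ standing in for the shader; ObjectData and the function name are hypothetical, and the vector stands in for the array inside one UBO/SSBO):

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Per-object data that would live in an array inside a single buffer
// resource, referenced by one descriptor that never changes.
struct ObjectData {
    std::array<float, 2> position;
};

// Model of what the shader does: the push constant carries only an index,
// and the shader fetches ObjectData[index] from the buffer.
ObjectData fetchPerObject(const std::vector<ObjectData>& buffer,
                          uint32_t pushConstantIndex) {
    return buffer[pushConstantIndex];
}
```

The descriptor set and the buffer contents stay untouched between draws; only the small push-constant index changes per object.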

Alternatively, since you’re rendering the same mesh for each particle (presumably?), you can employ instanced rendering. In this case, the per-particle data may be in a vertex buffer accessed in the VS as an input value, or it may be in a UBO/SSBO, accessed via the instance index in the Vertex Shader.

Basically, there are several options for providing per-object data to a shader without changing descriptor or buffer object state.

Well sort of (see above for details), but if you’re rendering a bunch of projectiles, you really shouldn’t be putting them in different command buffers.

In order to make this work, you have to provide the per-object data in a way that is distinct from descriptor resources. No buffer object updates, no descriptor set changes. Period.

I appreciate you taking the time to explain this to me. It’s obvious I’m still working through the OpenGL mindset of shader usage. Vulkan is definitely a different beast. I can now see that one command buffer, descriptor set and ubo buffer can be reused across multiple renderings of the same type of object. Just need to decide how to inject the object specific data like the matrices and color.

On second thought, if I'm using push constants, I don't see how I can reuse the command buffer across multiple objects. Looks like I'll still need a command buffer per object since that's the means of updating the shader with the position of the graphic. Would that be correct?

You don’t “reuse” a command buffer “across multiple objects”. You put multiple objects into the same command buffer. Your code would look somewhat like this:


vkCmdBindPipeline(...) //Pipeline (shaders) for projectiles.
vkCmdBindDescriptorSets(...) //Descriptors for projectiles.
vkCmdBindVertexBuffers(...) //Vertex buffers for projectiles.

vkCmdPushConstants(...) //Push constant data for projectile 0
vkCmdDraw(...); //Draw projectile 0.

vkCmdPushConstants(...) //Push constant data for projectile 1
vkCmdDraw(...); //Draw projectile 1.

...

Wow, that was informative. Was not aware the same command could be recorded to a command buffer multiple times.

Question… say I have 3 different pipelines and a handful of descriptor sets, UBOs, and vertex buffers for each of the pipelines; can this all be recorded in the same command buffer?

Um… yes. Why would you think you wouldn’t be able to?

Because vkCmdBindPipeline, vkCmdBindVertexBuffers, vkCmdBindIndexBuffer, and vkCmdBindDescriptorSets use the term “Bind”. To me, binding something means making it permanent, something that can’t be changed. It sets the stage for everything else that comes next. Other command buffer calls use the term “Set”, which doesn’t suggest a lasting change. In OpenGL you bound your shaders, textures, and VBOs, which affected everything from that point. Using the term “Bind” gives the wrong impression as to what these calls do and how the command buffer works.

And now I’m even more confused.

In OpenGL, binding is not permanent. Yes, binding affects what follows, but if you want to render later with a different texture, you don’t tear down the whole OpenGL context and create a new one. You just bind a new texture in the same place as the old one and subsequent rendering commands use the new texture.

Binding in Vulkan works exactly like binding in OpenGL; it affects what follows (in that command buffer), but you can always bind something else.

“Set” is used when you’re setting a value or (small) set of values. “Bind” is used when you’re binding an object. They both otherwise work identically; setting data that has already been set overrides that data, and binding objects to a location that they’ve already been bound to overrides the previous binding.
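That "last bind wins" behavior can be modeled with a toy command buffer (plain C++, not real Vulkan; the struct and names are made up for illustration):

```cpp
#include <string>
#include <vector>

// Toy model: each recorded draw captures whatever pipeline was bound most
// recently. Binding again simply overrides the previous binding.
struct ToyCommandBuffer {
    std::string boundPipeline;
    std::vector<std::string> draws; // pipeline in effect for each draw

    void bindPipeline(const std::string& p) { boundPipeline = p; }
    void draw() { draws.push_back(boundPipeline); }
};
```

Bind pipeline A, draw, bind pipeline B, draw twice: the first draw uses A, the later ones use B, all in the same buffer.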

If you’ve ever been in a nerd fight with another engineer, you have no doubt discovered that the meaning of words can be slightly different based on unique experiences and perceptions.

Before I continue on to my next question, I just want to say how much I appreciate your help in all this. I’ve been converting my OpenGL game engine over to Vulkan. The tutorial I’ve been working through has made my current understanding of Vulkan possible, allowing me to get as far as I have with my conversion. The tutorial didn’t go into enough detail to fully explain the capabilities of the command buffer and the many ways it can be used.

In a real world scenario of rendering game graphics, render objects come in and out of existence constantly. I see two options that have their pros and cons. What are your thoughts on this?

  1. Render all objects in the same command buffer.
    a) Pro: Only one command buffer is needed.
    b) Con: The same command buffer needs to be re-recorded every frame to allow for objects to come in and out of existence.

  2. Each object gets its own command buffer.
    a) Pro: The command buffer is recorded only once and submitted to the queue when the object is active.
    b) Con: A lot of command buffers are allocated.

I see two options that have their pros and cons. What are your thoughts on this?

Neither. You should have one command buffer per-task, per-thread.

The whole point of the command buffer paradigm is to be able to thread your rendering operation. To generate rendering commands on multiple threads at the same time. If you want to take maximum advantage of this, then you need to have at least one command buffer per-thread.

But at the same time, threading a renderer can be somewhat difficult. Consider shadow mapping. You need to walk your object hierarchy to find out which objects are “visible” from the light. But you also need to do that to determine which objects are visible for the main rendering. And the two lists of objects won’t be the same, even though they’re starting from the same source set of objects. And the commands you need for shadow map rendering are not the commands you need for regular rendering.

For best efficiency (probably), you will want to only walk the object hierarchy once. So on the thread(s) that do this, they should be generating two command buffers. One for the shadow map rendering and one for the main rendering. For each object, they issue commands for both CBs, as needed.

You would eventually send those CBs to the queue submissions thread, which will know about them and do something useful with them.

Vulkan works best when your rendering system has a well-defined structure to it. That is, you build “boxes” that you can pour content into. You have a “box” for objects that cast shadows, a “box” for regular objects, a “box” for particles, a “box” for post-processing effects, and so on. This way, you can tailor your threading and CB-building system for the needs of your renderer.

However, if you’re just starting out, that’s all extremely complicated. I would instead focus on just using a single command buffer. It’ll be easier to go from that to multiple, threaded CBs than it will if you start from having one CB per object.

Neither. You should have one command buffer per-task, per-thread.

I have read that you need to create the command pool per thread for the command buffers needed within that thread. Is this true?

…then you need to have at least one command buffer per-thread.

My engine’s asset creation and destruction is organized by group. Each asset can then be used to create any number of sprites. Each group could have its own command buffer on its own thread. You still have to record the command buffer every frame because you don’t know which sprites will be visible at any given moment, but at least it can be broken up by thread.

One last question. I’m assuming I can load a new group of assets while rendering a completely different group.

No. But yes.

The technical answer is that access to command pools, and the command buffers generated by them, require “external synchronization”. That is, if you allocate a CB from a particular pool, you cannot call any function that manipulates the CB unless you have ensured that no other thread is simultaneously manipulating any command buffer that was allocated from the same pool.

Now, you could ensure this by creating a mutex for each CB and locking it whenever you’re adding commands to a buffer, and so forth. But this would be terrible for performance.

The consequence of this is that the most reasonable way to use command pools and the buffers they create is to keep them in the same thread. With the exception of sending them off to a dedicated queue submission thread, but that’s a single mutex operation you do on a thread when that thread has done all of its rendering, rather than something you’re doing all the time.
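That pattern (each thread records into its own pool with no locking, then hands finished buffers to a submission thread under a single mutex) can be sketched as a toy model in plain C++, with strings standing in for command buffers:

```cpp
#include <mutex>
#include <string>
#include <thread>
#include <utility>
#include <vector>

// Shared destination: the "queue submission thread's" inbox. This is the
// ONLY point of cross-thread contact, guarded by one mutex.
struct SubmissionInbox {
    std::mutex m;
    std::vector<std::string> finished;

    void submit(std::vector<std::string> cbs) {
        std::lock_guard<std::mutex> lock(m); // one lock per batch, not per command
        for (auto& cb : cbs) finished.push_back(std::move(cb));
    }
};

// Each recording thread owns its pool outright; no other thread touches it
// while commands are being recorded, so no locking is needed here.
void recordThread(SubmissionInbox& out, const std::string& name, int count) {
    std::vector<std::string> myPool; // stand-in for a per-thread command pool
    for (int i = 0; i < count; ++i)
        myPool.push_back(name + "-cb" + std::to_string(i));
    out.submit(std::move(myPool)); // hand off once, at the end
}
```

For example, a "shadow" thread and a "main" thread can each record their own buffers concurrently and the inbox ends up with all of them, with the mutex taken exactly twice.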

That is, if you allocate a CB from a particular pool, you cannot call any function that manipulates the CB unless you have ensured that no other thread is simultaneously manipulating any command buffer that was allocated from the same pool.

That makes sense. I appreciate all your help! 🙂 As I was looking up an example of push constants, I just so happened to discover a feature called “push descriptors”. Looks interesting…