Well, here I am once again… same topic.
So I was about to throw my little path tracer onto the GPU using Vulkan’s compute shaders & recalled everything I went through to actually be able to write to images from a compute shader.
And…
It doesn’t seem to make sense. The answers I’ve received don’t seem to make sense. (No offense! I just really think we’re misunderstanding each other…)
Just to state it: I actually managed to get it to work last time I wrote here. I just don’t think it’s a “clean” way at all.
What is currently happening:
→ I have a swapchain which consists of 3 images on my computer.
→ I have a compute shader declaring a single binding for output:
layout (binding = 0, rgba8) uniform writeonly image2D resultImage;
→ I’m allocating a separate descriptor set, each with a single descriptor, for each image in the swapchain.
for (size_t i = 0; i < swapChainImages.size(); i++)
{
    VkDescriptorSetLayoutBinding imgBinding = {};
    imgBinding.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
    imgBinding.stageFlags = VK_SHADER_STAGE_COMPUTE_BIT;
    imgBinding.binding = 0;
    imgBinding.descriptorCount = 1;

    VkDescriptorSetLayoutCreateInfo layoutInfo = {};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.bindingCount = 1;
    layoutInfo.pBindings = &imgBinding; // pointer to the binding, not the binding itself

    // computeDescriptorSetLayouts is a vector of VkDescriptorSetLayout;
    // the actual VkDescriptorSets are allocated from these layouts afterwards.
    vkCreateDescriptorSetLayout(logicalDevice, &layoutInfo, nullptr, &computeDescriptorSetLayouts[i]);
}
→ Next I’m creating a command buffer for each image in the swapchain (3 in my case, as mentioned) & recording a bind of the corresponding descriptor set into each one.
So… what’s the problem?
Say I have a collection of spheres & planes that I want to pass to my compute shader, just like Sascha does in his raytracing example.
This means that, for every image in the swapchain, I’ll have to bind those 2 buffers in the corresponding descriptor set.
That makes a total (on my computer) of 3 descriptor sets, each containing a binding to the corresponding image, plus 2 more bindings to the same two buffers.
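For concreteness, the shader interface that each of those per-image sets would feed might look something like this. (The Sphere/Plane member layouts below are my own assumption, not taken from Sascha’s example.)

```glsl
// Sketch of one per-image descriptor set's shader side.
// Binding 0 differs per swapchain image; bindings 1 and 2 point
// at the same sphere/plane buffers in every set.
layout (binding = 0, rgba8) uniform writeonly image2D resultImage;

// Assumed struct layouts, for illustration only.
struct Sphere { vec3 center; float radius; };
struct Plane  { vec3 normal; float dist;   };

layout (std140, binding = 1) buffer SphereBuffer { Sphere spheres[]; };
layout (std140, binding = 2) buffer PlaneBuffer  { Plane  planes[];  };
```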
Is this really the way to go ?
It just seems kind of… wrong to me.
Alright, let’s get to the part that doesn’t seem to make sense to me (sorry for this looong post…):
→ A single binding (say, for example, 0) can actually take multiple descriptors!
VkDescriptorSetLayoutBinding imgBinding = {};
imgBinding.binding = 0; // Binding index
imgBinding.descriptorCount = 5; // <= Can take multiple descriptors !
Which actually means I can bind multiple resources to it, right? (I’m going to leave unimportant fields out…)
std::vector<VkDescriptorImageInfo> imgInfos;
for (size_t i = 0; i < swapChainImages.size(); i++)
{
    VkDescriptorImageInfo imgInfo = {};
    imgInfo.imageView = swapChainImageViews[i]; // target the current image view!
    imgInfo.imageLayout = VK_IMAGE_LAYOUT_GENERAL; // storage images must be in GENERAL layout
    imgInfos.push_back(imgInfo);
}

VkWriteDescriptorSet imgWrite = {};
imgWrite.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
imgWrite.dstSet = computeDescriptorSet; // in this scenario there is only a single set
imgWrite.dstBinding = 0; // binding index is still 0, same as above
imgWrite.dstArrayElement = 0; // start at the first array element
imgWrite.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_IMAGE;
imgWrite.descriptorCount = static_cast<uint32_t>(imgInfos.size()); // target 3 descriptors
imgWrite.pImageInfo = imgInfos.data(); // take 3 different resources (3 different image views)
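As far as I understand it, a binding with descriptorCount = 3 would then have to be declared on the GLSL side as an array of the same size at that single binding index. A sketch, assuming my 3 swapchain images:

```glsl
// Sketch: one binding, three descriptors = an array of 3 images in GLSL.
// Each pImageInfo entry above would map to one array element here.
layout (set = 0, binding = 0, rgba8) uniform writeonly image2D resultImages[3];
```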
So, either I’m just getting something awfully wrong… or this is not quite right:
OK, but if your shader was not actually using “one descriptor set containing three descriptors”, why would you create that in C++? I mean, your shader would have to look something like this:
layout(set = 0, binding = 0) image2D firstImage;
layout(set = 0, binding = 1) image2D secondImage;
layout(set = 0, binding = 2) image2D thirdImage;
Because I can actually bind multiple resources to the same binding index of the same set, right?
In my last example, all 3 Images would actually correspond to
layout (set = 0, binding = 0) image2D allThreeImages;
But if I really am right, why doesn’t the compute shader fill all 3 images then?
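(If I’m right about the array, I’d still expect the shader to have to pick one element explicitly, e.g. via a push constant, since a store only touches the element you index. A rough sketch of what I imagine; the push constant name is hypothetical:)

```glsl
layout (set = 0, binding = 0, rgba8) uniform writeonly image2D allThreeImages[3];

// Hypothetical push constant selecting which swapchain image to write this frame.
layout (push_constant) uniform PC { uint imageIndex; } pc;

layout (local_size_x = 16, local_size_y = 16) in;
void main()
{
    ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
    // Only the indexed array element is written; the other two images stay untouched.
    imageStore(allThreeImages[pc.imageIndex], pixel, vec4(1.0, 0.0, 0.0, 1.0));
}
```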
Once again, sorry for this looong post !