Descriptor pool able to allocate even though pool should be empty

Hey folks, I am currently writing a small descriptor pool allocator, but I noticed that even with a completely empty descriptor pool, the application is still able to allocate descriptors from it.

Anyone have an idea of what could be causing this behaviour?

The validation layers are enabled, I am using the C++ Vulkan wrapper with built-in exception handling, and none of this triggers anything on creation.

std::array<vk::DescriptorPoolSize, 3> type_count;

	// Initialize our pool with these values
	type_count[0].type            = vk::DescriptorType::eCombinedImageSampler;
	type_count[0].descriptorCount = 0;

	type_count[1].type            = vk::DescriptorType::eSampler;
	type_count[1].descriptorCount = 0;

	type_count[2].type            = vk::DescriptorType::eUniformBuffer;
	type_count[2].descriptorCount = 0;

	vk::DescriptorPoolCreateInfo createInfo = vk::DescriptorPoolCreateInfo()
		.setPNext(nullptr)
		.setMaxSets(iMaxSets)
		.setPoolSizeCount(type_count.size())
		.setPPoolSizes(type_count.data());

	pool = aDevice.createDescriptorPool(createInfo);

	vk::DescriptorSetAllocateInfo alloc_info = vk::DescriptorSetAllocateInfo()
		.setPNext(nullptr)
		.setDescriptorPool(pool)
		.setDescriptorSetCount(iNumToAllocate)
		.setPSetLayouts(&iDescriptorLayouts);

	std::vector<vk::DescriptorSet> tDescriptors(iNumToAllocate);

	iDevice.allocateDescriptorSets(&alloc_info, tDescriptors.data());

Well, this falls within the validation layers' domain: descriptorCount = 0 is invalid usage.
Either the layers are incomplete, or you have them configured incorrectly.

UPDATE: Yeah, the layer is incomplete. Easy enough; I will whip up a patch. Hopefully it will be in the next SDK release.

Not only does the layer not catch this, the descriptor pool also allows allocation even when no resources are specified. I am thinking this might be a driver bug as well; I also posted on the NVIDIA Vulkan forums, since it should be illegal to allocate descriptors from a pool which does not support those descriptors.

Especially because the API does not throw any error, not even access errors. That may be the charm of Vulkan, but it still sounds like something that should not happen. Thanks a lot though!

Alright, the PR is up. The user should get a validation error on a 0 descriptorCount or maxSets (probably) in the next SDK release.

[QUOTE=Mercesa;42703]Not only does the layer not catch this, the descriptor pool also allows allocation even when no resources are specified. I am thinking this might be a driver bug as well; I also posted on the NVIDIA Vulkan forums, since it should be illegal to allocate descriptors from a pool which does not support those descriptors.

Especially because the API does not throw any error, not even access errors. That may be the charm of Vulkan, but it still sounds like something that should not happen. Thanks a lot though![/QUOTE]


[STRIKE]TBH the spec itself does not seem very specific about it. It is only natural that weird driver implementations follow.
Maybe an issue should be filed there too.[/STRIKE]

Correction: the core spec (you have to read that one specifically, due to Issue 533) says in the VUs that you need to account for the allocations yourself. Allocating more than is available in the pool is invalid usage, and the validation layers do seem to report those problems for me (do you have them properly enabled?)

The VK_KHR_maintenance1 extension seems to remove those restrictions and replace them with VK_ERROR_OUT_OF_POOL_MEMORY_KHR VkResult.
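With maintenance1 semantics, a common pattern is to treat VK_ERROR_OUT_OF_POOL_MEMORY_KHR not as a bug but as a signal to grab a fresh pool and retry. A rough sketch of that control flow, with the Vulkan calls mocked out since this only illustrates the retry logic (the enum value matches the spec; everything else here is a hypothetical name, not real API):

```cpp
#include <cassert>
#include <vector>

// Mocked VkResult: only the two values this sketch needs (numeric values per the spec).
enum VkResult { VK_SUCCESS = 0, VK_ERROR_OUT_OF_POOL_MEMORY_KHR = -1000069000 };

// Hypothetical allocator: tries the current pool first; on exhaustion, creates a
// fresh pool and retries. maintenance1 guarantees the error is actually reported
// instead of the allocation being silent invalid usage.
struct PoolChain {
	// Each "pool" is mocked as a remaining-set counter.
	std::vector<int> pools{1};  // start with one pool that fits 1 set
	int poolCapacity = 4;       // capacity of each newly created pool

	VkResult allocateFrom(int& pool) {
		if (pool == 0) return VK_ERROR_OUT_OF_POOL_MEMORY_KHR;
		--pool;
		return VK_SUCCESS;
	}

	VkResult allocate() {
		VkResult r = allocateFrom(pools.back());
		if (r == VK_ERROR_OUT_OF_POOL_MEMORY_KHR) {
			pools.push_back(poolCapacity);  // vkCreateDescriptorPool in real code
			r = allocateFrom(pools.back());
		}
		return r;
	}
};
```

In real code the mocked counter would be replaced by the actual vkAllocateDescriptorSets call; the point is only that the error becomes recoverable under maintenance1.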

My apologies for the very late response; I have been very busy. You are right: disabling this extension does give me an error now, due to the layer picking up on it.

Do you propose I keep track of the memory myself and enable the VK_KHR_maintenance1 extension? What would be best practice here?

I get a similar lack of errors, or crashes for some parameter combinations, on AMD too with VK_KHR_maintenance1.

So considering that, and that extensions in general are not universally supported, I would stick with vanilla Vulkan in this respect.
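For the vanilla-Vulkan route, the client-side accounting could look roughly like this (plain C++ sketch with no Vulkan types; all names here are made up for illustration, not real API):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical tracker mirroring one VkDescriptorPool: counts remaining sets and
// remaining descriptors per type, and refuses allocations that the core spec's
// valid-usage rules would make illegal.
class TrackedPool {
public:
	TrackedPool(int maxSets, std::map<std::string, int> poolSizes)
		: setsLeft_(maxSets), left_(std::move(poolSizes)) {}

	// Returns true if the allocation is legal and updates the counters if so.
	// In real code you would only call vkAllocateDescriptorSets when this passes.
	bool tryAllocate(const std::map<std::string, int>& request) {
		if (setsLeft_ < 1) return false;
		for (const auto& [type, count] : request) {
			auto it = left_.find(type);
			if (it == left_.end() || it->second < count) return false;
		}
		for (const auto& [type, count] : request) left_[type] -= count;
		--setsLeft_;
		return true;
	}

private:
	int setsLeft_;
	std::map<std::string, int> left_;
};
```

With descriptorCount = 0 as in the original snippet, tryAllocate would reject every request up front, which is exactly the check the layers were missing.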