I’m trying to implement virtual reality in my Vulkan program. To do that, I’m using the OpenVR SDK from Valve; however, the SDK only supports OpenGL and DirectX for the time being. Vulkan support is planned, but it’s not a priority, so it might still be a few months off.
Anyway, this is what I’m doing as a temporary workaround:
The scene is rendered (for each eye) into a Vulkan image backed by host-readable memory. After the scene has been rendered, the data for both images is copied on the CPU into OpenGL textures, which are then handed off to the OpenVR SDK.
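A rough sketch of that copy path might look like the following. This is only an illustration of the approach described above, not my actual code: it assumes the Vulkan image uses linear tiling and is bound to `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` memory, and the handles (`device`, `eyeImageMemory`, `eyeTexture`, the dimensions) are placeholder names.

```cpp
#include <vulkan/vulkan.h>
#include <GL/gl.h>
#include <cstdint>

// Hypothetical helper: copy one eye's rendered Vulkan image into an
// OpenGL texture via the CPU. The GPU -> CPU -> GPU round trip here
// is exactly what makes this workaround slow at higher resolutions.
void copyEyeImageToGL(VkDevice device, VkDeviceMemory eyeImageMemory,
                      VkDeviceSize size, GLuint eyeTexture,
                      uint32_t width, uint32_t height)
{
    // Map the host-visible Vulkan memory to get a CPU-side pointer
    // to the rendered pixel data.
    void* pixels = nullptr;
    vkMapMemory(device, eyeImageMemory, 0, size, 0, &pixels);

    // Upload the pixels into the OpenGL texture that is later
    // submitted to the OpenVR compositor.
    glBindTexture(GL_TEXTURE_2D, eyeTexture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    vkUnmapMemory(device, eyeImageMemory);
}
```

(No test harness is given here, since the snippet needs a live Vulkan device and GL context to run.)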
This works fine, but it’s extremely slow unless I use very low resolutions for the images.
So I’d like to avoid involving the CPU entirely, if possible. To do that, I’d either have to make the OpenGL textures use the same memory I’ve allocated for the Vulkan images, or copy the image data on the GPU instead of the CPU. Is either of those possible? Is there any other way?