Multi GPU

Hi all,

I’m an experienced OpenGL programmer and a Vulkan newbie. I’m trying to experiment with multi-GPU support in Vulkan, but something is not clear to me.

Here is my configuration:

Windows 7
Geforce GTX 980 GPU
GeForce TITAN GPU
Two monitors: each GPU is connected to a different monitor.

I would like to run two instances of the same application: one for each GPU. I have done something similar in the past using the NVIDIA GPU affinity OpenGL extension on two different Quadro cards, and it worked well. The GPU affinity extension is not available on GeForce GPUs.

So now I would like to know if it is possible to do the same thing with Vulkan. I tried to select the right device for each instance, but it doesn’t seem to work.

First question: how is it possible that I see my application on both monitors when I move the application window between them, even though I selected only one device during Vulkan startup? With GPU affinity the application was visible only on the monitor connected to the selected device…

I don’t understand…

Thank you,
Marco

I tried to select the right device for each instance, but it doesn’t seem to work.

How did it not “seem to work”? Does your implementation give you two VkPhysicalDevices with two distinct names and properties? If you create an application for one monitor’s VkPhysicalDevice, what happens to that application when you move it to another monitor?

Yes, vkEnumeratePhysicalDevices correctly returns two GPUs. Now, if I select the first one, which is connected to the first monitor, and then move the application to the second monitor, which is connected only to the second GPU, the application keeps working. It doesn’t matter which GPU I select, the application always works… It seems that both GPUs always receive the same Vulkan commands.

vkEnumeratePhysicalDevices returns 2 GPUs with different names: GTX TITAN and GTX 980.
Then I call vkCreateDevice with one of them, but if I move the application to the monitor connected to the other GPU, the application is still working.
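
For reference, that selection step looks roughly like the sketch below. It is only an illustration: instance creation, proper queue-family queries, the VK_KHR_swapchain device extension, and error handling are all omitted, and the name string and the createDeviceByName helper are just examples.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Sketch: pick the physical device whose reported name contains `targetName`
// (e.g. "GTX 980" vs. "GTX TITAN") and create a logical device on it.
// A real application must query vkGetPhysicalDeviceQueueFamilyProperties
// instead of hard-coding queue family 0, and must enable VK_KHR_swapchain
// if it intends to present.
VkDevice createDeviceByName(VkInstance instance, const char* targetName)
{
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    VkPhysicalDevice chosen = VK_NULL_HANDLE;
    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        if (std::strstr(props.deviceName, targetName)) {
            chosen = gpu;
            break;
        }
    }
    if (chosen == VK_NULL_HANDLE)
        return VK_NULL_HANDLE;

    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo = {};
    queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = 0;   // placeholder: query the real family index
    queueInfo.queueCount       = 1;
    queueInfo.pQueuePriorities = &priority;

    VkDeviceCreateInfo deviceInfo = {};
    deviceInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos    = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(chosen, &deviceInfo, nullptr, &device);
    return device;
}
```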

When I used WGL_NV_gpu_affinity in the past, the application correctly didn’t render anything when I moved it to the other monitor…

I’m confused…

I would think the problem here is that we are confusing monitors with GPUs. Vulkan largely does not concern itself with monitors. The GPU you choose in Vulkan is the one the workload will be computed on.

The OS likes to manage the monitors itself, and you can choose some settings (e.g. off vs. duplication vs. extended desktop). In extended-desktop mode it likes to treat all connected monitors somewhat like one big monitor. In that sense, there is no “moving to another monitor”. If you are using VK_KHR_win32_surface, you may notice that the surface is created from a window (not from a monitor).
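
For illustration, here is a minimal sketch of that: the surface is built from an HWND, and no monitor appears anywhere in the call. It assumes the instance was created with the VK_KHR_surface and VK_KHR_win32_surface extensions enabled, and the createSurfaceForWindow helper is just an example name.

```cpp
#include <windows.h>
#define VK_USE_PLATFORM_WIN32_KHR
#include <vulkan/vulkan.h>

// Sketch: the presentation surface is created from a window handle, not from
// a monitor, so moving the window between monitors does not invalidate it.
VkSurfaceKHR createSurfaceForWindow(VkInstance instance, HINSTANCE hinst, HWND hwnd)
{
    VkWin32SurfaceCreateInfoKHR info = {};
    info.sType     = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
    info.hinstance = hinst;
    info.hwnd      = hwnd;

    VkSurfaceKHR surface = VK_NULL_HANDLE;
    vkCreateWin32SurfaceKHR(instance, &info, nullptr, &surface);
    return surface;
}
```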

If you want to achieve something like what you describe anyway, you should be able to do it using the platform-specific API. Either it should be possible to prevent the app window from being moved to a position where it would end up (partly or fully) on the unwanted monitor, or it should be possible to pause rendering, or maybe minimize the app, when someone tries to move it to a different monitor.
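
A rough Win32 sketch of the second option, pausing rendering whenever the window is no longer on the monitor it was started on. The g_targetMonitor and g_renderPaused globals and the onWindowMoved helper are purely illustrative.

```cpp
#include <windows.h>

// Illustrative globals: the monitor the app was started on, and a pause flag
// that the render loop checks before recording/submitting any work.
HMONITOR g_targetMonitor = nullptr;
bool     g_renderPaused  = false;

// Call this from the window procedure on WM_MOVE / WM_WINDOWPOSCHANGED.
void onWindowMoved(HWND hwnd)
{
    HMONITOR current = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    // Pause rendering while the window sits (mostly) on a different monitor.
    g_renderPaused = (current != g_targetMonitor);
}
```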

So what exactly are you trying to achieve? This smells a bit of an X-Y problem.

krOoze, thank you for your reply.

My problem is not an X-Y problem. It is a multi-GPU scaling problem: I want to run each instance of the application on a different CPU core and a different GPU.

What I mean is something that on Linux is simple to do and behaves as expected, as long as you don’t enable Xinerama (the X Window System extension that enables X applications and window managers to use two or more physical displays as one large virtual display). The first app instance runs on CPU core 1 and GPU 1, the second instance runs on CPU core 2 and GPU 2, so both run at full speed. If I do that on Linux, the performance of application instance 1 does not change when I start and close application instance 2, and this is correct.

On Windows this is not so easy to do.
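
(For what it is worth, the CPU-core half of this can be done on Windows with the processor-affinity API. A minimal sketch follows; the core index is just an example and the pinProcessToCore helper is hypothetical.)

```cpp
#include <windows.h>
#include <cstdio>

// Sketch: pin the current process to a single CPU core on Windows.
// A real launcher would pass a different core index to each app instance.
bool pinProcessToCore(DWORD coreIndex)
{
    DWORD_PTR mask = DWORD_PTR(1) << coreIndex;
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return false;
    }
    return true;
}
```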

I understand what you mean, but if:

GPU 1 is connected only to Monitor 1
GPU 2 is connected only to Monitor 2

now I start the application with vkCreateDevice on GPU1. If I move it to the second monitor, do you think the Windows OS is grabbing the rendered image from GPU1 and sending it to GPU2?

If I move it to the second monitor, do you think the Windows OS is grabbing the rendered image from GPU1 and sending it to GPU2?

Let us consider what we know. We know that you’re using Device1, which reports that it’s using GPU1, connected to Monitor1. You create a window on Monitor1, set up Vulkan in it with Device1, and then move the window to Monitor2. Your swapchain was (presumably?) not recreated by this process. So what we know is that the swapchain images can be used by either monitor.

Given those facts, here are the possibilities:

  1. Swapchains are independent of the device; they are owned by the windowing system rather than by any particular GPU.

  2. The two devices can read at least some of each other’s memory, thereby allowing one device to display images from another.

  3. VkDevice objects are virtual, and they can instantly switch from using one VkPhysicalDevice to another.

#3 is impossible. That would require that the implementation abjectly lie to you about its queries. For example, according to the Vulkan Database, the GeForce 980 only has 4GB of video memory, while Titan GPUs offer at least 6 GB. Those queries cannot change simply because you re-positioned the window. These are properties of the VkPhysicalDevice, and Vulkan does not allow a VkDevice’s underlying VkPhysicalDevice to change. Ever.

And even if it could, how could a 6GB GPU instantly transfer all of its memory to a 4GB GPU? The destination simply doesn’t have sufficient storage.
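
If you want to confirm that from inside the application, those per-device queries are fixed for the lifetime of the VkPhysicalDevice handle, so simply dumping them is an easy sanity check; a small sketch (the printDeviceIdentity helper is just illustrative):

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

// Sketch: print the fixed identity of a physical device. The name, vendor and
// device IDs, and heap sizes cannot change while the handle is valid, no
// matter where the window is moved.
void printDeviceIdentity(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(gpu, &props);

    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

    std::printf("device: %s (vendor 0x%04x, device 0x%04x)\n",
                props.deviceName, (unsigned)props.vendorID, (unsigned)props.deviceID);
    for (uint32_t i = 0; i < mem.memoryHeapCount; ++i)
        std::printf("  heap %u: %llu MiB%s\n", i,
                    (unsigned long long)(mem.memoryHeaps[i].size >> 20),
                    (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                        ? " (device local)" : "");
}
```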

So regardless of how the presentation system works, if you create a VkDevice associated with a VkPhysicalDevice for GPU1, then it will render using the resources of GPU1. How that rendered image gets to the screen is ultimately irrelevant to the question of which GPU’s resources are being used to compose that image.

In short, you don’t seem to have a “multi GPU scalable problem” here. You’re merely making an incorrect assumption about the cause of what you’re seeing.
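
If you want to probe the presentation side empirically, VK_KHR_surface also lets you ask each physical device whether any of its queue families can present to the window’s surface; a small sketch (instance and surface creation omitted, and the reportPresentSupport helper is just illustrative):

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

// Sketch: for each physical device, report whether any of its queue families
// can present to the given window surface.
void reportPresentSupport(VkInstance instance, VkSurfaceKHR surface)
{
    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);

        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, nullptr);

        VkBool32 canPresent = VK_FALSE;
        for (uint32_t i = 0; i < familyCount && !canPresent; ++i)
            vkGetPhysicalDeviceSurfaceSupportKHR(gpu, i, surface, &canPresent);

        std::printf("%s: can present to this surface: %s\n",
                    props.deviceName, canPresent ? "yes" : "no");
    }
}
```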

Alfonse, thank you.

I see your point… What I’m trying to verify is whether the GPU workload really is where it is supposed to be.

On Linux, without Xinerama, you can’t move the app window between monitors: you have the OS desktop only on the primary monitor, and from there you can start app instances on all the other monitors connected to your PC’s GPUs. Now, if you start and stop the instances on one monitor, you don’t see the performance of the other running instances on the other monitors being affected or changed.

What is confusing me is this kind of “flexibility” that I found in the Windows OS, because I’m not sure that this virtual-extended-desktop behavior does not impact the performance of the application instances.

For example, I tried to create two apps:

AppInstance1: vkCreateDevice on GPU1, starting on Monitor1
AppInstance2: vkCreateDevice on GPU2, starting on Monitor2

Now, if I close AppInstance2, I see a small increase in performance in AppInstance1. That should not happen if they were really independent, so this is what is driving me crazy.

I will try to use the GPU-Z application to see if the GPU workloads are really independent…

vkMarco, that performance increase might be coming from the CPU not having to wait to sync 2 GPUs.