New User Consideration Questions

UrsaTeddy

New Member
Sep 29, 2021
Greetings,

I am considering setting up Proxmox here in my home lab after a lot of reading and watching.

However, there is one thing that still confuses me regarding GPUs and passthrough.

Let us assume I have one of the latest GPUs (6900 XT for example).

I want to use the full power of this card for my graphical work and rendering (as well as video rendering).

So I should pass this GPU through.

However, at this point do I still log in remotely from another machine, or do I connect directly to the Proxmox machine to use the VM set up with this card?

If I am logging in remotely, am I not limited by the client machine's ability to render the signal coming over the network - say it is a 4K image/desktop coming through?

And then in addendum,

I want to at times play some games on another VM (the first VM being a different OS).

Can they both use the same GPU via passthrough (with the VMs not running together), or do they need separate GPUs?

Is it possible to run both VMs simultaneously - one browsing the web, the other gaming, for example?

All help appreciated,
D
 
hi,

I want to use the full power of this card for my graphical work and rendering (as well as video rendering).

So I should pass this GPU through.
yes

However, at this point do I still log in remotely from another machine, or do I connect directly to the Proxmox machine to use the VM set up with this card?
while setting up the passthrough you can use the graphical interface or the node shell via SSH. see our wiki page for instructions [0]
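in short, the steps on the wiki page [0] boil down to enabling the IOMMU on the kernel command line and loading the VFIO modules. a rough sketch, assuming an Intel CPU and GRUB (the exact flags depend on your hardware and bootloader):

```shell
# /etc/default/grub -- enable the IOMMU (assumes Intel; use amd_iommu=on for AMD CPUs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# afterwards, apply and reboot:
#   update-grub && reboot
```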

If I am logging in remotely, am I not limited by the client machine's ability to render the signal coming over the network - say it is a 4K image/desktop coming through?
for best graphics performance we recommend SPICE viewer [1].

if you want to reduce the lag from network latency, you could install a desktop manager on your PVE machine [2] and connect directly to the VM from your host, instead of connecting from another host in the network.

however, be aware that if you install a desktop manager on your PVE host and use GPU passthrough, you won't be able to use the passed-through card on the host (since it will most likely be blacklisted on the host so that it can be used inside the VM).
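the blacklisting usually happens via a modprobe config along these lines (a sketch - the driver names to blacklist depend on the card):

```shell
# /etc/modprobe.d/blacklist.conf -- keep host drivers away from the passed-through GPU
# (nouveau/nvidia for NVIDIA cards, amdgpu/radeon for AMD cards)
blacklist nouveau
blacklist nvidia
# apply with: update-initramfs -u, then reboot
```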

another option would be connecting over a decent Ethernet link capable of 1 Gb/s or higher transfer speeds

I want to at times play some games on another VM (the first VM being a different OS).

Can they both use the same GPU via passthrough (with the VMs not running together), or do they need separate GPUs?
as long as the VMs aren't running at the same time, it should be okay.
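for example, both VM configs can simply reference the same PCI address (the VM IDs and the address below are made up); whichever VM starts first claims the card:

```shell
# /etc/pve/qemu-server/101.conf  (first VM)
hostpci0: 0000:02:00,pcie=1,x-vga=1

# /etc/pve/qemu-server/102.conf  (second VM, same GPU)
hostpci0: 0000:02:00,pcie=1,x-vga=1
```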

hope this helps!



[0]: https://pve.proxmox.com/wiki/Pci_passthrough
[1]: https://pve.proxmox.com/wiki/SPICE
[2]: https://pve.proxmox.com/wiki/Developer_Workstations_with_Proxmox_VE_and_X11
 
Greetings Again!

I have a new passthrough question regarding the use of a GPU.

If I have 2 VMs and want to share the GPU between them - only one VM at a time - do I have to constantly switch the GPU between VMs, or are they both set up with passthrough, and whichever VM starts first wins?

Thanks Again,
D
 
You can add the same GPU to multiple VMs. But if one VM using the GPU is already running, the other VMs just won't be able to start. It's a little bit annoying because automated backups won't work either: to do a backup, the VM must be started for a short time. So you should exclude these VMs from the backup job and remember to back them up manually when no VM using that GPU is running.
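A manual stop-mode backup of such a VM could look like this (the VM ID and storage name are hypothetical); run it while no VM is using the GPU:

```shell
# back up VM 101 while it is powered off; 'local' is an example storage name
vzdump 101 --mode stop --storage local --compress zstd
```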
 
One last question if I may ...

Say I have 3 VMs that need to be run ... usually 2 of them simultaneously at most.

How would I handle an "optional" GPU situation whereby if I start VM-1 with the GPU-1, VM-2 will use GPU-2, however if VM-2 starts first it will grab GPU-1 and VM-1 will end up with GPU-2.

Is this possible at all? Or am I going to have to either create different versions of each VM for the appropriate video card?

If this is the case what would be the easiest way to keep those VMs in sync (since the only change would be the video card).

Thanks for all the help thus far,
D
 
Each PCIe device should be in its own IOMMU group and gets its own address based on the PCIe slot it is installed in. So your GPU-1 is, for example, always "0000:02:00" and your GPU-2 is always "0000:03:00". You use this address to tell the VM which PCIe device to pass through, so if you pass GPU-1 through to VM-1 and GPU-2 through to VM-2, they will always use the same GPUs and can run in parallel, because each has its own GPU.
So if you don't have a third GPU for your third VM, you need to decide which of the two GPUs you want to pass through to it, or whether to pass through both. But as long as a GPU is already in use, your VM-3 won't be able to start.
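To find those addresses you can list the VGA-class devices with lspci. The snippet below runs the address extraction on a canned sample (the two cards shown are made up); on a real node you would pipe in `lspci -nn | grep -i vga` instead:

```shell
# sample lspci output (hypothetical cards); on a real host: lspci -nn | grep -i vga
sample='02:00.0 VGA compatible controller [0300]: NVIDIA GA102 [GeForce RTX 3080 Ti] [10de:2208]
03:00.0 VGA compatible controller [0300]: AMD/ATI Navi 14 [Radeon Pro W5500] [1002:7341]'

# prepend the PCI domain to get the addresses used in hostpci entries
addrs="$(echo "$sample" | awk '{print "0000:" $1}')"
echo "$addrs"
```

Those addresses then go into the `hostpci0:` entries of the respective VM configs.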
 
So essentially depending on my workload needs.

I have a 3080 Ti (High End), a Radeon Pro W3100 (Low End), and a Radeon Pro W5500 (Mid Range) setup.

VM 1 (Windows Gaming)
VM 2 (Modelling/Rendering)
VM 3 (Everyday Linux Machine)

I run the VMs on the same machine with a multi-input monitor.

Normally I would want the 3080 Ti for modelling/rendering, the mid-range card for the Windows machine, and the low-end card for the Linux machine.

However at the end of the week (if I have time) I like to do a little gaming. Thus I want the Windows VM to use the High End Card.

From what you have said, I would have to shut down the VMs, change their assigned GPUs, and then boot them up again (since I monitor emails and such on the modelling/rendering machine, for business-isolation reasons).
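For what it's worth, I imagine the swap I am describing could be scripted roughly like this (the VM IDs and PCI addresses here are placeholders):

```shell
#!/bin/sh
# reassign GPUs between two VMs; both must be shut down before qm set
# (VM IDs 100/101 and the PCI addresses are placeholders)
qm shutdown 100 && qm shutdown 101
qm set 100 -hostpci0 0000:01:00,pcie=1,x-vga=1   # Windows VM gets the high-end card
qm set 101 -hostpci0 0000:03:00,pcie=1,x-vga=1   # rendering VM gets the mid-range card
qm start 100 && qm start 101
```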

Is there any way to make this easier? Do I have to create two versions of each VM with the respective cards assigned - say Windows (Mid), Windows (High) - and then boot as required?

Or is there an easier way?

Thanks in advance,
D