Shared GPU for VDI -- Radeon Pro v340 ?

Mar 24, 2025
Has anyone used the Radeon Pro v340 with Proxmox? I was hoping to share this card between multiple VMs, for VDI. The closest I could find was this:
But that appears to be for an older version of Proxmox, and it uses an older/different AMD card.

When searching the forums, the one thread that did focus on the card seems to end with a question (unresolved?):
MxGPU-related threads and documentation, for VDI-capable GPUs, seem to only mention the AMD FirePro S7100X, S7150, and S7150 x2.

Should I use a different card for this workload?
 
Hi,

You can try with AMD FirePro S7100X, S7150, or S7150x2.
The Radeon Pro V340 is not officially supported for MxGPU in Proxmox.
The V340 is based on the Vega architecture, and AMD has not extended MxGPU support to that generation.
 
I have tested sharing the card between 2 VMs, since the card has 2 GPUs on it. Took a little doing, but I got it working. The biggest pain in the butt is that the AMD driver doesn't actually support the device string for the V340L I bought. The driver supports the device string "PCI\VEN_1002&DEV_6864&REV03", but in Windows I was getting a failed install because the passed-through device reported "PCI\VEN_1002&DEV_6864&SUBSYS_0C001002&REV05". So what to do?

The following is at your own risk. I'm not responsible if your system gets compromised because you downloaded a driver from a 3rd party and disabled the security checking that follows here.

1. Reboot into the VM's BIOS and disable Secure Boot.
2. Boot into Windows and disable driver signature checking and integrity checking (from an administrator command prompt). You MUST reboot for the settings to take effect:
bcdedit /set nointegritychecks on
bcdedit /set testsigning on
3. Navigate to the extracted AMD driver directory and remove the revision string from the INF.
(screenshot attachment: 1764396794091.png)
4. Open Device Manager and install the driver for the display device, choosing to manually browse a directory for the driver.
5. You will be warned that the driver is potentially compromised; accept the risk and install.
6. Once installed, you can safely turn integrity checking and test signing back off.

https://www.amd.com/en/support/down...-pro/radeon-pro-v-series/radeon-pro-v340.html

(screenshot attachments: 1764393864026.png, 1764393939020.png, 1764394580960.png)
 
Hi @kowmangler, thank you, it seems a workaround is possible.
Does it work stably in your environment?
Did you test it with some GPU load inside the VMs?
 
6840 in Time Spy.

Fire Strike:
(screenshot attachment: 1653.png)

Didn't try to get dual-GPU working, or invest much into tuning or trying different driver combinations. Decent for $48 with free shipping. I also have the other GPU on a Linux VM at the moment, so one GPU goes to the Windows VM and the other simultaneously to a Linux VM. Scores above are from the free 3DMark benchmarks at default settings, 2K screen resolution.

I really wanted to use it for accelerating Fusion 360 so I can design more complex models to 3D print, and the other GPU for building an AI agent to test on.
 
Very curious how you got the card to pass through; for me it locks up the system whenever I attempt it.

I did run the card in a bare-metal Windows machine, flashed the vBIOS of both GPUs to an ASUS Strix Vega 56, and was able to game and run FurMark no problem. But passthrough to multiple VMs has failed me so far. Maybe I should try my other motherboard.
 

I have the Radeon Pro V340 (32 GB) instead of the V340L. Do you think I could safely flash the vBIOS and pass the card through as you did?

Just curious...
 
My BIOS has SR-IOV, IOMMU, and Above 4G decoding all enabled. On the host I installed the AMD drivers so the card fully initializes as a device there (Debian 13 with Proxmox installed on top; I got the GPU drivers from the Debian repos). If your hardware doesn't support FLR (Function Level Reset), you might have a hard time doing anything with it in passthrough. You may have luck binding it to the vfio drivers so the host never attaches to it at all, but when I did that, the system locked up with FLR-related errors until it finally crashed any time I restarted the VM.

Do bear in mind my GPU is in an actual 1U server, not an *ATX consumer system. As of this writing I have moved the GPU between the host system, a Docker container on the host, back to the host, to a Windows VM, powered off the Windows VM, powered on a Debian VM with both GPUs attached, powered off the Debian VM, removed one GPU, attached one GPU to an Ubuntu VM, and powered on both VMs. All of that works just fine without restarting the host. I would check your logs for FLR (or reset) errors, which may indicate your installed drivers are not recognizing the reset request. If the host does not reset the device, it will eventually hang the system, because the process that consumed it is gone and the device is locked to a non-existent PID.
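Before attaching the card, FLR support can be checked from the host. This is a rough sketch (standard `lspci -vv` output format; the PCI address is a placeholder you'd replace with your card's): `lspci -vv` prints a capability line containing `FLReset+` when the function supports FLR and `FLReset-` when it doesn't.

```python
# Sketch: check whether a PCI function advertises Function Level Reset (FLR)
# by looking for the "FLReset+" flag in `lspci -vv` output.
import re
import subprocess

def supports_flr(lspci_vv_output: str) -> bool:
    """True if the capability dump reports FLReset+."""
    return re.search(r"FLReset\+", lspci_vv_output) is not None

def check_device(pci_addr: str) -> bool:
    """Run lspci -vv for one device (e.g. '41:00.0') and check for FLR."""
    out = subprocess.run(["lspci", "-vv", "-s", pci_addr],
                         capture_output=True, text=True).stdout
    return supports_flr(out)

# Example capability lines in the shape lspci prints them:
has_flr = ("DevCap: MaxPayload 256 bytes, PhantFunc 0\n"
           "\t\tExtTag+ RBE+ FLReset+ SlotPowerLimit 0W")
no_flr = has_flr.replace("FLReset+", "FLReset-")
assert supports_flr(has_flr) is True
assert supports_flr(no_flr) is False
```

On recent kernels you can also read `/sys/bus/pci/devices/<addr>/reset_method` to see which reset mechanisms the kernel will actually use for the device.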
 
I'll trade you my V340L for your V340, lol. I need more GPU memory for this damn inferencing. I wouldn't think you'd need to flash the vBIOS at all to pass that V340 through. If you are using a desktop motherboard instead of a server board, you may not have support for the functions needed to pass through effectively. Make sure SR-IOV, IOMMU, and Above 4G decoding are all enabled. If your BIOS does not support those functions, I don't know that flashing would do much good. I should also mention that I did not use the Proxmox ISO: I installed Debian 13 and then Proxmox, following the install-on-Debian guide, and I installed the amdgpu drivers from the Debian 13 upstream repos.