I have the following:
Motherboard: TRX40 AORUS PRO WIFI
CPU: AMD Ryzen Threadripper 3970X
Memory: VENGEANCE RGB PRO 128GB (4 x 32GB) DDR4 DRAM 3000MHz C16 Memory Kit
The Setup:
With all stock settings, everything I need works. However, the memory only runs at 2133 MHz and is rated for 3200 MHz.
In the BIOS, when I select the XMP profile for 3200 MHz and raise the RAM voltage to the specified 1.35 V, Proxmox still boots.
The system I am running hosts quite a number of VMs, as you would expect from these specifications. All of the VMs that require nothing special work fine... and are now noticeably faster for what I'm doing.
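As a sanity check, the speed the host actually sees after the XMP change can be read from the DMI tables (run as root; the exact output varies by hardware):

```shell
# Compare the rated ("Speed") and currently configured speed of each DIMM
dmidecode -t memory | grep -i speed
```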
The issue:
Any VM using VFIO PCI passthrough now fails to boot after the bootloader. This is true for both Windows and Linux guests. I use PCI passthrough for native GPU access. What am I missing here? I exclude the PCI addresses from the host at boot time, and as I mentioned, the memory speed change is the only difference: all other VMs function normally, and the VMs that need native GPU access work fine at the lower memory clock.
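For context, my host setup follows the standard vfio-pci binding approach; the device IDs below are placeholders, not my actual hardware (find yours with `lspci -nn`):

```shell
# /etc/default/grub — enable the AMD IOMMU on the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — claim the GPU for vfio-pci before the
# host driver can bind it (placeholder vendor:device IDs)
options vfio-pci ids=10de:1b80,10de:10f0

# Apply the changes and reboot:
#   update-grub && update-initramfs -u -k all
```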
My question:
Has anyone else run into this?
Is this an upstream KVM issue?
Is this a known issue?
My preliminary searches of the KVM resources have come up empty, so if someone can point me in the right direction it would help. Thank you.