I'm struggling to get 2 Nvidia GPUs passed through to the same VM.
I've been able to do this before, even passing 4 GPUs to the same VM (using 1x riser cables). But now that I'm trying to pass through the 2 GPUs connected directly to the x16 PCIe slots, I can only pass one or the other (or even both, but to two separate VMs), not both to the same VM.
I've tried this on an Asus ROG X570, an X470 and an ASRock B550, all with the same result.
Here's my /etc/default/grub:
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset iommu=pt pcie_acs_override=downstream,multifunction kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 vfio-pci.ids=10de:2204,10de:147d,38>
GRUB_CMDLINE_LINUX=""
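In case it helps with diagnosing this, here's a rough sketch of how I've been checking the IOMMU grouping on the host (the output is obviously board-specific). My understanding is that each GPU I want to pass through should sit in its own group (together with its own audio/USB functions), not share a group with other devices:

```shell
#!/bin/sh
# List every IOMMU group and the devices in it (run on the Proxmox host).
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        # Strip the path down to the PCI address and look it up with lspci
        lspci -nns "${d##*/}" | sed 's/^/    /'
    done
done
```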
I've also added the vfio modules and driver blacklists:
Code:
echo "vfio" >> /etc/modules
echo "vfio_iommu_type1" >> /etc/modules
echo "vfio_pci" >> /etc/modules
echo "vfio_virqfd" >> /etc/modules
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
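For reference, the entries above only take effect after the initramfs is rebuilt and the host rebooted, which I've been doing with the standard Debian/Proxmox steps:

```shell
# Rebuild the initramfs for all installed kernels, then regenerate the
# GRUB config; the vfio modules and blacklists apply on the next boot.
update-initramfs -u -k all
update-grub
```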
The GPUs are set up using Resource Mapping. Here's the VM config:
Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 16
cpu: host
efidisk0: local-lvm:vm-145-disk-0,efitype=4m,size=4M
hostpci0: mapping=RTX3090_4_1,pcie=1
hostpci1: mapping=RTX3090_4_2,pcie=1
hostpci2: mapping=NVME_4,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 65536
meta: creation-qemu=8.1.5,ctime=1713656569
name: vastai-5
net0: virtio=bc:24:11:2e:d2:3f,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-145-disk-1,discard=on,iothread=1,size=48G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=c90525f7-e6a9-4084-b37c-c364e4e4167a
sockets: 1
vmgenid: 3a3300fe-6ec6-4f2a-a999-ac8ace96aada
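When the VM refuses to start with both GPUs attached, I've been grabbing the host kernel messages right after the failure, roughly like this (the exact messages are hardware-dependent; I'm looking for "group" or "BAR" errors logged at the moment the start fails):

```shell
# Show recent host kernel messages related to vfio / IOMMU / BAR mapping
dmesg | grep -iE 'vfio|iommu|BAR' | tail -n 20
```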
Output of pveversion -v:
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-5-pve)
pve-manager: 8.1.10 (running version: 8.1.10/4b06efb5db453f29)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.3
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.5
libpve-cluster-perl: 8.0.5
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.6
libpve-network-perl: 0.9.6
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.5-1
proxmox-backup-file-restore: 3.1.5-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.5
pve-cluster: 8.0.5
pve-container: 5.0.9
pve-docs: 8.1.5
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.11-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.1.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
Let me know if there's any additional info needed.
Thanks to anyone who might have some advice!