[SOLVED] Host latency goes up and host crashes when passing through NIC and GPU to 2 different VMs

ctf830

Dec 23, 2021
Hi everyone, I recently installed Proxmox on my server. Everything was running fine until I decided to set up PCIe passthrough.

I currently have 2 VMs, running CentOS-8 and Windows 10. I tried to pass through the NIC (Intel I210) to CentOS-8 and the GPU to Windows 10. Everything runs fine when I start only one of the VMs, but when I start both, the host network latency goes up and the host eventually crashes. I found that if I pass both the NIC and the GPU through to the same VM, which is what I am currently doing, the host runs perfectly fine. Below is some information about my system.

root@*****:~# lspci
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers (rev 05)
00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 05)
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 630 (rev 04)
00:14.0 USB controller: Intel Corporation 100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller (rev 31)
00:16.0 Communication controller: Intel Corporation 100 Series/C230 Series Chipset Family MEI Controller #1 (rev 31)
00:17.0 SATA controller: Intel Corporation Q170/Q150/B150/H170/H110/Z170/CM236 Chipset SATA Controller [AHCI Mode] (rev 31)
00:1b.0 PCI bridge: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #17 (rev f1)
00:1b.3 PCI bridge: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #20 (rev f1)
00:1c.0 PCI bridge: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #1 (rev f1)
00:1d.0 PCI bridge: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #9 (rev f1)
00:1f.0 ISA bridge: Intel Corporation Z170 Chipset LPC/eSPI Controller (rev 31)
00:1f.2 Memory controller: Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller (rev 31)
00:1f.3 Audio device: Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller (rev 31)
00:1f.4 SMBus: Intel Corporation 100 Series/C230 Series Chipset Family SMBus (rev 31)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V (rev 31)
01:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

root@*****:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/7/devices/0000:00:1b.3
/sys/kernel/iommu_groups/5/devices/0000:00:17.0
/sys/kernel/iommu_groups/3/devices/0000:00:14.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.6
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/8/devices/0000:00:1c.0
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/4/devices/0000:00:16.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.2
/sys/kernel/iommu_groups/10/devices/0000:00:1f.0
/sys/kernel/iommu_groups/10/devices/0000:00:1f.3
/sys/kernel/iommu_groups/10/devices/0000:00:1f.4
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:00:1d.0

I have also attached the configuration of both VMs.
root@*****:~# cat /etc/pve/qemu-server/101.conf
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
hostpci0: 0000:03:00.0,pcie=1
ide2: local:iso/CentOS-8.5.2111-x86_64-dvd1.iso,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=6.1.0,ctime=1639417288
name: CentOS-8
net0: virtio=AA:94:29:40:84:0F,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-101-disk-0,discard=on,size=117187806K,ssd=1
scsi1: local-hdd:vm-101-disk-0,size=7812500423K
scsihw: virtio-scsi-pci
smbios1: uuid=6180a847-ff19-4cb8-9dc5-d252402c35ae
sockets: 1
vmgenid: e149b412-87a2-42d7-8bb8-73ecbd083492

root@*****:~# cat /etc/pve/qemu-server/102.conf
agent: 1,fstrim_cloned_disks=1
bios: ovmf
boot: order=sata0;ide2
cores: 4
cpu: host
hostpci0: 0000:01:00,pcie=1,x-vga=1
ide2: local:iso/Windows10_21H1.iso,media=cdrom,size=4512000K
machine: pc-q35-6.1
memory: 24576
meta: creation-qemu=6.1.0,ctime=1639399686
name: Windows-10
net0: virtio=56:E3:0F:EA:EF:2F,bridge=vmbr0,firewall=1
numa: 1
ostype: win11
sata0: local-lvm:vm-102-disk-0,discard=on,size=117187806K,ssd=1
sata1: local-hdd1:vm-102-disk-0,size=1872546072887
scsihw: virtio-scsi-pci
smbios1: uuid=3b8f1844-d501-4d39-bc0a-dcb8f92ccb26
sockets: 1
vga: none
vmgenid: f6908c16-724d-416e-99ec-f7f507d6ab98

My Server Setup:
CPU: Intel Core i7-7700T
MB: ASUS Z170M-PLUS
RAM: 32GB DDR4
GPU: ASUS PHOENIX GTX 1050
Additional Nic: Intel I210 lan card

Please let me know if I need to provide additional information. Many thanks!
 
Did you enable the 'ACS override patch'? Can you post some logs from the host (dmesg/journal)?
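For reference, something along these lines should capture the relevant host logs (the grep filter terms are just a suggestion, adjust as needed):

dmesg -T | grep -i -e dmar -e iommu -e vfio    # IOMMU/VFIO-related kernel messages
journalctl -b -1 -p warning                    # warnings and errors from the previous (crashed) boot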
 
This has nothing to do with the particular devices or their assignment.
Your system has 32GB of RAM and you give 8GB and 24GB to your VMs. When using passthrough, all of a VM's memory must be locked into RAM because of PCI DMA. This leaves no memory for Proxmox, the filesystem cache, and virtualization overhead. Therefore, when both VMs use passthrough and are started together, the system runs out of RAM and crashes.
If only one of them uses passthrough, the other VM might not use all of its memory and the system might appear to work fine.
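As a rough illustration (the 16GB figure is just an example, pick whatever fits your workload): 8GB + 24GB = 32GB locked, i.e. your entire physical RAM. Shrinking the Windows VM would leave headroom for the host:

# shrink the Windows 10 VM (VMID 102) from 24GB to 16GB
qm set 102 --memory 16384
# 8GB (VM 101) + 16GB (VM 102) = 24GB locked, leaving ~8GB for Proxmox,
# the filesystem cache and QEMU overhead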
 
Amazing, thanks a lot! I have been digging through the forum for a whole month! You are a saviour!
 
