Hello,
I am having an issue running an Nvidia RTX 2070 Super and an LSI 9211-8i at the same time, each passed through to a separate VM. I can run either VM on its own without issue, but as soon as I try to run both at the same time I get QEMU exit code 1 and the VM fails to start. The two devices are not in the same IOMMU group, and my motherboard manual states that PCIE16_1 can run at x16, or at x8 with PCIE16_2 also running at x8. Am I misunderstanding the manual excerpt below?
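For reference, here is how I confirmed the two cards sit in separate IOMMU groups: a minimal Python sketch (nothing specific to my setup assumed) that just walks /sys/kernel/iommu_groups and prints every device in every group.

```python
#!/usr/bin/env python3
# Minimal sketch: walk /sys/kernel/iommu_groups and print every device in
# every group, to confirm the GPU and HBA really land in separate groups.
import glob
import os

for dev_path in sorted(glob.glob("/sys/kernel/iommu_groups/*/devices/*")):
    parts = dev_path.split("/")
    group = parts[4]                  # the group number in the path
    bdf = os.path.basename(dev_path)  # PCI address, e.g. 0000:01:00.0
    # vendor/device IDs come straight from sysfs, no external tools needed
    with open(os.path.join(dev_path, "vendor")) as f:
        vendor = f.read().strip()
    with open(os.path.join(dev_path, "device")) as f:
        device = f.read().strip()
    print(f"IOMMU group {group}: {bdf} [{vendor}:{device}]")
```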
Copied from Asus manual:
2 x PCIe 3.0/2.0 x16 slots (supports x16, x8/x8, x8/x4+x4*, x8+x4+x4/x0**) - These are PCIE16_1 and PCIE16_2
1 x PCI Express 3.0/2.0 x16 slot (max. at x4 mode, compatible with PCIe x1, x2 and x4 devices) - This is PCIE16_3
4 x PCI Express 3.0/2.0 x1 slots
* For 2 Intel® SSD on CPU support, install a Hyper M.2 X16 card (sold separately) into the PCIeX16_2 slot, then enable this card under BIOS settings.
** For 3 Intel® SSD on CPU support, install a Hyper M.2 X16 card (sold separately) into the PCIeX16_1 slot, then enable this card under BIOS settings.
My VMs are the current stable versions of TrueNAS SCALE and Ubuntu Server (the latter only running Jellyfin with GPU hardware acceleration). Currently I am able to run both VMs with the HBA in PCIE16_3, but there it only runs at x4. With this setup the BIOS states that the GPU is still running at x16 in slot 1, which I do not fully understand unless the four lanes for PCIE16_3 come from the chipset rather than the CPU.
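On the lane question, this is the quick check I run from the host to see what link width each card actually negotiated; the PCI addresses below are placeholders, so substitute whatever lspci reports for the GPU and the HBA.

```python
#!/usr/bin/env python3
# Quick sketch: read negotiated vs. maximum PCIe link width from sysfs.
# The addresses below are placeholders; use the ones lspci shows on your box.
from pathlib import Path

DEVICES = {
    "GPU (RTX 2070 Super)": "0000:01:00.0",  # assumed address, PCIE16_1
    "HBA (LSI 9211-8i)":    "0000:02:00.0",  # assumed address, PCIE16_3
}

for name, bdf in DEVICES.items():
    dev = Path("/sys/bus/pci/devices") / bdf
    current = (dev / "current_link_width").read_text().strip()
    maximum = (dev / "max_link_width").read_text().strip()
    print(f"{name}: negotiated x{current} (card supports up to x{maximum})")
```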