Passthrough: 10x GPU limit on one VM

Feb 28, 2023
Hi there,



We have a Proxmox 7.3.6 server with 2x Intel Xeon Silver 4116 CPUs, a ZFS RAID mirror for the root file system, and 10x RTX GPUs, configured following https://pve.proxmox.com/wiki/Pci_passthrough

Since the root is on ZFS, I added the attributes intel_iommu=on iommu=pt to the line in /etc/kernel/cmdline.
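On a ZFS root, /etc/kernel/cmdline holds the whole kernel command line on a single line, and the boot entries have to be refreshed after editing it. A minimal sketch (the root dataset name is illustrative):

```shell
# /etc/kernel/cmdline is a single line; append the IOMMU flags to it, e.g.:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
# Then sync the change to the ESPs managed by proxmox-boot-tool:
proxmox-boot-tool refresh
```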
We have several VMs, each with 1 passthrough GPU, working correctly, BUT when we assign all 10 GPUs to a single VM, the system drops into the UEFI shell and doesn't seem to detect the QEMU hard disk. If I configure the VM with 7 GPUs or fewer, the system boots and works perfectly. Why?

What is happening and how can I solve it? Any help or guidance would be appreciated.
Kind regards,
MRFactory

Attached is the journal file, as requested.
 

Attachments

Hi,

BUT when we assign all 10 GPUs to a single VM, the system drops into the UEFI shell
What exactly do you mean by this? Is there an error? Can you provide any logs from inside the guest?
 
BUT when we assign all 10 GPUs to a single VM, the system drops into the UEFI shell and doesn't seem to detect the QEMU hard disk. If I configure the VM with 7 GPUs or fewer, the system boots and works perfectly. Why?
I'd guess 10 GPUs plus storage need more PCIe lanes than one CPU has, so you'd need resources from both sockets, and I have no idea whether QEMU is capable of handling that.
 
Hi, thanks for your reply. I hope this helps us find what we're missing.

Hi,


What exactly do you mean by this? Is there an error? Can you provide any logs from inside the guest?

Below is the log. When powering on with 8 or more GPUs, the operating system does not boot: it drops into UEFI, and when accessing the UEFI configurator there is no QEMU HDD. (With 7 GPUs it works correctly.)

Mar 02 12:50:14 pve pvestatd[4041]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - unable to connect to VM 201 qmp socket - timeout after 51 retries
Mar 02 12:50:14 pve pvestatd[4041]: status update time (8.100 seconds)
Mar 02 12:50:24 pve pvestatd[4041]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - unable to connect to VM 201 qmp socket - timeout after 51 retries
Mar 02 12:50:24 pve pvestatd[4041]: status update time (8.094 seconds)
Mar 02 12:50:27 pve kernel: vfio-pci 0000:61:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:27 pve kernel: vfio-pci 0000:61:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:27 pve kernel: vfio-pci 0000:3d:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:27 pve kernel: vfio-pci 0000:3d:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:27 pve kernel: vfio-pci 0000:3e:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:27 pve kernel: vfio-pci 0000:3e:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:27 pve kernel: vfio-pci 0000:3f:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:27 pve kernel: vfio-pci 0000:3f:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:28 pve kernel: vfio-pci 0000:40:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:28 pve kernel: vfio-pci 0000:40:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:28 pve kernel: vfio-pci 0000:41:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:28 pve kernel: vfio-pci 0000:41:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:28 pve kernel: vfio-pci 0000:60:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:28 pve kernel: vfio-pci 0000:60:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:28 pve kernel: vfio-pci 0000:64:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
Mar 02 12:50:28 pve kernel: vfio-pci 0000:64:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
Mar 02 12:50:34 pve pvestatd[4041]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - unable to connect to VM 201 qmp socket - timeout after 51 retries
Mar 02 12:50:34 pve pvestatd[4041]: status update time (8.099 seconds)
Mar 02 12:50:39 pve pvedaemon[3377227]: <root@pam> end task UPID:pve:003B5A82:02D10DBA:64008D5C:qmstart:201:root@pam: OK
Mar 02 12:50:48 pve QEMU[3889832]: kvm: vhost_set_mem_table failed: Argument list too long (7)
Mar 02 12:50:48 pve QEMU[3889832]: kvm: unable to start vhost net: 7: falling back on userspace virtio
Mar 02 12:50:55 pve pvedaemon[3891396]: starting vnc proxy UPID:pve:003B60C4:02D127E9:64008D9F:vncproxy:201:root@pam:
Mar 02 12:50:55 pve pvedaemon[3377227]: <root@pam> starting task UPID:pve:003B60C4:02D127E9:64008D9F:vncproxy:201:root@pam:

 
It could be a problem with your mainboard and its PCIe lanes. Most common Intel boards have (AFAIK) 48 lanes, while each GPU will occupy 4 lanes. If you have additional PCIe components running (an HBA, for example), it simply runs out of lanes.
 
It could be a problem with your mainboard and its PCIe lanes. Most common Intel boards have (AFAIK) 48 lanes, while each GPU will occupy 4 lanes. If you have additional PCIe components running (an HBA, for example), it simply runs out of lanes.
Thanks for your reply, but we don't have any other PCIe devices on the motherboard, apart from the 10x GPUs.
 
Can you post the VM config when you have all GPUs enabled?
Also, can you try booting from a live CD (e.g. Ubuntu/Debian) in that case, run 'lsblk' and 'lspci', and post the output?
 
Can you post the VM config when you have all GPUs enabled?
Also, can you try booting from a live CD (e.g. Ubuntu/Debian) in that case, run 'lsblk' and 'lspci', and post the output?
Hi! Thanks for your reply. Here is the requested info:

VM config:

VM with 7x RTX 2080 Ti GPUs:

agent: 1

bios: ovmf

boot: order=sata0;scsi0;net0

cores: 23

cpu: host

efidisk0: VMPool-1:base-100-disk-0/vm-201-disk-0,efitype=4m,size=1M

hostpci0: 0000:61:00,pcie=1

hostpci1: 0000:3d:00,pcie=1

hostpci2: 0000:3e:00,pcie=1

hostpci3: 0000:3f:00,pcie=1

hostpci4: 0000:40:00,pcie=1

hostpci5: 0000:41:00,pcie=1

hostpci6: 0000:60:00,pcie=1

hostpci7: 0000:64:00,pcie=1

machine: pc-q35-7.1

memory: 102400

meta: creation-qemu=7.1.0,ctime=1675259293

name: MR-RENDER-01

net0: virtio=CA:A1:2E:78:C8:9A,bridge=vmbr0

numa: 1

ostype: win10

parent: Stress

sata0: none,media=cdrom

scsi0: VMPool-1:base-100-disk-1/vm-201-disk-1,backup=0,iothread=1,size=640G,ssd=1

scsihw: virtio-scsi-single

smbios1: uuid=fd1635d8-c128-4e34-9766-31fe209e7ca7

sockets: 2

vmgenid: 82330a59-96e5-4757-8283-45449ca2b032

[Stress]

agent: 1

balloon: 0

bios: ovmf

boot: order=scsi0;ide2;net0

cores: 23

cpu: Skylake-Server

efidisk0: VMPool-1:base-100-disk-0/vm-201-disk-0,efitype=4m,size=1M

hostpci0: 0000:61:00,pcie=1

hostpci1: 0000:3d:00,pcie=1

hostpci2: 0000:3e:00,pcie=1

hostpci3: 0000:3f:00,pcie=1

hostpci4: 0000:40:00,pcie=1

hostpci5: 0000:41:00,pcie=1

hostpci6: 0000:60:00,pcie=1

hostpci7: 0000:64:00,pcie=1

ide2: none,media=cdrom

machine: pc-q35-7.1

memory: 102400

meta: creation-qemu=7.1.0,ctime=1675259293

name: MR-RENDER-01

net0: virtio=CA:A1:2E:78:C8:9A,bridge=vmbr0

numa: 1

ostype: win10

scsi0: VMPool-1:base-100-disk-1/vm-201-disk-1,backup=0,iothread=1,size=640G,ssd=1

scsihw: virtio-scsi-single

smbios1: uuid=fd1635d8-c128-4e34-9766-31fe209e7ca7

snaptime: 1677082500

sockets: 2

vmgenid: 82330a59-96e5-4757-8283-45449ca2b032


** With 8 GPUs it is the same, adding the next PCI device.
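For illustration, the eighth passthrough entry could be added from the CLI like this (the PCI address below is a placeholder, not taken from the configs above):

```shell
# Add one more passed-through GPU to VM 201 (placeholder address)
qm set 201 --hostpci8 0000:65:00,pcie=1
```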


LSBLK:

VM with 8x RTX 2080 Ti GPUs:

ubuntu@ubuntu:~$ lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS

loop0 7:0 0 2.5G 1 loop /rofs

loop1 7:1 0 4K 1 loop /snap/bare/5

loop2 7:2 0 63.3M 1 loop /snap/core20/1822

loop3 7:3 0 240.6M 1 loop /snap/firefox/2356

loop4 7:4 0 91.7M 1 loop /snap/gtk-common-themes/1535

loop5 7:5 0 346.3M 1 loop /snap/gnome-3-38-2004/119

loop6 7:6 0 304K 1 loop /snap/snapd-desktop-integration/49

loop7 7:7 0 49.8M 1 loop /snap/snapd/18357

loop8 7:8 0 45.9M 1 loop /snap/snap-store/638

sr0 11:0 1 4.6G 0 rom /cdrom


VM with 7x RTX 2080 Ti GPUs:

ubuntu@ubuntu:~$ sudo lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS

loop0 7:0 0 2.5G 1 loop /rofs

loop1 7:1 0 63.3M 1 loop /snap/core20/1822

loop2 7:2 0 4K 1 loop /snap/bare/5

loop3 7:3 0 240.6M 1 loop /snap/firefox/2356

loop4 7:4 0 346.3M 1 loop /snap/gnome-3-38-2004/119

loop5 7:5 0 45.9M 1 loop /snap/snap-store/638

loop6 7:6 0 91.7M 1 loop /snap/gtk-common-themes/1535

loop7 7:7 0 49.8M 1 loop /snap/snapd/18357

loop8 7:8 0 304K 1 loop /snap/snapd-desktop-integration/49

sda 8:0 0 640G 0 disk

├─sda1 8:1 0 100M 0 part

├─sda2 8:2 0 16M 0 part

├─sda3 8:3 0 639.4G 0 part

└─sda4 8:4 0 535M 0 part

sr0 11:0 1 4.6G 0 rom /cdrom


LSPCI:

VM with 8x RTX 2080 Ti GPUs:

ubuntu@ubuntu:~$ lspci

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller

00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)

00:10.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:10.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:10.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:10.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)

00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)

00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)

00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)

00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)

00:1c.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1c.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1c.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1c.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)

00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)

00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)

00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)

00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)

00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)

00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)

01:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

01:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

01:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

01:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

03:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

03:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

03:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

03:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

04:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

04:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

04:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

04:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

05:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

05:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

05:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

05:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

06:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

06:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

06:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

06:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

07:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

07:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

07:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

07:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

08:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

08:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

08:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

08:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

09:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

09:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

09:03.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

09:04.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

0a:03.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon

0a:07.0 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)

0a:08.0 Communication controller: Red Hat, Inc. Virtio console

0a:12.0 Ethernet controller: Red Hat, Inc. Virtio network device

0d:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI


VM with 7x RTX 2080 Ti GPUs:

ubuntu@ubuntu:~$ lspci

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller

00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)

00:10.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:10.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:10.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)

00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)

00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)

00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)

00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)

00:1c.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1c.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1c.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1c.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)

00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)

00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)

00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)

00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)

00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)

00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)

01:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

01:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

01:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

01:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

03:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

03:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

03:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

03:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

04:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

04:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

04:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

04:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

05:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

05:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

05:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

05:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

06:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

06:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

06:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

06:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

07:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

07:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

07:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

07:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

08:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

08:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

08:03.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

08:04.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

09:03.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon

09:07.0 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)

09:08.0 Communication controller: Red Hat, Inc. Virtio console

09:12.0 Ethernet controller: Red Hat, Inc. Virtio network device

0c:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
 
OK, the lspci output looks good, but the disk is missing from the lsblk output.

Can you also post the complete journal from a boot inside the live CD? (Again, one with 8 GPUs and one with 7.)

On most Linux distros the command for the journal of the current boot is 'journalctl -b'.

I guess something is not initializing correctly, but I don't know what the reason could be.
 
OK, the lspci output looks good, but the disk is missing from the lsblk output.

Can you also post the complete journal from a boot inside the live CD? (Again, one with 8 GPUs and one with 7.)

On most Linux distros the command for the journal of the current boot is 'journalctl -b'.

I guess something is not initializing correctly, but I don't know what the reason could be.
Hi again, here we go (sorry for the delay, we needed to isolate the full server to get these logs).
I hope this helps to fix it!
 

Attachments

Hmm... OK, AFAICS there is really not much that hints at what is wrong besides these lines:

virtio_scsi virtio3: virtio: device uses modern interface but does not have VIRTIO_F_VERSION_1
virtio_scsi: probe of virtio3 failed with error -22

But I did not find much about these error messages (and AFAIU, they are not the reason it does not work).

Could you try some other VM configurations? E.g. using SeaBIOS instead of OVMF, or a different storage controller? (Just for testing.)
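As a sketch of those two tests (assuming VM ID 201; revert the settings afterwards):

```shell
# Test 1: boot once with SeaBIOS instead of OVMF
qm set 201 --bios seabios
# ...test the boot, then revert:
qm set 201 --bios ovmf

# Test 2: try a different SCSI controller model
qm set 201 --scsihw lsi
```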

I tried it here, but when I pass through 10 NIC PCI devices (virtual functions), it works without problems.
You could also try the new QEMU 7.2 (https://forum.proxmox.com/threads/qemu-7-2-available-on-no-subscription-as-of-now.123421/) if you haven't yet.
 
Hmm... OK, AFAICS there is really not much that hints at what is wrong besides these lines:



But I did not find much about these error messages (and AFAIU, they are not the reason it does not work).

Could you try some other VM configurations? E.g. using SeaBIOS instead of OVMF, or a different storage controller? (Just for testing.)

I tried it here, but when I pass through 10 NIC PCI devices (virtual functions), it works without problems.
You could also try the new QEMU 7.2 (https://forum.proxmox.com/threads/qemu-7-2-available-on-no-subscription-as-of-now.123421/) if you haven't yet.


Hello again. After doing some tests, these are our findings:

This VM was created from an original template with only 1 GPU... and if we create the VM adding 8 GPUs, it doesn't boot or detect the HD, even if we boot from an Ubuntu live CD... but adding 7 GPUs works fine.

Now we have created a new template on the same hardware and Proxmox version, but this time including all 8 GPUs in the template itself. A new VM created from it boots perfectly, with both SeaBIOS and OVMF, and detects the QEMU hard disk.

We don't know why, since both VMs have the same configuration with 8 GPUs.
The environment is in production, so we can't test QEMU 7.2.
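For reference, a new VM from the 8-GPU template would be created with a full clone roughly like this (the template ID 400, the new ID 402, and the name are placeholders):

```shell
# Full clone: an independent copy of the template's disks
qm clone 400 402 --name render-test --full
```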

Thanks and best regards.
 
So it works now? (I cannot imagine how that works if the config is the same...)

Can you maybe post, for both VMs, the output of:
Code:
qm config ID
qm showcmd ID --pretty
 
So it works now? (I cannot imagine how that works if the config is the same...)

Can you maybe post, for both VMs, the output of:
Code:
qm config ID
qm showcmd ID --pretty
Here it is:

QM CONFIG: VM 201 failing, VM 401 working


root@X:~# qm config 201​

agent: 1

bios: ovmf

boot: order=scsi0;sata0;net0

cores: 23

cpu: host

efidisk0: VMPool-1:vm-201-disk-0,efitype=4m,size=1M

hostpci0: 0000:61:00,pcie=1

hostpci1: 0000:64:00,pcie=1

hostpci2: 0000:3e:00,pcie=1

hostpci3: 0000:3f:00,pcie=1

hostpci4: 0000:40:00,pcie=1

hostpci5: 0000:3d:00,pcie=1

hostpci6: 0000:60:00,pcie=1

hostpci7: 0000:41:00,pcie=1

machine: pc-q35-7.1

memory: 102400

meta: creation-qemu=7.1.0,ctime=1675259293

name: MR-RENDER-01

net0: virtio=CA:A1:2E:78:C8:9A,bridge=vmbr0

numa: 1

ostype: win10

sata0: none,media=cdrom

scsi0: VMPool-1:base-100-disk-1/vm-201-disk-1,backup=0,iothread=1,size=640G,ssd=1

scsihw: virtio-scsi-single

smbios1: uuid=fd1635d8-c128-4e34-9766-31fe209e7ca7

sockets: 2

vmgenid: 82330a59-96e5-4757-8283-45449ca2b032


root@X:~# qm config 401​

agent: 1

bios: ovmf

boot: order=scsi0;sata0;net0

cores: 23

cpu: host

efidisk0: VMPool-1:vm-401-disk-0,efitype=4m,size=1M

hostpci0: 0000:61:00,pcie=1

hostpci1: 0000:64:00,pcie=1

hostpci2: 0000:3e:00,pcie=1

hostpci3: 0000:3f:00,pcie=1

hostpci4: 0000:40:00,pcie=1

hostpci5: 0000:3d:00,pcie=1

hostpci6: 0000:60:00,pcie=1

hostpci7: 0000:41:00,pcie=1

machine: pc-q35-7.1

memory: 102400

meta: creation-qemu=7.1.0,ctime=1675259293

name: TESTAzken2

net0: virtio=BE:A1:8E:24:1C:9A,bridge=vmbr0

numa: 1

ostype: win10

sata0: none,media=cdrom

scsi0: VMPool-1:vm-401-disk-1,backup=0,iothread=1,size=640G,ssd=1

scsihw: virtio-scsi-single

smbios1: uuid=639e1d34-7f45-45ce-9d18-11f4d3b59987

sockets: 2

vmgenid: 93f39196-8b27-4e86-8593-a378e5307d4a


QM SHOWCMD: VM 201 failing, VM 401 working


root@X:~# qm showcmd 201 --pretty​

/usr/bin/kvm \

-id 201 \

-name 'MR-RENDER-01,debug-threads=on' \

-no-shutdown \

-chardev 'socket,id=qmp,path=/var/run/qemu-server/201.qmp,server=on,wait=off' \

-mon 'chardev=qmp,mode=control' \

-chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \

-mon 'chardev=qmp-event,mode=control' \

-pidfile /var/run/qemu-server/201.pid \

-daemonize \

-smbios 'type=1,uuid=fd1635d8-c128-4e34-9766-31fe209e7ca7' \

-drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' \

-drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/zvol/VMPool-1/vm-201-disk-0,size=540672' \

-smp '46,sockets=2,cores=23,maxcpus=46' \

-nodefaults \

-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \

-vnc 'unix:/var/run/qemu-server/201.vnc,password=on' \

-no-hpet \

-cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt' \

-m 102400 \

-object 'memory-backend-ram,id=ram-node0,size=51200M' \

-numa 'node,nodeid=0,cpus=0-22,memdev=ram-node0' \

-object 'memory-backend-ram,id=ram-node1,size=51200M' \

-numa 'node,nodeid=1,cpus=23-45,memdev=ram-node1' \

-object 'iothread,id=iothread-virtioscsi0' \

-readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \

-device 'vmgenid,guid=82330a59-96e5-4757-8283-45449ca2b032' \

-device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \

-device 'vfio-pci,host=0000:61:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:61:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' \

-device 'vfio-pci,host=0000:61:00.2,id=hostpci0.2,bus=ich9-pcie-port-1,addr=0x0.2' \

-device 'vfio-pci,host=0000:61:00.3,id=hostpci0.3,bus=ich9-pcie-port-1,addr=0x0.3' \

-device 'vfio-pci,host=0000:64:00.0,id=hostpci1.0,bus=ich9-pcie-port-2,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:64:00.1,id=hostpci1.1,bus=ich9-pcie-port-2,addr=0x0.1' \

-device 'vfio-pci,host=0000:64:00.2,id=hostpci1.2,bus=ich9-pcie-port-2,addr=0x0.2' \

-device 'vfio-pci,host=0000:64:00.3,id=hostpci1.3,bus=ich9-pcie-port-2,addr=0x0.3' \

-device 'vfio-pci,host=0000:3e:00.0,id=hostpci2.0,bus=ich9-pcie-port-3,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:3e:00.1,id=hostpci2.1,bus=ich9-pcie-port-3,addr=0x0.1' \

-device 'vfio-pci,host=0000:3e:00.2,id=hostpci2.2,bus=ich9-pcie-port-3,addr=0x0.2' \

-device 'vfio-pci,host=0000:3e:00.3,id=hostpci2.3,bus=ich9-pcie-port-3,addr=0x0.3' \

-device 'vfio-pci,host=0000:3f:00.0,id=hostpci3.0,bus=ich9-pcie-port-4,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:3f:00.1,id=hostpci3.1,bus=ich9-pcie-port-4,addr=0x0.1' \

-device 'vfio-pci,host=0000:3f:00.2,id=hostpci3.2,bus=ich9-pcie-port-4,addr=0x0.2' \

-device 'vfio-pci,host=0000:3f:00.3,id=hostpci3.3,bus=ich9-pcie-port-4,addr=0x0.3' \

-device 'pcie-root-port,id=ich9-pcie-port-5,addr=10.0,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=5,chassis=5' \

-device 'vfio-pci,host=0000:40:00.0,id=hostpci4.0,bus=ich9-pcie-port-5,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:40:00.1,id=hostpci4.1,bus=ich9-pcie-port-5,addr=0x0.1' \

-device 'vfio-pci,host=0000:40:00.2,id=hostpci4.2,bus=ich9-pcie-port-5,addr=0x0.2' \

-device 'vfio-pci,host=0000:40:00.3,id=hostpci4.3,bus=ich9-pcie-port-5,addr=0x0.3' \

-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' \

-device 'vfio-pci,host=0000:3d:00.0,id=hostpci5.0,bus=ich9-pcie-port-6,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:3d:00.1,id=hostpci5.1,bus=ich9-pcie-port-6,addr=0x0.1' \

-device 'vfio-pci,host=0000:3d:00.2,id=hostpci5.2,bus=ich9-pcie-port-6,addr=0x0.2' \

-device 'vfio-pci,host=0000:3d:00.3,id=hostpci5.3,bus=ich9-pcie-port-6,addr=0x0.3' \

-device 'pcie-root-port,id=ich9-pcie-port-7,addr=10.2,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=7,chassis=7' \



-device 'vfio-pci,host=0000:41:00.0,id=hostpci6.0,bus=ich9-pcie-port-7,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:41:00.1,id=hostpci6.1,bus=ich9-pcie-port-7,addr=0x0.1' \

-device 'vfio-pci,host=0000:41:00.2,id=hostpci6.2,bus=ich9-pcie-port-7,addr=0x0.2' \

-device 'vfio-pci,host=0000:41:00.3,id=hostpci6.3,bus=ich9-pcie-port-7,addr=0x0.3' \

-device 'pcie-root-port,id=ich9-pcie-port-8,addr=10.3,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=8,chassis=8' \

-device 'vfio-pci,host=0000:60:00.0,id=hostpci7.0,bus=ich9-pcie-port-8,addr=0x0.0,multifunction=on' \

-device 'vfio-pci,host=0000:60:00.1,id=hostpci7.1,bus=ich9-pcie-port-8,addr=0x0.1' \

-device 'vfio-pci,host=0000:60:00.2,id=hostpci7.2,bus=ich9-pcie-port-8,addr=0x0.2' \

-device 'vfio-pci,host=0000:60:00.3,id=hostpci7.3,bus=ich9-pcie-port-8,addr=0x0.3' \

-device 'VGA,id=vga,bus=pcie.0,addr=0x1' \

-chardev 'socket,path=/var/run/qemu-server/201.qga,server=on,wait=off,id=qga0' \

-device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \

-device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \

-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \

-iscsi 'initiator-name=iqn.1993-08.org.debian:01:a8866e362c9' \

-device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \

-drive 'file=/dev/zvol/VMPool-1/vm-201-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \

-device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' \

-device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' \

-drive 'if=none,id=drive-sata0,media=cdrom,aio=io_uring' \

-device 'ide-cd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=101' \

-netdev 'type=tap,id=net0,ifname=tap201i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \

-device 'virtio-net-pci,mac=CA:A1:2E:78:C8:9A,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=102' \

-rtc 'driftfix=slew,base=localtime' \

-machine 'type=pc-q35-7.1+pve0' \

-global 'kvm-pit.lost_tick_policy=discard'


root@X:~# qm showcmd 401 --pretty

/usr/bin/kvm \
-id 401 \
-name 'TESTAzken2,debug-threads=on' \
-no-shutdown \
-chardev 'socket,id=qmp,path=/var/run/qemu-server/401.qmp,server=on,wait=off' \
-mon 'chardev=qmp,mode=control' \
-chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
-mon 'chardev=qmp-event,mode=control' \
-pidfile /var/run/qemu-server/401.pid \
-daemonize \
-smbios 'type=1,uuid=639e1d34-7f45-45ce-9d18-11f4d3b59987' \
-drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd' \
-drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/dev/zvol/VMPool-1/vm-401-disk-0,size=540672' \
-smp '46,sockets=2,cores=23,maxcpus=46' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
-vnc 'unix:/var/run/qemu-server/401.vnc,password=on' \
-no-hpet \
-cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt' \
-m 102400 \
-object 'memory-backend-ram,id=ram-node0,size=51200M' \
-numa 'node,nodeid=0,cpus=0-22,memdev=ram-node0' \
-object 'memory-backend-ram,id=ram-node1,size=51200M' \
-numa 'node,nodeid=1,cpus=23-45,memdev=ram-node1' \
-object 'iothread,id=iothread-virtioscsi0' \
-readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
-device 'vmgenid,guid=93f39196-8b27-4e86-8593-a378e5307d4a' \
-device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
-device 'vfio-pci,host=0000:61:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:61:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' \
-device 'vfio-pci,host=0000:61:00.2,id=hostpci0.2,bus=ich9-pcie-port-1,addr=0x0.2' \
-device 'vfio-pci,host=0000:61:00.3,id=hostpci0.3,bus=ich9-pcie-port-1,addr=0x0.3' \
-device 'vfio-pci,host=0000:64:00.0,id=hostpci1.0,bus=ich9-pcie-port-2,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:64:00.1,id=hostpci1.1,bus=ich9-pcie-port-2,addr=0x0.1' \
-device 'vfio-pci,host=0000:64:00.2,id=hostpci1.2,bus=ich9-pcie-port-2,addr=0x0.2' \
-device 'vfio-pci,host=0000:64:00.3,id=hostpci1.3,bus=ich9-pcie-port-2,addr=0x0.3' \
-device 'vfio-pci,host=0000:3e:00.0,id=hostpci2.0,bus=ich9-pcie-port-3,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:3e:00.1,id=hostpci2.1,bus=ich9-pcie-port-3,addr=0x0.1' \
-device 'vfio-pci,host=0000:3e:00.2,id=hostpci2.2,bus=ich9-pcie-port-3,addr=0x0.2' \
-device 'vfio-pci,host=0000:3e:00.3,id=hostpci2.3,bus=ich9-pcie-port-3,addr=0x0.3' \
-device 'vfio-pci,host=0000:3f:00.0,id=hostpci3.0,bus=ich9-pcie-port-4,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:3f:00.1,id=hostpci3.1,bus=ich9-pcie-port-4,addr=0x0.1' \
-device 'vfio-pci,host=0000:3f:00.2,id=hostpci3.2,bus=ich9-pcie-port-4,addr=0x0.2' \
-device 'vfio-pci,host=0000:3f:00.3,id=hostpci3.3,bus=ich9-pcie-port-4,addr=0x0.3' \
-device 'pcie-root-port,id=ich9-pcie-port-5,addr=10.0,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=5,chassis=5' \
-device 'vfio-pci,host=0000:40:00.0,id=hostpci4.0,bus=ich9-pcie-port-5,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:40:00.1,id=hostpci4.1,bus=ich9-pcie-port-5,addr=0x0.1' \
-device 'vfio-pci,host=0000:40:00.2,id=hostpci4.2,bus=ich9-pcie-port-5,addr=0x0.2' \
-device 'vfio-pci,host=0000:40:00.3,id=hostpci4.3,bus=ich9-pcie-port-5,addr=0x0.3' \
-device 'pcie-root-port,id=ich9-pcie-port-6,addr=10.1,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=6,chassis=6' \
-device 'vfio-pci,host=0000:3d:00.0,id=hostpci5.0,bus=ich9-pcie-port-6,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:3d:00.1,id=hostpci5.1,bus=ich9-pcie-port-6,addr=0x0.1' \
-device 'vfio-pci,host=0000:3d:00.2,id=hostpci5.2,bus=ich9-pcie-port-6,addr=0x0.2' \
-device 'vfio-pci,host=0000:3d:00.3,id=hostpci5.3,bus=ich9-pcie-port-6,addr=0x0.3' \
-device 'pcie-root-port,id=ich9-pcie-port-7,addr=10.2,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=7,chassis=7' \
-device 'vfio-pci,host=0000:60:00.0,id=hostpci6.0,bus=ich9-pcie-port-7,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:60:00.1,id=hostpci6.1,bus=ich9-pcie-port-7,addr=0x0.1' \
-device 'vfio-pci,host=0000:60:00.2,id=hostpci6.2,bus=ich9-pcie-port-7,addr=0x0.2' \
-device 'vfio-pci,host=0000:60:00.3,id=hostpci6.3,bus=ich9-pcie-port-7,addr=0x0.3' \
-device 'pcie-root-port,id=ich9-pcie-port-8,addr=10.3,x-speed=16,x-width=32,multifunction=on,bus=pcie.0,port=8,chassis=8' \
-device 'vfio-pci,host=0000:41:00.0,id=hostpci7.0,bus=ich9-pcie-port-8,addr=0x0.0,multifunction=on' \
-device 'vfio-pci,host=0000:41:00.1,id=hostpci7.1,bus=ich9-pcie-port-8,addr=0x0.1' \
-device 'vfio-pci,host=0000:41:00.2,id=hostpci7.2,bus=ich9-pcie-port-8,addr=0x0.2' \
-device 'vfio-pci,host=0000:41:00.3,id=hostpci7.3,bus=ich9-pcie-port-8,addr=0x0.3' \
-device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
-chardev 'socket,path=/var/run/qemu-server/401.qga,server=on,wait=off,id=qga0' \
-device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
-device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
-device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:a8866e362c9' \
-device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \
-drive 'file=/dev/zvol/VMPool-1/vm-401-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
-device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' \
-device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' \
-drive 'if=none,id=drive-sata0,media=cdrom,aio=io_uring' \
-device 'ide-cd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=101' \
-netdev 'type=tap,id=net0,ifname=tap401i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
-device 'virtio-net-pci,mac=BE:A1:8E:24:1C:9A,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=102' \
-rtc 'driftfix=slew,base=localtime' \
-machine 'type=pc-q35-7.1+pve0' \
-global 'kvm-pit.lost_tick_policy=discard'
 
mhmm... 2 weird things I noticed (though I'm not sure how they could affect whether the disk is visible in the guest...):
1. the order of the PCI devices is not the same as in the config (that should not really happen...)
2. the ctime (creation time) in the meta tag of both VMs is identical (did you copy the config over to the new VM?)

otherwise they look identical (aside from using different disks, of course)
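For reference, the ctime lives in the `meta:` line of each VM's config under `/etc/pve/qemu-server/<vmid>.conf`. A minimal sketch of how to pull it out and make it readable (the meta line and epoch value below are made-up examples, not taken from this thread):

```shell
# Sketch: extract the ctime field from a 'meta:' line as stored in
# /etc/pve/qemu-server/<vmid>.conf (example value, not from the actual host).
meta='meta: creation-qemu=7.1.0,ctime=1677581414'
ctime=${meta##*ctime=}          # strip everything up to and including 'ctime='
date -u -d "@$ctime" +%Y-%m-%d  # render the epoch as a UTC date (GNU date)
```

On the host itself, something like `grep '^meta:' /etc/pve/qemu-server/*.conf` would show whether two VMs really share the same creation timestamp, which would suggest a copied config.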
 
