PCIe passthrough

Your cmdline looks strange: it does not match a usual Proxmox host (no amd_iommu=on and no initrd=) and it implies LVM instead of ZFS. Please run cat /proc/cmdline on the Proxmox host.
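For reference, this is a sketch of how IOMMU is usually enabled on a GRUB-booted Proxmox host (assuming an AMD CPU; Intel CPUs use intel_iommu=on instead):

Code:
# /etc/default/grub -- add the IOMMU flags to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# then regenerate the GRUB configuration and reboot
update-grub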
Memory could be an issue: if ZFS uses 50% and the other VM uses 25% (because all memory of a passthrough VM must be pinned), then allocating another 25% could be a problem, since ZFS does not release memory quickly enough. This is easily tested by temporarily reducing the memory amount (a lot); I would expect memory allocation errors in the logs.
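A quick way to test this (a sketch; it assumes the VM id is 100 and an ARC cap of 8 GiB, adjust both to your setup):

Code:
# temporarily reduce the VM's memory (here: 4 GiB) and try starting it again
qm set 100 --memory 4096
qm start 100

# look for allocation failures / OOM messages around the start attempt
dmesg | grep -i -e oom -e "allocation failure"

# optional: cap the ZFS ARC so it leaves room for pinned VM memory
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # make the cap apply at boot as well
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # apply immediately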
Note that KSM won't apply to VMs that use PCI passthrough (because of Direct Memory Access, as I said before).
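You can check whether KSM is merging any pages at all (standard Linux sysfs entries; Proxmox drives KSM via the ksmtuned service):

Code:
# number of pages currently merged/shared by KSM (0 means no sharing)
cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
systemctl status ksmtuned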

EDIT: Timeouts during the start of VMs used to be a sign of taking too long to allocate and map large amounts of pages into actual memory, but I thought that was fixed long ago.
 
This is my whole cmdline:
Code:
root@proxmox1:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.4.114-1-pve root=/dev/mapper/pve-root ro quiet

Nothing more


Regarding IOMMU: it was strange to me in the beginning as well, but I was able to configure IOMMU/SR-IOV directly from the BIOS, and the IOMMU groups are displayed in Proxmox.
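In case someone wants to verify the same thing, the groups can be listed from the shell (standard sysfs paths, nothing Proxmox-specific):

Code:
# confirm the IOMMU is active
dmesg | grep -i -e DMAR -e IOMMU

# list every device together with its IOMMU group
find /sys/kernel/iommu_groups/ -type l | sort -V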

I am able to pass through PCIe devices to other VMs, so I guess this is working?!
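One way to double-check a running VM is to look at which driver the host has bound to the device (a sketch; replace 01:00.0 with your device's PCI address):

Code:
# "Kernel driver in use: vfio-pci" means the device is handed to the VM
lspci -nnk -s 01:00.0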
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!