Hi,
I have a Proxmox 8 system using ZFS in RAIDz1. On this system there is one VM using virtiofsd with the following arguments:
Bash:
/usr/libexec/virtiofsd --log-level info --socket-path /run/virtiofsd/146.sock --shared-dir /srv/146 --cache=auto --announce-submounts --inode-file-handles=mandatory
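In case it is relevant, I check that the daemon and its vhost-user socket are up with something like this (the paths are just from my setup):
Bash:
# Check that virtiofsd is running and that its vhost-user socket exists
pgrep -a virtiofsd
ls -l /run/virtiofsd/146.sock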
The VM itself is launched with the following arguments:
Code:
args: -object memory-backend-memfd,id=mem,size=4096M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/run/virtiofsd/146.sock -device vhost-user-fs-pci,chardev=char0,tag=146
While this VM is running, launching any other VM with a PCI pass-through device takes tens of seconds. Without a limit on the ARC (the host has 128 GB of RAM), the start almost always ends in a timeout, so I effectively cannot start another VM once the virtiofs one is up. After capping the ARC at 16 GB, as the latest Proxmox installer suggests, start-up is still very slow, but the VM does eventually come up:
Bash:
# Limit the ZFS ARC to 16 GiB
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
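To keep that limit across reboots I also put it in a modprobe file (the standard location on Proxmox/Debian); since the root filesystem is on ZFS here, the initramfs has to be refreshed as well:
Bash:
# Persist the 16 GiB ARC limit across reboots
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
# Needed when the root filesystem is on ZFS
update-initramfs -u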
The problem comes from the fact that VMs with PCI pass-through have to allocate all of their memory before starting up, as opposed to regular VMs, which can allocate memory on the fly. Using top I can see the memory usage slowly climbing. But I don't understand why having virtiofsd running on the same host would make that allocation so much slower. I suspect it is related to the shared memory I am using, but I'd prefer not to rely on huge pages as they are less flexible. Does anybody have the same problem, and do you have an idea how to work around it?
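For reference, this is roughly how I watch the allocation crawl while the pass-through VM starts (VMID 150 is a made-up example; the pidfile path is the one Proxmox uses for its QEMU guests):
Bash:
# Watch the pass-through VM's resident/locked memory grow during start-up
VMID=150   # example VMID of the PCI pass-through VM, adjust as needed
PID=$(cat /var/run/qemu-server/${VMID}.pid)
watch -n 1 "grep -E 'VmRSS|VmLck' /proc/${PID}/status"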