[TUTORIAL] virtiofsd in PVE 8.0.x

Another update, but now about Windows 2019.
I noticed that if I set the VirtioFS service to Automatic, Windows doesn't mount the Z: drive as expected.
But if I leave the service set to Manual, the VM starts normally and then the Z: drive appears.
However, as soon as I restart the VM and then try to start the VirtioFS service, the VM hangs.
Interestingly enough, I got these errors in syslog:

Code:
2023-09-25T14:37:44.822911-03:00 pve100 virtiofsd[47960]: Received request: opcode=Init (26), inode=1, unique=2, pid=5988
2023-09-25T14:37:44.822971-03:00 pve100 virtiofsd[47962]: Warning: Cannot announce submounts, client does not support it
2023-09-25T14:37:44.823029-03:00 pve100 virtiofsd[47960]: Creating MountFd: mount_id=284, mount_fd=25
2023-09-25T14:37:44.823075-03:00 pve100 virtiofsd[47960]: Replying OK, header: OutHeader { len: 80, error: 0, unique: 2 }
2023-09-25T14:37:47.878044-03:00 pve100 virtiofsd[47960]: QUEUE_EVENT
2023-09-25T14:37:47.878188-03:00 pve100 virtiofsd[47960]: Received request: opcode=Lookup (1), inode=1, unique=3, pid=5988
2023-09-25T14:37:47.878242-03:00 pve100 virtiofsd[47960]: Replying ERROR, header: OutHeader { error: -2 (No such file or directory), unique: 3, len: 16 }
 
BTW, I removed -numa node from the line below.

Perl:
my $vfs_args = "-object memory-backend-memfd,id=mem,size=$conf->{memory}M,share=on -numa node,memdev=mem";

I can mount now without superblock errors, but it still hangs on VM reboots.
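
To double-check that the edit actually reached the running VM, one can inspect the live QEMU command line on the host. A minimal sketch, assuming VMID 104 (taken from a later post here; replace it with your own VMID):

Bash:
# Read the QEMU PID from the standard PVE pidfile (VMID 104 is only an
# example) and list the memory/virtiofs related arguments.
pid=$(cat /var/run/qemu-server/104.pid)
tr '\0' '\n' < /proc/$pid/cmdline | grep -E 'memory-backend-memfd|numa|vhost-user-fs'

If -numa still shows up there, the hookscript change didn't take effect for that VM.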
 
What is this error message supposed to mean?

Code:
vfs_args: -object memory-backend-memfd,id=mem,size=4096M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/run/virtiofsd/104-mnt_104.sock -device vhost-user-fs-pci,chardev=char0,tag=mnt_104
kvm: total memory for NUMA nodes (0x200000000) should equal RAM size (0x100000000)
TASK ERROR: start failed: QEMU exited with code 1
 
That the memory size you gave (4096M) is not the same as what the VM has?
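
For reference, the two hex sizes in that message decode to 8 GiB versus 4 GiB; a quick shell check:

Bash:
# Decode the two byte counts from the QEMU error into GiB.
echo $(( 0x200000000 >> 30 ))   # memory declared for NUMA nodes: 8 GiB
echo $(( 0x100000000 >> 30 ))   # configured VM RAM:              4 GiB

So the NUMA nodes end up declaring twice the VM's configured RAM, presumably because PVE's generated command line already covers the guest memory layout and the extra -numa node from the hookscript double-counts it; dropping -numa node (as mentioned above) avoids that.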
 
How did you compile it?
I already tried, and it always complains about cargo, Rust and such!
I just downloaded the latest version and copied it to the server.

apt installs 1.7.0;

This is 1.8.0-dirty, and it seems to fix everything for me, but I'm not sure about the "dirty" part!

It might take a while before it makes its way into Debian 12 stable, but I'll take my chances for now.
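
For anyone wanting to do the same, a minimal sketch of that download-and-copy approach, assuming a prebuilt binary from the virtiofsd GitLab releases page (https://gitlab.com/virtio-fs/virtiofsd); the archive name and layout below are assumptions, adjust to whatever the release page actually offers:

Bash:
# Archive name and contents are assumptions; check the releases page.
unzip virtiofsd-v1.8.0.zip -d /tmp/virtiofsd-1.8.0
# Keep the apt-installed 1.7.0 binary around as a fallback.
cp /usr/libexec/virtiofsd /usr/libexec/virtiofsd.apt
install -m 0755 /tmp/virtiofsd-1.8.0/virtiofsd /usr/libexec/virtiofsd
/usr/libexec/virtiofsd --version

Any running virtiofsd instances (and the VMs using them) need to be restarted afterwards, since running daemons keep using the old binary.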
 
Well...
I have no luck here!
I have a fresh installation of PVE 8, and everything is up to date.
I'm using virtiofsd 1.8.0-dirty:

Code:
/usr/libexec/virtiofsd --version
virtiofsd 1.8.0-dirty

But it still gets stuck inside the VM:

Code:
root@debian:~# mount -t virtiofs mnt_210 /mnt/
root@debian:~# df
(VM hangs)

I got this in the syslog:

Code:
2023-09-26T10:54:04.298819-03:00 pve100 QEMU[11968]: kvm: Failed to write msg. Wrote -1 instead of 20.
2023-09-26T10:54:04.298922-03:00 pve100 QEMU[11968]: kvm: Failed to set msg fds.
2023-09-26T10:54:04.298976-03:00 pve100 QEMU[11968]: kvm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
2023-09-26T10:54:04.299026-03:00 pve100 QEMU[11968]: kvm: Failed to set msg fds.
2023-09-26T10:54:04.299081-03:00 pve100 QEMU[11968]: kvm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
2023-09-26T10:54:04.299122-03:00 pve100 QEMU[11968]: kvm: Error starting vhost: 22
2023-09-26T10:54:04.299163-03:00 pve100 QEMU[11968]: kvm: Failed to set msg fds.
2023-09-26T10:54:04.299203-03:00 pve100 QEMU[11968]: kvm: vhost_set_vring_call failed 22
2023-09-26T10:54:04.299242-03:00 pve100 QEMU[11968]: kvm: Failed to set msg fds.
2023-09-26T10:54:04.299291-03:00 pve100 QEMU[11968]: kvm: vhost_set_vring_call failed 22
2023-09-26T10:54:04.299493-03:00 pve100 systemd[1]: virtiofsd-mnt_210@210.service: Deactivated successfully.
2023-09-26T10:54:04.313866-03:00 pve100 QEMU[11968]: kvm: Unexpected end-of-file before all data were read
 
On my test server (Ubuntu 22.04) everything works fine, also when I reboot the VM.

Mounts from fstab also mount without a problem:
Bash:
# Mounts for virtiofs
mnt_pve_cephfs_download  /srv/cephfs-mounts/download  virtiofs  defaults  0  0
mnt_pve_cephfs_multimedia  /srv/cephfs-mounts/multimedia  virtiofs  defaults  0  0

> mount
mnt_pve_cephfs_multimedia on /srv/cephfs-mounts/multimedia type virtiofs (rw,relatime)
mnt_pve_cephfs_download on /srv/cephfs-mounts/download type virtiofs (rw,relatime)
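
After adding such entries, a quick way to test them without rebooting the guest (a small sketch):

Bash:
# Mount everything from fstab that isn't mounted yet, then list the
# virtiofs mounts to confirm both tags came up.
mount -a
findmnt -t virtiofs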

No more superblock errors or hangs.

I plan on running my final setup on Debian 12, but I haven't tested it there yet.

Are you sure the old binary is unloaded!?
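
One way to check is to look at which executable the running daemons were actually started from, and whether the per-mount service was restarted after swapping the binary; a sketch (service name taken from the syslog above):

Bash:
# Show the executable each running virtiofsd daemon was started from.
for pid in $(pgrep -x virtiofsd); do
    echo "PID $pid -> $(readlink /proc/$pid/exe)"
done
# Check the per-mount service seen in the syslog above.
systemctl status virtiofsd-mnt_210@210.service

If the binary was replaced while a daemon was still running, readlink shows the old path with a "(deleted)" suffix, meaning that VM is still being served by 1.7.0 until virtiofsd and the VM are restarted.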