[TUTORIAL] virtiofsd in PVE 8.0.x

Got this running on 4 servers now, mounting the same set of folders on all of them. I also found 'the solution' (a workaround) for the bad superblock error and updated my guide to reflect what I learned.

My systems
Host: Proxmox 8.0

Linux pve01 6.2.16-14-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-14 (2023-09-19T08:17Z) x86_64 GNU/Linux

VMs: Debian 12
Linux dswarm03 6.1.0-12-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux

PS: I reverted to virtiofsd 1.7.2 to test whether that version works too.

It also seems fine, and now I'm sure the next virtiofsd update isn't going to break anything, since reverting from 1.8.0 to 1.7.2 works.

Bash:
-rwxr-xr-x  1 root root 5.4M Oct  3 12:06  virtiofsd
-rwxr-xr-x  1 root root 2.6M Jul 20 09:21 'virtiofsd 1.7.0'
-rwxr-xr-x  1 root root 5.4M Oct  3 12:10 'virtiofsd 1.7.2'
-rwxr-xr-x  1 root root 5.6M Oct  3 12:07 'virtiofsd 1.8.0'
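If you keep dated copies like that, reverting is just a copy. A rough sketch (I'm assuming the copies sit next to the packaged binary in /usr/libexec, the path used later in this thread; restart the virtiofsd units or VMs afterwards):

Bash:
cp '/usr/libexec/virtiofsd 1.7.2' /usr/libexec/virtiofsd
/usr/libexec/virtiofsd --version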
 
I have been testing virtiofsd in one of my VMs for the last few days and noticed that some services that process and copy large files sometimes hang. Could it be that the Perl hookscript suggested here does not use "queue-size=1024" in the VM's args?
I have now added the argument to my shares and will test it over the next few days to see if that was the problem.

So I can now report back: after more than a week of intensive use with a lot of data transfer via virtiofsd, the VM no longer hangs, which previously happened almost daily without the "queue-size" setting.
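For reference, the only change is the queue-size parameter on the vhost-user-fs device; the -device part of the args then looks roughly like this (chardev id and tag are placeholders):

Code:
-device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=100-docker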
 
I ran into a problem where I couldn't mount the same folder in another VM on the same host.

It turns out the tag format used (mnt_pve_cephfs_docker) was not unique, and neither were the slice(s) (system-virtiofsd\x2ddocker.slice).
Uniqueness is required when the same folder needs to be attached to multiple VMs on the same host.

I shortened the tag and included the vmid to make it unique (100-docker). This works as long as you don't have folder names ending in the same string that you also want mounted on the same VM!

The slice(s) (system-virtiofsd\x2d100\x2ddocker.slice) now include the vmid as well, making them unique.
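Inside the VM the share is then mounted by that tag; a quick sketch (the mount point is just an example):

Code:
mount -t virtiofs 100-docker /mnt/docker
# or via /etc/fstab
100-docker  /mnt/docker  virtiofs  defaults  0  0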

Anyone who needs this can replace this section in the hookscript with the version below:
# TODO: Have removal logic. Probably need to glob the systemd directory for matching files.

Perl:
  for (@{$associations{$vmid}}) {
    my $share_id  = $_ =~ m!/([^/]+)$! ? $1 : ''; # only last folder from path
    my $unit_name = 'virtiofsd-' . $vmid . '-' . $share_id;
    my $unit_file = '/etc/systemd/system/' . $unit_name . '@.service';
    print "attempting to install unit $unit_name...\n";
    if (not -d $virtiofsd_dir) {
      print "ERROR: $virtiofsd_dir does not exist!\n";
    }
    else {
      print "DIRECTORY DOES EXIST!\n";
    }

    if (not -e $unit_file) {
      $tt->process(\$unit_tpl, { share => $_, share_id => $share_id }, $unit_file)
        || die $tt->error(), "\n";
      system("/usr/bin/systemctl daemon-reload");
      system("/usr/bin/systemctl enable $unit_name\@$vmid.service");
    }
    system("/usr/bin/systemctl start $unit_name\@$vmid.service");
    $vfs_args .= " -chardev socket,id=char$char_id,path=/run/virtiofsd/$vmid-$share_id.sock";
    $vfs_args .= " -device vhost-user-fs-pci,chardev=char$char_id,tag=$vmid-$share_id";
    $char_id += 1;
  }

This is the first time I'm fooling around with Perl, so my modifications are far from perfect, but they work. (Hopefully someone else can improve on them.)

Also useful to know: the pre/post-stop hooks don't do much yet, so unfortunately you have to clean things up manually until proper pre/post-stop handling is added!

Sometimes even a host reboot is needed to get rid of stale virtiofs leftovers!
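For a single share the manual cleanup boils down to something like this (a rough sketch; the unit and socket names follow the naming from the script above, here VM 100 with share "docker"):

Bash:
systemctl stop virtiofsd-100-docker@100.service
systemctl disable virtiofsd-100-docker@100.service
rm -f /etc/systemd/system/virtiofsd-100-docker@.service
rm -f /run/virtiofsd/100-docker.sock
systemctl daemon-reload
systemctl reset-failed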

 
First of all, thanks for the great tutorial @BobC. I have successfully mounted a virtiofs share into one of my VMs.
However, I do experience quite poor performance. I have also posted over in the Proxmox subreddit (there I mainly compare different iodepths; I have since retested with different block sizes).

I used fio (direct=1, size 10G) to compare "native" write performance on the host with the performance inside the VM.
On the host the most I was able to get was ~1GB/s with 8KiB blocks (100-800 MB/s with other block sizes). When I ran the same tests inside the VM, the most I got was ~400 MB/s with 32 KiB blocks (other block sizes were 100-200 MB/s).
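For reference, the jobs were roughly of this shape (ioengine, iodepth and target file here are examples, not the exact command line I used):

Bash:
fio --name=writetest --rw=write --bs=8k --size=10G --direct=1 \
    --ioengine=libaio --iodepth=16 --filename=/mnt/virtiofs/fio.test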

I'm wondering if those results are to be expected or if there is something wrong here. Perhaps you can share some experience and/or give advice on how to further debug/improve the situation.

PS: I have also repeated the tests on my desktop PC (Arch, virtiofsd v1.8) with virt-manager. There, I did not see any noticeable difference in performance. Quite the opposite: the VM even outperformed my host starting at block sizes of ~8K. Note that I had to enable "shared memory" in virt-manager for it to support virtiofs.
 
Sorry @GamerBene19, my use case was never sequential, it was random, and it was mainly for home directories. NFS performance was substantially worse (for me) when using NFS between the VM and the host, even when the VM was running on the same node as the NFS export. And that was with zfs logbias set to random for the NFS export too.
 
Hello,
I am trying virtiofs for the first time, together with the proposed hookscript. I export 3 folders from the host to a VM, I see 6 virtiofsd processes running, and everything looks OK. But the VM cannot find the tags and therefore cannot mount anything; dmesg shows that the tags are unknown.
They look like "100-somefolder", which is clearly what virtiofsd has configured too; the args line shows the same tags.
virtiofsd is 1.7.2 from the standard package. Is there a way to list all available tags inside the VM?
 
After some reboots and fiddling I managed to mount the virtiofs filesystems and can use them in this VM. But now I have tried to export them via NFS, and that is causing trouble again. The NFS clients seem to see the basic filesystem tree (one can ls and cd into folders), but as soon as I try to open an existing file I get a stale file handle error.
Before that, I got errors from the NFS server saying the exported filesystems need an fsid set. I gave them some small integers (1-6); maybe this is causing the problem on the client?
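For context, the fsid is set per export line in /etc/exports, roughly like this (paths and client network are examples):

Code:
/mnt/virtiofs/share1  192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)
/mnt/virtiofs/share2  192.168.1.0/24(rw,sync,no_subtree_check,fsid=2)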
 
I have been struggling with this for a month.

I have followed the instructions here but cannot get a ZFS directory on the host to mount inside the VM.

Code:
Proxmox 8 - 6.5.11-8-pve

Args that allow the VM to start:
Code:
args: -chardev socket,id=char0,path=/run/vfs600.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=vfs600 -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on

Manually starting virtiofsd
Code:
/usr/libexec/virtiofsd --log-level debug --socket-path=/run/vfs600.sock --shared-dir /zfs --announce-submounts --inode-file-handles=mandatory

Code:
virtiofsd 1.7.2

After running virtiofsd manually and starting the VM, I get this on the host once the VM finishes booting:
Code:
[2024-02-10T04:04:58Z DEBUG virtiofsd::passthrough::mount_fd] Creating MountFd: mount_id=310, mount_fd=10
[2024-02-10T04:04:58Z DEBUG virtiofsd::passthrough::mount_fd] Dropping MountFd: mount_id=310, mount_fd=10
[2024-02-10T04:04:58Z INFO  virtiofsd] Waiting for vhost-user socket connection...
[2024-02-10T04:05:07Z INFO  virtiofsd] Client connected, servicing requests
[2024-02-10T04:05:25Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)

Anybody have any ideas?
 
I can tell you that using the script made by Drallas above in this thread got me a working setup, at least as far as the virtiofs part is concerned. There is a "my guide" link above; follow that. It installs a hookscript, and you will probably have to reboot the VM twice, but then it should work.
 
I got it figured out. I didn't use the Perl scripts from other places because I wanted to track down what my specific issue was. I essentially built a debug process in bash on my system and found that the args line needs to be the first line in the conf file. My system uses NUMA, so using the file backend instead of memfd was my next fix. Right now I am working on getting it to connect reliably, because sometimes the virtiofsd service process doesn't get a connection from the VM on start. My own bash hookscripts only need the vmid to do all the pre and post work, so I am pressing forward now.
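To illustrate the memfd vs. file point, these are the two backend flavours that show up in this thread (sizes are examples; share=on is needed for vhost-user-fs either way, and I'm assuming the backend gets attached via -numa node,memdev=mem as in the earlier examples):

Code:
-object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem
-object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on -numa node,memdev=mem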

EDIT: I got this thing licked now. If your VM is throwing superblock errors on first start when trying to mount a directory, there's a chance the VM didn't connect to the virtiofsd process; when debugging, that was when the issue arose most often.

Also, systemd-run (transient) and .service files seemed to be a bit flaky with virtiofsd specifically (I can use transient units and .service files for other things with zero issues) when VMs need to start/stop, especially "hard stops". I was able to "bash" the crap out of the command so it runs without needing a transient unit or a .service file.

When the VM stops, there really is no need to clean anything up, since virtiofsd (when the VM connected correctly) automatically shuts itself down once it detects the VM shutdown. What I do is just make sure there aren't any existing args line(s) during the pre-start hook and add the correct args line for that VM. I went so far as to write a hookscript entirely in Perl, and still ended up reliably getting it done by just calling the bash script from the Perl hookscript.
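A minimal sketch of that pre-start idea (hypothetical, not my actual script; the socket/tag/size values are placeholders, and I'm assuming the usual /etc/pve/qemu-server/<vmid>.conf location):

Bash:
#!/bin/bash
# pre-start: drop any existing args: line and put our own at the top of the config
VMID="$1"
CONF="/etc/pve/qemu-server/${VMID}.conf"
ARGS="args: -chardev socket,id=char0,path=/run/virtiofsd/${VMID}-docker.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=${VMID}-docker -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on -numa node,memdev=mem"

TMP=$(mktemp)
grep -v '^args:' "$CONF" > "$TMP"                 # remove any existing args: line(s)
printf '%s\n' "$ARGS" | cat - "$TMP" > "$CONF"    # write ours as the first line
rm -f "$TMP"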
 
Hi,
I am trying to get this running on Win11.

But when I try to start the "VirtIO-FS Service" in Win11, I get the following error:
The service "VirtIO-FS service" on "Local Computer" could not be started. Error 1053: The service did not respond to the start or control request in a timely fashion.

I am using the script from Drallas from: https://gist.github.com/Drallas/7e4a6f6f36610eeb0bbb5d011c8ca0be
My config looks like:
Code:
root@pve:/var/lib/vz/snippets# cat virtiofs_hook.conf
100: /mnt/pve/pictures
101: /mnt/pve/pictures

The config of the VM looks good to me:
Code:
root@pve:/var/lib/vz/snippets# qm config 100
args: -object memory-backend-memfd,id=mem,size=16000M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/run/virtiofsd/100-bud.sock -device vhost-user-fs-pci,chardev=char0,tag=100-bud
bios: ovmf
boot: order=scsi0
cores: 8
cpu: x86-64-v2-AES
efidisk0: nob:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hookscript: local:snippets/virtiofs_hook.pl
machine: pc-q35-8.1
memory: 16000
meta: creation-qemu=8.1.5,ctime=1711959689
name: steuern
net0: virtio=BC:24:11:4D:F1:8D,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: nob:vm-100-disk-1,iothread=1,size=80G
scsihw: virtio-scsi-single
smbios1: uuid=037688a8-c1a2-495c-9df0-3d34c6cecc94
sockets: 1
tpmstate0: nob:vm-100-disk-2,size=4M,version=v2.0
unused0: nob:vm-100-disk-3
vmgenid: b9a0c41d-f083-4d75-8eea-ac7c7c888a74

Does someone have an idea how I can solve the problem?
 
The only thing I missed was the installation of WinFsp. (I guess this is the FUSE client on the Windows side; not sure why I don't need this in the Linux VM.)
https://winfsp.dev/
After installation it is working perfectly.
 
