VirtioFS support

yaro014

The new version of QEMU comes with virtio-fs support. Are there any plans for a Proxmox implementation?
The ability to share host directories across VMs, just like mount points within containers, would be a very useful feature if implemented.

Thanks
 
OK, so after a little playing around with it I got it provisionally working, so I thought it might be worth sharing.

Host setup:
Bash:
# Create temporary folder to share
mkdir /tmp/shared/

# Clone the virtio-fs qemu tree and build virtiofsd (https://virtio-fs.gitlab.io/howto-qemu.html)
git clone https://gitlab.com/virtio-fs/qemu.git
cd qemu
./configure    # configure the tree first; see the linked howto for suggested options
make -j 8 virtiofsd

# Run virtiofsd, this will stay open until you shut down your vm
./virtiofsd -f -o clone_fd -o vhost_user_socket=/var/run/vm001-vhost-fs.sock -o source=/tmp/shared/ -o cache=always
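
(Note: as mentioned later in this thread, newer Proxmox versions ship a packaged binary with pve-qemu-kvm, so the clone-and-build step can be skipped. A minimal sketch using that binary with the same socket and share as above:)
Bash:
# assumes pve-qemu-kvm >= 5.2.0-2, which installs /usr/lib/kvm/virtiofsd
/usr/lib/kvm/virtiofsd --socket-path=/var/run/vm001-vhost-fs.sock -o source=/tmp/shared/ -o cache=always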

VM setup:
Bash:
# Edit your VM config and add args as below:
# Add args to a VM, /etc/pve/nodes/<pve>/qemu-server/<xxx>.conf
args: -chardev socket,id=char0,path=/var/run/vm001-vhost-fs.sock -device vhost-user-fs-pci,chardev=char0,tag=myfs -object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem

# size=4G < this needs to be the same as the VM RAM size.
# path=/var/run/vm001-vhost-fs.sock < this is the path to socket created in host setup
# tag=myfs  < this is the tag which you will have to use in guest mount command. Change to your liking.
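
If you prefer not to edit the config file by hand, it should also be possible to set the same line with qm (a sketch, assuming VMID 100; the args option can only be set as root):
Bash:
# hypothetical alternative to editing the .conf directly
qm set 100 --args "-chardev socket,id=char0,path=/var/run/vm001-vhost-fs.sock -device vhost-user-fs-pci,chardev=char0,tag=myfs -object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem"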

Guest setup:
I used vanilla Ubuntu 20.04, but you can use any guest distribution with kernel 5.4 or above (alternatively, follow the guest kernel section here: https://virtio-fs.gitlab.io/howto-qemu.html).

Bash:
mkdir /mnt/tmp
mount -t virtiofs myfs /mnt/tmp
# voilà
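
To make the mount persistent across guest reboots, an fstab entry along these lines should work (same tag and mount point as above; a later post in this thread uses the same syntax):
Bash:
# /etc/fstab in the guest - 'myfs' is the tag from the VM args
myfs /mnt/tmp virtiofs defaults 0 0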

Some benchmarks:

Write test:

Guest with virtiofs:
Bash:
dd if=/dev/zero of=tempfile bs=1M count=4096 conv=fdatasync,notrunc status=progress
3446669312 bytes (3.4 GB, 3.2 GiB) copied, 4 s, 862 MB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 7.8479 s, 547 MB/s

Same test on the host:
Bash:
dd if=/dev/zero of=tempfile bs=1M count=4096 conv=fdatasync,notrunc status=progress
3860856832 bytes (3.9 GB, 3.6 GiB) copied, 2 s, 1.9 GB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 3.92432 s, 1.1 GB/s

Read test:
Guest with virtiofs:
Bash:
dd if=tempfile of=/dev/null bs=1M count=4096 status=progress
3717201920 bytes (3.7 GB, 3.5 GiB) copied, 2 s, 1.9 GB/s
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 2.30858 s, 1.9 GB/s

Same test on the host:
Bash:
dd if=tempfile of=/dev/null bs=1M count=4096 status=progress
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 0.951201 s, 4.5 GB/s

Now, these tests aren't really fair because:
a) the testing was done on /tmp, which is mounted from tmpfs, so the host is always going to have raw speeds
b) it wasn't tested against NFS, SMB or 9p virtio
c) it's not a real test anyway; multiple options should be specified and tuned for best performance, which I am going to play with soon, as this functionality looks amazing.
 
Hi, do you have any suggestions how to ensure virtiofsd runs before VM starts on pve and restarts every time VM is rebooted? Is there any pre/post start/stop hook I can access?

Any plans this feature will make it into future Proxmox release along with newer QEMU versions?
 
Sounds sensible. Note that QEMU 5.2 has been on the pvetest repo for a few days already, but it does not yet include the virtiofsd binary; we'll see if we can build and ship that with the package.
 
Sounds sensible. Note that QEMU 5.2 has been on the pvetest repo for a few days already, but it does not yet include the virtiofsd binary; we'll see if we can build and ship that with the package.
Any news on this? Will this be in the next production release?
 
Any news on this? Will this be in the next production release?
It has been available since pve-qemu-kvm version 5.2.0-2, and the post above describes how one could use it.

But there's no GUI/API implementation planned for integrating virtiofsd yet - if that's what you meant.
 
It has been available since pve-qemu-kvm version 5.2.0-2, and the post above describes how one could use it.

But there's no GUI/API implementation planned for integrating virtiofsd yet - if that's what you meant.
I was referring to the availability of the virtiofsd binary, and I do see it at /usr/lib/kvm/virtiofsd. Thanks.

I wasn't thinking about the GUI but that would be great.
 
Hello there, I've been trying virtiofsd for a few weeks and I really like it, but I'm having a little problem.
I'm using it with Proxmox in a non-standard configuration on my little home server with mdadm RAID, to pass a directory tree from the host to a guest virtual machine.

I know that my setup isn't officially supported, but considering that there are other people here using virtiofsd I hope to receive some feedback. I have also opened an issue on the virtiofsd GitLab, but I'm not sure whether my problem is virtiofsd or Proxmox related.

What happens is that virtiofsd doesn't grant write access to users who should be allowed by group permissions.

In Proxmox I start virtiofsd with this command on the host, and add these args to the VM config:
Code:
/usr/lib/kvm/virtiofsd --socket-path=/var/run/virtiofsd-data-VM1.sock -o source=/media/data/vm1/ -o cache=always -o debug

-chardev socket,id=char0,path=/var/run/virtiofsd-data-VM1.sock -device vhost-user-fs-pci,chardev=char0,tag=data -object memory-backend-memfd,id=mem,size=4096M,share=on -numa node,memdev=mem

In my Virtual machine I mount everything with fstab:
Code:
data /media/data virtiofs rw 0 2

At the moment my virtual machine is Debian Buster with a kernel from backports:
Code:
$ uname -a
Linux debian 5.10.0-0.bpo.5-amd64 #1 SMP Debian 5.10.24-1~bpo10+1 (2021-03-29) x86_64 GNU/Linux

But I have also tested the latest Ubuntu Server, with the same result.

The problem is the following.

The source /media/data/vm1/ passed from the host becomes /media/data in the guest, and in it there is the following directory tree with these permissions:
Code:
$ ls -la /media/data/
total 16
drwxrwxr-x 4 root root  4096 mag 30 17:15 .
drwxr-xr-x 4 root root  4096 apr 27 17:41 ..
drwxrwx--- 7 root users 4096 mag 30 17:14 Documents
drwxrwxr-x 6 root users 4096 lug 15  2019 www

but if I try to write something in /media/data/Documents with a user that is in the users group, I get a permission denied error:
Code:
$ touch /media/data/Documents/test.txt
touch: cannot touch ‘/media/data/Documents/test.txt’: Permission denied

Obviously I don't have any problem writing in /media/data/Documents/ with sudo, and the strange thing is that even though I'm not able to write in the directory with my user, I am still able to read its content:
Code:
$ ls -la /media/data/Documents/
total 8
drwxrwx--- 7 root   users  4096 mag 30 18:26 .
drwxrwxr-x 4 root   root   4096 mag 30 17:15 ..
-rw-r--r-- 1 root   root      0 mag 30 18:26 test2.txt
-rw-r--r-- 1 root   root      0 mag 30 18:26 test3.txt
-rw-r--r-- 1 root   root      0 mag 30 18:26 test.txt

The problem shouldn't be my permissions: in fact, if I copy /media/ with the same permissions (cp -rp) outside the mount point provided by virtiofsd, for example to /tmp/media/data/Documents/, then everything works correctly and I'm able to write and read with the users in the users group.

I also noticed that if I change the ownership of /media/data/Documents from root:users to root:myuser then the user 'myuser' is able to write in that directory.

At the moment, to work around this problem, I had to change the permissions of /media/data/Documents from 770 to 777, but this isn't a great solution. I have also discovered that if I change the permissions from 777 to 776 I still get a permission denied error when I try to write in the directory.

I also tried a virtiofsd compiled from source, from both the stable and dev branches, but with the same result.
I also tried the virtio-fs device backend written in Rust, still the same problem.

Honestly, it seems strange to me that virtiofsd would fail on something as important as group permissions, so I'm thinking this is caused by some mistake of mine.
Otherwise, could it be a Proxmox-related error? Am I doing something wrong?

Has anyone else using virtiofsd noticed a similar problem?
 
...
What happens is that virtiofsd doesn't grant write access to users who should be allowed by group permissions.
...
Is the group id of 'users' the same in the VM and on the host?

When I do
Code:
grep users /etc/group
I get
Code:
users:x:100:
on my host, and the id is also 100 on my Debian VM.

I tested your scenario in a root-owned, 'users'-group-writable virtiofs-mounted directory in my VM and I was able to create a file with a regular user in that dir.
 
Is the group id of 'users' the same in the VM and on the host?
This shouldn't be relevant, but anyway, yes, they are the same, though on the host I don't have the users that are present in the guest.

I tested your scenario in a root-owned, 'users'-group-writable virtiofs-mounted directory in my VM and I was able to create a file with a regular user in that dir.
Could you explain in more detail what you did? Because what I reported seems to be a virtiofsd-related problem. What are the permissions of your directory?
 
This shouldn't be relevant, but anyway, yes, they are the same, though on the host I don't have the users that are present in the guest.


Could you explain in more detail what you did? Because what I reported seems to be a virtiofsd-related problem. What are the permissions of your directory?

Code:
$ id
uid=1012(dockeruser) gid=100(users) groups=100(users)
$ pwd
/mnt/findable/stuff/util/test
$ ls -la
total 34
drwxrwx--- 2 root users 2 Jun 10 22:21 .
drwxr-sr-x 6 root users 6 Jun 10 22:08 ..
$ mount|grep util
stuff on /mnt/findable/stuff/util type virtiofs (rw,relatime)
$ touch foo
$ ls -la
total 35
drwxrwx--- 2 root       users 3 Jun 10 22:22 .
drwxr-sr-x 6 root       users 6 Jun 10 22:08 ..
-rw-r--r-- 1 dockeruser users 0 Jun 10 22:22 foo
 
$ id
uid=1012(dockeruser) gid=100(users) groups=100(users)
One of the conditions for this problem to happen is this:
- the process has gid B, but the group A that owns the directory is only in its list of supplementary groups (see the reproduction sketch at the end of this post).

In your case your user's primary group and supplementary group are the same, and I too had noticed that in similar conditions everything works correctly:
I also noticed that if I change the ownership of /media/data/Documents from root:users to root:myuser then the user 'myuser' is able to write in that directory.
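
A minimal sketch to reproduce that condition in the guest (hypothetical user name, assuming the shared directory is group-owned by 'users' as above):
Code:
# create a user whose primary group differs from 'users', so 'users' only
# ends up in the supplementary group list
useradd -m -G users alice
su - alice -c 'id'
# e.g. uid=1001(alice) gid=1001(alice) groups=1001(alice),100(users)
su - alice -c 'touch /media/data/Documents/test.txt'
# over virtiofs this fails with: Permission denied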
 
Hi, I have this working by launching the virtiofsd process manually before launching the VM. I can read and write files from an Ubuntu VM to the host. But I can't manage to launch virtiofsd automatically using a hookscript. I keep getting this error:

Code:
Jul 28 11:38:44 archive pvedaemon[1461]: <root@pam> starting task UPID:archive:0000296E:000290AE:610125A4:qmstart:102:root@pam:
Jul 28 11:38:44 archive pvedaemon[10606]: start VM 102: UPID:archive:0000296E:000290AE:610125A4:qmstart:102:root@pam:
Jul 28 11:38:44 archive pvedaemon[10606]: hookscript error for 102 on pre-start: command '/var/lib/vz/snippets/virtiofs.pl 102 pre-start' failed: exit code 255
Jul 28 11:38:44 archive pvedaemon[1461]: <root@pam> end task UPID:archive:0000296E:000290AE:610125A4:qmstart:102:root@pam: hookscript error for 102 on pre-start: command '/var/lib/vz/snippets/virtiofs.pl 102 pre-start' failed: exit code 255



My /var/lib/vz/snippets/virtiofs.pl is just a copy of the template /usr/share/pve-docs/examples/guest-example-hookscript.pl with my script added in the pre-start phase:
Code:
if ($phase eq 'pre-start') {

    # First phase 'pre-start' will be executed before the guest
    # is started. Exiting with a code != 0 will abort the start

    print "$vmid is starting, doing preparations.\n";
    
    /usr/lib/vz/snippets/launch-virtio-daemon.sh
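    # (this bare path is not valid Perl and makes the hookscript fail to
    #  compile, hence the "exit code 255"; the working version further down
    #  calls the script via system() instead)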

    # print "preparations failed, aborting."
    # exit(1);

In my launch-virtio-daemon.sh I'm trying to force an exit status of 0 to avoid the error, but no matter what I do I still get that "'pre-start' failed: exit code 255" message

Code:
#!/usr/bin/bash

function launch() {
    /usr/lib/kvm/virtiofsd --daemonize --socket-path=/var/run/vm102-vhost-fs.sock -o source=/rpool/exchange/ -o cache=always & disown
    return 0
}
launch

I'm just trying different things, but nothing works. Is anyone here more experienced with hookscripts?
 
Can you post your entire virtiofs.pl?
This now works (the shell script is now called via Perl's system() instead of a bare path):

Perl:
#!/usr/bin/perl

# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
# pct set <vmid> -hookscript <volume-id>
# qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
# qm set 100 -hookscript local:snippets/hookscript.pl

use strict;
use warnings;

print "GUEST HOOK: " . join(' ', @ARGV). "\n";

# First argument is the vmid

my $vmid = shift;

# Second argument is the phase

my $phase = shift;

if ($phase eq 'pre-start') {

    # First phase 'pre-start' will be executed before the guest
    # is started. Exiting with a code != 0 will abort the start

    print "$vmid is starting, doing preparations.\n";
        
    system('/var/lib/vz/snippets/launch-virtio-daemon.sh');

    # print "preparations failed, aborting."
    # exit(1);

} elsif ($phase eq 'post-start') {

    # Second phase 'post-start' will be executed after the guest
    # successfully started.

    print "$vmid started successfully.\n";

} elsif ($phase eq 'pre-stop') {

    # Third phase 'pre-stop' will be executed before stopping the guest
    # via the API. Will not be executed if the guest is stopped from
    # within e.g., with a 'poweroff'

    print "$vmid will be stopped.\n";

} elsif ($phase eq 'post-stop') {

    # Last phase 'post-stop' will be executed after the guest stopped.
    # This should even be executed in case the guest crashes or stopped
    # unexpectedly.

    print "$vmid stopped. Doing cleanup.\n";

} else {
    die "got unknown phase '$phase'\n";
}

exit(0);


but I still need to fix this /var/lib/vz/snippets/launch-virtio-daemon.sh:
Bash:
#!/usr/bin/bash


function launch() {

    /usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path=/var/run/vm102-vhost-fs.sock -o source=/rpool/exchange/ -o cache=always > /dev/null 2>&1
    return 0
}

launch
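
One possible cleanup (just a sketch, not from the original post) would be to have the hookscript pass the VM ID along, e.g. system('/var/lib/vz/snippets/launch-virtio-daemon.sh', $vmid);, so a single script can follow the vmNNN-vhost-fs.sock naming used earlier in the thread:

Bash:
#!/usr/bin/bash
# hypothetical parameterized variant - the hookscript passes the VM ID as $1
vmid="$1"

/usr/lib/kvm/virtiofsd --syslog --daemonize \
    --socket-path="/var/run/vm${vmid}-vhost-fs.sock" \
    -o source=/rpool/exchange/ -o cache=always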
 
