[TUTORIAL] virtiofsd in PVE 8.0.x

But it's a new installation.
I did that because I was using pve-test as the repository, so today I did a new and fresh installation.
And I did not install virtiofsd via apt, but copied the dirty build to /usr/libexec!
But then how are you sure the dependencies are installed?

https://tracker.debian.org/pkg/rust-virtiofsd

Depends: libc6 (>= 2.34), libcap-ng0 (>= 0.7.9), libgcc-s1 (>= 4.2), libseccomp2 (>= 0.0.0~20120605)

Bash:
apt info virtiofsd
Package: virtiofsd
Version: 1.7.0-1~bpo12+pve1
Built-Using: rustc (= 1.67.1+dfsg1-1~bpo12+pve1)
Priority: optional
Section: otherosfs
Source: rust-virtiofsd
Maintainer: Debian Rust Maintainers <pkg-rust-maintainers@alioth-lists.debian.net>
Installed-Size: 2,741 kB
Depends: libc6 (>= 2.34), libcap-ng0 (>= 0.7.9), libgcc-s1 (>= 4.2), libseccomp2 (>= 0.0.0~20120605)
Breaks: pve-qemu-kvm (<< 8.0), qemu-system-common (<< 1:8.0)
Replaces: pve-qemu-kvm (<< 8.0), qemu-system-common (<< 1:8.0)
Homepage: https://virtio-fs.gitlab.io/
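If you go the manual route and just copy the binary instead of installing the package, a quick sanity check (a sketch, assuming the binary was placed at /usr/libexec/virtiofsd) is to let the dynamic linker resolve its shared-library dependencies:

Bash:
# Any "not found" in this output means a runtime dependency is missing
ldd /usr/libexec/virtiofsd

# The libraries from the Depends line above should already be on a stock Debian/PVE host
dpkg -l libc6 libcap-ng0 libgcc-s1 libseccomp2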
 
Well,
now it works in Windows 2019, without any hangs (as long as I don't use NUMA!).
Turns out I needed to use the perl script hosted here:
https://gist.github.com/Drallas/7e4a6f6f36610eeb0bbb5d011c8ca0be
I can't recall if I used this one before, but it seems to me that I messed something up with the other script.
I am using virtiofsd-dirty and not the one that is packaged by Proxmox.
Anyhow, it freaking works!
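For anyone following along, a hookscript like the one in the gist gets attached to the VM through a snippets-capable storage. A minimal sketch, with the "local" storage and VM 100 as placeholders (the gist itself documents the exact steps):

Bash:
# Copy the script to a storage that has the "snippets" content type enabled
cp 100.pl /var/lib/vz/snippets/100.pl
chmod +x /var/lib/vz/snippets/100.pl

# Attach it to the VM so PVE runs it on pre-start, post-stop, etc.
qm set 100 --hookscript local:snippets/100.pl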
 

Nice that it works for you too, and that the script in my gist was helpful; it's practically the same as the one shared previously in this thread.

Only be careful with auto-updating: 1.8 is at least two steps ahead of stable Debian. The next update may revert it back to 1.7.2, which might or might not work!
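If the Debian package is installed alongside a manually copied 1.8 binary, one way to keep an upgrade from silently replacing it (a sketch; adjust to your setup) is to put the package on hold:

Bash:
# Prevent regular/unattended upgrades from pulling the packaged version back in
apt-mark hold virtiofsd

# Undo the hold once the packaged version has caught up
apt-mark unhold virtiofsd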
 
Many thanks for your script.
Now it's time to move on and try this stuff in a cluster environment.
Let's see what happens when migrating a VM from one node to another with all of this together.
I will keep you posted, guys.
 
Migration is a work in progress, but perhaps it's already working in 1.8… Keep us informed.
Yep!
No luck at all!


Code:
pve01:~# qm migrate 100 pve02 --online
2023-09-26 16:33:38 use dedicated network address for sending migration traffic (172.17.20.20)
2023-09-26 16:33:38 starting migration of VM 100 to node 'pve02' (172.17.20.20)
2023-09-26 16:33:38 starting VM 100 on remote node 'pve02'
2023-09-26 16:33:40 [pve02] hookscript error for 100 on pre-start: command '/vms/snippets/100.pl 100 pre-start' failed: exit code 2
2023-09-26 16:33:40 ERROR: online migrate failure - remote command failed with exit code 255
2023-09-26 16:33:40 aborting phase 2 - cleanup resources
2023-09-26 16:33:40 migrate_cancel
2023-09-26 16:33:42 ERROR: migration finished with problems (duration 00:00:05)
migration problems
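The "exit code 2" comes from the hookscript itself, so a first debugging step (a sketch, using the path from the log above) would be to run the failing phase by hand on the target node and look at its output:

Bash:
# On pve02: check that the script and the virtiofsd binary exist there as well
ls -l /vms/snippets/100.pl /usr/libexec/virtiofsd

# Run the phase that failed during migration and show its exit code
perl /vms/snippets/100.pl 100 pre-start
echo $?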
 
I have been testing virtiofsd in one of my VMs for the last few days and noticed that some services that process and copy large files sometimes hang. Could it be that the hookscript/perl script suggested here does not use "queue-size=1024" in the VM's args?
I have now added the argument to my shares and will test it over the next few days to see whether that was the problem.
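For reference, the kind of line being discussed, roughly following the vhost-user-fs example in the virtio-fs howto, looks like this in /etc/pve/qemu-server/<VMID>.conf. The chardev id, socket path, and tag are placeholders, the memory-backend size must match the VM's RAM, and vhost-user-fs needs shared memory, hence the memfd object:

Code:
args: -chardev socket,id=char0,path=/run/virtiofsd/100-share.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=share -object memory-backend-memfd,id=mem,size=4G,share=on -numa node,memdev=mem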
 
But I don't know exactly why or how.
I found several articles stating that it only helps if your host has multiple physical processors, and that it is then required to have exactly the same number of sockets in the VM. I also saw that it is required for hot-plugging memory and CPU, but I have been unable to get that working (I will start a thread for that at some point). Thanks for sharing, this is all helping me build knowledge.
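In PVE terms (a sketch, with VM 100 and a 2-socket host as placeholders), enabling NUMA and matching the host's socket count would be something like:

Bash:
# Check the host topology first
lscpu | grep -E 'Socket|NUMA'

# Enable NUMA emulation for the guest and mirror the host's socket count
qm set 100 --numa 1 --sockets 2 --cores 4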
 
queue-size=1024 seems an arbitrary, but high, number to me! Where is it coming from?

I tried to read up on qemu/kvm, libvirt and the virtio-net drivers, but couldn't find any resource that helped me understand what the best option is.

What I do understand is that, regardless of whether the value is set explicitly, there should always be a default queue-size in place that is higher than 0, and that a higher queue-size means more impact on RAM.

Can someone elaborate a bit more on this?
 

I got the 1024 from the official virtiofs documentation, but I could not find out exactly what the setting does or what the optimal value is.
https://virtio-fs.gitlab.io/howto-qemu.html

It is also described in the first post of this thread under "1. Add an "args:" line to the VM's VMID.conf file located in /etc/pve/qemu-server".

I have now run it for a few hours with the 1024 and feel that it runs more stably for me now; so far none of the processes using large files via virtiofs have frozen the way they did before without the queue size.
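One way to double-check that the value actually ends up on the QEMU command line (a sketch for VM 100) is:

Bash:
# Show the full command PVE would use for this VM and filter for the setting
qm showcmd 100 --pretty | grep -i queue-size

# For a running VM, inspect the live process instead
ps -ef | grep '[v]host-user-fs'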
 

Thanks, did you also try the DAX option mentioned in that document?

Btw, I have not yet seen freeze issues while testing; it would be nice to understand the rationale behind this to pick the right value.
 

Yeah, I also tested the DAX function because it sounded super useful, but unfortunately it doesn't work yet because support for it is not yet in QEMU.

Maybe the freezes are related to the version: I use virtiofsd 1.8.0, because the 1.7.0 version shipped by Debian always caused the VM to hang at boot. But I see that this suggestion came from you :D Thanks for that btw; since I switched to this version, that problem at least has disappeared.
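If you juggle both builds, it is worth confirming which binary is actually in place on the host (a sketch; the path assumes the default /usr/libexec location):

Bash:
# Print the version of the binary the hookscript/QEMU will launch
/usr/libexec/virtiofsd --version

# See whether (and which) packaged copy is also installed via apt
dpkg -s virtiofsd 2>/dev/null | grep -E '^(Package|Version)'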
 
I've been using virtiofsd for a month or two now, and it does not appear particularly stable. I sometimes get timeouts when starting VMs and a hanging filesystem after a `reboot` (requiring qm stop + qm start), and restarting from the GUI also does not seem to leave you with a working fs.
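For anyone hitting the same hang, the workaround described above boils down to a full stop/start instead of a guest reboot, plus checking whether the host-side daemon died. A sketch with VM 100 as a placeholder:

Bash:
# A guest `reboot` keeps the same QEMU process, so a dead vhost-user backend stays dead;
# a full stop/start spawns a fresh virtiofsd via the hookscript
qm stop 100
qm start 100

# Check whether the host-side virtiofsd for the share is still running
pgrep -a virtiofsd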
 
