#!/usr/bin/bash
function launch() {
nohup /usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path=/var/run/vm102-vhost-fs.sock -o source=/rpool/exchange/ -o cache=always &> /dev/null &
return 0
}
launch
nohup COMMAND &>/dev/null &
Oh, that sounds much more elegant. Sorry, I got lost reading your post; I had already read it yesterday and it seemed much more complicated, with multiple mounts and dynamic configuration, whereas I'm just trying a single mount point. I'll check it out in more depth.

You can make it a systemd service so you don't have to daemonize, just like it's shown in the post I mentioned earlier.
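For what it's worth, here is a minimal sketch of such a unit, assuming the same binary, socket path and share as the script above; the unit name and the Before= ordering are illustrative, and since virtiofsd exits when the guest disconnects you may want to adjust Restart= to taste:

# Hypothetical systemd unit so virtiofsd runs in the foreground under systemd
# instead of daemonizing itself (paths taken from the script above)
cat > /etc/systemd/system/virtiofsd-vm102.service <<'EOF'
[Unit]
Description=virtiofsd vhost-user socket for VM 102
Before=pve-guests.service

[Service]
Type=simple
ExecStart=/usr/lib/kvm/virtiofsd --syslog --socket-path=/var/run/vm102-vhost-fs.sock -o source=/rpool/exchange/ -o cache=always
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now virtiofsd-vm102.service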
#!/bin/bash
function launch() {
nohup /usr/lib/kvm/virtiofsd -f --socket-path=/var/run/shared-fs.sock -o source=/zp0/ct0/subvol-103-disk-1 --cache=always --syslog --daemonize &> /dev/null &
return 0
}
if [ "$2" = "pre-start" ]; then
launch
fi
exit 0
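A script like this can itself be attached as the hookscript, since Proxmox passes the VMID as the first argument and the phase as the second. A sketch, assuming the script was saved as launch-virtiofsd.sh (an illustrative name) and VM 103 is the guest in question:

# Make the script available as a snippet and attach it to the guest
cp launch-virtiofsd.sh /var/lib/vz/snippets/
chmod +x /var/lib/vz/snippets/launch-virtiofsd.sh
qm set 103 --hookscript local:snippets/launch-virtiofsd.sh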
I'm using it with Proxmox.
This shouldn't be relevant; anyway, yes, they are the same, but on the host I don't have the users that are present in the guest.
Could you explain in more detail what you did? What I reported seems to be a virtiofsd-related problem. What are the permissions on your directory?
hugetlbfs /dev/hugepages hugetlbfs defaults
echo 2000 > /proc/sys/vm/nr_hugepages
/usr/bin/virtiofsd -f -d --socket-path=/var/<socketname>.sock -o source=/mnt/sharevolumes/fileshare -o cache=always -o posix_lock -o flock
args: -chardev socket,id=char0,path=/var/virtiofsd1.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=fileshare -object memory-backend-memfd,id=mem,hugetlb=yes,hugetlbsize=2097152,prealloc=yes,size=3G,share=on -mem-path /dev/hugepages -numa node,memdev=mem
mount -t virtiofs <tag> <mount-point>
mount -t virtiofs fileshare /mnt/fileshare/
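If you want the guest-side mount to come back automatically after a reboot, the virtiofs tag can also go into the guest's /etc/fstab. A minimal sketch, assuming the fileshare tag and mount point used above:

# Inside the guest: mount the virtiofs share at boot via /etc/fstab
echo 'fileshare /mnt/fileshare virtiofs defaults 0 0' >> /etc/fstab
mount -a   # apply without rebooting; already-mounted entries are skipped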
args: -chardev socket,id=char0,path=/var/virtiofsd1.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=fileshare -object memory-backend-memfd,id=mem,hugetlb=yes,hugetlbsize=2097152,prealloc=yes,size=3G,share=on -mem-path /dev/hugepages -numa node,memdev=mem
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom,size=1432338K
memory: 3072
meta: creation-qemu=6.2.0,ctime=1654416192
name: cloudinittests
net0: virtio=C6:28:4A:61:E7:AA,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: wd2tb:vm-110-disk-0,size=10G
scsihw: virtio-scsi-pci
smbios1: uuid=3939eba6-46aa-4e53-860d-b039eecbcfd6
sockets: 1
vmgenid: 70e27a5e-c8cd-43f7-ad6d-0e93980fb691
Hello, I'm about to try this, and I will read up on https://virtio-fs.gitlab.io/howto-qemu.html, but I just wanted to ask in advance: did you actually try enabling NUMA, and what happened?

As mentioned above, make sure you do not enable NUMA in Proxmox.
Hello, I asked a Linux OpenZFS developer about this, and he says it will probably not work. He also said that he is planning to start using virtiofs himself at some point, and when he does he will take a look at DAX and the possibility of adding OpenZFS support for it.

Hello! Is there any chance we will see support for the DAX feature of virtiofs? I tried to enable it but got an error message. This feature should help with performance because it avoids duplicating files between host and guest memory. It requires virtiofs to be built with some flags.
DAX info
https://virtio-fs.gitlab.io/howto-qemu.html says the flags are
CONFIG_DAX
CONFIG_FS_DAX
CONFIG_DAX_DRIVER
CONFIG_ZONE_DEVICE
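A quick way to check whether a guest kernel was built with those options is to grep its config; a sketch that assumes the config is shipped under /boot, as on Debian/Ubuntu:

# Look for the DAX-related options in the running guest kernel's config
grep -E '^CONFIG_(DAX|FS_DAX|DAX_DRIVER|ZONE_DEVICE)=' /boot/config-"$(uname -r)"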
/etc/passwd. So, I'm pretty sure that I'm doing something wrong here. I don't remember whether it was pveum or whether it was useradd on the host itself, but either way I got it working.

Right now I run /usr/lib/kvm/virtiofsd manually, each time I stop and start the VM guests. I've noticed that if I tried to use the same socket twice (i.e. create one socket and try to have two VMs connect to it), it doesn't let me do that. Does this mean that I need to run/start a new virtiofsd for each VM/guest? I get the feeling that this would be somewhat inefficient because you'd end up spawning as many virtiofsd processes as you have VMs running.

I put the perl script into /var/lib/vz/snippets and also the bash shell script to start it. I also read @yaro014's thread about trying to pass arguments to those scripts (because @Rphoton's example has, for example, the VMID hardcoded into the script). So, how do I make it so that it will automatically open the socket for each of my VMs after each stop and/or before each start? I'm not a programmer nor a developer, so I can see the scripts, but I am not knowledgeable enough to make sense of how the scripts will be able to create a new socket if the VMID is hardcoded into it (or does that not matter?).

Put the virtiofs.pl perl script as well as the launch-virtio-daemon.sh into /var/lib/vz/snippets/, then run:

qm set 100 --hookscript local:snippets/virtiofs.pl
#!/usr/bin/bash
function launch() {
nohup /usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path=/var/run/shared-fs.sock -o source=/myfs/ -o cache=always &> /dev/null &
return 0
}
launch
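On the one-socket-per-VM question above: yes, as observed, a given vhost-user socket can only be used by one VM, so each guest needs its own virtiofsd process and socket. To avoid hardcoding values per VM, the launcher can derive the socket path from the VMID that the hookscript receives as its first argument. A sketch under that assumption (the share path and file name are illustrative):

#!/usr/bin/bash
# Hypothetical per-VM launcher, called with the VMID as $1 and the phase as $2,
# so the socket path no longer needs to be hardcoded for each VM.
vmid="$1"
phase="$2"

if [ "$phase" = "pre-start" ]; then
    nohup /usr/lib/kvm/virtiofsd --syslog --daemonize \
        --socket-path="/var/run/vm${vmid}-vhost-fs.sock" \
        -o source=/myfs/ -o cache=always &> /dev/null &
fi
exit 0

Each VM's args line then has to point at its own socket (e.g. path=/var/run/vm110-vhost-fs.sock for VMID 110), and if the launcher is called from the perl hookscript rather than attached directly, the arguments need to be passed along, e.g. system('/var/lib/vz/snippets/launch-virtio-daemon.sh', $vmid, 'pre-start').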
"--cache=always" doesn't work. (At least as of this writing, with Proxmox VE 7.3-3.)
#!/usr/bin/perl
# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
# pct set <vmid> -hookscript <volume-id>
# qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
# qm set 100 -hookscript local:snippets/hookscript.pl
use strict;
use warnings;
print "GUEST HOOK: " . join(' ', @ARGV). "\n";
# First argument is the vmid
my $vmid = shift;
# Second argument is the phase
my $phase = shift;
if ($phase eq 'pre-start') {
# First phase 'pre-start' will be executed before the guest
# is started. Exiting with a code != 0 will abort the start
print "$vmid is starting, doing preparations.\n";
system('/var/lib/vz/snippets/launch-virtio-daemon.sh');
# print "preparations failed, aborting."
# exit(1);
} elsif ($phase eq 'post-start') {
# Second phase 'post-start' will be executed after the guest
# successfully started.
print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
# Third phase 'pre-stop' will be executed before stopping the guest
# via the API. Will not be executed if the guest is stopped from
# within, e.g. with a 'poweroff'
print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
# Last phase 'post-stop' will be executed after the guest stopped.
# This should even be executed in case the guest crashes or stopped
# unexpectedly.
print "$vmid stopped. Doing cleanup.\n";
} else {
die "got unknown phase '$phase'\n";
}
exit(0);
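Since the hookscript is just an executable that receives the VMID and the phase as its two arguments, it can also be exercised by hand before being attached to a guest; the path and VMID here are just examples:

# Run the hookscript phases manually (note: 'pre-start' will actually launch virtiofsd)
perl /var/lib/vz/snippets/virtiofs.pl 100 pre-start
perl /var/lib/vz/snippets/virtiofs.pl 100 post-stop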
I was able to run mount -t virtiofs myfs /myfs without any issues.

I then installed nfs-kernel-server on the host itself, created the NFS export, and then mounted the same directory in SLES over NFS instead. It worked well enough: I could write to it (via the virtio network interface) at 301 MB/s and read from it at 785 MB/s, vs. the virtiofs mount that Ubuntu was able to use, which could write at around 496 MB/s and read at 1700 MB/s (averaged between two Ubuntu VMs reading from/writing to the host separately and sequentially).

Stupid question -- how do I make this persistent through reboots?

Set up a certain space for hugepages:
echo 2000 > /proc/sys/vm/nr_hugepages
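To make that hugepage reservation persist across reboots, the same value can be set via sysctl instead of echoing it after every boot; a sketch using a sysctl drop-in (the file name is arbitrary):

# Persist the hugepage count and apply it now without rebooting
echo 'vm.nr_hugepages = 2000' > /etc/sysctl.d/80-hugepages.conf
sysctl -p /etc/sysctl.d/80-hugepages.conf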
I don't know about NUMA. (I've been reading the documentation about it as well, and I'm still a little fuzzy as to what the benefits of NUMA would be for a single-socket system.)

For me, since virtiofs doesn't support NUMA (the VM won't start with NUMA enabled, just as user lpfister9 says), it's basically useless. The Intel docs I linked to describe how to tie hugepages to a specific NUMA node, but virtiofs doesn't care, and as such it's useless.
virtiofs is useless, and basically, as this is 2023, Linux seems useless too... still struggling with Wayland, Vulkan, GPU drivers, the works. We're lucky to have a functioning bash environment, and we're supposed to be content with that, lol. It's all BS.
Well, there's something called 9pfs, which recently got a major performance improvement.

I don't know about NUMA. (I've been reading the documentation about it as well, and I'm still a little fuzzy as to what the benefits of NUMA would be for a single-socket system.)
Like, I understand how NUMA can be useful for a multi-socket system, but with consolidation, the number of sockets may be declining.
For the features that you are looking for, I am not sure if there's a better alternative that you can get with a free option.
(I'm not sure if VMware ESXi supports the features that you're looking for, but my understanding is that they DON'T have a free option where you can download it and try it out. A LOT of tech YouTubers talk about the per-CPU-core licensing cost of VMware ESXi.)
In my testing of virtio-fs, it's faster than Oracle VirtualBox (with shared folders between the host and the VM guests). xcp-ng doesn't even support it, because Xen itself is a Type-1 hypervisor. TrueNAS' VM capabilities also didn't have anything like virtio-fs (that I was able to find during the course of my research).
So, I'm not sure what other option is there that would be able to do the kind of things, with the kind of features that you're looking for.
Not available for xcp-ng. (At least not when I tried to google it.)

Well, there's something called 9pfs, which recently got a major performance improvement.
Varies.

My main complaint is with Linux overall: sure, you can build something functional, but only by spending hours, days, weeks and months studying every single little detail yourself. Everything changes and breaks all the time.
The main problem is with all these stupid distros: there are thousands of two- or three-person projects that never amount to anything and always die. Nobody is focused on a single OS. Then you go to Debian and it's Debian OR Ubuntu, but no single project. Then there are the dependencies, where any little piece of code can break the whole system, because everything depends on everything else. And unless every little piece of code is just perfect, your whole system is nothing but a heap of trash.
Update your Linux GPU driver? Well, that is a life-or-death event.
The kernel is stupid, the package managers are stupid, and the whole system is stupid.
The nail in the coffin for virtiofsd on Proxmox 8.

What are you talking about? The virtiofsd daemon got rewritten in Rust and now lives in a separate repository, as described in the beta release notes: "We then upstreamed packaging the new rust replacement of virtiofsd for Debian, and will ship it for the final release in Proxmox VE 8, where it will be available as a separate package called virtiofsd."

I apologize, I did not read the notes.