Thanks a lot for the suggestion~ I've built it and was just about to reboot the servers :] Good day!

Yeah! I strongly recommend you give 1.8.0-dirty a try. It works perfectly. See my previous messages in this thread. Good luck!
Let us know how you get on and whether it improves things! I will start my own virtiofsd adventures in the next couple of weeks, so thanks to everyone on this thread for blazing the way!
I ran into the `bad superblock` error and updated my guide to reflect my learnings.

-rwxr-xr-x 1 root root 5.4M Oct 3 12:06 virtiofsd
-rwxr-xr-x 1 root root 2.6M Jul 20 09:21 'virtiofsd 1.7.0'
-rwxr-xr-x 1 root root 5.4M Oct 3 12:10 'virtiofsd 1.7.2'
-rwxr-xr-x 1 root root 5.6M Oct 3 12:07 'virtiofsd 1.8.0'
I have been testing virtiofsd in one of my VMs for the last few days and noticed that some services that process and copy large files sometimes hang. Could it be that the hookscript (the Perl script suggested here) does not add "queue-size=1024" to the VM's args?
I have now added the argument to my shares and will test it over the next few days to see whether that was the problem.
Sounds good!

So I can now report back: after over a week of intensive use with a lot of data transfer via virtiofsd, the VM no longer hangs. Previously, without the "queue-size" setting, it hung almost daily.
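For anyone else chasing the same hang: the fix is adding `queue-size=1024` to the `-device vhost-user-fs-pci` option in the VM's `args`. A sketch of where it goes, based on the args lines shown elsewhere in this thread (the VM ID, socket path, and tag are just the values from those examples):

```
args: -object memory-backend-memfd,id=mem,size=2048M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/run/virtiofsd/2000-shared.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=2000-shared
```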
# TODO: Have removal logic. Probably need to glob the systemd directory for matching files.
for (@{$associations{$vmid}}) {
    my $share_id = $_ =~ m!/([^/]+)$! ? $1 : ''; # only last folder from path
    my $unit_name = 'virtiofsd-' . $vmid . '-' . $share_id;
    my $unit_file = '/etc/systemd/system/' . $unit_name . '@.service';
    print "attempting to install unit $unit_name...\n";
    if (not -d $virtiofsd_dir) {
        print "ERROR: $virtiofsd_dir does not exist!\n";
    }
    else {
        print "DIRECTORY DOES EXIST!\n";
    }
    if (not -e $unit_file) {
        $tt->process(\$unit_tpl, { share => $_, share_id => $share_id }, $unit_file)
            || die $tt->error(), "\n";
        system("/usr/bin/systemctl daemon-reload");
        system("/usr/bin/systemctl enable $unit_name\@$vmid.service");
    }
    system("/usr/bin/systemctl start $unit_name\@$vmid.service");
    $vfs_args .= " -chardev socket,id=char$char_id,path=/run/virtiofsd/$vmid-$share_id.sock";
    $vfs_args .= " -device vhost-user-fs-pci,chardev=char$char_id,tag=$vmid-$share_id";
    $char_id += 1;
}
Proxmox 8 - 6.5.11-8-pve
args: -chardev socket,id=char0,path=/run/vfs600.sock -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=vfs600 -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on
/usr/libexec/virtiofsd --log-level debug --socket-path=/run/vfs600.sock --shared-dir /zfs --announce-submounts --inode-file-handles=mandatory
virtiofsd 1.7.2
[2024-02-10T04:04:58Z DEBUG virtiofsd::passthrough::mount_fd] Creating MountFd: mount_id=310, mount_fd=10
[2024-02-10T04:04:58Z DEBUG virtiofsd::passthrough::mount_fd] Dropping MountFd: mount_id=310, mount_fd=10
[2024-02-10T04:04:58Z INFO virtiofsd] Waiting for vhost-user socket connection...
[2024-02-10T04:05:07Z INFO virtiofsd] Client connected, servicing requests
[2024-02-10T04:05:25Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)
The service "VirtIO-FS service" on "Local Computer" could not be started. Error 1053: The service did not respond to the start or control request in a timely fashion.
root@pve:/var/lib/vz/snippets# cat virtiofs_hook.conf
100: /mnt/pve/pictures
101: /mnt/pve/pictures
root@pve:/var/lib/vz/snippets# qm config 100
args: -object memory-backend-memfd,id=mem,size=16000M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/run/virtiofsd/100-bud.sock -device vhost-user-fs-pci,chardev=char0,tag=100-bud
bios: ovmf
boot: order=scsi0
cores: 8
cpu: x86-64-v2-AES
efidisk0: nob:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hookscript: local:snippets/virtiofs_hook.pl
machine: pc-q35-8.1
memory: 16000
meta: creation-qemu=8.1.5,ctime=1711959689
name: steuern
net0: virtio=BC:24:11:4D:F1:8D,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsi0: nob:vm-100-disk-1,iothread=1,size=80G
scsihw: virtio-scsi-single
smbios1: uuid=037688a8-c1a2-495c-9df0-3d34c6cecc94
sockets: 1
tpmstate0: nob:vm-100-disk-2,size=4M,version=v2.0
unused0: nob:vm-100-disk-3
vmgenid: b9a0c41d-f083-4d75-8eea-ac7c7c888a74
root@pve1:~# fio --name=write_test --directory=/mnt/bindmounts/shared --size=1G --time_based --runtime=60 --rw=write --bs=4k --numjobs=1 --iodepth=1 --group_reporting
write_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=693MiB/s][w=177k IOPS][eta 00m:00s]
write_test: (groupid=0, jobs=1): err= 0: pid=188545: Thu Nov 21 16:15:52 2024
write: IOPS=125k, BW=489MiB/s (513MB/s)(28.7GiB/60001msec); 0 zone resets
clat (usec): min=4, max=74942, avg= 7.57, stdev=83.91
lat (usec): min=4, max=74942, avg= 7.63, stdev=83.95
clat percentiles (usec):
| 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5],
| 30.00th=[ 5], 40.00th=[ 5], 50.00th=[ 5], 60.00th=[ 5],
| 70.00th=[ 5], 80.00th=[ 6], 90.00th=[ 11], 95.00th=[ 21],
| 99.00th=[ 31], 99.50th=[ 58], 99.90th=[ 182], 99.95th=[ 359],
| 99.99th=[ 1139]
bw ( KiB/s): min=111000, max=748200, per=99.77%, avg=499877.48, stdev=179396.54, samples=119
iops : min=27750, max=187050, avg=124969.31, stdev=44849.12, samples=119
lat (usec) : 10=89.93%, 20=4.46%, 50=5.06%, 100=0.24%, 250=0.24%
lat (usec) : 500=0.04%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%
cpu : usr=12.41%, sys=75.94%, ctx=55352, majf=4, minf=12
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,7515732,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=489MiB/s (513MB/s), 489MiB/s-489MiB/s (513MB/s-513MB/s), io=28.7GiB (30.8GB), run=60001-60001msec
root@ubuntu01:/home/user# fio --name=write_test --directory=/mnt/bindmounts/shared --size=1G --time_based --runtime=60 --rw=write --bs=4k --numjobs=1 --iodepth=1 --group_reporting
write_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=8472KiB/s][w=2118 IOPS][eta 00m:00s]
write_test: (groupid=0, jobs=1): err= 0: pid=1277: Thu Nov 21 16:18:39 2024
write: IOPS=4260, BW=16.6MiB/s (17.5MB/s)(999MiB/60001msec); 0 zone resets
clat (usec): min=45, max=61547, avg=231.80, stdev=375.67
lat (usec): min=45, max=61549, avg=232.20, stdev=375.89
clat percentiles (usec):
| 1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 71], 20.00th=[ 83],
| 30.00th=[ 92], 40.00th=[ 105], 50.00th=[ 120], 60.00th=[ 141],
| 70.00th=[ 182], 80.00th=[ 408], 90.00th=[ 562], 95.00th=[ 652],
| 99.00th=[ 1156], 99.50th=[ 1614], 99.90th=[ 3228], 99.95th=[ 3949],
| 99.99th=[ 8356]
bw ( KiB/s): min= 5731, max=57736, per=100.00%, avg=17134.75, stdev=10517.79, samples=119
iops : min= 1432, max=14434, avg=4283.62, stdev=2629.50, samples=119
lat (usec) : 50=0.97%, 100=35.84%, 250=38.30%, 500=9.88%, 750=11.97%
lat (usec) : 1000=1.65%
lat (msec) : 2=1.08%, 4=0.27%, 10=0.04%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%
cpu : usr=3.80%, sys=29.70%, ctx=241869, majf=0, minf=9
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,255638,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=16.6MiB/s (17.5MB/s), 16.6MiB/s-16.6MiB/s (17.5MB/s-17.5MB/s), io=999MiB (1047MB), run=60001-60001msec
agent: 1
args: -object memory-backend-memfd,id=mem,size=2048M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/run/virtiofsd/2000-shared.sock -device vhost-user-fs-pci,chardev=char0,tag=2000-shared
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
hookscript: local:snippets/virtiofs_hook.pl
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=9.0.2,ctime=1732199146
name: ubuntu01
net0: virtio=02:BD:7F:BD:4B:11,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-zfs:vm-2000-disk-0,backup=0,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=a32528e6-b7a5-4bf5-9cc1-e16b2b841445
sockets: 1
tpmstate0: local-zfs:vm-2000-disk-1,size=4M,version=v2.0
vmgenid: c40101bf-7d4a-4d6f-94cf-79fdefaed293
What am I doing wrong? (I'm not a Perl expert.)
Got it working. Leaving last post for posterity, unless others suggest I delete it (sorry, I don't make my way into forums much anymore).
Anyway - THANK YOU, again, @sikha !
here's my modified script for Proxmox 8:
Perl:
#!/usr/bin/perl

use strict;
use warnings;

my %associations = (
    102 => ['/mnt/local'],
    # 101 => ['/zpool/audio', '/zpool/games'],
);

use PVE::QemuServer;
use Template;
my $tt = Template->new;

print "GUEST HOOK: " . join(' ', @ARGV) . "\n";

my $vmid = shift;
my $conf = PVE::QemuConfig->load_config($vmid);
my $vfs_args_file = "/run/$vmid.virtfs";
my $virtiofsd_dir = "/run/virtiofsd/";
my $DEBUG = 1;

my $phase = shift;

my $unit_tpl = "[Unit]
Description=virtiofsd filesystem share at [% share %] for VM %i
StopWhenUnneeded=true

[Service]
Type=simple
RuntimeDirectory=virtiofsd
PIDFile=/run/virtiofsd/.run.virtiofsd.%i-[% share_id %].sock.pid
ExecStart=/usr/libexec/virtiofsd --log-level debug --socket-path /run/virtiofsd/%i-[% share_id %].sock --shared-dir [% share %] --cache=auto --announce-submounts --inode-file-handles=mandatory

[Install]
RequiredBy=%i.scope\n";

if ($phase eq 'pre-start') {
    print "$vmid is starting, doing preparations.\n";

    my $vfs_args = "-object memory-backend-memfd,id=mem,size=$conf->{memory}M,share=on -numa node,memdev=mem";
    my $char_id = 0;

    # TODO: Have removal logic. Probably need to glob the systemd directory for matching files.
    for (@{$associations{$vmid}}) {
        my $share_id = $_ =~ s/^\///r =~ s/\//_/gr;
        my $unit_name = 'virtiofsd-' . $share_id;
        my $unit_file = '/etc/systemd/system/' . $unit_name . '@.service';
        print "attempting to install unit $unit_name...\n";
        if (not -d $virtiofsd_dir) {
            print "ERROR: $virtiofsd_dir does not exist!\n";
        }
        else {
            print "DIRECTORY DOES EXIST!\n";
        }
        if (not -e $unit_file) {
            $tt->process(\$unit_tpl, { share => $_, share_id => $share_id }, $unit_file)
                || die $tt->error(), "\n";
            system("/usr/bin/systemctl daemon-reload");
            system("/usr/bin/systemctl enable $unit_name\@$vmid.service");
        }
        system("/usr/bin/systemctl start $unit_name\@$vmid.service");

        $vfs_args .= " -chardev socket,id=char$char_id,path=/run/virtiofsd/$vmid-$share_id.sock";
        $vfs_args .= " -device vhost-user-fs-pci,chardev=char$char_id,tag=$share_id";
        $char_id += 1;
    }

    open(FH, '>', $vfs_args_file) or die $!;
    print FH $vfs_args;
    close(FH);

    print $vfs_args . "\n";

    if (defined($conf->{args}) && not $conf->{args} =~ /$vfs_args/) {
        print "Appending virtiofs arguments to VM args.\n";
        $conf->{args} .= " $vfs_args";
    }
    else {
        print "Setting VM args to generated virtiofs arguments.\n";
        print "vfs_args: $vfs_args\n" if $DEBUG;
        $conf->{args} = " $vfs_args";
    }
    PVE::QemuConfig->write_config($vmid, $conf);
}
elsif ($phase eq 'post-start') {
    print "$vmid started successfully.\n";
    my $vfs_args = do {
        local $/ = undef;
        open my $fh, "<", $vfs_args_file or die $!;
        <$fh>;
    };

    if ($conf->{args} =~ /$vfs_args/) {
        print "Removing virtiofs arguments from VM args.\n";
        print "conf->args = $conf->{args}\n" if $DEBUG;
        print "vfs_args = $vfs_args\n" if $DEBUG;
        $conf->{args} =~ s/\ *$vfs_args//g;
        print $conf->{args};
        $conf->{args} = undef if $conf->{args} =~ /^$/;
        print "conf->args = $conf->{args}\n" if $DEBUG;
        PVE::QemuConfig->write_config($vmid, $conf) if defined($conf->{args});
    }
}
elsif ($phase eq 'pre-stop') {
    #print "$vmid will be stopped.\n";
}
elsif ($phase eq 'post-stop') {
    #print "$vmid stopped. Doing cleanup.\n";
}
else {
    die "got unknown phase '$phase'\n";
}

exit(0);
I had to modify:
1. The ExecStart command with new args for virtiofsd (the old args did not work, so the new args may require some tuning)
2. Added a line to mkdir /run/virtiofsd/ if it doesn't exist (otherwise virtiofsd couldn't write its socket)
3. The post-start function so it does not rewrite the config (PVE::QemuConfig->write_config) when args is undefined
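Point 2 can be sketched roughly like this (a minimal sketch, not the exact line from my script; it reuses the script's `$virtiofsd_dir` path):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Path qw(make_path);

# Make sure the runtime directory exists before the systemd units
# try to create their sockets under it.
my $virtiofsd_dir = '/run/virtiofsd/';
make_path($virtiofsd_dir) unless -d $virtiofsd_dir;
print "$virtiofsd_dir is ready\n" if -d $virtiofsd_dir;
```

Placing this before the `for (@{$associations{$vmid}})` loop turns the script's "ERROR: ... does not exist!" branch into a non-event.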
root@proxmox:~# perl virtiofs.pl 100
GUEST HOOK: 100
Use of uninitialized value $phase in string eq at virtiofs.pl line 40.
Use of uninitialized value $phase in string eq at virtiofs.pl line 84.
Use of uninitialized value $phase in string eq at virtiofs.pl line 103.
Use of uninitialized value $phase in string eq at virtiofs.pl line 106.
Use of uninitialized value $phase in concatenation (.) or string at virtiofs.pl line 109.
got unknown phase ''
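Those warnings come from how the script reads its arguments: Proxmox invokes a hookscript with two arguments (the VM ID and the phase), but the manual run above passed only `100`, so the second `shift` leaves `$phase` undefined. A minimal sketch of a guard (the `parse_args` helper and usage message are my own, not part of the original script):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Proxmox calls hookscripts as: script.pl <vmid> <phase>.
# Fail early with a usage message instead of warning on undef.
sub parse_args {
    my @argv = @_;
    my $vmid  = shift @argv // die "usage: hookscript <vmid> <phase>\n";
    my $phase = shift @argv // die "usage: hookscript <vmid> <phase>\n";
    return ($vmid, $phase);
}

# Simulate a manual test run with both arguments supplied.
my ($vmid, $phase) = parse_args(100, 'pre-start');
print "vmid=$vmid phase=$phase\n";
```

So for a manual test, run it as `perl virtiofs.pl 100 pre-start` rather than with the VM ID alone.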