So I've been playing around and experimenting with this, and I have a few questions.
(I'm new to Proxmox and this level of virtualisation, so please forgive my stupid questions.)
1) I was reading this thread about user and permissions management.
Stupid question: how do I create a user (and group) in Proxmox VE 7.3-3 such that the shared folder on the host has the same permissions as the mounted shared folder inside the VM guest?
I was able to create the folder as root on the host just fine, but when I tried to change the permissions of the shared folder on the host to match those of my Ubuntu 20.04 VMs/guests, the same user and group apparently didn't exist on the host. If I create the user via the Proxmox VE GUI with the PAM realm and then try to change/set the password, it says that the user doesn't exist. If I create it in the Proxmox VE authentication realm instead, I can set the password via the GUI, but the entry still doesn't show up in /etc/passwd. So, I'm pretty sure that I'm doing something wrong here.
Solved.
I had to do it through SSH/the command line. (It was still weird that when I created my user account via the GUI and then tried to set the password, it said that my user account didn't exist, despite the fact that it was right there.)
I am not sure which command finally took care of it, whether it was pveum or useradd on the host itself, but either way, I got it working.
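For reference, a rough sketch of the command-line route (the username, group name, and UID/GID below are placeholders; as far as I understand it, what matters is that the numeric UID/GID on the host match those of the user inside the guest, since virtiofs passes file ownership through by numeric ID):
Code:
# Placeholder names/IDs -- match the UID/GID of the user inside the guest.
# PAM users have to exist in /etc/passwd on the host, hence useradd here.
groupadd --gid 1000 sharegroup
useradd --uid 1000 --gid 1000 --no-create-home shareuser
chown -R shareuser:sharegroup /myfs

# Optionally, also register the PAM user with Proxmox VE itself:
pveum user add shareuser@pam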
2) Right now, I have been running /usr/lib/kvm/virtiofsd manually each time I stop and start the VM guests. I've noticed that if I try to use the same socket twice (i.e. create one socket and have two VMs connect to it), it doesn't let me do that. Does this mean that I need to run/start a new virtiofsd for each VM/guest? I get the feeling that this would be somewhat inefficient, because you'd end up spawning as many virtiofsd processes as you have VMs running.
If you can please educate me on this, that would be greatly appreciated.
Also solved, apparently.
It looks like the hookscript-based auto-start for the socket will create a new one each time you start a VM that has the hookscript attached.
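In other words, one virtiofsd process and one socket per guest. If you were doing it by hand, a rough sketch would look like this (the socket names/VMIDs are just examples; the flags are the same ones used in the launch script below):
Code:
# One virtiofsd instance per guest, each with its own socket,
# all exporting the same host directory:
/usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path=/var/run/virtiofsd-100.sock -o source=/myfs/ &
/usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path=/var/run/virtiofsd-101.sock -o source=/myfs/ &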
3) I also read, in this thread, about putting the Perl script into /var/lib/vz/snippets, along with the bash shell script that starts the daemon. I also read @yaro014's thread about trying to pass arguments to those scripts (because @Rphoton's example has, for example, the VMID hardcoded into the script). So, how do I make it so that it will automatically open the socket for each of my VMs after each stop and/or before each start? I'm not a programmer nor a developer, so I can read the scripts, but I am not knowledgeable enough to make sense of how they would be able to create a new socket if the VMID is hardcoded into them. (Or does that not matter? A parameterised sketch addressing this is included after the hook script below.)
(I'm an idiot when it comes to these things, and therefore an idiot's guide to deploying this (and any help that the team can provide) would be greatly appreciated.)
Thank you.
Also solved.
I put both the virtiofs.pl Perl script and the launch-virtio-daemon.sh shell script into /var/lib/vz/snippets/.
Hookscript was "attached" to the VM via this command:
qm set 100 --hookscript local:snippets/virtiofs.pl
launch-virtio-daemon.sh contents here:
Code:
#!/usr/bin/bash

function launch() {
    nohup /usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path=/var/run/shared-fs.sock -o source=/myfs/ -o cache=always &> /dev/null &
    return 0
}

launch
The flag "
--cache=always
" doesn't work. (At least as of this writing, with Proxmox VE 7.3-3.)
The perl hook script contents here:
Code:
#!/usr/bin/perl

# Example hook script for PVE guests (hookscript config option)
# You can set this via pct/qm with
#   pct set <vmid> -hookscript <volume-id>
#   qm set <vmid> -hookscript <volume-id>
# where <volume-id> has to be an executable file in the snippets folder
# of any storage with directories e.g.:
#   qm set 100 -hookscript local:snippets/hookscript.pl

use strict;
use warnings;

print "GUEST HOOK: " . join(' ', @ARGV) . "\n";

# First argument is the vmid
my $vmid = shift;

# Second argument is the phase
my $phase = shift;

if ($phase eq 'pre-start') {
    # First phase 'pre-start' will be executed before the guest
    # is started. Exiting with a code != 0 will abort the start
    print "$vmid is starting, doing preparations.\n";

    system('/var/lib/vz/snippets/launch-virtio-daemon.sh');

    # print "preparations failed, aborting."
    # exit(1);
} elsif ($phase eq 'post-start') {
    # Second phase 'post-start' will be executed after the guest
    # successfully started.
    print "$vmid started successfully.\n";
} elsif ($phase eq 'pre-stop') {
    # Third phase 'pre-stop' will be executed before stopping the guest
    # via the API. Will not be executed if the guest is stopped from
    # within e.g., with a 'poweroff'
    print "$vmid will be stopped.\n";
} elsif ($phase eq 'post-stop') {
    # Last phase 'post-stop' will be executed after the guest stopped.
    # This should even be executed in case the guest crashes or stopped
    # unexpectedly.
    print "$vmid stopped. Doing cleanup.\n";
} else {
    die "got unknown phase '$phase'\n";
}

exit(0);
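Since the hook script already receives the VMID as its first argument, a per-VM variant of launch-virtio-daemon.sh could take the VMID as a parameter so that each guest gets its own socket. This is just an untested sketch of the idea (the socket naming is made up); the hook script's pre-start branch would have to pass the VMID along, e.g. system('/var/lib/vz/snippets/launch-virtio-daemon.sh', $vmid);
Code:
#!/usr/bin/bash
# Untested per-VM sketch of launch-virtio-daemon.sh.
# Called by the hook script with the VMID as the first argument,
# so each guest gets its own socket, e.g. /var/run/shared-fs-100.sock.

function launch() {
    local vmid="$1"
    nohup /usr/lib/kvm/virtiofsd --syslog --daemonize --socket-path="/var/run/shared-fs-${vmid}.sock" -o source=/myfs/ -o cache=always &> /dev/null &
    return 0
}

launch "$1"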
That hookscript setup seems to have worked. Ubuntu was able to mount the share via:
mount -t virtiofs myfs /myfs
without any issues.
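For what it's worth, if you want that mount to persist across reboots in the Ubuntu guest, an fstab entry along these lines should do it (a sketch, using the same "myfs" tag and mount point as above):
Code:
# /etc/fstab entry in the Ubuntu guest -- the first field is the virtiofs tag
myfs  /myfs  virtiofs  defaults  0  0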
SLES, on the other hand, does not and would not mount it. (I tried with SLES 15 SP4 and SLES 12 SP4; neither worked.)
I looked at the SLES documentation and tried the command that they show for mounting 9p, and that didn't work either.
It said that the /myfs "special device" wasn't present, or something along those lines.
Pity. So, for that, I ended up installing nfs-kernel-server on the host itself, created the NFS export, and then mounted the same directory in SLES over NFS instead. It worked well enough: over NFS (via the virtio network interface) I could write to it at 301 MB/s and read from it at 785 MB/s, vs. virtiofs on Ubuntu, which could write at around 496 MB/s and read at 1700 MB/s (averaged between two Ubuntu VMs reading from/writing to the host separately and sequentially).
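In case it helps anyone, the NFS side of that was nothing special; roughly the following (the subnet and host IP are placeholders for whatever your network uses):
Code:
# On the Proxmox host (placeholder subnet):
apt install nfs-kernel-server
echo '/myfs 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# In the SLES guest (placeholder host IP):
mount -t nfs 192.168.1.10:/myfs /myfs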
So at least I was able to get Ubuntu up and running with that.
Tomorrow, I think that I am going to try CentOS and then also Windows.