[SOLVED] Unit file order to mount from client container

Pedulla

New Member
Aug 1, 2017
15
1
3
Oregon, USA
I'll start out acknowledging that some will think this is nuts and will respond with "Why?". The answer: it's what I've got to work with. So, given that...

PVE 5.2-5
Node 100 is a virtual machine running FreeNAS with PCI passthrough (This part is working great BTW)

Other nodes are containers that would like to use storage on the FreeNAS node (side note: using a virtio network interface to keep it all internal)

But since they are containers, they use mount points configured on the host (here comes the catch-22): the host needs to wait until node 100 is started before it mounts the FreeNAS shares.

So, what do I put in the After= directive of the host's systemd mount unit file so that it waits for node 100 to be started?

-or-

Is there some trick I can use with a bind mount?
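For context, the mount unit in question looks roughly like this (a sketch; the export address 10.0.0.100:/mnt/tank and mount point /mnt/freenas are placeholders, and systemd requires the unit file name to match the escaped mount path, e.g. mnt-freenas.mount):

```ini
# Hypothetical /etc/systemd/system/mnt-freenas.mount
[Unit]
Description=FreeNAS NFS share
# The question: what else to put in After= so this waits for VM 100?
After=network-online.target

[Mount]
What=10.0.0.100:/mnt/tank
Where=/mnt/freenas
Type=nfs

[Install]
WantedBy=multi-user.target
```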
 

wolfgang

Proxmox Staff Member
Staff member
Oct 1, 2014
6,497
467
103
Hi,

you can use the boot order for this kind of setup.
Give VM 100 boot order 1, give all the containers a boot order > 1, and also set a boot delay.
You'll find the boot order in the VM/CT options.
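The same settings can be made on the command line with qm and pct (a sketch; 101 and 102 are placeholder container IDs, up=60 is an example delay in seconds, and onboot must be enabled for the startup order to apply at boot):

```shell
qm set 100 --onboot 1 --startup order=1
pct set 101 --onboot 1 --startup order=2,up=60
pct set 102 --onboot 1 --startup order=2,up=60
```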
 

Pedulla
Hi wolfgang,
That's what I initially thought too. However, the host's systemd tries to mount the share from VM100 before VM100 is started, so the mount status on the host is "dead". Once VM100 is started, I can SSH into the host and re-run the mount commands (systemctl start mnt-xxxxx.mount), and then the mount points in/for the other containers are active. What I need is to tell systemd NOT to mount the shares from VM100 until after VM100 is started.
I was hoping I could point the After= directive in the .mount file to some service that would hold off mounting until VM100 was started (similar to how you wait for the network to be up before attempting an NFS mount). I'm just not sure what to have systemd "look for", so to speak.

Alternatively, if I run full VMs for everything that is now a container, the boot order would work, because the mounting mechanism is different for VMs than for containers.

The other alternative: if there is a way to use bind mounts that sort of ignores the fact that VM100 isn't up yet and establishes the mounts as though it were. I haven't figured this one out yet.
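For reference, a host-side bind mount into a container is configured with an mp entry in the container's config (a sketch; 101 and both paths are placeholders):

```
# Hypothetical entry in /etc/pve/lxc/101.conf
mp0: /mnt/freenas,mp=/mnt/data
```

Note that a bind mount only exposes whatever is at the host path when the container starts, so if the NFS share isn't mounted on the host yet, the container just sees an empty directory rather than the share appearing later.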
 

Pedulla
Code:
root@pve1:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
pve-kernel-4.15: 5.2-4
pve-kernel-4.15.18-1-pve: 4.15.18-15
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-35
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-28
pve-container: 2.0-24
pve-docs: 5.2-4
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-29
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
 

wolfgang
The main problem is that starting a VM/CT depends on the mount service, so if you add a dependency on it you get a cyclic dependency and nothing will work anymore.

I would create a dedicated systemd service for the mount.
This service should start after multi-user.target and wait as long as your NAS needs to come up.
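One way to implement that suggestion is a oneshot service that polls the NAS until its NFS service answers, then performs the mount (a sketch; the unit name, IP 10.0.0.100, and both paths are placeholders, and rpcinfo comes from the rpcbind package):

```ini
# Hypothetical /etc/systemd/system/freenas-mount.service
[Unit]
Description=Mount FreeNAS NFS export once VM 100 is up
After=multi-user.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Poll the NFS service inside the NAS VM for up to ~5 minutes
ExecStartPre=/bin/sh -c 'for i in $(seq 1 60); do rpcinfo -t 10.0.0.100 nfs >/dev/null 2>&1 && exit 0; sleep 5; done; exit 1'
ExecStart=/bin/mount -t nfs 10.0.0.100:/mnt/tank /mnt/freenas
ExecStop=/bin/umount /mnt/freenas

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable freenas-mount.service; because it runs outside the VM/CT start path, it avoids the cyclic dependency described above.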
 

Pedulla
Ah, OK; if starting a VM/CT depends on the mount... I get it.

So I'll switch to mounting inside the VM/CT instead. In the case of a CT, is this thread still accurate regarding changing the AppArmor profile to allow NFS mounts?
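One commonly described approach is a custom AppArmor profile that extends the LXC default with NFS mount permissions (a sketch; the profile name and file path are illustrative, and the exact include and reload steps depend on your lxc-pve version):

```
# Hypothetical /etc/apparmor.d/lxc/lxc-default-with-nfs
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
```

After reloading the LXC AppArmor profiles (e.g. apparmor_parser -r on the file that includes them), point the container at the new profile with a line such as `lxc.apparmor.profile: lxc-container-default-with-nfs` in /etc/pve/lxc/&lt;CTID&gt;.conf.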
 
