How to automatically mount shared filesystem at start up?

bmas9307
New Member
Apr 14, 2022
Hi Everyone,

This started when I ran into issues starting an LXC container after a reboot. I had shared a mounted filesystem with the container via its own mount point, so the share has to be mounted before the container can boot. Ideally, I'd like this container to start automatically after I reboot the PVE server, which means all of the mounts need to happen automatically too.

I've added everything to /etc/fstab and the mount point directory exists, but the last manual piece is running "mount -a" after a reboot. Is there a simple way to automate this?

It's probably not relevant to this question, but here are all the customary system details just in case:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

Thanks
 
As far as I know, if the entry is in /etc/fstab,
then it should be mounted at boot, so the "mount -a" should not be needed.
The boot order of your VMs and containers can also be changed;
I believe the guests with the lower order numbers are started first and stopped last (see the example below).
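For example, adjusting the start order could look like this (a minimal sketch; the container ID 201 and the 30-second delay are just placeholders):

Bash:
# hypothetical example: give container 201 start position 2;
# up=30 makes PVE wait 30 seconds before starting the next guest
pct set 201 --startup order=2,up=30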
 
If you have to run "mount -a" after a reboot, that means your /etc/fstab is not working as it should for your case.
Did you perhaps use the "noauto" option ("do not mount when mount -a is given, e.g., at boot time")? (man fstab)

How did you define the storage in PVE that the container points to? Have you set this option?
Code:
       --is_mountpoint <string> (default = no)
           Assume the given path is an externally managed mountpoint and consider the storage offline if it is not mounted.
           Using a boolean (yes/no) value serves as a shortcut to using the target path in this field.
(man pvesm)
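For example, a directory storage defined in /etc/pve/storage.cfg with this option set might look roughly like the following (the storage name "media-share" and the path are placeholders):

Code:
dir: media-share
        path /mnt/mount-point
        content rootdir
        is_mountpoint yes

With is_mountpoint set, PVE treats the storage as offline until the external mount is actually present, instead of writing into the empty directory underneath.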


 
I'm going to be completely honest, I only half understand what you're asking here. I added the network path with SMB credentials in fstab and just ran "mount -a" with no other options. I did also set the noperm option in the fstab file (see below), as I was told that allows the VMs and containers to access the filesystem.
Bash:
//IP-path/share /mnt/mount-point cifs username=x,password=y,iocharset=utf8,noperm 0 0

Because this mount point is external, does that mean I should use the "--is_mountpoint yes" option when running the mount command the first time?
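(Side note on the fstab line above: a credentials file is a common way to keep the password out of /etc/fstab; a minimal sketch, with a hypothetical file path:)

Bash:
# contents of /root/.smbcredentials (chmod 600), hypothetical path:
#   username=x
#   password=y
//IP-path/share /mnt/mount-point cifs credentials=/root/.smbcredentials,iocharset=utf8,noperm 0 0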
 
If the line is actually like the one you posted, can you just run

Bash:
mount /mnt/mount-point

and report back the errors you get?
 
@bmas9307, what is the output of "cat /etc/pve/storage.cfg"?
What is the output of "pct config [container_id]"?

"is_mountpoint yes" goes into /etc/pve/storage.cfg if you are managing the mountpoint externally from PVE, which you are.
You can also manage it via PVE by removing the fstab entry and adding a "cifs" type storage in PVE; that may solve your issue. A sketch of that second route follows below.
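A rough sketch, reusing the placeholder names from the fstab line above (see man pvesm for the full option list):

Bash:
# hypothetical example: let PVE mount the SMB share itself;
# it should then appear under /mnt/pve/media-share
pvesm add cifs media-share --server IP-path --share share --username x --password y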



 
If the line is actually like the one you posted, can you just run

Bash:
mount /mnt/mount-point

and report back the errors you get?

No errors. I can mount it; the issue I'm running into is that it doesn't automatically mount after a reboot.
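In that case the boot log should show whether the mount was attempted at all; something along these lines may help (the grep pattern is just a starting point):

Bash:
# errors from the last boot related to CIFS or mounting
journalctl -b | grep -iE 'cifs|mount'
# list the mount units systemd generated from fstab
systemctl list-units --type=mount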
 
@bmas9307 what is the output of "cat /etc/pve/storage.cfg"

Code:
@Prox1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir

what is the output of "pct config [container_id]"

Code:
@Prox1:~# pct config 201
arch: amd64
cores: 3
features: keyctl=1,nesting=1
hostname: GrayStack
lock: mounted
memory: 4096
mp0: /mnt/smb/Media,mp=/mnt/Media
net0: name=eth0,bridge=vmbr2,gw=10.2.2.1,hwaddr=12:8E:A7:80:38:C4,ip=10.2.2.2/24,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-201-disk-0,size=32G
startup: order=2
swap: 512
unprivileged: 1

It looks like the mounted filesystem isn't showing up in storage.cfg. Is there a piece I missed when adding it to fstab, or is this managed separately?
 
I'm going to be completely honest, I only half understand what you're asking here. I added the network path with SMB credentials in fstab and just ran "mount -a" with no other options. I did also set the noperm option in the fstab file (see below), as I was told that allows the VMs and containers to access the filesystem.
Bash:
//IP-path/share /mnt/mount-point cifs username=x,password=y,iocharset=utf8,noperm 0 0

Because this mount point is external, does that mean I should use the "--is_mountpoint yes" option when running the mount command the first time?

Hi, did you try the "auto" and "_netdev" options?

Bash:
//IP-path/share /mnt/mount-point cifs auto,_netdev,username=x,password=y,iocharset=utf8,noperm 0 0
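"_netdev" marks the filesystem as needing the network, so the mount is deferred until networking is up at boot. To test the new line without a full reboot, something like this should work (paths from the example above):

Bash:
umount /mnt/mount-point   # unmount first if it is currently mounted
systemctl daemon-reload   # pick up the edited fstab
mount -a                  # remount everything listed in fstab
findmnt /mnt/mount-point  # confirm the share is mounted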