Hi all,
I am using (and loving) the latest Proxmox VE on an Intel NUC with an NFS mount to a NAS.
I use per-container mount and umount scripts in /etc/pve/openvz, using simfs to bind the main host mounts into each container.
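For reference, here is a minimal sketch of the kind of mount script I mean, following the usual OpenVZ action-script pattern (CTID 101 and the host-side NFS mountpoint /mnt/nas are placeholders for my actual values):

    #!/bin/bash
    # /etc/pve/openvz/101.mount -- run by vzctl when the container starts.
    # vzctl exports VE_CONFFILE; sourcing it provides VE_ROOT used below.
    . /etc/vz/vz.conf
    . ${VE_CONFFILE}
    SRC=/mnt/nas    # host directory (the NFS mount)
    DST=/mnt/nas    # path as seen inside the container
    mount -n -t simfs ${SRC} ${VE_ROOT}${DST} -o ${SRC}

The matching 101.umount script simply unmounts ${VE_ROOT}${DST}.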
This all works well, and the NFS mounts are available in the containers. However, if I restart the whole system, the containers start but are unable to access the NFS mounts. The mount is simply not there...
If I then restart the container with "vzctl restart <CTID>", the NFS mounts inside the container work as expected.
I guess this could be due to a timing issue on boot where the containers are started before the main host NFS mount is available, but this is conjecture.
Has anyone else seen this problem?
I can think of two different approaches to fix this:
- Is there a simple way to add a short delay to the start of the containers? (See the first sketch below.)
- Change the service start order so that the NFS mounts are complete before any of the pve-* services start. (See the second sketch below.)
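For the first approach, rather than a blind sleep, maybe each mount script could wait until the host mount is actually up? A sketch (again assuming /mnt/nas as the host-side NFS mountpoint):

    # At the top of each .mount script: wait up to 60s for the NFS mount.
    for i in $(seq 1 60); do
        mountpoint -q /mnt/nas && break
        sleep 1
    done
    # If it never appeared, fail the script; a non-zero exit from a mount
    # script should abort the container start instead of letting it come
    # up without the share.
    mountpoint -q /mnt/nas || { echo "NFS mount not ready" >&2; exit 1; }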
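For the second approach, my understanding is that on a sysvinit-based install the LSB headers control the boot ordering, so adding $remote_fs to the Required-Start line of whichever init script starts the containers (I am assuming pve-manager here) and re-running insserv might do it:

    # In /etc/init.d/pve-manager (sketch; check the actual header first):
    ### BEGIN INIT INFO
    # Provides:          pve-manager
    # Required-Start:    $remote_fs $network $syslog
    # Required-Stop:     $remote_fs $network $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    ### END INIT INFO

    # Then regenerate the boot order:
    insserv pve-manager

$remote_fs should guarantee that network filesystems from /etc/fstab (including NFS) are mounted before the service starts.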
Many thanks in advance for any suggestions