Hi,
I'm not sure if this will sound familiar to anyone. I've got a ProxVE cluster set up with 3 hardware nodes (1 master / 2 slaves). All 3 are identical machines (Dell PE2950, hardware RAID, HA trunked dual-gigabit ethernet interfaces; consistent config on all 3 machines).
I have a "NFS storage pool" defined which mounts an NFS share from a local NFS server. This mounts fine on all 3 hardware nodes.
The mount is actually present on the hardware nodes so that some OpenVZ-based VMs I've got, which need to be able to mount a share off this NFS server, work; my review of the docs suggested that mounting the NFS share on the ProxVE hardware node is the 'easiest way' to ensure the NFS modules are loaded and working smoothly, allowing NFS mounts to work inside the OpenVZ VMs.
The problem:
I had to reboot the "Master" hardware node in the ProxVE cluster last night. It came back online OK; however, I didn't notice until this morning that the NFS mount on the ProxVE hardware node simply didn't mount on reboot.
Via the web interface, I was able to choose "Enable" and, poof, it was online.
Subsequently, I stopped and started all the VMs (one per hardware node) which wanted to make NFS mounts, and after doing so they had their NFS mounts fine as well.
However, ideally this should happen on its own and not require any sysadmin intervention.
I'm just curious if there are known issues or circumstances where NFS 'storage' will refuse to mount when the ProxVE hardware node(s) are rebooted, and if there are any known workarounds to resolve this.
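In the meantime, the crude workaround I'm considering is a boot-time check script (run from e.g. /etc/rc.local) that remounts the share if it's missing. This is just a sketch; the mount point and export paths below are placeholders for my setup and will differ on yours:

```shell
#!/bin/sh
# Remount the NFS storage if it didn't come up at boot.
# Both paths below are placeholders -- adjust for your environment.
STORAGE_MNT="/mnt/pve/nfs-storage"       # where the NFS storage is mounted (assumption)
NFS_EXPORT="nfsserver:/export/proxve"    # the NFS share itself (placeholder)

ensure_mounted() {
    # $1 = local mount point, $2 = NFS export
    if mountpoint -q "$1"; then
        echo "$1 already mounted"
    else
        echo "$1 not mounted; attempting mount"
        mount -t nfs "$2" "$1"
    fi
}

ensure_mounted "$STORAGE_MNT" "$NFS_EXPORT"
```

It doesn't fix the underlying issue, but it would at least keep the VMs from coming up without their share after an unattended reboot.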
Thanks,
Tim Chipman
Fortech IT Solutions
http://FortechITSolutions.ca