I have a 3-node cluster. There are two NFS servers with a mount point for each content type (i.e., KVM, Templates, ISOs, Backups, Containers).
One of the cluster nodes will not mount Omni02CT, which corresponds to the second NFS server and its mount point for Containers.
This is the error I receive in /var/log/messages:
Code:
Nov 12 15:21:59 zwtprox1 kernel: ct0 mount: server 192.168.222.12 not responding, timed out
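For what it's worth, the basic reachability checks from the affected node would be something like this (plain NFS client tools, using the server IP from my storage.cfg):
Code:
# confirm the NFS server is answering RPC requests on the storage network
rpcinfo -p 192.168.222.12

# list the exports the server is actually offering to this client
showmount -e 192.168.222.12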
Omni02CT and Omni02KVM do not seem to mount properly; however, backups02 mounts fine. Just to be sure, I created a new storage entry called "testCT" that points to the same NFS export on 192.168.222.12 as "Omni02CT". It works without a problem.
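(For reference, a test entry like that can be added either through the GUI or with pvesm, roughly like this; the exact flags are from memory, so check them against the pvesm man page:)
Code:
pvesm add nfs testCT --path /mnt/pve/testCT --server 192.168.222.12 --export /vdev1/omni02CT --content rootdir --options vers=3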
This is my storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

nfs: Omni02CT
        path /mnt/pve/Omni02CT
        server 192.168.222.12
        export /vdev1/omni02CT
        options vers=tcp,3
        content rootdir
        maxfiles 0

nfs: Omni02KVM
        path /mnt/pve/Omni02KVM
        server 192.168.222.12
        export /vdev1/omni02KVM
        options vers=tcp,3
        content images
        maxfiles 0

nfs: OpenFiler
        path /mnt/pve/OpenFiler
        server 192.168.222.150
        export /mnt/vg_nfs/nfs-ext3/nfs-datastore
        options vers=3
        content images,backup,rootdir
        maxfiles 1

nfs: backups01
        path /mnt/pve/backups01
        server 192.168.222.11
        export /vdev1/proxBKUP
        options vers=3
        content backup
        maxfiles 3

nfs: backups02
        path /mnt/pve/backups02
        server 192.168.222.12
        export /vdev1/omni02Backup
        options vers=tcp,3
        content backup
        maxfiles 4

nfs: iso01
        path /mnt/pve/iso01
        server 192.168.222.11
        export /vdev1/proxISO
        options vers=3
        content iso
        maxfiles 0

nfs: templates01
        path /mnt/pve/templates01
        server 192.168.222.11
        export /vdev1/proxISO
        options vers=3
        content vztmpl
        maxfiles 0

nfs: Omni01CT
        path /mnt/pve/Omni01CT
        server 192.168.222.11
        export /vdev1/proxCONT
        options vers=3
        content rootdir
        maxfiles 0

nfs: Omni01KVM
        path /mnt/pve/Omni01KVM
        server 192.168.222.11
        export /vdev1/proxKVM
        options vers=3
        content images
        maxfiles 0

nfs: Net30BACKUP
        path /mnt/pve/Net30BACKUP
        server 192.168.222.12
        export /vdev1/omni02Backup
        options vers=3
        content backup
        maxfiles 30

nfs: testCT
        path /mnt/pve/testCT
        server 192.168.222.12
        export /vdev1/omni02CT
        options vers=3
        content rootdir
        maxfiles 0
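To rule out the Proxmox storage layer entirely, the same export can also be mounted by hand from the affected node, roughly like this (the temporary mount point is just an example name, and the options are plain NFSv3 over TCP):
Code:
mkdir -p /mnt/test-omni02CT
mount -t nfs -o vers=3,tcp 192.168.222.12:/vdev1/omni02CT /mnt/test-omni02CT
ls /mnt/test-omni02CT
umount /mnt/test-omni02CT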
The PVE version of the node that is not working properly is:
Code:
# pveversion
pve-manager/3.1-24/060bd5a6 (running kernel: 2.6.32-26-pve)
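To compare package versions between this node and the working ones, the full listing can be pulled with the -v flag:
Code:
pveversion -v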
The other nodes connect properly without any issues.
vmbr0 is used to connect to the NFS storage over bond0.
vmbr1 is what I run my standard network on.
vmbr0 is on a storage subnet completely separate from vmbr1 (a rough sketch of my interfaces layout is below).
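This is a simplified sketch of /etc/network/interfaces: the bond mode, slave interfaces and addresses are placeholders rather than my exact values; only the bond0/vmbr0/vmbr1 layout matches my setup.
Code:
# bond used for the storage network
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

# storage bridge on the NFS subnet (192.168.222.0/24)
auto vmbr0
iface vmbr0 inet static
        address 192.168.222.21
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

# standard network bridge
auto vmbr1
iface vmbr1 inet static
        address 192.168.1.21
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth2
        bridge_stp off
        bridge_fd 0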
Please, any help would be greatly appreciated.
I can't force a remount after the system has booted, so I cannot make the error happen again without a reboot. Also, just before this happened, a CentOS/ZoneMinder CT went crazy with CPU usage, if that helps...
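(For clarity, this is the sort of thing I mean by forcing a remount: a forced/lazy unmount of the mount point, then letting Proxmox re-check its storages. Sketch only.)
Code:
# force/lazy unmount the stale mount point
umount -f -l /mnt/pve/Omni02CT

# pvesm status re-checks all storages and should trigger a remount
pvesm status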