following the problem reported here http://forum.proxmox.com/threads/16748-changed-nfs-server-IP-address-how-to-make-pve-notice-remount
now I have a really strange problem...
recap: 2-node cluster, both nodes on 3.1-21/93bf03d4 (on the LAN)
cluster storage: NFS and LVM-over-iSCSI shares, on two different NAS boxes (also on the LAN), nasA and nasB
in the post cited above I had some trouble after changing the IP address of one NAS (nasB).
now almost everything is fine: the NFS shares work, ~20 VMs are running (KVM, LVM/iSCSI storage on nasA) plus a few CTs (OpenVZ, NFS storage on nasA)
except: since yesterday, on one node or the other, or both (it has changed sides a few times),
it seems PVE can't see the nasA iSCSI target, named "vm_work"
at the moment I have, in /etc/pve/storage.cfg (the same on both nodes):
Code:
pve1
iscsi: [B]vm_work[/B]
        target iqn.2004-04.com.qnap:ts-809u:[B]iscsi.pve2.c0a765[/B]
        portal 192.168.3.30
        content none

pve2
iscsi: [B]vm_work[/B]
        target iqn.2004-04.com.qnap:ts-809u:[B]iscsi.pve2.c0a765[/B]
        portal 192.168.3.30
        content none
pvesm status reports:
Code:
pve1:~# pvesm status
iso_qnap nfs 1 10084223488 5297144064 4787079424 53.03%
local dir 1 34116380 1263760 32852620 4.20%
pve_ts879 nfs 1 11619394112 1497403200 10121204480 13.39%
ts879 nfs 1 11619394112 1497403200 10121204480 13.39%
vm_disks lvm 1 1048571904 0 409796608 0.50%
vm_disks_ts879 lvm 1 1048571904 0 1048571904 0.50%
vm_ts879 iscsi 1 0 0 0 100.00%
[B]vm_work[/B] iscsi 1 0 0 0 100.00%
pve2:~# pvesm status
iso_qnap nfs 1 10084223488 5297144064 4787079424 53.03%
local dir 1 34116380 28047268 6069112 82.71%
pve_ts879 nfs 1 11619394112 1497403200 10121204480 13.39%
ts879 nfs 1 11619394112 1497403200 10121204480 13.39%
vm_disks lvm 1 1048571904 0 409796608 0.50%
vm_disks_ts879 lvm 1 1048571904 0 1048571904 0.50%
vm_ts879 iscsi 1 0 0 0 100.00%
[B]vm_work[/B] iscsi 1 0 0 0 100.00%
and pvesm iscsiscan -portal 192.168.3.30 shows the target on both nodes:
Code:
pve1:~# pvesm iscsiscan -portal 192.168.3.30
iqn.2004-04.com.qnap:ts-809u:iscsi.pve.c0a765 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:[B]iscsi.pve2.c0a765 [/B] 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.landrives.c0a765 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.pvelvmtest.c0a765 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.hpopenviewdisk01.c0a765 192.168.3.30:3260
pve2:~# pvesm iscsiscan -portal 192.168.3.30
iqn.2004-04.com.qnap:ts-809u:iscsi.pve.c0a765 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:[B]iscsi.pve2.c0a765 [/B] 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.landrives.c0a765 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.pvelvmtest.c0a765 192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.hpopenviewdisk01.c0a765 192.168.3.30:3260
but pvesm list vm_work gives different results on the two nodes:
Code:
pve1:~# pvesm list [B]vm_work[/B]
vm_work:0.0.0.scsi-36001405ed36201adab64d4b0ad8e4cd2 raw 1073741824000
pve2:~# pvesm list [B]vm_work[/B]
[B](no output)[/B]
at first this "pvesm list vm_work" with no output happened on node pve2, so:
- I migrated all VMs/CTs to pve1,
- rebooted pve2, and then
- "pvesm list vm_work" on pve2 gave output again,
- but now "pvesm list vm_work" on pve1 gave no output
- so I migrated all VMs/CTs to pve2,
- rebooted pve1, and then
- "pvesm list vm_work" on pve1 gave output again,
- but now "pvesm list vm_work" on pve2 gave no output
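for what it's worth, the raw iSCSI session state on each node could also be compared directly with the open-iscsi tools (a diagnostic sketch only; I'm assuming pvesm drives open-iscsi / iscsiadm under the hood, which is what PVE normally does):

Code:
# on each node, list active iSCSI sessions and their state
iscsiadm -m session -P 1
# check whether the kernel actually created a block device for the target
ls -l /dev/disk/by-path/ | grep iscsi
# non-destructive rescan of all logged-in sessions
iscsiadm -m session --rescan

if the session shows as logged in on both nodes but the block device is missing on the "broken" one, that would point at the device/LUN layer rather than at PVE itself.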
...? what is happening here, and how can I fix it...?
Marco