missing (but working) iSCSI storage from one node, or the other... ?!?

m.ardito

following the problem reported here: http://forum.proxmox.com/threads/16748-changed-nfs-server-IP-address-how-to-make-pve-notice-remount
now I have a really strange problem...

recap: 2-node cluster, both on 3.1-21/93bf03d4 (on LAN)
cluster storage: NFS and LVM/iSCSI shares, spread over two different NAS boxes (on LAN), nasA and nasB

in the post cited above, I had some trouble after changing the IP address of one NAS (nasB).

now it's all fine: the NFS shares work, ~20 running VMs (KVM, LVM/iSCSI storage on nasA) and a few CTs (OpenVZ, NFS storage on nasA)

except: since yesterday, on one node or the other, or both (it has changed a few times),
it seems PVE can't acknowledge the nasA iSCSI "link", named "vm_work"

atm I have:

in /etc/pve/storage.cfg (both nodes):
Code:
pve1
iscsi: vm_work
        target iqn.2004-04.com.qnap:ts-809u:iscsi.pve2.c0a765
        portal 192.168.3.30
        content none

pve2
iscsi: vm_work
        target iqn.2004-04.com.qnap:ts-809u:iscsi.pve2.c0a765
        portal 192.168.3.30
        content none
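
by the way, to rule out a plain session problem, one thing worth checking (PVE uses open-iscsi underneath, so iscsiadm should be available on both nodes) is whether each node is actually logged in to the target:
Code:
# run on both pve1 and pve2: list active iSCSI sessions,
# the vm_work target should appear on each node
iscsiadm -m session
# more detail (session state, attached disks):
iscsiadm -m session -P 3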

pvesm status reports

Code:
pve1:~# pvesm status
iso_qnap          nfs 1     10084223488      5297144064      4787079424 53.03%
local             dir 1        34116380         1263760        32852620 4.20%
pve_ts879         nfs 1     11619394112      1497403200     10121204480 13.39%
ts879             nfs 1     11619394112      1497403200     10121204480 13.39%
vm_disks          lvm 1      1048571904               0       409796608 0.50%
vm_disks_ts879    lvm 1      1048571904               0      1048571904 0.50%
vm_ts879        iscsi 1               0               0               0 100.00%
vm_work         iscsi 1               0               0               0 100.00%

pve2:~# pvesm status
iso_qnap          nfs 1     10084223488      5297144064      4787079424 53.03%
local             dir 1        34116380        28047268         6069112 82.71%
pve_ts879         nfs 1     11619394112      1497403200     10121204480 13.39%
ts879             nfs 1     11619394112      1497403200     10121204480 13.39%
vm_disks          lvm 1      1048571904               0       409796608 0.50%
vm_disks_ts879    lvm 1      1048571904               0      1048571904 0.50%
vm_ts879        iscsi 1               0               0               0 100.00%
vm_work         iscsi 1               0               0               0 100.00%

and pvesm iscsiscan -portal 192.168.3.30
Code:
pve1:~# pvesm iscsiscan -portal 192.168.3.30
iqn.2004-04.com.qnap:ts-809u:iscsi.pve.c0a765              192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.pve2.c0a765             192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.landrives.c0a765        192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.pvelvmtest.c0a765       192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.hpopenviewdisk01.c0a765 192.168.3.30:3260

pve2:~#  pvesm iscsiscan -portal 192.168.3.30
iqn.2004-04.com.qnap:ts-809u:iscsi.pve.c0a765              192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.pve2.c0a765             192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.landrives.c0a765        192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.pvelvmtest.c0a765       192.168.3.30:3260
iqn.2004-04.com.qnap:ts-809u:iscsi.hpopenviewdisk01.c0a765 192.168.3.30:3260

but pvesm list vm_work:
Code:
pve1:~# pvesm list vm_work
vm_work:0.0.0.scsi-36001405ed36201adab64d4b0ad8e4cd2   raw 1073741824000

pve2:~# pvesm list vm_work
(no output)
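
to see whether this is pvesm or the kernel, one can also look for the LUN's device node; the volid above contains the disk WWID, so grepping for it in /dev/disk/by-id (my guess at the quickest check) shows whether a node sees the disk at all:
Code:
# run on both nodes: does the kernel expose the LUN as a block device?
# the WWID is taken from the volid shown above
ls -l /dev/disk/by-id/ | grep 36001405ed36201adab64d4b0ad8e4cd2
# by-path names encode the portal IP and the target IQN too:
ls -l /dev/disk/by-path/ | grep 192.168.3.30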

at first this empty "pvesm list vm_work" output happened on node pve2, so:
- I migrated all VMs/CTs to pve1,
- rebooted pve2 (see the rescan note below), and then
- "pvesm list vm_work" on pve2 gave output again,
- but on pve1 "pvesm list vm_work" now gave no output

- I migrated all VMs/CTs to pve2,
- rebooted pve1, and then
- "pvesm list vm_work" on pve1 gave output again,
- but on pve2 "pvesm list vm_work" now gave no output
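
side note: a full reboot is probably overkill here; rescanning the open-iscsi session should be enough to refresh the LUN list (I haven't verified that this avoids the flip-flopping):
Code:
# rescan all active iSCSI sessions on the node:
iscsiadm -m session -R
# or only the vm_work target:
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-809u:iscsi.pve2.c0a765 \
         -p 192.168.3.30 --rescan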

what is happening here, and how can I fix this...?

Marco
 
yes, it seems both nodes have an identical folder structure and identical records, at least for that target...

I was wondering what exactly the command "pvesm list vm_work" does... then I could track down what is not working...

update:

I also tried the API:
https://pve1IP:8006/api2/json/storage/vm_work
https://pve2IP:8006/api2/json/storage/vm_work

both gave an identical result:

Code:
{"data":
  {
    "shared":1,
    "target":"iqn.2004-04.com.qnap:ts-809u:iscsi.pve2.c0a765",
    "content":"none",
    "digest":"86569fa609444dd23bccfcd0d10e61d565d38c2b",
    "type":"iscsi",
    "storage":"vm_work",
    "portal":"192.168.3.30"
  }
}
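
that identical output is expected anyway, since /etc/pve is the clustered pmxcfs and the "digest" is the same on both nodes. the same query works from a shell via pvesh, no browser needed:
Code:
# read the cluster-wide storage definition through the local API:
pvesh get /storage/vm_work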

the pvesm source has this:
Code:
my $cmddef = {
 120     add => [ "PVE::API2::Storage::Config", 'create', ['storage'] ],
 121     set => [ "PVE::API2::Storage::Config", 'update', ['storage'] ],
 122     remove => [ "PVE::API2::Storage::Config", 'delete', ['storage'] ],
 123     status => [ "PVE::API2::Storage::Status", 'index', [],
 124                 { node => $nodename }, $print_status ],
 125 [B]    list [/B]=> [ "PVE::API2::Storage::Content", 'index', ['storage'],
 126               { node => $nodename }, $print_content ],

from what I understand... there's no reason for the two nodes to differ...?
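
if I read that dispatch table right, "pvesm list" is just the content index API call for the local node ($nodename), so this should be equivalent (assuming the PVE node name matches the short hostname):
Code:
# what "pvesm list vm_work" should boil down to on each node:
pvesh get /nodes/$(hostname)/storage/vm_work/content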

Thanks,
Marco
 
I found what fails, I think. I still don't understand why, or how to fix it...

this
Code:
https://pve1IP:8006/api2/json/nodes/pve1/storage/vm_work/content
https://pve2IP:8006/api2/json/nodes/pve1/storage/vm_work/content

gives
{"data":[{"content":"images","channel":0,"size":1073741824000,"lun":0,"format":"raw","id":0,"vmid":0,"volid":"vm_work:0.0.0.scsi-36001405ed36201adab64d4b0ad8e4cd2"}]}

while this
Code:
https://pve1IP:8006/api2/json/nodes/pve2/storage/vm_work/content
https://pve2IP:8006/api2/json/nodes/pve2/storage/vm_work/content

gives
{"data":[]}

any idea?
Marco
 
more details...

... it seems that if a node has VM disks on an LVM/iSCSI storage, the iSCSI "content" is empty.
if an LVM/iSCSI storage is configured at datacenter level but has no images yet, the iSCSI "content" shows the target info, like

{"data":[{"content":"images","channel":0,"size":1073741824000,"lun":0,"format":"raw","id":0,"vmid":0,"volid":"vm_work:0.0.0.scsi-36001405ed36201adab64d4b0ad8e4cd2"}]}

other than this, everything is apparently working normally...
could this be a bug, a problem specific to my nodes/cluster, or is it expected behavior?
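
for anyone wanting to double-check: even when "content" comes back empty, the LUN should still be visible as an LVM physical volume on that node (assuming, as in my setup, the LVM storage sits on top of this iSCSI target):
Code:
# the iSCSI LUN used as an LVM PV should show up even on the "empty" node:
pvs -o pv_name,vg_name,pv_size
# and the volume group / logical volumes on it:
lvs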

Marco
 
