Can't add LVM storage - getting "iSCSI storage is not online"

json

New Member
Oct 15, 2014
Hi all,

I tried to add an LVM-type storage (a virtual disk backed by a LUN from our iSCSI SAN) and suddenly got a message that the iSCSI storage is not online, which surprised me because it simply can't be true.

OK, I tried it again and got a different message: create storage failed: command '/usr/bin/file -L -s /dev/disk/by-id/scsi-32055001378939ad8' failed: exit code 1 (500).

I have checked it with multipath -ll | grep 32055001378939ad8 and got this:

Code:
32055001378939ad8 dm-28 Qsan,P300Q-D316
size=200G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 2:0:0:21 sdcm 69:160 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 3:0:0:21 sdcn 69:176 active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:21 sdco 69:192 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
`- 9:0:0:21 sdcp 69:208 active ready running

The node sees both portals and is logged in to them...

Now I'm a little unsure what to do - is it possible to configure it manually, for example by adding it to /etc/pve/storage.cfg?
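
Something like the sketch below is what I have in mind (just a sketch - the portal address, the target IQN, the LVM storage ID and the volume group name are only placeholders, not our real values):

Code:
# /etc/pve/storage.cfg - sketch only; portal, target, LVM storage ID and VG name are placeholders
iscsi: san01_00
        portal 192.0.2.10
        target iqn.2001-04.com.example:san01
        content none

lvm: lvm_san01_21
        vgname vg_san01_21
        base san01_00:0.0.21.scsi-32055001378939ad8
        shared 1
        content images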

Proxmox version:
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


Running the command manually gets me this:

Code:
/usr/bin/file -L -s /dev/disk/by-id/scsi-32055001378939ad8
/dev/disk/by-id/scsi-32055001378939ad8: ERROR: (null)

But I see the disk in the directory:

Code:
ls /dev/disk/by-id | grep scsi-32055001378939ad8
scsi-32055001378939ad8
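
I guess the next step would be to check whether the device can be read at the block level at all - something along these lines (just a sketch, using the same by-id path as above):

Code:
# read the first 4 KiB through the multipath device;
# if this fails, the problem is below LVM, i.e. in multipath or iSCSI itself
dd if=/dev/disk/by-id/scsi-32055001378939ad8 bs=4096 count=1 | hexdump -C

# size of the device as reported by the kernel
blockdev --getsize64 /dev/disk/by-id/scsi-32055001378939ad8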



Thanks for any help.
 
Now I'm a little unsure what to do - is it possible to configure it manually, for example by adding it to /etc/pve/storage.cfg?


Possible, but it will not work either as long as there are problems with access to the iSCSI storage - and that seems to be the case here.

Proxmox version:
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
[...]

More than 1 year old - consider an upgrade.

/usr/bin/file -L -s /dev/disk/by-id/scsi-32055001378939ad8
/dev/disk/by-id/scsi-32055001378939ad8: ERROR: (null)
[...]

That means the device has been there, but it is currently not (correctly) accessible.

Check your network and the iSCSI target!
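
For example with something along these lines (standard open-iscsi and multipath commands - adapt them to your setup):

Code:
# list the active iSCSI sessions and the devices attached to them
iscsiadm -m session -P 3

# rescan all logged-in sessions
iscsiadm -m session --rescan

# reload the multipath maps and show their current state
multipath -r
multipath -ll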
 
Hello Richard,

thank you for your reply and suggestions. I'm planning the upgrade very soon, just after I solve this problem...

I do not see any problem on the target... The node is attached to it and logged in properly, and there are no errors in the SAN logs. I tried to list the disks from the target:

Code:
root@server3:~# pvesm list san01_00
...
san01_00:0.0.20.scsi-32054001378939ad8   raw 64961380352
san01_00:0.0.21.scsi-32055001378939ad8   raw 215285235712
...


root@server3:~# /usr/bin/file -L -s /dev/disk/by-id/scsi-32055001378939ad8
/dev/disk/by-id/scsi-32055001378939ad8: ERROR: (null)
root@server3:~# /usr/bin/file -L -s /dev/disk/by-id/scsi-32054001378939ad8
/dev/disk/by-id/scsi-32054001378939ad8: sticky LVM2 PV (Linux Logical Volume Manager), UUID: GzDejY-KdCc-FadJ-3EpP-1eRF-BK6R-GI7e5G, size: 64961380352

There is no PV on the disk. I checked the Proxmox storage scripts, and it seems that the PV is created only after the device passes the check with the file command. Now I'm unsure what to do - is this state recoverable?
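
If the raw device becomes readable again, would it be safe to create the PV and VG by hand, roughly like this (just a sketch - the VG name is only a placeholder)?

Code:
# only once the raw device is readable again; VG name is a placeholder
pvcreate /dev/disk/by-id/scsi-32055001378939ad8
vgcreate vg_san01_21 /dev/disk/by-id/scsi-32055001378939ad8
pvs && vgs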

I would also like to know how to revert this state - how to deconfigure the disk safely in the multipath environment...
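
What I have pieced together so far is roughly the following (a sketch based on the usual multipath removal steps - please correct me if this is wrong):

Code:
# flush the multipath map for this LUN (only when nothing is using it)
multipath -f 32055001378939ad8

# then delete each underlying path device (sdcm/sdcn/sdco/sdcp from multipath -ll above)
for dev in sdcm sdcn sdco sdcp; do
    blockdev --flushbufs /dev/$dev
    echo 1 > /sys/block/$dev/device/delete
done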

Can you please help me further?

Many thanks.
 
