iSCSI Storage Setup Error

Jun 20, 2020
Hi,

I work at a local school district and I am trying to set up iSCSI-based storage on our newer Dell ME Series SAN. I have run into several issues while setting this up and was hoping someone could point me in the right direction. In our environment we have three hosts with Proxmox VE 6.2 installed and fully up to date, as well as our Dell PowerVault ME SAN, which has the latest firmware on the controllers and the disks. The issue is that whenever I try to create the LVM storage on top of the iSCSI storage, I get this error:

create storage failed: error during cfs-locked 'file-storage_cfg' operation: device '/dev/disk/by-id/scsi-0DellEMC_ME4_116630653464650000c0ff44aa5200000a01012d00c0ff44a98200000a01012e' is already used by volume group '[unknown]' (500)

On the Dell SAN I have the storage mapped to the hosts. I have even tried mapping just a single host and still receive the same error. I have also attached a couple of pictures. If anyone can help I would really appreciate it!
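For reference, a quick way to see what is already claiming the LUN from one of the hosts is something like the following (I am using /dev/sdX here as a stand-in for the long /dev/disk/by-id path above, so the exact device name is only an example):

Code:
# list any filesystem / LVM / RAID signatures present on the LUN (read-only)
wipefs /dev/sdX

# show every physical volume and the volume group it belongs to
pvs -a -o pv_name,vg_name,vg_uuid
vgs -o vg_name,vg_uuid,pv_count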
 

Attachments

  • storageissue2.PNG (14.4 KB)
  • storageIssue3.PNG (14.8 KB)
  • StorageIssue1.PNG (33.7 KB)
hi,

please post the output of the following commands:

Code:
fdisk -l
pvs -a
vgs -a
lvs -a
iscsiadm -m session
 
Can you also post the output of lsblk? Are you using multipath?
 
Code:
root@SABRETOOTH:~# lsblk
NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                 8:0    0   931G  0 disk
├─sda1                              8:1    0  1007K  0 part
├─sda2                              8:2    0   512M  0 part /boot/efi
└─sda3                              8:3    0 930.5G  0 part
  ├─pve-swap                      253:5    0     8G  0 lvm  [SWAP]
  ├─pve-root                      253:6    0    96G  0 lvm  /
  ├─pve-data_tmeta                253:7    0   8.1G  0 lvm
  │ └─pve-data                    253:9    0 794.3G  0 lvm
  └─pve-data_tdata                253:8    0 794.3G  0 lvm
    └─pve-data                    253:9    0 794.3G  0 lvm
sdb                                 8:16   0 223.5G  0 disk
├─sdb1                              8:17   0  1007K  0 part
├─sdb2                              8:18   0   512M  0 part
└─sdb3                              8:19   0   223G  0 part
  ├─pve--OLD--108BA64D-swap       253:0    0     8G  0 lvm
  ├─pve--OLD--108BA64D-root       253:1    0  55.8G  0 lvm
  ├─pve--OLD--108BA64D-data_tmeta 253:2    0   1.4G  0 lvm
  │ └─pve--OLD--108BA64D-data     253:4    0 140.4G  0 lvm
  └─pve--OLD--108BA64D-data_tdata 253:3    0 140.4G  0 lvm
    └─pve--OLD--108BA64D-data     253:4    0 140.4G  0 lvm
sdc                                 8:32   0   6.4T  0 disk
sdd                                 8:48   0   6.4T  0 disk
sde                                 8:64   0   6.4T  0 disk
sdf                                 8:80   0   6.4T  0 disk
sdg                                 8:96   0   6.4T  0 disk
sdh                                 8:112  0   6.4T  0 disk
sdi                                 8:128  0   6.4T  0 disk
sdj                                 8:144  0   6.4T  0 disk
sr0                                11:0    1  1024M  0 rom
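sdc through sdj are all 6.4 T, which I assume is the ME4 LUN(s) showing up once per path. If the multipath view helps as well, I can also grab:

Code:
# show how multipathd has grouped the iSCSI paths (if multipath is configured)
multipath -ll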
 
An update:

Yesterday I took a server (test) that is separate from our host devices and installed the same version of Proxmox on it as on our hosts, after first removing the mappings from the other host devices. This server has two NICs: one is connected to my network, and the other I set up with the IP 192.168.1.6/24 (the same subnet as the Dell ME storage). After setting this up I was able to add the same multipath config as on the other nodes and go through the same setup process on this node. The only difference was that afterwards I saw one LUN in the iSCSI storage content instead of two or three. After mapping the device I was able to create the LVM on the LUN with no issue.

At this point I thought I was heading in the right direction, so I removed Sabretooth from the main cluster and added it to this test cluster, with the same setup and the same subnets. Both nodes could access the storage until I rebooted them. After the reboot the content was visible on the test node but not on the Sabretooth node. When I went to the content I saw two LUN images, as shown in the first screenshot (1), and I got an error when trying to access the content on the device, as shown in screenshot (2). I have also attached the log for reference.

I have also attached the list of VM disks I created on the share before adding Sabretooth, to show that it was working (3), as well as the storage setup to show that the storages are there and that I was able to add the LVM (4).

To clarify: I now cannot access the data on either Sabretooth (node A) or test (node B). I am trying to get this resolved in a timely manner, as our servers have been running on local disks for about three weeks now.
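For reference, these are the kinds of checks that could be run on both nodes after a reboot to see whether the sessions and the volume group come back (the volume group name below is only a placeholder for whatever the LVM storage was created with):

Code:
# list active iSCSI sessions and the SCSI devices attached to each one
iscsiadm -m session -P 3

# check whether the volume group on the LUN is visible, and activate it if needed
vgs
vgchange -ay <vgname>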
 

Attachments

  • 2.png (99.1 KB)
  • 3.png (110.2 KB)
  • 4.png (66.1 KB)
  • 1.png (81 KB)
When I went to the content I saw two LUN images, as shown in the first screenshot (1), and I got an error when trying to access the content on the device, as shown in screenshot (2). I have also attached the log for reference.
this sounds very weird... how is your SAN set up? From the session output above I can see that your SAN announces multiple IPs in the same range; why does it do that?
Multipath only makes sense over different paths, and if they are all in the same subnet there is nothing to gain here.
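If you do want real multipath, the usual approach is one portal per subnet, optionally binding each session to a specific NIC via an iface record. Roughly like this, where the iface name, NIC name and portal IP are only examples:

Code:
# create an iface record bound to a specific NIC (names and IPs are examples)
iscsiadm -m iface -o new -I iface_storage1
iscsiadm -m iface -o update -I iface_storage1 -n iface.net_ifacename -v ens19

# discover and log in to the portal on that subnet through that iface only
iscsiadm -m discovery -t sendtargets -p 192.168.1.10 -I iface_storage1
iscsiadm -m node -p 192.168.1.10 -I iface_storage1 --login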
 
As shown in the screenshots below:

I have the SAN operating under two different IP ranges, 192.168.1.* and 192.168.4.*, and the node NICs are on the same corresponding subnets.

Additionally, in this test environment I kept everything on the same subnet (192.168.1.*). It does not seem to matter whether I use separate subnets or not; I tested it both ways and get the same issue either way. The problem I seem to have is that multiple LUNs get pulled in each time a host is added.

Also, if you look, the ID names are different for each host. On a previous forum thread I found an image that looks like my environment, and I have attached it as well.

Unchanged between the test environment and the main nodes setup: testing.png, sansetup.png
Test environment: testen1.png, testing3.png
Main nodes setup: DellSanSetup.png
 

Attachments

  • sansetup1.png (20.9 KB)
  • testing.png (173.9 KB)
  • testing3.png (30.1 KB)
  • testen1.png (6.7 KB)
  • DellSanSetup.PNG (38.9 KB)
This is the config file I created. Does this look right?

Code:
blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "3600c0ff00044aa526e41f15e01000000"
}

multipaths {
    multipath {
        wwid  "3600c0ff00044aa526e41f15e01000000"
        alias mpath0
    }
}

defaults {
    polling_interval     2
    path_selector        "round-robin 0"
    path_grouping_policy multibus
    uid_attribute        ID_SERIAL
    rr_min_io            100
    failback             immediate
    no_path_retry        queue
    user_friendly_names  yes
}

devices {
    device {
        vendor               "DellEMC"
        product              "ME4"
        path_grouping_policy group_by_prio
        path_checker         "tur"
        hardware_handler     "1 alua"
        prio                 "alua"
        failback             immediate
        rr_weight            "uniform"
        path_selector        "round-robin 0"
    }
}
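For completeness, after editing /etc/multipath.conf my understanding is that the config can be reloaded and checked with something like:

Code:
# reload the multipath configuration and rebuild the maps
systemctl restart multipathd
multipath -r

# verify that the alias and all paths show up as expected
multipath -ll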
 
If it helps, this is mine:
Code:
defaults {
    find_multipaths      no
    polling_interval     2
    path_selector        "round-robin 0"
    path_grouping_policy multibus
    uid_attribute        ID_SERIAL
    rr_min_io            100
    failback             immediate
    no_path_retry        queue
    user_friendly_names  yes
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(td|hd)[a-z]"
    devnode "^dcssblk[0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*"
}

blacklist_exceptions {
    wwid "36001405c261c626dd18dd4bacd85e9d3"
}

multipaths {
    multipath {
        wwid  "36001405c261c626dd18dd4bacd85e9d3"
        alias mpathe
    }
}

With version 6 of Proxmox, multipath behaviour has changed a bit...
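One thing worth double-checking is that the wwid in blacklist_exceptions and multipaths matches what the LUN actually reports; one way to read it (the device name here is only an example) is:

Code:
# print the WWID of the LUN as udev/multipath sees it
/lib/udev/scsi_id -g -u -d /dev/sdc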
 
