PVE iSCSI with different /dev/sdX for iSCSI LUN

unsichtbarre

Howdy!

I created a PVE 8.4.1 host, connected an iSCSI SAN/LUN, then built a VM. I then created two other PVE hosts and formed a cluster. Trying to migrate a VM from the first host to another failed. I created Ceph on the cluster, migrated the VM to Ceph, and then migration worked. I started looking at storage and, while all hosts were connected to the iSCSI target, hosts 2 & 3 used different /dev/sdX identifiers for the iSCSI LUNs compared to host 1 (output below). I see now that hosts 2 & 3 have a flash drive enabled (/dev/sdd, 29.7G) whereas host 1 has the flash drive disabled in BIOS/UEFI.

Is there a way of correcting this, or should I just remove host 1 from the cluster and re-install it with the flash drive enabled (easier than re-installing hosts 2 & 3)?

THX in ADV
-JB

host 1:
Code:
root@va1-pve101:~# lsblk
NAME                                                                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                                               8:0    0   3.6T  0 disk
└─ceph--421ed1b6--412a--4750--ae8b--ec69db4a35a7-osd--block--686e5635--9f7b--4309--bb6c--e25e69ea0eba
                                                                                252:9    0   3.6T  0 lvm
sdb                                                                               8:16   0   3.6T  0 disk
└─ceph--9c4a4c0c--6438--4a58--9daa--aca9026685be-osd--block--eb2f597f--c89a--411c--becd--c188e2f44627
                                                                                252:10   0   3.6T  0 lvm
sdc                                                                               8:32   0   3.6T  0 disk
└─ceph--815722ea--dd4b--4b85--94f6--5b8851eda3d6-osd--block--08502226--125a--49ba--98e3--e886707595f5
                                                                                252:11   0   3.6T  0 lvm
sdd                                                                               8:48   0   500G  0 disk
└─sdd1                                                                            8:49   0   500G  0 part
sde                                                                               8:64   0    10T  0 disk
└─sde1                                                                            8:65   0    10T  0 part
sdf                                                                               8:80   0    15T  0 disk
└─sdf1                                                                            8:81   0    15T  0 part
sdg                                                                               8:96   0    15T  0 disk
└─sdg1                                                                            8:97   0    15T  0 part
sdh                                                                               8:112  0     5T  0 disk
├─S1-vm--100--disk--0                                                           252:5    0    32G  0 lvm
└─S1-vm--101--disk--0                                                           252:6    0   120G  0 lvm
nvme0n1                                                                         259:0    0 232.9G  0 disk
├─nvme0n1p1                                                                     259:1    0  1007K  0 part
├─nvme0n1p2                                                                     259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                                                                     259:3    0 231.9G  0 part
  ├─pve-swap                                                                    252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                                                    252:1    0    68G  0 lvm  /
  ├─pve-data_tmeta                                                              252:2    0   1.4G  0 lvm
  │ └─pve-data                                                                  252:4    0 137.1G  0 lvm
  └─pve-data_tdata                                                              252:3    0 137.1G  0 lvm
    └─pve-data                                                                  252:4    0 137.1G  0 lvm
hosts 2 & 3:
Code:
root@va1-pve102:~# lsblk
NAME                                                                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                                               8:0    0   3.6T  0 disk
└─ceph--05845e81--e3ee--470c--9d4b--f00a4d869a52-osd--block--14a20678--4f20--429f--8e33--f4496e7d1d52
                                                                                252:7    0   3.6T  0 lvm
sdb                                                                               8:16   0   3.6T  0 disk
└─ceph--0a5dc595--e7c6--48ef--ac19--0e437f0570dc-osd--block--3139840a--64e4--4c84--990c--14a8a4d2d683
                                                                                252:8    0   3.6T  0 lvm
sdc                                                                               8:32   0   3.6T  0 disk
└─ceph--a85fe242--0269--4125--a2aa--9221ef262313-osd--block--45e17a26--8756--4dfc--b078--a75210f24b40
                                                                                252:9    0   3.6T  0 lvm
sdd                                                                               8:48   0  29.7G  0 disk
├─sdd1                                                                            8:49   0   100M  0 part
├─sdd5                                                                            8:53   0     4G  0 part
├─sdd6                                                                            8:54   0     4G  0 part
└─sdd7                                                                            8:55   0  21.6G  0 part
sde                                                                               8:64   0   500G  0 disk
└─sde1                                                                            8:65   0   500G  0 part
sdf                                                                               8:80   0    10T  0 disk
└─sdf1                                                                            8:81   0    10T  0 part
sdg                                                                               8:96   0    15T  0 disk
└─sdg1                                                                            8:97   0    15T  0 part
sdh                                                                               8:112  0    15T  0 disk
└─sdh1                                                                            8:113  0    15T  0 part
sdi                                                                               8:128  0     5T  0 disk
├─S1-vm--100--disk--0                                                           252:5    0    32G  0 lvm
└─S1-vm--101--disk--0                                                           252:6    0   120G  0 lvm
nvme0n1                                                                         259:0    0 232.9G  0 disk
├─nvme0n1p1                                                                     259:1    0  1007K  0 part
├─nvme0n1p2                                                                     259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                                                                     259:3    0 231.9G  0 part
  ├─pve-swap                                                                    252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                                                    252:1    0    68G  0 lvm  /
  ├─pve-data_tmeta                                                              252:2    0   1.4G  0 lvm
  │ └─pve-data                                                                  252:4    0 137.1G  0 lvm
  └─pve-data_tdata                                                              252:3    0 137.1G  0 lvm
    └─pve-data                                                                  252:4    0 137.1G  0 lvm
 
Hi @unsichtbarre , welcome to the forum.

Your post skips over important details; answering them would help the community assist better:

connected iSCSI SAN/LUN
How did you do this? Did you use PVE iSCSI storage pool? PVE Direct iSCSI? Direct iscsiadm? Some other way?

Trying to migrate a VM from the first host to another failed
What was the exact error message? You can review the TASK log for details.
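In case it helps, the task log can also be pulled up from the shell; a minimal sketch (pvenode ships with PVE, and <UPID> is a placeholder for the ID of the failed migration task):
Code:
# list recent tasks on this node, then dump the log of the failed one
pvenode task list
pvenode task log <UPID>
# the journal around the same time usually shows the underlying storage error
journalctl --since "1 hour ago" | grep -iE "migrat|iscsi|lvm"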

hosts 2 & 3 used different /dev/sdX identifiers for iSCSI as compared to host 1 (output below)
This does not matter.

At first glance, you appear to have multipath-capable iSCSI (based on the seemingly duplicate devices). However, you do not appear to have configured multipath.
Please read this article carefully - https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/

If you do indeed have multipath-connected storage, you need to: a) remove your existing data/config, b) configure multipath, c) place LVM on the multipath device, and d) never use the sdX device directly.
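For step (b), a minimal sketch of what the multipath setup usually looks like on a PVE/Debian host (treat this as a starting point, not a definitive config; the find_multipaths setting and any vendor-specific device sections should be checked against your SAN vendor's recommendations):
Code:
apt install multipath-tools
# minimal /etc/multipath.conf
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
EOF
systemctl enable --now multipathd
# each LUN should now show up once under /dev/mapper/ with all its paths listed
multipath -ll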

I see now that hosts 2 & 3 have a flash drive enabled (/dev/sdd 29/7G) whereas host 1 has the flash drive disabled in BIOS/UEFI.
This data point is not relevant.

Cheers



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I connected to iSCSI by: Datacenter > Storage > Add > iSCSI, then Datacenter > Storage > Add > LVM (base volume = the LUN on the SAN), with "Shared" enabled.
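For reference, those two GUI steps typically end up as a pair of entries like this in /etc/pve/storage.cfg (the storage IDs and the base-volume string here are placeholders, not my exact values; the portal and target are my SAN's):
Code:
iscsi: san-iscsi
        portal 10.1.16.10
        target iqn.2002-10.com.infortrend:raid.uid796266.001
        content none

lvm: san-lvm
        vgname S1
        base san-iscsi:0.0.4.scsi-<lun-wwid>
        content images,rootdir
        shared 1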

The error message: UPID:va1-pve101:0001094A:001664C1:6879609A:qmigrate:102:root@pam: 1 687960A6 migration problems

The SAN is multipath-capable, but I have not tackled this yet.

THX,
-John
 
The SAN is multipath-capable, but I have not tackled this yet.
The system will be confused by multiple disks having the same signature. I recommend that you finish the configuration of the storage first, before doing app testing.
If you insist on doing it in the opposite order, cut all but one path.
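A quick way to check how many paths a host really has to each LUN is to compare SCSI WWIDs across the sdX devices; a minimal sketch (any WWID that shows up more than once means that LUN is reachable over multiple paths):
Code:
# print the unique SCSI identifier for every sd* disk
for d in /dev/sd?; do
    printf '%s  %s\n' "$d" "$(/lib/udev/scsi_id -g -u -d "$d")"
done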



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
THX for the info and the reading material, very helpful. I believe I am only connected via one path on all hosts:
Code:
root@va1-pve103:/# iscsiadm -m session
tcp: [1] 10.1.16.10:3260,1 iqn.2002-10.com.infortrend:raid.uid796266.001 (non-flash)
root@va1-pve103:/# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 2.1.8
Target: iqn.2002-10.com.infortrend:raid.uid796266.001 (non-flash)
        Current Portal: 10.1.16.10:3260,1
        Persistent Portal: 10.1.16.10:3260,1
                **********
                Interface:
                **********
                Iface Name: default
                Iface Transport: tcp
                Iface Initiatorname: iqn.1993-08.org.debian:01:8465d626416d
                Iface IPaddress: 10.1.16.103
                Iface HWaddress: default
                Iface Netdev: default
                SID: 1
                iSCSI Connection State: LOGGED IN
                iSCSI Session State: LOGGED_IN
                Internal iscsid Session State: NO CHANGE
                *********
                Timeouts:
                *********
                Recovery Timeout: 120
                Target Reset Timeout: 30
                LUN Reset Timeout: 30
                Abort Timeout: 15
                *****
                CHAP:
                *****
                username: <empty>
                password: ********
                username_in: <empty>
                password_in: ********
                ************************
                Negotiated iSCSI params:
                ************************
                HeaderDigest: None
                DataDigest: None
                MaxRecvDataSegmentLength: 262144
                MaxXmitDataSegmentLength: 65536
                FirstBurstLength: 65536
                MaxBurstLength: 262144
                ImmediateData: Yes
                InitialR2T: Yes
                MaxOutstandingR2T: 1
                ************************
                Attached SCSI devices:
                ************************
                Host Number: 4  State: running
                scsi4 Channel 00 Id 0 Lun: 0
                        Attached scsi disk sde          State: running
                scsi4 Channel 00 Id 0 Lun: 1
                        Attached scsi disk sdf          State: running
                scsi4 Channel 00 Id 0 Lun: 2
                        Attached scsi disk sdg          State: running
                scsi4 Channel 00 Id 0 Lun: 3
                        Attached scsi disk sdh          State: running
                scsi4 Channel 00 Id 0 Lun: 4
                        Attached scsi disk sdi          State: running
                scsi4 Channel 00 Id 0 Lun: 5
 
So all those disks from your lsblk output with identical size and structure are just identically-sized individual LUNs?
Run: lsblk -o NAME,KNAME,TYPE,HCTL,SIZE,MODEL,SERIAL



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Yes. /dev/sde to /dev/sdh are iSCSI LUNs NOT used by PVE, and /dev/sdi is the LUN I have used (except it is seen as /dev/sdh on host 1, because no /dev/sdd exists there).

here you go:
Code:
root@va1-pve103:/# lsblk -o NAME,KNAME,TYPE,HCTL,SIZE,MODEL,SERIAL
NAME                               KNAME     TYPE HCTL         SIZE MODEL                      SERIAL
sda                                sda       disk 0:0:1:0      3.6T Samsung SSD 870 QVO 4TB    S5VYNJ0R605400B
└─ceph--73399c9e--de85--429c--b0c2--9018391757be-osd--block--db796d42--d751--4c09--a4f5--ad92a2285eaf
                                   dm-7      lvm               3.6T
sdb                                sdb       disk 0:0:2:0      3.6T Samsung SSD 870 QVO 4TB    S5VYNJ0R605424Z
└─ceph--da51911a--2f71--427e--96a1--aed98b7c9b3e-osd--block--67fb759c--9b66--4b30--bc8a--239e65042af5
                                   dm-8      lvm               3.6T
sdc                                sdc       disk 0:0:3:0      3.6T Samsung SSD 870 QVO 4TB    S5VYNJ0R605423Y
└─ceph--c41ef366--5866--4f65--8761--2afc1ed0b456-osd--block--d6ad8fd8--e722--4da1--9676--2fb587e9ecd4
                                   dm-9      lvm               3.6T
sdd                                sdd       disk 2:0:0:0     29.5G Internal SD-CARD           000002660A01
├─sdd1                             sdd1      part              100M
├─sdd5                             sdd5      part                4G
├─sdd6                             sdd6      part                4G
└─sdd7                             sdd7      part             21.4G
sde                                sde       disk 4:0:0:0      500G GS 3000 Series             0C266A329576B9062F3673
└─sde1                             sde1      part              500G
sdf                                sdf       disk 4:0:0:1       10T GS 3000 Series             0C266A6AE6C9B830524CAB
└─sdf1                             sdf1      part               10T
sdg                                sdg       disk 4:0:0:2       15T GS 3000 Series             0C266A5C71BA9B38572E10
└─sdg1                             sdg1      part               15T
sdh                                sdh       disk 4:0:0:3       15T GS 3000 Series             0C266A76C2F3934B43A675
└─sdh1                             sdh1      part               15T
sdi                                sdi       disk 4:0:0:4        5T GS 3000 Series             0C266A5358657735B861B8
├─S1-vm--100--disk--0              dm-5      lvm                32G
└─S1-vm--101--disk--0              dm-6      lvm               120G
nvme0n1                            nvme0n1   disk            232.9G SanDisk SSD Plus 250GB A3N 24380U800703
├─nvme0n1p1                        nvme0n1p1 part             1007K
├─nvme0n1p2                        nvme0n1p2 part                1G
└─nvme0n1p3                        nvme0n1p3 part            231.9G
  ├─pve-swap                       dm-0      lvm                 8G
  ├─pve-root                       dm-1      lvm                68G
  ├─pve-data_tmeta                 dm-2      lvm               1.4G
  │ └─pve-data                     dm-4      lvm             137.1G
  └─pve-data_tdata                 dm-3      lvm             137.1G
    └─pve-data                     dm-4      lvm             137.1G
 
To answer your original question: you do not need to remove and re-add the LUNs to make the human-readable device names match.
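PVE and LVM track this storage by volume group name and UUIDs, not by the kernel's sdX name. A quick sanity check (a sketch, assuming the shared VG is the "S1" visible in your lsblk output) is to confirm every host reports the same VG and PV UUID:
Code:
# run on each host; pv_uuid and vg_name should match everywhere,
# even though the /dev/sdX name differs between hosts
pvs -o pv_name,vg_name,pv_uuid,dev_size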

Properly LVM your disks (thick), as sketched below, and you should be OK. If you continue having issues, provide the output of:
- cat /etc/pve/storage.cfg
- qm config [vmid]
- qm migrate [vmid] [target] --online
- journalctl -n 100
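A generic sketch of the thick-LVM-on-a-shared-LUN setup mentioned above, assuming a multipath device named mpatha (all names are placeholders; create the PV/VG on one host only, skip the create steps if the VG already exists, and add the storage once for the whole cluster):
Code:
# on ONE host only: put the PV/VG on the multipath (or by-id) device, never on /dev/sdX
pvcreate /dev/mapper/mpatha
vgcreate S1 /dev/mapper/mpatha
# register it as shared (thick) LVM storage visible to every cluster node
pvesm add lvm S1-lvm --vgname S1 --shared 1 --content images,rootdir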

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Well, migration is working flawlessly now with two smaller VMs, both on the iSCSI storage.
Thanks for your help. I am going to migrate them all to Ceph and work on iSCSI multipath.

-JB