iSCSI setup not working - duplicate UUIDs

Oct 25, 2017
I have tried to create a LUN multiple times, following this thread (except not allowing the iSCSI target to be reached directly, and setting up LVM through the GUI).

Every time I try, I end up with duplicate UUIDs, as shown below in the output of the pvesm status command:

<snip>
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: PV 7rtS2N-3t71-CyO6-SqnI-24Um-LfOh-VRlfSW on /dev/sdc was already found on /dev/sdb.
WARNING: PV 7rtS2N-3t71-CyO6-SqnI-24Um-LfOh-VRlfSW prefers device /dev/sdb because device was seen first.
WARNING: Not using lvmetad because duplicate PVs were found.
WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
WARNING: PV 7rtS2N-3t71-CyO6-SqnI-24Um-LfOh-VRlfSW on /dev/sdc was already found on /dev/sdb.
WARNING: PV 7rtS2N-3t71-CyO6-SqnI-24Um-LfOh-VRlfSW prefers device /dev/sdb because device was seen first.
local       dir      1    98559220   19951020    73558652   21.84%
local-lvm   lvmthin  1   342761472          0   342761472    0.50%
netstore    lvm      1  4718587904          0  4718587904    0.50%
scsimount1  iscsi    1           0          0           0  100.00%

</snip>

No amount of mucking about seems to solve this - every time I add the iSCSI volume, this is the result. I'm using a Synology with very recent firmware. The only thing I can think of to attempt next is multipath, but it is disabled on all nodes.

The error appeared on 2 of 3 nodes; it did not appear on the node I was connecting to the GUI on. Once I restarted that node, the warning now appears on all nodes.

iscsiadm -m session

tcp: [1] 1.1.1.200:3260,1 internalname.d5f838b405 (non-flash)
tcp: [2] 1.1.1.201:3260,1 internalname.d5f838b405 (non-flash)


I don't need multiple targets - what's the best way to disable one of them? I suspect this will solve the problem. I'm running proxmox-ve: 5.0-15 (running kernel: 4.10.15-1-pve).

Thanks in advance
 
This message from pvesm status is just a warning, not an error. You probably created an LVM PV on this LUN, but since you exported the LUN via multiple paths, the PV signature is found on multiple disks.

Normally, if you export a LUN via multiple paths, you would run a multipath daemon on the initiator side, which coalesces those paths into a single block device, so the logical volume manager only sees a single PV signature.

Note that configuring multipath is a tricky task in itself, so if you don't know whether you need it, the easiest way to get rid of this warning is to define a single path on your SAN.
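On the initiator side, the rough equivalent is to log out of the redundant session and keep open-iscsi from re-establishing it. A sketch - the target IQN and portal below are placeholders loosely based on the session list in the first post, so substitute your real values; the commands are echoed rather than run, since they need root and a live iSCSI session:

```shell
# Placeholder values - replace with the IQN/portal from `iscsiadm -m session`.
TARGET="iqn.2000-01.com.synology:internalname.d5f838b405"
PORTAL="1.1.1.201:3260"   # the second (redundant) path

# Log out of the extra session.
echo "iscsiadm -m node -T $TARGET -p $PORTAL -u"
# Stop open-iscsi from logging back in to that portal at boot.
echo "iscsiadm -m node -T $TARGET -p $PORTAL -o update -n node.startup -v manual"
```

After that, lvm should only see the PV once and the duplicate-PV warning should go away on the next pvscan.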
 
This is way more than a warning - you can't do live migrations at all with this in place. That's pretty much what I thought had happened, though. I started tinkering with multipath a bit and discovered that, even though I installed all nodes from the same media, one of the Proxmox nodes has no multipath-tools package available:

Package multipath-tools is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
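If only one node is missing the package, it's worth diffing /etc/apt/sources.list between nodes - multipath-tools ships in the plain Debian repository, so a node whose sources only list the Proxmox repos won't see it. A hedged sketch for PVE 5 (Debian 9 "stretch"); compare against a working node before copying:

```
# /etc/apt/sources.list - example Debian entries for a PVE 5 / stretch node
deb http://ftp.debian.org/debian stretch main contrib
deb http://security.debian.org stretch/updates main contrib
```

With those in place, apt update followed by apt install multipath-tools should find the package.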


It's not easy to disable a full NIC on the SAN - so at this point I'm inclined to try the multipath configuration. The benefit there is throughput, which could be helpful down the road. Going to start here, I guess: https://pve.proxmox.com/wiki/ISCSI_Multipath. If anyone has third-party links that helped them, that would be great. Thanks in advance!
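For reference, the wiki approach boils down to: install multipath-tools, find the LUN's WWID, and whitelist it in /etc/multipath.conf. A minimal sketch of that config - the WWID below is a placeholder (query the real one with /lib/udev/scsi_id -g -u -d /dev/sdb), and the alias is a made-up name:

```
# /etc/multipath.conf - minimal sketch following the PVE iSCSI Multipath wiki.
defaults {
    polling_interval        2
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    failback                immediate
    no_path_retry           queue
    user_friendly_names     yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600000000000000000000000000000000"   # placeholder - your LUN's WWID
}
multipaths {
    multipath {
        wwid  "3600000000000000000000000000000000"
        alias netstore-lun                        # hypothetical alias
    }
}
```

Restart the daemon (systemctl restart multipath-tools) and check multipath -ll; LVM should then see a single /dev/mapper device instead of /dev/sdb and /dev/sdc.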
 
