Found duplicate PV? Multiple block devices for same disk

tomstephens89

When I run vgdisplay I get the following messages:

Found duplicate PV jiK8NfLiSOLiCZQmz5hwPzN1v2zrjBsd: using /dev/sde not /dev/sdd
Found duplicate PV jiK8NfLiSOLiCZQmz5hwPzN1v2zrjBsd: using /dev/sdf not /dev/sde
Found duplicate PV jiK8NfLiSOLiCZQmz5hwPzN1v2zrjBsd: using /dev/sdg not /dev/sdf
Found duplicate PV dJGqX72OJo2pbKkp0KqWXgsBGUpKzOMn: using /dev/sdi not /dev/sdh
Found duplicate PV dJGqX72OJo2pbKkp0KqWXgsBGUpKzOMn: using /dev/sdj not /dev/sdi
Found duplicate PV dJGqX72OJo2pbKkp0KqWXgsBGUpKzOMn: using /dev/sdl not /dev/sdj

I have two iSCSI LUNs connected to this host; multipath-tools is NOT installed.

When I run fdisk -l I see several /dev/sdX devices for the same LUNs, as follows:

Disk /dev/sde: 931.3 GiB, 999999995904 bytes, 1953124992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdd: 931.3 GiB, 999999995904 bytes, 1953124992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdh: 465.7 GiB, 499999965184 bytes, 976562432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdi: 465.7 GiB, 499999965184 bytes, 976562432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdf: 931.3 GiB, 999999995904 bytes, 1953124992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdg: 931.3 GiB, 999999995904 bytes, 1953124992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdj: 465.7 GiB, 499999965184 bytes, 976562432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk /dev/sdl: 465.7 GiB, 499999965184 bytes, 976562432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes

What is going on here? I am having iSCSI problems and was wondering if this is related.
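
As a sanity check, something like this should show which iSCSI session each /dev/sdX device hangs off (I haven't pasted the full output here):

# print each session with its attached SCSI devices (look for "Attached scsi disk sdX")
iscsiadm -m session -P 3
# the by-path symlinks also show which portal/LUN a given sdX belongs to
ls -l /dev/disk/by-path/ | grep iscsi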

EDIT:

The output of iscsiadm -m session shows:

tcp: [10] 10.102.41.2:3260,4 iqn.1986-03.com.hp:storage.p2000g3.13091973a5 (non-flash)
tcp: [11] 10.102.40.2:3260,2 iqn.1986-03.com.hp:storage.p2000g3.13091973a5 (non-flash)
tcp: [12] 10.102.41.1:3260,3 iqn.1986-03.com.hp:storage.p2000g3.13091973a5 (non-flash)
tcp: [9] 10.102.40.1:3260,1 iqn.1986-03.com.hp:storage.p2000g3.13091973a5 (non-flash)

This implies iSCSI is multipathing even though I have not installed or configured multipath-tools? All I did was enter a single portal IP in the GUI for the iSCSI storage and then configure LVM on my two LUNs.
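
I guess the discovery against the single portal I entered returned all four portals in the group; presumably something like this would show the same list (I haven't captured the output here):

# sendtargets discovery against the one portal entered in the GUI;
# the P2000 should report every portal in the target portal group
iscsiadm -m discovery -t sendtargets -p 10.102.40.1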
 
It has nothing to do with whether multipath-tools is installed or not. Your storage server simply replies on several IPs, all of which can be discovered by iscsiadm. To prevent this, you should instruct your storage server to listen on only one address.
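
If you would rather trim it from the initiator side instead, you could also log out of the extra sessions and stop them coming back, roughly like this (IQN and portal taken from the session list above; untested on this setup):

# drop one of the redundant sessions
iscsiadm -m node -T iqn.1986-03.com.hp:storage.p2000g3.13091973a5 -p 10.102.41.2:3260 --logout
# and keep it from being logged into automatically on the next boot
iscsiadm -m node -T iqn.1986-03.com.hp:storage.p2000g3.13091973a5 -p 10.102.41.2:3260 -o update -n node.startup -v manual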
 

Sorry, but I'm not sure I follow...

I have only given Proxmox a single portal address for my iSCSI target, which is an HP P2000 SAN with dual controllers. This Proxmox node has two 10Gb connections to two separate storage networks. Each controller in the SAN has a connection to both of those networks.

Proxmox has discovered the other SAN interfaces and established a session to the SAN on all four interfaces. It has also created a block device for each LUN on every available path, which suggests to me that Proxmox has automatically configured itself for multipath.

I bet that if I were to install multipath-tools now and run multipath -ll, there would be four paths to each disk.
 
See this: I installed multipath-tools and ran multipath -ll straight away:

root@apollo12:~# apt-get install multipath-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
kpartx
Suggested packages:
multipath-tools-boot
The following NEW packages will be installed:
kpartx multipath-tools
0 upgraded, 2 newly installed, 0 to remove and 66 not upgraded.
Need to get 217 kB of archives.
After this operation, 692 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://ftp.uk.debian.org/debian/ jessie/main kpartx amd64 0.5.0-6+deb8u2 [31.5 kB]
Get:2 http://ftp.uk.debian.org/debian/ jessie/main multipath-tools amd64 0.5.0-6+deb8u2 [185 kB]
Fetched 217 kB in 0s (2,402 kB/s)
Selecting previously unselected package kpartx.
(Reading database ... 44657 files and directories currently installed.)
Preparing to unpack .../kpartx_0.5.0-6+deb8u2_amd64.deb ...
Unpacking kpartx (0.5.0-6+deb8u2) ...
Selecting previously unselected package multipath-tools.
Preparing to unpack .../multipath-tools_0.5.0-6+deb8u2_amd64.deb ...
Unpacking multipath-tools (0.5.0-6+deb8u2) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for systemd (215-17+deb8u4) ...
Setting up kpartx (0.5.0-6+deb8u2) ...
Setting up multipath-tools (0.5.0-6+deb8u2) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Processing triggers for systemd (215-17+deb8u4) ...
Processing triggers for libc-bin (2.19-18+deb8u4) ...
root@apollo12:~# multipath -ll
3600c0ff000198cd8a8089a5701000000 dm-7 HP,P2000 G3 iSCSI
size=931G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 7:0:0:0 sdd 8:48 active ready running
| `- 8:0:0:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 6:0:0:0 sdf 8:80 active ready running
  `- 9:0:0:0 sdg 8:96 active ready running
3600c0ff000198cd8cd1d9f5701000000 dm-8 HP,P2000 G3 iSCSI
size=466G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 7:0:0:6 sdi 8:128 active ready running
| `- 8:0:0:6 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 6:0:0:6 sdj 8:144 active ready running
  `- 9:0:0:6 sdk 8:160 active ready running
 
I have only given Proxmox a single portal address for my iSCSI target, which is an HP P2000 SAN with dual controllers. This Proxmox node has two 10Gb connections to two separate storage networks. Each controller in the SAN has a connection to both of those networks.

Proxmox has discovered the other SAN interfaces and established a session to the SAN on all four interfaces. It has also created a block device for each LUN on every available path, which suggests to me that Proxmox has automatically configured itself for multipath.

The portal IP address is not the address of a LUN, but of the service which provides the LUNs, and that service (as mir already stated) listens on multiple IP addresses and sends this information back to us.
 

I know that... The portal address is the address of the storage controller(s), and it will respond with the other IPs in the portal group...

But does this mean that iSCSI in Proxmox/Debian now multipaths without the need for multipath-tools and a bunch of devices in /etc/multipath.conf?

I was under the impression I would have to install multipath-tools, find the WWIDs of my LUNs, and then configure them as multipath devices in /etc/multipath.conf. On top of those multipath devices I would then be able to make LVM groups like I used to.
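
Roughly the kind of /etc/multipath.conf I had in mind (the WWIDs are the LUN IDs from the multipath -ll output above; the aliases and the defaults section are just examples and would need checking against HP's recommendations for the P2000):

defaults {
        user_friendly_names yes
}

blacklist {
        wwid .*
}

blacklist_exceptions {
        wwid "3600c0ff000198cd8a8089a5701000000"
        wwid "3600c0ff000198cd8cd1d9f5701000000"
}

multipaths {
        multipath {
                wwid  "3600c0ff000198cd8a8089a5701000000"
                alias san-1tb        # example alias, would show up as /dev/mapper/san-1tb
        }
        multipath {
                wwid  "3600c0ff000198cd8cd1d9f5701000000"
                alias san-500gb      # example alias
        }
}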

That's what it looks like...

In /dev/mapper I have the two LUN IDs:
3600c0ff000198cd8a8089a5701000000
3600c0ff000198cd8cd1d9f5701000000

So rather than using the GUI to create the LVM storage, which I guess creates it on top of a /dev/sdX device, I could just do pvcreate and vgcreate on these /dev/mapper paths.
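
i.e. something like this for a fresh LUN (the VG names are just examples); for the VGs I already created through the GUI I assume LVM will simply see the existing PV through the multipath device once the /dev/sdX paths are filtered out:

pvcreate /dev/mapper/3600c0ff000198cd8a8089a5701000000
vgcreate vg_san_1tb /dev/mapper/3600c0ff000198cd8a8089a5701000000

pvcreate /dev/mapper/3600c0ff000198cd8cd1d9f5701000000
vgcreate vg_san_500gb /dev/mapper/3600c0ff000198cd8cd1d9f5701000000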
 
No, multipath is not configured without multipath-tools.

This is the reason you get the messages from your first post:
LVM sees multiple devices containing the same PV but does not know that they reference the same volume.
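
Once multipath is set up, the usual way to make those duplicate-PV messages go away is to stop LVM scanning the individual path devices with a filter in /etc/lvm/lvm.conf, something along these lines (the sd[d-l] range matches the devices in this thread, but device letters can change between reboots, so a filter that only accepts /dev/mapper/* plus the local disks is often safer):

devices {
        # reject the raw iSCSI path devices so only the multipath maps are scanned
        global_filter = [ "r|^/dev/sd[d-l]|" ]
}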
 
