[SOLVED] Cannot use device /dev/sdar with duplicates. (500). Cannot create LVM on top of iSCSI volume

deku

New Member
Jun 17, 2023
Hi everyone,

Proxmox Virtual Environment 7.0-8

I'm setting up a Proxmox cluster with three hosts and a QNAP TS-1273U iSCSI storage.
Everything works fine; the only problem is that I cannot create the LVM on top of the iSCSI LUN.
I get the following error message in the Proxmox GUI:

create storage failed: vgcreate prox /dev/disk/by-id/scsi-SQNAP_iSCSI_Storage_28af651f-cdd9-419d-ac72-6a49a0b128b1 error: Cannot use device /dev/sdar with duplicates. (500)



root@prox:~# lsblk -o NAME,WWN
sda 0x600605b00a18143028b8f30435bc407c
├─sda1 0x600605b00a18143028b8f30435bc407c
├─sda2 0x600605b00a18143028b8f30435bc407c
└─sda3 0x600605b00a18143028b8f30435bc407c
├─pve-swap
├─pve-root
├─pve-data_tmeta
│ └─pve-data
└─pve-data_tdata
└─pve-data
sdb 0x60080e50001bdafc00002665550101d9
sdc 0x60080e50001bd8d000000faf550a27ed
sdd 0x60080e50001bdafc00002668550b7b7e
sde 0x60080e50001bd8d000000f9854dc5bdf
sdf 0x60080e50001bdafc00002665550101d9
sdg 0x60080e50001bd8d000000faf550a27ed
sdh 0x60080e50001bdafc00002668550b7b7e
sdi 0x60080e50001bd8d000000f9854dc5bdf
sdj 0x60050763008088a7300000000000001c
sdk 0x60050763008088a73000000000000023
sdl 0x60050763008088a73000000000000022
sdm 0x60050763008088a7300000000000001c
sdn 0x60050763008088a73000000000000023
sdo 0x60050763008088a73000000000000022
sdp 0x600c0ff0003cde37da8e9b6001000000
sdq 0x600c0ff0003cde372a1ae45b01000000
sdr 0x600c0ff0003cde374c5c5e5d01000000
sds 0x600c0ff0003cde37da8e9b6001000000
sdt 0x600c0ff0003cde372a1ae45b01000000
sdu 0x600c0ff0003cde374c5c5e5d01000000
sdv 0x60080e50001bdafc00002665550101d9
sdw 0x60080e50001bd8d000000faf550a27ed
sdx 0x60080e50001bdafc00002668550b7b7e
sdy 0x60080e50001bd8d000000f9854dc5bdf
sdz 0x60080e50001bdafc00002665550101d9
sdaa 0x60080e50001bd8d000000faf550a27ed
sdab 0x60080e50001bdafc00002668550b7b7e
sdac 0x60080e50001bd8d000000f9854dc5bdf
sdad 0x60050763008088a7300000000000001c
sdae 0x60050763008088a73000000000000023
sdaf 0x60050763008088a73000000000000022
sdag 0x60050763008088a7300000000000001c
sdah 0x60050763008088a73000000000000023
sdai 0x60050763008088a73000000000000022
sdaj 0x600c0ff0003cde37da8e9b6001000000
└─sdaj1 0x600c0ff0003cde37da8e9b6001000000
sdak 0x600c0ff0003cde372a1ae45b01000000
sdal 0x600c0ff0003cde374c5c5e5d01000000
sdam 0x600c0ff0003cde37da8e9b6001000000
sdan 0x600c0ff0003cde372a1ae45b01000000
sdao 0x600c0ff0003cde374c5c5e5d01000000
sdap 0x6e843b628af651fdcdd9d419ddac72d6
sdaq 0x6e843b628af651fdcdd9d419ddac72d6
sdar 0x6e843b628af651fdcdd9d419ddac72d6
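
(For reference: /dev/sdap, /dev/sdaq and /dev/sdar all report the same WWN, so they look like three iSCSI paths to the same QNAP LUN, which is exactly what vgcreate is complaining about. A quick check along these lines, with the device names taken from the listing above, should print the same identifier three times; the -g -u -d invocation of scsi_id is the usual one on Debian/Proxmox:)

# same LUN seen over three paths -> one identical WWID expected
for dev in sdap sdaq sdar; do
    /lib/udev/scsi_id -g -u -d /dev/$dev
done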

I read that I need to configure multipath, but I have never used this tool. Is anyone able to help me with this?
 
When dealing with disks where you plan to store potentially valuable data, the best approach is to do some research and experiment a bit before committing the configuration to "production".
You can start here:
https://pve.proxmox.com/wiki/ISCSI_Multipath
and here:
https://www.qnap.com/en-au/how-to/tutorial/article/configuring-linux-iscsi-storage-with-qnap-es-nas

If you have more specific questions as you do your research, I am sure forum members will help answer those.
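
To give you a rough idea of what the wiki article boils down to: you install multipath-tools and tell multipathd to manage your LUN's WWID, so that all paths show up as a single /dev/mapper device, and you then build the volume group on that device instead of on a raw /dev/sdXX path. A minimal /etc/multipath.conf sketch could look like the one below; the WWID and alias are placeholders (the WWID shown is just your lsblk WWN with a leading "3", which is how scsi_id usually reports NAA identifiers, so verify it with "/lib/udev/scsi_id -g -u -d /dev/sdXX" on your node), and the defaults should be cross-checked against QNAP's recommendations for your model:

# /etc/multipath.conf - sketch only, values are examples
defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
blacklist {
        wwid .*
}
blacklist_exceptions {
        wwid "36e843b628af651fdcdd9d419ddac72d6"
}
multipaths {
        multipath {
                wwid  "36e843b628af651fdcdd9d419ddac72d6"
                alias qnap-lun0
        }
}

With that in place, vgcreate runs against /dev/mapper/qnap-lun0 (or whatever alias you pick) and the "duplicates" error goes away.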


Blockbridge: Ultra low latency all-NVMe shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for your reply.

I worked it out in the end. After installing and configuring multipath, I was able to add the datastore from the Proxmox GUI. The datastore is visible to all nodes, which is what I wanted (rough steps sketched below).

But thank you for your help.
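
A sketch of the steps, for anyone who finds this thread later (not a literal transcript: package and service names are as on Proxmox VE 7 / Debian Bullseye, and the WWID is the lsblk WWN from my first post with a leading "3", which is how scsi_id reports it - verify it on your own system):

apt install multipath-tools

# whitelist the LUN's WWID (this writes it to /etc/multipath/wwids)
multipath -a 36e843b628af651fdcdd9d419ddac72d6

systemctl restart multipath-tools.service

# all paths are now grouped under a single /dev/mapper/... device
multipath -ll

Once the multipath device shows up, the volume group is created on it (instead of on one of the raw /dev/sdXX paths) and the storage is added as LVM from the GUI as usual.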
 
