Proxmox with HPE 3PAR storage & LVM, LUN issue

May 9, 2025
I have three Proxmox nodes in a cluster, and I need to present the 3PAR storage to all of them.
I'm using multipath, but each LUN shows up four times on every node. Sometimes a LUN is visible on one node but not on the others. How can I sync the LUNs across the nodes automatically? Is there any way to solve this issue?

When I export a 3PAR LUN to the Proxmox nodes, it does not show up until I run echo "- - -" | tee /sys/class/scsi_host/host*/scan or reboot the node. Why is it not detected automatically?

lsblk output:
sdw 65:96 0 500G 0 disk
└─mpathi 252:28 0 500G 0 mpath
  └─mpathi-part1 252:29 0 500G 0 part
    └─mpathj_testingdisk_vg-vm--103--disk--0 252:15 0 100G 0 lvm
sdx 65:112 0 500G 0 disk
└─mpathi 252:28 0 500G 0 mpath
  └─mpathi-part1 252:29 0 500G 0 part
    └─mpathj_testingdisk_vg-vm--103--disk--0 252:15 0 100G 0 lvm
sdy 65:128 0 500G 0 disk
└─mpathi 252:28 0 500G 0 mpath
  └─mpathi-part1 252:29 0 500G 0 part
    └─mpathj_testingdisk_vg-vm--103--disk--0 252:15 0 100G 0 lvm
sdz 65:144 0 500G 0 disk
└─mpathi 252:28 0 500G 0 mpath
  └─mpathi-part1 252:29 0 500G 0 part
    └─mpathj_testingdisk_vg-vm--103--disk--0 252:15 0 100G 0 lvm

multipath -ll output:
mpathi (360002ac0000000001600a61e0001d232) dm-28 3PARdata,VV
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 2:0:0:3 sdw 65:96  active ready running
  |- 2:0:1:3 sdx 65:112 active ready running
  |- 3:0:0:3 sdy 65:128 active ready running
  `- 3:0:1:3 sdz 65:144 active ready running
 
If exporting the 3Par lun to the Proxmox nodes, it is not showing until this command is executed: echo "- - -" | tee /sys/class/scsi_host/host*/scan (or) reboot the node. Why is it not showing automatically?
There is no autodiscovery implemented, so you need to trigger the scan yourself.
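A minimal sketch of how that scan can be scripted, to be run on every node in the cluster (FC HBAs assumed, as in your setup; the issue_lip line is optional and can briefly disturb I/O on some HBAs):

# Rescan all SCSI hosts for newly exported LUNs (run on each Proxmox node)
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
# Optionally force a loop initialization on the FC ports:
# for port in /sys/class/fc_host/host*; do echo 1 > "$port/issue_lip"; done
multipath -r     # rebuild/reload the multipath maps
multipath -ll    # the new LUN should now appear as an mpathX device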

I'm using multipath now, but the LUNs are showing 4 times for all the nodes.
Yes, they show up as raw block devices once per path to the LUN on the SAN. The multipathed device in /dev/mapper is shown only once.

Please refer to https://pve.proxmox.com/wiki/Multipath
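For example, to confirm that sdw/sdx/sdy/sdz are just four paths to one LUN (a quick check, not a required step; device names are taken from the lsblk output above):

/lib/udev/scsi_id -g -u /dev/sdw   # prints the WWID of this path
/lib/udev/scsi_id -g -u /dev/sdx   # same WWID as sdw -> same LUN, different path
multipath -ll mpathi               # shows all four sdX devices grouped under one map
ls -l /dev/mapper/mpathi           # this aggregated device is the one to use for partitions/LVM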
 
Thank you for your response.

Every time I export a LUN, I rescan the nodes with `echo "- - -" | tee /sys/class/scsi_host/host*/scan`. After that, I create a partition, a Physical Volume (PV), and a Volume Group (VG). Only then can I add the storage to the cluster as shared storage.
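A rough sketch of the sync step on the other nodes after the VG has been created on one node, assuming the same multipath setup everywhere, so the new storage shows up there without a reboot:

echo "- - -" | tee /sys/class/scsi_host/host*/scan   # detect the newly exported LUN
multipath -r                                         # build the multipath map for it
pvscan --cache                                       # refresh LVM's view of the PVs
vgs                                                  # the shared VG should now be listed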

I am continuously exporting and unexporting the LUNs to test my application, which makes this process quite difficult.

Additionally, I exported an iSCSI LUN to the cluster and created a virtual machine (VM) on that storage. After testing, I removed the VM and the storage in Proxmox, but the devices are still visible on the node over SSH. They disappear only after rebooting the Proxmox node.

How can I refresh the storage table without rebooting?
Here is an image of the issue.

And one more thing we need to do is increase the LVM disk size as well, without downtime for the VMs.
 
rescan-scsi-bus.sh (included with the OS by default) looks for new block devices and updated sizes; just run it without any arguments.
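For example (the script ships with the sg3-utils package; -r removes devices whose LUN has been unexported, which also addresses the stale-device question above):

rescan-scsi-bus.sh        # scan all SCSI hosts for new LUNs
rescan-scsi-bus.sh -s     # additionally check existing devices for a changed size
rescan-scsi-bus.sh -r     # also remove devices whose LUN is no longer exported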
 
rescan-scsi-bus.sh (included with the OS by default) looks for new block devices and updated sizes; just run it without any arguments.
I tried that as well, but it is not working.
I need to increase the partition, PV, and VG.
I'm using FC ports for the storage, which is why I'm using multipath.

root@SCIPLProxServ1:~# cat /etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute "ID_SERIAL"
    features "0"
    path_checker directio
    rr_min_io 100
    rr_min_io_rq 1
    flush_on_last_del yes
    max_fds 8192
    force_sync yes
}

devices {
    device {
        vendor "3PARdata"
        product "HPE"
        path_grouping_policy multibus
    }
}

blacklist {
    # Exclude devices by World Wide Identifier (WWID)
    wwid 600508b1001cec98
    # Exclude devices by device node name patterns
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^dcssblk[0-9]*"
}



How can I increase the PV and VG? I also need to know which way I should create the PV and VG (a command sketch of both options follows below):
Option 1
Using "gdisk" or "fdisk" on the "/dev/mapper/mpathX", create a partition and then create a PV & VG on "/dev/mapper/mpathX-part1", and lastly add the storage in the datacenter proxmox cluster.

Option 2
Create a PV and VG on "/dev/mapper/mpathX" itself, and finally add the storage in the datacenter of the Proxmox cluster.
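A rough command sketch of both options (mpathX, the VG name, and the storage name are placeholders):

# Option 1: partition first, then PV/VG on the partition
sgdisk -n 1:0:0 -t 1:8e00 /dev/mapper/mpathX     # or interactively with gdisk/fdisk
kpartx -a -p -part /dev/mapper/mpathX            # create the mpathX-part1 mapping
pvcreate /dev/mapper/mpathX-part1
vgcreate testingdisk_vg /dev/mapper/mpathX-part1

# Option 2: PV/VG directly on the multipath device, no partition table
pvcreate /dev/mapper/mpathX
vgcreate testingdisk_vg /dev/mapper/mpathX

# Either way, the VG is then added once as shared LVM storage for the cluster
pvesm add lvm 3par-lvm --vgname testingdisk_vg --shared 1

Option 2 leaves one step less when growing later, since there is no partition to resize.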

I need to know which is the best way, and how to expand the partition, PV, and VG.
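A sketch of an online grow, assuming the virtual volume has already been enlarged on the 3PAR side (names are placeholders; the steps are meant to be done online, without stopping the VMs):

# 1. Let every path see the new LUN size, then grow the multipath map
rescan-scsi-bus.sh -s                        # or: echo 1 > /sys/block/sdX/device/rescan per path
multipathd resize map mpathX

# 2. Only for Option 1 (PV on a partition): grow the partition first
parted /dev/mapper/mpathX resizepart 1 100%
kpartx -u -p -part /dev/mapper/mpathX

# 3. Grow the PV; the VG free space grows with it
pvresize /dev/mapper/mpathX-part1            # or /dev/mapper/mpathX for Option 2
vgs                                          # check the new free space

# 4. Individual VM disks are then grown through Proxmox, e.g.
# qm resize 103 scsi0 +50G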
 