PVE very slow to display storage

Mar 9, 2022
Dear Support,
When showing storage (e.g., in the Clone VM or Create VM dialogs), the storage list is very slow to appear. Sometimes it manages to show the storage, but more often it times out.
There is no performance problem otherwise: all VMs start and run perfectly.

[Screenshot: storage list loading slowly and timing out]

I checked the Task Viewer output (Clone VM):
Code:
INFO: starting new backup job: vzdump 109 --node pve --storage NFSONNAS --mode stop --remove 0 --compress lzo
INFO: Starting Backup of VM 109 (qemu)
INFO: Backup started at 2022-04-10 19:00:15
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: dokumenlokal-Prod
INFO: include disk 'sata0' 'LVM_DOKUMENLOKAL:vm-109-disk-0' 32G
  WARNING: VG name centos is used by VGs c58zAh-303M-eXjh-MXFZ-MdWc-osha-6DfHpc and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs ast6HD-G8kN-PNXB-qf2V-8Zle-X7qW-4JLgwO and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs bsUPc2-oSdi-zmMg-hlyN-wLpE-ReXl-p7b8Tn and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs 0d9sxJ-F0uC-JOcY-q5yC-SqRb-fq8M-8P3epp and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: Not using device /dev/sdo2 for PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB.
  WARNING: Not using device /dev/sdp2 for PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB.
  WARNING: PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB prefers device /dev/sdn2 because device was seen first.
  WARNING: PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB prefers device /dev/sdn2 because device was seen first.
  [the same 12-line block of LVM warnings repeats three more times]
INFO: pending configuration changes found (not included into backup)
INFO: creating vzdump archive '/mnt/pve/NFSONNAS/dump/vzdump-qemu-109-2022_04_10-19_00_15.vma.lzo'
INFO: starting kvm to execute backup task
  WARNING: VG name centos is used by VGs c58zAh-303M-eXjh-MXFZ-MdWc-osha-6DfHpc and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs ast6HD-G8kN-PNXB-qf2V-8Zle-X7qW-4JLgwO and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs bsUPc2-oSdi-zmMg-hlyN-wLpE-ReXl-p7b8Tn and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs 0d9sxJ-F0uC-JOcY-q5yC-SqRb-fq8M-8P3epp and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: Not using device /dev/sdo2 for PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB.
  WARNING: Not using device /dev/sdp2 for PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB.
  WARNING: PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB prefers device /dev/sdn2 because device was seen first.
  WARNING: PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB prefers device /dev/sdn2 because device was seen first.
  [the same 12-line block of LVM warnings repeats twice more]
INFO: started backup task '3eca11d0-602c-4fe1-bbba-d597800d84a5'
INFO:   3% (1.2 GiB of 32.0 GiB) in 3s, read: 416.9 MiB/s, write: 101.0 MiB/s
INFO:   8% (2.7 GiB of 32.0 GiB) in 6s, read: 488.5 MiB/s, write: 6.1 MiB/s
INFO:  10% (3.4 GiB of 32.0 GiB) in 9s, read: 241.4 MiB/s, write: 119.3 MiB/s
INFO:  12% (3.9 GiB of 32.0 GiB) in 12s, read: 190.9 MiB/s, write: 165.5 MiB/s
INFO:  13% (4.4 GiB of 32.0 GiB) in 15s, read: 168.0 MiB/s, write: 166.5 MiB/s
INFO:  15% (4.9 GiB of 32.0 GiB) in 18s, read: 164.3 MiB/s, write: 164.3 MiB/s
INFO:  19% (6.2 GiB of 32.0 GiB) in 21s, read: 429.8 MiB/s, write: 45.7 MiB/s
INFO:  24% (7.8 GiB of 32.0 GiB) in 24s, read: 547.7 MiB/s, write: 0 B/s
INFO:  30% (9.6 GiB of 32.0 GiB) in 27s, read: 631.0 MiB/s, write: 0 B/s
INFO:  33% (10.6 GiB of 32.0 GiB) in 30s, read: 335.8 MiB/s, write: 106.1 MiB/s
INFO:  34% (11.0 GiB of 32.0 GiB) in 45s, read: 30.4 MiB/s, write: 29.3 MiB/s
INFO:  35% (11.4 GiB of 32.0 GiB) in 48s, read: 119.6 MiB/s, write: 118.9 MiB/s
INFO:  36% (11.8 GiB of 32.0 GiB) in 51s, read: 129.3 MiB/s, write: 129.3 MiB/s
INFO:  37% (11.9 GiB of 32.0 GiB) in 54s, read: 50.8 MiB/s, write: 50.8 MiB/s
INFO:  39% (12.6 GiB of 32.0 GiB) in 57s, read: 228.3 MiB/s, write: 49.8 MiB/s
INFO:  44% (14.2 GiB of 32.0 GiB) in 1m, read: 537.0 MiB/s, write: 0 B/s
INFO:  48% (15.6 GiB of 32.0 GiB) in 1m 3s, read: 492.7 MiB/s, write: 0 B/s
INFO:  54% (17.4 GiB of 32.0 GiB) in 1m 6s, read: 608.0 MiB/s, write: 0 B/s
INFO:  56% (18.0 GiB of 32.0 GiB) in 1m 9s, read: 211.2 MiB/s, write: 158.9 MiB/s
INFO:  58% (18.6 GiB of 32.0 GiB) in 1m 12s, read: 192.1 MiB/s, write: 192.1 MiB/s
INFO:  59% (19.1 GiB of 32.0 GiB) in 1m 15s, read: 179.0 MiB/s, write: 143.8 MiB/s
INFO:  64% (20.6 GiB of 32.0 GiB) in 1m 18s, read: 516.3 MiB/s, write: 0 B/s
INFO:  68% (22.1 GiB of 32.0 GiB) in 1m 21s, read: 505.3 MiB/s, write: 0 B/s
INFO:  73% (23.6 GiB of 32.0 GiB) in 1m 24s, read: 521.0 MiB/s, write: 0 B/s
INFO:  77% (24.9 GiB of 32.0 GiB) in 1m 27s, read: 442.4 MiB/s, write: 48.8 MiB/s
INFO:  79% (25.5 GiB of 32.0 GiB) in 1m 30s, read: 190.9 MiB/s, write: 184.5 MiB/s
INFO:  81% (26.0 GiB of 32.0 GiB) in 1m 33s, read: 172.8 MiB/s, write: 172.7 MiB/s
INFO:  82% (26.5 GiB of 32.0 GiB) in 1m 46s, read: 41.2 MiB/s, write: 18.6 MiB/s
INFO:  87% (27.9 GiB of 32.0 GiB) in 1m 49s, read: 493.7 MiB/s, write: 0 B/s
INFO:  92% (29.6 GiB of 32.0 GiB) in 1m 52s, read: 568.0 MiB/s, write: 0 B/s
INFO:  97% (31.4 GiB of 32.0 GiB) in 1m 55s, read: 597.7 MiB/s, write: 0 B/s
INFO: 100% (32.0 GiB of 32.0 GiB) in 1m 57s, read: 332.5 MiB/s, write: 0 B/s
INFO: backup is sparse: 25.11 GiB (78%) total zero data
INFO: transferred 32.00 GiB in 117 seconds (280.1 MiB/s)
INFO: stopping kvm after backup task
  WARNING: VG name centos is used by VGs c58zAh-303M-eXjh-MXFZ-MdWc-osha-6DfHpc and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs ast6HD-G8kN-PNXB-qf2V-8Zle-X7qW-4JLgwO and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs bsUPc2-oSdi-zmMg-hlyN-wLpE-ReXl-p7b8Tn and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name centos is used by VGs 0d9sxJ-F0uC-JOcY-q5yC-SqRb-fq8M-8P3epp and ozs095-LgHW-0urh-iEmQ-R6pj-BMVQ-Dhhxot.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: Not using device /dev/sdo2 for PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB.
  WARNING: Not using device /dev/sdp2 for PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB.
  WARNING: PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB prefers device /dev/sdn2 because device was seen first.
  WARNING: PV 7gOuyw-ZxcU-02A3-r39M-fDoy-66uT-787enB prefers device /dev/sdn2 because device was seen first.
INFO: archive file size: 3.92GB
INFO: Finished Backup of VM 109 (00:03:02)
INFO: Backup finished at 2022-04-10 19:03:17
INFO: Backup job finished successfully
TASK OK
[Attachment: sample_TASK Viewer_ Start VM.png]
For your information, the VM runs normally.

[Attachment: sample LVM list.png]
I don't remember when the 'centos' VG was added; I'm sure none of the VMs use a 'centos' VG. I also don't know whether the slow storage display is related to this 'centos' VG.

What do I need to check and do?
Thank you
 
It would be helpful to provide:
- cat /etc/pve/storage.cfg
- lsblk
- explain what storage you are using

An educated guess is that you have an issue with storage that is reachable via multiple paths but is not configured for multipath: https://pve.proxmox.com/wiki/ISCSI_Multipath


Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

iscsi: SRS_2022_LUN
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.saris2022.59e5a8
        content images
        nodes pve

iscsi: satria
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.satria.59e5a8
        content images

iscsi: PSRMALANG
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.psrmalang.59e5a8
        content images
        nodes pve

lvm: LVM_PSRMLG
        vgname VGPSRMALANG
        base PSRMALANG:0.0.0.scsi-SQNAP_iSCSI_Storage_c271ae28-0a9d-40ed-8643-a74fd8e4c598
        content rootdir,images
        nodes pve
        shared 1

iscsi: bobypwa
        portal 10.10.10.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.bobypwa.59e5a8
        content images
        nodes pve

iscsi: PSR140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.psr140.59e5a8
        content images
        nodes pve

lvm: LVM_PSR140
        vgname VGPSR140
        base PSR140:0.0.0.scsi-SQNAP_iSCSI_Storage_4dff9c73-5d3a-428a-81c9-9157bc9d30b8
        content images,rootdir
        nodes pve
        shared 0

iscsi: MLG140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.mlg140.59e5a8
        content images

lvm: LVM_MLG140
        vgname VGMLG140
        base MLG140:0.0.0.scsi-SQNAP_iSCSI_Storage_c1844fb2-f7b9-40d3-9cb7-39598725f77a
        content rootdir,images
        shared 0

iscsi: DOKUMENLOKAL
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.dokumenlokal.59e5a8
        content images

lvm: LVM_DOKUMENLOKAL
        vgname VGDOKUMENLOKAL
        base DOKUMENLOKAL:0.0.0.scsi-SQNAP_iSCSI_Storage_346fbc2e-da9d-4767-ad24-31b1b8f6e3e5
        content images,rootdir
        shared 0

iscsi: SKB140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.skb140.59e5a8
        content images

lvm: LVM_SKB140
        vgname VGSKB140
        base SKB140:0.0.0.scsi-SQNAP_iSCSI_Storage_7b21c7b4-5d26-45ec-872f-067551c06da8
        content images,rootdir
        shared 0

iscsi: PWA140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.pwa140.59e5a8
        content images

lvm: LVM_PWA140
        vgname VGPWA
        base PWA140:0.0.0.scsi-SQNAP_iSCSI_Storage_b3a3306a-8540-48ef-96a4-8adc534c44c1
        content images,rootdir
        shared 0

iscsi: ERP250
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.erp250.59e5a8
        content images

lvm: LVM_ERP250
        vgname VGERP250
        base ERP250:0.0.0.scsi-SQNAP_iSCSI_Storage_f76d67de-e9ec-4d0d-a3c4-574335df5436
        content images,rootdir
        shared 0

iscsi: JKT3PROD10GBPS
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.jktprod10gbps.59e5a8
        content images

iscsi: PML140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.pml140.59e5a8
        content images
        nodes pve

lvm: LVM_PML140
        vgname VGPML140
        base PML140:0.0.0.scsi-SQNAP_iSCSI_Storage_c30cead1-e437-454e-bb11-e1899b04205a
        content rootdir,images
        shared 0

iscsi: PWO140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.pwo140.59e5a8
        content images

lvm: LVM_PWO140
        vgname VGPWO140
        base PWO140:0.0.0.scsi-SQNAP_iSCSI_Storage_da680d85-a1d5-4104-a554-a191ca5b36ed
        content images,rootdir
        shared 0

iscsi: TSK140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.tsk140.59e5a8
        content images

lvm: LVM_TSK140
        vgname VGTSK140
        base TSK140:0.0.0.scsi-SQNAP_iSCSI_Storage_d0805437-1cd7-45c0-8ea8-2b30f134ae96
        content rootdir,images
        shared 0

iscsi: BOBYPWA140
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.bobypwa140.59e5a8
        content images

lvm: LVM_BOBYPWA140
        vgname VGBOBYPWA140
        base BOBYPWA140:0.0.0.scsi-SQNAP_iSCSI_Storage_09658f65-cfbd-4354-b9c2-220f4da984fe
        content rootdir,images
        shared 0

nfs: NFSONNAS
        export /NFS_For_Gen8
        path /mnt/pve/NFSONNAS
        server 10.41.56.1
        content backup,images
        prune-backups keep-all=1

Code:
root@pve:~# lsblk
NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                 8:0    0 558.7G  0 disk
├─sda1                              8:1    0  1007K  0 part
├─sda2                              8:2    0   512M  0 part
└─sda3                              8:3    0 558.2G  0 part
  ├─pve-swap                      253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                      253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                253:2    0   4.4G  0 lvm
  │ └─pve-data-tpool              253:4    0 429.5G  0 lvm
  │   ├─pve-data                  253:5    0 429.5G  1 lvm
  │   ├─pve-vm--101--disk--0      253:6    0   130G  0 lvm
  │   ├─pve-vm--105--disk--0      253:7    0   170G  0 lvm
  │   ├─pve-vm--102--disk--0      253:8    0   130G  0 lvm
  │   └─pve-base--104--disk--0    253:12   0   130G  1 lvm
  └─pve-data_tdata                253:3    0 429.5G  0 lvm
    └─pve-data-tpool              253:4    0 429.5G  0 lvm
      ├─pve-data                  253:5    0 429.5G  1 lvm
      ├─pve-vm--101--disk--0      253:6    0   130G  0 lvm
      ├─pve-vm--105--disk--0      253:7    0   170G  0 lvm
      ├─pve-vm--102--disk--0      253:8    0   130G  0 lvm
      └─pve-base--104--disk--0    253:12   0   130G  1 lvm
sdb                                 8:16   0   131G  0 disk
sdc                                 8:32   0   201G  0 disk
sdd                                 8:48   0    59G  0 disk
├─sdd1                              8:49   0     1G  0 part
└─sdd2                              8:50   0    58G  0 part
sde                                 8:64   0    59G  0 disk
├─sde1                              8:65   0     1G  0 part
└─sde2                              8:66   0    58G  0 part
sdf                                 8:80   0    59G  0 disk
├─sdf1                              8:81   0     1G  0 part
└─sdf2                              8:82   0    58G  0 part
sdg                                 8:96   0    59G  0 disk
├─sdg1                              8:97   0     1G  0 part
└─sdg2                              8:98   0    58G  0 part
sdh                                 8:112  0   101G  0 disk
├─sdh1                              8:113  0   100M  0 part
└─sdh2                              8:114  0 100.9G  0 part
sdi                                 8:128  0   110G  0 disk
└─sdi1                              8:129  0   110G  0 part
sdj                                 8:144  0   101G  0 disk
├─sdj1                              8:145  0   100M  0 part
└─sdj2                              8:146  0 100.9G  0 part
sdk                                 8:160  0   110G  0 disk
└─sdk1                              8:161  0   110G  0 part
sdl                                 8:176  0   101G  0 disk
├─sdl1                              8:177  0   100M  0 part
└─sdl2                              8:178  0 100.9G  0 part
sdm                                 8:192  0   110G  0 disk
└─sdm1                              8:193  0   110G  0 part
sdn                                 8:208  0    16G  0 disk
├─sdn1                              8:209  0     1G  0 part
└─sdn2                              8:210  0    15G  0 part
sdo                                 8:224  0    16G  0 disk
├─sdo1                              8:225  0     1G  0 part
└─sdo2                              8:226  0    15G  0 part
sdp                                 8:240  0    16G  0 disk
├─sdp1                              8:241  0     1G  0 part
└─sdp2                              8:242  0    15G  0 part
sdq                                65:0    0   140G  0 disk
└─VGPSR140-vm--106--disk--0       253:10   0   130G  0 lvm
sdr                                65:16   0   140G  0 disk
└─VGMLG140-vm--107--disk--0       253:11   0   130G  0 lvm
sds                                65:32   0    33G  0 disk
└─VGDOKUMENLOKAL-vm--109--disk--0 253:13   0    32G  0 lvm
sdt                                65:48   0   140G  0 disk
└─VGSKB140-vm--112--disk--0       253:9    0   130G  0 lvm
sdu                                65:64   0   140G  0 disk
└─VGPWA-vm--103--disk--0          253:14   0   130G  0 lvm
sdv                                65:80   0   250G  0 disk
├─VGERP250-vm--110--disk--0       253:15   0   101G  0 lvm
└─VGERP250-vm--110--disk--1       253:16   0   110G  0 lvm
sdw                                65:96   0   101G  0 disk
├─sdw1                             65:97   0   100M  0 part
└─sdw2                             65:98   0 100.9G  0 part
sdx                                65:112  0   110G  0 disk
└─sdx1                             65:113  0   110G  0 part
sdy                                65:128  0   140G  0 disk
sdz                                65:144  0   140G  0 disk
└─VGPWO140-vm--113--disk--0       253:18   0   130G  0 lvm
sdaa                               65:160  0   140G  0 disk
└─VGTSK140-vm--114--disk--0       253:17   0   130G  0 lvm
sdab                               65:176  0   140G  0 disk


I'm using a QNAP NAS with the iSCSI protocol.
Multipath is disabled everywhere in the QNAP configuration...
 
So you seem to be suffering from a few things:
- a larger than average number of iSCSI targets; each one is probed during listing, and all of that pushes the overall operation past the timeout value.
- standard udev handling of persistent-naming rules, which is not suited to a hypervisor environment:
-- you have multiple VMs that use standard volume group names; these are seen by the hypervisor OS, which is not happy about it.
-- if you indeed have no multipath, then you likely cloned a VM and now also have clashing UUIDs on the guest LVM; the hypervisor is not happy about that either.
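A quick way to see which VG names collide is to look for repeated names in the `vgs` output. A minimal sketch, run here against a hypothetical sample instead of a live host:

```shell
# Hypothetical sample of `vgs --noheadings -o vg_name` output; on a real
# host, replace the here-string with the command itself.
vgs_sample='centos
centos
centos
centos
pve
VGDOKUMENLOKAL'
# Names printed here are used by more than one VG, i.e. exactly the
# clashes LVM is warning about in the task log.
echo "$vgs_sample" | sort | uniq -d
```

With this sample the only duplicated name is `centos`, matching the warnings in the backup log above.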

In fact, this situation is something we addressed in our driver recently:
https://kb.blockbridge.com/guide/proxmox/#blockbridge-plugin-version-history


Short of carefully creating your own udev rules to ignore guest disks, you can try to de-duplicate your UUIDs and volume group names. It may also make sense to reduce the number of iSCSI portals if operationally possible, or even to switch to a static iSCSI configuration: in most cases you don't change the iSCSI infrastructure daily, so keeping it dynamic doesn't provide much benefit.
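For illustration only, a udev rule to hide guest-owned iSCSI LUNs from persistent-naming consumers could look something like this; the file name and the vendor match are assumptions, so verify the attribute values with `udevadm info` on your devices first:

```
# /etc/udev/rules.d/99-hide-guest-luns.rules  (hypothetical file name)
# Mark QNAP iSCSI LUNs as not-ready so LVM autoactivation and other
# udev consumers skip them on the hypervisor.
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="QNAP", ENV{SYSTEMD_READY}="0"
```

Note this would hide every QNAP LUN, including the ones PVE itself uses; a real rule needs a narrower match.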

You should also check the output of: lsblk --nodeps -o name,serial
Check sdo, sdp, and sdn, and make sure they don't have the same serial.
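To spot duplicate serials quickly, you can filter that `lsblk` output for values that occur more than once. A sketch against a hypothetical sample (on your host, pipe the live command instead of the here-string):

```shell
# Hypothetical sample of `lsblk --nodeps -o name,serial` output.
lsblk_sample='NAME SERIAL
sdn  QNAP-1234
sdo  QNAP-1234
sdp  QNAP-1234
sdq  QNAP-9999'
# A serial listed by more than one device usually means the same LUN is
# visible over several paths, i.e. multipath is needed or misconfigured.
echo "$lsblk_sample" | tail -n +2 | awk '{print $2}' | sort | uniq -d
```

Any serial this prints is shared by at least two devices; in the sample, sdn/sdo/sdp share one.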


 
"centos" is a default vg name used by centos installer. It seems that you are using LUNs directly as HDDs of your VMs.
You need to write a proper filter/global_filter in /etc/lvm.conf on your pve host to avoid LVM scan on this LUNs.
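As a sketch of what such a filter could look like (the device patterns are assumptions and must be matched to your actual layout, ideally using stable /dev/disk/by-id paths):

```
# /etc/lvm/lvm.conf on the PVE host -- illustrative only.
devices {
    # Accept the local boot disk, add one "a|...|" entry for each LUN
    # that really backs a PVE LVM storage, then reject everything else
    # so guest-internal VGs (e.g. "centos") are never scanned.
    global_filter = [ "a|^/dev/sda3$|", "r|.*|" ]
}
```

After editing, `vgscan` (or a reboot) should show the duplicate-VG warnings gone; be careful not to filter out a device that a PVE LVM storage depends on.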

PS: a LUN-per-VM approach is kind of strange; we use the following iSCSI/LVM setup:
one iSCSI target -> a few iSCSI LUNs with LVM volume groups configured as datastores -> many LVM logical volumes and VMs
Code:
root@pve01:~# vgs
  VG               #PV #LV #SN Attr   VSize    VFree  
  iscsilun01-lvg   1  86   0 wz--n-  <13.45t    8.97t
  iscsilun02-lvg   1  30   0 wz--n-   13.68t   10.31t
root@pve01:~# lvs
  LV             VG               Attr       LSize
  vm-201-disk-0  iscsilun01-lvg   -wi------- 128.00g                                                  
  vm-202-disk-0  iscsilun01-lvg   -wi-------   8.00g                                                  
  vm-204-disk-0  iscsilun01-lvg   -wi-------  32.00g                                                  
  vm-207-disk-0  iscsilun01-lvg   -wi-------  40.00g                                                  
...
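In storage.cfg terms, that layout is one iscsi entry plus one lvm entry per datastore. A sketch with illustrative names (the target and base values here are placeholders, not real identifiers):

```
iscsi: qnap-shared
        portal 10.41.56.1
        target iqn.2004-04.com.qnap:example:iscsi.shared.lun
        content images

lvm: iscsilun01-lvg
        vgname iscsilun01-lvg
        base qnap-shared:0.0.0.scsi-EXAMPLE-LUN-ID
        content images,rootdir
        shared 1
```

Compared to the per-VM config above, this keeps the number of targets PVE must probe small, which directly helps the storage-listing timeout.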
 
PS: a LUN-per-VM approach is kind of strange
If you are able to dedicate a LUN to each VM disk, that provides a lot of flexibility in size/snapshot/replication/migration management. At scale this approach is hard to use when the storage is managed manually; however, it is not impossible, as almost anything can be semi-automated.

At Blockbridge we do a virtual disk (LUN) per PVE disk, however that's tightly controlled and managed by a PVE native storage plugin.


 
