Multipath to SAS Array

prometheus76

Feb 22, 2024
Hi all, quick rundown of my setup and issue.

I've got a 2-node cluster connected to an HP MSA 2040 SAS array; it took me ages to get multipath set up and working across both nodes.

I haven't rebooted the PVE nodes since then, but I decided to update them today, which required a reboot.

I evacuated all VMs from node 1 and rebooted it; no issues, and all the disks show up as mpath devices - all good.

I then moved all the VMs over to node 1 so I could reboot node 2. This is when the issues started.

Node 2 now will not load the disks into multipath, with an error saying 'Device or resource busy'!

root@proxmox02:~# multipath -v2
1534.847304 | Virtual_Machines: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:0 1]
1534.848258 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Virtual_Machines (252:31) failed: Device or resource busy
1534.848544 | dm_addmap: libdm task=0 error: Success
1534.848586 | Virtual_Machines: ignoring map
1534.848964 | Data1: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:16 1 round-robin 0 1 1 8:64 1]
1534.849356 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data1 (252:31) failed: Device or resource busy
1534.849528 | dm_addmap: libdm task=0 error: Success
1534.849570 | Data1: ignoring map
1534.849898 | Data2: addmap [0 1171873792 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:112 1 round-robin 0 1 1 8:32 1]
1534.850123 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data2 (252:31) failed: Device or resource busy
1534.850202 | dm_addmap: libdm task=0 error: Success
1534.850237 | Data2: ignoring map
1534.850805 | Virtual_Machines: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:0 1]
1534.851015 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Virtual_Machines (252:31) failed: Device or resource busy
1534.851102 | dm_addmap: libdm task=0 error: Success
1534.851130 | Virtual_Machines: ignoring map
1534.851667 | Data1: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:16 1 round-robin 0 1 1 8:64 1]
1534.851876 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data1 (252:31) failed: Device or resource busy
1534.851938 | dm_addmap: libdm task=0 error: Success
1534.851962 | Data1: ignoring map
1534.852502 | Data2: addmap [0 1171873792 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:112 1 round-robin 0 1 1 8:32 1]
1534.852682 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data2 (252:31) failed: Device or resource busy
1534.852746 | dm_addmap: libdm task=0 error: Success
1534.852771 | Data2: ignoring map

root@proxmox02:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1.8T 0 disk
└─sda1 8:1 0 1.8T 0 part
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
└─Data1-vm--102--disk--0 252:0 0 1T 0 lvm
sdc 8:32 0 558.8G 0 disk
└─sdc1 8:33 0 558.8G 0 part
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
├─Virtual_Machines-vm--103--disk--0 252:6 0 4M 0 lvm
├─Virtual_Machines-vm--103--disk--1 252:7 0 100G 0 lvm
├─Virtual_Machines-vm--103--disk--2 252:8 0 4M 0 lvm
├─Virtual_Machines-vm--101--disk--0 252:9 0 4M 0 lvm
├─Virtual_Machines-vm--101--disk--1 252:10 0 4M 0 lvm
├─Virtual_Machines-vm--101--disk--2 252:11 0 100G 0 lvm
├─Virtual_Machines-vm--106--disk--0 252:12 0 32G 0 lvm
├─Virtual_Machines-vm--102--disk--0 252:13 0 4M 0 lvm
├─Virtual_Machines-vm--102--disk--1 252:14 0 4M 0 lvm
├─Virtual_Machines-vm--102--disk--2 252:15 0 100G 0 lvm
├─Virtual_Machines-vm--105--disk--0 252:16 0 4M 0 lvm
├─Virtual_Machines-vm--105--disk--1 252:17 0 4M 0 lvm
├─Virtual_Machines-vm--105--disk--2 252:18 0 100G 0 lvm
├─Virtual_Machines-vm--107--disk--0 252:19 0 4M 0 lvm
├─Virtual_Machines-vm--107--disk--1 252:20 0 4M 0 lvm
├─Virtual_Machines-vm--107--disk--2 252:21 0 80G 0 lvm
├─Virtual_Machines-vm--104--disk--0 252:22 0 4M 0 lvm
├─Virtual_Machines-vm--104--disk--1 252:23 0 4M 0 lvm
├─Virtual_Machines-vm--104--disk--2 252:24 0 80G 0 lvm
├─Virtual_Machines-vm--108--disk--0 252:25 0 32G 0 lvm
├─Virtual_Machines-vm--109--disk--0 252:26 0 32G 0 lvm
└─Virtual_Machines-vm--100--disk--0 252:27 0 32G 0 lvm
sde 8:64 0 1.8T 0 disk
└─sde1 8:65 0 1.8T 0 part
sdf 8:80 0 838.2G 0 disk
└─ISOs 252:30 0 838.2G 0 mpath
sdg 8:96 0 838.2G 0 disk
└─ISOs 252:30 0 838.2G 0 mpath
sdh 8:112 0 558.8G 0 disk
└─sdh1 8:113 0 558.8G 0 part
└─Data2-vm--104--disk--0 252:5 0 300G 0 lvm
sdi 8:128 1 0B 0 disk
sdj 8:144 0 223.5G 0 disk
├─sdj1 8:145 0 1007K 0 part
├─sdj2 8:146 0 1G 0 part /boot/efi
└─sdj3 8:147 0 222.5G 0 part
├─pve-swap 252:1 0 8G 0 lvm [SWAP]
├─pve-root 252:2 0 65.6G 0 lvm /
├─pve-data_tmeta 252:3 0 1.3G 0 lvm
│ └─pve-data-tpool 252:28 0 130.2G 0 lvm
│ └─pve-data 252:29 0 130.2G 1 lvm
└─pve-data_tdata 252:4 0 130.2G 0 lvm
└─pve-data-tpool 252:28 0 130.2G 0 lvm
└─pve-data 252:29 0 130.2G 1 lvm

Any ideas where I'm going wrong here would be very much appreciated.

Thanks
 
Please format in CODE tags; I cannot see where the lines are and which entry is a parent or child.
What about multipath -ll?
 
Not really sure how to format them in code tags, you mean like this?

Code:
multipath -ll
ISOs (3600c0ff0001e398b3cc0c06501000000) dm-30 HP,MSA 2040 SAS
size=838G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:0:4 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 0:0:1:4 sdg 8:96 active ready running

This is the multipath -ll output; this drive isn't accessed by any host but is presented from the MSA SAS SAN.

Code:
NAME                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                     8:0    0   1.8T  0 disk
└─sda1                                  8:1    0   1.8T  0 part
sdb                                     8:16   0   1.8T  0 disk
└─sdb1                                  8:17   0   1.8T  0 part
  └─Data1-vm--102--disk--0            252:0    0     1T  0 lvm
sdc                                     8:32   0 558.8G  0 disk
└─sdc1                                  8:33   0 558.8G  0 part
sdd                                     8:48   0   1.8T  0 disk
└─sdd1                                  8:49   0   1.8T  0 part
  ├─Virtual_Machines-vm--103--disk--0 252:6    0     4M  0 lvm
  ├─Virtual_Machines-vm--103--disk--1 252:7    0   100G  0 lvm
  ├─Virtual_Machines-vm--103--disk--2 252:8    0     4M  0 lvm
  ├─Virtual_Machines-vm--101--disk--0 252:9    0     4M  0 lvm
  ├─Virtual_Machines-vm--101--disk--1 252:10   0     4M  0 lvm
  ├─Virtual_Machines-vm--101--disk--2 252:11   0   100G  0 lvm
  ├─Virtual_Machines-vm--106--disk--0 252:12   0    32G  0 lvm
  ├─Virtual_Machines-vm--102--disk--0 252:13   0     4M  0 lvm
  ├─Virtual_Machines-vm--102--disk--1 252:14   0     4M  0 lvm
  ├─Virtual_Machines-vm--102--disk--2 252:15   0   100G  0 lvm
  ├─Virtual_Machines-vm--105--disk--0 252:16   0     4M  0 lvm
  ├─Virtual_Machines-vm--105--disk--1 252:17   0     4M  0 lvm
  ├─Virtual_Machines-vm--105--disk--2 252:18   0   100G  0 lvm
  ├─Virtual_Machines-vm--107--disk--0 252:19   0     4M  0 lvm
  ├─Virtual_Machines-vm--107--disk--1 252:20   0     4M  0 lvm
  ├─Virtual_Machines-vm--107--disk--2 252:21   0    80G  0 lvm
  ├─Virtual_Machines-vm--104--disk--0 252:22   0     4M  0 lvm
  ├─Virtual_Machines-vm--104--disk--1 252:23   0     4M  0 lvm
  ├─Virtual_Machines-vm--104--disk--2 252:24   0    80G  0 lvm
  ├─Virtual_Machines-vm--108--disk--0 252:25   0    32G  0 lvm
  ├─Virtual_Machines-vm--109--disk--0 252:26   0    32G  0 lvm
  └─Virtual_Machines-vm--100--disk--0 252:27   0    32G  0 lvm
sde                                     8:64   0   1.8T  0 disk
└─sde1                                  8:65   0   1.8T  0 part
sdf                                     8:80   0 838.2G  0 disk
└─ISOs                                252:30   0 838.2G  0 mpath
sdg                                     8:96   0 838.2G  0 disk
└─ISOs                                252:30   0 838.2G  0 mpath
sdh                                     8:112  0 558.8G  0 disk
└─sdh1                                  8:113  0 558.8G  0 part
  └─Data2-vm--104--disk--0            252:5    0   300G  0 lvm
sdi                                     8:128  1     0B  0 disk
sdj                                     8:144  0 223.5G  0 disk
├─sdj1                                  8:145  0  1007K  0 part
├─sdj2                                  8:146  0     1G  0 part  /boot/efi
└─sdj3                                  8:147  0 222.5G  0 part
  ├─pve-swap                          252:1    0     8G  0 lvm   [SWAP]
  ├─pve-root                          252:2    0  65.6G  0 lvm   /
  ├─pve-data_tmeta                    252:3    0   1.3G  0 lvm
  │ └─pve-data-tpool                  252:28   0 130.2G  0 lvm
  │   └─pve-data                      252:29   0 130.2G  1 lvm
  └─pve-data_tdata                    252:4    0 130.2G  0 lvm
    └─pve-data-tpool                  252:28   0 130.2G  0 lvm
      └─pve-data                      252:29   0 130.2G  1 lvm

This is the multipath -v2 output:

Code:
7104.651090 | Virtual_Machines: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:0 1]
7104.653183 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Virtual_Machines (252:31) failed: Device or resource busy
7104.653466 | dm_addmap: libdm task=0 error: Success
7104.653502 | Virtual_Machines: ignoring map
7104.653838 | Data1: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:16 1 round-robin 0 1 1 8:64 1]
7104.655914 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data1 (252:31) failed: Device or resource busy
7104.656386 | dm_addmap: libdm task=0 error: Success
7104.656425 | Data1: ignoring map
7104.656760 | Data2: addmap [0 1171873792 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:112 1 round-robin 0 1 1 8:32 1]
7104.658567 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data2 (252:31) failed: Device or resource busy
7104.658777 | dm_addmap: libdm task=0 error: Success
7104.658804 | Data2: ignoring map
7104.659375 | Virtual_Machines: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:0 1]
7104.661358 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Virtual_Machines (252:31) failed: Device or resource busy
7104.661515 | dm_addmap: libdm task=0 error: Success
7104.661542 | Virtual_Machines: ignoring map
7104.662089 | Data1: addmap [0 3906248704 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:16 1 round-robin 0 1 1 8:64 1]
7104.663949 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data1 (252:31) failed: Device or resource busy
7104.664138 | dm_addmap: libdm task=0 error: Success
7104.664164 | Data1: ignoring map
7104.664715 | Data2: addmap [0 1171873792 multipath 1 queue_if_no_path 1 alua 2 1 round-robin 0 1 1 8:112 1 round-robin 0 1 1 8:32 1]
7104.666625 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on Data2 (252:31) failed: Device or resource busy
7104.666899 | dm_addmap: libdm task=0 error: Success
7104.666994 | Data2: ignoring map
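
(For completeness, a generic way to see what is currently holding the underlying sdX paths busy is to look at their holders in sysfs and at the open device-mapper maps - these are just standard commands, and the device name is only an example taken from the lsblk above:)

Code:
# example only: sdd1 is one of the paths that multipath fails to claim above
ls /sys/block/sdd/sdd1/holders/
# list all device-mapper devices together with their open counts
dmsetup info -c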

If you need any more please let me know.

Thanks
 
As a bit of an update, it appears that node 2 will not multipath a disk if there is any data on it; that is why it only maps the one ISOs LUN, as it's empty.

I removed the mappings from the SAN, rebooted, then added them back and then ran
Code:
rescan-scsi-bus.sh
and it mapped straight away as a multipath device. I did this for all drives and it looked hopeful.
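
(For anyone following along: as far as I know rescan-scsi-bus.sh comes from the sg3-utils package on Debian/PVE, so the rescan step was roughly this:)

Code:
apt install sg3-utils     # provides rescan-scsi-bus.sh, to the best of my knowledge
rescan-scsi-bus.sh -a     # scan all SCSI hosts for new/changed devices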

Rebooted the node and it was all back to the same error.

This is getting very frustrating. lol
 
Hi,

I have almost the same problem.
After reinstalling my node I see the same errors in dmesg.
In the `multipath -ll` output no resources are found.

Regards,
p.
 
What does multipath -v2 show?

Also I think you can run multipath -d and it will show what it can multipath.

C
 
Not really sure how to format them in code tags, you mean like this?
Yes.

This is the multipath -ll output; this drive isn't accessed by any host but is presented from the MSA SAS SAN.
and it is present on all nodes? What did you try next?

Normal setup is:
  • present luns to all nodes
  • configure multipath properly
  • all nodes see the disk in /dev/mapper/<multipath-name>
  • create an LVM physical volume (PV) on the LUN (see the command sketch after this list)
  • add to existing or create new volume group
  • configure in PVE
  • use as a virtual disk for VMs or containers
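
A rough sketch of those steps on the command line - the device, VG and storage names here are placeholders, not taken from your setup:

Code:
# create an LVM physical volume on the multipath device
pvcreate /dev/mapper/mpatha
# create a volume group on top of it
vgcreate shared_vg /dev/mapper/mpatha
# add it to PVE as a shared LVM storage
pvesm add lvm shared-lvm --vgname shared_vg --shared 1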
 
Thanks for your reply.

I followed a guide I found on GitHub and did it all on the first node and it worked fine; I installed multipath-tools on the second node and copied the multipath.conf file over and it all worked. The issue was that when I rebooted the second node it lost all the multipaths with the errors above.

If I unmap the LUNs from the SAN to the second node, reboot, then re-map them from the SAN, it all works and they get picked up as multipath devices, but as soon as I reboot the node it is back to not being able to multipath them and gives the errors above.

Is there something special I need to add to the multipath.conf file?

Thanks
 
If I unmap the LUNs from the SAN to the second node, reboot, then re-map them from the SAN, it all works and they get picked up as multipath devices, but as soon as I reboot the node it is back to not being able to multipath them and gives the errors above.

Is there something special I need to add to the multipath.conf file?
No, there should be nothing different. I would back out the Proxmox config and remove multipath. Start from basics: do the disks, when zoned, appear on both nodes after configuration and after rebooting each node? Do they stay put after rebooting the SAS controllers?
Once you've proven good basic connectivity, add multipath (you can use lsscsi and lsblk, in addition to lspci).
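For example (generic commands, nothing specific to your nodes):

Code:
lsscsi -s                      # SCSI devices with sizes, one line per path
lsblk -o NAME,SIZE,WWN,TRAN    # block devices with their WWN and transport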

It's not impossible that you have a hardware or cable issue on node 2. Try to configure only node 2; don't connect node 1.
Again, when everything works, your multipath config and pretty much everything else up to PVE should be identical between nodes. The mpathX device number could be different, but the WWID/UUID should be the same.

Best of luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I followed a guide I found on GitHub and did it all on the first node and it worked fine; I installed multipath-tools on the second node and copied the multipath.conf file over and it all worked.
Also, instead of following a guide on GitHub, find your storage vendor's official document on SAS/multipath connectivity for your specific device on Linux (Ubuntu or Debian, if they make a distinction), or generic Linux if they don't.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
What does multipath -v2 show?

Also I think you can run multipath -d and it will show what it can multipath.

C
After executing multipath -v2 I get something like this (this is only part of it):
Code:
74608.920373 | FC_STORAGE_04: addmap [0 21474836480 multipath 3 pg_init_retries 50 queue_if_no_path 1 rdac 2 1 round-robin 0 2 1 66:16 1 67:0 1 round-robin 0 2 1 8:32 1 66:144 1]
74608.920684 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on FC_STORAGE_04 (252:440) failed: Device or resource busy
74608.920735 | dm_addmap: libdm task=0 error: Success
74608.920760 | FC_STORAGE_04: ignoring map
74608.921501 | FC_STORAGE_03: addmap [0 21474836480 multipath 3 pg_init_retries 50 queue_if_no_path 1 rdac 2 1 round-robin 0 2 1 66:32 1 67:16 1 round-robin 0 2 1 8:48 1 66:160 1]
74608.921811 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on FC_STORAGE_03 (252:440) failed: Device or resource busy
74608.921861 | dm_addmap: libdm task=0 error: Success
74608.921886 | FC_STORAGE_03: ignoring map
74608.922588 | FC_STORAGE_02: addmap [0 21474836480 multipath 3 pg_init_retries 50 queue_if_no_path 1 rdac 2 1 round-robin 0 2 1 66:48 1 67:32 1 round-robin 0 2 1 8:64 1 66:176 1]
74608.922899 | libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on FC_STORAGE_02 (252:440) failed: Device or resource busy
74608.922948 | dm_addmap: libdm task=0 error: Success
74608.922972 | FC_STORAGE_02: ignoring map

and this repeats for all LUNs.

and this is multipath -d:
Code:
: FC_STORAGE_04 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) undef STORAGE,YYYY
size=10T features='3 pg_init_retries 50 queue_if_no_path' hwhandler='1 rdac' wp=undef
|-+- policy='round-robin 0' prio=50 status=undef
| |- 14:0:3:0 sdah 66:16  undef ready running
| `- 15:0:3:0 sdaw 67:0   undef ready running
`-+- policy='round-robin 0' prio=10 status=undef
  |- 14:0:0:0 sdc  8:32   undef ready running
  `- 15:0:0:0 sdap 66:144 undef ready running
: FC_STORAGE_03 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) undef STORAGE,YYYY
size=10T features='3 pg_init_retries 50 queue_if_no_path' hwhandler='1 rdac' wp=undef
|-+- policy='round-robin 0' prio=50 status=undef
| |- 14:0:3:1 sdai 66:32  undef ready running
| `- 15:0:3:1 sdax 67:16  undef ready running
`-+- policy='round-robin 0' prio=10 status=undef
  |- 14:0:0:1 sdd  8:48   undef ready running
  `- 15:0:0:1 sdaq 66:160 undef ready running
: FC_STORAGE_02 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) undef STORAGE,YYYY
size=10T features='3 pg_init_retries 50 queue_if_no_path' hwhandler='1 rdac' wp=undef
|-+- policy='round-robin 0' prio=50 status=undef
| |- 14:0:3:2 sdaj 66:48  undef ready running
| `- 15:0:3:2 sday 67:32  undef ready running
`-+- policy='round-robin 0' prio=10 status=undef
  |- 14:0:0:2 sde  8:64   undef ready running
  `- 15:0:0:2 sdar 66:176 undef ready running

Is it possible to mount a resource when it's already in use and mounted on other nodes - an LVM VG is created on it and VMs are running on it?
I want to add upgraded nodes to the cluster; other nodes of the cluster are running on the older Proxmox version.

Regards,
p.
 
Hi
I have the same problem as @piotrzu. I recently reported the same issue but no one helped me. I tried to resolve the problem with multiple multipath-tools versions (0.7.9-3, 0.7.5-3 and 0.9.7-4) and the result was the same failure. Maybe the problem is the multipath configuration?
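
(For reference, switching between the package versions looked roughly like this; the version string is just one of the ones listed above:)

Code:
apt list -a multipath-tools               # show the available versions
apt install multipath-tools=0.7.9-3       # install a specific version
systemctl restart multipathd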

Regards,
Tom
 
I followed a guide I found on github
Which guide, and did it include an automatic start of multipathd?
We can help you more if you share what you actually did instead of just error messages.
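A quick way to check the autostart part would be, for example:

Code:
systemctl status multipathd
systemctl enable --now multipathd    # make sure the daemon is enabled and running at boot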

Is it possible to mount a resource when it's already in use and mounted on other nodes - an LVM VG is created on it and VMs are running on it?
Technically yes, but it's dangerous: if you change metadata, it will not be automatically propagated to nodes outside the cluster. I did a hardware swap (nodes proxmox1-5) to a new cluster (nodes proxmox6-8) while running on the same storage. I just shut down the VMs on the old cluster, copied the appropriate vm.conf to the other cluster and started them up. Downtime was approx. 1 min per VM.
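
Roughly like this - the VMID and hostname are placeholders, /etc/pve/qemu-server/ is the standard location of the VM configs:

Code:
# on the old cluster
qm shutdown 101
scp /etc/pve/qemu-server/101.conf newnode:/etc/pve/qemu-server/
# on the new cluster
qm start 101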
 
Technically yes, but it's dangerous: if you change metadata, it will not be automatically propagated to nodes outside the cluster. I did a hardware swap (nodes proxmox1-5) to a new cluster (nodes proxmox6-8) while running on the same storage. I just shut down the VMs on the old cluster, copied the appropriate vm.conf to the other cluster and started them up. Downtime was approx. 1 min per VM.

Thank you,
maybe you can see a solution for the multipath issue? When I run multipath -d, I see all available resources (but all are 'undef ready running'), whereas with multipath -ll nothing is visible. Could it be caused by a newer version of multipath?
On Proxmox 6 everything worked fine, but after the update it broke.

Regards,
p.
 
Hi, maybe there is some unfortunate interaction between multipathd and LVM at play here. Some questions:
  • Is the SAN connected via iSCSI or Fibre Channel?
  • Could you post the files /etc/multipath.conf and /etc/multipath/wwids?
  • Could you post the output of the following commands (please use [CODE]/[/CODE] tags):
    Code:
    pveversion -v
    pvs
    lsblk -o +HOTPLUG,ROTA,PHY-SEC,FSTYPE,MODEL,TRAN
    lvmconfig --typeconfig full devices/multipath_component_detection
    dmsetup ls --tree
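
As a generic illustration of the kind of interaction I mean (placeholder names, just a sketch): if LVM activates LVs directly on one of the sdX path devices during boot, that path is held busy and multipath can no longer claim it. One way to check this, and to test whether releasing the VG frees the path, would be:

Code:
lvs -o lv_name,vg_name,devices    # shows which block device each LV currently sits on
vgchange -an <vgname>             # placeholder VG name: deactivate its LVs to release the paths
multipath -v2                     # multipath should now be able to create the map
vgchange -ay <vgname>             # reactivate the LVs (ideally now on top of the multipath device)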
 
Hello,

thank you for the responses.

Maybe try this config:

Code:
defaults {
    find_multipaths "on"
}
I changed that and unfortunately nothing changed.
Hi, maybe there is some unfortunate interaction between multipathd and LVM at play here. Some questions:
  • Is the SAN connected via iSCSI or Fibre Channel?
  • Could you post the files /etc/multipath.conf and /etc/multipath/wwids?
  • Could you post the output of the following commands (please use [CODE]/[/CODE] tags):
    Code:
    pveversion -v
    pvs
    lsblk -o +HOTPLUG,ROTA,PHY-SEC,FSTYPE,MODEL,TRAN
    lvmconfig --typeconfig full devices/multipath_component_detection
    dmsetup ls --tree
The SAN is connected via Fibre Channel.
I found out that after unplugging the FC cables and plugging them back in, some resources appeared, but not all.
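
(Side note: apparently a LIP/rescan can also be triggered from software instead of reseating the cables; the host number below matches the 14:... paths from the multipath -d output above:)

Code:
echo 1 > /sys/class/fc_host/host14/issue_lip      # force a loop initialization on that FC host
echo "- - -" > /sys/class/scsi_host/host14/scan   # rescan all channels/targets/LUNs on that host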

Sure,
pveversion -v:
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: not correctly installed
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2

pvs
Code:
  PV                             VG        Fmt  Attr PSize   PFree   
  /dev/mapper/FC_STORAGE_01-part1    FC_STORAGE_01 lvm2 a--  <29.07t   <4.17g
  /dev/mapper/FC_STORAGE_07-part1              lvm2 ---  <29.07t  <29.07t
  /dev/mapper/FC_STORAGE_08-part1              lvm2 ---  <29.07t  <29.07t
  /dev/mapper/FC_STORAGE_09-part1              lvm2 ---  <29.07t  <29.07t
  /dev/mapper/FC_NETAPP_01-part1 FC_NETAPP lvm2 a--  <20.00t  285.99g
  /dev/mapper/FC_NETAPP_02-part1 FC_NETAPP lvm2 a--  <10.00t       0
  /dev/mapper/FC_NETAPP_03-part1 FC_NETAPP lvm2 a--  <10.00t <276.54g
  /dev/mapper/FC_NETAPP_04-part1 FC_NETAPP lvm2 a--  <10.00t   <2.61t
  /dev/mapper/FC_NETAPP_05-part1 FC_NETAPP lvm2 a--   <9.00t    2.29t
  /dev/mapper/FC_NETAPP_06-part1 FC_NETAPP lvm2 a--   <6.00t       0
  /dev/mapper/FC_NETAPP_07-part1 FC_NETAPP lvm2 a--   <4.00t       0
  /dev/mapper/FC_XROOTD_00-part1           lvm2 ---  <29.07t  <29.07t
  /dev/mapper/FC_XROOTD_01-part1           lvm2 ---  <29.07t  <29.07t
  /dev/md0                       vgroot    lvm2 a--  447.00g    4.00m


/etc/multipath.conf

Code:
blacklist {
    device {
        product "INTEL SSDSC2KB48"
    }
}

defaults {
     polling_interval        2
         path_selector           "round-robin 0"
         path_grouping_policy    multibus
         uid_attribute           ID_SERIAL
         rr_min_io               100
         failback                immediate
         no_path_retry           queue
         user_friendly_names     yes
     find_multipaths      "no"
}

devices {
    device {
        vendor                "LSI"
        product               "INF-01-00"
        path_grouping_policy  group_by_prio
        prio                  rdac
        #getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        path_checker          rdac
        path_selector         "round-robin 0"
        hardware_handler      "1 rdac"
        failback               immediate
        features              "2 pg_init_retries 50"
        no_path_retry          30
        rr_min_io              100
    }
    device {
        vendor                "NETAPP"
        product               "INF-01-00"
        path_grouping_policy  group_by_prio
        prio                  rdac
        #getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        path_checker          rdac
        path_selector         "round-robin 0"
        hardware_handler      "1 rdac"
        failback               immediate
        features              "2 pg_init_retries 50"
        no_path_retry          30
        rr_min_io              100
    }

    device {
    vendor                         "HUAWEI"
    product                        "XSG1"
        path_grouping_policy           multibus
        failback                       immediate
        path_selector                  "round-robin 0"
        path_checker                   tur
        prio                           const
        fast_io_fail_tmo               5
        dev_loss_tmo                   30
        no_path_retry                  6
    }
      
}

multipaths {

    multipath {
                wwid    360080e5000299a38000006975b57ec3f
                alias   FC_XROOTD_01
        }
    
    multipath {
                wwid    360080e5000297d3c0000074a5b57e515
                alias   FC_XROOTD_00
        }

    multipath {
        wwid    360080e5000297d3c000007405b57e407
                alias   FC_STORAGE_00
        }
    
    multipath {
                wwid    360080e5000299a380000068d5b57eb40
        alias   FC_STORAGE_01
    }
    
    multipath {     
                wwid    360080e5000297d3c000007425b57e471
                alias   FC_STORAGE_02
        }
    
    multipath {     
                wwid    360080e5000299a380000068f5b57eb94
                alias   FC_STORAGE_03
        }
    
    multipath {     
                wwid    360080e5000297d3c000007445b57e4a8
                alias   FC_STORAGE_04
        }
    
    multipath {
                wwid    360080e5000299a38000006915b57ebc3
                alias   FC_STORAGE_05
        }
        
    multipath {
                wwid    360080e5000297d3c000007465b57e4d0
                alias   FC_STORAGE_06
        }
    
    multipath {
                wwid    360080e5000299a38000006935b57ebe6
                alias   FC_STORAGE_07
        }
        
    multipath {
                wwid    360080e5000297d3c000007485b57e4f3
                alias   FC_STORAGE_08
        }
        
    multipath {
                wwid    360080e5000299a38000006955b57ec09
                alias   FC_STORAGE_09
        }
        
    multipath {
        wwid    360080e5000432b9c000013dc64361d8e
        alias   FC_NETAPP_01
    }

    multipath {
        wwid    360080e50004327300000191164362860
        alias    FC_NETAPP_02
    }

    multipath {
                wwid    360080e50004327300000190f64362838
                alias   FC_NETAPP_03
        }

        multipath {
                wwid    360080e5000432730000018ed64362807
                alias   FC_NETAPP_04
        }

    multipath {
                wwid    360080e500043273000001991643fb57c
                alias   FC_NETAPP_05
        }


        multipath {
                wwid    360080e5000432b9c00001419643faaad
                alias   FC_NETAPP_06
        }

        multipath {
                wwid    360080e500043273000001993643fb602
                alias   FC_NETAPP_07
        }

    multipath {
        wwid    36e477271001cd55800210be500000001
        alias    Huawei_Disk_01
    }

}


/etc/multipath/wwids

Code:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360080e5000297d3c000007485b57e4f3/
/360080e5000297d3c0000074a5b57e515/
/360080e5000299a38000006935b57ebe6/
/360080e5000299a38000006975b57ec3f/
/360080e5000299a38000006955b57ec09/
/360080e5000299a380000068d5b57eb40/
/360080e5000299a380000068f5b57eb94/
/360080e5000432b9c00001419643faaad/
/360080e5000432b9c000013dc64361d8e/
/360080e50004327300000190f64362838/
/360080e50004327300000191164362860/
/360080e5000432730000018ed64362807/
/360080e500043273000001991643fb57c/
/360080e500043273000001993643fb602/
 
lsblk -o +HOTPLUG,ROTA,PHY-SEC,FSTYPE,MODEL,TRAN

Code:
NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS HOTPLUG ROTA PHY-SEC FSTYPE MODEL TRAN
sda            8:0    0 447.1G  0 disk                    0    0    4096        INTEL sata
└─sda1         8:1    0 447.1G  0 part                    0    0    4096 linux_      
  └─md0        9:0    0   447G  0 raid1                   0    0    4096 LVM2_m      
    └─vgroot-lvroot
             252:0    0   447G  0 lvm   /                 0    0    4096 ext4        
sdb            8:16   0 447.1G  0 disk                    0    0    4096        INTEL sata
└─sdb1         8:17   0 447.1G  0 part                    0    0    4096 linux_      
  └─md0        9:0    0   447G  0 raid1                   0    0    4096 LVM2_m      
    └─vgroot-lvroot
             252:0    0   447G  0 lvm   /                 0    0    4096 ext4        
sdc            8:32   0    10T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_04
             252:75   0    10T  0 mpath                   0    1     512            
  └─FC_NETAPP_04-part1
             252:77   0    10T  0 part                    0    1     512 LVM2_m      
sdd            8:48   0    10T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_03
             252:71   0    10T  0 mpath                   0    1     512            
  └─FC_NETAPP_03-part1
      
.
.

sdq           65:0    0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdq1        65:1    0  29.1T  0 part                    0    1     512 LVM2_m      
sdr           65:16   0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_01  252:1    0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_01-part1
             252:3    0  29.1T  0 part                    0    1     512 LVM2_m      
    ├─FC_STORAGE_01-vm--130--disk--0
    │        252:11   0    10T  0 lvm                     0    1     512            
    ├─FC_STORAGE_01-vm--130--disk--1
    │        252:13   0    10T  0 lvm                     0    1     512            
    └─FC_STORAGE_01-vm--130--disk--2
             252:14   0   9.1T  0 lvm                     0    1     512            
sds           65:32   0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sds1        65:33   0  29.1T  0 part                    0    1     512 LVM2_m      
sdt           65:48   0  29.1T  0 disk                    0    1     512        INF-0 fc
sdu           65:64   0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdu1        65:65   0  29.1T  0 part                    0    1     512 LVM2_m      
sdv           65:80   0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdv1        65:81   0  29.1T  0 part                    0    1     512 LVM2_m      
  ├─FC_STORAGE_05-vm--128--disk--0
  │          252:4    0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_05-vm--128--disk--1
  │          252:5    0    10T  0 lvm                     0    1     512            
  └─FC_STORAGE_05-vm--128--disk--2
             252:6    0   8.8T  0 lvm                     0    1     512            
sdw           65:96   0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdw1        65:97   0  29.1T  0 part                    0    1     512 LVM2_m      
  ├─FC_STORAGE_06-vm--128--disk--0
  │          252:59   0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_06-vm--128--disk--1
  │          252:61   0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_06-vm--128--disk--2
  │          252:62   0   8.1T  0 lvm                     0    1     512            
  └─FC_STORAGE_06-vm--142--disk--0
             252:63   0     1T  0 lvm                     0    1     512            
sdx           65:112  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_07  252:10   0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_07-part1
             252:51   0  29.1T  0 part                    0    1     512 LVM2_m      
sdy           65:128  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_08  252:79   0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_08-part1
             252:81   0  29.1T  0 part                    0    1     512 LVM2_m      
sdz           65:144  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_09  252:50   0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_09-part1
             252:54   0  29.1T  0 part                    0    1     512 LVM2_m      
sdaa          65:160  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_XROOTD_00
             252:78   0  29.1T  0 mpath                   0    1     512            
  └─FC_XROOTD_00-part1
             252:80   0  29.1T  0 part                    0    1     512 LVM2_m      
sdab          65:176  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_XROOTD_01
             252:52   0  29.1T  0 mpath                   0    1     512            
  └─FC_XROOTD_01-part1
             252:68   0  29.1T  0 part                    0    1     512 LVM2_m      
sdac          65:192  0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdac1       65:193  0  29.1T  0 part                    0    1     512 LVM2_m      
  ├─FC_STORAGE_00-vm--130--disk--0
  │          252:53   0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_00-vm--130--disk--1
  │          252:55   0    10T  0 lvm                     0    1     512            
  └─FC_STORAGE_00-vm--130--disk--2
             252:57   0   9.1T  0 lvm                     0    1     512            
sdad          65:208  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_01  252:1    0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_01-part1
             252:3    0  29.1T  0 part                    0    1     512 LVM2_m      
    ├─FC_STORAGE_01-vm--130--disk--0
    │        252:11   0    10T  0 lvm                     0    1     512            
    ├─FC_STORAGE_01-vm--130--disk--1
    │        252:13   0    10T  0 lvm                     0    1     512            
    └─FC_STORAGE_01-vm--130--disk--2
             252:14   0   9.1T  0 lvm                     0    1     512            
sdae          65:224  0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdae1       65:225  0  29.1T  0 part                    0    1     512 LVM2_m      
  ├─FC_STORAGE_02-vm--130--disk--0
  │          252:65   0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_02-vm--130--disk--1
  │          252:66   0    10T  0 lvm                     0    1     512            
  └─FC_STORAGE_02-vm--130--disk--2
             252:67   0   9.1T  0 lvm                     0    1     512            
sdaf          65:240  0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdaf1       65:241  0  29.1T  0 part                    0    1     512 LVM2_m      
  ├─FC_STORAGE_03-vm--128--disk--0
  │          252:7    0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_03-vm--128--disk--1
  │          252:8    0    10T  0 lvm                     0    1     512            
  └─FC_STORAGE_03-vm--128--disk--2
             252:9    0   9.1T  0 lvm                     0    1     512            
sdag          66:0    0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdag1       66:1    0  29.1T  0 part                    0    1     512 LVM2_m      
  ├─FC_STORAGE_04-vm--128--disk--0
  │          252:56   0    10T  0 lvm                     0    1     512            
  ├─FC_STORAGE_04-vm--128--disk--1
  │          252:58   0    10T  0 lvm                     0    1     512            
  └─FC_STORAGE_04-vm--128--disk--2
             252:60   0   9.1T  0 lvm                     0    1     512            
sdah          66:16   0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdah1       66:17   0  29.1T  0 part                    0    1     512 LVM2_m      
sdai          66:32   0  29.1T  0 disk                    0    1     512        INF-0 fc
└─sdai1       66:33   0  29.1T  0 part                    0    1     512 LVM2_m      
sdaj          66:48   0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_07  252:10   0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_07-part1
             252:51   0  29.1T  0 part                    0    1     512 LVM2_m      
sdak          66:64   0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_08  252:79   0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_08-part1
             252:81   0  29.1T  0 part                    0    1     512 LVM2_m      
sdal          66:80   0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_STORAGE_09  252:50   0  29.1T  0 mpath                   0    1     512            
  └─FC_STORAGE_09-part1
             252:54   0  29.1T  0 part                    0    1     512 LVM2_m      
sdam          66:96   0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_XROOTD_00
             252:78   0  29.1T  0 mpath                   0    1     512            
  └─FC_XROOTD_00-part1
             252:80   0  29.1T  0 part                    0    1     512 LVM2_m      
sdan          66:112  0  29.1T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_XROOTD_01
             252:52   0  29.1T  0 mpath                   0    1     512            
  └─FC_XROOTD_01-part1
             252:68   0  29.1T  0 part                    0    1     512 LVM2_m      
sdao          66:128  0    50T  0 disk                    0    0     512        XSG1  fc
└─sdao1       66:129  0    50T  0 part                    0    0     512 LVM2_m      
  ├─Huawei_Disk_01-vm--118--disk--0
  │          252:15   0    92G  0 lvm                     0    0     512            
  ├─Huawei_Disk_01-vm--123--disk--0
  │          252:48   0   200G  0 lvm                     0    0     512            
  └─Huawei_Disk_01-vm--1611--cloudinit
             252:49   0     4M  0 lvm                     0    0     512 iso966      
sdap          66:144  0    10T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_04
             252:75   0    10T  0 mpath                   0    1     512            
  └─FC_NETAPP_04-part1
             252:77   0    10T  0 part                    0    1     512 LVM2_m      
sdaq          66:160  0    10T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_03
             252:71   0    10T  0 mpath                   0    1     512            
  └─FC_NETAPP_03-part1
             252:74   0    10T  0 part                    0    1     512 LVM2_m      
sdar          66:176  0    10T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_02
             252:73   0    10T  0 mpath                   0    1     512            
  └─FC_NETAPP_02-part1
             252:76   0    10T  0 part                    0    1     512 LVM2_m      
sdas          66:192  0    20T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_01
             252:69   0    20T  0 mpath                   0    1     512            
  └─FC_NETAPP_01-part1
             252:72   0    20T  0 part                    0    1     512 LVM2_m      
sdat          66:208  0     9T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_05
.
.

└─FC_NETAPP_01
             252:69   0    20T  0 mpath                   0    1     512            
  └─FC_NETAPP_01-part1
             252:72   0    20T  0 part                    0    1     512 LVM2_m      
sdba          67:64   0     9T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_05
             252:82   0     9T  0 mpath                   0    1     512            
  └─FC_NETAPP_05-part1
             252:83   0     9T  0 part                    0    1     512 LVM2_m      
sdbb          67:80   0     6T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_06
             252:64   0     6T  0 mpath                   0    1     512            
  └─FC_NETAPP_06-part1
             252:70   0     6T  0 part                    0    1     512 LVM2_m      
sdbc          67:96   0     4T  0 disk                    0    1     512 mpath_ INF-0 fc
└─FC_NETAPP_07
             252:84   0     4T  0 mpath                   0    1     512            
  └─FC_NETAPP_07-part1
             252:85   0     4T  0 part                    0    1     512 LVM2_m      
sdbd          67:112  0    50T  0 disk                    0    0     512        XSG1  fc
└─sdbd1       67:113  0    50T  0 part                    0    0     512 LVM2_m
lvmconfig --typeconfig full devices/multipath_component_detection
Code:
multipath_component_detection=1

dmsetup ls --tree

Code:
FC_STORAGE_00-vm--xxx--disk--0 (252:53)
 └─ (65:193)
FC_STORAGE_00-vm--xxx--disk--1 (252:55)
 └─ (65:193)
FC_STORAGE_00-vm--xxx--disk--2 (252:57)
 └─ (65:193)
FC_STORAGE_01-vm--xxx--disk--0 (252:11)
 └─FC_STORAGE_01-part1 (252:3)
    └─FC_STORAGE_01 (252:1)
       ├─ (65:16)
       └─ (65:208)
FC_STORAGE_01-vm--xxx--disk--1 (252:13)
 └─FC_STORAGE_01-part1 (252:3)
    └─FC_STORAGE_01 (252:1)
       ├─ (65:16)
       └─ (65:208)
FC_STORAGE_01-vm--xxx--disk--2 (252:14)
 └─FC_STORAGE_01-part1 (252:3)
    └─FC_STORAGE_01 (252:1)
       ├─ (65:16)
       └─ (65:208)
FC_STORAGE_02-vm--xxx--disk--0 (252:65)
 └─ (65:225)
FC_STORAGE_02-vm--xxx--disk--1 (252:66)
 └─ (65:225)
FC_STORAGE_02-vm--xxx--disk--2 (252:67)
 └─ (65:225)
FC_STORAGE_03-vm--xxx--disk--0 (252:7)
 └─ (65:241)
FC_STORAGE_03-vm--xxx--disk--1 (252:8)
 └─ (65:241)
FC_STORAGE_03-vm--xxx--disk--2 (252:9)
 └─ (65:241)
.
.
 └─ (65:97)
FC_STORAGE_06-vm--xxx--disk--0 (252:63)
 └─ (65:97)
FC_STORAGE_07-part1 (252:51)
 └─FC_STORAGE_07 (252:10)
    ├─ (65:112)
    └─ (66:48)
FC_STORAGE_08-part1 (252:81)
 └─FC_STORAGE_08 (252:79)
    ├─ (66:64)
    └─ (65:128)
FC_STORAGE_09-part1 (252:54)
 └─FC_STORAGE_09 (252:50)
    ├─ (65:144)
    └─ (66:80)
FC_NETAPP_01-part1 (252:72)
 └─FC_NETAPP_01 (252:69)
    ├─ (66:192)
    ├─ (8:80)
.
.
    └─ (8:128)
FC_XROOTD_00-part1 (252:80)
 └─FC_XROOTD_00 (252:78)
    ├─ (66:96)
    └─ (65:160)
FC_XROOTD_01-part1 (252:68)
 └─FC_XROOTD_01 (252:52)
    ├─ (65:176)
    └─ (66:112)
Huawei_Disk_01-vm--xxx--disk--0 (252:15)
 └─ (66:129)
Huawei_Disk_01-vm--xxx--disk--0 (252:25)
 └─ (66:129)
Huawei_Disk_01-vm--xxx--disk--0 (252:44)
 └─ (66:129)
vgroot-lvroot (252:0)
 └─ (9:0)


I apologize for everything being so fragmented; I unfortunately had to cut part of the text because it was too long.

Regards,
p.
 
