PVE 8.x Cluster Setup of shared LVM/LV with MSA2060

zahmad

New Member
Jun 11, 2025
I have three nodes (PVENODE01, PVENODE02, PVENODE03). I have successfully configured multipath, and both the VG and the LVs are available on all nodes.

Is it mandatory to configure lvmlockd and dlm for shared LVM?

I didn't find anything about this in the administration guide.

I am stuck: my lvmlockd service is not stable.

root@PVENODE01:~# systemctl status lvmlockd.service
○ lvmlockd.service - LVM Lock Daemon
Loaded: loaded (/etc/systemd/system/lvmlockd.service; enabled; preset: enabled)
Active: inactive (dead) since Thu 2025-08-07 15:29:17 PKT; 13min ago
Duration: 2ms
Main PID: 2971 (code=exited, status=0/SUCCESS)
CPU: 3ms

Aug 07 15:29:17 PVENODE01 systemd[1]: Started lvmlockd.service - LVM Lock Daemon.
Aug 07 15:29:17 PVENODE01 lvmlockd[2972]: 1754562557 lvmlockd started
Aug 07 15:29:17 PVENODE01 systemd[1]: lvmlockd.service: Deactivated successfully.
root@PVENODE01:~#
======================================================================

Multipath
root@PVENODE01:~# multipath -ll
mpatha (3600c0ff000f6113fda26d86501000000) dm-6 HPE,MSA 2060 FC
size=17T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
`- 1:0:0:1 sdb 8:16 active ready running
mpathb (3600c0ff000f61468b326d86501000000) dm-10 HPE,MSA 2060 FC
size=17T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
`- 1:0:0:2 sdc 8:32 active ready running

LSBLK

root@PVENODE01:~# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                              8:0    0 223.1G  0 disk
├─sda1                           8:1    0  1007K  0 part
├─sda2                           8:2    0     1G  0 part  /boot/efi
└─sda3                           8:3    0 222.1G  0 part
  ├─pve-swap                   252:0    0     5G  0 lvm   [SWAP]
  ├─pve-root                   252:1    0  66.3G  0 lvm   /
  ├─pve-data_tmeta             252:2    0   1.3G  0 lvm
  │ └─pve-data-tpool           252:4    0 132.1G  0 lvm
  │   └─pve-data               252:5    0 132.1G  1 lvm
  └─pve-data_tdata             252:3    0 132.1G  0 lvm
    └─pve-data-tpool           252:4    0 132.1G  0 lvm
      └─pve-data               252:5    0 132.1G  1 lvm
sdb                              8:16   0  17.4T  0 disk
└─mpatha                       252:6    0  17.4T  0 mpath
  ├─MSA2060VG1-thinpool1_tmeta 252:7    0   128M  0 lvm
  │ └─MSA2060VG1-thinpool1     252:9    0    16T  0 lvm
  └─MSA2060VG1-thinpool1_tdata 252:8    0    16T  0 lvm
    └─MSA2060VG1-thinpool1     252:9    0    16T  0 lvm
sdc                              8:32   0  17.4T  0 disk
└─mpathb                       252:10   0  17.4T  0 mpath
  ├─MSA2060VG2-thinpool2_tmeta 252:11   0   128M  0 lvm
  │ └─MSA2060VG2-thinpool2     252:13   0    16T  0 lvm
  └─MSA2060VG2-thinpool2_tdata 252:12   0    16T  0 lvm
    └─MSA2060VG2-thinpool2     252:13   0    16T  0 lvm

VGS


root@PVENODE01:~# vgs
Configuration setting "activation/use_lvmlockd" unknown.
WARNING: lvmlockd process is not running.
Reading without shared global lock.
VG #PV #LV #SN Attr VSize VFree
MSA2060VG1 1 1 0 wz--n- 17.43t 1.43t
MSA2060VG2 1 1 0 wz--n- 17.43t 1.43t
===========================================================
root@PVENODE01:~# vgs -o +locktype
Configuration setting "activation/use_lvmlockd" unknown.
VG #PV #LV #SN Attr VSize VFree LockType
MSA2060VG1 1 1 0 wz--n- 17.43t 1.43t
MSA2060VG2 1 1 0 wz--n- 17.43t 1.43t
====================================================
/etc/lvm/lvm.conf
Is this file OK?

root@PVENODE01:~# cat /etc/lvm/lvm.conf
# This is an example configuration file for the LVM2 system.
# It contains the default settings that would be used if there was no
# /etc/lvm/lvm.conf file.
#
# Refer to 'man lvm.conf' for further information including the file layout.
#
# Refer to 'man lvm.conf' for information about how settings configured in
# this file are combined with built-in values and command line options to
# arrive at the final values used by LVM.
#
# Refer to 'man lvmconfig' for information about displaying the built-in
# and configured values used by LVM.
#
# If a default value is set in this file (not commented out), then a
# new version of LVM using this file will continue using that value,
# even if the new version of LVM changes the built-in default value.
#
# To put this file in a different directory and override /etc/lvm set
# the environment variable LVM_SYSTEM_DIR before running the tools.
#
# N.B. Take care that each setting only appears once if uncommenting
# example settings in this file.


# Configuration section config.
# How LVM configuration settings are handled.
config {

# Configuration option config/checks.
# If enabled, any LVM configuration mismatch is reported.
# This implies checking that the configuration key is understood by
# LVM and that the value of the key is the proper type. If disabled,
# any configuration mismatch is ignored and the default value is used
# without any warning (a message about the configuration key not being
# found is issued in verbose mode only).
# This configuration option has an automatic default value.
# checks = 1

# Configuration option config/abort_on_errors.
# Abort the LVM process if a configuration mismatch is found.
# This configuration option has an automatic default value.
# abort_on_errors = 0

# Configuration option config/profile_dir.
# Directory where LVM looks for configuration profiles.
# This configuration option has an automatic default value.
# profile_dir = "/etc/lvm/profile"
}

# Configuration section devices.
# How LVM uses block devices.
devices {

# Prefer FC multipath and WWN-style identifiers
preferred_names = [ "^/dev/mapper/mpath", "^/dev/disk/by-id/" ]

# Allowlisted devices: multipath and MSA LUNs
filter = [
"a|/dev/mapper/mpath.*|",
"a|/dev/disk/by-id/scsi-3600c0ff.*|",
"r|.*|"
]

# Skip scanning ZFS and Ceph devices (Proxmox default)
global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|" ]

# Performance and stability
cache_dir = "/etc/lvm/cache"
write_cache_state = 1
multipath_component_detection = 1
}

# Configuration section allocation.
# How LVM selects space and applies properties to LVs.
allocation {

}

# Configuration section log.
# How LVM log information is reported.
log {

verbose = 0
syslog = 1
overwrite = 0
level = 0
indent = 1
}

# Configuration section backup.
# How LVM metadata is backed up and archived.
# In LVM, a 'backup' is a copy of the metadata for the current system,
# and an 'archive' contains old metadata configurations. They are
# stored in a human readable text format.
backup {
}

# Configuration section shell.
# Settings for running LVM in shell (readline) mode.
shell {

# Configuration option shell/history_size.
# Number of lines of history to store in ~/.lvm_history.
# This configuration option has an automatic default value.
# history_size = 100
}

# Configuration section global.
# Miscellaneous global LVM settings.
global {
locking_type = 1
use_lvmlockd = 1
}

# Configuration section activation.
activation {
volume_list = [ "MSA2060VG1", "MSA2060VG2", "@proxmox" ]
use_lvmlockd = 1
}

# Configuration section metadata.
# This configuration section has an automatic default value.
# metadata {
# }

# Configuration section report.
# LVM report command output formatting.
# This configuration section has an automatic default value.
# report {
# }

# Configuration section dmeventd.
# Settings for the LVM event daemon.
dmeventd {
}

# Configuration section tags.
# Host tag settings.
# This configuration section has an automatic default value.
# tags {
# }

root@PVENODE01:~#
=========================================================================================
/etc/systemd/system/lvmlockd.service

root@PVENODE01:~# cat /etc/systemd/system/lvmlockd.service
[Unit]
Description=LVM Lock Daemon
Requires=dlm.service
After=dlm.service

[Service]
Type=simple
ExecStart=/usr/sbin/lvmlockd --gl-type dlm
Restart=on-failure

[Install]
WantedBy=multi-user.target
root@PVENODE01:~#
====================================================

root@PVENODE01:~# systemctl status dlm
● dlm.service - dlm control daemon
Loaded: loaded (/lib/systemd/system/dlm.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-08-07 15:21:48 PKT; 26min ago
Docs: man:dlm_controld
man:dlm.conf
man:dlm_stonith
Main PID: 1569 (dlm_controld)
Tasks: 3 (limit: 154238)
Memory: 2.8M
CPU: 41ms
CGroup: /system.slice/dlm.service
├─1569 /usr/sbin/dlm_controld --foreground
└─1570 /usr/sbin/dlm_controld --foreground

Aug 07 15:21:48 PVENODE01 systemd[1]: Starting dlm.service - dlm control daemon...
Aug 07 15:21:48 PVENODE01 dlm_controld[1569]: 18 dlm_controld 4.2.0 started
Aug 07 15:21:48 PVENODE01 systemd[1]: Started dlm.service - dlm control daemon.
Aug 07 15:22:19 PVENODE01 dlm_controld[1569]: 49 receive_fence_result 1 from 2 result 177 no need_fencing
root@PVENODE01:~#

==========================================================
 
Hi @zahmad ,

As a suggestion: please use the CODE tags available in the toolbar of the edit box; they make your post much easier to read.

Is it mandatory to configure lvmlockd and dlm for shared LVM?
No, it is neither mandatory nor needed. PVE's central lock management takes care of those responsibilities.
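If you want to back out the lvmlockd/dlm experiment, here is a rough cleanup sketch, assuming the custom unit file and the lvm.conf edits shown above are the only changes you made (repeat on each node):

Code:
# stop and disable the lock daemons that PVE does not need
systemctl disable --now lvmlockd.service dlm.service
# remove the hand-written unit file and reload systemd
rm /etc/systemd/system/lvmlockd.service
systemctl daemon-reload
# drop the use_lvmlockd / locking_type lines from /etc/lvm/lvm.conf (or restore the stock file),
# then confirm that vgs no longer warns about lvmlockd
vgs

On the PVE side, a plain lvm storage definition with "shared 1" in /etc/pve/storage.cfg is all that is needed.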


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
What do you recommend? Can I use Blockbridge for shared storage?
Yes, of course you can use our solution; please reach out to support@blockbridge.com to see if we are a fit.

Other than that, you can review this article we wrote that goes over some concepts and details of implementing legacy SAN environments with PVE:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/



 
No, it is neither mandatory nor needed. PVE's central lock management takes care of those responsibilities.
@bbgeek17, my simple query is: does Proxmox by default manage the shared storage among the three nodes in my scenario, or do I have to configure Blockbridge or some other tool, as I tried with lvmlockd and dlm? My three nodes are connected to a single MSA 2060 point-to-point, and there are two LUNs in the MSA.
 

Attachments

  • Cluster.png (140.6 KB)
If you presented the two MSA LUNs to all nodes, you have the basics of shared storage in place.
However, you used the wrong LVM type: LVM-thin is not compatible with the multi-writer scheme that "shared storage" requires.
Once you have wiped your LUNs and reconfigured them as plain (thick) LVM, you need neither DLM nor Blockbridge.

This article covers the concepts and steps needed : https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
You can ignore the iSCSI part; everything above iSCSI is the same for your point-to-point FC config.

P.S. If you did not present the LUNs to all nodes at the same time, then you do not have shared storage.
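For reference, a minimal sketch of that reconfiguration, reusing the device and VG names from your output. This is destructive (it wipes the LUN contents) and should be run from one node only, with the storage ID being just an example:

Code:
# remove the thin pool and VG currently on the first LUN (DESTROYS all data on it)
lvremove MSA2060VG1
vgremove MSA2060VG1
pvremove /dev/mapper/mpatha
# recreate it as plain (thick) LVM directly on the multipath device
pvcreate /dev/mapper/mpatha
vgcreate MSA2060VG1 /dev/mapper/mpatha
# register the VG as shared LVM storage for the whole cluster
pvesm add lvm shared-LVMVG1 --vgname MSA2060VG1 --content images,rootdir --shared 1

Repeat the same for the second LUN/VG, then let PVE create the VM disks inside the VGs.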


 
If you presented the two MSA LUNs to all nodes, you have the basics of shared storage in place.
However, you used the wrong LVM type: LVM-thin is not compatible with the multi-writer scheme that "shared storage" requires.
Once you have wiped your LUNs and reconfigured them as plain (thick) LVM, you need neither DLM nor Blockbridge.
@bbgeek17
I created a thick LVM but am unable to allocate a disk to a VM.
Code:
root@PVENODE01:/# lvcreate -L 100G -n vm-100-disk-0 MSA2060VG1
  Volume group "MSA2060VG1" has insufficient free space (3516 extents): 25600 required.
root@PVENODE01:/#

Code:
Aug 13 12:37:27 PVENODE01 pvedaemon[1150457]: unable to create VM 100 - lvcreate 'MSA2060VG1/vm-100-disk-0' error:   Volume group "MSA2060VG1" has insufficient free space (3516 extents): 25600 required.
Aug 13 12:37:27 PVENODE01 pvedaemon[1530]: <root@pam> end task UPID:PVENODE01:00118DF9:028671C3:689C40B7:qmcreate:100:root@pam: unable to create VM 100 - lvcreate 'MSA2060VG1/vm-100-disk-0' error:   Volume group "MSA2060VG1" has insufficient free space (3516 extents): 25600 required.

After creating the thick LVs, the space of both LUNs/VGs was reserved and is not available for VM disks.
 

Attachments

  • Shared_LVMVG1.png (136.5 KB)
  • Shared_LVMVG2.png (139.5 KB)
  • Thick LVM.png (147.3 KB)
I created a thick LVM but am unable to allocate a disk to a VM.
The allocation of disks (LVs) is not done manually; PVE does that automatically.

However, it will likely produce the same error if you try to create a 100G VM disk via the PVE GUI. Since we have no information about your current configuration, the warning you get may be appropriate for your current state.
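To illustrate, a sketch of how a disk is normally requested (the VM ID 100 and the storage ID are just examples); PVE then runs the lvcreate for you:

Code:
# ask PVE for a 100 GiB disk on the shared LVM storage; PVE allocates the LV itself
qm set 100 --scsi0 shared-LVMVG1:100
# PVE would then create MSA2060VG1/vm-100-disk-0, provided the VG has enough free extents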

If you share updated data, someone may spot something interesting:
lsblk
lsscsi -ss
pvs
vgs
lvs
pvdisplay
vgdisplay
pvesm status
cat /etc/pve/storage.cfg

Please post text output wrapped in either CODE tags or SPOILER tags.



 
@bbgeek17 and other distinguished members,

Following is the output of the commands


Code:
root@PVENODE01:~# lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                          8:0    0 223.1G  0 disk 
├─sda1                       8:1    0  1007K  0 part 
├─sda2                       8:2    0     1G  0 part  /boot/efi
└─sda3                       8:3    0 222.1G  0 part 
  ├─pve-swap               252:0    0     5G  0 lvm   [SWAP]
  ├─pve-root               252:1    0  66.3G  0 lvm   /
  ├─pve-data_tmeta         252:2    0   1.3G  0 lvm  
  │ └─pve-data-tpool       252:4    0 132.1G  0 lvm  
  │   └─pve-data           252:5    0 132.1G  1 lvm  
  └─pve-data_tdata         252:3    0 132.1G  0 lvm  
    └─pve-data-tpool       252:4    0 132.1G  0 lvm  
      └─pve-data           252:5    0 132.1G  1 lvm  
sdb                          8:16   0  17.4T  0 disk 
└─mpatha                   252:6    0  17.4T  0 mpath
  ├─MSA2060VG1-LVMVG1      252:8    0  16.9T  0 lvm  
  └─MSA2060VG1-shared--dir 252:10   0   500G  0 lvm   /mnt/shared-dir
sdc                          8:32   0  17.4T  0 disk 
└─mpathb                   252:7    0  17.4T  0 mpath
  └─MSA2060VG2-LVMVG2      252:9    0  16.9T  0 lvm

Code:
root@PVENODE01:~# pvs

  PV                 VG         Fmt  Attr PSize    PFree

  /dev/mapper/mpatha MSA2060VG1 lvm2 a--    17.43t  13.73g

  /dev/mapper/mpathb MSA2060VG2 lvm2 a--    17.43t 513.73g

  /dev/sda3          pve        lvm2 a--  <222.06g  16.00g

root@PVENODE01:~#


Code:
root@PVENODE01:~# lsscsi -ss
[0:1:124:0]  enclosu BROADCOM VirtualSES       03    -               -
[0:3:111:0]  disk    HPE      MR416i-a Gen10+  5.16  /dev/sda   223GiB
[1:0:0:0]    enclosu HPE      MSA 2060 FC      I210  -               -
[1:0:0:1]    disk    HPE      MSA 2060 FC      I210  /dev/sdb   17.4TiB
[1:0:0:2]    disk    HPE      MSA 2060 FC      I210  /dev/sdc   17.4TiB
root@PVENODE01:~#

Code:
root@PVENODE01:~# vgs
  VG         #PV #LV #SN Attr   VSize    VFree
  MSA2060VG1   1   2   0 wz--n-   17.43t  13.73g
  MSA2060VG2   1   1   0 wz--n-   17.43t 513.73g
  pve          1   3   0 wz--n- <222.06g  16.00g
root@PVENODE01:~#

Code:
root@PVENODE01:~# lvs
  LV         VG         Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVMVG1     MSA2060VG1 -wi-a-----   16.93t                                               
  shared-dir MSA2060VG1 -wi-ao----  500.00g                                               
  LVMVG2     MSA2060VG2 -wi-a-----   16.93t                                               
  data       pve        twi-aotz-- <132.10g             0.00   1.21                       
  root       pve        -wi-ao----   66.26g                                               
  swap       pve        -wi-ao----    5.00g

Code:
root@PVENODE01:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/mpathb
  VG Name               MSA2060VG2
  PV Size               17.43 TiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              4569614
  Free PE               131516
  Allocated PE          4438098
  PV UUID               OkmqiV-ivP6-VE8V-wca7-DcpZ-fn88-Mpg6ar
 
  --- Physical volume ---
  PV Name               /dev/mapper/mpatha
  VG Name               MSA2060VG1
  PV Size               17.43 TiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              4569614
  Free PE               3516
  Allocated PE          4566098
  PV UUID               0VQh5P-zjtJ-dSlW-B1bx-eaDS-oNsb-RD8MhP
 
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               222.06 GiB / not usable 2.98 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              56847
  Free PE               4097
  Allocated PE          52750
  PV UUID               0CfHK3-5cfb-7Qjh-9Z1M-ElpC-w2EP-4F7Ob0
Code:
root@PVENODE01:~# vgdisplay
  --- Volume group ---
  VG Name               MSA2060VG2
  System ID         
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               17.43 TiB
  PE Size               4.00 MiB
  Total PE              4569614
  Alloc PE / Size       4438098 / 16.93 TiB
  Free  PE / Size       131516 / 513.73 GiB
  VG UUID               aP8TZW-8cge-gLxR-SA2K-PEuJ-2GII-aVpESw
 
  --- Volume group ---
  VG Name               MSA2060VG1
  System ID         
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               17.43 TiB
  PE Size               4.00 MiB
  Total PE              4569614
  Alloc PE / Size       4566098 / <17.42 TiB
  Free  PE / Size       3516 / 13.73 GiB
  VG UUID               GLud0G-d5jF-9vGB-p4uR-9Jha-NBiC-dyq0G3
 
  --- Volume group ---
  VG Name               pve
  System ID         
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  31
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <222.06 GiB
  PE Size               4.00 MiB
  Total PE              56847
  Alloc PE / Size       52750 / 206.05 GiB
  Free  PE / Size       4097 / 16.00 GiB
  VG UUID               olf00n-piGk-3IJa-moi8-kpkb-LAVu-VtPEAi
Code:
root@PVENODE01:~# pvesm status
Name                 Type     Status           Total            Used       Available        %
Shared_dir            dir     active       514937088              60       488706244    0.00%
local                 dir     active        67810656        13502772        50817480   19.91%
shared-LVMVG1         lvm     active     18717138944     18702737408        14401536   99.92%
shared-LVMVG2         lvm     active     18717138944     18178449408       538689536   97.12%
root@PVENODE01:~#
Code:
root@PVENODE01:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content snippets,images,iso,backup,rootdir,vztmpl
        prune-backups keep-all=1

lvm: shared-LVMVG1
        vgname MSA2060VG1
        content images,rootdir
        nodes PVENODE02,PVENODE01,PVENODE03
        saferemove 0
        shared 1

lvm: shared-LVMVG2
        vgname MSA2060VG2
        content rootdir,images
        nodes PVENODE03,PVENODE02,PVENODE01
        saferemove 0
        shared 1

dir: Shared_dir
        path /mnt/shared-dir
        content images,snippets,iso,import,backup,rootdir,vztmpl
        nodes PVENODE02,PVENODE01,PVENODE03
        prune-backups keep-all=1
        shared 1
 
First, I would normally expect two paths per LUN. You are showing two individual LUNs that the OS only sees once each. So while it's OK to have multipath sitting there, I do not think you actually have multipathing.
You already posted "multipath -ll", and it also appears to show a single path per LUN.

Second, addressing your creation issue: multiple outputs show that you have indeed used up all the space in one of the volume groups (VG1). I am not sure how you got into this state.

It's up to you whether you want to delete piece by piece, or start from scratch.
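A quick way to confirm both points on each node (just a sketch; the vgs columns are standard LVM report fields):

Code:
# with a single FC cable per node you will only see one "active ready running" line per LUN
multipath -ll | grep "active ready running"
# show how much unallocated space each VG still has for PVE to use
vgs -o vg_name,vg_size,vg_free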


 
First, I would normally expect two paths per LUN. You are showing two individual LUNs that the OS only sees once each. So while it's OK to have multipath sitting there, I do not think you actually have multipathing.
You already posted "multipath -ll", and it also appears to show a single path per LUN.
@bbgeek17 Yes, I have a single point-to-point FC connection from each node to the storage (both LUNs). The HPE MSA 2060 is connected to the three nodes over single point-to-point FC links. Below is the multipath -ll output on node 1 (PVENODE01):
Code:
root@PVENODE01:~# multipath -ll
mpatha (3600c0ff000f6113fda26d86501000000) dm-6 HPE,MSA 2060 FC
size=17T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=10 status=active
  `- 1:0:0:1 sdb 8:16 active ready running
mpathb (3600c0ff000f61468b326d86501000000) dm-7 HPE,MSA 2060 FC
size=17T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  `- 1:0:0:2 sdc 8:32 active ready running
root@PVENODE01:~#

Are you recommending that each node should connect to the MSA 2060 with four FC connections (two per LUN)?
Second, addressing your creation issue: multiple outputs show that you have indeed used up all the space in one of the volume groups (VG1). I am not sure how you got into this state.

It's up to you whether you want to delete piece by piece, or start from scratch.
I created thick LVs using the whole available space of VG1 and VG2, plus some space for a shared folder on VG1. Can I remove the thick LVs? Please guide me.
 
Are you recommending that each node should connect to the MSA 2060 with four FC connections (two per LUN)?
I don't know your infrastructure limitations or business requirements well enough to make any recommendations. Generally, it is always better to have more paths rather than fewer. Would 4 be better than 2 for you? I don't know.

I created thick LVs using the whole available space of VG1 and VG2, plus some space for a shared folder on VG1. Can I remove the thick LVs? Please guide me.
I am not sure I follow. You need to feed PVE a volume group. You've already allocated all the space in your VG, if I recall correctly. If that was not done intentionally, then yes, you should remove the LV.
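A sketch of freeing the space, using the LV names from your lvs output above (make sure nothing is still using them, and handle /mnt/shared-dir separately if you want to keep it):

Code:
# remove the manually created thick LVs so the VGs have free extents again
lvremove MSA2060VG1/LVMVG1
lvremove MSA2060VG2/LVMVG2
# verify the free space is back before creating VM disks through PVE
vgs -o vg_name,vg_free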

Cheers


 
Are you recommending that each node should connect to the MSA 2060 with four FC connections (two per LUN)?
Generally speaking, you won't get much benefit from more than two host connections per node (one per controller), but it is conceivable that you could consume more than 25 Gbit on a single host, in which case you will want to ensure that you have at least two disk volumes with a LUN on each of them to get the benefit of your active/active controllers. These Dot Hill controllers aren't really suited for much higher throughput than that.

Generally, it is always better to have more paths rather than fewer.
Diminishing returns :) host ports have a cost.
 