Shared storage on Dell VRTX

fsantulli

Member
Feb 9, 2020
Hi All,

I've managed to install Proxmox on two blades in a Dell VRTX chassis. The chassis has two Shared PERC 8 controllers installed, both accessing one disk group.

Once I had set up Proxmox, I installed multipath-tools, since each drive showed up twice (because of the dual controllers), as sdb and sdc.

I excluded those devices in lvm.conf and installed the dlm and lvmlockd services to handle shared storage.

The only fix needed was a small change to the lvmlockd service unit so that it depends on the dlm service at startup.
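
For reference, that kind of dependency fix is usually done with a systemd drop-in rather than by editing the shipped unit file. This is only a sketch, assuming the stock unit names lvmlockd.service and dlm.service:

```ini
# /etc/systemd/system/lvmlockd.service.d/override.conf
# Hypothetical drop-in: start lvmlockd only after the dlm service is up.
[Unit]
After=dlm.service
Requires=dlm.service
```

A drop-in survives package upgrades, while direct edits to the shipped unit file may be overwritten.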

I then added the shared VG I had created (without PVs) to the cluster storage configuration.

Migration works with VMs powered off, but when a migration is requested with the VM powered on, it fails because the LV is locked by the other node.

How should I proceed to make this work correctly?

Kind regards.

Federico
 
Hi,

I don't know this setup in detail.
For an online migration, the storage has to be active on both nodes during the takeover.
I guess the logic in the chassis does not allow active-active mode.
 
Hi Wolfgang,

The chassis does allow an active-active shared volume at the controller level.

I also have the volume group configured on both nodes.

Maybe I just got the design wrong, since it's lvmlockd that keeps me from migrating a running VM.

The setup should behave just like an iSCSI SAN (even though the storage is local to the chassis).

Do I have to set it up as a shared directory with a clustered filesystem, or can it be done at the LVM level?

I should mention that I've already made this work with Hyper-V and VMware, so it can be done!

Please let me know.

Kind regards.
 
The Proxmox VE recommendation for such setups is plain LVM as shared storage.
That means no clustered LVM and no thin LVM.
Proxmox VE manages access at the LVM level by itself.
 


Hello, I am investigating how to install Proxmox on my Dell VRTX, but I can't find a guide anywhere with the steps to follow. Do you have a link you can share? I have the same version as you.

Regards
 
I know this is a bit of an old post, but I followed the steps below to enable shared storage on my VRTX.

Code:
Setup is:
VRTX with 2 blades; 10 SSD drives and 10 SAS drives in the shared DAS
Proxmox 9 running on each blade
4 x 10Gb ports in a bond to a FortiSwitch

On the VRTX, first set up the RAID arrays for your drives, then move on to the software.

On Node1:

Check the drives:

root@pve1:~# lsblk

NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda            8:0    0  223G  0 disk
├─sda1         8:1    0 1007K  0 part
├─sda2         8:2    0    1G  0 part
└─sda3         8:3    0  222G  0 part
  ├─pve-swap 252:0    0    8G  0 lvm  [SWAP]
  └─pve-root 252:1    0  214G  0 lvm  /
sdb            8:16   0 13.1T  0 disk
sdc            8:32   0 27.9T  0 disk
sdd            8:48   0 13.1T  0 disk
sde            8:64   0 27.9T  0 disk
sr0           11:0    1 1024M  0 rom

Get the wwids:

root@pve1:~# /lib/udev/scsi_id -g -u -d /dev/sdb

36d094660160f18002dd095ac79b69158
root@pve1:~# /lib/udev/scsi_id -g -u -d /dev/sdc
36d094660160f18002dd096b7899dd306
root@pve1:~# /lib/udev/scsi_id -g -u -d /dev/sdd
36d094660160f18002dd095ac79b69158
root@pve1:~# /lib/udev/scsi_id -g -u -d /dev/sde
36d094660160f18002dd096b7899dd306
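
As a sanity check, you can group the device names by WWID to confirm which paths are the same LUN (sdb/sdd and sdc/sde here). This sketch inlines the scsi_id output from above so it runs anywhere; on the live system you would feed it the real command output instead:

```shell
# Pair each device with its WWID (values copied from the scsi_id output
# above), then group device names that share a WWID: those are one LUN
# seen through both Shared PERC 8 controllers.
pairs=$(printf '%s %s\n' \
  sdb 36d094660160f18002dd095ac79b69158 \
  sdc 36d094660160f18002dd096b7899dd306 \
  sdd 36d094660160f18002dd095ac79b69158 \
  sde 36d094660160f18002dd096b7899dd306 |
  awk '{group[$2] = group[$2] " " $1} END {for (w in group) print w ":" group[w]}' |
  sort)
printf '%s\n' "$pairs"
```

Each output line lists one WWID followed by the device paths that map to it.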

add wwids to /etc/multipath/wwids:

root@pve1:~# multipath -a 36d094660160f18002dd095ac79b69158

wwid '36d094660160f18002dd095ac79b69158' added
root@pve1:~# multipath -a 36d094660160f18002dd096b7899dd306
wwid '36d094660160f18002dd096b7899dd306' added

confirm:

root@pve1:~# cat /etc/multipath/wwids

# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/36d094660160f18002dd095ac79b69158/
/36d094660160f18002dd096b7899dd306/

edit multipath.conf (you can do more here, but this works; I have made changes since):

root@pve1:~# vi /etc/multipath.conf

defaults {

        polling_interval 5
        checker_timeout 15
        find_multipaths no
        user_friendly_names yes
}

devices {
        device {
                vendor DellEMC
                product VRTX
                detect_prio "yes"
                path_selector "queue-length 0"
                path_grouping_policy "group_by_prio"
                path_checker tur
                failback immediate
                fast_io_fail_tmo 5
                no_path_retry 3
                rr_min_io_rq 1
                max_sectors_kb 1024
                dev_loss_tmo 10
                hardware_handler "1 alua"
        }
}

restart multipathd:

systemctl restart multipathd.service

Check that the drives now have an mpath reference:

root@pve1:~# lsblk

NAME         MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda            8:0    0  223G  0 disk
├─sda1         8:1    0 1007K  0 part
├─sda2         8:2    0    1G  0 part
└─sda3         8:3    0  222G  0 part
  ├─pve-swap 252:0    0    8G  0 lvm   [SWAP]
  └─pve-root 252:1    0  214G  0 lvm   /
sdb            8:16   0 13.1T  0 disk
└─mpatha     252:2    0 13.1T  0 mpath
sdc            8:32   0 27.9T  0 disk
└─mpathb     252:3    0 27.9T  0 mpath
sdd            8:48   0 13.1T  0 disk
└─mpatha     252:2    0 13.1T  0 mpath
sde            8:64   0 27.9T  0 disk
└─mpathb     252:3    0 27.9T  0 mpath
sr0           11:0    1 1024M  0 rom

check the mpath:

root@pve1:~# multipath -ll

mpatha (36d094660160f18002dd095ac79b69158) dm-2 DELL,Shared PERC8
size=13T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 15:2:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 16:2:0:0 sdd 8:48 active ready running
mpathb (36d094660160f18002dd096b7899dd306) dm-3 DELL,Shared PERC8
size=28T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 15:2:1:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 16:2:1:0 sde 8:64 active ready running

Create PVs and VGs for mpatha and mpathb:

root@pve1:~# pvcreate /dev/mapper/mpatha

  Physical volume "/dev/mapper/mpatha" successfully created.

root@pve1:~# vgcreate vg_sas /dev/mapper/mpatha
  Volume group "vg_sas" successfully created

root@pve1:~# pvcreate /dev/mapper/mpathb

  Physical volume "/dev/mapper/mpathb" successfully created.

root@pve1:~# vgcreate vg_ssd /dev/mapper/mpathb
  Volume group "vg_ssd" successfully created

Now go to the Proxmox web interface and add LVM storage under Datacenter -> Storage:
Note: ensure you tick the "Shared" checkbox (below the "Enable" checkbox)
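
For reference, what the GUI writes can also be put into /etc/pve/storage.cfg by hand. A sketch for the two VGs created above (the content types are my assumption):

```
lvm: vg_sas
        vgname vg_sas
        content images,rootdir
        shared 1

lvm: vg_ssd
        vgname vg_ssd
        content images,rootdir
        shared 1
```

The shared 1 flag is what tells Proxmox the VG is visible from every node, so disk data is not copied over the network during a migration.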

On node2:

Edit the multipath.conf and put the same content from above

update the pv and vg:

root@pve2:~# pvscan --cache

  pvscan[244427] PV /dev/mapper/mpatha online.
  pvscan[244427] PV /dev/mapper/mpathb online.
  pvscan[244427] PV /dev/sda3 online.

root@pve2:~# vgscan --cache

  Found volume group "pve" using metadata type lvm2
  Found volume group "vg_ssd" using metadata type lvm2
  Found volume group "vg_sas" using metadata type lvm2

When you have a VM or CT running, it will look like this:

Code:
root@pve1:~# lsblk
NAME                        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda                           8:0    0  223G  0 disk 
├─sda1                        8:1    0 1007K  0 part 
├─sda2                        8:2    0    1G  0 part 
└─sda3                        8:3    0  222G  0 part 
  ├─pve-swap                252:0    0    8G  0 lvm   [SWAP]
  └─pve-root                252:1    0  214G  0 lvm   /
sdb                           8:16   0 13.1T  0 disk 
└─mpatha                    252:2    0 13.1T  0 mpath
  └─vg_sas-vm--101--disk--0 252:5    0   32G  0 lvm   
sdc                           8:32   0 27.9T  0 disk 
└─mpathb                    252:3    0 27.9T  0 mpath
  └─vg_ssd-vm--100--disk--0 252:4    0    8G  0 lvm   
sdd                           8:48   0 13.1T  0 disk 
└─mpatha                    252:2    0 13.1T  0 mpath
  └─vg_sas-vm--101--disk--0 252:5    0   32G  0 lvm   
sde                           8:64   0 27.9T  0 disk 
└─mpathb                    252:3    0 27.9T  0 mpath
  └─vg_ssd-vm--100--disk--0 252:4    0    8G  0 lvm   
sr0                          11:0    1 1024M  0 rom