Adding shared storage to proxmox cluster nodes

jamil-rahman

Member
Sep 16, 2020
I have several physical servers that are all configured in the same Proxmox cluster. I'm having trouble adding storage that needs to be shared between all nodes.

I have a Fibre Channel based SAN storage system; normally a LUN is created on it which can then be exported to many host servers.

So here I have created one LUN (virtual volume) and exported it to all of the nodes in the Proxmox cluster. I want to add that LUN as shared storage in PVE so that all nodes can access it simultaneously and VMs can be migrated between nodes without also migrating their storage. I would really appreciate your suggestions.

N.B.: I have tried CephFS, but Ceph is not recommended on disks backed by a hardware RAID controller, so Ceph is not suitable for me.
 
Dear UdoB,

Thanks for your reply. I have already read through this guide and found no clue regarding my case. Anyway, I have been using Proxmox VE since 2019 with directory-based storage, which was not shared. When I migrate a VM to another node, the VM is migrated together with its storage, which is not acceptable in a virtualization environment. Now we are planning a full-fledged Proxmox VE deployment.

I need help configuring shared storage in my case: a LUN/storage is accessible by all nodes through my SAN connectivity, but Proxmox VE does not yet manage it, and I think it needs to be configured. When I run the lsblk command on all nodes, they all show the same device, as I expected.
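(A quick way to double-check that every node really sees the same LUN, rather than just a device that happens to have the same /dev/sdX name, is to compare the stable SCSI/WWN identifiers on each node. A minimal check, assuming the LUN shows up as /dev/sdb on this node; adjust the device name as needed:)

Code:
# the wwn-0x... / scsi-3... links should point to the same identifier on every node
ls -l /dev/disk/by-id/ | grep sdb
# or query the SCSI identifier directly
/lib/udev/scsi_id -g -u -d /dev/sdb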

Which storage type should be configured in my case? Ceph and ZFS are not recommended on top of backend RAID, so they are ruled out in my case.

Is it possible to meet my requirement with LVM?
 
I need help configuring shared storage in my case: a LUN/storage is accessible by all nodes through my SAN connectivity, but Proxmox VE does not yet manage it, and I think it needs to be configured. When I run the lsblk command on all nodes, they all show the same device, as I expected.
Create an LVM physical volume and a volume group, add it to your cluster as storage, and mark the shared option. Make sure you have multipath configured and running for best availability.
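(A minimal sketch of the commands involved, assuming the multipath device for the LUN shows up as /dev/mapper/mpatha and using placeholder names san_vg / san_lvm; adjust to your environment:)

Code:
# on ONE node only: create the physical volume and volume group on the shared LUN
pvcreate /dev/mapper/mpatha
vgcreate san_vg /dev/mapper/mpatha

# register it as a cluster-wide LVM storage and mark it shared
pvesm add lvm san_lvm --vgname san_vg --content images,rootdir --shared 1

The same can be done in the GUI under Datacenter > Storage > Add > LVM with the "Shared" checkbox ticked. The resulting entry in /etc/pve/storage.cfg would look roughly like:

Code:
lvm: san_lvm
        vgname san_vg
        content images,rootdir
        shared 1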
 
Create an LVM physical volume and a volume group, add it to your cluster as storage, and mark the shared option. Make sure you have multipath configured and running for best availability.
Yes, it works...thank you so much. I really appreciate your response...
 
Create an LVM physical volume and a volume group, add it to your cluster as storage, and mark the shared option. Make sure you have multipath configured and running for best availability.
I have two LVM physical volumes on a 2-node cluster. I don't think I ticked "shared" during creation, I can't see how to do it now, and when I click Create Volume Group I get "No Disks Unused". Do I need to blow these away / reformat the LVM going into the mirror? Or is that advice only applicable to SAN with LUNs / NAS, whereas I'm on a locally attached SSD on one node (node1) and a RAID-5 array on node2?
 
hi @tomachi ,
Before you are advised to "blow" away things, you should post the output of the following commands (in CODE tags):
- lsblk
- multipath -ll
- pvs
- vgs
- cat /etc/pve/storage.cfg
- pvesm status


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
OK cool, thanks.
Code:
root@hulk:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 136.7G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0     1G  0 part
└─sda3                         8:3    0 135.7G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  43.9G  0 lvm  /
  ├─pve-data_tmeta           252:3    0     1G  0 lvm
  │ └─pve-data-tpool         252:6    0  65.8G  0 lvm
  │   ├─pve-vm--202--disk--0 252:2    0    16G  0 lvm
  │   ├─pve-data             252:7    0  65.8G  1 lvm
  │   └─pve-vm--201--disk--0 252:8    0    58G  0 lvm
  └─pve-data_tdata           252:4    0  65.8G  0 lvm
    └─pve-data-tpool         252:6    0  65.8G  0 lvm
      ├─pve-vm--202--disk--0 252:2    0    16G  0 lvm
      ├─pve-data             252:7    0  65.8G  1 lvm
      └─pve-vm--201--disk--0 252:8    0    58G  0 lvm
sdb                            8:16   0   1.4T  0 disk
└─sdb1                         8:17   0   1.4T  0 part /mnt/pve/xfs_dir
sr0                           11:0    1  1024M  0 rom
-bash: multipath: command not found

I'm on the free version.
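(Side note: the multipath command is not tied to the subscription level; it comes from the multipath-tools package, which is simply not installed by default. If you ever attach SAN/iSCSI storage over multiple paths, it can be installed and checked like this:)

Code:
apt install multipath-tools
multipath -ll    # lists multipath devices; empty output on a local-disk setup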
 
 
Code:
root@hulk:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sda3  pve lvm2 a--  <135.70g 16.00g
root@hulk:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   5   0 wz--n- <135.70g 16.00g
root@hulk:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,iso,backup,snippets,vztmpl,images
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: xfs_dir
        path /mnt/pve/xfs_dir
        content backup,iso,rootdir,images,snippets,vztmpl
        is_mountpoint 1
        nodes hulk

root@hulk:~# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        45015956        11213504        31483300   24.91%
local-lvm     lvmthin     active        68964352        39702777        29261574   57.57%
xfs_dir           dir     active      1463962488       761773140       702189348   52.04%

other machine
Code:
pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        71150744        16845792        50644972   23.68%
local-lvm     lvmthin     active       148488192        51852076        96636115   34.92%
xfs_dir           dir   disabled               0               0               0      N/A
root@elite:~#
 
@tomachi , I've re-read your post and I see that your disks are LOCAL. With local disks you have no way to make the storage SHARED. This thread does not apply to your setup at all.

Good luck

PS: the only non-root disk on the node you showed (sdb) is already formatted with XFS and mounted as a directory. You do not, in fact, have any free disks on which to create an LVM volume, nor do I see a reason for you to do so.


 
I'm very happy with the system so far. I was able to clone an openmediavault NAS using the local-lvm setup by shutting it down first, and migration was perfectly fast for my needs. I paid $50 for a rack-mount HP DL380 G7! Dual Xeon + 36 GB RAM and a spinning-disk hardware RAID. I used the "nomodeset gfxpayload=640" hack to install Proxmox!
 
