Map Single Storage Volume to Proxmox 2 node cluster

skpatel

New Member
Apr 18, 2024
I am new to Proxmox VE and want to migrate my existing VMware setup to Proxmox. I have created a 2-node cluster and mapped a 1 TB volume from our storage.
I mapped the single 1 TB volume to both Proxmox nodes, but when I try to migrate a VM from node 1 to node 2, the 1 TB drive is not available there as a volume.

In VMware, once a volume is assigned to the cluster nodes, we can migrate a VM to any node.

Please help me understand the procedure in Proxmox. If I need another 1 TB volume for the other node, it will be very difficult to migrate from VMware.

Please help.
 
How is the storage connected? Is it a local drive on a node or is it a network storage like NFS, iSCSI, etc.?

Could you post the output of cat /etc/pve/storage.cfg in [CODE][/CODE] tags?
 
Node 1:
root@prox1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

esxi: Esxi
        server 172.16.144.35
        username root
        content import
        skip-cert-verification 1

lvm: V3700
        vgname V3700
        content images,rootdir
        nodes prox
        shared 0

dir: V3700-Storage
        path /mnt/pve/V3700-Storage
        content snippets,vztmpl,rootdir,iso,backup,images
        is_mountpoint 1
        nodes prox

dir: Test-Maping
        path /mnt/pve/Test-Maping
        content snippets,vztmpl,rootdir,iso,backup,images
        is_mountpoint 1
        nodes prox
 
Node 2:

root@prox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

esxi: Esxi
        server 172.16.144.35
        username root
        content import
        skip-cert-verification 1

lvm: V3700
        vgname V3700
        content images,rootdir
        nodes prox
        shared 0

dir: V3700-Storage
        path /mnt/pve/V3700-Storage
        content snippets,vztmpl,rootdir,iso,backup,images
        is_mountpoint 1
        nodes prox

dir: Test-Maping
        path /mnt/pve/Test-Maping
        content snippets,vztmpl,rootdir,iso,backup,images
        is_mountpoint 1
        nodes prox
 
I have attached a screenshot; the highlighted storage is not displayed on node 2.
 

Attachments

  • Proxmox.jpg (39.6 KB)
How is the storage connected? Is it a local drive on a node or is it a network storage like NFS, iSCSI, etc.?

Could you post the output of cat /etc/pve/storage.cfg in [CODE][/CODE] tags?
The storage is connected to both nodes over Fibre Channel. It is SAN storage.
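
A quick way to verify that both nodes actually see the same FC LUN is to compare the block-device and multipath views on each node (this assumes multipath-tools is installed; the output columns shown are only an example):

# run on each node; the shared LUN should show the same size and WWN
lsblk -o NAME,SIZE,TYPE,WWN

# with multipath-tools installed, the same WWID should appear on both nodes
multipath -ll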


Ah, OK. Maybe someone else can help here, as I have never used a SAN before.
Okay, thanks. But in the case of VMware, the storage is automatically mapped on every cluster node and VM migration completes in seconds from one node to another.
 
dir: V3700-Storage
dir: Test-Maping
I noticed that in your screenshot you highlighted the above two storage pools. They are directory type and are marked as mount points.

This implies that you manually mounted them somehow. There are two correct approaches to do so: CIFS/NFS or a cluster-aware filesystem. Did you use one of these? If not, then it's likely that you misunderstood and misconfigured your storage setup.

Can you explain what these two pools are and why you think they should be available on both nodes?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
These storage pools are assigned from the SAN to both nodes.
Then the storage should be marked as shared.
The directory-type storage pools that are NOT present on the second node have the attribute "nodes prox", which tells Proxmox that only that node has access to those entries. So it is correct that they don't show up on the second node.
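
For illustration, a shared thick-LVM entry in /etc/pve/storage.cfg would look roughly like the sketch below (the volume group name is taken from the config posted above; note the "shared 1" and the absence of a "nodes" line):

lvm: V3700
        vgname V3700
        content images,rootdir
        shared 1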


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Can you explain what these two pools are and why you think they should be available on both nodes?
For testing purposes I created two volumes on the SAN and mapped them to both Proxmox nodes. After assigning them, I created a directory storage and mapped it to both nodes, but on the second node it gives the error "directory is expected to be a mount point but is not mounted".
Can you explain what these two pools are and why you think they should be available on both nodes? -- We are using VMware right now and are testing whether the migration works well. In VMware we do the same thing: create a volume on the SAN and assign it to all nodes. After that we create a filesystem on one node, which is then visible on all nodes. It is then very easy to migrate a VM running on that volume to another host, and the move takes only seconds, because the storage is already assigned to the other node and only the virtual hardware is migrated.
 
After assigning them, I created a directory storage and mapped it to both nodes, but on the second node it gives the error "directory is expected to be a mount point but is not mounted".
If you take a SAN LUN, format it with a regular filesystem (EXT, ZFS, XFS, etc.) and try to access it from two nodes at the same time, you will corrupt the data.
As I mentioned before, you either need a cluster-aware filesystem (OCFS2, GFS2, etc.) or NAS (CIFS, NFS).
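
As a rough sketch of the NAS route, an NFS export could be registered as cluster-wide storage with something like the following (the storage ID, server address, and export path are placeholders):

# NFS storage added this way is visible to all cluster nodes and treated as shared
pvesm add nfs shared-nfs --server 192.168.1.50 --export /export/pve --content images,rootdir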

In addition, you have an option set on these two storage pools: "nodes prox". As I said, that tells PVE that it should not look for this storage on prox1. This configuration actually saved you from data corruption.

Your only supported option is to take a LUN and use thick LVM on top of it. That will give you shared storage, although without snapshots and thin provisioning.
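
A minimal sketch of that approach, assuming the LUN appears as a multipath device and holds no data you need to keep (device path and names are examples only):

# on one node: create a PV and a volume group on the SAN LUN's multipath device
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# register it once as shared thick LVM; the storage definition is cluster-wide
pvesm add lvm san-lvm --vgname vg_san --content images,rootdir --shared 1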


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
