Proxmox cluster - shared ISO/Container template storage

bzdigblig

I'm trying to figure out how to simply create a shared storage location so ISO or container templates can be accessed from any node in the cluster.
I've got SSD (/dev/mpath0-VGSSD) and HDD (/dev/mpath0-VGHDD) storage added to the cluster... what do I need to do to actually use some of that storage for something other than housing VM disks?

I've created a directory called SharedStorage on the root of my mpath device (either for the SSD or HDD storage), and then added that as a Directory through the GUI, but if I save anything to that directory, none of the other nodes see it.

I read here that I'd need to create a logical volume, format it, mount it and then create a storage of type directory on the mount path.

If I try creating a logical volume with
Code:
lvcreate -L 50G -n SharedStorage mpath0-VGSSD
then I get the following error:

Code:
WARNING: PMBR signature detected on /dev/mpath0-VGSSD/SharedStorage at offset 510. Wipe it? [y/n]: n
  Aborted wiping of PMBR.
  1 existing signature left on the device.
  Failed to wipe signatures on logical volume mpath0-VGSSD/SharedStorage.
  Aborting. Failed to wipe start of new LV.

Of course I'm not going to overwrite anything without understanding what I'm actually overwriting, so I really don't know what I'm meant to do with that. I've got VMs running on mpath0-VGSSD, so I'm not sure if this is telling me it's about to create a logical volume over top of an already existing VM, or if it's just spewing noise, like "Hey, you're about to make a thing, I just thought you should know."
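For reference, something like this should show the VG's free space and where the existing LVs sit, without changing anything (VG name as above; just a sketch):

Code:
vgs mpath0-VGSSD                 # volume group size and free space
lvs -a -o +devices mpath0-VGSSD  # existing LVs and which devices/extents they occupy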

Ultimately, I just want to use a little bit of the shared storage so we're not saving anything locally on any node. If there's some better way to accomplish this, I'm all ears. I'm surely missing something, because this seems unnecessarily complicated.

Thank you
 
If you want to share data between two computers, then you need shared storage.
In your case I would recommend NFS. You can either set it up directly on one of your nodes, inside a VM, or externally.
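If you go the NFS route, attaching the export to the whole cluster is a single command; something along these lines (the storage ID, server address, and export path below are placeholders):

Code:
pvesm add nfs shared-templates --server 192.0.2.10 --export /export/proxmox --content iso,vztmpl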


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Setting NFS up locally on one of the nodes makes no sense if I'm trying to make this storage independent of a particular node, and externally makes no sense, since I'm trying to migrate stuff to our cluster, not off of it.

Just to make sure I've got this straight...I already have storage that's shared among all the nodes in the cluster, but in order for the various Proxmox hosts to use any of that storage directly and ensure that other hosts can access that same data, I need to spool up a VM, which would be running on the storage that's already shared between all the nodes in the first place, and use it to house an NFS share that every Proxmox host has to point to?

Gotta be honest, I thought there would be a more graceful way of accomplishing this lol.
 
So your mpath storage is actual external storage that is connected to multiple nodes?
If yes, then accessing it from multiple clients in the way you described will cause, or may already have caused, data corruption.

You will need to set up some sort of cluster-aware filesystem - https://en.wikipedia.org/wiki/Clustered_file_system
There is nothing that's prepackaged into Proxmox, so you will need to install/configure/maintain it on your own.


 
Yes, the mpath storage is external storage, connecting to each node with multipath iSCSI connections.

I didn't do a whole lot when trying to access this storage in the way I described...pretty much just tried it, saw it wasn't sharing in the way that I wanted, and reversed everything I did. The few VMs that I have running on that storage are all still running fine.

Is there a way to run a consistency check to ensure there's no data corruption issues? Or if there were possibly data corruption issues, would they only potentially affect a specific VM, and not the integrity of the underlying storage? I'm fine with nuking a VM or two if that's the extent of a potential issue.

I really didn't think that just using some of the shared storage for each of the nodes directly was going to be such a big deal... installing a clustered FS seems even more obnoxious than just spinning up a VM for an NFS share. At least I can use PBS to back up the VM housing the share...

I appreciate your info. I've been having a hell of a time wrapping my brain around everything Proxmox...it's almost like there's two Proxmox's...there's the Proxmox that neatly fits within the GUI and everything just works, and that's what a lot of documentation, Youtube videos, and online training courses cover. And there's the Proxmox where it's the IT equivalent of hardcore off-roading, and you're breaking axles and drive shafts, and hopefully ya get back onto a paved road at some point lol. The transition from one Proxmox to the other has been like running headfirst into a brick wall.
 
I didn't do a whole lot when trying to access this storage in the way I described...pretty much just tried it, saw it wasn't sharing in the way that I wanted, and reversed everything I did.
There is information missing in your original post, so I don't have a full picture of what you did. It seems you created a separate LVM thick(?) slice, placed a filesystem on it, and tried your access. If so, then it doesn't matter if data is inconsistent on that slice. Presumably you have already destroyed it and/or it contains no useful data.

The few VMs that I have running on that storage are all still running fine.
Again, presumably using LVM thick? If so, you are OK. You don't need to worry about VM data consistency - Proxmox ensures that only one node at a time is accessing the LVM slices.
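For context, a shared thick-LVM storage entry in /etc/pve/storage.cfg looks roughly like this (names taken from this thread); the shared flag is what tells Proxmox the VG is visible from every node, so it coordinates which node activates which LV:

Code:
lvm: LVM-SYN-SSD
        vgname mpath0-VGSSD
        content images,rootdir
        shared 1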

installing a clustered FS seems even more obnoxious
Truly shared file storage needs to be arbitrated. Hyper-V and ESXi have built-in cluster-aware file systems and use SCSI persistent reservations. Proxmox does not have a built-in equivalent.

...it's almost like there's two Proxmox's
You can say that about almost any product. There are tested and supported scenarios, and there are advanced/custom situations. For Proxmox, Ceph is the storage choice for most people. It provides both block and file shared storage on top of an object repository. It comes neatly wrapped in Proxmox infrastructure, with all of its pros and cons. The main con for you is that it's not meant to be used with the storage product you've already invested in.
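For reference, with a hyperconverged Ceph setup the corresponding entries in /etc/pve/storage.cfg would look something like this (storage and pool names here are purely illustrative): an RBD storage for VM/CT disks and a CephFS storage for ISOs and templates:

Code:
rbd: ceph-vms
        pool vm-pool
        content images,rootdir

cephfs: ceph-files
        path /mnt/pve/ceph-files
        content iso,vztmpl,backup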


 
There is information missing in your original post, so I don't have a full picture of what you did. It seems you created a separate LVM thick(?) slice, placed a filesystem on it, and tried your access. If so, then it doesn't matter if data is inconsistent on that slice. Presumably you have already destroyed it and/or it contains no useful data.


Again, presumably using LVM thick? If so, you are OK. You don't need to worry about VM data consistency - Proxmox ensures that only one node at a time is accessing the LVM slices.
I didn't create any LVM slice or anything. I just navigated to /dev/mpath0-VGSSD, saw that's where my VM files were, created a /sharedstorage folder there, then in the GUI, I went to Datacenter > Storage, and added a Directory for disk images and container templates, pointing to the /dev/mpath0-VGSSD/sharedstorage location I just created. I did the same thing on another node as well, then I downloaded a container template into that directory, saw that the other nodes couldn't see it, then removed everything, because it clearly wasn't accomplishing what I wanted.

EDIT: Just to potentially clear up any misunderstanding, the storage was initially set up as LVM. Aside from using that storage as a location to house my VM disks, I haven't done anything further with it other than the procedure I described above.

A truly shared file storage needs to be arbitrated. HyperV and ESXi have built-in cluster aware file systems and use SCSI Persistent reservation. Proxmox does not have a built-in equivalent.
I wouldn't have thought twice about this with Hyper-V or ESXi, so I mistakenly assumed that it wouldn't be a big deal with Proxmox either. That'll show me lol.

You can say that about almost any product. There are tested and supported scenarios and there are advanced/custom situations. For Proxmox Ceph is a storage choice for most people. It provides both block and file shared storage on top of object repository. It comes neatly wrapped in Proxmox infrastructure with all of its pros and cons. The main con for you is that its not meant to be used with the storage product you've already invested in.
To my understanding, for us to implement Ceph, we'd have had to get drives to meet our storage needs, for every node in our cluster. The boss was all "Surely we can just get a fast NAS and just have all the nodes point to it", and I honestly didn't have enough info to make a case one way or the other.
 
I just navigated to /dev/mpath0-VGSSD, saw that's where my VM files were, created a /sharedstorage folder there,
What you described is not possible under normal circumstances.
/dev is a special location in Linux: https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/dev.html
/dev/mpath0-VGSSD is a block device; it contains no filesystem for a directory to be created on. Regardless, that's not how one achieves shared storage, as you already realized.
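A quick, read-only way to see that layering for yourself (nothing here writes anything):

Code:
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT   # shows the disk -> multipath -> logical volume tree; note there is no mounted filesystem anywhere in that stack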

To my understanding, for us to implement Ceph, we'd have had to get drives to meet our storage needs, for every node in our cluster.
Yes, Ceph's architecture is completely different from what you have now. Since it sounds like a business environment, you should check whether your existing external storage provides NFS or CIFS; if it doesn't, you can buy a cheap NAS box. If you want high availability, then that will cost you more.
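Either protocol plugs straight into the cluster once the share exists; for example CIFS/SMB, with placeholder names and address:

Code:
pvesm add cifs nas-templates --server 192.0.2.20 --share proxmox --username svc-pve --password <password> --content iso,vztmpl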


 
What you described is not possible under normal circumstances.
/dev is a special location in Linux: https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/dev.html
/dev/mpath0-VGSSD is a block device; it contains no filesystem for a directory to be created on. Regardless, that's not how one achieves shared storage, as you already realized.
If I had to guess, I'm going to assume it has something to do with my iSCSI/MPIO config, and the PV, VG and LVM created on top of that.

/dev/mapper/HDDmpath0 or /dev/mapper/SSDmpath0 seem to be the block devices you're thinking of...I think those are the physical volumes. mpath0-VGSSD or mpath0-VGHDD are the volume groups created on top of that, and then I've got LVM on top of that. (LVM-SYN-HDD and LVM-SYN-SSD).

Gonna guess that's probably wrong too, but that's the only way I've ever been able to make shared iSCSI storage work in a cluster.
Yes, Ceph's architecture is completely different from what you have now. Since it sounds like a business environment, you should check whether your existing external storage provides NFS or CIFS; if it doesn't, you can buy a cheap NAS box. If you want high availability, then that will cost you more.
The NAS just does iSCSI or Fibre Channel.
 
mpath0-VGSSD or mpath0-VGHDD are the volume groups created on top of that, and then I've got LVM on top of that
LVM as a suite of software consists of three things at a high level: Physical Volumes (PVs), Volume Groups (VGs), and Logical Volumes (LVs). PVs and LVs are block devices; a VG groups PVs together. You can only create a file (or a directory, which is a special file) on top of an LV that has been formatted with a filesystem and mounted.
You can't format or place files directly on PVs or VGs.
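To make that layering concrete, the generic sequence on a throwaway disk looks like this (device and names are placeholders; not something to run against the shared SAN):

Code:
pvcreate /dev/sdX                      # block device -> physical volume
vgcreate demo-vg /dev/sdX              # physical volume(s) -> volume group
lvcreate -L 10G -n demo-lv demo-vg     # carve a logical volume out of the VG
mkfs.ext4 /dev/demo-vg/demo-lv         # only the LV gets a filesystem
mkdir -p /mnt/demo
mount /dev/demo-vg/demo-lv /mnt/demo   # only now can files and directories be created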
Gonna guess that's probably wrong too, but that's the only way I've ever been able to make shared iSCSI storage work in a cluster.
No, you did it right. As long as the LVM is thick, you are good.


 
Thanks for the clarification. Am I to assume that my little experiment to try and create shared storage didn't actually damage anything then?

And yes, the LVM is thick so no issues there.
 
I appreciate your help. Thank you.

There's just so many moving parts to this project, and for every thing that I know, there's like 15 things that I don't know, so there's a lot of learning as I go.
 
