Multipathing with SAS storage but still be able to snapshot

RoxyProxy

New Member
Aug 19, 2024
Hi there,
I currently have a Proxmox cluster with 2 identical hosts with SAS cards and an IBM Storwize V3700 V2 storage system, with each storage canister directly attached to both hosts via SAS cables.

I was able to successfully configure multipathing and create an LVM volume group on it; however, as the documentation says, I'm not able to take snapshots of VMs on LVM.
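In case it helps, my setup looks roughly like this (a sketch from memory; the WWID, alias and names are placeholders for my real ones):

Code:
# /etc/multipath.conf (minimal sketch; get the real WWID from `multipath -ll`)
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        wwid  36005076300800000e000000000000001   # placeholder WWID
        alias v3700_lun0
    }
}

Code:
# Thick LVM on the multipath device, then added to PVE as shared LVM storage
pvcreate /dev/mapper/v3700_lun0
vgcreate vg_v3700 /dev/mapper/v3700_lun0
pvesm add lvm v3700-lvm --vgname vg_v3700 --shared 1 --content images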

Sadly I wasn't able to find much about SAS-attached storage with Proxmox; most of what I found was about iSCSI.

What ways are there to have the multipathed directly connected storage and still be able to take snapshots?

Sorry, I'm not very versed in storage and filesystem stuff and so on, nor with Proxmox tbh, but I'm getting there xD

Any help would be greatly appreciated
 
What ways are there to have the multipathed directly connected storage and still be able to take snapshots?
The primary way is to ditch LVM and implement a cluster-aware filesystem (OCFS2 or GFS2). Technically, you could place the cluster filesystem on top of LVM as well.

Note that cluster-aware filesystems are not directly supported by PVE, nor do they have a GUI button to make them just work. There are many guides online that can walk you through it. There have been several threads about this in the last 7 days; I'd recommend reviewing them.

Good luck


 
Whether you use iSCSI, FC or SAS does not matter; they are just different transport protocols for SCSI.

Why do you necessarily need snapshots? Most people only want them because they are used to them from VMware.
You can still perform snapshot-mode backups even with LVM storage that has no snapshot functionality. If you would typically take a snapshot before a software update, you are better off taking a backup instead. You can restore it if the update ruins everything, but usually only individual files or settings are lost, and those are much easier to restore from a backup than by rolling back a snapshot every time.
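For example, a snapshot-mode backup works on plain (thick) LVM too; a sketch, where the VMID and target storage name are placeholders:

Code:
# "snapshot" mode here uses QEMU's live-backup mechanism while the VM runs,
# not a storage-level snapshot, so it does not need LVM snapshot support
vzdump 100 --mode snapshot --storage backup-store --compress zstd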
 
I was able to successfully configure multipathing and create an LVM volume group on it; however, as the documentation says, I'm not able to take snapshots of VMs on LVM.
As @bbgeek17 said, thats because thats not an option. HOWEVER, it is possible to utilize the storage backend to accomplish this even if it will not be hosts aware. I'm not familiar with the Storwize productline but I have to imagine there is an API available for you to call and issue a hardware level snapshot. It would necessarily be a hack you would have to develop.
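Purely as a hypothetical sketch of what such a hack might look like (I'm assuming Spectrum-Virtualize-style FlashCopy commands over SSH; the volume names, mapping name, and even the commands themselves are assumptions to verify against the V3700's own CLI reference):

Code:
# Hypothetical: trigger a hardware FlashCopy from a PVE node via the
# controller's SSH CLI -- unverified against a V3700 V2, check IBM's docs
ssh superuser@v3700 "mkfcmap -source vm_lun -target vm_lun_snap -copyrate 0 -name vm_lun_map"
ssh superuser@v3700 "startfcmap -prep vm_lun_map"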
Note that cluster-aware filesystems are not directly supported by PVE, nor do they have a GUI button to make them just work. There are many guides online that can walk you through it. There have been several threads about this in the last 7 days; I'd recommend reviewing them.
I'd add that the available options on Linux/Debian (OCFS2/GFS2) are not well implemented and suffer from various limitations and poor performance.
Why do you necessarily need snapshots? Most people only want them because they are used to them from VMware.
Snapshots are FAR easier and faster to commit/revert/mount than backups. Consequently, they are essential when you have a service-level commitment. It has everything to do with scope; in the OP's case, I tend to agree with you.
 
I was able to successfully configure multipathing and create an LVM volume group on it; however, as the documentation says, I'm not able to take snapshots of VMs on LVM.
As the V3700 V2 is an external HW RAID system, you could even create an LVM-thin pool on the multipathed volumes to take snapshots of VMs through PVE, or use the snapshot functions inside the storage system. A cluster-aware filesystem with 2 nodes really makes no sense; just let one fail ...
 
Sure, this is why Dell, HP, Lenovo and others still sell HW RAID and customers still buy and use it, since it's been so bad these last decades
I may be missing your point, I apologize.

What I was saying is that using external storage that is simultaneously connected to more than one host (for the same LUN) and then placing a _thin_ LVM layer on that LUN will lead to data corruption.


 
Yes, that's right when both nodes use the volume at the same time ... but that's normal when doing HA NAS: if one node fails, the other takes over the volume and filesystem for exporting to the NAS clients.
But anyway, using LVM or LVM-thin on a volume underneath a filesystem (where one is used) is always an additional I/O layer, which can fail with unwritten blocks in a power outage ... but again, still assuming PVE users prefer block storage over file storage.
Personally, I find file storage much easier and its usage clearer to understand, as block storage looks more complex in space usage, trimming, etc.
 
Yes, that's right when both nodes use the volume at the same time ... but that's normal when doing HA NAS: if one node fails, the other takes over the volume and filesystem for exporting to the NAS clients.
This particular conversation is in reference to PVE, where multiple nodes in the cluster are accessing the LUN concurrently. It seems like you are thinking of a different product.


 
Sure, this is why Dell, HP, Lenovo and others still sell HW RAID and customers still buy and use it, since it's been so bad these last decades :)
If you use a hypervisor that works filesystem-centric, then such a storage is a great solution. For ESXi, Hyper-V and Xen it makes it easier to consume and set up.
Proxmox has just gone the block-centric way and can therefore achieve better performance, but it does not have a cluster filesystem in the repository for monolithic storages.
Neither is bad for its respective use case, and I have learned to love the Proxmox way. I achieve up to double the SQL I/O performance with identical hardware and VM sizing, just because you save the additional filesystem layer.
 
Hi again, it's been a while, but I've only now had time to read a bit more about the mentioned filesystems, and I have implemented OCFS2 now; it does seem to work.
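Roughly what I set up, in case someone finds it useful (a sketch from memory; node names, IPs and the device path are placeholders for mine):

Code:
# /etc/ocfs2/cluster.conf on both nodes (indent the attribute lines with a
# tab; names must match the real hostnames)
cluster:
	node_count = 2
	name = pvecluster

node:
	ip_port = 7777
	ip_address = 192.168.1.11
	number = 0
	name = pve1
	cluster = pvecluster

node:
	ip_port = 7777
	ip_address = 192.168.1.12
	number = 1
	name = pve2
	cluster = pvecluster

Code:
apt install ocfs2-tools
dpkg-reconfigure ocfs2-tools                         # enable the o2cb stack at boot
mkfs.ocfs2 -L pve-ocfs2 -N 2 /dev/mapper/v3700_lun0  # once, from one node only
mkdir -p /mnt/ocfs2
echo '/dev/mapper/v3700_lun0 /mnt/ocfs2 ocfs2 _netdev,defaults 0 0' >> /etc/fstab
mount /mnt/ocfs2
pvesm add dir ocfs2-store --path /mnt/ocfs2 --shared 1 --content images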

I do have problems with Windows 11 VMs, for example: it seems that because they have a TPM state disk, they cannot be snapshotted. Is there a way around that?

I have tried putting said TPM disk on local-lvm, which allows the VM to be snapshotted, but then I cannot migrate the VM to a different host anymore, and I have to put the TPM back onto the OCFS2 storage, for which I need to delete the snapshot.

I'm also going to try upgrading both hosts some time later (I haven't done the newest kernel update yet, so I'm interested whether OCFS2 keeps working or breaks, because I read that has happened in the past).

Any help would be appreciated
 
Apropos OCFS2: see the reflink feature for snapshot functionality in the man page.
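For example, something like this (a sketch; the paths are placeholders, and note that a clone of a running VM's disk is not crash-consistent):

Code:
# cp --reflink uses the OCFS2 reflink feature: a copy-on-write clone that is
# created near-instantly and shares blocks with the original
cp --reflink=always /mnt/ocfs2/images/100/vm-100-disk-0.raw \
   /mnt/ocfs2/images/100/vm-100-disk-0.raw.snap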
 
Apropos OCFS2: see the reflink feature for snapshot functionality in the man page.
Hi!
Thank you, this does seem like an interesting feature, but I'll be honest, it's a bit above my skill level.

I just want to be able to use the Proxmox snapshot function, as it has a GUI and is more intuitive for other users too.
 
Hahaha ... creating the OCFS2 filesystem isn't in the GUI either, but you went that way ... and even a script of your own could take the snapshots for you via cron, so once that's set up you don't need the GUI for it anymore :)
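A rough sketch of such a cron job (paths, VMID and schedule are placeholders, and again, a reflink of a running VM's disk is not crash-consistent):

Code:
# /etc/cron.d/ocfs2-snap -- nightly reflink "snapshot" at 02:00
# (% must be escaped as \% inside crontab entries)
0 2 * * * root cp --reflink=always /mnt/ocfs2/images/100/vm-100-disk-0.raw /mnt/ocfs2/snaps/vm-100-disk-0.$(date +\%F).raw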
 
Hi @bbgeek17
thank you for your help!
The guide I used for OCFS2 linked to this io_uring issue; I also noticed the VMs not starting with it, so I switched the disks' aio setting to threads, and it works smoothly so far.
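For anyone else hitting this, the change was just re-specifying each disk with aio=threads (VMID, storage and volume names here are from my setup):

Code:
# Override the io_uring default for an existing disk
qm set 100 --scsi0 ocfs2-store:100/vm-100-disk-0.qcow2,aio=threads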

I also saw the Bugzilla post about the TPM state disk as qcow2, so I guess there isn't anything I can do until a feature like that gets implemented?

I'm still basically searching for a way to use the hardware of an existing VMware environment for Proxmox, i.e. with 2 hosts + directly attached shared SAS storage, live migration and snapshot support :)

OCFS2 did seem promising even though it isn't officially supported, and it does seem to work just fine, but of course I hadn't researched this issue beforehand haha

I'm hopeful though, because as the Bugzilla report says, it does get annoying for users with Windows VMs, and this is an issue with NFS too if I understood correctly, so maybe sometime in the near future there will be a workaround for this.
 
