shared storage allowing VM and LXC snapshots

Just a quick question to be sure my understanding of https://pve.proxmox.com/wiki/Storage#_storage_types is right

I am looking to set up a shared storage that supports snapshots for both VMs and LXC containers.

As I understand it, the only options are:
  • CephFS
  • Ceph/RBD
  • ZFS over iSCSI
  • LVM over iSCSI
Am I right?
 
If I understand what I read on your website, you provide a driver that's a lot faster than Ceph/RBD. That's good to know.

But that wasn't my question. Right now I'm only doing some lab tests, so speed is not (yet) a critical factor. My question was to validate my understanding that the 4 options I listed are the only ones that support snapshots for both VMs and LXC containers and are shared.
 
LVM over iSCSI doesn't have snapshot support
 
Fabian, you are right. But LVM-thin does.

Question: for LVM, your storage page has this note: "It is possible to use LVM on top of an iSCSI or FC-based storage". I guess this statement is also valid for LVM-thin, right?
 
no - a shared LV is not allowed to be active on multiple nodes concurrently. since a thin pool is just a set of LVs under the hood, this would mean you could only use the pool actively on a single node, making it no longer shared.
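
to make that concrete, this is roughly how a thin pool would be created by hand (the VG, pool and volume names are made up here; PVE normally does this for you via the LVM-thin storage plugin):

  # the thin pool itself is an LV carved out of the VG
  lvcreate -L 500G --thinpool pool0 vg_san1
  # thin volumes then live inside that pool LV
  lvcreate -V 32G --thin -n vm-101-disk-0 vg_san1/pool0

since the pool and its metadata are ordinary LVs inside the VG, they can only be active on one node at a time.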
 
"a shared LV is not allowed to be active on multiple nodes concurrently"

I'm still not sure if I understand

your storage page states, for LVM (not LVM-thin):
"It is possible to use LVM on top of an iSCSI or FC-based storage. That way you get a shared LVM storage."

So, I understand that:
  • A shared storage can be set up using LVM over iSCSI
  • A shared storage cannot be set up using LVM-thin
  • And since only LVM-thin supports snapshots, setting up a shared storage that supports snapshots with LVM or LVM-thin is not possible.
Are these 3 statements correct?
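
For reference, by the first statement I mean roughly this kind of setup in /etc/pve/storage.cfg (the storage IDs, portal, target, LUN and VG name are placeholders):

  iscsi: san1
          portal 192.168.10.50
          target iqn.2018-02.com.example:storage.lun1
          content none

  lvm: shared-lvm
          vgname vg_san1
          base san1:0.0.0.scsi-<lun-wwid>
          shared 1
          content images,rootdir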
 
yes, exactly.

a bit more detail might help with understanding: with regular non-thin LVM, the VG is shared and active on all nodes (by being located on a PV on a shared block device), but the LVs used for guest volumes are only activated where and when needed. PVE takes great care to deactivate the guest volumes at the right moment when transferring logical ownership of a guest and its volumes from one node to another (e.g., during migration) so that each LV is never actively used on more than one node. with LVM-thin this does not work, as the thin pool itself is already stored on LVs.
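
at the LVM level the handoff looks roughly like this (the VG/LV names are made up, and PVE issues these commands itself during migration - they are only shown to illustrate the activation handoff):

  # node A, which currently runs guest 101, lets go of the volume first
  lvchange -an vg_san1/vm-101-disk-0
  # only then does node B activate the very same LV on its side
  lvchange -ay vg_san1/vm-101-disk-0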
 
Great explanation, thanks !

One last question. In the end, there are only 3 possible options for setting up a shared storage that supports snapshots for both VMs and LXC containers:
  • CephFS
  • Ceph/RBD
  • ZFS over iSCSI
According to this discussion, CephFS is not recommended for storing VM or LXC disks, but rather for ISO images, backups, and so on. That would leave me with two choices for storing my VMs/containers: Ceph/RBD or ZFS over iSCSI.

Now, I want to use hardware RAID on my storage servers (I'm not going into the ZFS vs. HW-RAID debate; as one of your staff members said yesterday, "to a certain degree it comes down to what you prefer", and I happen to prefer HW). Given that ZFS with HW-RAID is not recommended, my analysis boils down to the following statement:

If you need a shared storage that supports snapshots for both VMs and LXC containers, and is hosted on HW-RAID servers, then the best option would be to use Ceph/RBD.

Do you agree?
 
yes, if you don't mean "combine all disks into a huge one via HW-RAID and set up a single OSD on top of that". Ceph makes the HW RAID pretty much redundant (literally, as in, Ceph handles the redundancy and recovery in case of failed disks), and you might as well use a simple HBA instead. of course, if you already have a HW-RAID controller, using it to pass the OSD disks along as JBOD should be fine (Ceph adds another layer of indirection with LVM and then another one with bluestore on top, so it's not as close to the hardware as ZFS in that regard).
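
with the Ceph tooling integrated into PVE, the per-node setup boils down to something like this (the network and device names are just examples, and the exact subcommand spelling varies a bit between PVE versions):

  pveceph install                          # install the Ceph packages
  pveceph init --network 10.10.10.0/24     # once, on the first node
  pveceph mon create                       # a monitor on (at least) three nodes
  pveceph osd create /dev/sdb              # one OSD per raw disk, no RAID underneath
  pveceph osd create /dev/sdc
  pveceph pool create vm-pool              # pool that will hold the guest volumes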
 
JBOD + Ceph/RBD is the way to go then

Any reason (other than the cost) not to use RAID10 + Ceph/RBD?
 
This seems to answer my question:

Controllers

Disk controllers (HBAs) can have a significant impact on write throughput. Carefully consider your selection to ensure that they do not create a performance bottleneck. Notably RAID-mode (IR) HBAs may exhibit higher latency than simpler “JBOD” (IT) mode HBAs, and the RAID SoC, write cache, and battery backup can substantially increase hardware and maintenance costs. Some RAID HBAs can be configured with an IT-mode “personality”.
And my question has also already been answered by this staff member. So, basically, RAID1 and RAID10 aren't recommended with Ceph.

But I guess RAID 0 wouldn't hurt and would improve performance.
 
no, raid0 would not improve performance - ceph already does the redundancy and parallelization itself; if you do it a second time at a lower level, you only add latency.
 
OK !
JBOD + Ceph/RBD it is then

Thanks again for your time and your valuable information.
 
Almost there but not quite yet.
If I take HW RAID out of the picture, then ZFS over iSCSI becomes a candidate again.

So, between Ceph/RBD and ZFS over iSCSI, which one would you recommend in terms of:
- reliability
- performance
- learning curve (I know neither Ceph nor ZFS nor iSCSI; all my setups so far have been done with GlusterFS)
 
zfs over iscsi is not HA by default (there are some solutions for this, but they require quite a lot of knowledge or shelling out a big chunk of money for a proprietary solution). neither is trivial (although the ceph setup integrated into PVE makes it rather streamlined, you should be familiar with the concepts and technology in case you run into problems down the line!). if you are unsure, I'd go with Ceph ;)
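
for completeness, the resulting RBD storage entry in /etc/pve/storage.cfg of a PVE-managed (hyper-converged) cluster looks roughly like this (storage and pool names are placeholders) - with rootdir in the content list it serves both VM disks and container volumes:

  rbd: ceph-vm
          pool vm-pool
          content images,rootdir
          krbd 0

an external Ceph cluster would additionally need the monhost and username options plus a keyring file.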
 
