Which filesystem on shared storage?

joblack

Apr 16, 2017
I want to try out a shared storage solution with a Synology SAN and iSCSI with snapshots.

As far as I have read on the Storage wiki page, there are two stable solutions that could make this happen:
  • LVM-thin over iSCSI
  • ZFS over iSCSI
Questions
  • Any recommendations on which one should be tried?
  • Performance and resource differences?
  • Which is more future-proof?
 
The Synology SAN does not support ZFS, so ZFS over iSCSI is not an option for you.

You can use iSCSI natively or LVM on top of iSCSI.
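Roughly, that LVM-on-iSCSI setup looks like this on the Proxmox side; the storage names, IP address and IQN below are just example values, adjust them to your Synology target:

    # Add the Synology iSCSI target as a base storage (not used for disks directly)
    pvesm add iscsi syn-iscsi --portal 192.168.1.50 --target iqn.2000-01.com.synology:nas.target-1 --content none

    # Put a volume group on the exported LUN (check /dev/disk/by-path/ for the real device name)
    pvcreate /dev/disk/by-path/ip-192.168.1.50:3260-iscsi-iqn.2000-01.com.synology:nas.target-1-lun-1
    vgcreate vg_synology /dev/disk/by-path/ip-192.168.1.50:3260-iscsi-iqn.2000-01.com.synology:nas.target-1-lun-1

    # Add the volume group as a shared LVM storage for VM disks
    pvesm add lvm syn-lvm --vgname vg_synology --shared 1 --content images

Plain LVM volumes like these do not give you snapshots, though; more on that further down.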
 
Second vote here for NFS. Synology NFS is a nice, solid storage pool for Proxmox. Thin provisioning for VMs works inherently, 'just because', and performance is much better than seems fair given the cost of the Synology box. Your main bottleneck will be the NIC (i.e. gigabit Ethernet, most likely?), but even that is sufficient for your average test lab or even production workloads, as long as it is not many I/O-intensive VMs pounding away on the gigabit NFS mount.

I would definitely avoid stacking / adding complexity (i.e. iSCSI on the Synology with something else on top of that from the Proxmox side). "Keep it simple" is often a good thing, IMHO.

If you really want iSCSI, just use iSCSI on the Synology (which works OK as well, but in my testing I ironically find that NFS performs better than iSCSI on the same hardware: Proxmox host, NIC, switch, Synology, disks, etc.). So I just don't bother with iSCSI any more; NFS is simpler, performs better, is easier to manage, gives native access to the data from the Synology side, and so on.
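For what it's worth, hooking the Synology NFS export up to Proxmox is about a one-liner; something roughly like this (server address and export path are just example values):

    # Add the Synology NFS export as storage for VM disks and backups
    pvesm add nfs syn-nfs --server 192.168.1.50 --export /volume1/proxmox --content images,backup

Disks you create on it in qcow2 format are thin-provisioned and snapshot-capable out of the box.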

Tim
 
LVM-thin only works on local storage. Never use it on shared storage.

In the Wiki documentation it says

It is possible to use LVM on top of an iSCSI storage. That way you get a shared LVM storage.

So it should work?
 
Yes Tim, the NICs are gigabit, but we run VMware vSphere on them right now, so there shouldn't be any performance difference.

Isn't it completely irrelevant which filesystem is used over iSCSI? iSCSI is just a method to attach "physical" storage, so ZFS should be completely transparent to the Synology NAS?

iSCSI without any further options won't support snapshots? And NFS won't support them either?
 
In the Wiki documentation it says

It is possible to use LVM on top of an iSCSI storage. That way you get a shared LVM storage.

So it should work?

LVM is NOT the same as LVM-thin.
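To illustrate, in /etc/pve/storage.cfg they are two different storage types (the names and volume groups below are just examples). Plain LVM can sit on a shared iSCSI LUN, but has no snapshot support:

    lvm: syn-lvm
        vgname vg_synology
        shared 1
        content images

while LVM-thin gives you thin provisioning and snapshots, but only on local disks:

    lvmthin: local-thin
        vgname pve
        thinpool data
        content images,rootdir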
 
Just looping back to the original question, maybe for clarity, I would say:

Original questions:
  • Any recommendations on which one should be tried?
  • Performance and resource differences?
  • Which is more future-proof?
Summary of answers to date:
  1. LVM-thin is not the same as LVM. Don't confuse them just because both have "LVM" in the name.
  2. LVM-thin is only for use on local storage.
  3. TDC comment: I am not sure LVM-thin is fully production ready. Last time I tested it, there was pain (inconsistent behaviour under the hood, such that I was not willing to use it anywhere outside of playground testing with throw-away VMs). Maybe someone else can post back on this thread regarding the current production-ready status of LVM-thin?
  4. Generally speaking, I *think* that snapshots are implemented in Proxmox as a tie-in with the underlying storage's snapshot feature (LVM-thin or ZFS). So broadly speaking, if you want snapshots, the VMs being snapshotted must be stored on a storage type that supports them.
  5. If the Synology supports snapshots of iSCSI LUNs under the hood and it is not implemented in a standards-based way (i.e. Linux LVM), then you would presumably have access to iSCSI snapshots from the Synology admin interface, but with no integration at the Proxmox level, i.e. you can't see the snapshot status of a Synology iSCSI volume from the Proxmox side. Even if Synology does implement iSCSI snapshots under the hood with Linux LVM, I would guess it is not done in a way that lets Proxmox manipulate those LVM snapshots. All the Proxmox host will see is an iSCSI target where blocks are written and read.
  6. Maybe you can test some different config scenarios and then summarize back to this thread what the outcomes of your tests are (see the sketch after this list).
  7. Generally speaking, my preference is to 'keep it simple' and avoid adding complexity where possible, in order to keep an environment stable, smooth-running, and easier to maintain over the longer term. Putting a ZFS filesystem on top of an iSCSI-mounted block device is possible, but I'm guessing that in that config it can no longer be a shared storage target (other Proxmox nodes won't have visibility into the ZFS filesystem, which is required for locking separate VM volumes to make it 'SAN access friendly'). So then you are gaining little other than maybe ZFS OS-level snapshot features for the one host using that storage. My general understanding of a Proxmox iSCSI (or direct-attached Fibre Channel SAN, for that matter) shared storage target is that Proxmox puts an LVM volume group onto the (iSCSI or FC SAN) block device in such a way that any attached Proxmox host can determine who has 'ownership' (RW access) of a given VM's disk, and thus you get safe shared storage with only a single Proxmox node attached to each VM at any given time. If you subsequently add another layer of LVM inside LVM, I think it gets messy/gross quickly.
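A rough sketch of the kind of test I mean (VM ID and snapshot name are made up); on a storage type without snapshot support, the first command simply refuses with an error:

    # Try a live snapshot of a throw-away test VM on the candidate storage
    qm snapshot 100 test-snap

    # If it worked, roll back and you have your answer for that config
    qm rollback 100 test-snap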
Anyhow, maybe I'm missing the point of all your questions. At the end of the day, if you want to test and summarize your findings back here, I am sure people will be happy to see it laid out clearly: "this config is not shared-storage-capable; that config does not support Proxmox snapshot integration; this other config lets me do snapshots at the Synology level only, not Proxmox-integrated; yet another config is utterly broken and does not work", etc.

And possibly someone else will add more clarification on this thread as it keeps puttering along (i.e. my post here is maybe quite a mess).

Tim
 
Thank you for your answers.

Tim, I am referring to the official storage table:

https://pve.proxmox.com/wiki/Storage

As far as I understand, snapshots are done at the Proxmox level and the Synology NAS only provides the iSCSI functionality? And as far as I have read, ZFS over iSCSI supports snapshots? So there seems to be no other way to get snapshots with shared iSCSI NAS storage?

I am especially wondering because VMware vSphere has supported snapshots on commercial iSCSI shared storage for years now (without snapshot support on the hardware side). I thought Proxmox should be, and is, an adequate substitute for that product.

So if I want stable shared storage with snapshots, I have to build my own Ceph/RBD or iSCSI-with-ZFS Linux cluster and cannot use commercial NFS / SAN products? That sounds a little bit odd to me. ;)
 
Hi dcsapak,

thanks for your answers.

In that case your documentation is not completely correct. Under

https://pve.proxmox.com/wiki/Storage

it says NFS cannot use snapshots.

Maybe there is a misunderstanding. I am talking about snapshots of the state (memory, hard disk, ...) of a virtual machine, not of raw (disk) partitions. This has worked in VMware since ESX 3.0.

At the moment we still use vSphere 4.1 and want to migrate to another system (preferably Proxmox), and snapshots work with VMFS over iSCSI in vSphere 4.1 without problems.
 
In that case your documentation is not completely correct. Under

https://pve.proxmox.com/wiki/Storage

it says NFS cannot use snapshots.

Well, it is not incorrect either.

It is true that the storage itself does not support snapshots (the same as for "directory" storage), but it also says it is a "file"-type storage, which supports qcow2, which in turn supports snapshots.
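For example (storage name, VM ID and size are placeholders), allocating a disk in qcow2 format on an NFS storage:

    # Create a 32 GB qcow2 disk for VM 100 on the NFS storage "syn-nfs"
    qm set 100 --scsi0 syn-nfs:32,format=qcow2

A VM whose disks are all qcow2 (or on another snapshot-capable storage) can then be snapshotted from the GUI or CLI.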

Maybe there is a misunderstanding. I am talking about snapshots of the state (memory, hard disk, ...) of a virtual machine, not of raw (disk) partitions. This has worked in VMware since ESX 3.0.

At the moment we still use vSphere 4.1 and want to migrate to another system (preferably Proxmox), and snapshots work with VMFS over iSCSI in vSphere 4.1 without problems.

OK, then this works differently in Proxmox VE: for us, a snapshot is config + disk + (when selected) memory, so for snapshots to work, the underlying storage/disks have to support it.
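For example (VM ID and snapshot name are made up):

    # Snapshot config + disks, and with --vmstate 1 also the memory/runtime state
    qm snapshot 100 before-upgrade --vmstate 1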
 
Well, it is not incorrect either.

It is true that the storage itself does not support snapshots (the same as for "directory" storage), but it also says it is a "file"-type storage, which supports qcow2, which in turn supports snapshots.

Well, then that's a little bit confusing. :)

KVM supports live snapshots (with qcow2), as you have mentioned. Wouldn't that be exactly the functionality somebody wants with VMs?

Maybe it would be a good idea to extend the documentation a little bit more so that the snapshot functionality gets more background information.
 
Wouldn't that be exactly the functionality somebody wants with VMs?

I feel there is still some misunderstanding here.

We fully support "live snapshots", but as I said, we need this to work on the underlying storage, whether that is a qcow2 file, Ceph, or something else.

Maybe it would be a good idea to extend the documentation a little bit more so that the snapshot functionality gets more background information.
I agree the documentation could be better, but we are continuously working on that (and I will probably send a patch later today updating the snapshot/storage documentation for this).
 
I feel there is still some misunderstanding here.
We fully support "live snapshots", but as I said, we need this to work on the underlying storage, whether that is a qcow2 file, Ceph, or something else.

OK, thanks. I will try it out.
 

Thanks for pointing out that Storage wiki page, joblack; it is more recent and I had not read it before. From reading the thread this morning, I gather there will be even more updates to clarify in which scenarios snapshots are supported.

Broadly speaking, I think you will find that the features you need are possible, and that Proxmox is certainly a viable VM platform to replace an old VMware 4.x environment.

In my own experience, snapshots are relatively unimportant, actually, which is part of why I have to re-read the documents a bit more, I guess. While snapshots are interesting in theory, in practice I hardly ever use them. As long as I have sufficient regular backups that 'just happen', and a stable environment, on-demand snapshots don't come into play in any of the environments I manage.

Anyhow, this is a good thread, especially if it results in more good discussion and some updates to the documentation to clarify more precisely which features are available in which configs! :)

Tim
 
Hi Tim,

of course snapshots are not a replacement for backups, but they were (and are) essential for some systems. If you have ever f*cked up a JIRA or Confluence update, you'll know why it is more fun to roll back to an earlier snapshot than to reinstall and copy the backup back in. :D

Full VM backups are a specialty of Proxmox, so that use case would go a lot more easily, but as far as I know snapshots are also used for automatic Proxmox backups? If not, Proxmox would have to shut down, back up and restart every VM along the way?
 
Hi, yes, it is true, snapshots can be useful for short on-demand install/config work if used properly (i.e. delicate installs of PITA apps). I've also seen snapshots used poorly so many times that I am sometimes leery about their use. (Ever heard the story about the VM with a snapshot left running for 18 months, by a sysadmin who didn't know why that was a bad idea? Great fun when the underlying VM storage filled up with all the copy-on-write deltas. Or when people think it is a good idea to gradually accumulate nested snapshots 12 layers deep over a 6-month period. Yay, what fun!)
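At least in Proxmox it is easy to spot and clean up forgotten snapshots (VM ID and snapshot name are examples):

    # See what snapshots a VM has quietly accumulated
    qm listsnapshot 100

    # Remove one that has outlived its purpose
    qm delsnapshot 100 before-upgrade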

I also agree, you are correct that full backups in Proxmox use a snapshot as a way to allow a 'live' backup without powering off the VM. This 'just works fine' for all the deployments I've got, so it is not really a topic I gave much consideration to, or worried about what may or may not be in the official documentation. So at the end of the day, I think that if the docs are updated for clarity, that will be a good thing! :)
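For reference, that live backup is vzdump's "snapshot" mode; roughly like this (VM ID and target storage are example values):

    # Live backup of VM 100 without powering it off
    vzdump 100 --mode snapshot --storage syn-nfs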

Tim
 
