iSCSI any way to get Snapshots?

tomstephens89
Mar 10, 2014
Kingsclere, United Kingdom
Hi there,

I am about to perform a restructure of our virtual environment and as such upgrade to 4.1. I currently present iSCSI to 16 servers, have a multipath device configured and LVM on top of that.

Obviously I can't have snapshots on LVM, since it requires raw VM disks.

I just wondered if there is anything I can do with this storage to get snapshot functionality? I could put a front end in front of the storage array and present NFS or something else, but I'd rather not put a 'man in the middle', since I like to use the dual active controllers for multipathing on my SAN.
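For reference, a setup like the one described above roughly looks like this on each node (the WWID, device alias, VG name and storage ID below are made-up placeholders, not taken from the actual environment):

```shell
# /etc/multipath.conf (excerpt) -- give the SAN LUN a stable alias
# (the WWID here is a placeholder; use the one multipath -ll reports)
multipaths {
    multipath {
        wwid  3600c0ff000d823e5aabbccddeeff0011
        alias p2000-lun0
    }
}

# LVM sits directly on the multipath device
pvcreate /dev/mapper/p2000-lun0
vgcreate vg_san /dev/mapper/p2000-lun0

# /etc/pve/storage.cfg entry: shared LVM, which only offers raw disks,
# hence no snapshot support
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
```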

I have read about ZFS over iSCSI, but I am not sure how this works or whether it will work with my HP P2000 SAN. It also appears not to work with multipathing?

Thanks
Tom
 
Unfortunately, I cannot give a good answer to that, but I'm interested in a good one, too. I have a similar setup with an FC-based, multipathed EVA.

What I use for some machines is an HA 'man in the middle' VM which runs on LVM, but has ZFS inside for local containers and also NFS and iSCSI exports. This is of course less than optimal, but it works.

I played around with GFS2, but it crashed my nodes.

I'd really love to have snapshots, but I think this is not going to happen soon, or ever.
LVM snapshots work differently from QCOW2 or ZFS snapshots and therefore cannot be used in a VM environment the way the other two can. ZFS is a single-host filesystem and is intended that way. QCOW2 can be used over NFS, but there is no HA there per se.
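To illustrate the difference at the command line (volume, VG and image names below are made up; this is a sketch, not a recipe):

```shell
# LVM: a snapshot is a separate copy-on-write volume in the same VG.
# On *shared* storage this is unsafe without cluster-wide coordination,
# which is why Proxmox does not offer it for shared LVM.
lvcreate --snapshot --size 10G --name vm-100-snap /dev/vg_san/vm-100-disk-0

# QCOW2: the snapshot lives inside the image file itself, so qemu can
# manage it -- but the image must sit on a filesystem (e.g. NFS),
# not on a raw LVM logical volume.
qemu-img snapshot -c before-upgrade /mnt/pve/nfs-store/images/100/vm-100-disk-0.qcow2
qemu-img snapshot -l /mnt/pve/nfs-store/images/100/vm-100-disk-0.qcow2
```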
 

So it seems like the only option if I want to use QCOW2 snapshots is to serve the iSCSI LUNS up to a 'gateway' server of some sort, then kick it out to Proxmox as NFS?
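The gateway idea would look roughly like this (hostnames, IPs, paths and storage IDs are hypothetical placeholders):

```shell
# On the gateway host: import the LUN via multipath, put a filesystem
# on it and re-export it over NFS
mkfs.ext4 /dev/mapper/p2000-lun0
mkdir -p /export/vmstore
mount /dev/mapper/p2000-lun0 /export/vmstore
echo '/export/vmstore 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# On Proxmox, /etc/pve/storage.cfg entry -- qcow2 images (and therefore
# snapshots) become possible, at the cost of a single gateway in the path
nfs: gateway-nfs
        path /mnt/pve/gateway-nfs
        server 10.0.0.50
        export /export/vmstore
        content images
```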

Doesn't seem worth compromising the integrity of my storage subsystem just for the sake of snapshots, if I'm honest...
 
I can understand. Most of my VMs are also directly on LVM. It's really a pity... especially since VMware has supported this for ages.
 
Tom, have you tried ZFS on top of iSCSI (with multipathing)?
I see no reason for it not to work. If it works with iSCSI, it should work even if the iSCSI is multipathed.
 
The thing with ZFS-over-iSCSI is that ZFS is running on the storage server; you can use ZFS commands, but they are executed on the server side.

Of course you can use ZFS on iSCSI, but even then it is still not a clustered filesystem.
 
I understand. So basically no live migration with ZFS over iSCSI.
Live migration (and potentially HA) can be done when there is LVM (with Proxmox's locking mechanism) or NFS. Right?
Are there any other options for live migration other than LVM or NFS?
 
I got excited about using ZFS on my iSCSI LUNs until I read that it does not work as a clustered file system, so I would be unable to share the storage across hosts and use live migration.

The wiki also says multipath devices are not supported with ZFS.
 
I understand. So basically no live migration with ZFS over iSCSI.
Live migration (and potentially HA) can be done when there is LVM (with Proxmox's locking mechanism) or NFS. Right?
Are there any other options for live migration other than LVM or NFS?
This is not correct. Live migration, snapshots and HA are fully supported by ZFS over iSCSI. The only thing you don't get is an HA storage appliance. Getting a ZFS-based HA storage appliance requires commercial products like Nexenta, Zeta or RSF-1.

PS. Using multipath to a single storage appliance is not HA!
 
OK, I am confused now.

LnxBil says ZFS over an iSCSI target does not support live migration, because ZFS cannot be used on multiple nodes at the same time.

mir says ZFS over iSCSI does support live migration.

If we use LVM (with Proxmox locking or cLVM), I can understand how it would work. If we use a clustered file system (like GFS2), I could also understand how it would work. But I do not understand how it would work when we have the same filesystem (on the iSCSI target) mounted on multiple Proxmox VE nodes at the same time. It would be awesome to have it, and if you can, please elaborate a bit more, mir.
 
The difference between multipath iSCSI LUNs and ZFS over iSCSI is that with multipath iSCSI LUNs the storage is accessed locally, while with ZFS over iSCSI it is accessed remotely on the storage appliance. So with ZFS over iSCSI, live migration is simply a matter of transferring the running instance between nodes while maintaining the same storage connection, since the connection is a single path.

With multipath iSCSI LUNs, live migration means transferring the running instance between nodes while at the same time transferring the storage connection, since the same block device resides on both nodes.

Someone with better English skills could probably give a better and more precise explanation ;-)
 
OK, I am confused now.

LnxBil says ZFS over an iSCSI target does not support live migration, because ZFS cannot be used on multiple nodes at the same time.

mir says ZFS over iSCSI does support live migration.

The problem is that the term 'ZFS over iSCSI' is misleading. @mir and I mean the same thing. The term Proxmox uses, 'ZFS over iSCSI', means that you have ZFS on your SERVER, not on your local Proxmox machine. It is therefore seen as ordinary iSCSI block storage and can be used that way (also multipathed; it does not matter how many paths there are), including live migration etc. The point is that you get all ZFS features as if it were local ZFS, but server-based (you can create snapshots, etc.), and you can attach it to a cluster environment.

Yet, as mir already said, this is not HA, because ZFS is "normally" not a cluster filesystem (on the SERVER side). You need to buy a storage appliance which can offer HA while using ZFS.
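For anyone finding this later, a ZFS over iSCSI definition in /etc/pve/storage.cfg looks roughly like the sketch below. The pool, portal, target IQN and provider are placeholders; the storage box must actually run ZFS and accept management commands (over SSH) from the Proxmox nodes, which a hardware SAN like the P2000 cannot do:

```shell
# /etc/pve/storage.cfg (excerpt) -- hypothetical 'ZFS over iSCSI' entry.
# Proxmox creates one zvol per VM disk on the remote pool and hands it
# to qemu as a single-path iSCSI LUN.
zfs: zfs-san
        pool tank/vmdata
        portal 192.168.10.20
        target iqn.2014-03.com.example:storage
        iscsiprovider comstar
        blocksize 4k
        content images
```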
 
I think of it more as only half-named. What actually happens is that qemu accesses a remote disk using the iSCSI protocol, bypassing the local VFS and block layer by means of libiscsi -> https://github.com/sahlberg/libiscsi

I did not know that it bypasses the general kernel iSCSI layer. Thank you for the clarification. Yet it seems that it does not support multipath, does it?
 
I did not know that it bypasses the general kernel iSCSI layer. Thank you for the clarification. Yet it seems that it does not support multipath, does it?
No; part of the storage URL includes the host (DNS or IP), which makes multipath difficult (remember the disk is not locally available). On the other hand, multipath on the storage appliance itself is of course supported.
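Concretely, qemu is handed an iscsi:// URL and opens the LUN itself via libiscsi, so no /dev/sdX or multipath device ever appears on the Proxmox node. The portal, IQN and LUN number below are illustrative:

```shell
# One portal, one target IQN, one LUN -- the URL format leaves no room
# for a second path, which is why libiscsi-based access is single-path
qemu-system-x86_64 \
    -drive file=iscsi://192.168.10.20/iqn.2014-03.com.example:storage/1,format=raw
```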
 
