ZFS over iSCSI Network HA

velocity08

Hi All

I've been looking around and haven't found an answer yet.

I've noticed that the Proxmox wiki, which is quite out of date, mentions that MPIO is not available for ZFS over iSCSI.

Is LACP or NIC active/passive failover available for ZFS over iSCSI?

I'm considering using a FreeNAS/TrueNAS system with dual controllers for HA, which would mean two connections to each host NIC; if a controller fails, the other takes over.

Same scenario on the host side: if a NIC fails, it would be good to fail over to a standby NIC.

Is this available for ZFS over iSCSI in Proxmox?

If so, maybe the Proxmox wiki could be updated to reflect this and the feature added to the documentation, which doesn't seem to mention it; if it does, I wasn't able to find it easily.

""Cheers
G
 
Since LACP is a pure network thing and completely unrelated to iSCSI, it works 100%.
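
A minimal sketch of what that could look like on the Proxmox host side, in /etc/network/interfaces (the interface names eno1/eno2 and the address are placeholders, and the switch ports must be configured for 802.3ad/LACP):

Code:
# /etc/network/interfaces (sketch; names and address are placeholders)
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    # dedicated storage network carrying the iSCSI traffic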
 
Mir, have you played with ZFS over iSCSI?

I had noticed some caveats while reading forum posts, but the information is scattered everywhere; it would be great if all this info was collated in one place so users could make an informed decision about whether this is the right protocol for them.

Would love to hear some feedback from others who are using ZFS over iSCSI.

""Cheers
G
 
I wrote the code ;-)

Mir, I'm trying to get my head around the correct use case for ZFS over iSCSI.
Apart from:
- running on central storage
- native ZFS snapshots
- data scrubbing, etc.
- reduced ZFS memory overhead per host, leaving room to run more VMs (I think this is a big one)

Do live migrations work the same as with normal shared storage?
What about replication? Would this be handled at the Proxmox level or the NAS level?
What happens if a shared storage controller fails over to another controller? Will this be gracefully handled by the VM, and will it keep running?

I love ZFS, but it's not always practical to run it for everything (or maybe it is).

Thoughts?

""Cheers
G
 
Do live migrations work the same as with normal shared storage?
Yes, it uses the same migration features as any other supported storage in Proxmox.
What about replication? Would this be handled at the Proxmox level or the NAS level?
Replication is handled on the storage server, not by Proxmox (a rough sketch follows below).
What happens if a shared storage controller fails over to another controller? Will this be gracefully handled by the VM, and will it keep running?
What do you mean by 'shared storage controller fails over to another controller'?
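
On the replication point above: NAS-level replication with plain ZFS tooling could look roughly like this (the dataset names and the backupnas host are made up; TrueNAS would normally drive this through its own replication tasks rather than by hand):

Code:
# on the storage server (sketch; dataset names and target host are placeholders)
zfs snapshot -r tank/proxmox@rep-today
zfs send -R -i tank/proxmox@rep-yesterday tank/proxmox@rep-today \
    | ssh backupnas zfs receive -F backup/proxmox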
 
For example, you can get a TrueNAS shared storage box with dual controllers running in active/passive mode.

If one controller fails, the second takes over to minimise any potential outage. The second controller takes over in about 20 seconds, so iSCSI, if set up correctly, will keep working and the VM should resume as it was.
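
By "set up correctly" I mostly mean the open-iscsi timeouts on the Proxmox hosts; a rough sketch of the kind of settings involved (the values here are guesses, check what the array vendor recommends):

Code:
# /etc/iscsi/iscsid.conf on each Proxmox host (sketch; values are guesses)
# keep I/O queued long enough to ride out a ~20 s controller takeover
node.session.timeo.replacement_timeout = 60
# detect a dead connection and retry rather than failing the session outright
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 10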

Hope my explanation makes sense.

""Cheers
G
 
Why not LACP? If one NIC fails, the takeover will be instantaneous.
 
Hey, LACP is for the network side and, as you have advised, will work :)

This is more to do with the controller; in theory I don't see why it wouldn't work. It was more a question of whether you had tested something like the above and what the real-world results are.

All good, thanks for taking the time to reply. I'll keep researching :)

""Cheers
G
 
Use stackable switches and create an LACP bond with connections to more than one switch (obviously the storage box should likewise have connections to more than one switch) and you should be failure proof.
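
If the switches can't be stacked or MLAG'd, the active/passive option mentioned earlier is the usual fallback; a rough sketch of such a bond on the host (interface names and address are placeholders):

Code:
# /etc/network/interfaces (sketch; active/passive bond, no switch support needed)
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
    bond-primary eno1
    # only one link carries traffic at a time; the other is standby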
 
Yes, thank you, that was the intention for the stack.

dual-controller NAS
dual switches with LACP bond
10 or 25 Gb switches and NICs using LACP.

Will keep you posted :)

Appreciate your assistance.

""Cheers
G
 
I didn't fully understand.
On the NAS/SAN I create a RAID from the disks. I open access to one common iSCSI LUN for two Proxmox nodes. I create ZFS storage on some node. Will both nodes then work in HA mode with one ZFS file system, on which there are several zvols for the different virtual machines? Will they see each other's snapshots, etc.? Naturally, no single zvol is used by two nodes at the same time.
 

OK, with ZFS over iSCSI, Proxmox creates a LUN per VM disk (the correct term is zvol); this zvol is managed by Proxmox, snapshots are managed by Proxmox, etc.

Each zvol will have its own snapshots using ZFS snapshots.
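
For reference, this is roughly what such a storage definition looks like in /etc/pve/storage.cfg (the storage ID, pool, portal, target and the LIO provider here are placeholders; stock Proxmox ships providers such as comstar, istgt, iet and LIO, so a FreeNAS/TrueNAS box needs a provider that matches its iSCSI target):

Code:
# /etc/pve/storage.cfg (sketch; names and addresses are placeholders)
zfs: nas-zfs
    pool tank/proxmox
    portal 10.10.10.100
    target iqn.2003-01.org.linux-iscsi.nas:target1
    iscsiprovider LIO
    lio_tpg tpg1
    content images
    sparse 1
    blocksize 8k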

My understanding is that when you migrate a VM between hosts, the config file is passed to the other host and the VM starts running on host X. Somewhere there will be a sync of config files between the hosts, potentially stored locally on each host and kept in sync, which may be why SSH is currently required.
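
As a rough illustration of the migration part (the VM ID and node name are made up):

Code:
# live-migrate VM 101 to node pve2; only the config and memory state move,
# the disk stays on the shared ZFS-over-iSCSI storage
qm migrate 101 pve2 --online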

I'm not an expert in this feature yet, but I'm working on growing my knowledge.

At this stage it's the best-fit solution, as it allows the shared storage to provide snapshots natively for the VMs and other features not available with standard iSCSI LUN exports.

Using a normal SAN with iSCSI would mean carving out a LUN with X amount of storage > exporting the LUN to be mounted by Proxmox > formatting the LUN with LVM, ext4 or another supported file system. As far as I'm aware, neither of these allows for VM snapshots or other functions, because VAAI isn't being used.
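
For contrast, a sketch of that classic LUN-plus-LVM setup in /etc/pve/storage.cfg (IDs, portal and target are placeholders); it gives you shared storage, but as noted, no storage-side snapshots:

Code:
# /etc/pve/storage.cfg (sketch; classic iSCSI LUN + LVM, names are placeholders)
iscsi: san-lun
    portal 10.10.10.100
    target iqn.2003-01.com.example:proxmox
    content none

# volume group created on that LUN beforehand (pvcreate/vgcreate on one node)
lvm: san-lvm
    vgname vg_san
    content images
    shared 1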

I'm sure if I've gotten something wrong someone will jump in to correct me.

Hope the above helps.

""Cheers
G
 
