Best way to use ZFS of another server in the cluster

cdn123

Member
Aug 7, 2024
Hello,

What is the best way to use the local ZFS storage of another PVE node in the same cluster for VM disks?

Thank you!
 
NFS is easier and causes less trouble ... but trying both is always good, for comparing them yourself and perhaps having another approach already in the drawer for some future problem :)
 
Which solution would you prefer?
I would prefer an actually "shared storage" solution without a single point of failure - which both solutions discussed here will introduce.

(Actually, I myself mainly use ZFS replication..., which is not really shared storage. But it copies snapshots between nodes (by "replication"), and those are usable if the original node dies. You need a zpool with the same name on each cluster member for this to make sense. This approach has been discussed often in this forum.)
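Just as an illustration, not a full guide: this is roughly what such a replication job looks like from the CLI (VMID, target node name and schedule are placeholders; the same can be set up from the GUI under "Replication"):

Code:
# both nodes need a zpool with the same name (e.g. "rpool" or "tank")
# replicate VM 100 to node "pve2" every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# check the configured jobs and their last run
pvesr list
pvesr status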

While I cannot present a tutorial for either solution, I am with @waltar - NFS is easier. But ZFS over iSCSI has the advantage of actually using ZFS "zvols", including snapshots, compression etc. With NFS, what you get is basically a directory-based share, not a ZFS-like thing.

Both are available for free..., so go and test them. :)
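To give an idea of what you would be comparing, the two storage definitions in /etc/pve/storage.cfg look roughly like this. Storage names, IPs, pool and target are placeholders, and I have not checked every option against the current documentation:

Code:
# ZFS over iSCSI: PVE creates one zvol per VM disk on the remote pool
zfs: zfs-iscsi-example
        portal 192.168.10.20
        target iqn.2003-01.org.linux-iscsi.storagenode:tank
        pool tank
        iscsiprovider LIO
        lio_tpg tpg1
        sparse 1
        content images

# NFS: VM disks end up as files (raw/qcow2) in a directory export
nfs: nfs-example
        server 192.168.10.20
        export /tank/pve-nfs
        path /mnt/pve/nfs-example
        content images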
 
I cannot see any option to use the PVE as a storage node for another node.
Sorry..., I do not use that setup, so I cannot compare my settings.

I had tested it once (two years ago) and I was pleasantly surprised that it "just worked" for me. Of course you need to install an iSCSI target etc...

Unfortunately https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI does NOT contain all required steps.
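From memory, and without claiming this is complete, the missing steps looked roughly like this on my test setup (LIO/targetcli as the provider; the portal IP is a placeholder):

Code:
# on the node that serves the storage: install an iSCSI target implementation
apt install targetcli-fb
# ...and create the base target (IQN/portal) with targetcli - that is the
# part that is easy to miss

# on the consuming node(s): PVE manages the target over SSH and expects
# a passwordless key per portal IP under /etc/pve/priv/zfs
mkdir -p /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.10.20_id_rsa -N ''
ssh-copy-id -i /etc/pve/priv/zfs/192.168.10.20_id_rsa.pub root@192.168.10.20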
 
But the PVE-"middleware" assures that always only one single node will get access granted for a specific target/LUN = one VM's block device. This aspect is part of the cluster-awareness.
Assuming serving virtual disks to a cluster is the use case, sure. But if that's the use case, it's worth mentioning the limitations, such as ending up with one node as a SPOF, which is not ideal. Proper C&C of the backing store requires some sophistication as well; he will need to use a compatible iSCSI host stack.

Yes, I have already seen this page, but there are no instructions on installing iSCSI.
Since PVE is meant to consume storage, serving storage via iSCSI is not in its scope. However, since this is just Debian, just follow any tutorial for this (tons are available, for example https://thelinuxcode.com/share-zfs-volumes-via-iscsi/).

Having said that - consider what you are really trying to accomplish here; making this configuration PERFORMANT would require an adequate networking configuration, and updates/reboots or an unplanned outage of the node with the backing store will take down all your guests.
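To illustrate the networking point: at a minimum I would give the storage traffic its own (ideally 10G) link between the two nodes, e.g. in /etc/network/interfaces. The interface name and addresses below are made up:

Code:
# second NIC dedicated to storage traffic between the nodes
auto ens19
iface ens19 inet static
        address 10.10.10.1/24
        mtu 9000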
 
That's the advantage of NFS, as the protocol stack is prepared for a couple of minutes of disconnection: NFS reads stall and writes are slowed down and buffered until the network mount is back again. I don't know exactly how long that lasts, but it is much longer than a server needs to come back, and before a VM gets into trouble by itself. An NFS server could even be replicated to a second one, and since a reboot (for updates) is no problem, there is hardly any need to actually switch to the DR server - which could still be prepared in case that day ever comes. HA NFS is also possible, but it brings new problems into the game; a DR server is simpler and usually sufficient to bring the environment back, e.g. with the clients prepared, a reboot solves a primary outage.
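For completeness, a rough sketch of serving such a share from a ZFS dataset on the storage node (dataset, network and storage names are examples). NFS mounts are "hard" by default, which is what gives the stall-and-resume behaviour described above:

Code:
# on the node that serves the storage
apt install nfs-kernel-server
zfs create tank/pve-nfs
zfs set sharenfs='rw=@192.168.10.0/24,no_root_squash' tank/pve-nfs

# on the consuming node(s): add it as a PVE storage
pvesm add nfs nfs-hdd --server 192.168.10.20 --export /tank/pve-nfs --content images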
 
Assuming serving virtual disks to a cluster is the use case, sure. But if that's the use case, it's worth mentioning the limitations, such as ending up with one node as a SPOF, which is not ideal. Proper C&C of the backing store requires some sophistication as well; he will need to use a compatible iSCSI host stack.


Since PVE is meant to consume storage, serving storage via iSCSI is not in its scope. However, since this is just Debian, just follow any tutorial for this (tons are available, for example https://thelinuxcode.com/share-zfs-volumes-via-iscsi/).

Having said that - consider what you are really trying to accomplish here; making this configuration PERFORMANT would require an adequate networking configuration, and updates/reboots or an unplanned outage of the node with the backing store will take down all your guests.
Yes, you're right, and thank you for the link.
Our setup will be a node with only SSDs and a node with only HDDs (no other choice).
One small RDS server (2 users) will also have a huge data partition for files. This disk needs to be on the HDD node.
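(In case it helps: once the HDD node's storage is visible to the node running the RDS VM, attaching that data disk is a single command. VMID, storage name and size below are placeholders.)

Code:
# add a 2 TiB data disk for VM 101 on the storage backed by the HDD node
qm set 101 --scsi1 nfs-hdd:2048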
 
Has anyone recently gotten ZFS over iSCSI working with the current Proxmox version?
It used to work for me for quite some time but recently stopped.
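In case someone wants to dig into it: the first things I would check are whether PVE can still reach the target host with the per-portal SSH key it uses internally, and what the storage status and logs say (the portal IP is a placeholder):

Code:
# can PVE still talk to the storage node with its internal key?
ssh -i /etc/pve/priv/zfs/192.168.10.20_id_rsa root@192.168.10.20 zfs list

# storage status and recent errors
pvesm status
journalctl -u pvedaemon --since "1 hour ago"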
 
Our setup will be a node with only SSDs and a node with only HDDs (no other choice).
Conflating "this is what I have, and this is what I want to do" with "no other choice" is folly. Hardware is cheap and easy; building solutions on inadequate hardware is saving a penny to lose a pound. What's the relative cost of an outage? If it doesn't present a cost, I posit it's probably not worth doing in the first place.

If it's worth doing, it's worth doing right.
 
While I cannot present a tutorial for either solution, I am with @waltar - NFS is easier. But ZFS over iSCSI has the advantage of actually using ZFS "zvols", including snapshots, compression etc. With NFS, what you get is basically a directory-based share, not a ZFS-like thing.
NFS has the same advantage of snapshots and compression when it's "served" from a ZFS dataset. When a TPM is defined for a VM, snapshots don't work from the GUI, but they are still available manually on the dataset. And compression has nothing to do with NFS at all, because the data is compressed after the NFS transfer, while it is written into the dataset.
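To show what "manually on the dataset" means in practice (the dataset name is an example; keep in mind this snapshots the whole dataset backing the export, not a single VM disk):

Code:
# on the NFS server: snapshot and, if needed, roll back the backing dataset
zfs snapshot tank/pve-nfs@before-upgrade
zfs list -t snapshot tank/pve-nfs
zfs rollback tank/pve-nfs@before-upgrade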
 
