What is the best way?
Either look for NFS server setup, or look here:
https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI
Note: probably there is no "best way"!

Thank you! Which solution would you prefer? I cannot find any tutorial.
I would prefer an actual "shared storage" solution without a single point of failure - which both of the solutions discussed here will introduce.
iSCSI is exclusive (meaning only one host can access a given resource); NFS is multi-user. You use what your application requires.
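Roughly, the difference looks like this on the command line (the server address, export path and IQN below are made-up examples):

    # NFS is inherently multi-user: the same export can be mounted
    # read-write by several hosts at the same time.
    mount -t nfs 192.168.1.10:/tank/share /mnt/share    # on host A
    mount -t nfs 192.168.1.10:/tank/share /mnt/share    # on host B, too

    # iSCSI hands out a raw block device. A second initiator *can* log in,
    # but concurrent writers without a cluster filesystem will corrupt the
    # LUN, so in practice a LUN is treated as single-owner.
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage:target1 -p 192.168.1.10 --login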
Technically you are right.
But the PVE "middleware" ensures that, at any given time, only one single node is granted access to a specific target/LUN = one VM's block device. This aspect is part of the cluster-awareness.
ZFS-over-iSCSI is "shared" in the required sense. See https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types
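For reference, the cluster-side definition is just an entry in /etc/pve/storage.cfg. A minimal sketch, assuming a LIO target on the storage box - the portal address, target IQN and pool name are placeholders:

    zfs: shared-zfs
            portal 192.168.1.10
            target iqn.2003-01.org.linux-iscsi.storage:target1
            pool tank
            iscsiprovider LIO
            lio_tpg tpg1
            content images
            sparse 1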
I cannot see any option to use the PVE as a storage node for another node.

Sorry..., I do not use that construct, so I cannot compare my settings.
Yes, I have already seen this page, but there is no instruction on how to install iSCSI.
I tested it once (two years ago) and was pleasantly surprised that it "just worked" for me. Of course you need to install an iSCSI target etc...
Unfortunately https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI does NOT contain all required steps.
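For reference, PVE drives the storage box over SSH as root, with a key that has to be named after the portal IP. A sketch of that pairing (192.168.1.10 stands in for the portal address):

    # On one PVE node (replicated cluster-wide via /etc/pve):
    mkdir -p /etc/pve/priv/zfs
    ssh-keygen -f /etc/pve/priv/zfs/192.168.1.10_id_rsa -N ''
    ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.10_id_rsa.pub root@192.168.1.10
    # Log in once so the host key gets accepted:
    ssh -i /etc/pve/priv/zfs/192.168.1.10_id_rsa root@192.168.1.10 /bin/true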
Assuming serving virtual disks to a cluster is the use case, sure. But if that's the use case, it's worth mentioning the limitations, such as the fact that he will end up with a node as a SPOF, which is not ideal. Proper command and control of the backing store requires some sophistication as well - he will need to use a compatible iSCSI target stack.
Since PVE is meant to consume storage, serving storage via iSCSI is not in its scope. However, since this is just Debian, just follow any tutorial for this (tons available, for example https://thelinuxcode.com/share-zfs-volumes-via-iscsi/).
Yes, you're right, and thank you for the link.
Having said that - consider what you are really trying to accomplish here: making this configuration performant would require an adequate networking configuration, and updates/reboots or unplanned outages of the node holding the backing store will take down all your guests.
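To make the "this is just Debian" point concrete, a rough sketch of bringing up a LIO target with targetcli-fb - pool name and IQN are placeholders, and ACLs/portal/authentication are left out here although they do need attention:

    apt install targetcli-fb

    # A test zvol; with ZFS-over-iSCSI, PVE later creates one zvol per guest disk itself.
    zfs create -V 8G tank/testvol

    targetcli /backstores/block create name=testvol dev=/dev/zvol/tank/testvol
    targetcli /iscsi create iqn.2003-01.org.linux-iscsi.storage:target1
    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.storage:target1/tpg1/luns create /backstores/block/testvol
    targetcli saveconfig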
conflating "this is what I have, and this is what I want to do" and "no other choice" is folly. hardware is cheap and easy; building solutions on inadequate hardware is saving a penny to lose a pound. Whats the relative cost of an outage? If it doesnt doest present a cost, I posit its probably not worth doing in the first place.Our setup will be a node with only ssd and a node with only HDD (no other choice).
A PVE cluster requires three nodes, even if one node only provides quorum services.
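The third vote does not need a third full server; a QDevice running on any small box is enough. A sketch, assuming 192.168.1.5 as the quorum machine:

    # On the quorum machine (any Debian-like box):
    apt install corosync-qnetd

    # On both PVE nodes:
    apt install corosync-qdevice

    # Then, from one PVE node:
    pvecm qdevice setup 192.168.1.5
    pvecm status    # should now report three expected votes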
While I cannot present a tutorial for either solution, I am with @waltar - NFS is easier. But ZFS-over-iSCSI has the advantage of actually using ZFS "zvols", including snapshots, compression etc. With NFS what you get is basically a directory-based share, not a ZFS-like thing.

NFS has the same advantage of snapshots and compression when it is "served" from a ZFS dataset. (When a TPM is defined for a VM, snapshots did not work from the GUI, but they are still available manually on the dataset.) And compression has nothing to do with NFS itself: the data is compressed after the NFS transfer, while it is being written into the dataset.
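As an illustration of that split - compression and snapshots live in ZFS, NFS is only the transport - a sketch with made-up names and addresses; if the sharenfs property is not convenient, a classic /etc/exports entry does the same:

    # On the storage node: a compressed dataset, exported via NFS
    # (needs nfs-kernel-server installed for sharenfs to work on Linux)
    zfs create -o compression=lz4 tank/pve-nfs
    zfs set sharenfs=on tank/pve-nfs

    # On a PVE node: attach it as NFS storage
    pvesm add nfs nfs-zfs --server 192.168.1.10 --export /tank/pve-nfs --content images,backup

    # Snapshots remain a plain ZFS operation on the dataset:
    zfs snapshot tank/pve-nfs@before-upgrade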