ZFS over iSCSI Lacking Container Support - How do I get around this?

Flux

Member
Jun 1, 2021
4
0
6
Australia
Hey all,

I am trying to achieve a dual-node setup where Node A is my main compute node with SSDs. This is an SFF chassis, and the goal is to house all the VMs/containers here (or at least the boot drive for the VMs and containers). Node B will be the storage node with HDDs, and as you can guess this is an LFF chassis. It'll house the additional drives for each VM/container that can't fit on Node A, or that don't need to be on Node A because keeping them there would be a waste of space. (Yes, essentially it functions as a DAS. I could get a DAS, but this way I can control the fans and other bits and pieces of the chassis more easily, and I already have the LFF chassis.)

To address this I saw that ZFS over iSCSI is an option, see [1] and [2]. I have managed to set this up successfully in a test environment, but to my surprise it doesn't support containers/container mountpoints, just zvols/disk images... see [3]. After asking around on IRC, this feature is apparently on the roadmap and has been there since 2018 (point 6 in [4]).
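
For reference, the working VM-only configuration in my test environment looks roughly like this in /etc/pve/storage.cfg (the storage ID, pool, portal address and IQN below are placeholders, adjust to your setup):

Code:
zfs: lff-zfs-iscsi
        pool tank/pve
        portal 192.0.2.10
        target iqn.2003-01.org.example.storage:pve
        iscsiprovider LIO
        content images
        sparse 1

As far as I can tell, content here only accepts images, which is exactly the limitation this thread is about.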

So my questions are:

1. When will this be added? Could this be added please? Pretty please? :D I don't imagine it's too different from zvols, since for containers it's just creating a dataset? (I realise I could be wrong; there could be other issues I don't know about)

2. Have other people run into this issue/use case? If so, what was your workaround? (Without getting a third node, setting up Ceph, ...)

Thanks!
 
When will this be added? Could this be added please? Pretty please? :D I don't imagine it's too different from zvols, since for containers it's just creating a dataset? (I realise I could be wrong; there could be other issues I don't know about)
It is actually not that easy to do (I looked at it some time ago); while it surely should be possible, it is not trivial.
The problem we face here is that for QEMU we can hand it a 'url' to the iSCSI LUN directly, and QEMU will access it itself.

For containers, we would have to connect to the iSCSI target with e.g. iscsiadm, find the correct block device and mount it,
and no one has gotten around to doing that yet.
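
Just to illustrate what the plugin would have to automate, the manual equivalent on the PVE node would be roughly something like this (portal address, IQN, device name and mountpoint are only examples):

Code:
# discover and log in to the target
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2003-01.org.example.storage:pve -p 192.0.2.10 --login
# find the newly appeared block device, create a filesystem and mount it
lsblk
mkfs.ext4 /dev/sdX
mount /dev/sdX /mnt/ct-disk

plus the reverse of all that on deactivation, which is part of why it is not trivial.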

Additionally, can containers be added to the content line here? Or is that not recommended?
No, that will not work.
 

Thanks for the response. I didn't realise that approach was more difficult.

OK, that makes sense. For now, using it normally with VM images should be fine, and there is also the plain iSCSI option.
 
An iSCSI LUN is a block device. Should it not be possible to format that block device with LVM and use the LVM volume group as storage for containers?
LVM on top of ZFS should still be able to use block-level compression, bit-rot protection and so on.
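
Roughly what I have in mind (untested sketch; the storage IDs, portal, IQN and VG name are made up):

Code:
# /etc/pve/storage.cfg: plain iSCSI storage pointing at the LUN exported from the zvol
iscsi: lff-iscsi
        portal 192.0.2.10
        target iqn.2003-01.org.example.storage:lff
        content none

# on one PVE node: put LVM on the exported LUN (here assumed to show up as /dev/sdX)
pvcreate /dev/sdX
vgcreate lff-vg /dev/sdX

# /etc/pve/storage.cfg: LVM storage on that VG, usable for container root disks
lvm: lff-lvm
        vgname lff-vg
        content rootdir,images
        shared 1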
 
Maybe you could refresh the summary in the documentation for "ZFS over iSCSI" and add a note that it only works for VMs.
 
What about NFS-over-ZFS? I asked this before, yet did not get an answer. That should be a bit easier, shouldn't it?
Possibly, but this does not have the advantages of using a block device (since there would be no 1<->1 mapping of zvols to VM/CT disks).
I suppose something like that could be done (and maybe we wouldn't oppose patches that implement it), but it would be a rather low-priority feature request.
In any case: feel free to open one: https://bugzilla.proxmox.com :)
 
Possibly, but this does not have the advantages of using a block device (since there would be no 1<->1 mapping of zvols to VM/CT disks).
No, you would have a 1:1 mapping dataset <-> container ... I don't see why you would need a block device in the first place.
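
For illustration, a purely manual version with plain NFS would look roughly like this (pool/dataset names, subnet and storage ID are made up); a proper plugin would then create and export a child dataset per container mountpoint instead of dumping everything into one export:

Code:
# on the storage node: a dataset for container storage, exported via NFS
zfs create tank/ct
zfs set sharenfs="rw=@192.0.2.0/24" tank/ct

# on the PVE node, /etc/pve/storage.cfg
nfs: lff-nfs
        server 192.0.2.10
        export /tank/ct
        content rootdir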

I suppose something like that could be done (and maybe we wouldn't oppose patches that implement it), but it would be a rather low-priority feature request.
In any case: feel free to open one: https://bugzilla.proxmox.com :)
I honestly thought I already had, and I also thought I wrote to pve-devel about this some years ago, yet I cannot find any trace of that ... hmmm ... maybe a glitch in the matrix.
 
