Will LXC ever be supported on ZFS over iSCSI?

np86
Jun 11, 2016
I am on the hunt for shared storage where I can use both VMs and LXC with snapshots. I really want to use ZFS over iSCSI, but from what I could gather the last time I tried, it is not supported for LXC?

I know Ceph is an option, but it just seems too complex for my little homelab.
 
Unfortunately, I could not convince the Proxmox staff to implement this. I did try it though:

https://pve.proxmox.com/pipermail/pve-devel/2016-July/022173.html
https://pve.proxmox.com/pipermail/pve-devel/2016-October/023255.html

The current way to go is ext4 on a ZVOL from ZFS-over-iSCSI
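For reference, a ZFS-over-iSCSI storage definition looks roughly like this (portal, target, and pool below are placeholders; adjust them to your SAN). Note that it only offers the images content type, so it can hold VM disks (ZVOLs the guest formats with ext4) but no container root filesystems:

Code:
# /etc/pve/storage.cfg -- hypothetical values, adjust to your SAN
zfs: san-zfs
        portal 192.168.10.20
        target iqn.2016-06.com.example:tank
        pool tank
        iscsiprovider LIO
        blocksize 4k
        sparse 1
        content images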
But would that support snapshots then?

What I really want is the way ZFS (local) works, but as a shared storage.

I don't like the idea that I have to run LXC on local storage when a shared storage should be a viable option.

EDIT: What was the reason for the decline?
 
But would that support snapshots then?

Yes, like with ZFS-over-iSCSI, but with NFS for LX(C) containers.

What I really want is the way ZFS (local) works, but as a shared storage.

Using ZFS-over-iSCSI is the most similar thing currently available.

EDIT: What was the reason for the decline?

I think they did not see the benefit in it; maybe they misunderstood me?
 
Using ZFS-over-iSCSI is the most similar thing currently available.

Yeah, sadly it does not support LXC.

I just don't get why there isn't a shared storage solution that supports all types.
Unless you are using something like Ceph, you have to compromise on something.
 
LVM does everything; I've been using it for years (except snapshots, of course).
So this was a useless reply. What can I use this for?

NFS already does the same as what you just said.
Please don't comment if you don't have anything meaningful to contribute.
 
Please don't comment if you don't have anything meaningful to contribute.

Either you simply don't understand what I meant, or you're... let's assume the first.

So this was a useless reply. What can I use this for?

So let's analyse what we wrote, shall we?

You wrote this:

I just don't get why there isn't a shared storage solution that supports all types.

and I replied with this:

LVM does everything; I've been using it for years (except snapshots, of course).

'All types' means LXC and KVM, so LVM solves this for a clustered setup. I've been running it for years, as have hundreds of others - again without snapshots, yet this is unrelated to 'all types'.
 
'All types' means LXC and KVM, so LVM solves this for a clustered setup. I've been running it for years, as have hundreds of others - again without snapshots, yet this is unrelated to 'all types'.
Okay, so maybe I should have worded "all types" differently, but it should have been pretty obvious what I meant. Obviously I meant with snapshots, which is the feature I am requesting. I am well aware I can use Directory, NFS, etc. if I just want a storage type that works with the file format.
 
Okay, now we agree:

There is, unfortunately, no solution for LXC and KVM on a SAN-based shared storage that can support features like snapshots and thin provisioning.

Maybe you could create a feature request for that? Then the Proxmox staff will see that there are people out there besides me who are interested in ZFS-over-iSCSI for containers with NFS as the transport technology.
 
Maybe you could create a feature request for that? Then the Proxmox staff will see that there are people out there besides me who are interested in ZFS-over-iSCSI for containers with NFS as the transport technology.
Yeah, definitely.
Where do I do that?
 
Hi Everyone,

Sorry to bump an old thread. Can someone explain technically why LXC can't live on ZFS over iSCSI? Just as I was looking into this again, I found this thread, which is the first I've heard about LXC not being supported on ZFS over iSCSI.

Thanks,


Tmanok
 
Can someone explain technically why LXC can't live on ZFS over iSCSI?
The iSCSI integration is done directly in QEMU, so the underlying OS does not actually see the disks in its SCSI layer. For LX(C) containers, the underlying OS has to see the disks in order to create an ext4 filesystem on top of them. Even if it were working, you would not be able to live migrate, because ext4 (or any non-clustered filesystem) does not support access from two different hosts at the same time.
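A quick sketch of the difference (portal, IQN, and device names below are made up):

Code:
# VM path today: QEMU opens the LUN itself via libiscsi, using a drive URL of
# the form iscsi://<portal>/<target-iqn>/<lun>, so no /dev/sdX ever shows up
# on the Proxmox host.
#
# What a container would need instead:
iscsiadm -m discovery -t sendtargets -p 192.168.10.20
iscsiadm -m node -T iqn.2016-06.com.example:tank -p 192.168.10.20 --login
mkfs.ext4 /dev/sdX               # the LUN must appear as a host block device
mount /dev/sdX /mnt/ct-rootfs    # plain ext4: safe from only one host at a time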

The only viable option would be to use ZFS-over-NFS, but even 4 years later, it is not there. Maybe there are some unresolvable issues that I'm unaware of, but there has been no official reply.
 
Admittedly, I have never fully implemented ZFS over iSCSI with PVE and you may be correct @LnxBil, but I have a sense that the reason is not quite that. At Blockbridge we were the first storage company that built a Docker storage plugin when the Storage API first came out. We've even implemented our entire storage stack inside a Docker container as a simulator. Providing iSCSI from the container was not easy, but we achieved it.
The way we integrated with Docker Swarm and later K8s is by having the plugin attach iSCSI disks to the underlying host and pass them through to containers, with or without a filesystem.

This is the same concept we use now with the Blockbridge PVE plugin: an iSCSI disk is created, attached, and then bind-mounted into the PVE LXC. The plugin also takes care of moving the disk when LXC migration is needed.
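Conceptually it boils down to something like the following (portal, IQN, device name, and container ID are made up; the plugin automates all of this):

Code:
# 1. attach the LUN to the PVE host via open-iscsi
iscsiadm -m node -T iqn.2021-01.com.example:ct101-data -p 10.0.0.5 --login
# 2. put a filesystem on the new block device and mount it on the host
mkfs.ext4 /dev/sdX
mount /dev/sdX /mnt/ct101-data
# 3. bind-mount the host path into container 101
pct set 101 -mp0 /mnt/ct101-data,mp=/data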

So it should be possible in theory.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
So it should be possible in theory.
Don't get me wrong: I didn't want to say it's impossible. Your way is the "kernel way", which is always possible, but it is more work and has more layers. The NFS way is already a filesystem, therefore concurrent filesystem access is "only" a filesystem problem and not a block-level problem.
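For comparison, that is the path containers can already take today: a plain NFS storage can carry the rootdir content type (values below are placeholders), just without ZFS-side snapshots:

Code:
# /etc/pve/storage.cfg -- hypothetical NFS storage usable for containers
nfs: filer
        server 192.168.10.30
        export /tank/pve
        path /mnt/pve/filer
        content images,rootdir
        options vers=4.2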

At Blockbridge we were the first storage company that built a Docker storage plugin when the Storage API first came out.
Oh great!

BTW: A similar approach for ZFS-over-NFS does exist for k8s for volumes.

Admittedly, I have never fully implemented ZFS over iSCSI with PVE and you may be correct @LnxBil, but I have a sense that the reason is not quite that.
I recently did it with the @fireon tutorial. QEMU mounts it directly in userspace and therefore there is also no multipath available.
 
QEMU mounts it directly in userspace and therefore there is also no multipath available.
Oh well, that's a dealbreaker anyway. MPIO is lower latency, higher throughput, and parallel, whereas LAGs do not present themselves to the iSCSI protocol as multiple paths, just as more bandwidth. Unfortunately, that means iSCSI cannot take advantage of the same physical links with LAGs.
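For example, with host-side open-iscsi you can log in to the same target through two portals and let dm-multipath combine the sessions (addresses and IQN are examples only), which the userspace/libiscsi path cannot do:

Code:
# log in to the same target over two independent portals
iscsiadm -m node -T iqn.2016-06.com.example:tank -p 10.0.1.10 --login
iscsiadm -m node -T iqn.2016-06.com.example:tank -p 10.0.2.10 --login
# dm-multipath then merges the resulting /dev/sdX devices into one multipath device
multipath -ll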

Plugins and kernel patches would be great for a specific SAN, I'm sure, but only with very regular patching and testing, which I doubt I will find. NFS with .raw snapshots would also be very helpful, but alas, that is impossible. If only containers could use .qcow2, or .raw were improved in features.
Thanks guys,


Tmanok
 
Snapshots on LXC-over-NFS would be really sweet. Backups without a VM shutdown/restart would be the primary win.
 
