How to set up ZFS for use among multiple LXC

jaysun_n
Jul 15, 2025
Obligatory I am new to Proxmox and am still learning so please excuse my question if it is a bit basic.

I am trying to set up my single node Proxmox server with a RAIDZ2 array to store files (media, samba, downloads, configurations, etc...) and share them across multiple LXCs and VMs. I would also like to use ZFS's snapshot capabilities to make automated ZFS snapshots of the files so I have a file history. However, I have not found any explanations or guides on how to achieve these goals nor have I found a way to achieve them myself.

In working towards my goals, I've gotten confused with how Proxmox handles storage and was hoping to get some clarification on a few questions I had:
  1. When I make a storage location under Datacenter > host > Disks > LVM/LVM-thin/ZFS, what does the "Add Storage" option do? I read that if that option is disabled and then you use that storage location under Datacenter > Storage, all files made under it will be .raw but I am unsure the ramifications of this.
  2. When making a storage location for files under a zpool under Datacenter > Storage, should I use Directory or ZFS? I want to store files, so Directory sounds correct, but Table 1 of the Proxmox VE Storage docs says it doesn't support snapshots, so should I use ZFS instead? But I thought ZFS was for setting up vdevs, so I'm not sure what 'ZFS' really means in this menu.
  3. I'm also not sure what the difference of the 'ZFS' options are under Datacenter > host > Disks > ZFS and Datacenter > Storage. Is one creating a zpool whereas the other is just making a dataset (file structure)?
  4. If I store files in a ZFS-based storage location for files, does Proxmox allow the host shell to see the files contained within? When I made a share for use in a Samba LXC, the host shell only saw virtual disks and .raw files, even though the LXC and connected hosts saw the file contents in a directory structure.
  5. If I want to just store files in a Datacenter > Storage location, what should the 'Content' be set to? Are there any good resources for seeing what each of these content options mean?
  6. For ZFS storage locations, why is 'Backup Retention' greyed out saying 'Backup content type not available for this storage.'? I thought ZFS had integrated snapshots. Are these 'backups' different?
  7. How can I take snapshots/backups of Proxmox itself? I saw several guides saying to install the proxmox-backup-server package but when I run that on my host shell, apt says it cannot find the package.
 
1. It adds the created storage, with the appropriate type, to Datacenter > Storage. Only Directory storage would use .raw files.
2. It depends on what you want to store. Use ZFS for guest disks and Directory for files. You can even point both types to the same dataset.
3. In simple terms, node > Disks > ... creates/formats the storage, while Datacenter > Storage makes PVE aware of it (and potentially mounts it).
4. Assuming you use a Directory storage and give a guest a virtual disk on it, the disk will be file-based, like .raw, .qcow2 and so on.
If you want to share directories with a guest, look into Bind Mounts and Virtiofs.
5. For files, use Directory. Also see here and here.
6. Because the ZFS type does not directly handle files or store backups; that's why it says "Backup content type not available for this storage."
7. PBS does not yet have a PVE host backup mechanism. You can back PVE up like any other Debian system; PBS is one of those options, of course.
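To make answers 2 and 3 concrete, here is a hedged sketch of what /etc/pve/storage.cfg might look like with both storage types layered over one pool. The pool name `tank`, the dataset names, and the storage IDs are assumptions for illustration, not anything from this thread:

```
# /etc/pve/storage.cfg -- illustrative sketch; pool, datasets, and storage IDs are made up

# ZFS type: PVE creates datasets (for CTs) and ZVOLs (for VMs) under tank/guests
zfspool: tank-guests
        pool tank/guests
        content images,rootdir
        sparse 1

# Directory type: plain files (ISOs, backups, templates) on a dataset's mountpoint
dir: tank-files
        path /tank/files
        content iso,backup,vztmpl
```

The `content` line is what the GUI's 'Content' dropdown edits, which also answers question 5: the type of the storage decides which content options it can offer.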

As you can see, everything's kind of connected and one question somewhat answers another. I tried to be brief because there are a lot of variables and overlap, but I hope that helps you get the hang of it. For example, with ZFS, CTs get datasets and VMs get ZVOLs.
Let me know if you need more detail. As long as you don't quote my whole message I'll explain more :)

Just so people don't waste time answering the same thing it's also been asked here.
Please make sure to link to cross-posts if you ask in multiple places.
 
Only Directory storage would use .raw files.
.raw files confuse me. They seem to appear as .raw on the VE host but as an actual file structure when mounted in an LXC or VM? Is this correct? Is there any way to interact with the file contents from the host?
You can even point both types to the same dataset.
Do you mean I can have a storage location of type Directory and another of type ZFS pointing to the same path (dataset) in the VE host filesystem?
look into Bind Mounts
Looking over the resource, bind mounts look to be a way to share any folder from the VE host with an LXC, but they may run into permissions issues. I tried a simple test where I made a directory within my RAIDZ2 storage location, like /zpool/share, and then passed that directory to my container, but the container cannot write to files in there, so I don't think it will work since bind mounts don't seem to handle different users.
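For reference, the bind mount itself can be added on the CLI or as a line in the container's config. A minimal sketch, assuming a hypothetical CT ID 101 and the /zpool/share path from my test (the /mnt/share target is made up):

```
# on the PVE node (CT ID and mount target are hypothetical):
#   pct set 101 -mp0 /zpool/share,mp=/mnt/share
# which results in this line in /etc/pve/lxc/101.conf:
mp0: /zpool/share,mp=/mnt/share
```

The mount appears fine; it's writing that fails, which points at the permissions side rather than the bind mount mechanism.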

Rereading my post and looking through different sites, my real struggle is how to best structure my zpool to support multiple directories (with multi-user support for shares) that I can mount across containers, while maintaining snapshot abilities for the files contained within. I was able to get something close by going to Datacenter > node > vmid > Resources > Add > Mount point. However, I have some concerns:
  1. The lifetime of the folder (and files) is tied to the container, so if the container ever gets deleted, so will my files. This worries me as I'd like the files to be persistent even if the container is deleted. On the positive side, since the files are contained within the container, a container snapshot will snapshot all shared files.
  2. This solution requires the owner container to be the Samba server container; to share with other containers, I need to use fstab and CIFS to mount the shares. If possible, I'd like the mounting to be independent of the 'owner' container.
Do you know of any way to address either concern?
 
.raw files confuse me. They seem to appear as .raw on the VE host but as an actual file structure when mounted in an LXC or VM? Is this correct? Is there any way to interact with the file contents from the host?
Well, something similar happens with ZVOLs. You can see one is there with zfs list -t vol but not easily access the files within. The .raw file is a file-based virtual disk. The file system on it can be mounted inside the guest like with any other disk. To the node, it's "just" a file. You can technically mount it on the node (while not in use), but that's probably not what you want to do. There's also a pct mount command to mount a CT's file system, but that is more for emergencies. I don't recommend file-based virtual disks anyway, as they are slow and CTs don't support snapshots with them.
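For completeness, the inspection commands mentioned above look roughly like this on the node (CT ID 101 is hypothetical, and none of this runs outside a PVE host):

```
# list ZVOL-backed guest disks on the pool
zfs list -t volume

# emergency access to a CT's file system from the node (stop the CT first)
pct mount 101        # mounts the rootfs under /var/lib/lxc/101/rootfs
pct unmount 101      # always unmount again when done
```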

Do you mean I can have a storage location of type Directory and another of type ZFS pointing to the same path (dataset) in the VE host filesystem?
Yeah, PVE will just handle the storage types differently. If it's a Directory, you can store files (file-based disks too). If it's ZFS, it will create datasets or ZVOLs depending on the guest type. The given dataset will be the root/parent of those elements. I recommend creating a separate dataset for files, though, to have better separation and control over properties.
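A hedged sketch of that "separate dataset for files" suggestion, using made-up names (`tank/guests`, `tank/files`); pvesm does the same thing as the Datacenter > Storage GUI:

```
# on the PVE node -- pool, dataset, and storage names are assumptions
zfs create tank/files                  # dedicated dataset for plain files
zfs set compression=lz4 tank/files     # per-dataset properties stay independent

pvesm add zfspool tank-guests --pool tank/guests --content images,rootdir
pvesm add dir     tank-files  --path /tank/files --content backup,iso
```

Keeping files in their own dataset also means its snapshots cover only the files, not guest disks.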

the container cannot write to files in there
The user IDs in unprivileged containers are shifted: UID 0 inside is not UID 0 outside. You need to configure ACLs/permissions on the node side and/or use ID mapping. I find this last part kind of complicated.
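As an illustration of that ID mapping: an unprivileged CT maps container UID 0 to host UID 100000 by default, and the sketch below punches a single UID/GID 1000 through unchanged. The CT ID and the choice of 1000 are hypothetical:

```
# /etc/pve/lxc/101.conf -- map CT uid/gid 1000 to host uid/gid 1000, shift the rest
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

The node must also allow root to delegate that ID (add `root:1000:1` to /etc/subuid and /etc/subgid), and the bind-mounted directory needs to be owned by, or ACL-writable for, UID/GID 1000 on the node.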

Do you know of any way to address either concern?
See above. I'm afraid there is no silver bullet here. Everything has its pros and cons.
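On the original snapshot goal: snapshots of a file dataset live on the node and are independent of any container, so they sidestep both concerns. A minimal sketch, assuming a `tank/files` dataset (tools like zfs-auto-snapshot or sanoid can automate the scheduling):

```
# on the PVE node -- dataset and snapshot names are assumptions
zfs snapshot tank/files@before-cleanup     # one-off, named snapshot
zfs list -t snapshot tank/files            # show the dataset's snapshots
zfs rollback tank/files@before-cleanup     # revert the whole dataset (destructive)
# browse per-snapshot read-only file history:
#   ls /tank/files/.zfs/snapshot/
```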
 