Disk for multiple LXCs

kosta88

Sep 5, 2025
Hello,
I am totally confused about what might be possible, so let me share what I would like:
Multiple unprivileged LXC containers with shared storage on PVE. My main storage is an NVMe, so I would very much like to give those LXCs shared storage on it.
Compare it to the following scenario: an Ubuntu VM has an additional 300 GB disk, which is mounted at /mnt/data, and Docker containers can access it via /mnt/data.
My NVMe is actually just one big ZFS pool, so the whole drive is already allocated.

So the only way I can see is to create some kind of virtual disk on my ZFS pool, mount it on the PVE host itself, and then bind-mount it into the LXCs with that super-duper-confusing UID/GID mapping stuff.
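Something like this is what I imagine, if I'm reading the Proxmox docs right (the pool name rpool and CT 101 are just placeholders for my setup, so no idea if this is actually correct):

Bash:
# create a dataset on the pool; ZFS mounts it on the host automatically
zfs create rpool/shared

# bind-mount it into the container (repeat for each CT)
pct set 101 -mp0 /rpool/shared,mp=/mnt/shared

# an unprivileged CT maps root to UID/GID 100000 on the host by default,
# so the host directory has to be owned accordingly
chown -R 100000:100000 /rpool/shared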

But... is that right? How would I do it? Can you give me some pointers if that's possible?

Otherwise I might as well just go with a VM running the containers, but what I dislike is the additional kernel overhead.
 
That additional kernel overhead is what gives you the OS separation, so you don't have to go through all the confusing UID/GID stuff. If you use a lightweight distro like Alpine for your Docker host, the overhead is pretty insignificant: you are talking 256-512 MB of RAM and 1 GB of disk space. Debian without a desktop is in the same range.

I stopped using LXCs once I figured out how to use the Docker NFS volume driver. Now my Docker containers can access my Synology directly, and there is no need to mess with fstab or mount any NFS shares on the host. https://phoenixnap.com/kb/nfs-docker-volumes

PS: I do something similar with Kubernetes using the NFS CSI driver, but I am only using Kubernetes for learning at this point. All of my self-hosted apps run on Docker, with the exception of WordPress. WordPress is just easier to manage in a VM, in my opinion.
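For anyone curious, the Kubernetes side looks roughly like this: a StorageClass backed by the NFS CSI driver (the server IP and share path here are placeholders, adjust them to your NAS):

YAML:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io   # requires the csi-driver-nfs addon to be installed
parameters:
  server: 10.10.10.4          # NFS server address
  share: /volume1/k8s         # exported directory on the NAS
reclaimPolicy: Retain
mountOptions:
  - nfsvers=4

PVCs that reference this class then get their volumes provisioned as subdirectories of the share.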
 
Alright, that is what I actually went with yesterday: installed Debian 13, and the RAM/disk usage is pretty much insignificant. Docker on it is running the apps very smoothly, just like it did with Ubuntu before.

Hmm, NFS driver sounds like it could be cool. Will definitely look into that. Thanks!
 
As an example, here is the docker compose I use for Vaultwarden

YAML:
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    networks:
      - nginx
    environment:
      DOMAIN: "https://vaultwarden.yourdomain.com" # use your own domain
      SIGNUPS_ALLOWED: "true" # Deactivate this with "false" after you have created your account so that no strangers can register
      ADMIN_TOKEN: "123456789" # use your actual admin token
    volumes:
      - type: volume
        source: vaultwarden # must match the volume name below
        target: /data       # path inside the container where this data gets mounted
        volume:
          nocopy: true
    ports:
      - 8080:80             # use whatever port you want to map this to

networks:
  nginx:
    external: true          # I set up nginx separately (beforehand) with its own docker compose, then add services to it

volumes:
  vaultwarden: # name this whatever you want, but it has to match the source above
    driver_opts:
      type: "nfs"
      o: "vers=4,addr=10.10.10.4,nolock,soft,rw" # use your correct IP address
      device: ":/volume1/vaultwarden"            # use your correct directory/NFS share
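The external network just has to exist before you bring this stack up; if you don't already run nginx this way, creating it is one command:

Bash:
docker network create nginx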
 
Just wanted to say thanks again: tried it out, works like a charm! And I really like the fact that I can throw all those entries out of fstab now and let the host just be a host. I added a separate disk for the Docker container configs, plus my data drive, and it all seems to work like clockwork.
 
I'll come back to this one more time. Unfortunately, I spent the better part of today trying to fix the container's missing permissions: it had no access to the NFS share when mounted via docker compose. Apparently there is some issue with the UID and GID mapping, and I, for the life of me, couldn't figure it out.

Besides, I have read somewhere that NFS in Docker containers does not actually make a direct connection to the NFS share; Docker makes the host mount it in the background anyway, so the data still flows through the host. Which makes sense, really, as Docker, just like an unprivileged LXC, can't mount NFS itself; it's just a host directive, as far as I understand it. That means I can just as well mount it on the host myself, which also means all containers can use that one path if needed. Anyway, I returned to fstab, and now all my permissions are back and good to go.
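For reference, the fstab entry I went back to looks something like this (same IP and share as in the compose example above; the mount point is just what I picked):

Code:
10.10.10.4:/volume1/vaultwarden  /mnt/vaultwarden  nfs4  soft,rw  0  0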
 
What are you using for a NAS? Fixing permissions is not hard (although frustrating until you get the hang of it), and you are going to need to learn how to do it regardless of how you mount the NFS share. It won't be any different with a bind mount on your host; you will still run into permission issues. I have used both TrueNAS and Synology to serve NFS shares, and there are some quirks to each.