Different ways to store and share data between containers (Pros/Cons)

serveronion
Oct 28, 2022
Hello Proxmox Community,
I have some questions about sharing data between containers. While researching I found many ways to do this and because I’m relatively new to Proxmox and maybe don’t understand the implications of doing certain things, I wanted to ask if there are any problems with the configurations listed here.
I hope you can tell me the advantages and disadvantages of these different configurations so that maybe other people reading this can decide easily which configuration is right for them.

1. NFS
One suggestion that came up often was to create a network share to store all the data and give the CTs access to that share.

Pro:
  • can be used across different nodes
  • can be mounted in VMs
  • can also be accessed from outside of Proxmox
Con:
  • if you are only dealing with a single node and only use CTs (like I do in my homelab), it feels very unintuitive to push data through the network stack when it all lives on the same hardware
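A minimal sketch of this approach (the export path, subnet, and server IP are made-up examples): export a directory on the NFS server, then mount it from a VM or a privileged CT.

```shell
# On the NFS server, export a directory (hypothetical path and subnet).
# Append to /etc/exports:
#   /export/shared 192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -ra            # re-read /etc/exports

# On the client (a VM, or a *privileged* CT with the NFS feature enabled):
apt install nfs-common
mkdir -p /mnt/shared
mount -t nfs 192.168.1.10:/export/shared /mnt/shared
```

Note that unprivileged CTs cannot mount NFS shares themselves; the usual workaround is to mount the share on the host and bind-mount it into the CT as in 2.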
2. Use bindmounts
You can use bindmounts to mount a directory on the host into a directory in an LXC CT.
The command is:
Code:
pct set [container-id] -mp0 /mnt/bindmounts/[mountpoint on host],mp=[mountpoint in CT]
You may need to do
Code:
chmod -R 0777 [foldername]
on the host to allow the CTs to write to it.

pro:
  • Very easy and fast to do
  • great for sharing small files between containers
con:
  • does not work for VMs
  • can only be used for CTs that are on the same node
  • the data is stored on the host's root partition (100 GB in my setup). A large folder (for example, 250 GB that you want to share between a Nextcloud CT and a Samba CT) will not fit on the root partition
    • This can be solved by resizing the root partition
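Putting the commands above together, a minimal worked example (the container IDs 101/102 and the paths are made-up):

```shell
# Create a shared directory on the host
mkdir -p /mnt/bindmounts/shared

# Bind-mount it into two hypothetical containers (IDs 101 and 102)
pct set 101 -mp0 /mnt/bindmounts/shared,mp=/mnt/shared
pct set 102 -mp0 /mnt/bindmounts/shared,mp=/mnt/shared

# Coarse permission fix so both CTs can write to it
# (mapping UIDs/GIDs properly would be the cleaner alternative)
chmod -R 0777 /mnt/bindmounts/shared
```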
3. Use a separate disk
An easy solution is to get another disk, mount it on the host and use bindmounts as described above to give containers access to the data.

pro:
  • The size of the root partition does not matter because the data goes on the extra drive. If you need more space, you can get a bigger drive.
con:
  • you need to have a second dedicated drive just for the data
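A rough sketch of this setup, assuming the extra disk shows up as /dev/sdb (the device name, mountpoint, and CT ID are all made-up):

```shell
# Format the new disk with ext4 (partitioning skipped for brevity)
mkfs.ext4 /dev/sdb

# Mount it on the host and make the mount persistent across reboots
mkdir -p /mnt/datadisk
mount /dev/sdb /mnt/datadisk
echo '/dev/sdb /mnt/datadisk ext4 defaults 0 2' >> /etc/fstab

# Then bind-mount it into a CT as described in 2.
pct set 101 -mp0 /mnt/datadisk,mp=/mnt/data
```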

4. mount a CT disk on the host
You can mount a CT disk on the host and use it as if it were an external drive (see this thread).
Like with bindmounts, this is very easy to do:
Code:
mount /dev/mapper/pve-vm--[ID]--disk--0 [mountpoint]
To unmount, just do
Code:
umount [mountpoint]

pro:
  • you can use the features that a LXC CT disk provides (you can expand it later on and it only uses the space it actually needs)
  • you don’t need another drive and can just use the space in lvm-thin
  • AFAIK you can also include the disk in the backup job
con:
  • you have to create a new CT for every disk you want to have (the rootfs of a freshly created CT is already formatted as ext4, which is what makes it mountable)
  • this setup feels very unconventional because you are using the root disk of a container to store data that is then linked back into other CTs via bindmounts
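The whole flow might look like this (CT IDs and paths are made-up; CT 200 exists only to own the disk and is never started):

```shell
# Mount the rootfs of the (stopped) data CT 200 on the host
mkdir -p /mnt/ctdisk
mount /dev/mapper/pve-vm--200--disk--0 /mnt/ctdisk

# Hand the data to another CT via a bindmount
pct set 101 -mp0 /mnt/ctdisk,mp=/mnt/data

# Later, after removing the bindmount again:
umount /mnt/ctdisk
```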
5. mount a VM disk on the host
Just like you can mount a CT's rootfs, you can also mount a VM disk. If the disk is new, you need to format it first:
Code:
mkfs.ext4 /dev/mapper/pve-vm--[ID]--disk--x
Then you can mount the disk:
Code:
mount /dev/mapper/pve-vm--[ID]--disk--x [mountpoint]
If that doesn't work, you may need to activate the volume first:
Code:
kpartx -av /dev/pve/vm-[ID]-disk-x

pro:
  • You can create all the disks under a single VM and simply never turn it on
  • Resizing the disks is very easy to do. Resize the disk via the GUI, then resize the filesystem manually:
    Code:
    resize2fs /dev/mapper/pve-vm--[ID]--disk--x [Size in GiB]G
    If that doesn’t work, run the suggested command:
    Code:
    e2fsck -f /dev/mapper/pve-vm--[ID]--disk--x
    and then try again
con:
  • none :)
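For completeness, a sketch of the full workflow under this scheme (the storage name local-lvm, VM ID 300, and the size are made-up; the VM is never started):

```shell
# Allocate a new 250 GiB disk for the (never-started) VM 300 on lvm-thin
pvesm alloc local-lvm 300 vm-300-disk-1 250G

# Format and mount it on the host
mkfs.ext4 /dev/mapper/pve-vm--300--disk--1
mkdir -p /mnt/vmdisk
mount /dev/mapper/pve-vm--300--disk--1 /mnt/vmdisk

# Share the data with CTs via bindmounts as in 2.
pct set 101 -mp0 /mnt/vmdisk,mp=/mnt/data
```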

Please let me know what you think of the different ways listed here. If you have another solution, please leave a comment explaining it and I will add it to this list.
 
Thanks for the post, it was helpful. Can you explain the last one, i.e. mounting a VM disk on the host and never turning the VM on?
Is it for sharing a VM disk with the host?
 
You can use that to access data on a VM disk, but only while the VM is turned off. Running the VM while its disk is mounted on the host would likely corrupt the filesystem and break the VM.
The only way I know to share data between the host and a running VM is the 1st way described in my post, running a network share (which has the disadvantages I described above).
The idea with the last example was to use a VM disk to store data instead of storing it on the root partition of the Proxmox host itself, but you can't use it to share data between a running VM and the host.
I will try to rewrite my post above and add it to the wiki page about storage configurations at some point.
 
Thank you for this post, I found it incredibly helpful. I have some clarifying questions regarding points #3 and #5.

If I already have an LXC CT pointing to a bind mount on my local host (#2), how could I point that to a different disk for data storage while keeping the same directory tree? An example: /media/movies on the local host. I would like to move it because of insufficient storage on the local host; a blank SSD I have offers four times as much space.

In the same vein for #5, is there a template or best practice for creating a vm disk to be shared?

For anyone wanting additional information on #2, I used this wiki post https://pve.proxmox.com/wiki/Unprivileged_LXC_containers .
 
@serveronion Thanks a lot for this summary. It's working so far, but I have one question left.
I have 2 SSDs in my home server. The first one is for containers, templates, etc. The second one is for storage...

I created a container with an additional mount point mp0 on the 2nd SSD, where I reserved some space. When I mount this mp0 to /mnt/mountxy on the host, do I lose that space on the 1st SSD? I assume not, and that this is just a "link" to that mp0?
 
