Hello Proxmox Community,
I have some questions about sharing data between containers. While researching, I found many ways to do this, and because I’m relatively new to Proxmox and may not understand the implications of certain choices, I wanted to ask whether there are any problems with the configurations listed here.
I hope you can tell me the advantages and disadvantages of these configurations so that other people reading this can easily decide which one is right for them.
1. NFS
One suggestion that came up often was to create a network share to store all the data and give the CTs access to that share.
Pro:
- can be used across different nodes
- can be mounted in VMs
- can also be accessed outside of Proxmox
Con:
- when you’re only dealing with one node and only use CTs (like I do in my homelab), it is very unintuitive to use the network to share data that lives on the same hardware
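For completeness, here is a minimal sketch of the NFS approach on the host. The server address, export path, and CT ID are placeholders I made up, not values from any real setup:

```shell
# Assumed setup: an NFS server at 192.168.1.10 exporting /export/shared.
apt install nfs-common                            # NFS client utilities on the PVE host
mkdir -p /mnt/nfs/shared
mount -t nfs 192.168.1.10:/export/shared /mnt/nfs/shared
# then bind-mount the NFS directory into CT 101 (ID is a placeholder)
pct set 101 -mp0 /mnt/nfs/shared,mp=/mnt/shared
```

Alternatively, the share can be added as Proxmox storage through the GUI, which avoids the manual mount.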
2. Bindmounts
You can use bindmounts to mount a directory on the host to a directory inside an LXC CT.
The command is:
Code:
pct set [container-id] -mp0 /mnt/bindmounts/[mountpoint on host],mp=[mountpoint in CT]
You may need to do
Code:
chmod -R 0777 [foldername]
on the host to allow the CTs to write to it.
Pro:
- very easy and fast to do
- great for sharing small files between containers
Con:
- does not work for VMs
- can only be used for CTs that are on the same node
- the data is stored on the host root partition (100 GB with my setup). If you have a large folder (for example, a 250 GB folder that you want to share between a Nextcloud CT and a Samba CT), the root partition is not big enough to hold all the data
- this can be solved by resizing the root partition
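To make the placeholders concrete, here is a hypothetical end-to-end example sharing one host directory between two CTs (the IDs 101 and 102 and the paths are made up):

```shell
mkdir -p /mnt/bindmounts/shared
chmod -R 0777 /mnt/bindmounts/shared      # deliberately permissive; fine for a homelab test
pct set 101 -mp0 /mnt/bindmounts/shared,mp=/mnt/shared
pct set 102 -mp0 /mnt/bindmounts/shared,mp=/mnt/shared
```

Both containers then see the same data under /mnt/shared.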
3. Extra drive + bindmounts
An easy solution is to get another disk, mount it on the host, and use bindmounts as described above to give containers access to the data.
Pro:
- the size of the root partition does not matter because the data lives on the extra drive; if you need more space, you can get a bigger drive
Con:
- you need a second dedicated drive just for the data
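A sketch of preparing the extra drive, assuming it shows up as /dev/sdb with an existing partition (check with lsblk first; the device name and paths are placeholders):

```shell
mkfs.ext4 /dev/sdb1                        # destroys existing data on the partition!
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data
# persist the mount across reboots; UUID= is more robust than the /dev/sdb1 name
echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /mnt/data ext4 defaults 0 2" >> /etc/fstab
pct set 101 -mp0 /mnt/data,mp=/mnt/data    # then bind-mount as in section 2
```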
4. Mount a CT disk on the host
You can mount a CT disk on the host and use it as if it were an external drive (see this thread).
Like with bindmounts, this is very easy to do:
Code:
mount /dev/mapper/pve-vm--[ID]--disk--0 [mountpoint]
To unmount, just do
Code:
umount [mountpoint]
Pro:
- you can use the features that an LXC CT disk provides (you can expand it later, and it only uses the space it actually needs)
- you don’t need another drive and can just use the space in lvm-thin
- AFAIK you can also include the disk in the backup job
Con:
- you have to create a new CT for every disk you want (the disk already has to be formatted to ext4 so that you can mount it)
- this setup feels very unconventional because you are using the root disk of a container to store data that is then linked back into other CTs via bindmounts
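With a concrete (made-up) CT ID of 200, the flow might look like this. One caution worth adding: the CT should be stopped before its disk is mounted on the host, because mounting the same filesystem in two places at once can corrupt it:

```shell
pct stop 200                                # make sure the CT is not using the disk
mkdir -p /mnt/ctdata
mount /dev/mapper/pve-vm--200--disk--0 /mnt/ctdata
# ... put the shared data here and bind-mount /mnt/ctdata into other CTs ...
umount /mnt/ctdata
```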
5. Mount a VM disk on the host
Just like you can mount a CT’s rootfs, you can also mount a VM disk. If the disk is new, you need to format it first:
Code:
mkfs.ext4 /dev/mapper/pve-vm--[ID]--disk--x
Then you can mount the disk:
Code:
mount /dev/mapper/pve-vm--[ID]--disk--x [mountpoint]
If that doesn’t work, you may need to activate it first:
Code:
kpartx -av /dev/pve/vm-[ID]-disk-x
Pro:
- you can create all the disks under a single VM and just never turn it on
- resizing the disks is very easy to do: resize the disk via the GUI, then resize the filesystem manually:
Code:
resize2fs /dev/mapper/pve-vm--[ID]--disk--x [Size in GiB]G
If resize2fs asks for a filesystem check first, run:
Code:
e2fsck -f /dev/mapper/pve-vm--[ID]--disk--x
Con:
- none
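As an end-to-end sketch (the VM ID 900 and the mountpoint are placeholders): after growing the disk in the GUI, an offline resize could look like this:

```shell
umount /mnt/vmdata                           # take the filesystem offline
e2fsck -f /dev/mapper/pve-vm--900--disk--0   # resize2fs wants a clean check before resizing
resize2fs /dev/mapper/pve-vm--900--disk--0   # with no size given, grows to fill the whole LV
mount /dev/mapper/pve-vm--900--disk--0 /mnt/vmdata
```

Leaving out the size argument makes resize2fs grow the filesystem to the full size of the enlarged volume, which avoids typos in the GiB value.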
Please let me know what you think of the different ways listed here. If you have another solution, please leave a comment explaining it and I will add it to this list.