Easiest way to share storage across LXC containers

cibernox · New Member · Jul 21, 2023
I'm fairly new to Proxmox.

On my home lab (an Intel NUC with a single 2TB SSD) I installed Proxmox 8.

My goal is to run:
- A Samba server to act as a home NAS (I know a single drive NAS is not the safest, but I could perform regular backups to amazon S3)
- A photo library manager (Immich)
- Home Assistant
- Plex
- Maybe NextCloud
- Other minor stuff.

I did manage to do all that, but each container has its own disk space. Ideally I'd like to have a volume or folder that is shared with all containers (at least the ones that need mass storage, like Plex and Immich) and that is also mounted and shared in the Samba container.

What's the best way to achieve this?
I thought I could assign almost all the available space to the Samba container and try to mount that NFS storage in all containers, but I couldn't make it work in unprivileged containers.
I then discovered that you can create a folder on the host volume and mount it via so-called bind mounts in as many containers as I want (at least if they are privileged), but when I installed Proxmox I didn't anticipate this and assigned only 100G to the host, and I don't see a way to resize that.

I feel that this shouldn't be so complicated and I'm missing something. How would you mount shared storage in many different containers?
 
Here is an example:
Code:
mp0: /smb/Other,mp=/smb_storage/Other,replicate=0
mp1: /smb/Media,mp=/smb_storage/Media,replicate=0
mp2: /smb/Data,mp=/smb_storage/Data,replicate=0

Left is the path on the PVE host, right is the mountpoint inside the container.
replicate=0 isn't strictly needed, but I set it anyway to tell Proxmox not to replicate or migrate that mountpoint. (It wouldn't anyway.)

Privileged vs. unprivileged: it doesn't matter.

But: with unprivileged containers you need to chown the shared directory to 100000:100000 (host-side UIDs are shifted by 100000), while privileged containers use normal UIDs.

That's the only difference, and in practice it doesn't make much difference. You just need to create users with the same UID and GID across the LXC containers.

Let's say 100033:100033 is the UID/GID of your folder on the PVE host, and 33:33 is the user/group you created inside your unprivileged container.

If you want to mix access between a privileged and an unprivileged container, you can create the same user with UID/GID 100033:100033 in the privileged one, because in a privileged container users don't get shifted by +100000.
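The ID shift described above can be sketched like this (UID 33, e.g. www-data, and the `/smb/Media` path are just example values, not taken from any real setup):

```shell
# Default unprivileged mapping: container UID/GID N -> host UID/GID N + 100000
container_uid=33                      # e.g. www-data inside the container (example)
host_uid=$((container_uid + 100000))  # the owner the folder needs on the PVE host
echo "$host_uid"                      # prints 100033

# So on the PVE host you would run something like (as root, example path):
# chown -R 100033:100033 /smb/Media
```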

Hopefully that was understandable; maybe I didn't choose the right words. :-)
Good luck :-)
 
Thanks for answering.
What's not clear to me is... what are `/smb/Other` and `/smb/Media`? Proxmox Volumes? Folders? NFS folders?

I think I need clarification on how to create whatever those are.
 
Those are just folders on my PVE host itself.

But the folders can be located anywhere: on a ZFS dataset/pool, on NFS, or wherever that directory points to on your host.
 
The problem is that my PVE host itself only has 100G right now, and I can't find a way to increase that.
 
You've got two storages: "local" for files/folders, and "local-lvm", which is an LVM-thin pool that can only store block devices (i.e., LXC/VM virtual disks).
So "local-lvm" can't be used directly for bind-mounting.

Both options require doing things manually on the CLI:
A) Create a new thin volume on your thin pool, format that LV with the filesystem of your choice, and mount it on the PVE host. Files/folders on that mountpoint will then consume space from the "local-lvm" storage.
B) You can't shrink an LVM-thin pool. If you want to increase the size of your "local" storage, you would need to back up all your VMs/LXCs, destroy the thin pool with everything on it, extend your "root" LV, extend the ext4 filesystem on that LV, create a new thin pool from the remaining space, and restore your VMs/LXCs.

I would prefer option A.
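Option A could look roughly like this on the PVE host. The names here are assumptions for illustration (thin pool "pve/data", new LV "sharedstorage", 500G, mountpoint /mnt/sharedstorage); adapt them to your setup, and run everything as root:

```shell
# Create a 500G thin volume on the existing "pve/data" thin pool
lvcreate -T pve/data -V 500G -n sharedstorage

# Format it and mount it somewhere on the host
mkfs.ext4 /dev/pve/sharedstorage
mkdir -p /mnt/sharedstorage
mount /dev/pve/sharedstorage /mnt/sharedstorage

# Make the mount persistent across reboots
echo '/dev/pve/sharedstorage /mnt/sharedstorage ext4 defaults 0 2' >> /etc/fstab
```

The directory under /mnt/sharedstorage can then be bind-mounted into as many containers as you like.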
 
And you can use the more secure unprivileged LXCs with SMB/NFS via a workaround. You aren't allowed to mount NFS/SMB shares directly inside an unprivileged LXC, but you can mount them on the PVE host and then bind-mount those mountpoints from the host into the unprivileged LXC.
This works fine, but it's quite annoying because of the user/group remapping when working with NFS, and those LXCs then won't work when you migrate or restore them on another host unless you also mount the NFS/SMB shares in the same locations there first.
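As a rough sketch of that workaround (the export path, server IP, container ID 101, and mountpoints are all made-up example values):

```shell
# On the PVE host: mount the NFS share (or add it to fstab / a systemd mount unit)
mkdir -p /mnt/nas-media
mount -t nfs 192.168.1.50:/export/media /mnt/nas-media

# Then bind-mount that host mountpoint into the unprivileged container,
# either by adding a line to /etc/pve/lxc/101.conf:
#   mp0: /mnt/nas-media,mp=/media
# or via the CLI:
pct set 101 -mp0 /mnt/nas-media,mp=/media
```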
 
"Create a new thin volume on your thin pool" -> I'm googling how to do that, but everything I find is about creating new thin pools, which I can't do because it seems to require a new disk.
 
It seems the easiest way would be to create a disk, mount it in the container that runs Samba, and expose it there, then mount that share in Proxmox itself to share it with the other containers.

However, that doesn't seem to work. My computers do have access to the NAS via Samba, but Proxmox doesn't seem to access it properly.
 

Attachment: Screenshot 2023-07-24 at 12.13.23.jpg
For hosting VMs/containers on, I'm not a big fan of Samba, but it should work without issues if you set it up right.
However, first something else...

Is there any chance your NAS is some sort of Synology or similar?
Because then you could do it via iSCSI. iSCSI provides some benefits, especially in regard to performance, but it's harder to set up if you're a newbie.

Cheers
 
I don't really have a NAS. The NAS is an LXC container inside Proxmox itself. At the end of the day, all I want is for several containers to be able to read and write the same folder; I don't care in the slightest how to go about it.
I was able to do that with bind mounts, but since that shared folder lives in the space allocated to Proxmox itself, for which I dedicated 100G, and I can't find an easy way to resize that partition without reinstalling everything, I was trying the Samba option.
 
Like I said, creating a thin LV on that "local-lvm" is what I would do. If you don't want to run insecure privileged LXCs, you would need to bind-mount that SMB share anyway, and by directly bind-mounting an ext4/xfs-formatted thin LV you skip the SMB overhead.
The SMB approach would only be useful if you also wanted VMs to access the same folders (as you can't bind-mount into VMs).
 
Oh, I tried, but I must be doing something wrong.
I created a new volume with `lvcreate -n sharedstorage -V 500G pve/data`. Then I formatted it as ext4 with `mkfs.ext4 /dev/pve/sharedstorage` and added it to the fstab with `echo '/dev/pve/sharedstorage /var/lib/sharedstorage ext4 defaults 0 2' >> /etc/fstab`.

I can see it if I run `lvs`
```
LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
data          pve twi-aotz--    1.71t             18.03  0.80
root          pve -wi-ao----   96.00g
sharedstorage pve Vwi-aotz--  500.00g data        1.77
swap          pve -wi-ao----    8.00g
vm-100-disk-0 pve Vwi-aotz-- 1000.00g data        3.19
vm-100-disk-2 pve Vwi-aotz--    2.00g data        51.83
vm-101-disk-0 pve Vwi-aotz--    4.00m data        0.00
vm-101-disk-1 pve Vwi-aotz--   32.00g data        44.56
vm-103-disk-0 pve Vwi-aotz--    4.00g data        62.68
vm-104-disk-0 pve Vwi-aotz--  256.00m data        54.22
vm-110-disk-0 pve Vwi-aotz--  350.00g data        72.06
vm-115-disk-0 pve Vwi-aotz--    8.00g data        24.30
vm-120-disk-0 pve Vwi-aotz--  100.00g data        3.61
```

But I don't see it anywhere in the UI to be selected. It doesn't help that I have no idea what a thin pool, LVM, or LVM-thin are. It's obvious that sysadmin work is not my strong suit.
 
It's normal that PVE won't show it, as it's not managed by PVE. But you can still bind-mount it by editing your LXC config file like described here:
https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
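Assuming the LV is mounted at /var/lib/sharedstorage like in your fstab line, the bind mount could be added like this (container ID 100 and the in-container path /shared are made-up examples):

```shell
# Either add a line to the container's config, /etc/pve/lxc/100.conf:
#   mp0: /var/lib/sharedstorage,mp=/shared
# or do the same via the CLI on the PVE host:
pct set 100 -mp0 /var/lib/sharedstorage,mp=/shared

# For an unprivileged container, remember to chown the host directory
# to the shifted IDs, e.g.:
# chown -R 100033:100033 /var/lib/sharedstorage
```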

I have no idea what a thin-pool, a LVM or a LVM-thin are. It's obvious that sysadmin is not my strong suit.
Then you know what to learn next. ;)
You really should understand the storage you use, or you'll be pretty screwed at the first problem, probably losing data you put a lot of work into.
 
