Setting up LXC Container with (multiple) directory storages?

For the original problem of using a gluster volume for containers, you could perhaps activate the NFS share directly on the gluster volume, or use NFS-Ganesha for it ( https://download.nfs-ganesha.org ), but from my experience the performance with Ganesha is very poor.

And after that, you can mount the gluster storage as NFS and use it directly for containers.
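Only a rough sketch of what I mean (volume name, server address and storage id are placeholders, and depending on the gluster version the built-in NFS server may not be available at all):

Code:
# enable the gluster-internal NFS export on the volume (or serve it via NFS-Ganesha instead)
gluster volume set myvol nfs.disable off
# then add it as an NFS storage in PVE and allow container content on it
pvesm add nfs gluster-nfs --server 192.168.1.10 --export /myvol --content rootdir,images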
I also read about that, but then I found the suggestion to "simply create a Directory Storage on the gluster PVE mountpoint as base directory", and basically this works well (or at least I would not expect that this is part of the problem).
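Roughly like this, just for reference (storage name and mountpoint here are only examples, not my exact setup):

Code:
# the gluster volume is already mounted by PVE under /mnt/pve/<gluster-storage>
# put a Directory Storage on top of that mountpoint and allow container content on it
pvesm add dir glusterdir --path /mnt/pve/mygluster --content rootdir,images --shared 1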

I already have an NFS share from my NAS there (basically for images and backups); I could try it with that if you think it could change anything ...
 
using "storage:0,size=0" should work for directory (and CIFS and NFS storages), provided the unprivileged user/group id(s) are allowed to write on that storage. this is often a problem for CIFS/NFS shares, and the errors are not very helpful unfortunately ;)
 
For me it is directory (and mounted by PVE), and you see the logs and attempts from above. The permission errors were mostly gone after I changed ownership and such on the dirs, but some were still there ... and in the end it was still not working. So I think there is something more strange going on :-(
 

Code:
# pct create 123456 iso:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz -rootfs local:0,size=0
extracting archive '/mnt/pve/iso/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz'
Total bytes read: 669562880 (639MiB, 154MiB/s)
Detected container architecture: amd64
Creating SSH host key 'ssh_host_ed25519_key' - this may take some time ...
done: SHA256:q8//B7WmNYrV1G6YjhXAZ7rdEkHklfFQXh9WfG4XuxU root@localhost
Creating SSH host key 'ssh_host_ecdsa_key' - this may take some time ...
done: SHA256:a+aInAwb9SEnZcF335BTvnMLzcV77O8/+TlApSeAudQ root@localhost
Creating SSH host key 'ssh_host_dsa_key' - this may take some time ...
done: SHA256:y3+wlxL/P5tBcOmCVv6gHmkGFRO8Kbgumb9Cc65yf6Y root@localhost
Creating SSH host key 'ssh_host_rsa_key' - this may take some time ...
done: SHA256:ZW6xMY2WzlPaqAyVngQTMxaKyCV75F0aEPN7QfhnF2U root@localhost
# pct destroy 123456

# pct create 123456 iso:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz -rootfs local:0,size=0 -unprivileged
extracting archive '/mnt/pve/iso/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz'
Total bytes read: 669562880 (639MiB, 164MiB/s)
Detected container architecture: amd64
Creating SSH host key 'ssh_host_rsa_key' - this may take some time ...
done: SHA256:sPq7KllrUCfT+ihPRICStT9AuALedovo3AI5WZ0GooM root@localhost
Creating SSH host key 'ssh_host_ecdsa_key' - this may take some time ...
done: SHA256:FvEQdESZ6OGA5FvQ9Oica1rlc10ZYBUHmXyZUBCOrM4 root@localhost
Creating SSH host key 'ssh_host_ed25519_key' - this may take some time ...
done: SHA256:Q6QMElfWa1sSh9B8g+pMkb3Tj8a4992qpGBVFtTLq9E root@localhost
Creating SSH host key 'ssh_host_dsa_key' - this may take some time ...
done: SHA256:hJMZ1OIWf/gXDf+dRGZ1nykwp9e3bOtTWBHt7ABsoig root@localhost
pct destroy 123456

Works as expected here - so the problem must be related to either your template (containing files that cannot be restored unprivileged) or your storage (not allowing unprivileged users to write).
 
Hm... thank you for that answer ... could it be that this is the reason (found while googling):

The GlusterFS FUSE client uses the "allow_other" mount option, which has to be explicitly enabled for non-privileged users by adding a line "user_allow_other" to /etc/fuse.conf.

See mount.fuse(8),
http://man7.org/linux/man-pages/man8/mount.fuse.8.html#CONFIGURATION

Please make sure this setting is in place and try the non-root mount again.

The /etc/fuse.conf in my Proxmox hosts looks like:

Code:
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)

# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#mount_max = 1000

# Allow non-root users to specify the allow_other or allow_root mount options.
#user_allow_other

Can/Should I change that without any other side-effects?
 
Mounting already happens as the root user. I don't think running containers directly on glusterfs is a good idea either...
 
Why not? I have read that the performance should be OK in the meantime?

The performance for many small files and/or many small write/file creation/file rename operations is abysmal, and that is likely exactly the workload you'd generate by running a container directly from a directory stored on glusterfs. With container images it's still not that good (but more parts are already handled on the filesystem layer), yet more easily tuned. With VMs, you at least get the default 4k block size as a lower bound ;)
 
Hm ... I think I will just stay with the "container raw images" I have set up now, stored on glusterfs ... Only a Redis Sentinel setup is running inside them at the moment, so that should be acceptable ;-)
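(For the record, creating such a container with a fixed-size raw image on the gluster-backed Directory Storage looks roughly like this - storage name and size are just examples:)

Code:
# allocates an 8 GiB raw image on the directory storage instead of size=0
pct create 123456 iso:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz -rootfs glusterdir:8 -unprivileged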

Thank you!
 
