[SOLVED] Running an LXC container with PBS inside, using bind mounts for an external drive with a zpool from the host.

Jarvar

Well-Known Member
Aug 27, 2019
I tried to create a container instead and bind-mount the zpool. It uses a lot fewer resources but messes with permissions.
https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
I followed the instructions, but either it didn't work or I didn't wait for the
chown -R 1005:1005 /mnt/bindmounts/shared
to fully complete.
Has anybody verified this to work properly?
Is this the best method?
The host is PVE


Code:
pveversion
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-15-pve)

Code:
pct conf 205 --current
arch: amd64
cores: 4
description: uid map: from uid 0 map 1005 uids (in the ct) to the range starting 100000 (on the host), so 0..1004 (ct) → 100000..101004 (host)
 we map 1 uid starting from uid 1005 onto 1005, so 1005 → 1005
 we map the rest of 65535 from 1006 upto 101006, so 1006..65535 → 101006..165535
features: nesting=1
hostname: csd-tor-pbs002
memory: 4096
mp0: /usbpool002/dataset021/store021,mp=/store021
net0: name=eth0,bridge=vmbr0,firewall=1,gw=172.16.10.1,hwaddr=0A:72:FC:B6:0E:B4,ip=172.16.10.233/24,type=veth
ostype: debian
rootfs: storage:subvol-205-disk-0,size=100G
swap: 4096
unprivileged: 1

Inside the container:

Code:
id backup
uid=34(backup) gid=34(backup) groups=34(backup),26(tape)

The mount is accessible through the shell on the container but can't be accessed as a datastore.
 
I ran chown -R 1005:1005 /usbpool002/dataset021/store021, then:

Code:
root@tks-pve-02:~# pct start 205
lxc_map_ids: 3701 newuidmap failed to write mapping "newuidmap: uid range [0-1005) -> [100000-101005) not allowed": newuidmap 2440002 0 100000 1005 1005 1005 1 1006 101006 64530
lxc_spawn: 1788 Failed to set up id mapping.
__lxc_start: 2107 Failed to spawn container "205"
startup for container '205' failed

and more

Code:
lxc-start -F -n 205
lxc-start: 205: ../src/lxc/conf.c: lxc_map_ids: 3701 newuidmap failed to write mapping "newuidmap: uid range [0-1005) -> [100000-101005) not allowed": newuidmap 2451434 0 100000 1005 1005 1005 1 1006 101006 64530
lxc-start: 205: ../src/lxc/start.c: lxc_spawn: 1788 Failed to set up id mapping.
lxc-start: 205: ../src/lxc/start.c: __lxc_start: 2107 Failed to spawn container "205"
lxc-start: 205: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 205: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options
 
If you follow the instructions given in the document that you linked to, things should work. That's how I set up PBS. Works quite nicely in a container as opposed to a full-on VM.

But I don't see why you need to recursively change ownership, nor do I understand why your configuration doesn't show the id mappings that you were supposed to do.
 
If you follow the instructions given in the document that you linked to, things should work. That's how I set up PBS. Works quite nicely in a container as opposed to a full-on VM.

But I don't see why you need to recursively change ownership, nor do I understand why your configuration doesn't show the id mappings that you were supposed to do.
I will try it again. I think the id backup command was issued before I added the uid mapping and recursively changed ownership.
You didn't need to do that?
this is in the document:

As a final step, remember to change the owner of the bind mount point directory on the host, to match the uid and gid that were made accessible to the container:

chown -R 1005:1005 /mnt/bindmounts/shared
 
I recommend going step-by-step and sanity checking each incremental step as you go. If I understand correctly, you have an existing drive that was previously used with a VM and now you want to bind-mount it into a container. If so, then the drive should already have reasonable permissions for all its files, and when you make it accessible to the container, the user mappings should make sure that everything looks correct inside the container, too.

But it's a little difficult to follow along with what you are doing, because I don't fully follow what changes you have already made. So, just go slowly, check the results -- and if you want to ask for help later, then carefully document each step. Makes it easier for me to guess where you might still have to make changes.

Also, when everything is said and done, consider editing your container's configuration and moving from "mp0: ..." to an equivalent "lxc.mount.entry: ..." instead. That tells PVE that this is external storage that isn't supposed to be managed by Proxmox. If you do that, you regain the ability to create snapshots of your container, which is normally not possible if there are bind mounts. But this is an advanced topic. Let's figure out your immediate issue first and then come back to this final step.
 
I followed the document exactly to add the storage.
The previous drive was connected through USB; I added the hardware to the VM, then set up the pool through the VM.
When trying to access it through the container,
I imported the pool on the host and tried to use a bind mount:
mp0: /usbpool002/dataset062/store003,mp=/store003

I added the UID sections laid out in the document under /etc/pve/lxc/205.conf
I changed the /etc/subuid and /etc/subgid with root:1005:1

then chown -R 1005:1005 /usbpool002/dataset062/store003

When trying pct start 205 I get this:

Code:
pct start 205
lxc_map_ids: 3701 newuidmap failed to write mapping "newuidmap: uid range [0-1005) -> [100000-101005) not allowed": newuidmap 1358882 0 100000 1005 1005 1005 1 1006 101006 64530
lxc_spawn: 1788 Failed to set up id mapping.
__lxc_start: 2107 Failed to spawn container "205"
startup for container '205' failed

pct conf 205 --current is

Code:
pct conf 205 --current
arch: amd64
cores: 4
description: uid map: from uid 0 map 1005 uids (in the ct) to the range starting 100000 (on the host), so 0..1004 (ct) → 100000..101004 (host)
 we map 1 uid starting from uid 1005 onto 1005, so 1005 → 1005
 we map the rest of 65535 from 1006 upto 101006, so 1006..65535 → 101006..165535
features: nesting=1
hostname: csd-tor-pbs002
memory: 4096
mp0: /usbpool002/dataset062/store003,mp=/store003
net0: name=eth0,bridge=vmbr0,firewall=1,gw=172.16.10.1,hwaddr=0A:72:FC:B6:0E:B4,ip=172.16.10.233/24,type=veth
ostype: debian
rootfs: storage:subvol-205-disk-0,size=100G
swap: 4096
unprivileged: 1
lxc.idmap: u 0 100000 1005
lxc.idmap: g 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 1006 101006 64530
 
Thank you @zodiac
I really appreciate your help. Do you have references on how to use lxc.mount.entry? I'd be interested to move towards that but haven't found more detailed documentation about it. I've seen it mentioned here and there, though. I'm newer to containers compared to VMs.
Thank you again.
 
Why are you trying to change the user and group ids to 1005? That would only make sense if you had a user with that user id. It's merely an example that was given in that document; you have to adjust things for your own needs. If you are trying to use your disk in PBS, then the files are probably owned by "backup", and I would expect that to be user id 34. So, that's probably also what you have on your existing drive.

So, make sure you use mappings like this:

Code:
lxc.idmap: u 0 100000 34
lxc.idmap: u 34 34 1
lxc.idmap: u 35 100035 65501
lxc.idmap: g 0 100000 34
lxc.idmap: g 34 34 1
lxc.idmap: g 35 100035 65501

And of course, your /etc/subuid and /etc/subgid should now have lines like this:

Code:
root:34:1
root:100000:65536
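To double-check the arithmetic of such mappings, here is a small sketch (plain Python, not a Proxmox tool) that translates container uids to host uids for the entries above, assuming the three "u" lines:

```python
# Sketch: verify an lxc.idmap configuration covers every container uid
# and see where a given uid lands on the host.
IDMAP = [  # (container_start, host_start, count), from the "u" lines above
    (0, 100000, 34),
    (34, 34, 1),
    (35, 100035, 65501),
]

def host_uid(ct_uid):
    """Translate a container uid to the host uid it maps to."""
    for ct_start, host_start, count in IDMAP:
        if ct_start <= ct_uid < ct_start + count:
            return host_start + (ct_uid - ct_start)
    raise ValueError(f"uid {ct_uid} is unmapped; the container cannot use it")

assert host_uid(0) == 100000   # container root -> unprivileged host uid
assert host_uid(34) == 34      # 'backup' passes through unchanged
assert host_uid(35) == 100035
# The ranges should total 65536 uids for a standard container:
assert sum(count for _, _, count in IDMAP) == 65536
```

If the three ranges overlap, leave gaps, or exceed what /etc/subuid allows, the container fails to start with exactly the newuidmap error shown earlier in this thread.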
 
If your existing "mp0" line reads

Code:
mp0: /usbpool002/dataset062/store003,mp=/store003

then you can remove it, and instead add a line such as

Code:
lxc.mount.entry: /usbpool002/dataset062/store003 store003 none rbind,create=dir

I can't comment on whether those paths make sense or not. That depends on your local configuration, of course. So, just substitute as appropriate.

This hides the bind mount from PVE. It has no idea that you are messing with things behind its back and as far as it is concerned, the bind mount is not part of the container's configuration. This is important in your use case, as it tells PVE that it doesn't need to worry about figuring out how to snapshot your dataset when you snapshot the container.

In the general case, this is potentially dangerous, and that's why Proxmox doesn't expose this feature in the UI. But in specific cases, it can be a useful performance optimization. It gives a container direct access to storage that is attached to the host without you having to go through something like NFS or Plan9. Of course, it also breaks clustering. You can no longer migrate the container to a different host without figuring out how to also move storage. So, keep that in mind. But that's an issue with all bind mounts, even if you create them from inside the UI.
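For reference, the fields in an lxc.mount.entry line follow fstab order: source, target, filesystem type, options. The target is relative to the container's root filesystem, so it has no leading slash. A sketch of how a container config with two bind mounts could look, reusing paths from this thread (substitute your own):

Code:
# /etc/pve/lxc/205.conf (excerpt, hypothetical)
lxc.mount.entry: /usbpool002/dataset062/store003 store003 none rbind,create=dir
lxc.mount.entry: /usbpool002/dataset021/store021 store021 none rbind,create=dir

Note that the options are comma-separated; create=dir asks LXC to create the target directory if it is missing.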
 
Why are you trying to change the user and group ids to 1005? That would only make sense if you had a user with that user id. It's merely an example that was given in that document; you have to adjust things for your own needs. If you are trying to use your disk in PBS, then the files are probably owned by "backup", and I would expect that to be user id 34. So, that's probably also what you have on your existing drive.

So, make sure you use mappings like this:

Code:
lxc.idmap: u 0 100000 34
lxc.idmap: u 34 34 1
lxc.idmap: u 35 100035 65501
lxc.idmap: g 0 100000 34
lxc.idmap: g 34 34 1
lxc.idmap: g 35 100035 65501

And of course, your /etc/subuid and /etc/subgid should now have lines like this:

Code:
root:34:1
root:100000:65536
I really appreciate your help.
I followed through on your advice. The container starts.
Does it matter if I use the mp0: or lxc.mount.entry mappings?
Somehow the container starts and I can access it through the shell. However, the GUI is not accessible on host:8007.
The share is accessible when using the shell...
I think it has something to do with proxmox-backup-proxy.service.
I tried it with a fresh CT also, to make sure no customizations get in the way. The GUI is accessible before making the changes and then unavailable after introducing the mappings.
 
mp0: and lxc.mount.entry do almost entirely the same thing, as far as the container is concerned. The main difference is whether PVE is aware of the bind mount or not. If PVE knows that you have bind mounts, it disables a couple of features that can be problematic. Most notably, it disables snapshots for this container. If that doesn't bother you, then keep mp0:. If it does bother you, then switch to lxc.mount.entry, but realize that you removed a safety feature. So, now you are responsible for operating with a non-standard configuration.

As for accessing the PBS UI, which IP address are you using? This should not be the IP address of the PVE host, but the address of the container. You should be able to see it with ip a. If that still doesn't work, you should verify that you actually managed to start the PBS user interface and that it is listening on port 8007. I usually just do a netstat -a |grep LISTEN | grep -v unix. But if that doesn't help, you need to drill down deeper.
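If you want to script that reachability check, here is a minimal sketch using only Python's standard library (the address in the comment is this thread's container IP, used purely as an example):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("172.16.10.233", 8007) from the PVE host should become True
# once proxmox-backup-proxy is up and listening inside the container.
```

This only proves something accepted the TCP handshake on 8007; it says nothing about whether the PBS proxy behind it is healthy, so the netstat and journal checks are still worth doing.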

How did you create your container? Did you create a Debian container and install PBS, or did you start with the PBS installation media?
 
mp0: and lxc.mount.entry do almost entirely the same thing, as far as the container is concerned. The main difference is whether PVE is aware of the bind mount or not. If PVE knows that you have bind mounts, it disables a couple of features that can be problematic. Most notably, it disables snapshots for this container. If that doesn't bother you, then keep mp0:. If it does bother you, then switch to lxc.mount.entry, but realize that you removed a safety feature. So, now you are responsible for operating with a non-standard configuration.

As for accessing the PBS UI, which IP address are you using? This should not be the IP address of the PVE host, but the address of the container. You should be able to see it with ip a. If that still doesn't work, you should verify that you actually managed to start the PBS user interface and that it is listening on port 8007. I usually just do a netstat -a |grep LISTEN | grep -v unix. But if that doesn't help, you need to drill down deeper.

How did you create your container? Did you create a Debian container and install PBS, or did you start with the PBS installation media?
Okay, I sorted some issues. I applied the id mappings before installing PBS.
I created a standard Debian 12 container, then installed PBS.
Is that recommended, or should I use the PBS installation media?
Should I install proxmox-backup or proxmox-backup-server? It looks like server leaves out some things.
Looks like I'm getting a lot better at setting things up faster, if that's any merit.
Thanks so much for your help again.
 
I think you will need to get help from some other experts. :-/

I tried the same thing initially and installed PBS into Debian. According to the documentation, that should just work. But in my case, it didn't. It probably is something really simple, but since I didn't have a running copy of PBS to compare with, I didn't even know where to start looking. And this was before I even started messing with bind mounts and user maps. It was just the very first attempt to simply install PBS.

In order to debug things, I set up a PBS VM in addition to the container, allowing me to compare notes. And on a whim, I then copied the root filesystem of that VM into yet another new container. Turns out, there are a handful of settings that need adjustment, if you do that. It won't quite work out of the box. But that's all standard-Debian stuff. So, it was easier for me to convert the VM to a container than to track down why my PBS package didn't work. But that's because I know about Debian, yet am still relatively new to Proxmox.

I am not suggesting for you to try to do things this way. There must be a much easier way. If PBS didn't start up, then you are probably just missing one or two packages or some basic configuration option somewhere. Maybe, open a new thread and ask "how to install PBS into Debian".

Alternatively, at least check that proxmox-backup-proxy is running. That's the program that gives you the web-based GUI. It's part of the proxmox-backup-server Debian package. It should have been started by systemd using the service file at /lib/systemd/system/proxmox-backup-proxy.service. Maybe, journalctl -xeu proxmox-backup-proxy.service has some helpful messages?
 
I think you will need to get help from some other experts. :-/

I tried the same thing initially and installed PBS into Debian. According to the documentation, that should just work. But in my case, it didn't. It probably is something really simple, but since I didn't have a running copy of PBS to compare with, I didn't even know where to start looking. And this was before I even started messing with bind mounts and user maps. It was just the very first attempt to simply install PBS.

In order to debug things, I set up a PBS VM in addition to the container, allowing me to compare notes. And on a whim, I then copied the root filesystem of that VM into yet another new container. Turns out, there are a handful of settings that need adjustment, if you do that. It won't quite work out of the box. But that's all standard-Debian stuff. So, it was easier for me to convert the VM to a container than to track down why my PBS package didn't work. But that's because I know about Debian, yet am still relatively new to Proxmox.

I am not suggesting for you to try to do things this way. There must be a much easier way. If PBS didn't start up, then you are probably just missing one or two packages or some basic configuration option somewhere. Maybe, open a new thread and ask "how to install PBS into Debian".

Alternatively, at least check that proxmox-backup-proxy is running. That's the program that gives you the web-based GUI. It's part of the proxmox-backup-server Debian package. It should have been started by systemd using the service file at /lib/systemd/system/proxmox-backup-proxy.service. Maybe, journalctl -xeu proxmox-backup-proxy.service has some helpful messages?
Thank you so so much!

I think I may have got it working.
Though I'm not sure how to add more than one lxc.mount.entry.
How I managed to get Proxmox Backup working: I started with a fresh Debian 12 container and ran apt update and apt dist-upgrade -y,
then applied the id mappings and restarted the container for good measure.
Then I added the repository key and the repo sources,
updated the package cache, and then installed proxmox-backup-server.
The GUI was accessible.
Rebooted, shutdown and started a few times to test.
It was working.
Then I added the lxc.mount.entry: /usbpool002/dataset062/store003 store003 none rbind,create=dir into the conf file.
There was some mess-up at first. It seems the datastore.cfg file didn't like the / in the path name.
When I had it as path /store003 it would throw errors,
so I removed the / and just put store003.
It's working and I am restoring a backup right now.
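For reference, rather than editing datastore.cfg by hand, a datastore can also be registered from the container's CLI, which writes the config entry for you. A hedged sketch using this thread's names (the mountpoint path is an assumption; adjust to your setup):

Code:
# run inside the PBS container; 'store003' and /store003 are this thread's names
proxmox-backup-manager datastore create store003 /store003

The name (first argument) is a plain identifier, while the second argument is the absolute path to the mountpoint.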
The error comes when I try to add another datastore from another pool.
When I use lxc.mount.entry: for the second location it seems to throw up errors when trying to start the container.
Probably something to do with naming or permission issues.
 
If it's of any help, here is the complete list of packages that are installed in my working instance of PBS:

Code:
adduser
apt
apt-listchanges
apt-utils
base-files
base-passwd
bash
bash-completion
bc
bind9-dnsutils
bind9-host
bind9-libs:amd64
bridge-utils
bsd-mailx
bsdextrautils
bsdutils
btrfs-progs
busybox
bzip2
ca-certificates
cifs-utils
console-setup
console-setup-linux
coreutils
cpio
cron
cron-daemon-common
curl
dash
dbus
dbus-bin
dbus-daemon
dbus-session-bus-common
dbus-system-bus-common
debconf
debconf-i18n
debian-archive-keyring
debian-faq
debianutils
diffutils
dirmngr
distro-info-data
dmeventd
dmidecode
dmsetup
doc-debian
dosfstools
dpkg
e2fsprogs
efibootmgr
eject
etckeeper
ethtool
fdisk
fdutils
file
findutils
fonts-font-awesome
fonts-mathjax
gcc-12-base:amd64
gdisk
gettext-base
git
git-man
gnupg
gnupg-l10n
gnupg-utils
gpg
gpg-agent
gpg-wks-client
gpg-wks-server
gpgconf
gpgsm
gpgv
grep
groff-base
gzip
hostname
ifupdown2
inetutils-telnet
init
init-system-helpers
initramfs-tools-core
iproute2
iputils-ping
isc-dhcp-client
isc-dhcp-common
kbd
keyboard-configuration
keyutils
klibc-utils
kmod
krb5-locales
less
libacl1:amd64
libaio1:amd64
libapparmor1:amd64
libapt-pkg6.0:amd64
libarchive13:amd64
libargon2-1:amd64
libassuan0:amd64
libattr1:amd64
libaudit-common
libaudit1:amd64
libavahi-client3:amd64
libavahi-common-data:amd64
libavahi-common3:amd64
libblas3:amd64
libblkid1:amd64
libbpf1:amd64
libbrotli1:amd64
libbsd0:amd64
libbz2-1.0:amd64
libc-bin
libc-l10n
libc6:amd64
libcap-ng0:amd64
libcap2:amd64
libcap2-bin
libcbor0.8:amd64
libcom-err2:amd64
libcrypt1:amd64
libcryptsetup12:amd64
libcurl3-gnutls:amd64
libcurl4:amd64
libdb5.3:amd64
libdbus-1-3:amd64
libdebconfclient0:amd64
libdevmapper-event1.02.1:amd64
libdevmapper1.02.1:amd64
libedit2:amd64
libefiboot1:amd64
libefivar1:amd64
libelf1:amd64
liberror-perl
libevent-core-2.1-7:amd64
libexpat1:amd64
libext2fs2:amd64
libfdisk1:amd64
libffi8:amd64
libfido2-1:amd64
libfile-find-rule-perl
libfreetype6:amd64
libfstrm0:amd64
libfuse2:amd64
libfuse3-3:amd64
libgcc-s1:amd64
libgcrypt20:amd64
libgdbm-compat4:amd64
libgdbm6:amd64
libgmp10:amd64
libgnutls30:amd64
libgpg-error0:amd64
libgssapi-krb5-2:amd64
libhogweed6:amd64
libicu72:amd64
libidn2-0:amd64
libinih1:amd64
libip4tc2:amd64
libisns0:amd64
libjansson4:amd64
libjemalloc2:amd64
libjs-extjs
libjs-mathjax
libjs-qrcodejs
libjson-c5:amd64
libk5crypto3:amd64
libkeyutils1:amd64
libklibc:amd64
libkmod2:amd64
libkrb5-3:amd64
libkrb5support0:amd64
libksba8:amd64
libldap-2.5-0:amd64
libldb2:amd64
liblinear4:amd64
liblmdb0:amd64
liblocale-gettext-perl
liblockfile-bin
liblockfile1:amd64
liblua5.3-0:amd64
liblvm2cmd2.03:amd64
liblz4-1:amd64
liblzma5:amd64
liblzo2-2:amd64
libmagic-mgc
libmagic1:amd64
libmaxminddb0:amd64
libmd0:amd64
libmnl0:amd64
libmount1:amd64
libncursesw6:amd64
libnettle8:amd64
libnewt0.52:amd64
libnfsidmap1:amd64
libnftables1:amd64
libnftnl11:amd64
libnghttp2-14:amd64
libnpth0:amd64
libnsl2:amd64
libnss-systemd:amd64
libnumber-compare-perl
libnvpair3linux
libopeniscsiusr
libp11-kit0:amd64
libpam-modules:amd64
libpam-modules-bin
libpam-runtime
libpam-systemd:amd64
libpam0g:amd64
libpcap0.8:amd64
libpci3:amd64
libpcre2-8-0:amd64
libpcre3:amd64
libperl5.36:amd64
libpipeline1:amd64
libpng16-16:amd64
libpopt0:amd64
libproc2-0:amd64
libprotobuf-c1:amd64
libproxmox-acme-plugins
libpsl5:amd64
libpython3-stdlib:amd64
libpython3.11-minimal:amd64
libpython3.11-stdlib:amd64
libqrencode4:amd64
libreadline8:amd64
librtmp1:amd64
libsasl2-2:amd64
libsasl2-modules-db:amd64
libseccomp2:amd64
libselinux1:amd64
libsemanage-common
libsemanage2:amd64
libsepol2:amd64
libsgutils2-1.46-2:amd64
libslang2:amd64
libsmartcols1:amd64
libsmbclient:amd64
libsqlite3-0:amd64
libss2:amd64
libssh2-1:amd64
libssl3:amd64
libstdc++6:amd64
libsystemd-shared:amd64
libsystemd0:amd64
libtalloc2:amd64
libtasn1-6:amd64
libtdb1:amd64
libtevent0:amd64
libtext-charwidth-perl:amd64
libtext-glob-perl
libtext-iconv-perl:amd64
libtext-wrapi18n-perl
libtinfo6:amd64
libtirpc-common
libtirpc3:amd64
libuchardet0:amd64
libudev1:amd64
libunistring2:amd64
libunwind8:amd64
liburcu8:amd64
libusb-1.0-0:amd64
libuuid1:amd64
libuutil3linux
libuv1:amd64
libwbclient0:amd64
libwrap0:amd64
libxml2:amd64
libxtables12:amd64
libxxhash0:amd64
libzfs4linux
libzpool5linux
libzstd1:amd64
linux-base
locales
login
logrotate
logsave
lsof
lua-lpeg:amd64
lvm2
lynx
lynx-common
mailcap
man-db
manpages
mawk
media-types
mime-support
mount
ncurses-base
ncurses-bin
ncurses-term
net-tools
netbase
netcat-traditional
nfs-common
nftables
nmap
nmap-common
open-iscsi
openssh-client
openssh-server
openssh-sftp-server
openssl
passwd
patch
pbs-i18n
pci.ids
pciutils
perl
perl-base
perl-modules-5.36
pinentry-curses
postfix
procmail
procps
proxmox-archive-keyring
proxmox-backup-client
proxmox-backup-docs
proxmox-backup-server
proxmox-kernel-helper
proxmox-mail-forward
proxmox-mini-journalreader
proxmox-widget-toolkit
psmisc
pve-firmware
pve-xtermjs
python-apt-common
python3
python3-apt
python3-certifi
python3-chardet
python3-charset-normalizer
python3-debconf
python3-debian
python3-debianbts
python3-distutils
python3-httplib2
python3-idna
python3-lib2to3
python3-minimal
python3-pkg-resources
python3-pycurl
python3-pyparsing
python3-pysimplesoap
python3-reportbug
python3-requests
python3-setuptools
python3-six
python3-systemd
python3-urllib3
python3.11
python3.11-minimal
qrencode
readline-common
reportbug
rpcbind
runit-helper
samba-common
samba-libs:amd64
sed
sensible-utils
sg3-utils
smartmontools
smbclient
spl
ssh
ssl-cert
strace
sudo
systemd
systemd-sysv
sysvinit-utils
tar
tasksel
tasksel-data
tcpdump
time
traceroute
tzdata
ucf
udev
usbutils
usrmerge
util-linux
util-linux-extra
vim-common
vim-tiny
wamerican
wget
whiptail
xfsprogs
xkb-data
xz-utils
zfs-zed
zfsutils-linux
zlib1g:amd64

Some of it can probably be pruned. But this should give you a starting point of where to look. In particular, if I grep for anything related to proxmox, I see this list of essential packages:

Code:
ifupdown2
libjs-extjs
libjs-qrcodejs
libnvpair3linux
libproxmox-acme-plugins
libuutil3linux
libzfs4linux
libzpool5linux
pbs-i18n
proxmox-archive-keyring
proxmox-backup-client
proxmox-backup-docs
proxmox-backup-server
proxmox-kernel-helper
proxmox-mail-forward
proxmox-mini-journalreader
proxmox-widget-toolkit
pve-firmware
pve-xtermjs
smartmontools
spl
zfs-zed
zfsutils-linux

If you don't have at least all of those, then you know that you are obviously still missing something.
 
Last edited:
It should be possible to have more than one bind mount. If you tell me what the errors are, I might make an educated guess.
 
It should be possible to have more than one bind mount. If you tell me what the errors are, I might make an educated guess.
So I was able to attach to a test datastore I created so I don't lose anything too vital. The restore was successful when I connected another PVE to the PBS Container.
This is the error I'm getting when I add the second mount entry.

This is the result of pct conf
Code:
pct conf 209
arch: amd64
cores: 4
features: nesting=1
hostname: csd-tor-pbs009
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=172.16.10.1,hwaddr=9A:8F:16:A3:08:3C,ip=172.16.10.239/24,type=veth
ostype: debian
parent: snapb4
rootfs: storage:subvol-209-disk-0,size=100G
swap: 4096
unprivileged: 1
lxc.mount.entry: /usbpool002/dataset062/store003 store003 none rbind.create=dir
lxc.mount.entry: /usbpool002/dataset021/store021 store021 none rbind.create=dir
lxc.idmap: u 0 100000 34
lxc.idmap: g 0 100000 34
lxc.idmap: u 34 34 1
lxc.idmap: g 34 34 1
lxc.idmap: u 35 100035 65501
lxc.idmap: g 35 100035 65501

Code:
pct start 209
mount_entry: 2439 No such file or directory - Failed to mount "/usbpool002/dataset021/store021" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/store021"
lxc_setup: 4412 Failed to setup mount entries
do_start: 1272 Failed to setup container "209"
sync_wait: 34 An error occurred in another process (expected sequence number 3)
__lxc_start: 2107 Failed to spawn container "209"
startup for container '209' failed

I'm going to try making a mount point; that could be the issue: no destination directory.
I will try creating a store021 directory in / and try again.
 
It says No such file or directory - Failed to mount "/usbpool002/dataset021/store021". So the first thing I'd do is check that this path is actually correct and exists. It's easy to make typos.

Next, check permissions for each directory in that path.

Then also check that the mountpoint /store021 exists in the container. If it doesn't, create it with mkdir. Off the top of my head, I don't recall whether lxc.mount.entry does the right thing here, if it doesn't exist (even though we did set the option to create directories).
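The path check above can be wrapped in a tiny script. A sketch, where check_src is a hypothetical helper (the real paths in this thread would be /usbpool002/dataset021/store021 on the host and /store021 in the container); the demo runs against a throwaway directory so the sketch is self-contained:

```shell
# Sketch: pre-flight check for a bind-mount source, run on the PVE host.
check_src() {
    if [ -d "$1" ]; then
        echo "ok: $1"
    else
        echo "missing: $1 (typo, or pool not imported?)"
    fi
}

# Self-contained demo against a temporary directory:
demo=$(mktemp -d)
check_src "$demo"                  # first call reports ok
check_src "$demo/does-not-exist"   # second call reports missing
rmdir "$demo"
```

The same idea applies inside the container: run mkdir -p /store021 (or an equivalent pct exec) before starting, so the mount target is guaranteed to exist.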
 
It says No such file or directory - Failed to mount "/usbpool002/dataset021/store021". So the first thing I'd do is check that this path is actually correct and exists. It's easy to make typos.

Next, check permissions for each directory in that path.

Then also check that the mountpoint /store021 exists in the container. If it doesn't, create it with mkdir. Off the top of my head, I don't recall whether lxc.mount.entry does the right thing here, if it doesn't exist (even though we did set the option to create directories).
You are brilliant!!! Got it from not working to working.
I can't thank you enough.
I did have to create the directories first to get rid of that error.
Thank you for taking your time to work with me on this.
I got the datastores connected to the container, and I'm able to connect to it from another PBS and start the sync with another PBS I set up with faster drives.
 
