Can't rename a bind mount?

Davidoff

Well-Known Member
Hi there. I've run into a bit of a snag with the bind mount for a container. Initially, when setting up the container, I had added the following line to /etc/pve/lxc/118.conf:

Code:
mp3: /data/docs/agreements-archive,mp=/archive

I noticed an error in a program running in the container, and then saw that the mountpoint showed up as
Code:
'/archive '
(i.e. with the single quotes and a space between the e and the closing single quote).

Going back into the config file, I realized that I had inadvertently included a space at the very end, so I deleted the space and restarted the container. However, within the container it still shows up as
Code:
'/archive '
. I've rechecked the config file and the space is definitely gone, so I'm not sure why the space continues to appear when the container is started.
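
(For reference, a quick way to double-check both what PVE parsed and what the container actually mounted - just a sketch using the CT ID 118 and the /archive path from above:)
Code:
# show the mp3 line exactly as PVE stores it (a trailing space would show up here)
pct config 118 | grep mp3
# check the mount as seen from inside the running container
pct exec 118 -- grep archive /proc/mounts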

Just to test, after shutting down the container, I tried setting the mountpoint from the command line with the following:

Code:
pct set 118 -mp3 /data/docs/agreements-archive,mp=/archive

But I got the following errors:

Code:
file /etc/pve/storage.cfg line 12 (section 'rpool2') - unable to parse value of 'mkdir': unexpected property 'mkdir'
file /etc/pve/storage.cfg line 13 (section 'rpool2') - unable to parse value of 'is_mountpoint': unexpected property 'is_mountpoint'

So I deleted those lines from /etc/pve/storage.cfg and tried again. It then ran without an error. However, to my dismay, when I went back into the container, it still showed up as
Code:
'/archive '
.

Would anyone know how this could be corrected or what I'm doing wrong?
 
Nevermind - looks like it took after another try.

As an aside though, does anyone know why I received those messages on mkdir and is_mountpoint in my storage.cfg? Have those been deprecated?
 
Nevermind - looks like it took after another try.
great that you worked it out

As an aside though, does anyone know why I received those messages on mkdir and is_mountpoint in my storage.cfg? Have those been deprecated?
no but they only work on 'directory' type storages. from the name 'rpool2' i guess this is a zfspool type storage ?
 
no but they only work on 'directory' type storages. from the name 'rpool2' i guess this is a zfspool type storage ?
Yes, that's right. It's a ZFS pool comprised of two mirrored SSDs that I had set up through the PVE GUI. And thank you, I wasn't aware that these only work for directory storages.

Are there equivalent parameters that would work for ZFS pools? Or alternatively would these parameters work if I created a subdirectory on rpool2 and migrated the containers there? Just asking as I'm experiencing a problem related to this pool that I was hoping these parameters would remedy.
 
Just asking as I'm experiencing a problem related to this pool that I was hoping these parameters would remedy.
what are you trying to do?

mkdir - decides if pve tries to create the necessary directory structure (images,templates,etc)
is_mountpoint - the storage is only used when there is actually something mounted at that exact path, so we do not accidentally write to the underlying directory and have it get overmounted afterwards
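
for illustration, on a directory storage they just go into the storage section in storage.cfg, e.g. (sketch only - the storage name and path here are made up):
Code:
dir: mydirstorage
        path /mnt/pve/mydisk
        content rootdir,images
        mkdir 0
        is_mountpoint 1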
 
what are you trying to do?
I'll try to be succinct with this initial post but I have quite a bit more information compiled to date, so if more is needed just let me know.

In brief, I set up this new ZFS pool (rpool2) as I was running out of storage on the initial pool (rpool) on which PVE was installed. I moved all my containers over to rpool2. Everything seemed to work fine, but on reboot none of the containers would start.

I dug into the logs and various other things and discovered that the subvol directories for the containers on rpool2 were not mounting because the mountpoint directories were not empty. rpool2 itself was also not mounting. Apparently, something in PVE had created empty directories in the subvol directories (typically for /dev and each of the bind mount points specified for the container) before ZFS had a chance to mount them. PVE also created additional disks for some containers for some reason (for example, for CT 100 there is subvol-100-disk-0, subvol-100-disk-1 and subvol-100-disk-2, even though as far as I can tell it only uses disk-2).

I was able to get the containers to start by unmounting them all, deleting all of the subvol directories, and then mounting them again. I didn't bother trying to do anything with rpool2 itself or the extra subvol directories.
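
Roughly, the cleanup looked like this (reconstructing it as a sketch - the dataset name is just an example from my pool, and I checked that each directory only contained empty stub directories before removing anything):
Code:
# per affected dataset (example: CT 100's disk-2)
zfs unmount rpool2/subvol-100-disk-2   # may complain if it never got mounted
rm -rf /rpool2/subvol-100-disk-2       # remove the stray, empty mountpoint directory
zfs mount -a                           # remount everything that is not currently mounted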

I found a couple of posts in the forum that seemed to describe issues similar to the one I encountered (such as this one), many of which mentioned the mkdir and is_mountpoint parameters as solutions for the problem, so I gave it a try.

rpool2 had previously worked just fine. However, I recently had to reinstall PVE from scratch (which I did with the most current version, 5.4), which is when I started encountering this issue.

Any suggestions or thoughts you have on how I could remedy this issue would be most appreciated.
 
can you post your /etc/pve/storage.cfg, the output of 'zfs list', 'zpool status' and 'mount', and an example container config?
 
Happy to oblige Dominik. I have a *ton* of other stuff and am happy to provide anything else that may be needed.

/etc/pve/storage.cfg:

Code:
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

zfspool: rpool2
        pool rpool2
        content rootdir,images
        nodes fava2

dir: backups
        path /vdata/system/backups
        content backup
        maxfiles 0
        shared 0

dir: templates
        path /vdata/system/templates
        content iso,vztmpl
        shared 0

dir: rpool2pve
        path /rpool2/pve
        content rootdir,images
        shared 0

zfs list:

Code:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     5.92G   424G   104K  /rpool
rpool/ROOT                5.92G   424G    96K  /rpool/ROOT
rpool/ROOT/pve-1          5.92G   424G  5.92G  /
rpool/data                  96K   424G    96K  /rpool/data
rpool2                     193G  1.57T   208K  /rpool2
rpool2/subvol-100-disk-0    96K   400G    96K  /rpool2/subvol-100-disk-0
rpool2/subvol-100-disk-1    96K   400G    96K  /rpool2/subvol-100-disk-1
rpool2/subvol-100-disk-2  30.1G   170G  30.1G  /rpool2/subvol-100-disk-2
rpool2/subvol-103-disk-0  1.16G   127G  1.16G  /rpool2/subvol-103-disk-0
rpool2/subvol-104-disk-0  7.43G  24.6G  7.43G  /rpool2/subvol-104-disk-0
rpool2/subvol-105-disk-0  4.42G  27.6G  4.42G  /rpool2/subvol-105-disk-0
rpool2/subvol-106-disk-0   547M   249G   547M  /rpool2/subvol-106-disk-0
rpool2/subvol-107-disk-0  1.06G  14.9G  1.06G  /rpool2/subvol-107-disk-0
rpool2/subvol-108-disk-0   945M  15.1G   945M  /rpool2/subvol-108-disk-0
rpool2/subvol-109-disk-0   903M  15.1G   903M  /rpool2/subvol-109-disk-0
rpool2/subvol-110-disk-0  3.43G  12.6G  3.43G  /rpool2/subvol-110-disk-0
rpool2/subvol-111-disk-0  1.53G  14.5G  1.53G  /rpool2/subvol-111-disk-0
rpool2/subvol-112-disk-0   874M  15.1G   874M  /rpool2/subvol-112-disk-0
rpool2/subvol-113-disk-0   921M  7.10G   921M  /rpool2/subvol-113-disk-0
rpool2/subvol-114-disk-0   873M  7.15G   873M  /rpool2/subvol-114-disk-0
rpool2/subvol-115-disk-0   132G   218G   132G  /rpool2/subvol-115-disk-0
rpool2/subvol-116-disk-0  4.14G  3.86G  4.14G  /rpool2/subvol-116-disk-0
rpool2/subvol-117-disk-0  1.42G  2.58G  1.42G  /rpool2/subvol-117-disk-0
rpool2/subvol-118-disk-0    96K  4.00G    96K  /rpool2/subvol-118-disk-0
rpool2/subvol-118-disk-1  1.69G  6.31G  1.69G  /rpool2/subvol-118-disk-1
zdata                      969G  6.08T   969G  /zdata

mount:

Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=65974940k,nr_inodes=16493735,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=13199152k,mode=755)
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=40,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=22467)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
mergerfsPool on /vdata type fuse.mergerfs (rw,relatime,user_id=0,group_id=0,allow_other)
/dev/sdf1 on /mnt/data/data1 type ext4 (rw,relatime,data=ordered)
/dev/sdj1 on /mnt/data/data5 type ext4 (rw,relatime,data=ordered)
/dev/sdv1 on /mnt/parity/parity2 type ext4 (rw,relatime,data=ordered)
/dev/sdo1 on /mnt/data/data7 type ext4 (rw,relatime,data=ordered)
/dev/sdu1 on /mnt/parity/parity1 type ext4 (rw,relatime,data=ordered)
/dev/sdk1 on /mnt/data/data6 type ext4 (rw,relatime,data=ordered)
/dev/sdg1 on /mnt/data/data3 type ext4 (rw,relatime,data=ordered)
/dev/sde1 on /mnt/data/data2 type ext4 (rw,relatime,data=ordered)
/dev/sdp1 on /mnt/data/data8 type ext4 (rw,relatime,data=ordered)
/dev/sds1 on /mnt/data/data14 type ext4 (rw,relatime,data=ordered)
/dev/sdi1 on /mnt/data/data12 type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /mnt/data/data4 type ext4 (rw,relatime,data=ordered)
/dev/sdd1 on /mnt/data/data11 type ext4 (rw,relatime,data=ordered)
/dev/sdh1 on /mnt/data/data16 type ext4 (rw,relatime,data=ordered)
/dev/sdn1 on /mnt/data/data9 type ext4 (rw,relatime,data=ordered)
/dev/sdt1 on /mnt/data/data10 type ext4 (rw,relatime,data=ordered)
/dev/sdl1 on /mnt/parity/parity3 type ext4 (rw,relatime,data=ordered)
/dev/sdr1 on /mnt/parity/parity4 type ext4 (rw,relatime,data=ordered)
/dev/sdm1 on /mnt/data/data13 type ext4 (rw,relatime,data=ordered)
/dev/sdc1 on /mnt/data/data15 type ext4 (rw,relatime,data=ordered)
rpool on /rpool type zfs (rw,noatime,xattr,noacl)
rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
zdata on /zdata type zfs (rw,xattr,posixacl)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
rpool2/subvol-100-disk-0 on /rpool2/subvol-100-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-100-disk-1 on /rpool2/subvol-100-disk-1 type zfs (rw,xattr,posixacl)
rpool2/subvol-100-disk-2 on /rpool2/subvol-100-disk-2 type zfs (rw,xattr,posixacl)
rpool2/subvol-103-disk-0 on /rpool2/subvol-103-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-106-disk-0 on /rpool2/subvol-106-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-109-disk-0 on /rpool2/subvol-109-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-117-disk-0 on /rpool2/subvol-117-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-118-disk-0 on /rpool2/subvol-118-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-104-disk-0 on /rpool2/subvol-104-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-105-disk-0 on /rpool2/subvol-105-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-107-disk-0 on /rpool2/subvol-107-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-108-disk-0 on /rpool2/subvol-108-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-110-disk-0 on /rpool2/subvol-110-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-111-disk-0 on /rpool2/subvol-111-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-112-disk-0 on /rpool2/subvol-112-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-113-disk-0 on /rpool2/subvol-113-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-114-disk-0 on /rpool2/subvol-114-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-115-disk-0 on /rpool2/subvol-115-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-116-disk-0 on /rpool2/subvol-116-disk-0 type zfs (rw,xattr,posixacl)
rpool2/subvol-118-disk-1 on /rpool2/subvol-118-disk-1 type zfs (rw,xattr,posixacl)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)

Example container config (for CT 116):

Code:
arch: amd64
cores: 4
hostname: samba
memory: 8192
mp0: /vdata,mp=/vdata
net0: name=eth0,bridge=vmbr0,hwaddr=B6:BF:13:7E:7C:AE,ip=dhcp,ip6=dhcp,tag=20,type=veth
onboot: 1
ostype: ubuntu
rootfs: rpool2:subvol-116-disk-0,size=8G
searchdomain: ma-family.ca
swap: 8192
unprivileged: 1
 
Sorry - I should have mentioned that the "rpool2pve" entry in storage.cfg was only created after I read your post about mkdir and is_mountpoint working only for directory storages. I haven't moved anything just yet - just added the directory and the storage in the GUI. I should also note that the above details describe the state of things after I deleted all the subvol directories, remounted and started the containers back up - they don't reflect the state of things just after boot, when the containers did not start.
 
ok i have a guess as to what happened

the container might get autostarted before the pool is imported/mounted so the bind mount dir gets created and now zfs cannot mount the subvolume

also it looks wrong that your subvols are mounted but rpool2 is not mounted on /rpool2
to solve this you would have to export the pool and remove all files in /rpool2 so that the next time it can be mounted on /rpool2

also it would make sense to reorder the systemd services so that pve-guests or the pve-container@.service template gets ordered after the 'zfs-mount.service'

this can be done by 'systemctl edit pve-container@.service'
see man systemd.unit for how exactly to do that
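
a minimal override (the drop-in that 'systemctl edit' creates) could look like this - sketch:
Code:
# /etc/systemd/system/pve-container@.service.d/override.conf
[Unit]
After=zfs-mount.service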
 
Thanks very, very much Dominik. I will read up on systemd and figure out how to reorder things.

Can I perhaps just trouble you for the time being with just one additional question? I'm a bit of a ZFS neophyte and previously ended up completely messing up the prior iteration of my system as a result of a misconfiguration, so I just want to make sure I don't mess up the part about exporting the pool. Do I need to copy all the subvol directories somewhere else before exporting and deleting all the files in /rpool2? Or is that unnecessary - i.e. can I just run zpool export rpool2, rm -rf /rpool2, then zpool import rpool2?

Also, just to confirm, the use of a subdirectory on rpool2 and the mkdir and is_mountpoint parameters wouldn't be of use here, correct?
 
Do I need to copy all the subvol directories somewhere else before exporting and deleting all the files in /rpool2? Or is that unnecessary - i.e. can I just run zpool export rpool2, rm -rf /rpool2, then zpool import rpool2?
check before that they are really empty and do not contain any important data,
a colleague tested this and zfs cleaned up the empty directories by itself so do not be surprised if the dirs do not exist anymore after the export
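
something along these lines should do it (sketch - double-check the output of the find before running the rm):
Code:
zpool export rpool2
# should print nothing if only empty directories are left behind
find /rpool2 -mindepth 1 ! -type d
rm -rf /rpool2
zpool import rpool2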

Also, just to confirm, the use of a subdirectory on rpool2 and the mkdir and is_mountpoint parameters wouldn't be of use here, correct?
as far as i understand you, you do not actually need the directory storage, the zpool storage is enough?
 
Thank you Dominik. I will check to confirm they are empty.

as far as i understand you, you do not actually need the directory storage, the zpool storage is enough?

Sorry, just to clarify, currently I use the ZFS pool mounted as /rpool2 to store root directories for containers. It was set up in PVE as ZFS storage. You had indicated that mkdir and is_mountpoint can only be used for directory storage types. So what I was thinking as a possibility was to create a subdirectory on /rpool2, say, /rpool2/pve and then add /rpool2/pve as a directory type storage within PVE. The only reason I would do that would be so that I could use the mkdir and is_mountpoint parameters, assuming they would address the problem.

The reason I ask is because if that would work to address the problem, it would be a fair bit easier (and likely less risky) for me to do than reordering systemd services, as I'm not at all familiar with the latter.
 
also it looks wrong that your subvols are mounted but rpool2 is not mounted on /rpool2
to solve this you would have to export the pool and remove all files in /rpool2 so that the next time it can be mounted on /rpool2

It turns out I'm having just a bit of trouble doing this. When I run zpool export rpool2, the pool goes offline very briefly, but then, within a few seconds, it seems to be automatically imported again and shows as online, without me entering any zpool import command. This makes it a bit tricky to delete things in /rpool2 because by the time I get there everything has already been re-imported and re-mounted.

Have I missed a step or something else? I've kept /rpool2 as a storage location in PVE. Should that entry be removed before I try the above?
 
Here is the output of zfs get all | grep "mounted" after I execute zpool export rpool2:

Code:
rpool                     mounted               yes                        -
rpool/ROOT                mounted               yes                        -
rpool/ROOT/pve-1          mounted               yes                        -
rpool/data                mounted               yes                        -
rpool2                    mounted               no                         -
rpool2/subvol-100-disk-0  mounted               yes                        -
rpool2/subvol-100-disk-1  mounted               yes                        -
rpool2/subvol-100-disk-2  mounted               yes                        -
rpool2/subvol-103-disk-0  mounted               yes                        -
rpool2/subvol-104-disk-0  mounted               yes                        -
rpool2/subvol-105-disk-0  mounted               yes                        -
rpool2/subvol-106-disk-0  mounted               yes                        -
rpool2/subvol-107-disk-0  mounted               yes                        -
rpool2/subvol-108-disk-0  mounted               yes                        -
rpool2/subvol-109-disk-0  mounted               yes                        -
rpool2/subvol-110-disk-0  mounted               yes                        -
rpool2/subvol-111-disk-0  mounted               yes                        -
rpool2/subvol-112-disk-0  mounted               yes                        -
rpool2/subvol-113-disk-0  mounted               yes                        -
rpool2/subvol-114-disk-0  mounted               yes                        -
rpool2/subvol-115-disk-0  mounted               yes                        -
rpool2/subvol-116-disk-0  mounted               yes                        -
rpool2/subvol-117-disk-0  mounted               yes                        -
rpool2/subvol-118-disk-0  mounted               yes                        -
rpool2/subvol-118-disk-1  mounted               yes                        -
zdata                     mounted               yes                        -

In trying it again, I noticed that all the subvol directories disappear shortly after the export, then reappear in a second or two. I'm not sure why rpool2 itself is not mounted.
 
Things have become a bit more interesting. After the export, I tried starting the containers back up but none of them will start. However, this time it seems it's due to a different error - all the subvol directories are mounted and intact. When I run journalctl -xe I see entries like this for each of them:

Code:
-- Unit pve-container@100.service has begun starting up.
Apr 26 17:25:02 fava2 audit[7646]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-100_</var/lib/
Apr 26 17:25:02 fava2 kernel: kauditd_printk_skb: 4 callbacks suppressed
Apr 26 17:25:02 fava2 kernel: audit: type=1400 audit(1556313902.567:239): apparmor="STATUS" operation="profile_load" profile="/usr/bin/
Apr 26 17:25:02 fava2 systemd-udevd[7648]: Could not generate persistent MAC address for vethI5KUNQ: No such file or directory
Apr 26 17:25:02 fava2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth100i0: link is not ready
Apr 26 17:25:03 fava2 ovs-vsctl[7731]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port veth100i0
Apr 26 17:25:03 fava2 ovs-vsctl[7731]: ovs|00002|db_ctl_base|ERR|no port named veth100i0
Apr 26 17:25:03 fava2 ovs-vsctl[7732]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln100i0
Apr 26 17:25:03 fava2 ovs-vsctl[7732]: ovs|00002|db_ctl_base|ERR|no port named fwln100i0
Apr 26 17:25:03 fava2 systemd-udevd[7736]: Could not generate persistent MAC address for fwbr100i0: No such file or directory
Apr 26 17:25:03 fava2 kernel: fwbr100i0: port 1(veth100i0) entered blocking state
Apr 26 17:25:03 fava2 kernel: fwbr100i0: port 1(veth100i0) entered disabled state
Apr 26 17:25:03 fava2 kernel: device veth100i0 entered promiscuous mode
Apr 26 17:25:03 fava2 ovs-vsctl[7753]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl add-port vmbr0 fwln100o0 tag=20 -- set Interfa
Apr 26 17:25:03 fava2 kernel: netlink: 'ovs-vswitchd': attribute type 5 has an invalid length.
Apr 26 17:25:03 fava2 kernel: device fwln100o0 entered promiscuous mode
Apr 26 17:25:03 fava2 systemd-udevd[7758]: Could not generate persistent MAC address for fwln100o0: No such file or directory
Apr 26 17:25:03 fava2 kernel: fwbr100i0: port 2(fwln100o0) entered blocking state
Apr 26 17:25:03 fava2 kernel: fwbr100i0: port 2(fwln100o0) entered disabled state
Apr 26 17:25:03 fava2 kernel: fwbr100i0: port 2(fwln100o0) entered blocking state
Apr 26 17:25:03 fava2 kernel: fwbr100i0: port 2(fwln100o0) entered forwarding state

Please do let me know if you have any thoughts on the above.
 
Hmm. This seems to be getting worse. The GUI has become non-responsive when selecting any container. Just shows "loading" with a spinner for each of the containers.
 
Should that entry be removed before I try the above?
yes sorry i forgot
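e.g. via the GUI or on the cli, something like (sketch):
Code:
pvesm remove rpool2    # drop the storage definition first
zpool export rpool2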

Apr 26 17:25:03 fava2 kernel: netlink: 'ovs-vswitchd': attribute type 5 has an invalid length.
this looks like a different problem, i remember something like this in the past, maybe a forum search helps here

Hmm. This seems to be getting worse. The GUI has become non-responsive when selecting any container. Just shows "loading" with a spinner for each of the containers.
what is the current status now? anything in the logs ?
 
Thank you Dominik. I'll try again but this time will remove rpool2 storage first in the GUI.

I searched the forums for the message you cited above (regarding ovs-vswitchd). The only thing I found was this post, which suggests the message is harmless. In my case it doesn't seem harmless, as it results in my containers not starting up. Either that, or the startup failure is being caused by something else. Will dig around a bit more (while hoping it doesn't recur).

what is the current status now? anything in the logs ?

Thanks for asking. The node got stuck in its shutdown process. I waited about an hour with no change, so I resorted to powering down and powering back up. The logs didn't show any problems with the network, but I ran into the same original problem with the ZFS pool - I could only spin the containers up after deleting all the /rpool2/subvol* directories.

If possible, it would be very much appreciated if you could advise on this:

Sorry, just to clarify, currently I use the ZFS pool mounted as /rpool2 to store root directories for containers. It was set up in PVE as ZFS storage. You had indicated that mkdir and is_mountpoint can only be used for directory storage types. So what I was thinking as a possibility was to create a subdirectory on /rpool2, say, /rpool2/pve and then add /rpool2/pve as a directory type storage within PVE. The only reason I would do that would be so that I could use the mkdir and is_mountpoint parameters, assuming they would address the problem.

I have started reviewing materials on reordering systemd services but it looks like it will be a bit of an undertaking (at least for me). If the mkdir and is_mountpoint parameters will enable me to avoid the issue altogether I'd very much prefer to tackle it that way, as it would be a lot simpler and less likely that I would mess up. No problem if it would not - I'll just continue my reading - but it would be good to know either way. Thanks again.
 
I have started reviewing materials on reordering systemd services but it looks like it will be a bit of an undertaking (at least for me). If the mkdir and is_mountpoint parameters will enable me to avoid the issue altogether I'd very much prefer to tackle it that way, as it would be a lot simpler and less likely that I would mess up. No problem if it would not - I'll just continue my reading - but it would be good to know either way. Thanks again.
no this will not help since you want to use zfs as the storage for your containers (and not as a directory storage, where we only support raw files for containers)
 
