subvol unmounted and removed, but it keeps coming back after container restart

N0_Klu3

Hi all,

I set up an LXC with a subvolume but have since changed to a bind mount.
While the bind mount works perfectly and I have it all working right, I removed the subvol config entries, but for some reason when I restart the LXC it keeps re-creating this empty folder.

Is there a setting within the LXC that keeps making it recreate this at boot?

Code:
root@pve-namek:/vault# ls
media  subvol-201-disk-0

201.conf:
Code:
mp0: /ssd,mp=/mnt/ssd,mountoptions=noatime
mp1: /vault,mp=/mnt/vault
rootfs: nvme:subvol-201-disk-0,mountoptions=noatime,replicate=0,size=64G

The nvme one is fine, as that's the boot disk and so on, which I want there.
It's just the folder being created on /vault that I no longer want.

Looking now, I can see it being created on /ssd as well...
 
Is anyone able to help me permanently remove these 2 directories that keep being recreated?
 
What is the output of zfs list and maybe mount | grep vault?
 
Code:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
nvme                      144G   755G   128K  /nvme
nvme/subvol-100-disk-0    414M  1.60G   414M  /nvme/subvol-100-disk-0
nvme/subvol-201-disk-0   3.72G  60.3G  3.72G  /nvme/subvol-201-disk-0
nvme/subvol-202-disk-0   5.11G  58.9G  5.11G  /nvme/subvol-202-disk-0
nvme/subvol-203-disk-0   11.1G  52.9G  11.1G  /nvme/subvol-203-disk-0
nvme/subvol-301-disk-0    960M  31.1G   960M  /nvme/subvol-301-disk-0
nvme/subvol-302-disk-0   4.78G  59.2G  4.78G  /nvme/subvol-302-disk-0
nvme/vm-2001-disk-0      3.11M   755G   116K  -
nvme/vm-2001-disk-1       118G   849G  18.2G  -
nvme/vm-2001-disk-2      6.06M   755G    64K  -
rpool                    12.4G   433G   104K  /rpool
rpool/ROOT               3.32G   433G    96K  /rpool/ROOT
rpool/ROOT/pve-1         3.32G   433G  3.32G  /
rpool/data                 96K   433G    96K  /rpool/data
rpool/var-lib-vz         9.05G   433G  9.05G  /var/lib/vz
ssd                       314G  1.37T   313G  /ssd
ssd/subvol-201-disk-0      96K  1024G    96K  /ssd/subvol-201-disk-0
vault                    21.9T  73.5T  21.9T  /vault
vault/subvol-201-disk-0   236K  73.5T   236K  /vault/subvol-201-disk-0

Code:
root@pve-namek:~# mount | grep vault
vault on /vault type zfs (rw,relatime,xattr,noacl,casesensitive)
vault/subvol-201-disk-0 on /vault/subvol-201-disk-0 type zfs (rw,relatime,xattr,posixacl,casesensitive)
 
Well, unless I am mistaken, you mount /vault of the host to /mnt/vault in the CT:
Code:
 mp1: /vault,mp=/mnt/vault
And if you check the output of zfs list:
Code:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
[…]
vault                    21.9T  73.5T  21.9T  /vault
vault/subvol-201-disk-0   236K  73.5T   236K  /vault/subvol-201-disk-0

I would recommend that you create a new dataset in the vault pool that you bind mount, for example:
Code:
zfs create vault/{new bindmount dataset}
It will be mounted at /vault/{new bindmount dataset}.
That way you have it cleanly separated from the rest of the pool.
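To then switch the container's bind mount over to the new dataset, something along these lines should do it (the dataset name "media" is just a placeholder, adjust to taste):
Code:
# create a dedicated dataset on the vault pool (name is illustrative)
zfs create vault/media
# point the existing bind mount at the new dataset instead of the pool root
pct set 201 -mp1 /vault/media,mp=/mnt/vault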
 
I did bind mounts, like a lot of my other containers.

[Screenshot: mount points of other containers, all configured as bind mounts]

So there should only be a subvol-201-disk-0 on the nvme pool.
I did originally set up the same mount point resources and did have a subvol-201-disk-0 on ssd and vault, but I switched them to bind mounts.

I removed the old mounts and I removed those directories on /ssd and /vault, but they keep getting recreated when the LXC restarts.

None of my other LXCs that use the same mount points (like the image above) get the subvol created, but then again I set those up manually after creation by adding the mounts in the conf...

So something seems to have been set when I created the LXC with the mount point resources, and that's what I want to remove.
 
Well, what is the config of CT 201 (pct config 201)? Does an MP on vault show up? If it doesn't, try running pct rescan --vmid 201.
If it shows up afterwards, you can then remove it.

If container 201 doesn't exist anymore, you should also be able to go to the storage view of vault in the GUI and there to the "CT Volumes" menu and select and remove this volume.
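If you prefer the CLI over the GUI, once the volume is no longer referenced by the CT config, something like this should also work (pvesm free destroys the volume, so double-check the volume IDs first):
Code:
# destroy the stray container volumes on the ssd and vault storages
pvesm free ssd:subvol-201-disk-0
pvesm free vault:subvol-201-disk-0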
 
Code:
arch: amd64
cores: 4
features: nesting=1
hostname: docker-arrs
memory: 8192
mp0: /ssd,mp=/mnt/ssd,mountoptions=noatime
mp1: /vault,mp=/mnt/vault
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:93:1A:8D,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: nvme:subvol-201-disk-0,mountoptions=noatime,replicate=0,size=64G
startup: up=90
swap: 512
tags: docker;portainer
unprivileged: 1

This is all that is in my conf.
Yes, mp0 & mp1 show up, but they are bind mounts.

This is a container that is not re-creating those empty folders:
Code:
arch: amd64
cores: 8
features: keyctl=1,nesting=1
hostname: jellyfin
memory: 8192
mp0: /ssd,mp=/mnt/ssd,mountoptions=noatime
mp1: /vault,mp=/mnt/vault
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:67:AD:22,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: nvme:subvol-202-disk-0,mountoptions=noatime,size=64G
startup: up=60
swap: 512
tags: community-script;jellyfin
unprivileged: 1

Both are mounted the same way; the only difference is that the second one was created without any mount point resources, and the config below shows the syntax difference that would have caused it.
I added them afterwards via the conf...
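For reference, a storage-backed mount point versus the bind mount I'm using now would look roughly like this in the conf (the first line is a sketch of what the creation wizard allocates, not copied from my actual config):
Code:
# storage-backed mount point: Proxmox allocates a subvol on the storage
mp1: vault:subvol-201-disk-0,mp=/mnt/vault,size=8G
# bind mount: an existing host path is passed straight through
mp1: /vault,mp=/mnt/vault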
 
Code:
root@pve-namek:~# pct rescan --vmid 201
rescan volumes...
CT 201: add unreferenced volume 'ssd:subvol-201-disk-0' as 'unused0' to config.
CT 201: add unreferenced volume 'vault:subvol-201-disk-0' as 'unused1' to config.
CT 201: updated volume size of '/ssd' in config.
CT 201: updated volume size of '/vault' in config.
root@pve-namek:~# nano /etc/pve/lxc/201.conf
Running this, then going to the container and clicking Remove on both of those unused volumes, resolved the issue...
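To double-check that the stray datasets are really gone afterwards, something like this can be run on the host (pool names as above):
Code:
root@pve-namek:~# zfs list -r ssd vault | grep subvol-201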
 