[SOLVED] New container gets appointed to all custom storages?

Skyrider

Member
May 11, 2020
I currently have 4 storages: local, local-zfs, nginx-storage and test2. Apparently every new container I create gets added to the content of all storages (with the exception of local-zfs):

[screenshot: 1589243990717.png]

[screenshot: 1589244003080.png]

Is there a reason why every container I create is listed under the content of all the custom storages I've made, even though I specifically selected which storage the container should use when creating it?

All these storages are ZFS, including the main partition. I've been trying to solve this for hours and just can't find the solution. I'm still new to Proxmox, but I'm learning :)

Thanks in advance!

Regards,
Skyrider
 
Hi,
could you post the output of cat /etc/pve/storage.cfg and pveversion -v?
 
@Fabian_E

Thanks for your reply! Here are the results:

dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

zfspool: test
    pool rpool
    content rootdir
    mountpoint /rpool
    sparse 1

zfspool: test2
    pool rpool
    content images,rootdir
    mountpoint /rpool
    nodes skyrider
    sparse 0

zfspool: test3
    pool rpool
    content rootdir,images
    mountpoint /rpool
    sparse 0

zfspool: test4
    pool rpool
    content images,rootdir
    mountpoint /rpool
    sparse 0

and:

proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-11 (running version: 6.1-11/f2f18736)
pve-kernel-helper: 6.1-9
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.2
libpve-access-control: 6.0-7
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-6
pve-cluster: 6.1-8
pve-container: 3.1-4
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.1-1
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-20
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

I have, since yesterday, erased the storages and created a few new ones:

[screenshot: 1589271447578.png]

And as you can see:

[screenshot: 1589271472404.png]

[screenshot: 1589271502792.png]

[screenshot: 1589271513407.png]

[screenshot: 1589271525180.png]

All of the test storages contain exactly the same container content (subvol 100 and 103). Now, this could be because I still lack understanding of how things work. But even if that's the case, why would it auto-assign the container content to each new storage I create by default?
 
The reason is that all your storages use the very same backing pool, so PVE cannot distinguish which volumes belong to which storage. Please create only one storage for each "pool"; otherwise you might run into problems like this one. The storage configuration in PVE should reflect what is actually present on the host.
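To illustrate: each zfspool storage entry simply enumerates the datasets under its configured pool, so the four test storages (all with pool rpool) report the same children. You can see this from the CLI on the host (the commands below are a sketch; the exact volume names depend on your setup):

```shell
# All the "test" storages point at the same dataset (rpool), so each of
# them scans the identical tree and finds the same container volumes:
zfs list -H -o name -r rpool | grep subvol

# pvesm shows the per-storage view; the same volumes appear under every
# storage backed by rpool:
pvesm list test2
pvesm list test3
```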

That said, the fact that it's called pool in PVE is actually a bit confusing. It should really be dataset, since you can use any ZFS dataset, not just pools.

For example, if you want one ZFS storage where PVE creates sparse volumes and one where PVE creates non-sparse volumes, I'd suggest creating two new datasets with zfs create rpool/pve-sparse and zfs create rpool/pve-non-sparse and then adding two storages with pool set to rpool/pve-sparse and rpool/pve-non-sparse respectively. Of course you have to set the Thin provision option accordingly.
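Concretely, that suggestion could look like the following. The storage names are just examples, and pvesm add is the CLI equivalent of adding a storage in the GUI (--sparse corresponds to the Thin provision checkbox):

```shell
# Create two separate datasets under the existing pool:
zfs create rpool/pve-sparse
zfs create rpool/pve-non-sparse

# Register each dataset as its own PVE storage (names are examples):
pvesm add zfspool thin-storage  --pool rpool/pve-sparse     --content images,rootdir --sparse 1
pvesm add zfspool thick-storage --pool rpool/pve-non-sparse --content images,rootdir --sparse 0
```

Since each storage now points at its own dataset, the volumes of one no longer show up under the other.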
 
That actually makes a lot of sense, thank you.

How come these datasets are not generated when creating a storage/container on request? I assume creating them in the terminal can't be the only way, when there's a nifty GUI that could do it for you.
 

I mean we could create the file system in the special case of adding a new ZFS storage below an existing one. But ZFS has a lot more features and is best managed via its CLI tools. And in general the configuration of storages involves partitioning disks, preparing mount points, etc.

When you create a container, PVE does create a new ZFS filesystem for the container, something like myzpool/subvol-100-disk-0, where myzpool is the existing storage. It doesn't show up as its own storage in PVE; it's treated as a volume on that storage.
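For example, creating a container with its root disk on one of the storages from this thread might look like this (the VMID and template filename are illustrative):

```shell
# Create a container whose root disk lives on the storage named "test2";
# PVE creates the backing dataset itself:
pct create 104 local:vztmpl/debian-10-standard_10.3-1_amd64.tar.gz \
    --storage test2 --hostname ct104

# The container's disk shows up as a child dataset of the backing pool,
# not as a new storage:
zfs list -o name,mountpoint rpool/subvol-104-disk-0
```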
 
