[SOLVED] New ZFS mount/pool with the same name cannot be added anymore to a new node after cluster join

I guess this is my mistake, but maybe there is a way to rectify it without removing the new 3rd node from the cluster and reinstalling it?

The goal is replication, with the same ZFS mount points on every node. All 3 nodes in this cluster have the same disk layout, but I forgot to create the ZFS-HDD mount on the new 3rd node before joining it to the cluster!
Now the ZFS-HDD storage shows up on the new node with a question mark, because the ZFS pool has not been created yet. And it can no longer be created from the GUI or the CLI.
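For anyone checking the same thing: the storage state and the cluster-wide storage definition can be inspected on the new node with the usual commands (nothing node-specific assumed here):
Bash:
pvesm status               # lists all configured storages and whether they are active on this node
cat /etc/pve/storage.cfg   # cluster-wide storage definitions, synced to every node by pmxcfs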

On Node3:
zpool create -f -o ashift=12 zfs-ssd raidz /dev/disk/by-id/wwn-0x500a07514f3263de /dev/disk/by-id/wwn-0x500a07514f3262fc /dev/disk/by-id/wwn-0x500a07514f32669f /dev/disk/by-id/wwn-0x500a07514f326636
mountpoint '/zfs-ssd' exists and is not empty

zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   370G  2.86G   367G        -         -     0%     0%  1.00x    ONLINE  -

Importantly, it's a production system with running VMs on the other 2 nodes, so I'd like to be really careful :)
Thanks in advance!
 
Try to see what's in there with either ls -lah /zfs-ssd or
Bash:
apt install gdu
gdu /zfs-ssd
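It might also be worth checking whether ZFS already knows about a pool with that name that simply isn't imported on this node (just a guess, since zpool list only shows rpool):
Bash:
zpool status zfs-ssd   # errors out if no pool with that name exists on this node
zpool import           # without arguments this only scans for and lists importable pools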
 
Thanks Impact! Do I understand correctly that there is a folder present which prevents the pool from being created? What could be the reason? And is it safe to remove it? I tried, but I seem unable to do so... it gets instantly recreated:

ls -lah /zfs-ssd
total 9.5K
drwxr-xr-x 3 root root 3 Jul 22 16:00 .
drwxr-xr-x 19 root root 23 Jul 22 16:00 ..
drwxr-xr-x 3 root root 3 Jul 22 16:00 ISO

ls -lah /zfs-ssd
total 9.5K
drwxr-xr-x 3 root root 3 Jul 25 19:39 .
drwxr-xr-x 19 root root 23 Jul 25 19:39 ..
drwxr-xr-x 3 root root 3 Jul 25 19:39 ISO
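In case it helps someone else debug this: watching the path with inotify shows exactly when the directory comes back (just a sketch, it needs the inotify-tools package, which is not installed by default):
Bash:
apt install inotify-tools
inotifywait -m -e create,delete /zfs-ssd   # prints an event every time something is created or deleted under /zfs-ssd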
 
Yeah, deleting it should let you use the path again, but I don't know what's in it. Check the contents with gdu as shown above.
I'm not aware of PVE creating uppercase ISO directories.

We could also check whether it's a separate mount point:
Bash:
df -hT
and check whether any PVE config references it:
Bash:
grep -sR "zfs-ssd" /etc/pve
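If that grep turns up a dir storage pointing at /zfs-ssd, that would also explain the directory reappearing: as far as I know, pvestatd periodically activates every enabled storage and recreates the content directories of a dir storage. Something like this (the storage ID is just an example, use whatever the grep shows) should stop that long enough to create the pool:
Bash:
pvesm set ISO --disable 1   # temporarily disable the storage; re-enable it later with --disable 0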

How did you try to delete it? Please use code blocks when sharing results.
 
Thanks, I managed to remove the mount in the GUI by restricting, at cluster level, the nodes the storage is assigned to (nodes 1 and 2). Next I removed the folder in the CLI, but it kept being recreated despite no longer being assigned to this node in the GUI.

I managed to be quick enough to create the ZFS pool ;) All fine for me; I hope the next person struggling with this benefits from these posts. Technically I'm not sure why the folder was still being recreated... ;) I guess it might be due to the ISO directory storage I created for the rest of the cluster... Maybe the trick would have been to disable both storage entries at cluster level! For reference, this is the ISO storage definition:

dir: ISO
        path /zfs-ssd/ISO
        content vztmpl,iso
        prune-backups keep-all=1
        shared 0
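That would explain it: as far as I understand, pvestatd keeps activating every enabled storage, and for a dir storage that means recreating the path and its content subdirectories until the storage is disabled or no longer assigned to the node. If I read the storage docs right, the cleaner fix for the future is to tell PVE that the path lives on an externally managed mount point, so the storage stays offline instead of being written into the empty directory (untested on my side):
Bash:
pvesm set ISO --is_mountpoint /zfs-ssd   # consider the ISO storage offline unless /zfs-ssd is actually mounted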

Screenshot 2025-07-25 at 19.44.34.png
 