zfs pool disappears when joining cluster

howudodat

pve1:
local
pool1 (zfs raidz pool)
VMs:
101 web1, 102 web2...+ 6 other VMs

pve2:
local
pool1 (zfs raidz pool)
VMs: none

Create a cluster on pve1, then join pve2 to that cluster. pool1 on pve2 disappears.
storage.cfg on pve2 now shows pool1 restricted to pve1:
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: pool1
        pool pool1
        content images,rootdir
        mountpoint /pool1
        nodes pve1

I could name the pool on pve2 pool2 instead, but then I can't replicate from one server to the other. Is there a guide that shows how to create a cluster when one of the machines is already a running instance with VMs? NOTE: pve1 is a live server, so I can't delete its VMs. pve2 is a backup server that is a hardware mirror of pve1 but has no VMs on it yet. All the guides I can find create the cluster before creating any VMs.
 
Hi,
if the storage pool1 is available on both nodes, simply remove the node restriction, i.e. nodes pve1 from the storage configuration. Note that the storage configuration (like everything in /etc/pve) is shared cluster-wide.
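Concretely, that means editing /etc/pve/storage.cfg (or using Datacenter > Storage > Edit in the UI) so the entry has no nodes line. A sketch of the resulting entry, assuming the storage is named pool1 as in this thread:
Code:
zfspool: pool1
        pool pool1
        content images,rootdir
        mountpoint /pool1
Since /etc/pve is shared cluster-wide, this single entry then applies to both pve1 and pve2. From the shell, pvesm set pool1 --delete nodes should achieve the same, if your pvesm version supports the --delete option.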
 
Hi there,

I have a similar problem, except I don't even see the pool on either node. I created the two nodes independently and eventually created a cluster on one and joined the second to it. Since then, both ZFS pools have disappeared from the web UI, and even storage.cfg doesn't show them.

storage.cfg from node 1:
Code:
dir: local
        path /var/lib/vz
        content snippets,rootdir,images,iso,vztmpl,backup
        shared 0

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.81T  1.06T   766G        -         -    10%    58%  1.00x    ONLINE  -

storage.cfg and zpool from node 2:
Code:
dir: local
        path /var/lib/vz
        content snippets,rootdir,images,iso,vztmpl,backup
        shared 0

# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.81T  1.74T  75.1G        -         -    68%    95%  1.00x    ONLINE  -

I guess part of the problem is that both pools share the same name, rpool (the default name, I believe). So what can I do to make them both show up in the web UI again?
 
Hi,
are you sure that you had the storage defined in storage.cfg on both nodes? When a node joins a cluster it inherits the cluster's configuration. It's not an issue if the pool name is the same, that's actually what should be done. You simply need to re-add the storage to the configuration e.g. in Datacenter > Storage > Add > ZFS in the UI.
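If you prefer the shell, re-adding the pool should come down to a single pvesm call. A sketch, where the storage ID rpool-data is just an illustrative name and the pool name rpool is taken from the zpool list output above:
Code:
pvesm add zfspool rpool-data --pool rpool --content images,rootdir
This writes a matching zfspool: entry into /etc/pve/storage.cfg, which, being cluster-wide, both nodes then see.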
 
Hi,

Both ZFS pools were shown when the nodes were first installed and configured. It was when I created the cluster on one and joined the other that both just disappeared. The other weird thing: after searching online, I managed to see and add them via Datacenter | Storage | Add | ZFS. However, I don't know why, but after I added them they both show big usage even though there are no VM Disks or CT Volumes.

Here is my disk configuration:
[screenshot attached]

Here is the zfs:
[screenshot attached]

This is what it looks like after I manually add the zfs back via Datacenter | Storage | Add | ZFS
[screenshots attached]

How can I check what's occupying the ZFS volume?
 

%% SOLVED - I finally figured out that it was the ZFS snapshots that took up all the space. I removed most of them and the space is now reclaimed. Thank goodness!
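For anyone hitting the same thing: the standard ZFS space-accounting properties show how much space snapshots hold. A sketch using the pool name rpool from this thread (the snapshot name in the destroy example is hypothetical):
Code:
# space held by snapshots vs. live data, per dataset
zfs list -r -o name,used,usedbysnapshots,usedbydataset rpool

# list individual snapshots and their usage
zfs list -t snapshot -r -o name,used rpool

# remove a snapshot that is no longer needed (example name)
zfs destroy rpool/data/vm-100-disk-0@example-snap
Note that a snapshot's "used" column only counts space unique to that snapshot, so usedbysnapshots on the dataset is the reliable total.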
 
