storage status unknown in cluster

jona

New Member
Mar 2, 2024
Hi all.
I just got my new server and I wanted to migrate some of my CTs and VMs from my old server (node1) to the new server (node2). So what I did was:
0. upgrade node1 from Proxmox 8.1.* (latest) to 9.1
1. create a cluster on node1
2. join the cluster from node2
3. this is where it got stuck... right-click migrate CT, target: node2 (rough CLI equivalents below)
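For reference, I think the CLI equivalents of those steps look roughly like this (cluster name, IP and CT ID are just placeholders, I actually used the GUI):
Code:
# on node1: create the cluster
pvecm create mycluster

# on node2: join the cluster (use node1's IP, this one is a placeholder)
pvecm add 192.168.1.10

# on node1: migrate a container to node2 (CT ID 101 is a placeholder)
pct migrate 101 node2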

Issue:
ERROR: migration aborted (duration 00:00:00): storage 'local-zfs' is not available on node 'node2'


Configuration:
node1 has "local-zfs"
node2: whether I select all nodes or each one specifically for the storage in Datacenter / Storage, it shows "status unknown" on node2

The storage.cfg says that local-zfs is there, or at least configured, but zpool list says no

node1: Intel Xeon, Proxmox 9.1 (upgraded)
node2: Intel Ultra 9, Proxmox 9.1 (brand new)


I already tried restarting both servers and modifying the storages back and forth, but I still have the issue. I feel like I have forgotten something really basic, and I'm really sorry, but any help is highly appreciated.
 
The storage.cfg says that local-zfs is there, or at least configured,
There's your misunderstanding: it is not storage.cfg that tells you it is there. You told the storage system that it exists.

but zpool list says no
So... I would trust this one ;-)

Post the full output of zpool status from each node and the output of cat /etc/pve/storage.cfg once. Then we will see... :-)
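(( If you want an additional data point: pvesm status on each node shows the status of every configured storage as PVE itself sees it. Just a suggestion, not strictly required: ))
Code:
# run this on node1 and on node2
pvesm status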

(( Please use [code]...[/code]-tags - or use the "</>" symbol above the editor ))
 
Thanks @UdoB

Here's the output of node1:
Code:
# zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:18:12 with 0 errors on Sun Jan 11 00:42:16 2026
config:

        NAME                                        STATE     READ WRITE CKSUM
        rpool                                       ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            ata-CT2000BX500SSD1_2317E6CE1643-part3  ONLINE       0     0     0
            ata-CT2000BX500SSD1_2307E6AB4724-part3  ONLINE       0     0     0
          mirror-1                                  ONLINE       0     0     0
            ata-CT2000BX500SSD1_2307E6AB4747-part3  ONLINE       0     0     0
            ata-CT2000BX500SSD1_2307E6AB46D5-part3  ONLINE       0     0     0

errors: No known data errors

Here's the output from node2:
Code:
# zpool status
  pool: zfs-node2
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        zfs-node2      ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            nvme0n1p1  ONLINE       0     0     0
            nvme2n1p1  ONLINE       0     0     0
            nvme3n1p1  ONLINE       0     0     0
            nvme4n1p1  ONLINE       0     0     0

errors: No known data errors


and the storage.cfg:
Code:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes node2

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes node1
        sparse 1

cifs: Bup2
        path /mnt/pve/Bup2
        server 10.10.10.2
        share Bup2
        content backup
        prune-backups keep-all=1
        username someuser

zfspool: zfs-node2
        pool zfs-node2
        content rootdir,images
        mountpoint /zfs-node2
        nodes node2
        sparse 1
 
zfspool: local-zfs
        pool rpool/data
        nodes node1

zfspool: zfs-node2
        pool zfs-node2
        nodes node2

You have pools with different names. (( And this prevents replication, which is really annoying :-( ))
  • Migration of running VMs allows me to select a specific storage on the target
  • Migration of LXC does not offer this to me
In the end, backup/restore should work (rough CLI examples below).
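On the CLI that would look roughly like this (VM/CT IDs and the chosen storages are only examples, adjust to your setup):
Code:
# running VM, on node1: live-migrate and put the disks on the pool that exists on node2
qm migrate 100 node2 --online --targetstorage zfs-node2

# LXC container: back it up on node1 ...
vzdump 101 --storage Bup2 --mode snapshot
# ... and restore it on node2 onto its local pool
# (remove the original CT on node1 first, or restore with a new ID)
pct restore 101 /mnt/pve/Bup2/dump/vzdump-lxc-101-<timestamp>.tar.zst --storage zfs-node2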

There is no easy solution for this dilemma. There is no "aliasing" of ZFS pools to use them in a compatible way. Search for it; it has been discussed multiple times here in the forum, and also in the bugtracker: https://bugzilla.proxmox.com/show_bug.cgi?id=7200

A pragmatic solution is to move everything to identically named "local-zfs"/"rpool" on all nodes. The easiest way might be to reinstall node2.
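If a reinstall is not an option, a rough sketch (untested, device names are examples) of rebuilding only the pool on node2 could look like this. Be aware that the first command destroys everything that is currently on zfs-node2, so move or back up the guests first:
Code:
# on node2: DESTROYS all data on the existing pool!
zpool destroy zfs-node2
zpool create rpool raidz1 /dev/nvme0n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
zfs create rpool/data

# on any cluster node: drop the node2-only storage definition ...
pvesm remove zfs-node2
# ... and let the existing "local-zfs" cover both nodes
pvesm set local-zfs --nodes node1,node2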
 