local-zfs gone after clustering

digit23

Member
Oct 12, 2020
Need help here!

I have prox01 on v7 with a few VMs. I also have prox02 on v8, also with a few VMs. I created a cluster from prox02 and then proceeded to add prox01 to the cluster. It failed because prox01 still had VMs on it.

Then I found this video: https://youtu.be/rMKwEOL2HSA

At 1:20 he says to just back up /etc/pve/nodes/prox01/qemu-server/*.conf, remove the .conf files, join the cluster, then put the .conf files back...
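
That's roughly what I did (just a sketch from memory; the backup directory name is only an example I picked):

####
root@prox01:~# mkdir -p /root/vmconf-backup
root@prox01:~# cp /etc/pve/nodes/prox01/qemu-server/*.conf /root/vmconf-backup/
root@prox01:~# rm /etc/pve/nodes/prox01/qemu-server/*.conf
# ... join the cluster ...
root@prox01:~# cp /root/vmconf-backup/*.conf /etc/pve/nodes/prox01/qemu-server/
####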

WRONG!!! Now the VM disks are gone!!!

If I check VM 101, its disk was: local-zfs:vm-101-disk-0

On prox01 I no longer have local-zfs; instead there is local-lvm with a question mark. Is there anything I can do to recover my VMs?

####

root@prox01:~# df -h
Filesystem        Size  Used  Avail  Use%  Mounted on
udev               14G     0    14G    0%  /dev
tmpfs             2.8G  1.3M   2.8G    1%  /run
rpool/ROOT/pve-1  621G   14G   608G    3%  /
tmpfs              14G   66M    14G    1%  /dev/shm
tmpfs             5.0M     0   5.0M    0%  /run/lock
rpool             608G  128K   608G    1%  /rpool
rpool/ROOT        608G  128K   608G    1%  /rpool/ROOT
rpool/data        608G  128K   608G    1%  /rpool/data
/dev/fuse         128M   28K   128M    1%  /etc/pve
tmpfs             2.8G     0   2.8G    0%  /run/user/0

####

root@prox01:~# find / -name "vm-102*"
/dev/rpool/data/vm-102-disk-1
/dev/rpool/data/vm-102-disk-0-part2
/dev/rpool/data/vm-102-disk-0-part1
/dev/rpool/data/vm-102-disk-0
/dev/zvol/rpool/data/vm-102-disk-1
/dev/zvol/rpool/data/vm-102-disk-0-part2
/dev/zvol/rpool/data/vm-102-disk-0-part1
/dev/zvol/rpool/data/vm-102-disk-0

####

root@prox01:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
rpool   892G   257G   635G        -         -   11%   28%  1.00x  ONLINE  -

root@prox01:/rpool/data# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      257G   607G   104K  /rpool
rpool/ROOT                13.7G   607G    96K  /rpool/ROOT
rpool/ROOT/pve-1          13.7G   607G  13.7G  /
rpool/data                 243G   607G    96K  /rpool/data
rpool/data/vm-100-disk-0  5.30G   607G  5.30G  -
rpool/data/vm-101-disk-0   210G   607G   196G  -
rpool/data/vm-101-disk-1   244K   607G   148K  -
rpool/data/vm-101-disk-2   136K   607G    68K  -
rpool/data/vm-102-disk-0  28.1G   607G  27.8G  -
rpool/data/vm-102-disk-1   204K   607G   132K  -

root@prox01:/rpool/data# ls -al /rpool/data/
total 1
drwxr-xr-x 2 root root 2 Dec 28 2021 .
drwxr-xr-x 4 root root 4 Dec 28 2021 ..
 
When a node joins a cluster, the /etc/pve of this node is overwritten (not merged) with the /etc/pve of the cluster.

That includes /etc/pve/storage.cfg.

If you don't have a backup, you can still re-add the storage under Datacenter -> Storage -> Add -> ZFS with "rpool/data" as the pool.
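
For reference, on a default ZFS install the entry that got wiped from /etc/pve/storage.cfg usually looks something like this (content and sparse are the usual defaults, adjust if yours were different):

####
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
####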
 
Just to make sure I don't dig myself deeper...

The VM disks are "hidden" but still on disk, right?

If in Datacenter -> Storage -> Add -> ZFS I use "rpool/data" as the pool with an arbitrary storage ID, will I recover the VM disks?

 
The VM disks are "hidden" but still on disk, right?
Yes, sure; you have only lost the entry point to access the storage.

If in Datacenter -> Storage -> Add -> ZFS I use "rpool/data" as the pool with an arbitrary storage ID, will I recover the VM disks?

For the ID, you need to reuse "local-zfs", the same storage name as before.
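
If you prefer the CLI, something like this should be equivalent to the GUI dialog (options shown are the usual defaults, adjust if needed), and listing the storage afterwards lets you check that the disks show up again:

####
root@prox01:~# pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --sparse 1
root@prox01:~# pvesm list local-zfs
####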
 
When a node joins a cluster, the /etc/pve of this node is overwritten (not merged) with the /etc/pve of the cluster.

That includes /etc/pve/storage.cfg.

If you don't have a backup, you can still re-add the storage under Datacenter -> Storage -> Add -> ZFS with "rpool/data" as the pool.
Thank you! I was having this problem and did everything right except for one small piece: making sure to select "rpool/data" and NOT just "rpool". Fixed!
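
If anyone else is unsure which dataset to point the storage at, listing the zvols shows where the disks actually live; based on the zfs list output earlier in this thread, on the OP's node that would be:

####
root@prox01:~# zfs list -t volume -r rpool -o name
NAME
rpool/data/vm-100-disk-0
rpool/data/vm-101-disk-0
rpool/data/vm-101-disk-1
rpool/data/vm-101-disk-2
rpool/data/vm-102-disk-0
rpool/data/vm-102-disk-1
####

They all live under rpool/data, which is why that is the pool to select.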
 
