VM configs lost after pool import

bogdan.solga

Member
Nov 26, 2020
Hello, everyone!

I recently moved my Proxmox host from one office to another. The move involved a change of network.
Between the two locations, I repeatedly tried to start the host, both through the normal boot process and with the Proxmox 'ZFS rescue boot' (booting from a Proxmox USB installer).

After I successfully reconnected to the running Proxmox host, I noticed that the VMs I had configured and used were gone. I had a look at the ZFS datasets and the filesystem, and their content is still there:

Bash:
root@ws:~# zfs list

NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                     87.4G   318G      128K  /rpool
rpool/ROOT                16.4G   318G      128K  /rpool/ROOT
rpool/ROOT/pve-1          16.4G   318G     16.4G  /
rpool/data                71.0G   318G      149K  /rpool/data
rpool/data/vm-100-disk-0  26.2G   318G     26.2G  -
rpool/data/vm-100-disk-1   117K   318G      117K  -
...

The VM disks also show up in lsblk:
Bash:
root@ws:~# lsblk

zd0               230:0    0    95G  0 disk
|-zd0p1           230:1    0   200M  0 part
|-zd0p2           230:2    0  94.8G  0 part
|-vm-100-disk-0p1 253:0    0   200M  0 part
`-vm-100-disk-0p2 253:1    0  94.8G  0 part
...

and the zvol device symlinks are present under /dev/zvol:
Bash:
root@ws:~# ls -lh /dev/zvol/rpool/data/

total 0
lrwxrwxrwx 1 root root 12 Jan  5 17:51 vm-100-disk-0 -> ../../../zd0
lrwxrwxrwx 1 root root 14 Jan  5 17:51 vm-100-disk-0-part1 -> ../../../zd0p1
lrwxrwxrwx 1 root root 14 Jan  5 17:51 vm-100-disk-0-part2 -> ../../../zd0p2
...
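
From what I understand, the zvols only hold the disk contents; the VM definitions themselves should be plain <vmid>.conf files under /etc/pve/qemu-server/ (served by pmxcfs), so checking that directory should show whether the configs survived:
Bash:
# /etc/pve is the pmxcfs cluster filesystem; VM definitions live there
# as <vmid>.conf files, separately from the disk data on ZFS
ls -l /etc/pve/qemu-server/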

Any idea how to recover that VM?
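
My current working theory, in case someone can confirm it: since the disks seem intact, the VM could be brought back by recreating its config file by hand. A minimal sketch, assuming the default 'local-zfs' storage name and placeholder hardware values (the memory, cores and OS type below are guesses, not my original settings):
Bash:
# Minimal sketch: recreate the definition of VM 100 by hand so it points
# at the surviving zvol. All hardware values are placeholders - adjust
# them to match the original VM before starting it. The tiny
# vm-100-disk-1 (117K) might be an EFI vars disk (efidisk0) rather than
# a data disk, so it is left out here.
cat > /etc/pve/qemu-server/100.conf <<'EOF'
name: recovered-vm
memory: 4096
cores: 2
ostype: l26
scsi0: local-zfs:vm-100-disk-0
boot: order=scsi0
EOF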
 

Cool, we have exactly the same thing after a cluster mess-up.
 
Is pmxcfs starting normally (systemctl status pve-cluster)?
What were the exact steps you took to "reconnect"?
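
If pmxcfs is not running, /etc/pve appears empty and every VM vanishes from the GUI even though the disks are fine, so something along these lines would narrow it down:
Bash:
# pmxcfs is provided by the pve-cluster service; if it fails to mount
# /etc/pve, all VM configs become invisible
systemctl status pve-cluster
journalctl -u pve-cluster -b    # boot-time errors from pmxcfs
ls /etc/pve/qemu-server/        # should list one <vmid>.conf per VM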
 
