Lost VMs after join to new cluster

lukkaz14
Member · Jul 23, 2019
Hello,
I have a problem: I added my old server to a new cluster, instead of creating the cluster on the old server and adding the new one to it. Afterwards, all my VMs were gone. How can I get them back?
I would be grateful for your help.
Code:
root@pve:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.4.35-1-pve: 4.4.35-76
ceph: 12.2.12-pve1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-4
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
Code:
root@pve:~# zfs list -t all
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                      488G  3.03T       96K  /rpool
rpool/ROOT                22.3G  3.03T       96K  /rpool/ROOT
rpool/ROOT/pve-1          22.3G  3.03T     22.3G  /
rpool/data                 456G  3.03T       96K  /rpool/data
rpool/data/vm-101-disk-1  1.90G  3.03T     1.90G  -
rpool/data/vm-103-disk-1  1.21G  3.03T     1.21G  -
rpool/data/vm-104-disk-1  8.64G  3.03T     8.64G  -
rpool/data/vm-105-disk-1  42.8G  3.03T     42.8G  -
rpool/data/vm-106-disk-1  89.0G  3.03T     89.0G  -
rpool/data/vm-107-disk-2  3.37G  3.03T     3.37G  -
rpool/data/vm-108-disk-1   253G  3.03T      253G  -
rpool/data/vm-109-disk-1  4.24G  3.03T     4.24G  -
rpool/data/vm-110-disk-1  2.69G  3.03T     2.69G  -
rpool/data/vm-112-disk-1  16.0G  3.03T     16.0G  -
rpool/data/vm-113-disk-1  30.8G  3.03T     30.8G  -
rpool/data/vm-114-disk-1    64K  3.03T       64K  -
rpool/data/vm-115-disk-1  2.53G  3.03T     2.53G  -
rpool/swap                8.50G  3.04T      434M  -
 
I'm really not sure what you did from your description. What happened, exactly? Please explain it step by step. The output above only shows your package versions and that you use ZFS as storage.
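
One observation from the zfs list output: the guest disks (the zvols under rpool/data) appear to be intact, so the disk data itself is likely not lost. Joining a node to an existing cluster overwrites /etc/pve on the joining node, which typically removes only the VM configuration files. If the join left a backup of the old configuration database (often found under /var/lib/pve-cluster/backup), the original configs may be recoverable from there; otherwise they can be recreated by hand. A minimal sketch of what a rebuilt config might look like (VMID 101, the local-zfs storage name, and all hardware values here are assumptions to adapt):

```
# /etc/pve/qemu-server/101.conf — hypothetical reconstruction; every value is a
# placeholder except the disk reference, which matches the existing zvol vm-101-disk-1
bootdisk: scsi0
cores: 2
memory: 2048
name: restored-vm
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
ostype: l26
scsi0: local-zfs:vm-101-disk-1
scsihw: virtio-scsi-pci
```

After writing such a file, `qm rescan --vmid 101` can re-detect the disk size; the VM should then boot from the original data, assuming the hardware settings roughly match the old configuration.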
 
