Hi!
First, sorry for my English.
A few years ago I deleted a cluster and lost the VMs that were on it. I want to be sure that, by following these steps, I will not lose my VMs and containers.
I have 3 nodes: 2 Proxmox 6.3-4 servers and one QDevice.
Please tell me if it is safe to do this.
The pvecm status output is at the bottom of this post.
1 - All my VMs have been moved to node ID 01, onto that node's local storage. This is the only Proxmox server I want to keep at the end.
2 - I have deleted all my shared network storages (see the pre-check sketch after the quoted steps below).
3 - I still have to do the following steps, and then repeat the delnode step for node ID 02 and the QDevice:
First, stop the corosync and the pve-cluster services on the node:
systemctl stop pve-cluster
systemctl stop corosync
Start the cluster filesystem again in local mode:
pmxcfs -l
Delete the corosync configuration files:
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
You can now start the filesystem again as a normal service:
killall pmxcfs
systemctl start pve-cluster
The node is now separated from the cluster. You can delete it from a remaining node of the cluster with:
pvecm delnode oldnode
If the command fails because the remaining node in the cluster lost quorum when the now-separated node exited, you may set the expected votes to 1 as a workaround:
pvecm expected 1
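
Before I run any of that, here is the pre-check I intend to do on node 01, just to confirm that every guest and every disk really lives on this node's local storage (only a sketch; the paths are the standard Proxmox config paths, please tell me if I am missing something):

# run on node 01 (srv-bhs7-01) before touching the cluster
qm list          # every VM ID should appear here, i.e. be owned by this node
pct list         # same for the containers
pvesm status     # after step 2, only the local storages should remain
cat /etc/pve/storage.cfg    # double-check that no shared storage definition is left
grep -E '^(scsi|virtio|ide|sata|efidisk)[0-9]+:' /etc/pve/qemu-server/*.conf    # each disk should reference a local storage
grep -E '^(rootfs|mp[0-9]+):' /etc/pve/lxc/*.conf                               # same for container volumes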
So, am I on the right track to not lose anything?
This is my cluster:
root@srv-bhs7-01:~# pvecm status
Cluster information
-------------------
Name: mew-cluster
Config Version: 11
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Wed Jul 28 13:04:49 2021
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.3c90
Quorate: Yes
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice
Membership information
----------------------
Nodeid Votes Qdevice Name
0x00000001 1 A,V,NMW 192.168.0.1 (local)
0x00000002 1 A,V,NMW 192.168.0.2
0x00000000 1 Qdevice
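
And once node ID 02 has been separated with the steps above, this is what I plan to run from node 01 to finish the cleanup (just a sketch; the second node's name is a placeholder, and I am not 100% sure about the QDevice command, so please correct me if it is wrong):

pvecm delnode srv-bhs7-02    # placeholder name, I will use the real name of node ID 02
pvecm expected 1             # only if delnode fails because quorum was lost
pvecm qdevice remove         # for the QDevice; if I read the docs right, this replaces delnode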