Getting rid of a cluster

gta

New Member
Jul 30, 2023
Good day to everyone!

Some time ago I had a standalone PVE host with working VMs on it. Then I installed PVE on a 2nd host. After that I created a cluster on the 1st host and joined the 2nd node to this cluster, with no shared datastore at all (except a testing NFS).
Now I wish to make these nodes standalone PVEs again (with PBS on a 3rd host), since, as I understand it, there is no reason to have a cluster without a shared datastore... am I right?
I have no VMs on the 2nd node at the moment. All production VMs are on the 1st node only.

How can I get rid of the cluster while keeping the 1st PVE up and running? Will the following commands, issued on the 1st node, be correct? (Afterwards I'll do a fresh PVE install on the 2nd host.)

#systemctl stop pve-cluster
#systemctl stop corosync
#pmxcfs -l
#rm /etc/pve/corosync.conf
#rm -r /etc/corosync/*
#killall pmxcfs
#rm -fr /etc/pve/nodes
#systemctl start corosync
#systemctl start pve-cluster
#service pveproxy restart

P.S. Is it necessary to stop all VMs before these steps?
thank you!
 
as I understand it, there is no reason to have a cluster without a shared datastore... am I right?

No, a cluster can also make sense if you do not have shared storage. You can (live) migrate VMs from one node to the other. The drawback without shared storage is that the migration process takes longer, since the whole disk image has to be transferred over the network. If you are using ZFS, you can also enable storage replication for your VM disks, which in turn lets you enable High-Availability (HA) features for your guests.
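A rough sketch of what that could look like on the CLI, assuming a ZFS-backed VM with ID 100 and a second node named pve2 (VMID and node name are just placeholders for your setup):

#pvesr create-local-job 100-0 pve2 --schedule '*/15'
#pvesr status
#ha-manager add vm:100 --state started

The first command creates a replication job that syncs the VM disks to pve2 every 15 minutes, the second shows the replication state, and the last registers the guest as an HA resource once replication is working.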

The process of removing a cluster node is described in our wiki: https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node
But at a glance it looks like your commands actually come from that page.
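For reference, the regular removal is run from a node that stays in the cluster, after the node to be removed has been shut down for good. Assuming the node to remove is called pve2, that boils down to (in a two-node cluster you may first need pvecm expected 1 to regain quorum):

#pvecm nodes
#pvecm delnode pve2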

P.S. Is it necessary to stop all VMs before these steps?

I'm actually not sure if it is necessary, but I'd highly recommend it.
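If you do want to shut them down first, something along these lines could do it from the shell (just a sketch that sends an ACPI shutdown to every VM listed by qm list on the node; review the list before running it):

#qm list
#for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm shutdown $vmid; done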
 
Hi, Lukas!
I appreciate your help. I've started to think about not destroying the cluster, if HA and migration will be available.
But on the 1st node I don't use ZFS. It has a hardware RAID controller, and as far as I know ZFS does not work with hardware RAID. Am I wrong?
As for the 2nd node: PVE on it was installed on an internal SD card, and it also has a hardware RAID controller with two RAID1 volumes. Before I joined the 2nd host to this cluster, both RAID volumes (ext4 and xfs) were visible. After joining the 2nd host to the cluster I lost both storages on it (I can still see them in Disks > Directory).
Is there a way to rebuild the disks with ZFS without destroying the cluster and reinstalling the PVEs?
thank you
 
Could you post the output of pvecm status, pvecm nodes and lsblk?
 
Cluster information
-------------------
Name: cluster1
Config Version: 2
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Mon Jul 31 13:42:41 2023
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.30
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.10.11.62 (local)
0x00000002 1 10.10.11.10
Membership information
----------------------
Nodeid Votes Name
1 1 pxmx (local)
2 1 pve2
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.7T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 1.7T 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 15.8G 0 lvm
│ └─pve-data-tpool 253:4 0 1.6T 0 lvm
│ ├─pve-data 253:5 0 1.6T 1 lvm
│ ├─pve-vm--101--disk--0 253:6 0 16G 0 lvm
│ ├─pve-vm--103--disk--0 253:7 0 100G 0 lvm
│ ├─pve-vm--102--disk--0 253:8 0 300G 0 lvm
│ ├─pve-vm--105--disk--0 253:9 0 100G 0 lvm
│ ├─pve-vm--107--disk--0 253:10 0 100G 0 lvm
│ ├─pve-vm--109--disk--0 253:12 0 100G 0 lvm
│ ├─pve-vm--106--disk--0 253:13 0 200G 0 lvm
│ ├─pve-vm--110--disk--0 253:14 0 16G 0 lvm
│ └─pve-vm--111--disk--0 253:15 0 100G 0 lvm
└─pve-data_tdata 253:3 0 1.6T 0 lvm
└─pve-data-tpool 253:4 0 1.6T 0 lvm
├─pve-data 253:5 0 1.6T 1 lvm
├─pve-vm--101--disk--0 253:6 0 16G 0 lvm
├─pve-vm--103--disk--0 253:7 0 100G 0 lvm
├─pve-vm--102--disk--0 253:8 0 300G 0 lvm
├─pve-vm--105--disk--0 253:9 0 100G 0 lvm
├─pve-vm--107--disk--0 253:10 0 100G 0 lvm
├─pve-vm--109--disk--0 253:12 0 100G 0 lvm
├─pve-vm--106--disk--0 253:13 0 200G 0 lvm
├─pve-vm--110--disk--0 253:14 0 16G 0 lvm
└─pve-vm--111--disk--0 253:15 0 100G 0 lvm
sdb 8:16 0 200G 0 disk
└─sdb1 8:17 0 200G 0 part
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.6T 0 disk
└─sda1 8:1 0 3.6T 0 part /mnt/pve/storage4tb
sdb 8:16 0 893.8G 0 disk
└─sdb1 8:17 0 893.7G 0 part /mnt/pve/storage1tb
sdc 8:32 0 14.9G 0 disk
├─sdc1 8:33 0 1007K 0 part
├─sdc2 8:34 0 512M 0 part /boot/efi
└─sdc3 8:35 0 14.4G 0 part
├─pve-swap 253:0 0 1G 0 lvm [SWAP]
├─pve-root 253:1 0 6.7G 0 lvm /
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data-tpool 253:4 0 4.7G 0 lvm
│ └─pve-data 253:5 0 4.7G 1 lvm
└─pve-data_tdata 253:3 0 4.7G 0 lvm
└─pve-data-tpool 253:4 0 4.7G 0 lvm
└─pve-data 253:5 0 4.7G 1 lvm
sdd 8:48 1 29.4G 0 disk
└─sdd4 8:52 1 29.4G 0 part
sr0 11:0 1 1024M 0 rom
 
Your lsblk outputs are identical. Is it possible that you are still looking at node 1 instead of node 2?
 
But on the 1st node I don't use ZFS. It has a hardware RAID controller, and as far as I know ZFS does not work with hardware RAID. Am I wrong?
Yes, ZFS and hardware RAID do not play together too nicely. Some RAID controllers can be configured to 'IT mode' - that gives the OS direct access to the disks, and then you might be able to use ZFS on them. But in a conventional hardware RAID setup you should just use a regular Linux filesystem.
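Just to illustrate the ZFS route: if the controller can really be switched to IT/HBA mode and the individual disks become visible to the OS, a mirrored pool could be created and added as storage roughly like this (device names and the storage ID are placeholders, and zpool create wipes whatever is on those disks):

#zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY
#pvesm add zfspool tank-vm --pool tank --content images,rootdir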

PVE on it was installed on an internal SD card, and it also has a hardware RAID controller
Installing Proxmox VE on an SD card is HIGHLY discouraged. If your node is still running from that, I'd recommend a re-installation on a proper (enterprise-grade) SSD. I know it was a common practice for another virtualization product, but for PVE it is a really bad idea. Your node is bound to fail eventually due to worn-out flash cells.
 
Please advise on the sequence of steps.
Since node2 runs from an SD card with its disks on hardware RAID, and node1 runs on hardware RAID with xfs disks, do I need to: remove node2 from the cluster, reinstall PVE on an SSD with ZFS, join node2 to the cluster again, migrate all VMs from node1 to node2, and then repeat the process (unjoin, reinstall) for node1 (trying to configure the RAID controller as passthrough)?!
thank you
 
