How to delete/uninstall the whole cluster

maxprox

Hello,

Code:
pve-manager: 2.0-7 (pve-manager/2.0/de5d8ab1)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-46
pve-kernel-2.6.32-6-pve: 2.6.32-46
lvm2: 2.02.86-1pve1
clvm: 2.02.86-1pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.5.1-1
redhat-cluster-pve: 3.1.7-1
pve-cluster: 1.0-9
qemu-server: 2.0-2
pve-firmware: 1.0-13
libpve-common-perl: 1.0-6
libpve-access-control: 1.0-1
libpve-storage-perl: 2.0-4
vncterm: 1.0-2
vzctl: 3.0.29-3pve2
vzdump: 1.2.6-1
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.1-1
root@fcprox02:~#
I have two test machines. Yesterday I experimented with the cluster function for the first time and set it up as described in the wiki.
Now I see that this is a bit over my head.
One of the two hosts also serves as a workstation with the KDE GUI; there I ran into a conflict with NetworkManager, which I then uninstalled.
After that I got further error messages in the web-based management 2.0 ("Data error..."), so I tried to remove one node and add it again.
Now, after a reboot of both hosts, I can see both nodes in the web management on the other machine, but it is still not working ...
Among other things, cman does not start on the workstation:

Code:
root@fcprox02:~# /etc/init.d/cman restart
Stopping cluster: 
   Leaving fence domain... [  OK  ]
   Stopping dlm_controld... [  OK  ]
   Stopping fenced... [  OK  ]
   Stopping cman... [  OK  ]
   Unloading kernel modules... [  OK  ]
   Unmounting configfs... [  OK  ]
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... Cannot find node name in cluster.conf
Unable to get the configuration
Cannot find node name in cluster.conf
cman_tool: corosync daemon didn't start Check cluster logs for details
[FAILED]
root@fcprox02:~# less /var/log/cluster .....
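(Side note: "Cannot find node name in cluster.conf" means cman could not match the local hostname against any <clusternode> entry in its config. A quick way to compare the two, as a sketch - /etc/cluster/cluster.conf is the default redhat-cluster location:)

Code:
hostname
grep -i 'clusternode name' /etc/cluster/cluster.conf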

Please HELP! I have decided that I need NO cluster!
But how can I uninstall it cleanly again?
One host currently has no VMs yet (I wanted to migrate them via the cluster ;-) ), while the other one already does (my Zentyal Linux file and user server lives there).
I do not want to reinstall from scratch ...


Regards,
maxprox
 


It would be great if someone could give me a tip or a suggestion.

Would it perhaps be possible to remove (purge) the following packages:
clvm, corosync-pve, redhat-cluster-pve, libpve-access-control and their dependencies?
Separately on both machines, so that they do not synchronize again?
(The other machine is off or not on the network.)
And then reinstall separately? Would the VMs and their <VMID>.conf files survive that?
Right now I cannot even access the configuration files under /etc/pve/ as root.
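Concretely, I mean something like this (just a sketch of what I would run, with the package names taken from the pveversion output above):

Code:
apt-get purge clvm corosync-pve redhat-cluster-pve libpve-access-control
apt-get autoremove --purge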

Regards,
maxprox
 
Simply remove the packages you do not want - what is the problem?

I have (twice) removed (purged!) the following packages:
clvm, corosync-pve, redhat-cluster-pve, libpve-access-control and their dependencies (=> proxmox*, qemu-server and so on).
Separately, on the machine with no VMs; the other machine was off.
I could see that the directory /etc/pve/ was empty ...
After a restart I installed everything again, and then everything was as before:
the config files in /etc/pve are back, again without access even for root, and I cannot do anything. Please have a look at the screenshots.
The cluster node from the other host also still shows up in the web management, and on this second node everything is greyed out, including the LVM storage from the cluster ...
The host fcprox02 is the one I was talking about, and on it I cannot do anything - I am helpless.
How can I kill the cluster config files so that they do not come back?


Attachments: proxfail4.jpeg, proxfail5.jpeg, proxfail6.jpeg


Regards,
maxprox
 
How can I kill the cluster config files so that they do not come back?

Well, the software is designed to survive a serious crash.

After removing the packages, try to remove the following dirs/files:

/var/lib/pve-cluster
/etc/cluster

Then reinstall the packages after a reboot.
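Spelled out as commands, that sequence might look something like this (just a sketch, not an official procedure; the init script name for pve-cluster is an assumption, the meta-package name is from the pveversion output above):

Code:
# stop the cluster filesystem so /etc/pve is released (service name assumed)
/etc/init.d/pve-cluster stop
# purge the cluster packages as discussed above
apt-get purge pve-cluster corosync-pve redhat-cluster-pve clvm
# wipe the leftover state that would otherwise be restored on reinstall
rm -rf /var/lib/pve-cluster /etc/cluster
reboot
# after the reboot, reinstall
apt-get install proxmox-ve-2.6.32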
 
Well, I also see it as a feature, not as a problem. But for me it is one :-(
I removed (purged) the packages as before:
"clvm, corosync-pve, redhat-cluster-pve, libpve-access-control and their dependencies (=> proxmox*, qemu-server and so on).
Separately, on the machine with no VMs; the other machine was off.
I could see that the directory /etc/pve/ was empty ..."
Then I removed the dirs/files /var/lib/pve-cluster and /etc/cluster ... Then I reinstalled the packages after a reboot, and rebooted again.
A N D - I cannot understand it - everything is back; look at the pictures:

Attachments: proxfail6.jpeg, proxfail7.jpeg, proxfail8.jpeg

The "fcprox01" shown in the first picture was not reachable at that time.
On "fcprox01", the host from which I built the cluster, everything is now fine: there is only its own node, and I can build new VMs and so on.
On the second host, "fcprox02" (the one this thread is about), nothing related to Proxmox works yet (I hate myself).

Some last tip or last chance for me?
This is my developer workstation; a fresh install would cost me a whole day :-(

best regards,
maxprox
 
I am sure you will get it, keep trying. But you know, it is quite a bad idea to test such things on your primary workstation - so, lessons learned.

You can test the ISO installation, package installation, cluster communication, etc. just as well in a virtualization environment; for example, I do such tests with VMware Workstation, and VirtualBox is probably a good candidate too. Even better, use dedicated hardware for Proxmox VE beta tests. Once you understand the system in full, go further.
 
Yes, yes, yes, you are right!

But ... (I hope) it is done!
The solution was, in addition to dietmar's advice above, to change the hostname and the IP address of the host.
For the first few minutes it seems to work.
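For anyone who finds this later: on a plain Debian host the rename boils down to the usual files (a sketch of what I touched; adjust names and addresses to your own setup):

Code:
editor /etc/hostname              # set the new hostname
editor /etc/hosts                 # map the new IP to the new name
editor /etc/network/interfaces    # set the new static address
reboot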

Thank you,
maxprox
 
