Proxmox VE 4.0 released!

screenie

Member
Jul 21, 2009
We have a couple of multi-node clusters running the latest 3.4 without any issues and tried to re-install one 4-node cluster of them with PVE 4.
The base install was straightforward, but we ran into quorum issues when creating the cluster - all nodes were set up identically, but one node couldn't join the cluster successfully - it hung at 'waiting for quorum...'. The other nodes showed it as added, but the node itself did nothing and syslog showed:
Oct 25 23:44:00 node3 pmxcfs[3788]: [status] crit: cpg_send_message failed: 9
I tried several times to delete and re-add the node, but no luck, so I installed the node from scratch, and when adding it I hit the same issue again - with the -force option it was able to join.
While testing I rebooted another node, and after that containers could not be started on this node - syslog again showed quorum messages:
Oct 26 16:41:01 node1 pmxcfs[1302]: [quorum] crit: quorum_initialize failed: 2
Oct 26 16:41:01 node1 pmxcfs[1302]: [quorum] crit: can't initialize service
Oct 26 16:41:01 node1 pmxcfs[1302]: [confdb] crit: cmap_initialize failed: 2
Oct 26 16:41:01 node1 pmxcfs[1302]: [confdb] crit: can't initialize service
Oct 26 16:41:01 node1 pmxcfs[1302]: [dcdb] crit: cpg_initialize failed: 2
Oct 26 16:41:01 node1 pmxcfs[1302]: [dcdb] crit: can't initialize service
Oct 26 16:41:01 node1 pmxcfs[1302]: [status] crit: cpg_initialize failed: 2
Oct 26 16:41:01 node1 pmxcfs[1302]: [status] crit: can't initialize service
and this message again:
Oct 26 16:41:10 node1 pmxcfs[1302]: [status] crit: cpg_send_message failed: 9
Oct 26 16:41:10 node1 pmxcfs[1302]: [status] crit: cpg_send_message failed: 9
Oct 26 16:41:12 node1 pmxcfs[1302]: [status] crit: cpg_send_message failed: 9
Oct 26 16:41:12 node1 pmxcfs[1302]: [status] crit: cpg_send_message failed: 9
Oct 26 16:41:12 node1 pmxcfs[1302]: [status] crit: cpg_send_message failed: 9
After rebooting the node again, it had no quorum issue.
The same thing happened on another node after a reboot - rebooting again and the quorum issue was gone.
It seems clustering/quorum is not as reliable as in 3.4, where I never saw this issue on any node.
Has anything changed, or any idea what could cause this issue?
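For reference, this is roughly what I check when a node comes back without quorum (just a sketch - `check_quorate` is a helper name I'm using here for illustration, not a PVE command):

```shell
# after a reboot with quorum trouble, first check that the services came up:
#   systemctl status corosync pve-cluster
#   journalctl -b -u corosync | tail -n 50

# small helper for scripts/monitoring: reads `pvecm status` output on
# stdin and succeeds only when the node reports quorum
check_quorate() {
    grep -q '^Quorate:[[:space:]]*Yes'
}

# on a live node: pvecm status | check_quorate || echo "no quorum"
```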

Also, the missing live migration feature for containers made me decide to go back to 3.4.
It seems LXC has to improve its tooling to be useful - having to stop containers in order to move them is a no-go for us, so we will stick with OpenVZ for the moment.
 

screenie

Member
Jul 21, 2009
This is how it looked on the remaining PVE 4 nodes:
root@node1:/# pvecm status
Quorum information
------------------
Date: Sun Oct 25 23:49:56 2015
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
Ring ID: 60
Quorate: Yes

Votequorum information
----------------------
Expected votes: 4
Highest expected: 4
Total votes: 4
Quorum: 3
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.100.26 (local)
0x00000002 1 192.168.100.27
0x00000003 1 192.168.100.28
0x00000004 1 192.168.100.29
The 3.4 cluster is also running cluster sync via multicast, where I never had a problem before - nothing has changed on the infrastructure.
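Since multicast is the usual suspect, it's worth testing it between all nodes with omping (run the same command on every node at roughly the same time; the node names below are placeholders for my cluster):

```shell
# install the tool (Debian package "omping"), then run on all nodes in
# parallel; a response rate close to 100% means multicast works between them
omping -c 600 -i 1 -q node1 node2 node3 node4
```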
 

adoII

Member
Jan 28, 2010
Hi,

I have done it online without VM interruption.

The tricky part is to upgrade to Jessie and Proxmox 4 online, then switch to corosync 2 on all nodes at the same time.

Then you'll be able to do live migrations and reboot the empty hosts.

I'm on holiday this week, but I'll try to post a howto next week.
Hi Spirit,
have you had a chance to write down how you did your upgrade?

I am also looking for the best method to upgrade a 5-node cluster from Proxmox 3 to Proxmox 4 on the fly.
 

spirit

Well-Known Member
Apr 2, 2010
www.odiso.com
Hi Spirit,
have you had a chance to write down how you did your upgrade?

I am also looking for the best method to upgrade a 5-node cluster from Proxmox 3 to Proxmox 4 on the fly.

Hi,

I have finished migrating a small 5-node cluster from Proxmox 3 to Proxmox 4,
using qemu live migration.



Here is the howto:

Code:
requirements:
-------------
external storage (nfs, ceph).
not tested with clvm + iscsi, or local ceph (which should also work)


1) Upgrade a first node to Proxmox 4.0 and recreate the cluster
---------------------------------------------------------------
Start with an empty node,
then upgrade it to Proxmox 4.0, following the current wiki


# apt-get update && apt-get dist-upgrade
# apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
# apt-get update
# apt-get install pve-kernel-4.2.2-1-pve
# apt-get dist-upgrade

reboot

# apt-get install proxmox-ve
# apt-get remove pve-kernel-2.6.32-41-pve

# pvecm create <clustername>


2) Upgrade the second node
--------------------------
# apt-get update && apt-get dist-upgrade
# apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
# apt-get update
# apt-get install pve-kernel-4.2.2-1-pve
# apt-get dist-upgrade


---> no reboot here

# apt-get install proxmox-ve

if apt returns an error like:

"Setting up pve-manager (4.0-48) ...
Failed to get D-Bus connection: Unknown error -1
Failed to get D-Bus connection: Unknown error -1
dpkg: error processing package pve-manager (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of proxmox-ve:
proxmox-ve depends on pve-manager; however:
Package pve-manager is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:"


then:

# touch /proxmox_install_mode
# apt-get install proxmox-ve
# rm /proxmox_install_mode

now the tricky part

mount /etc/pve locally

# /usr/bin/pmxcfs -l

add the node to the cluster

# pvecm add ipofnode1 -force

stop the old corosync and delete the old config

# killall -9 corosync
# /etc/init.d/pve-cluster stop
# rm /var/lib/pve-cluster/config.db*

start the new corosync and pve-cluster

# corosync
# /etc/init.d/pve-cluster start

verify that you can write to /etc/pve/ and that it is correctly replicated to the other Proxmox 4 nodes

# touch /etc/pve/test.txt
# rm /etc/pve/test.txt

migrate the vms (do it for each vmid)

# qm migrate <vmid> <target_proxmox4_server> -online

(migration must be done with the CLI, because pvestatd can't start without systemd, so the GUI is not working)


# reboot the node

3) do the same for the next node(s)

4) when all nodes are migrated, remove the old cluster config:

# rm /etc/pve/cluster.conf
 

p1v12002

New Member
Jun 17, 2014
Hi everybody,

has anyone encountered problems installing on an HP ProLiant DL385 G8 server with a P420i RAID array controller?
I'm seeing a "p420i lock up code 0x13" during the installation.
I have tried updating the server's firmware with the latest SPP 2015.10.0, including RAID controller firmware version 6.68, without improvement.
I would like to point out that the previous Proxmox VE 3.4 works fine.

Thanks and regards
Paolo Vola
 

p1v12002

New Member
Jun 17, 2014
Hi all,

I solved the array controller problem with a special patch; now the installation stops at the first screen of the license agreement (mouse pointer locked) due to USB keyboard and mouse driver issues.
Has anyone encountered similar problems?
Thanks in advance and regards
paolo
 

Phinitris

Member
Jun 1, 2014
Hey,
I don't think OpenVZ will die. Especially for hosting I think it's better than LXC, because it supports CPULIMIT, IOLIMIT and IOPSLIMIT.

We are currently running a quite stable Proxmox OpenVZ ploop cluster with integration for Graphite/Grafana, traffic accounting/calculation, rsync backups,
Duo Security authentication and much more. Can't complain.

http://prntscr.com/94p6hv

I hope that OpenVZ will release Virtuozzo 7 (3.10 kernel).

Best regards,
Phinitris
 

Florent

Member
Apr 3, 2012
If I understand correctly, there is no cluster upgrade procedure from 3.4 to 4.0? We need to re-create the cluster from scratch, so we lose all cluster configuration such as users, permissions, etc.?
I can't understand your strategy with this release. Think of people having clusters with dozens of nodes ... impossible to upgrade.
 

adamb

Well-Known Member
Mar 1, 2012
If I understand correctly, there is no cluster upgrade procedure from 3.4 to 4.0? We need to re-create the cluster from scratch, so we lose all cluster configuration such as users, permissions, etc.?
I can't understand your strategy with this release. Think of people having clusters with dozens of nodes ... impossible to upgrade.
The HA stack is a lot different and a ton of it is new. There honestly was no upgrade path for them. We have tons of PVE 3.4 clusters out in the field, so I feel your pain, but there is nothing that can be done.

Doesn't matter if you have 3 nodes or 16, the upgrade procedure is going to be the same. Is the issue just downtime?
 

Florent

Member
Apr 3, 2012
If I use the procedure provided by spirit, it seems there's no downtime, is there?

The problem is that the upgrade needs to be done by hand; it's impossible to automate with Ansible, for example.

And during the upgrade, we have 2 clusters, not 1...
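To be clear, the package part of the procedure is scriptable; it's the cluster join and the live migrations that need a human. A rough sketch of what I mean (node names are placeholders, and this only covers the repeatable apt steps):

```shell
# emit the repeatable upgrade commands for each remaining 3.4 node;
# pipe the output to sh (or feed it to your automation) to actually run them
upgrade_cmds() {
    for node in "$@"; do
        echo "ssh root@$node 'apt-get update && apt-get -y dist-upgrade'"
    done
}

upgrade_cmds node2 node3 node4 node5
```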
 

adamb

Well-Known Member
Mar 1, 2012
If I use the procedure provided by spirit, it seems there's no downtime, is there?

The problem is that the upgrade needs to be done by hand; it's impossible to automate with Ansible, for example.

And during the upgrade, we have 2 clusters, not 1...
I wasn't aware of an online procedure; I have been doing all of mine offline. I would never want to automate an upgrade - sounds like a bad situation waiting to happen.

Not sure what the issue is with having 2 clusters - do 1 at a time?
 

Florent

Member
Apr 3, 2012
Yes, I do 1 cluster at a time, but read the procedure: it's impossible to mix 3.4 and 4.0 nodes in the same cluster. So the procedure is to upgrade a first node and create a new cluster on that node - during the upgrade you have 2 clusters instead of one.
When you have thousands of nodes, you can't do it by hand ... that's not my case, but it seems that Proxmox is not used on large clusters...
 

adamb

Well-Known Member
Mar 1, 2012
Yes, I do 1 cluster at a time, but read the procedure: it's impossible to mix 3.4 and 4.0 nodes in the same cluster. So the procedure is to upgrade a first node and create a new cluster on that node - during the upgrade you have 2 clusters instead of one.
When you have thousands of nodes, you can't do it by hand ... that's not my case, but it seems that Proxmox is not used on large clusters...
Ahhh ok, understood. Yeah, I don't know of anyone running 1000s of nodes, or even 100s for that matter.
 

dietmar

Proxmox Staff Member
Staff member
Apr 28, 2005
Austria
www.proxmox.com
I can't understand your strategy with this release. Think of people having clusters with dozens of nodes ... impossible to upgrade.
We always try hard to make updates as easy as possible. But this time, some projects we depend on made totally incompatible changes.
That makes it impossible to provide a fully automatic update.
 

Florent

Member
Apr 3, 2012
We always try hard to make updates as easy as possible. But this time, some projects we depend on made totally incompatible changes.
That makes it impossible to provide a fully automatic update.
OK, I'm just saying that it's unusable in a production environment.

(spirit's howto quoted - see above)
Hi spirit, thank you for your howto, but I don't think it can work.
When you run "pvecm add ipofnode1 -force" on a node that has not been rebooted, it fails because it calls 'systemctl stop pve-cluster' and systemctl does not work yet (system not rebooted):
Code:
pvecm add 192.168.0.203 -force
node test2 already defined
copy corosync auth key
stopping pve-cluster service
Failed to get D-Bus connection: Unknown error -1
can't stop pve-cluster service
 
