Last night we tried to add a new node to the cluster; it got stuck on joining, showing the messages below:
can't create shared ssh key database '/etc/pve/priv/authorized_keys'
(re)generate node files
generate new node certificate
unable to create directory '/etc/pve/priv' - Permission denied
Hi all, I joined a new (4th) node to my existing 3-node Proxmox cluster. It sat at "waiting for quorum" for 15 minutes before joining, but after a restart it would not rejoin the cluster; after 30 min or so it joins again.
Any suggestion on what is wrong or what the...
I am trying to set up a QDevice in my 2-node cluster. I am running a Debian 10 VM on a separate server outside the cluster.
When I try to connect the QDevice, I get the following error message:
pvecm qdevice setup <ip qdevice >
/bin/ssh-copy-id: INFO: Source of key(s) to be...
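For context, the usual QDevice setup per the PVE documentation needs the qnetd daemon on the external machine and the qdevice package on every node before `pvecm` is run. A sketch (the QDevice address is a placeholder, and root SSH login on the Debian VM must be permitted, since the setup step copies SSH keys with ssh-copy-id):

```shell
# On the external Debian 10 VM that will act as the QDevice:
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# Then, from any one cluster node:
pvecm qdevice setup <qdevice-ip>
```

If any of these packages are missing, or root SSH login to the QDevice host is disabled, the setup step fails partway through.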
Good evening everyone,
we are currently setting up a new 3-node PVE cluster (with ZFS and storage replication).
In terms of network configuration, we would like to do the following:
1G port: Corosync (10.0.11.x) (so that it does not get in the way of other traffic)
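For a dedicated Corosync network like this, the relevant corosync.conf pieces would look roughly as follows (node name and address are made up; on PVE the file should be edited via /etc/pve/corosync.conf, bumping config_version on every change so pmxcfs distributes it consistently):

```
totem {
  cluster_name: pve-cluster
  config_version: 2
  ip_version: ipv4
  interface {
    linknumber: 0
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # Corosync traffic stays on the dedicated 1G network:
    ring0_addr: 10.0.11.1
  }
}
```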
I keep putting off posting this, but I'm looking for some advice on upgrading from PVE 5.4-13 to PVE 6.x. I'm running a 5-node cluster where 3 nodes are all-flash Ceph-backed machines and the other two are non-Ceph game-server hosting machines. The Ceph machines communicate...
I have a (possibly unrelated) problem where one cluster member's TOTEM keeps generating retransmits from other nodes, or local timeouts. When I checked, nodeC kept joining and leaving the cluster, about 50 times per second. Restarting corosync gave the same result.
It turned out however that on other...
So, I was following the directions to separate the cluster network, and things seemed to be going alright with the first node.
However, when I rebooted the node, the Datacenter view\Cluster says: missing ':' after key 'interface' (500). I then checked the /etc/pve/corosync.conf file...
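For comparison, corosync's parser only accepts `key: value` lines and `section {` openers, so a stray `interface` token without either (e.g. a brace lost while editing) produces exactly this error. A well-formed totem section looks roughly like this (values are placeholders):

```
totem {
  cluster_name: pve-cluster
  config_version: 3
  interface {
    linknumber: 0
  }
}
```

When fixing it, edit /etc/pve/corosync.conf (not /etc/corosync/corosync.conf directly) and increment config_version, so the corrected copy is propagated to all nodes.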
We have a Proxmox cluster with each node in the same subnet. The cluster contained 3 hosts and everything has worked as it should over the last few years. Recently we wanted to add a new host to the cluster. This host is also in the same subnet; nothing changed in the network configuration...
We have a 2-node cluster without HA, named pve1 and pve2. pve1 replicates VMs to pve2. Both systems ran well until last week.
Both systems run Proxmox VE 6.0-2, PVE-Manager 6.0-4, PVE-Kernel 6.0-5.
Now we get the following error messages:
pve1 is marked...
I am running a PVE cluster over WAN (different datacenters across the globe). It has worked flawlessly the whole time and best suited my needs (of course no shared storage, LM or HA, but still central management, easy offline migrations, etc.). Some time ago I upgraded to PVE 6.0 and was able to run the...
I currently have a cluster of 2 Proxmox nodes with "two_node: 1" in the corosync quorum settings. This cluster will grow to 6 servers. However, for maintenance reasons it is possible that only 1 of the 6 nodes is active.
Because of the quorum vote, it is impossible to make a 2-node cluster work...
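For reference, `two_node` is only intended for clusters with exactly two nodes, so it will not carry over to the 6-server setup. A hedged sketch of the votequorum options that come into play (whether last_man_standing is acceptable depends on how much split-brain risk you can tolerate):

```
quorum {
  provider: corosync_votequorum
  # Intended only for exactly 2 nodes; also implies wait_for_all:
  two_node: 1
  # Possible alternative for clusters that shrink during maintenance:
  # last_man_standing: 1
}
```

As a manual stopgap during maintenance, `pvecm expected 1` lowers the expected vote count on the surviving node, again at the cost of accepting split-brain risk.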
We are migrating our servers to a different cloud provider.
While reading the documentation, I came across this: "Storage communication should never be on the same network as corosync!"
Our servers must have HA and data redundancy/high availability (using Ceph).
The problem is, our...
We have a mixed 4.4/5.3 cluster; it's our production system, so the upgrade is very slow.
Recently we have had a lot of problems when upgrading, or when adding a freshly installed 5.4 box to the cluster: sometimes the whole cluster locked up, with corosync using 100% CPU. Only separating the new box from the cluster could restore it...
Installing on an HP Pavilion g7 with an Intel CPU.
While installing Proxmox VE 6.0-1 from the ISO download I get a screen that says:
The Proxmox VE could not be installed
Installation of package corosync_3.0.2-pre_amd64.deb failed
The progress bar on the bottom is at 100%...
We updated from PVE 5 to PVE 6 recently and noticed that nodes in our 4-node cluster leave randomly. Checking pvecm status shows that CMAP cannot be initialized, so I had a look at corosync on the failed node, only to learn that it had obviously segfaulted.
This happened on 3 of 4 cluster...
I've got a Proxmox network question that I hope someone can answer as I can't seem to find a clue myself.
I've read through forums, best practices and the Admin Guide.
There are 4 servers in a cluster. However, Ceph is not going to be used in this deployment, because...
Hello all, I will keep this brief and to the point. I have moved my cluster to a permanent home on a new network and am recreating it. To do so I followed the process of removing corosync.conf and deleting all the config files. I also removed all the Ceph config, and all VMs are...
The title says it all: I mistakenly ran rm -R /etc/pve/nodes on a quorate PVE 5.4 2-node cluster.
The cluster still runs happily, and pvecm status looks fine and all. But corosync did its job, thoroughly removing this directory from both nodes. I guess the configuration is still in RAM...
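In case it helps recovery attempts: /etc/pve is a FUSE view provided by pmxcfs, which is backed by an SQLite database on each node. A sketch for inspecting what is still persisted (paths per the pmxcfs documentation; the `tree` table name comes from the pmxcfs internals, so treat it as an assumption):

```shell
# Stop the cluster filesystem (this unmounts /etc/pve):
systemctl stop pve-cluster

# Inspect the backing database; the 'tree' table holds the file hierarchy:
sqlite3 /var/lib/pve-cluster/config.db 'SELECT name FROM tree;'

# Bring /etc/pve back afterwards:
systemctl start pve-cluster
```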
With PVE 6 the new Corosync version is used.
Are there any changes to the number of cluster nodes allowed in one cluster?
If I remember right, the current limit is 32 nodes, but fewer are recommended (how many?).