Had to shut down the entire cluster for maintenance on the power grid. After starting the cluster back up (7 nodes), node 4 failed to join the cluster.
Syslog for the server is working.
I have moved all VMs from the server to another host for now.
Short of removing the host and reinstalling, is...
I do not think they are planning on adding MFS support. It's a shame, because it works beautifully with Proxmox.
I would recommend having the storage and corosync networks on separate physical adapters and switches for better performance.
Last update on this, if anyone is interested or ever has the same issue.
The root cause was a combination of a faulty NIC on one of the servers causing chaos on the network, and failed DNS on the node.
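For anyone hitting something similar: a quick way to rule out the DNS half before tearing into the network is to check that every node name still resolves from each host. A rough sketch (node names are placeholders, swap in your own):

```python
#!/usr/bin/env python3
# Minimal check: confirm every cluster node name still resolves from this
# host. Failed DNS on one node was half of the root cause here.
# Node names are placeholders; swap in your own.

import socket

NODES = ["vwk-prox01", "vwk-prox02", "vwk-prox03",
         "vwk-prox04", "vwk-prox05", "vwk-prox06"]

for node in NODES:
    try:
        addr = socket.gethostbyname(node)
        print(f"{node} resolves to {addr}")
    except socket.gaierror as exc:
        print(f"{node}: DNS lookup FAILED ({exc})")
```

Run it on each node; any FAILED line points at the same kind of problem we had.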
Update on this:
One server's onboard network was knocked out; however, it seems that corosync does start up on this server.
Is there any way I can use the working (I assume) database on my other servers to get the cluster online?
We had a critical server power failure that took down the entire cluster (the inverter on the UPS failed, knocking out the mains breakers and killing all servers before maintenance staff could reach the power room).
6-server cluster (vwk-prox01 to vwk-prox06)
Shared storage folder mounts properly on all servers...
Basically corosync did not like our WAN link at all.
I still have the storage on the same network after removing the remote server from the cluster, and there are no issues.
It seems that corosync died every time the latency on the WAN link exceeded about 15ms.
For now, it runs very well along with the...
This issue seems to be caused by corosync using the same physical network as the shared storage.
If latency on the network exceeds 2ms, corosync throws a fit.
Splitting the two networks over different switches and interfaces seems to resolve this completely.
Losing migration would be extremely inconvenient; however, it seems to be the only practical solution.
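If you want to catch this before corosync falls over, a crude probe that pings the other nodes and flags round trips above the ~2ms mark does the job. A rough sketch, assuming Linux-style ping output and placeholder node names:

```python
#!/usr/bin/env python3
# Rough latency probe: ping the other cluster nodes and flag round-trip
# times above the ~2ms level corosync tolerated on this network.
# Assumes Linux-style ping output; node names are placeholders.

import re
import subprocess
import time

NODES = ["vwk-prox01", "vwk-prox02", "vwk-prox03"]
THRESHOLD_MS = 2.0  # corosync threw a fit above this on the shared network

def rtt_ms(host):
    """Return the round-trip time in ms for one ping, or None on failure."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return None
    match = re.search(r"time=([\d.]+) ms", result.stdout)
    return float(match.group(1)) if match else None

while True:
    for node in NODES:
        rtt = rtt_ms(node)
        if rtt is None:
            print(f"{node}: unreachable")
        elif rtt > THRESHOLD_MS:
            print(f"{node}: {rtt:.1f} ms, above what corosync will tolerate here")
    time.sleep(5)
```

On the WAN link the same probe would have shown the spikes past 15ms well before the cluster noticed.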
Under regular load, the latency between us and the remote site stays between 2 and 4ms; however, under heavy load it increases to as much as 20ms.
This is most likely the leading cause of all...
I do have one big problem: one of my servers is located off-site (about 6km away, across a river and on the other side of town). There is a site-to-site link (600Mbps wireless), but there is no way to connect that server to the new physical network I have to create for corosync.
It seems...
Fabian, you are a gentleman and a scholar.
Please pass on my recommendation to HR that you should receive a raise and promotion immediately.
I think you have just shown me the root cause of all the issues I have been having with Proxmox.
Corosync is currently on the same network as my storage...
Proxmox was the only thing that changed.
I did a full backup, a clean install of node 4, recreated the cluster, and restored the backup of the VMs.
Currently pveproxy is down on most of my cluster. It was running fine after the upgrade until the servers started operating under heavy load; then everything went...
Pveproxy on all my servers crashed during the night.
All my VMs are up (along with the disk access issue); however, the web interface is down.
I can SSH directly to the hosts, but if I attempt to run "pveproxy start", everything stops responding.
It seems the issue occurs if I leave open a...
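In the meantime, a quick way to see which hosts have actually lost the GUI, without touching the VMs, is to probe the web interface port (8006, the standard Proxmox VE GUI port) on every node. A rough sketch with placeholder node names:

```python
#!/usr/bin/env python3
# Quick check: see which nodes still answer on the pveproxy web interface
# port (8006, the standard Proxmox VE GUI port).
# Node names are placeholders; use your own.

import socket

NODES = ["vwk-prox01", "vwk-prox02", "vwk-prox03",
         "vwk-prox04", "vwk-prox05", "vwk-prox06"]
PVEPROXY_PORT = 8006

for node in NODES:
    try:
        with socket.create_connection((node, PVEPROXY_PORT), timeout=3):
            print(f"{node}: port {PVEPROXY_PORT} is accepting connections")
    except OSError as exc:
        print(f"{node}: no answer on port {PVEPROXY_PORT} ({exc})")
```

A node that accepts the TCP connection but never serves the page would still need a closer look at its pveproxy logs.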
If it were an earlier version of Proxmox, no contest: Proxmox wins. However, after upgrading to Proxmox v4, I am seriously considering VMware.
I have had multiple stability issues since the upgrade: I have VMs losing connection to their disk images, and I have pveproxy and spiceproxy completely...