I have a 4-node cluster running 5.1. This is the first time this has happened: when one of the VMs failed over to the second node after a power failure, it didn't turn back on.
Here is the error message when I try to start it:
kvm: -drive...
Thank you for your guidance. I have changed it to 3/2.
About the -machine 'accel=tcg' option: where do I change it to get better performance? Can I do it in production, and will it have an impact on existing VMs?
Ceph status was unhealthy, with all sorts of warnings.
My VMs can migrate and work fine if I do it manually, but in this scenario that didn't happen.
Sorry for my ignorance, but where do I change this option?
-machine 'accel=tcg'
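In case it helps others hitting the same thing: `accel=tcg` in the KVM command line means the guest is running with software emulation instead of KVM hardware acceleration. A minimal sketch of how to check and re-enable it (VMID 100 is a placeholder for your VM's ID):

```shell
# Check that the host CPU exposes VT-x/AMD-V
# (a non-zero count means hardware virtualization is available):
egrep -c '(vmx|svm)' /proc/cpuinfo

# Re-enable KVM hardware virtualization for the guest; this is the same
# as the "KVM hardware virtualization" switch on the VM's Options tab:
qm set 100 --kvm 1
```

The change takes effect on the next full stop/start of that VM, so it can be rolled out per guest in production.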
What is recommended Pool size and how can I change existing...
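As a sketch of how the size of an existing pool can be changed with `ceph osd pool set` (the pool name `rbd` below is a placeholder; substitute your actual pool):

```shell
# List pools with their current size/min_size settings:
ceph osd pool ls detail

# Move a pool to 3 replicas, allowing I/O as long as 2 are up
# (assumes a pool named "rbd" -- use your own pool name):
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```

Ceph rebalances after the change, so expect recovery traffic until the extra replicas are written.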
Hello all,
I have a 4-node cluster with multiple OSDs, 9 per node. I have the pool set to 2/2.
Recently one of the nodes (the master) failed. All VMs migrated to node2, but none of them came up. More strangely, my VMs on node3 and node4 also stopped working. I must have made some huge mistake in my...
Hello experts,
I am having this problem now for the 4th time in the same month: the master node in my 4-node cluster goes down, and even though I have HA set up and the VMs migrate, they are still not reachable. Also, is there any way I can find the reason why this is happening? I had to manually hard power...
I successfully reinstalled Proxmox on the cluster 1 nodes and added them to cluster 2.
Some commands that helped, just for reference, after adding a node to the cluster.
Ceph should be installed prior to joining the cluster.
sgdisk -Z /dev/sdb (For Ceph Disk)
pveceph createosd /dev/sdb
After adding the...
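Sketching the sequence end to end as I ran it, in case it helps someone (the member IP 192.168.1.10 and /dev/sdb are placeholders from my setup):

```shell
# Install Ceph packages first (should be done before joining):
pveceph install

# On the new node, join the existing cluster via a member's IP:
pvecm add 192.168.1.10

# Zap the old partition table on the Ceph disk, then create the OSD:
sgdisk -Z /dev/sdb
pveceph createosd /dev/sdb
```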
I do want to do that, but the problem is that I am remote, the DC is in another state, and I don't entrust the installation and configuration to local hands.
I have read that "pvecm expected 1" could help me bypass the quorum problem. Is that true?
I would love to do it all remotely without losing my Public IPs...
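For reference, a sketch of what I understand "pvecm expected 1" does; it should only be run while the failed node is genuinely down:

```shell
# Tell corosync to expect only 1 vote, so the surviving node regains
# quorum and /etc/pve becomes writable again. This is temporary and
# is superseded once the peer node rejoins:
pvecm expected 1

# Confirm the cluster reports quorate again:
pvecm status | grep -i quorate
</imports>
```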
Hey Klaus,
Thanks for the detailed explanation and setup. I think I will go with your suggestion of putting all 4 nodes in one cluster.
Both clusters have separate Ceph installations, although they are on the same local network.
What would be the best way to destroy cluster 1 and merge it into cluster 2? I already have VMs that...
Last night when a node failed, I checked the other node and it said something about waiting for quorum. I had to manually power cycle the 1st node to bring the cluster and VMs back up.
root@hosting:~# pvecm status
Quorum information
------------------
Date: Wed Mar 7 15:55:22 2018
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1/9180
Quorate: Yes
Votequorum information...
Hello all,
I am new to the Proxmox community. I have 2 Proxmox clusters with 2 nodes each, set up with Ceph storage, on the same local network with public IPs.
Recently, one node on one of the clusters died, but the secondary node on that cluster did not bring up the VMs.
I have done all the research and I...
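A quick sketch of the quorum arithmetic that explains why a 2-node cluster cannot survive a node loss on its own: corosync requires a strict majority of votes.

```shell
# Quorum is floor(total_votes / 2) + 1 (each node has 1 vote by default):
nodes=2
quorum=$(( nodes / 2 + 1 ))
echo "A ${nodes}-node cluster needs ${quorum} votes to stay quorate"
# With 2 nodes, quorum is 2, so losing either node drops below quorum
# and HA will not restart VMs. With 4 nodes, quorum is 3, so one
# failure is tolerated.
```

This is why a 2-node setup generally needs a third vote (e.g. a QDevice) for HA to work after a single node failure.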