When memory hotplug is enabled, you can't downgrade memory properly for that VM. I know it's not possible to hot-unplug/downgrade the memory, but when the VM is running and you downgrade the memory, the new memory config should be highlighted in red and become active when you stop/start the VM. This...
Not sure, but here are some thoughts:
1. Is there a firewall enabled on the nodes?
2. Do you use bonding? If yes, which mode? Only active-backup works OK.
3. Do all the nodes show the same multicast address when you execute the following? (See also the omping test after this list.)
corosync-cmapctl -g totem.interface.0.mcastaddr
4. What does the...
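A quick way to verify that multicast actually works between the nodes is omping. Run it on all nodes at the same time; the node names here are placeholders:

omping -c 10000 -i 0.001 -F -q node1 node2 node3

If the multicast loss reported at the end is more than a few percent, the network is the problem.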
Thanks Udo,
Last weekend we migrated +/- 50 VMs without any issue. In a couple of weeks we will migrate the remaining VMs, also with less than 1 minute of downtime :-)
Backup and restore takes too much downtime; that's simply not an option. If it were really needed, we would rather upgrade the current Proxmox VE 3.x cluster to Proxmox VE 4.x, then join the new nodes to that cluster, migrate the VMs and re-install the "old" nodes.
But I would...
Do you have a valid subscription? If not, buy one (https://www.proxmox.com/en/proxmox-ve/pricing). Either way, install the subscription key in Proxmox VE and then try again.
If you don't want a subscription yet, disable the enterprise...
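For reference, disabling the enterprise repository usually comes down to commenting it out and, if you still want updates, adding the no-subscription repository instead. A sketch, assuming Proxmox VE 4.x on Debian jessie:

# comment out the enterprise repository
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# add the no-subscription repository and refresh
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update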
Currently we have a Proxmox VE 3.x cluster with 3 nodes (node1, node2 and node3). We want to move this cluster to Proxmox VE 4.x with the lowest possible downtime, and we don't want an in-place upgrade but a clean install. Since we also needed some more physical resources, we decided to buy 2...
Hmm, but do you think I have a problem right now, when the second switch becomes querier within a maximum of 60 seconds? As far as I can see I don't, because it seems to be no problem if Corosync loses multicast traffic for up to 90 seconds. That's the only thing I would like to be sure of, and if...
Each node is connected with 1x 10 Gbit (Twinax) to each switch (so 2x 10 Gbit in total per node), configured via bond-mode active-backup. The switches are connected to each other with 4x 10 Gbit LACP. The switches are Dell N4032Fs.
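For completeness, such an active-backup bond with a bridge on top would look roughly like this in /etc/network/interfaces (interface names and the address are hypothetical):

auto bond0
iface bond0 inet manual
    bond-slaves eth2 eth3
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.11
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0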
//Edit:
Some switch status information...
No, the second switch was not offline before (158 days of uptime). During the reload of the primary switch/querier I was logged in on the secondary switch and saw that it was in "Non-Querier mode" for almost 2 minutes during the reload. I don't think that's strange, because the...
Hi Thomas,
Thank you for your answer. I know Corosync will recognize a drop-out of multicast very quickly and give 'Retransmit errors' almost instantly. However, this does not affect running VMs at that time, because they keep running, as you said. But there is a maximum time for this, right...
I don't mean the switch values/settings, I know them. I mean how long Corosync on the Proxmox VE hosts may lose multicast traffic before quorum is lost.
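For anyone searching for the same number: the first knob involved is corosync's totem token timeout in corosync.conf (on Proxmox VE 4.x that's /etc/pve/corosync.conf). An excerpt as a sketch, with the corosync 2.x default; check corosync.conf(5) for your version:

totem {
    version: 2
    # milliseconds without token circulation before a membership change is declared;
    # corosync 2.x adds token_coefficient (650 ms) per node above two
    token: 1000
}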
Thomas, thank you for the additional information regarding Proxmox VE 4.0. One more thing about totem retransmit errors: recently I saw a setup with totem retransmit errors that, at first, I could not explain. After some more searching I saw the Proxmox VE hosts used bonding with balance-tlb...
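A quick way to check which mode a bond is actually running (bond0 is a placeholder):

grep "Bonding Mode" /proc/net/bonding/bond0

For active-backup this prints "Bonding Mode: fault-tolerance (active-backup)".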
Hello,
Since we've seen (and fixed) "corosync [TOTEM ] Retransmit List: XXXX" errors in /var/log/cluster/corosync.log on several Proxmox VE clusters, and information on the internet is not always very clear about the solution, I thought it was a good idea to share some information on how to...
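One commonly suggested remedy, whether or not it is the one described here, is making sure an IGMP querier is present on the cluster network. If no switch provides one, the Linux bridge on a node can act as one (vmbr0 is a placeholder):

echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier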
The problem seems to be 'solved'. I think it was caused by a multicast router that was not working properly. Since we fixed that, the problem has not returned.
Hello,
We use a Ceph RBD cluster for our storage with Proxmox. Everything works fine, except when I take a snapshot of a running VM (I select the VM, go to the 'Snapshot' tab, choose 'Take Snapshot', give it a name, check 'Include RAM' and hit the 'Take Snapshot' button). The problem is that the VM...
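The same snapshot, RAM included, can also be taken from the CLI, which may help to rule out the GUI; the VMID and snapshot name here are examples:

qm snapshot 100 pre-change --vmstate 1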
Hello,
We have a Proxmox VE cluster (3 nodes at the moment) and are using Ceph/RBD for our primary storage. We also have some NFS shares for non-critical data, for example for ISO images and backups (our Ceph cluster is fully SSD, so not ideal for storing ISO images and backups). When Ceph is...
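Such a share can be added as a storage restricted to exactly those content types; the server, export and storage name below are hypothetical:

pvesm add nfs nfs-backup --server 192.0.2.50 --export /srv/nfs/pve --content iso,backup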