You will need to create a cluster by freeing up one server at a time (move all of its containers to an existing cluster member) and adding it to the cluster.
Live migration works fine between cluster members with minimal downtime but no HA.
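In case it helps, the rough per-node sequence looks like this (the cluster name and IP are just placeholders):

pvecm create mycluster      # on the first node you freed up
pvecm add 192.168.1.10      # on each further node, pointing at a node that is already in the cluster
pvecm status                # check membership and quorum afterwards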
I actually fixed this!
/etc/init.d/pve-cluster stop
On one of the machines I KILLED all of the cman processes (dlm_controld and fenced); on the others I shut it down with /etc/init.d/cman stop
At that point all the other machines got really happy and came back online and synced!!!!
That machine needed a...
did /etc/init.d/pve-cluster restart
pvecm e 1
pmxcfs --local
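(For anyone following along: as far as I understand it, pvecm e is the expected-votes command, so pvecm e 1 forces the expected vote count down to 1 and lets a lone node regain quorum so /etc/pve becomes writable again, while pmxcfs --local starts the cluster filesystem in local mode. You can check the result with pvecm status.)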
I can log in to the web UI now, BUT if I try to remove that old node I get "cluster not ready - no quorum"
Even though all the machines show:
Version: 6.2.0
Config Version: 20
Cluster Name: luster
Cluster Id: 34356
Cluster Member: Yes...
I removed 1 machine from the cluster and now all the other machines are logging cp_send_message failed
I also tried pvecm e 4
no effect?
I am using different versions of Proxmox on the machines:
3 are 2.3
2 are 3.0
I removed one of the 3.0 ones.
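If anyone hits the same thing: once the remaining nodes have quorum (or you force it), removing the dead node is supposed to be just this (the node name is a placeholder, use whatever pvecm nodes shows):

pvecm expected 1          # only needed while the cluster is stuck without quorum
pvecm delnode nodename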
I've been a user of PVE since v2 and at first I was taken aback by the change, but after reading the truth behind it, I think it's a great move and I hope Proxmox benefits.
This is the strategy:
Business people see "unsupported/not for production" etc... and they panic, and you tell them it's...
I was doing live migration between a NEWER E5-2620 and an E5472-based CPU and was running into the XSAVE issue, which I fixed via the noxsave kernel boot parameter.
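For reference, setting it is just the usual Debian grub edit plus a reboot:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet noxsave"
# then regenerate the grub config
update-grub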
I am migrating between a regular ext3 file system --> a bind mount of an ext3/iSCSI FS folder on top of a local ext3 FS. I am doing this...
This would be a great feature. Right now I just rsync them between storage locations with the same options Proxmox uses when you do live migrations, and then I modify the container's conf file, but this breaks the quotas....
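Roughly what that looks like, with a made-up CTID and paths (the rsync flags are from memory, check what the migration code actually passes):

rsync -aH --delete --numeric-ids /var/lib/vz/private/101/ /data/containers/private/101/
# then point the container at the new location in /etc/pve/openvz/101.conf:
VE_PRIVATE="/data/containers/private/101"
# the quota file is now stale; something like vzquota drop 101 before the next start forces a recalculation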
In retrospect it's kind of a silly question, because these are kernel parameters, so you obviously have to modify the host node's parameters for them to impact the containers.
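For example, something like this on the host node (the parameter is just the classic OpenVZ example, not necessarily the one in question):

sysctl -w net.ipv4.ip_forward=1                        # on the hardware node; venet networking for the containers needs it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf     # persist it across reboots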
No, only in the case of failure might I create a new storage entry on one of the surviving servers, mount the failed server's LUN there, and modify the container conf files to point to that new storage (there is a sketch of this after the list below).
I have 4 Proxmox servers:
1. Each of them has a different LUN of 100GB mounted to /data/containers
2. Each of them hosts different containers
3. In the event of migration, I use the regular method and there is an iSCSI --> host --> iSCSI data transfer (which is not optimal but works)
4. In...
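Roughly what the failure scenario mentioned above would look like, with placeholder target, portal, device and CTID:

iscsiadm -m node -T iqn.2013-01.example:failed-lun -p 192.168.1.50 --login   # attach the dead node's LUN on a surviving node
mount /dev/sdX /data/containers-failed                                       # whatever device shows up
# then, for each affected container, edit /etc/pve/openvz/<CTID>.conf:
VE_PRIVATE="/data/containers-failed/private/<CTID>"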
Hi,
So I've been experimenting with the clustering, iSCSI, VLANs, bonding, etc... anyway... Now I've run into an issue where I am stuck in some sort of clustering limbo. I don't mind uninstalling all the Proxmox-related packages, but I'd prefer not to have to wipe and reinstall the server. Is it...