Create cluster with existing OpenVZ CT on node?

Jan 26, 2011
So, I've got two Proxmox machines that I want to turn into a cluster to make it easy to migrate OpenVZ containers between them. At the moment, one machine has no containers and the other has only one.

According to the documentation (http://pve.proxmox.com/wiki/Proxmox_VE_Cluster#Delete_and_recreate_a_cluster_configuration), it looks like I have to use vzdump to take the one container offline, build the cluster, and then restore the container after the cluster is created.
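
For reference, my reading of that procedure boils down to something like the following (the VMID 101 and the dump path are just examples, not my actual setup):

# on the node that currently hosts the container (example VMID 101)
vzdump --suspend 101
vzctl stop 101
# ... create the cluster ...
# then restore from the archive vzdump produced (the exact filename depends on the vzdump version)
vzrestore /var/lib/vz/dump/vzdump-101.tgz 101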

However, since there is only one container, there are no conflicts to worry about... will it allow me to create the cluster without taking the container offline?

Thanks,

Curtis
 
Should work if the node with the container is master.

Thanks for your reply.

OK, assuming I successfully create the cluster using the server with the container as the master, would I then be able to change which server is the master?

Curtis
 
Should work if the node with the container is master.

After a few hours of digging to figure out why, as you stated, the node with the container on it had to be the master, I decided that perhaps you had just missed that the node I wanted as master was empty, so there was no chance of VMID conflicts.

And, since I really didn't want to make that node the master of the cluster, I went ahead and made the other server the master. It worked fine since there were no conflicting VMIDs.
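
For the record, this is roughly what I ran, if I remember the syntax correctly (the IP address is just a placeholder for my master):

# on the empty node, which I made the master
pveca -c
# on the node holding the container, join it to the master
pveca -a -h 192.168.1.10
# verify the cluster status
pveca -l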

Or was there some other reason for doing it the other way? :-)

Thanks,

Curtis
 
I was simply not sure whether there are consistency checks that would prevent you from doing it - obviously there are not.


Very good. Oh, and I tried the live migration feature... it worked flawlessly. Proxmox is awesome! I am converting over from Citrix XenServer... Proxmox + OpenVZ has a significant edge in my opinion.
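
(I did the migration from the web interface; the command-line equivalent would be OpenVZ's own vzmigrate, something like the line below - the hostname and VMID are placeholders.)

# live-migrate container 101 to the other node
vzmigrate --online node2.example.com 101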

Curtis
 
isparks_curtis

I build and destroy my ProxMox cluster very often. In fact, after each new ProxMox release. The only limitation I have seen is that all VMIDs on every hardware node must be unique. Every time, I have been able to reconnect my cluster after a ProxMox version upgrade, regardless of where the VMs were placed: on the master or on a slave.
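
A quick way to check for VMID clashes before reconnecting nodes is to compare the container lists (standard OpenVZ vzlist, IDs only):

# run on each hardware node and compare the output
vzlist -a -H -o ctid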

I am very interested in your experience with Citrix XenServer. Could you start another topic, something like "Citrix XenServer in comparison with ProxMox"? In my humble opinion, based on my personal experience, you will sometimes find that OpenVZ is not a gift at all; it can be a real plague in certain situations.
 
isparks_curtis

I build and destroy my ProxMox cluster very often. In fact, after each new ProxMox release...

Thanks for confirming what I thought to be true about using unique VMIDs. I posted the thread you requested here:

http://forum.proxmox.com/threads/5950-Proxmox-OpenVZ-vs-Citrix-XenServer?p=33723

I'd love to hear about the problems you've had with OpenVZ so that I can watch out for them.

Thanks again,

Curtis
 
isparks_curtis

I build and destroy my ProxMox cluster very often. In fact, after each new ProxMox release...
Hi,
is this for testing? Because I never break my cluster during pve updates. OK, mixed versions in a cluster are not recommended, but I have never had problems in the short time until all nodes are on the same level, and during that time no data changes in the cluster (no new ISOs, new VMs, new storage).
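
For what it's worth, comparing the package versions on each node shows quickly whether they are all on the same level:

# run on every node and compare
pveversion -v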

Udo
 
is this for testing?
Half-and-half.
I have only two boxes I can run ProxMox on, and I am always afraid of upgrading a running production system; experience tells me that this is the right approach. So, first of all, I move all my VEs to one of the boxes. Then I destroy the cluster and upgrade the other node. For a while I watch its behaviour. If everything is all right, I begin to move some of the VEs back one by one, using the vzdump/vzrestore tools. At the same time I take backups. After migrating all the VEs to the upgraded machine, I watch them again. Some time later, I upgrade the remaining box and restore the cluster.
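
Per container, the move is roughly this (VMID 101, the paths and the hostname are just examples):

# on the old box: dump the container
vzdump --suspend 101
# copy the dump to the upgraded box (the exact filename depends on the vzdump version)
scp /var/lib/vz/dump/vzdump-101.tgz newbox:/var/lib/vz/dump/
# on the upgraded box: restore and start it
vzrestore /var/lib/vz/dump/vzdump-101.tgz 101
vzctl start 101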

Too many times I have found evidence that performing upgrades this way was the right call, because "fresh" ProxMox releases have turned out to be buggy too often.
 
