Migration from 2 node Proxmox 3.4 cluster to 5.2

TrevorJ

New Member
Mar 18, 2013
Dear All,

I have two Proxmox 3.4 clusters in different racks (at different DCs), each running a two-node HA setup with a third quorum disk per cluster. Both clusters also run DRBD between their two Proxmox nodes.

I want to migrate everything to Proxmox 5.2 and do away with the need for DRBD.

I've noticed a few issues with the new setup and would appreciate some input.

1) There does not appear to be a quorum disk / mkqdisk option in the latest Proxmox 5.2 / Corosync build. I've seen references to people using a Raspberry Pi or an Intel NUC as a third node. Is there a simpler solution?

If I really must have a fully functional third node for each cluster, can I add the nodes from the other cluster over an OpenVPN tunnel? In that case, what happens if I lose a rack housing two of the nodes? Will HA still work correctly?

2) I want to use lvm-thin storage on each node and rely on replication/HA, but in testing I have found that only the Migrate option in Proxmox can move a VM from one server to another when the storage is local (such as lvm-thin). If I use the Clone option with another server as the target, the target storage has to be shared. Currently the workaround seems to be to migrate the VM to the other server, clone it there, then migrate it back. Does anyone have a simpler solution?
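For reference, the migrate-clone-migrate workaround can also be done from the CLI with qm; the VMIDs and node names below are placeholders:

```shell
# Move VM 100 to the other node (live, if possible)
qm migrate 100 pve2 --online
# Full clone onto that node's local lvm-thin storage
qm clone 100 101 --full
# Move the original VM back to its home node
qm migrate 100 pve1 --online
```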

Your input is most gratefully received.
 
1) There does not appear to be a quorum disk / mkqdisk option in the latest Proxmox 5.2 / Corosync build. I've seen references to people using a Raspberry Pi or an Intel NUC as a third node. Is there a simpler solution?
The quorum disk (qdisk) was removed as of corosync protocol version 2.x.

2) I want to use lvm-thin storage on each node and rely on replication/HA, but in testing I have found that only the Migrate option in Proxmox can move a VM from one server to another when the storage is local (such as lvm-thin). If I use the Clone option with another server as the target, the target storage has to be shared. Currently the workaround seems to be to migrate the VM to the other server, clone it there, then migrate it back. Does anyone have a simpler solution?
At the moment, replication only works with ZFS.
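For VMs whose disks live on a ZFS (zfspool) storage, a replication job can be created in the GUI or with pvesr; a minimal sketch, where the VMID, job number, and target node name are placeholders:

```shell
# Replicate VM 100 to node pve2 every 15 minutes.
# The job ID has the form <vmid>-<job-number>.
pvesr create-local-job 100-0 pve2 --schedule "*/15"
# Check the state of all replication jobs
pvesr status
```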
 
The quorum disk (qdisk) was removed as of corosync protocol version 2.x.


At the moment, replication only works with ZFS.

Hi Wolfgang,

I did not realise you must use ZFS storage for replication; lvm-thin is a much better fit for my use case.

This explains the following error message when trying to set up replication: "missing replicate feature on volume 'local-lvm:vm-100-disk-1' (500)"

In order to use ZFS-based storage, it looks like I will need additional storage such as a FreeNAS server.

Is there a solution where I can keep local lvm-thin storage and overlay ZFS on it to provide replication?

Also, could you answer my question on how to provision a two-node HA cluster?

Many Thanks
 
In order to use ZFS-based storage, it looks like I will need additional storage such as a FreeNAS server.
No, you need ZFS on the PVE host system.
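To expand on that: ZFS is set up directly on the PVE node and registered as a zfspool storage. A hedged sketch, where the pool name and (especially) the device name are placeholders:

```shell
# Create a ZFS pool on a spare disk
# (CAUTION: /dev/sdb is a placeholder -- this destroys the disk's contents)
zpool create -o ashift=12 tank /dev/sdb
# Register the pool as a zfspool storage usable for VM disks and containers
pvesm add zfspool local-zfs --pool tank --content images,rootdir
```

VM disks placed on this storage then have the "replicate" feature that the error message above complained about.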

Is there a solution where I can keep local lvm-thin storage and overlay ZFS on it to provide replication?
Yes, but this brings no benefit: it costs the same resources and adds complexity, which is not preferable in an error case.

Also, could you answer my question on how to provision a two-node HA cluster?
We do not support any two-node HA solution, but you can have a look at corosync-qdevice.
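A corosync-qdevice setup runs corosync-qnetd on a third machine (which can be small, e.g. the Raspberry Pi or NUC mentioned above, reachable over the VPN) and corosync-qdevice on both cluster nodes. The quorum section of corosync.conf then looks roughly like this sketch, where the host address is a placeholder:

```
quorum {
  provider: corosync_votequorum
  device {
    model: net
    votes: 1
    net {
      host: 192.0.2.10
      algorithm: ffsplit
      tls: on
    }
  }
}
```

With the ffsplit algorithm, the qnetd server gives its vote to exactly one half of a split two-node cluster, so one node keeps quorum and HA can recover.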
 
