Feedback requested for guide on configuring a Proxmox 2.0 cluster over OpenVPN

ned14

New Member
Feb 24, 2012
Hi,

First time post. Seeing as Proxmox 2.x loses a valuable feature of 1.9 (the ability to join nodes on a local network onto a public master), I have written a guide on how to achieve something similar using a private OpenVPN subnet as the IP multicast transport. This lets you "stitch together" discontinuous Proxmox 2.0 nodes which may live very far from one another - in the example in the guide, one node even lives behind a NATed ADSL home connection (yes, it works, though timeouts are a problem due to 160ms round-trip latencies).

http://www.nedproductions.biz/wiki/configuring-a-proxmox-ve-2.x-cluster-running-over-an-openvpn-intranet
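For the impatient, the OpenVPN server config at the heart of it looks roughly like the sketch below. It is simplified, and the addresses, port and file names are placeholders rather than what the guide actually uses; the key point is a layer-2 tap device with client-to-client relaying so that corosync's multicast traffic reaches every node.

# /etc/openvpn/server.conf (sketch only; placeholders throughout)
port 1194
proto udp
dev tap0                  # layer 2, so multicast/broadcast frames can pass
server 10.xxx.xxx.0 255.255.255.0
client-to-client          # relay traffic (including multicast) between the clients
keepalive 10 60           # helps on the high-latency ADSL link
persist-key
persist-tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh1024.pem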


It's obviously a work in progress - I still need to add a section on configuring haproxy to expose services from the VMs living inside the OpenVPN private subnet on the public server's public network interface, thus enabling "poor man's" failover redundancy using cheap dedicated servers, e.g. OVH Kimsufis. However, as a system for transparently keeping backups and syncing between local and public nodes, it works as-is!

Please do let me know what you think of the guide and if there are any improvements I could make. Oh and my thanks to the Proxmox VE team for such a great release - Proxmox VE 2.0 has worked far better for our needs than Ubuntu's Cloud Infrastructure.

Thanks,
Niall
 
Thanks ned14 for the wonderful stuff you wrote. Your writing is very comprehensive and fun to read. :-)

I shall try it out and let you know.

However, I would love to read two more articles from you on how you achieved "failover redundancy between containers" and also "OpenVPN failover backups"!

Keep up the good work!

 
Thanks!

I may not go that far, as that is really the responsibility of the Proxmox wiki and documentation. However, I may explain how to get haproxy to switch which VM is used to answer a request should the main one fail. I won't go into detail on haproxy - there are plenty of better haproxy docs on the internet than I could write. But I will show how to use all the nodes which appear on the OpenVPN as a pool for serving requests.
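To give a flavour of what I mean, the relevant part of haproxy.cfg would look something like the sketch below. The names and addresses are made up for illustration, not taken from my setup: the first server is a VM on the public node, the second a VM behind the ADSL NAT which haproxy only uses if the first one's health check fails.

# /etc/haproxy/haproxy.cfg (sketch only)
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

listen vmpool
    bind *:80                                       # the public node's public interface
    option httpchk GET /
    balance roundrobin
    server vm-public 10.xxx.xxx.11:80 check
    server vm-home   10.xxx.xxx.12:80 check backup  # only used when the first server is down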

Regarding backup, surely one simply configures a "backup" storage in LVM and sets it to replicate to the backup node(s). You then add a schedule to snapshot the relevant nodes for backup.

It's not dissimilar for "poor man's failover". Just have a schedule perform a snapshot every five minutes or so, and have those snapshots replicate across the node pool.
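Purely as an illustration (the container ID, storage name, paths and VPN address below are made up, and rsync is just one way of shipping the dumps across the VPN), the sort of thing I have in mind is:

# /etc/cron.d/pve-backup (sketch only)
# nightly snapshot backup of container 101 to the "backup" storage
0 1 * * * root vzdump 101 --mode snapshot --storage backup
# push the resulting dumps to the other node over the OpenVPN subnet
# (adjust the path to wherever the "backup" storage actually lives)
30 1 * * * root rsync -a /var/lib/vz/dump/ root@10.xxx.xxx.11:/var/lib/vz/dump/
# for the "poor man's failover" variant, simply tighten the schedule, e.g. */5 * * * *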

I'm sure that once 2.0 final gets released, lots more docs on this will appear. Corosync could do with a lot more docs too, I agree.

HTH,
Niall
 
Great work!
I have thought about such a solution for a long time. Now I've made it work!
Well, I hoped to read more about chapter 6, because I am new to Proxmox/OpenVZ. For the last few years I have worked with Xen and no GUI.

Thanks a lot and it would be nice to read more!
 

Thanks for the feedback and I'm really glad to know it's useful.

In short, section 6 will be about configuring haproxy and section 7 onwards about self-backing up storage, as I want to do something a bit unique with DRBD to solve the split brain problem.

My apologies for taking so long; I only get to work on Proxmox after I finish work each day. Some days I am too tired, or have other commitments, so it's been slow going recently.

Niall
 
Hi Niall!

I know, i know... the time. :-)
One question for my understanding: what kind of networking do you use for your VMs, routed (venet) or bridged (vmbr0)? I think the latter?
Andy
 

The public node is actually an OVH Kimsufi 2G, so it certainly has no KVM support and can only run OpenVZ.

For the OpenVZ nodes I've been simply setting an IP address, so it's routed (venet). Sorry, I thought that was obvious from the guide. I'll just go quickly fix it right now.
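For the record, it's nothing more exotic than the usual venet assignment, e.g. (the container ID and address here are just examples):

# give a container a routed (venet) address inside the OpenVPN subnet
vzctl set 101 --ipadd 10.xxx.xxx.101 --save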

Niall
 
Hi Niall,

I have a quick question.

I set up 2 servers according to your guide, but it looks like I'm having a hard time convincing Proxmox to use the OpenVPN network.

If I start to create a cluster it wants to bind to the main eth0 device ...

pvecm status
Version: 6.2.0
Config Version: 1
Cluster Name: fhcluster
Cluster Id: 52788
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: jupiter
Node ID: 1
Multicast addresses: 239.192.206.3
Node addresses: 78.47.xxx.xxx

As far as I understand it, the node address should be 10.1.xxx.xxx or something like that?

Thanks a lot -

Oliver
 

First page, section 4 says:

Next step is to make Proxmox run the cluster from 10.xxx.xxx.10 instead of the default IP for the machine. How to do this isn't well documented, but it's very easy. Open /etc/hosts and have something like the following:

127.0.0.1 localhost.localdomain localhost
10.xxx.xxx.10 milla.vpn.local milla pvelocalhost
192.168.2.2 milla.nedland milla

What you're doing is to change the default IP to 10.xxx.xxx.10 and to set pvelocalhost as an alias for it. You can reboot now.


If your pve is binding to eth0 even with the /etc/hosts reordered and reconfigured as above, you must have something wrong with your dummy network devices. They ought to come up before even eth0 does, so pve can bind to them.
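For comparison, the sort of stanza I mean looks something like this (the interface name and address are placeholders - the guide has the exact version). Put it above eth0 in /etc/network/interfaces so it comes up first:

# /etc/network/interfaces (excerpt, sketch only)
auto dummy0
iface dummy0 inet static
    pre-up modprobe dummy      # make sure the dummy driver is loaded
    address 10.xxx.xxx.10
    netmask 255.255.255.0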

HTH,
Niall

 
Hi Niall,

so the order does matter ... duh ;-)

So if I have something like this:

127.0.0.1 localhost.localdomain localhost
78.47.xx.xx jupiter
10.113.xx.xx jupiter.vpn.local jupiter pvelocalhost

I need to change it to put the real IP at the bottom.
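So presumably something like this, keeping my placeholders:

127.0.0.1 localhost.localdomain localhost
10.113.xx.xx jupiter.vpn.local jupiter pvelocalhost
78.47.xx.xx jupiter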


Gotcha.

I will try that out!

Thanks a lot.

Oliver
 
Thanks Niall. That was it indeed.

Sometimes it's the little things ;-)

I agree that it's a bit of a blunt instrument for changing the "primary" IP of a machine, but I didn't find any other way of getting PVE to bind to a specific interface. The docs on PVE 2.0 are still being written, that's for sure!

Glad it worked!

Niall
 
