Manage all nodes from one location - don't care about HA

jdrews

Jul 24, 2013
I'd like to manage all my nodes from one location (e.g. https://10.20.30.40:8006).

I know that if I clustered all my nodes I could do that. However I'd rather not cluster these nodes as it increases complexity, and gives me more work as I'd have to move all my VMs around so I can clear out the nodes before adding them to the cluster. These nodes are used in a test environment, where high availability is not needed.

Is there any way to manage these nodes from one location without having to cluster?
 
Ah. There might be some confusion here. I have more than one physical server; I have 7. But they're all used in a development/test environment, where high availability is not needed. Just to make sure we're all on the same page: as I understand it, Proxmox uses "node" to mean a physical machine capable of hosting VMs.

I want to be able to stop, start, clone, create, and delete VMs without having to log into 7 different webpages. Perhaps there's some way with the use of the JSON API?
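For what it's worth, every Proxmox VE node does expose the full JSON API on port 8006, so one could loop over all 7 hosts from a single script. Below is a rough sketch of how the endpoint URLs are built; the host, node name, and VMID are just the example values from this thread, not anything real:

```shell
# Build a Proxmox VE API endpoint for a VM action.
# host/node/vmid are placeholder values from this thread.
host="10.20.30.40"
node="vhost7"
vmid=101

base="https://${host}:8006/api2/json"
start_url="${base}/nodes/${node}/qemu/${vmid}/status/start"
echo "$start_url"

# An actual call needs an auth ticket first, roughly (untested sketch):
#   curl -k -d "username=root@pam&password=..." "${base}/access/ticket"
# then a POST using the returned cookie and CSRF token:
#   curl -k -b "PVEAuthCookie=$ticket" \
#        -H "CSRFPreventionToken: $csrf" -X POST "$start_url"
```

The catch: without a cluster, each node only answers for itself, so a script would have to authenticate against and iterate over all 7 hosts separately.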
 
Why don't you use a cluster without HA? Clustering really doesn't add much complexity.

Sent from my Galaxy Nexus
 
Thanks for your suggestions!

I know that if I clustered these nodes I could control them from any node. I'm coming from a XenServer environment where getting control of all nodes was as easy as typing in an IP into a textbox on client software. I'm reluctant to start making these nodes aware of each other. I'll outline my fears below:

1. Clustering means I'll have to move and rebuild 200+ VMs, since their VMIDs overlap across nodes (e.g., there are 7 VMs with the VMID 101) and VMIDs must be unique cluster-wide.

2. Clustering means I'll have to rethink the network setup. As it stands, I'm using vmbr1 with NAT masquerading to create an internal network on each node, which is where the VMs live. We structure our VMs so that the VMID is the last octet of the vmbr1 address: e.g., the VM with VMID 101 on vhost7 has the IP 10.5.7.101. This makes VM IPs easy to remember.

Below is an example of my /etc/network/interfaces on each node.
Code:
root@vhost7:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.100.2.117
        netmask 255.255.255.0
        gateway 10.100.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0


auto vmbr1
iface vmbr1 inet static
        address  10.5.7.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.5.7.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.5.7.0/24' -o vmbr0 -j MASQUERADE

Then I've injected static routes into our primary gateway to get to these VMs.
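On a Linux gateway, those injected routes would look roughly like this (whether the gateway is a Linux box is my assumption; a dedicated router would use its own static-route syntax):

```shell
# On the primary gateway: route each node's internal /24 via that
# node's vmbr0 address, e.g. for vhost7's 10.5.7.0/24 network:
ip route add 10.5.7.0/24 via 10.100.2.117
```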

You may ask why I have both vmbr0 and vmbr1. Occasionally I need to put a VM on the node's own network; in that case I bridge the VM's NIC to vmbr0, and can then assign it an IP on the 10.100.2.0/24 network.
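The addressing convention from point 2 (10.5.&lt;node&gt;.&lt;VMID&gt;) can be expressed mechanically; the node number and VMID below are just the example values from this thread:

```shell
# Derive a VM's internal IP from the node number and VMID,
# following the 10.5.<node>.<vmid> convention described above.
node=7
vmid=101
vm_ip="10.5.${node}.${vmid}"
echo "$vm_ip"   # 10.5.7.101
```

Note this only works while VMIDs stay below 255 per node, which is another way the overlapping-VMID scheme is baked into the network design.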

3. Clustering means I'll have to deal with split brain problems, fencing issues, quorum problems, update issues, taking nodes into maintenance (for good, for power cycling, for moving to a different network for testing with clients). All these headaches for something as simple as managing from one page. Probably not worth it.

You may also ask why I didn't just cluster in the beginning. Two reasons: we started out with only one machine, and I'm a noob when it comes to Proxmox ;)

I'd love to hear your solutions to these problems! Let me know what you think. Thanks!
 
1. Clustering means I'll have to move and rebuild 200+ VMs, since their VMIDs overlap across nodes (e.g., there are 7 VMs with the VMID 101) and VMIDs must be unique cluster-wide.

Nothing you can do about that; VMIDs must be unique within a cluster. Backup and restore works fine, though; just renumber the VMs when you restore.
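As a sketch, the renumber-on-restore flow with the stock tools would look something like this (the backup directory, storage name, and new-VMID scheme are placeholders of mine, not anything prescribed):

```shell
# On the old node: back up VM 101 with vzdump, the stock PVE backup tool.
vzdump 101 --dumpdir /mnt/backups --mode stop

# Copy the archive to the target node, then restore it under a new,
# cluster-unique VMID (e.g. 7101 for "old 101 on vhost7"):
qmrestore /mnt/backups/vzdump-qemu-101-*.vma 7101 --storage local
```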

2. Clustering means I'll have to rethink the network setup. As it stands, I'm using vmbr1 with NAT masquerading to create an internal network on each node, which is where the VMs live. We structure our VMs so that the VMID is the last octet of the vmbr1 address: e.g., the VM with VMID 101 on vhost7 has the IP 10.5.7.101. This makes VM IPs easy to remember.


If you're not worried about HA (and about moving VMs around), this isn't such a critical issue. Clustering won't break your network configs as long as you don't move VMs between nodes and all the hardware nodes can see each other.

But you're right that you'll want to rethink your network config if you ever want to migrate VMs between nodes (something that doesn't require the HA parts at all).
3. Clustering means I'll have to deal with split brain problems, fencing issues, quorum problems, update issues, taking nodes into maintenance (for good, for power cycling, for moving to a different network for testing with clients). All these headaches for something as simple as managing from one page. Probably not worth it.

You only have to worry about split brain, quorum, and fencing if you want HA functionality; you can reasonably safely ignore them otherwise. If the cluster can't get quorum (unlikely with 7 nodes), all that happens is that it stops allowing VM requests (e.g. boot/shutdown) until quorum returns. With an odd number of nodes, split brain is also less likely, because one side should generally hold a majority and keep quorum.
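The quorum arithmetic behind that is just majority voting, one vote per node; a quick sketch:

```shell
# Corosync-style quorum: a partition needs a strict majority of votes.
nodes=7
quorum=$(( nodes / 2 + 1 ))
echo "A ${nodes}-node cluster stays quorate with at least ${quorum} nodes"
# With 7 nodes, up to 3 can be down or partitioned away and the
# remaining 4 still form a majority.
```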

You can take a node into maintenance without migrating its VMs away (e.g. for a reboot), as long as you're happy for those VMs to be offline for the duration. That said, a cluster does make longer maintenance easier, because migrating VMs from one node to another becomes simple.

You may also ask why I didn't just cluster in the beginning. Two reasons: we started out with only one machine, and I'm a noob when it comes to Proxmox ;)


We all have to start somewhere, I guess. I recently had the same issues with a production cluster: I started with one node, and before I knew it I had 3, and had to deal with VMID renumbering and getting those 3 servers plus 1 new/spare (with running production VMs) into a 4-node cluster. I'm not worrying about HA at this stage (that's the next step), but central maintenance and ease of management/migration were big wins.
 
