Secondary virtual machines

altayaltan

New Member
Aug 4, 2020
Hello there,

I was wondering: is it possible to configure at least two virtual machines doing the exact same job, where the machines don't have to share the workload? One might just stay idle and become active when the other is down for maintenance, an OS upgrade, etc., so you'd have zero downtime and could do your maintenance one node at a time while the substitute machine takes over.

I'm not talking about the cluster architecture, by the way. What I mean, for example: I have a 3-node cluster environment with multiple VMs running inside it, and the two particular VMs are, let's say, VM101 (main process) and VM201 (substitute), which sit on node1 and node2 respectively and have the exact same configuration.

Thanks in advance.
 
This would require clustering on the application level, i.e. the application running within the VM needs to be able to handle such failover scenarios itself; the hypervisor is not responsible for what runs inside the VMs.

There does exist the so-called QEMU COLO mode (COarse-grained LOck-stepping), which runs two VMs perfectly in sync with each other, but that is not yet production ready and also not integrated in PVE (and probably won't be for quite a while, if ever).
 

Well, as long as the two VMs do not need to share a single database or other unique resources (e.g. active sessions) - why not?

I am running two identical but independent instances of Pi-hole (https://pi-hole.net/) on two independent hosts. Of course they have independent IP addresses, hostnames, etc. Both are fully functional and running at the same time; either could be used on its own at any time - but that is not the goal.

I wanted to mainly use one specific instance and automatically switch to the other one if there are problems.

For this reason there is a third IP address, a "floating" one, used to implement the desired automatic switch-over. All clients know only this third address.

This magic comes with keepalived (https://keepalived.org/). The remaining instance will take over the mentioned floating IP address if the main instance dies or is shut down. Clients using Pi-hole as their DNS server will not notice the outage (after a few seconds of delay, as the switches need to associate the "new" MAC address with this IP address).

Best regards :)
 

So did you have an instance where you put this configuration to test? Is it working as intended?
 

Well, I am using this actively at home - so yes, it works as intended.

All it needs is this configuration in /etc/keepalived/keepalived.conf:
Code:
vrrp_instance Instance0 {
   interface ens3                # interface to use
   state MASTER
   virtual_router_id 51          # VRRP router ID
   priority 150                  # master prio 150, backup prio 100
   virtual_ipaddress {
       10.2.2.130/16             # virtual failover IP address
   }
}
...and while it actually works, it is a not-so-good example: it does not check for a failed or unreachable DNS server, it only checks basic reachability, like a "ping". (This means that if the Pi-hole process is not running but the VM itself is up, this simple approach fails.)
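
If one wanted to close that gap, keepalived's vrrp_script / track_script mechanism could be used so that the failover also happens when the DNS process itself stops answering. Here is a minimal sketch, not a tested drop-in: the dig-based check is an assumption (dig from dnsutils has to be installed inside the VM), and the queried name is arbitrary - any command that exits non-zero when the resolver is broken would do.
Code:
vrrp_script chk_dns {
   script "/usr/bin/dig @127.0.0.1 pi.hole +time=2 +tries=1"   # exits non-zero when the local resolver gives no reply
   interval 5                   # run the check every 5 seconds
   fall 2                       # 2 failures in a row -> check counts as failed
   rise 2                       # 2 successes in a row -> check counts as OK again
}

vrrp_instance Instance0 {
   interface ens3
   state MASTER
   virtual_router_id 51
   priority 150
   virtual_ipaddress {
       10.2.2.130/16
   }
   track_script {
       chk_dns                  # instance enters FAULT and the backup takes over while the check fails
   }
}
With something like this, the floating IP moves not only when the whole VM is gone, but also when only the Pi-hole service inside it has died.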
 

Looks pretty cool, I'll try to check it out. Thanks a lot, much appreciated. :)
 
There does exist the so-called QEMU COLO mode (COarse-grained LOck-stepping), which runs two VMs perfectly in sync with each other, but that is not yet production ready and also not integrated in PVE (and probably won't be for quite a while, if ever).

Only as a side note:
We - as the IT department - were also asked about this a while ago, and why we are not using it. PVE does not support it (yet), and we had to get quotes from VMware; their COLO-like implementation supports only a few VMs (2-4) per host for this kind of HA, and the costs are horrendously high. After presenting the numbers, we shifted to a more application-centric HA environment and changed our software stack so that we could use application-level HA, which was in our case much cheaper.

For most software projects there exists a very simple HA failover technique that can be used, but the software has to be horizontally scalable. The easiest version is the keepalived approach described above. There is also Traefik, for HTTP and nowadays also plain TCP, so that you can build an HA ingress router on multiple incoming IPs that takes care of the failover for you, e.g. if you have a good Docker or Kubernetes environment - see the sketch below.
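
To make the Traefik part a bit more concrete, here is a minimal sketch of a dynamic configuration for Traefik's file provider that spreads one hostname over two backend VMs and takes a dead backend out of rotation via health checks. The hostname, the backend addresses/port and the /health endpoint are made-up assumptions for illustration:
Code:
# dynamic configuration for Traefik's file provider (illustrative sketch)
http:
  routers:
    app:
      rule: "Host(`app.example.com`)"      # hypothetical hostname
      service: app-backends
  services:
    app-backends:
      loadBalancer:
        healthCheck:
          path: /health                    # assumed health endpoint on the backends
          interval: "5s"
        servers:
          - url: "http://10.2.2.101:8080"  # hypothetical backend VM 1
          - url: "http://10.2.2.102:8080"  # hypothetical backend VM 2
Running two such Traefik instances and putting a keepalived floating IP (or multiple DNS records) in front of them keeps the ingress layer itself from becoming a single point of failure.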
 
