Managing multiple PVEs without any HA

rmweiss

New Member
Nov 18, 2022
Hi

What is the best way to manage two (or more) "independent" PVE servers in the same frontend?

I don't need any high availability, automatic failover or shared/synced storage.

If one server goes down, it should have no influence on the others (I should still be able to start/stop VMs on the running servers).

The same applies if I had (for example) 5 servers and only one was up: I should still be able to log in to the one running server, manage VMs on it, and see that the other 4 servers are currently down.

I would just like a centralized management interface that gives me an overview of all my PVEs and allows me to "manually" migrate VMs between the running ones.

Looking through this forum, I found:

# vim /etc/corosync/corosync.conf
quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}

But I don't know if this is (still) the right way to do it, or if there are better options.
 
this is currently not possible, but is being worked on. your (attempted) solution is very dangerous and can cause corruption.
 
Too bad, but thanks.

So should I just set up independent installations and migrate VMs by backup / restore via a network share?
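
For context, I imagine that would look something like the following with the standard tools (just a sketch - the VMID, share path, archive name and storage names are placeholders):

# on the source node: write a compressed snapshot-mode backup to the shared mount
vzdump 100 --mode snapshot --compress zstd --dumpdir /mnt/pve/share

# on the target node: restore the archive under a (possibly new) VMID onto local storage
qmrestore /mnt/pve/share/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm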
 
yes - but the (still experimental and somewhat limited) remote-migration support for migrating from one cluster to another (or from one single-node instance to another) was merged last week and is available on the CLI and API (not on the GUI though).

https://git.proxmox.com/?p=qemu-server.git;a=commit;h=192bbfda82f82ce828179e5601a9b1c50ac2821d

packages with that patch included will be available in the repositories soon; they are still in internal testing at the moment.
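
for anyone finding this later: the CLI entry point added by that commit is qm remote-migrate. a rough sketch of what an invocation could look like (the exact option and endpoint syntax may still change while it is in testing, and the VMID, API token, host, fingerprint, bridge and storage below are made-up placeholders):

# on the source node: push VM 100 to a node of another cluster / standalone host,
# addressed via its API endpoint and an API token instead of SSH
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,host=target.example.com,fingerprint=AA:BB:CC:...' \
  --target-bridge vmbr0 \
  --target-storage local-zfs \
  --online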
 
Will the remote-migration make use of "zfs send" too, so nodes with encrypted ZFS pools are still not supported?

What I would like to see is a way to bulk edit aliases/IP sets/security groups, or at least to sync the datastore.fw between datacenters. I migrate my guests between different standalone PVE hosts using a shared PBS. But since aliases/IP sets/security groups aren't part of the guest backups, yet are still required for the VMs to operate, it is very annoying to keep them in sync between datacenters. Right now I manually copy the datastore.fw between nodes from time to time so differences don't add up. Whenever I need to add or edit a guest, I have to log in to every server's web UI and repeat the same edits to aliases/IP sets/security groups for every datacenter.
 

as usual, it depends. for containers, yes. for VMs: yes if replication is used, yes if it's an offline migration, no if it's online and "just" the current disk state is migrated with no snapshots/replication involved. the underlying migration mechanism is pretty much the same, just with
- a websocket as the tunnel mechanism instead of SSH
- more mapping, since bridges/VMIDs/storages are not (or might not be) the same on both ends


that would be a nice convenience feature - possibly even just having a new, separate, synced firewall file containing groups/aliases/ipsets that are common across clusters.
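
in the meantime, the manual copy described above can at least be scripted; a minimal sketch, assuming the file in question is /etc/pve/firewall/cluster.fw on each standalone host and that key-based root SSH between them is acceptable (host names are placeholders):

#!/bin/bash
# push the local cluster-wide firewall definitions (aliases/IP sets/security groups)
# from this host to the other standalone PVE hosts, e.g. from a cron job
FW_FILE=/etc/pve/firewall/cluster.fw
for host in pve2.example.com pve3.example.com; do
    scp "$FW_FILE" "root@${host}:${FW_FILE}"
done

this is one-way and simply overwrites any per-host differences, so it is only a stop-gap until something like the shared file mentioned above exists.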
 
Yup, dedicated files for shared common groups/aliases/ipsets would be great. Should I add a feature request for that in the bug tracker?
Then I really hope there will be some upstream OpenZFS patches in the near future, so you can finally implement replication of encrypted zvols/datasets. :(
 
sure, why not. just add "remote migration" somewhere in the subject; at the moment it only makes sense in that context.
indeed. I think the unfortunate reason is that the original sponsors of the feature are no longer that active (or active at all) in OpenZFS, so it kind of lingers with the remaining complicated issues unsolved.
 