Storage requirement for offline migrations between cluster nodes?

Dunuin

Hi,

I was thinking about adding a second node, using a Raspberry Pi as a QDevice and creating a cluster. I don't need any HA here; it just would be nice to be able to offline migrate guests and to manage both nodes through the same web UI. I already have three networks set up between the hosts (a dedicated Gbit link for corosync, 10Gbit for migrations/NAS and Gbit for normal LAN/Internet communication). What I'm not sure about are two things:

1.) Guests are currently running on a host with an E5-2683 v4 (Broadwell-EP) with the CPU type set to "host". The other host has an older E3-1230 v3 (Haswell): https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=91766,75054
I guess I should change my VMs back to "kvm64" to not run into problems when offline migrating VMs between nodes? I'm not planning to migrate the VMs often, but it would be nice to be able to move the most important VMs from one node to the other in case I need to do maintenance on a server. Would it be possible at all to change the CPU type to "kvm64" later, or could that be problematic if already installed software was compiled with instruction sets in mind that would then be missing?
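For reference, switching the CPU type later would just be a config change that can also be reverted; a minimal sketch (100 is a placeholder VMID):
Code:
# switch an existing VM to the generic kvm64 model
qm set 100 --cpu kvm64
# revert to passing the host CPU through again later if wanted
qm set 100 --cpu host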

2.) I'm not sure if my storage will fit for offline migrations. I know that I would need identically named ZFS pools and identical VGs for LVM in case I wanted to use replication or live migration. But what are the storage requirements for offline migrations?
Both of my PVE nodes are heavily customized, running on top of Debian with full disk encryption. The node that I'm using now (the E5 one) has an mdadm RAID1 with LUKS-encrypted LVM for the PVE system, an encrypted ZFS pool for guests and a LUKS-encrypted LVM-Thin for guests.
The second node would just get a single encrypted ZFS mirror for the PVE system + guests, in case I get full system encryption with ZFS unlockable over SSH working. Otherwise I would need to set up an mdadm RAID1 with LUKS-encrypted LVM again (like on the other node) for the PVE system, plus an encrypted ZFS pool for guests. But I would really prefer to get rid of mdadm.
In the documentation I've read regarding LXC migration: "If it has local volumes or mount points defined, the migration will copy the content over the network to the target host if the same storage is defined there".
What does that actually mean? I have a lot of unprivileged LXCs with bind-mounts that bind-mount SMB shares mounted on the host. It wouldn't be great if the migration copied the contents of those SMB shares too. And what is the requirement for it to be the "same storage"? Just a non-shared storage with the same name?
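For context, those bind-mounts look roughly like this in the container configs (VMID and paths are made up):
Code:
# /etc/pve/lxc/101.conf (placeholder VMID and paths)
# bind-mount an SMB share that is mounted on the host into the container
mp0: /mnt/smb/media,mp=/media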

Or would it be more reliable to just stick with two unclustered hosts and move guests using restores from my PBS, which could then store backups of both unclustered nodes in different namespaces?
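As a sketch of what I have in mind there (storage ID, datastore, address, user and namespaces are all placeholders), each unclustered node would point at the same PBS datastore but with its own namespace in storage.cfg:
Code:
# /etc/pve/storage.cfg on node1 (placeholders throughout)
pbs: pbs-backup
        datastore tank
        server 192.0.2.10
        username backup@pbs
        fingerprint <fingerprint>
        namespace node1
        content backup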
 
I guess I should change my VMs back to "kvm64" to not run into problems when offline migrating VMs between nodes? I'm not planning to migrate the VMs often, but it would be nice to be able to move the most important VMs from one node to the other in case I need to do maintenance on a server. Would it be possible at all to change the CPU type to "kvm64" later, or could that be problematic if already installed software was compiled with instruction sets in mind that would then be missing?
For offline migration this should generally not be necessary, but in practice it depends on what runs inside the guests: if they expect features of one specific CPU, they will probably not work on the other (but also not with kvm64). Note that quite a bit of software is compiled in a way that uses modern instruction sets when available but can fall back to a more generic implementation.


As for 2.: no, the content of bind mounts should not be copied over, and you can give a '--target-storage' parameter to copy the volumes to a different storage, so you don't even need to have the same storages across nodes.
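A minimal sketch of that, assuming the parameter is passed like this (VMID, target node and storage name are placeholders):
Code:
# offline-migrate container 101 to node2, placing its volumes on a different storage
pct migrate 101 node2 --target-storage local-zfs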
 
What about offline migration and encrypted storages (native ZFS encryption for the ZFS pool and LUKS for the LVM-Thin) that are always unlocked? Is offline migration of zvols/datasets based on ZFS replication, where PVE doesn't support encrypted pools yet (as explained here: https://bugzilla.proxmox.com/show_bug.cgi?id=2350)?
 
Yes, offline migration will use the same mechanism as replication for ZFS-based volumes, with all the same limitations. If the encryption happens a layer below (like with LVM-Thin on top of dm-crypt), it should work, since PVE can just transfer the unencrypted data.
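If it helps, one way to check in advance which guest volumes would be affected is to look at the ZFS encryption properties; a minimal sketch (pool/dataset name is a placeholder):
Code:
# show whether the guest datasets/zvols are natively encrypted
zfs get -r encryption,keystatus tank/vmdata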
 
Ok, that's bad. Then I will have to decide whether I want to run my ZFS pool on top of LUKS or drop the idea of running a cluster.

I really hope there will be some changes upstream so that replication of encrypted zvols/datasets becomes possible.
 
What about LVM, LVM-Thin and ZFS storages in a non-HA cluster? Should I make sure that all local (non-shared/non-replicated) VGs, ZFS pools and PVE storage IDs are named differently between nodes to prevent conflicts, or should I try to name them identically? Or doesn't it matter at all when just moving guests between nodes by restoring backups?
 
If you think of them as being the same storage, I'd also name them identically (i.e., if you have ZFS on a 4TB SSD in each node, you could put an identically named zpool on each of them and have a single storage.cfg entry for both). For a local storage, PVE is aware that the contents are not the same on all nodes anyway, so there is no potential for "conflicts".
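As a sketch of what such a single, cluster-wide entry could look like (storage ID and pool name are made up):
Code:
# /etc/pve/storage.cfg -- one entry that resolves to the local pool on each node
zfspool: local-zfs-guests
        pool tank/guests
        content images,rootdir
        sparse 1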

For some operations, having identically named storages makes things easier:
- migration -> no need to provide a storage mapping, unless you actually want to change storage while migrating
- backup/restore -> the storage information from the backed-up config can be re-used (see the sketch after this list)

Some things are also not (yet) possible with differently named storages:
- replication
- restoring multi-disk VM backups to different storages without manually moving disks after restoring
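A minimal sketch of both restore cases (archive path, VMID and storage name are placeholders):
Code:
# restore re-using the storage names from the backed-up config
qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100
# or place all restored disks on one explicitly chosen storage instead
qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100 --storage local-zfs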
 