Hi,
I was thinking about adding a second node, using a Raspberry Pi as a QDevice, and creating a cluster. I don't need any HA here; it would just be nice to be able to offline-migrate guests and to manage both nodes through the same web UI. I already have 3 networks set up between the hosts (a dedicated Gbit link for corosync, 10Gbit for migrations/NAS, and Gbit for normal LAN/Internet traffic). What I'm not sure about are two things:
1.) Guests are currently running on a host with an E5-2683 v4 (Broadwell-EP) with the CPU type set to "host". The other host has an older E3-1230 v3 (Haswell): https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=91766,75054
I guess I should change my VMs back to "kvm64" to avoid problems when offline-migrating VMs between the nodes? I'm not planning to migrate the VMs often, but it would be nice to be able to move the most important VMs from one node to the other when I need to do maintenance on a server. Is it even possible to change the CPU type to "kvm64" later, or could that be problematic if already installed software was compiled with instruction sets in mind that would then be missing?
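If changing it later is fine in principle, I assume the switch itself is just a config change, something like this (VMID 100 is only a placeholder), taking effect after a full stop/start of the VM:

```
# switch an existing VM from "host" to "kvm64" (VMID is just an example)
qm set 100 --cpu kvm64
```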
2.) I'm not sure if my storage will work for offline migrations. I know I would need identically named ZFS pools and identically named VGs for LVM if I wanted to use replication or live migration. But what are the storage requirements for offline migrations?
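Put differently: is it enough that /etc/pve/storage.cfg defines the storage under the same ID on both nodes, with the pool/VG actually existing under that name on each node? Roughly like this (all names below are made up):

```
# /etc/pve/storage.cfg -- sketch, all names are made up
zfspool: guests-zfs
        pool tank/guests
        content images,rootdir
        sparse 1

lvmthin: guests-thin
        vgname pve
        thinpool data
        content images,rootdir
```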
Both of my PVE nodes are heavily customized and run on top of Debian with full disk encryption. The node I'm using now (the E5 one) has an mdadm RAID1 with LUKS-encrypted LVM for the PVE system, an encrypted ZFS pool for guests, and a LUKS-encrypted LVM-Thin for guests.
The second node would just get a single encrypted ZFS mirror for the PVE system + guests, provided I get full system encryption with ZFS unlockable over SSH working. Otherwise I would need to set up an mdadm RAID1 with LUKS-encrypted LVM again (like on the other node) for the PVE system, plus an encrypted ZFS pool for guests. But I would really prefer to get rid of mdadm.
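For the guest pool part, I'd create the encrypted mirror roughly like this (pool and disk names are placeholders); it's only the root-on-ZFS + unlock-over-SSH part that I'm still unsure about:

```
# sketch: natively encrypted ZFS mirror for guests (pool/disk names are placeholders)
zpool create -o ashift=12 \
  -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=prompt \
  -O compression=lz4 -O mountpoint=none \
  tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
```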
Regarding LXC migration, I've read this in the documentation: "If it has local volumes or mount points defined, the migration will copy the content over the network to the target host if the same storage is defined there".
What does that actually mean? I have a lot of unprivileged LXCs with bind mounts that bind-mount SMB shares which are mounted on the host. It wouldn't be great if the migration copied the contents of those SMB shares too. And what is the requirement for being the "same storage"? Just a non-shared storage with the same name?
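To make it concrete, a typical container config of mine looks roughly like this (IDs and paths are made up). I'd expect mp0 to be copied on migration, but I hope mp1 is left alone:

```
# /etc/pve/lxc/101.conf (excerpt, IDs/paths made up)
rootfs: guests-zfs:subvol-101-disk-0,size=8G
# storage-backed mount point -> copied on migration?
mp0: guests-zfs:subvol-101-disk-1,mp=/data,size=32G
# bind mount of an SMB share that is mounted on the host -> hopefully NOT copied
mp1: /mnt/smb/media,mp=/media
```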
Or would it be more reliable to just stick with two unclustered hosts and move guests via restores from my PBS, which could then store the backups of both unclustered nodes using different namespaces?
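In that setup both nodes would point at the same PBS datastore, just under different namespaces, roughly like this (ID, datastore, address and namespace are made up):

```
# /etc/pve/storage.cfg on the first node -- sketch, all values are made up
# (the second node would be identical except for "namespace")
pbs: pbs-backups
        datastore store1
        server 192.168.1.5
        username backup@pbs
        namespace node1
        fingerprint <datastore fingerprint>
```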