To bring everyone up to speed, here's the plan: replace Ubuntu/KVM with PVE and bring in 2 additional servers, 1 being a combined PVE/PBS box and the other an offsite DR box.
1. Migrate and upgrade from Ubuntu/KVM to Proxmox. Configure local NAS backups temporarily until the new environment is in place (done). All but 1 machine has been migrated. I'm having an issue with the Windows machine: it thinks there is already another NIC assigned to its static IP, and it won't run without a DHCP address. Got to admit, it's kicking my butt, but I'm still working on it (see the first sketch after the list).
2. Rebuild the old server after upgrading the drives, and load PVE/PBS on it. I want to also add these 2 into a cluster so that I can use Replication, so my data files are in sync in case of a failure on the primary box. It may require HA, so I'll enable that if needed. I know that Replication only covers the VM disk data, but if that works, I'm fine as long as I have a recent copy in case the primary dies or I need to do maintenance on pve1 - I can simply migrate everything over to pve2, do the maintenance, and migrate back if needed.

Regarding the disks or LVM for the 2nd server: it will have 6 10TB drives in it for everything. I was thinking of creating 2 RAID-5 data sets so that I'd have 1 for the server/replication side and the other for PBS backups. Is this overkill when I can just create 1 big ZFS pool and create separate datasets/folders for what I need in there? Suggestions or thoughts on how to lay out the storage? Space won't be an issue; at this point it's either 2 single 20TB volumes or 1 big-o pool. For replication to work, I think I'll have to set up the ZFS pool on the 2nd box the same (in name and/or size) for ZFS to replicate, but I haven't gotten that far yet (see the second sketch after the list).
3. The last phase would be to use PBS to replicate the backup data offsite to my DR box. It will also be loaded as a PVE/PBS box but can't join the cluster due to the latency between the 2 sites, so what's required is replication of the backups, with the ability to mount/load the servers at the DR site with current info if needed (see the third sketch after the list).
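Side note on the Windows issue in step 1: from what I've dug up, it sounds like a ghost copy of the old KVM-era NIC is still registered with the static IP. I'm not certain that's the cause, but if it is, the usual fix is to show hidden devices in Device Manager and uninstall the stale adapter - roughly this, in an elevated command prompt inside the guest:

    :: Make Device Manager show non-present (ghost) devices, then launch
    :: it from the same console so it inherits the variable.
    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc
    :: In Device Manager: View > Show hidden devices, uninstall the
    :: grayed-out adapter under Network adapters, then re-assign the
    :: static IP to the new NIC.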
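For the storage layout in step 2, here's roughly what I'm leaning toward for the single-pool option. With 6 10TB drives, one RAIDZ2 pool gives ~40TB usable and survives any 2 disk failures, whereas 2 3-disk RAID-5/RAIDZ1 sets give 20TB each but each dies on a 2nd failure. From what I've read, Replication itself doesn't strictly require HA (HA just builds on top of it), a 2-node cluster will want a QDevice for quorum, and the target node does need the same storage ID/pool name - hence creating the pool identically on both boxes. A rough sketch, where the pool name tank, the device names, the node names pve1/pve2, and VM ID 100 are all placeholders:

    # One RAIDZ2 pool across all 6 disks (I'd use /dev/disk/by-id/ paths):
    zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6

    # Separate datasets instead of separate arrays:
    zfs create tank/vmdata   # VM disks / replication target
    zfs create tank/pbs      # PBS datastore

    # Register the VM dataset as storage on both nodes; Replication needs
    # the same storage ID and pool on source and target:
    pvesm add zfspool vmdata --pool tank/vmdata --nodes pve1,pve2

    # Example job: replicate VM 100 from pve1 to pve2 every 15 minutes:
    pvesr create-local-job 100-0 pve2 --schedule "*/15"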
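And for step 3, my understanding is that PBS sync jobs are pull-based, so the DR box would define the primary PBS as a remote and pull from it on a schedule. Something like this on the DR side, where the hostname, datastore names, and credentials are placeholders:

    # On the DR box: define the primary PBS instance as a remote.
    proxmox-backup-manager remote create primary \
        --host pbs1.example.com --auth-id sync@pbs \
        --password 'secret' --fingerprint '<primary-cert-fingerprint>'

    # Pull its datastore into the local one every night:
    proxmox-backup-manager sync-job create pull-primary \
        --remote primary --remote-store backups \
        --store backups --schedule daily

Restoring at the DR site should then just be a normal restore from the local datastore into the DR PVE instance (live-restore if I need a box up fast).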
Is anyone using a similar solution, or does anyone have suggestions as to whether this will work, or know a better way?