Migrating Servers/Fresh Installs - Sanity Check

aeisan

New Member
May 1, 2024
Hello,

My current setup includes a primary storage server and a backup storage target, each running TrueNAS bare metal. On each TrueNAS server I am running a PBS VM. My typical backup scheme: the PVE nodes back up to the primary server's PBS, and, less frequently, the backup server's PBS runs a pull sync job from the primary. It's pretty simple, and while not a full 3-2-1 strategy, it works well enough for my home lab needs (I suppose it's important to point out this is a home lab).

Recently I have been mulling over the idea of (potentially) simplifying my server infrastructure. My thought was to move away from TrueNAS altogether and run PVE on the primary storage server and PBS on the backup storage server. I have a ~24TB ZFS pool on the primary storage server that I intend to export from TrueNAS and import into PVE; I do not intend to wipe this pool and rebuild it just to move the data back on.

The backup server currently has its datastore on an NFS share from the TrueNAS host. It's a little unorthodox, but it works. I would probably do a fresh install of PBS, wipe the disks, and create a new datastore. I would then run a sync job to copy all the existing backups from the primary PBS to the new PBS server. Then I should be able to destroy the (old) primary PBS VM, install PVE on the primary storage server, import the zpool, and all should be good.
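For what it's worth, the sequence above boils down to only a handful of commands. A rough sketch, where the pool name (`tank`), remote name, datastore names, address, and credentials are all placeholders for illustration, not real values:

```shell
# 1. On the old TrueNAS host: cleanly export the data pool before the rebuild.
zpool export tank

# 2. On the freshly installed PBS box: register the old primary PBS as a
#    remote, then pull the existing backups over before destroying the VM.
proxmox-backup-manager remote create old-primary \
    --host 192.0.2.10 --auth-id 'sync@pbs' --password 'xxxx' \
    --fingerprint '<old primary's cert fingerprint>'
proxmox-backup-manager sync-job create pull-old-primary \
    --remote old-primary --remote-store backups --store backups

# 3. Later, on the fresh PVE install: import the pool under its old name.
#    (-f may be needed since the pool was last used by another host.)
zpool import -f tank
```

One thing to double-check before the import: TrueNAS pools sometimes carry feature flags or a newer ZFS version than the PVE host supports, so it's worth confirming `zpool import` (with no arguments) sees the pool cleanly before committing to the migration.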

I just have a couple of questions, mostly to ask whether the gurus think this is a sound approach or if it seems foolish.

1. Would you recommend putting PBS back on the primary storage server as a VM, as I had before? I know running PBS in a VM is generally not recommended, but in a home lab environment I have been running it this way for a while with few issues. The idea here is to have one more backup copy available, albeit local to the primary storage machine it is backing up. Or is that just dumb?

2. Does the transfer strategy detailed above seem sound? I'd like to keep the ~1 month of backups I have, but it is also not a deal breaker to wipe them and start fresh.

Thanks all!
 
Suppose it would work, but if you lose the primary server you then lose your backup server if it's a VM. That's just more hassle when recovering, as you would have to install a backup server somewhere in order to access your backups. Personally, my home lab has a dedicated PVE and a dedicated PBS with its own local storage, both NUCs, plus a second synced copy of my backups on a USB drive and a third on S3. I'm always hacking, and it's come in useful to have multiple copies. One good thing about having PBS as a VM is that you can run snapshots before you attempt upgrades. My first PBS upgrade from 8 to 9 broke, so I had to fully reinstall PBS; luckily I had the USB drive to get my backups back onto the backup server. Starting to sound complicated now.
 
Thanks for the feedback Mike!

To be clear... After all the migrating I will end up with one bare-metal, dedicated PBS machine locally (the machine previously referred to as the backup storage server). If I run a VM with PBS on the primary storage server (which is going to run PVE as the host/hypervisor), it would be for extra backup redundancy. You are totally correct that if the PVE node goes down I'd also lose that VM and its backups, but those will not be the sole source of PVE backups, just "extra" copies.

I do intend to look into better offsite and/or cold storage options as well. I tried using an external USB HDD to send backups to as cold storage, but I found the process of mounting/unmounting the datastore caused some issues with PBS. It's entirely possible that this was due to running PBS in a VM; I intend to explore this more once I have PBS running bare-metal. S3 is also something I may look into.
 
Ah yeah, I only do USB on a dedicated server, using either the removable-storage datastore type or manually mounting, etc. Both work well.
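The "manually mounting" variant can be sketched roughly like this; device label, mount point, and datastore name are made up for the example, and whether `datastore remove` leaves the on-disk data alone by default is worth confirming on your PBS version:

```shell
# Attach the USB disk and make it a datastore for the duration of the sync.
mount /dev/disk/by-label/usb-backup /mnt/usb-backup
proxmox-backup-manager datastore create usb-cold /mnt/usb-backup

# ... run your sync/pull into "usb-cold" here ...

# Drop the datastore config (not the data) and detach the disk.
proxmox-backup-manager datastore remove usb-cold
umount /mnt/usb-backup
```

The removable-storage datastore type in newer PBS releases automates essentially this attach/detach dance, which is probably why it behaves better than ad-hoc mounting did in the VM.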

S3 with Wasabi is working very well at the moment on 4.1.6. I've been testing for a while through several versions, and even though it's still in tech preview I've started using it in production, as it seems to be getting stronger with the early issues getting fixed real quick. Wasabi is very cheap, and I can run as many verifies, GCs and restores as I want with no egress charges.
 
What type of environment are you in? Business/enterprise or homelab? Being a subscriber I presume the former, but I don't know. If that is the case, I'm sure our use cases for S3 storage would be pretty different.

I looked at Wasabi and it's not terrible in terms of pricing, but I'm looking more into something along the lines of AWS S3 Glacier. In theory, this would only be for an extreme contingency such as natural disaster recovery where I lost all of my local data. I would pray this is a once-in-a-lifetime event lol. So with that in mind, I'm inclined to opt for something that has a cheaper recurring cost for storage and would be OK with a reasonable egress fee, rather than a higher monthly cost and no egress fee. I would need ~3TB for the important data I'd be willing to pay someone to store. I have a lot of media I can re-rip etc. that I would not want to pay to store.
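The trade-off is easy to sanity-check on paper. A quick back-of-envelope comparison for 3TB, using illustrative prices I've plugged in for the example (roughly flat-rate ~$7/TB/mo with free egress vs cold storage ~$3.60/TB/mo with ~$90/TB to pull data back out; check the real calculators):

```shell
# Flat-rate style (storage only, no egress charges):
wasabi=$(awk 'BEGIN { printf "%.2f", 3 * 7.0 }')

# Cold-storage style: cheap while idle, expensive on a full restore.
glacier_idle=$(awk 'BEGIN { printf "%.2f", 3 * 3.6 }')
glacier_restore=$(awk 'BEGIN { printf "%.2f", 3 * 3.6 + 3 * 90 }')

echo "flat-rate steady state:        \$${wasabi}/mo"
echo "cold-storage steady state:     \$${glacier_idle}/mo"
echo "cold-storage + 1 full restore: \$${glacier_restore}"
```

For a restore you hope happens once in a lifetime, eating a one-off ~$270 egress bill in exchange for roughly halving the monthly cost can be a perfectly rational call.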

Thanks again for all your insight! Whether your use case matches mine or not, it's always great to get a taste for what others are using.
 
I have a production system at work with 50TB of backups. I also have a home lab with 1TB on Wasabi, but it isn't really a lab, as the data/VMs are critical, hence multiple backups. I am very big on verifying/testing backups; I normally stick to the default of 30 days before re-verifying. 50TB would cost a small fortune to verify with AWS, and their egress charges increase with Glacier. 3TB for your data is a tad over $20 a month plus any local taxes with Wasabi, but that's the total cost with no egress surprises.

In my home testing with 500GB on AWS I wiped out my one-month trial ($100 worth) in a week, with more than 90% being egress charges from test restores and verify jobs. Well worth using the AWS calculator and chucking in your estimate of egress. A few years ago I worked at a company that had 25TB in Wasabi using Veeam; again, Veeam's health check was a big user of egress, so Wasabi was a no-brainer.

A lot depends on how much you trust any S3 provider and whether that data will remain intact with no need to check. I've heard way too many horror stories, so I have verify jobs running daily. Of course you need good bandwidth for all of this, and that could dictate how you implement.
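To put a number on why verify cadence dominates the bill on metered providers: a verification pass has to read every chunk back, so at scale it is essentially a full egress of the datastore. A sketch with an assumed $0.09/GB egress rate (illustrative, not a quote):

```shell
# Cost of one full verify pass that reads back every chunk.
verify_egress_cost() {
    awk -v tb="$1" -v per_gb="$2" 'BEGIN { printf "%.2f", tb * 1024 * per_gb }'
}

aws_style=$(verify_egress_cost 50 0.09)   # 50TB at metered egress
flat_rate=$(verify_egress_cost 50 0)      # same pass, no-egress-fee provider

echo "metered egress:  \$${aws_style} per verify pass"
echo "no-egress-fee:   \$${flat_rate} per verify pass"
```

At those assumed rates a single monthly verify of 50TB runs into the thousands of dollars, which is exactly the "small fortune" above, and why daily verify jobs only make sense where egress is free.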