I have my Dell server running Proxmox with only a few containers. The biggest thing on it is my 60TB zpool of Plex media. I want a good backup routine for it so I don't have to worry about the data. I have some options I can't decide between, and all my research on this sub and the forum hasn't helped me make a decision.
Notes: I have a Dell enclosure that will house a 2nd 60TB pool that is just meant to back up the main pool.
1) I create a 2nd 60TB zpool and do some kind of zfs send and receive routine
-- I'm hesitant about this option only because it seems pretty universal that everyone says to use PBS because of how great it is, especially since it has a nice GUI for everything.
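For option 1, here's roughly the kind of routine I had in mind. The dataset names `tank/media` and `backup/media` are just placeholders for mine, and to be safe it defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Rough sketch of a nightly zfs send/receive job (option 1).
# tank/media and backup/media are placeholder dataset names.
set -eu

SRC=${SRC:-tank/media}      # dataset on the main pool
DST=${DST:-backup/media}    # dataset on the backup pool
SNAP=$(date +%Y-%m-%d)      # today's snapshot name
PREV=${PREV:-}              # previous snapshot name; empty means first full send

SNAP_CMD="zfs snapshot -r ${SRC}@${SNAP}"
if [ -n "$PREV" ]; then
    # incremental: only blocks changed since @$PREV go over the pipe
    SEND_CMD="zfs send -R -i @${PREV} ${SRC}@${SNAP} | zfs recv -Fu ${DST}"
else
    # first run: full replication of the dataset
    SEND_CMD="zfs send -R ${SRC}@${SNAP} | zfs recv -Fu ${DST}"
fi

echo "$SNAP_CMD"
echo "$SEND_CMD"
# Dry run by default; set DRY_RUN=0 to actually execute the commands.
if [ "${DRY_RUN:-1}" = 0 ]; then
    eval "$SNAP_CMD"
    eval "$SEND_CMD"
fi
```

After the first full send, a cron job could pass the previous day's snapshot name as PREV so every later run is incremental.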
2) I create a VM on my dell server and run PBS. Give it this new 60TB zpool and do backups this way
-- Some people seem to do this without issue, and it's my favorite option so far. People say that if your main server goes down you lose everything, but would I? If my backup 60TB is its own zpool, I could just restore from it, couldn't I? Especially since it's in its own enclosure. Plus with PBS I get the nice GUI for everything.
3) Install PBS on some old hardware I have and run it as a 2nd machine that will get that 60TB pool and do backups this way
-- I don't like this option because even though I'm on a 10Gb network it feels like it would be slower, and I'd have to run a 2nd machine, which adds heat and power draw in my apartment, so this is kind of a last resort.
Bonus Questions:
I read today that if I run my stuff in a container, backups will take longer because PBS has to re-read everything every time instead of using a dirty bitmap. Is this true? Should I be migrating my Plex server to a VM instead of a container? My Proxmox host is where I created the ZFS pool and the datasets I use for all my storage, and I just share them with the Plex LXC.
Thanks to anyone who can help with this. I've been stressing and overthinking this all day. I found my first error in a scrub, so it's finally making me do this. Luckily I have a backup of the file, but I need a good solution.