[SOLVED] Rotated Drive Backup Workflow

meichthys

I searched around on the forums but haven't seen much relevant information about rotating backup drives. My current situation is this:
- I would like to keep a backup offsite but have almost no upload bandwidth to run a remote PBS host
- I have two 4TB portable USB 3 drives that I rotate off site on a weekly basis
- I have 1 zfs backup datastore in pbs for each usb drive
- I have 1 backup job for each Datastore running in pbs (I disable one of them when the relevant drive is not connected)

My current workflow for changing drives is:
- Shutdown the pbs host (currently a proxmox vm) on Friday morning after the backup is completed
- Physically remove backup1 and take the drive off-site and store in a secure location
- Bring the other backup drive on-site Friday evening
- Physically connect backup2 and start the pbs vm
- Run the following from the pbs terminal: `zpool import backup2` (a scripted version of this step is sketched after this list)
- In the proxmox host, disable the backup1 job and enable the backup2 job
- Repeat weekly
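
If it helps, the import step could even be scripted so the VM just picks up whichever pool happens to be attached - a rough sketch (the pool names backup1/backup2 are from my setup; nothing else here is tested):

```
#!/bin/bash
# Rough sketch: import whichever rotated backup pool is currently attached.
# Pool names backup1/backup2 match the setup above; everything else is an assumption.
for pool in backup1 backup2; do
    # `zpool import` with no pool name lists pools that are available for import
    if zpool import 2>/dev/null | grep -q "pool: ${pool}"; then
        echo "Importing ${pool}"
        zpool import "${pool}"
    fi
done
```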

Does anyone have a better way? This works, but is a bit labor intensive.
 
I haven't done this in years, long before proxmox and pbs, and it's far from a recommendation. I set up a backup server running Ubuntu on a boot drive. I then added a Linux software RAID mirror with another internal drive and a drive in a "hotswap" bay. I can't remember why, but for some reason I feel like the fact that I was running ReiserFS was important. I'd take the hotswap drive offline, swap disks (I had 4 in rotation), and then resync the array. If I ever needed anything off the "offline" drive, I'd put it in another computer, mount the drive, and pull the files I needed. Probably not useful to you, but I thought I'd share.
 
Thanks for your input! This makes sense and might be an option since I do have hot-swappable drives in my host machines. If I reconfigure, I could make it work like you said, but for now USB is the most accessible in case of a failure since I don't have another server to accept the hot-swapped drive - I could always get a SATA-to-USB adapter, but that's maybe not ideal.
 
you could also think about not backing up directly onto your external disks, but using them as sync targets, with a permanently available (smaller) datastore as an intermediate "buffer" (a rough CLI sketch follows the lists below).

advantages:
  • if you "buffer" enough snapshots on the datastore that is not external, both external disks get the same snapshots
  • you have a single backup target on the client side, no need to switch jobs, for VMs the bitmaps stay valid
disadvantages:
  • you need more space (both for the "buffer" datastore itself, and because more snapshots end up on the external disks)
  • you need to think about appropriate pruning strategies for the buffer (e.g., keep X daily where X is bigger than your rotation window)
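
a very rough sketch of how that could look on the CLI - every name, schedule and credential below is a placeholder, and the exact options depend on your PBS version, so check the documentation before copying anything:

```
# Keep enough daily snapshots on the always-available "buffer" datastore
# to cover the rotation window (all names here are placeholders):
proxmox-backup-manager datastore update buffer --keep-daily 14

# Sync jobs pull from a configured remote; pointing a remote at the PBS host
# itself is one way to copy between two local datastores on the same box.
proxmox-backup-manager remote create local-pbs \
    --host 127.0.0.1 --auth-id 'sync@pbs' --password 'xxxxx' --fingerprint '<fingerprint>'

# Pull from the buffer into the datastore of the currently attached external disk:
proxmox-backup-manager sync-job create buffer-to-usb \
    --store external1 --remote local-pbs --remote-store buffer --schedule daily
```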
 
I don't think I'd ever do it that way again. At the time it solved the need I had, and it was in the early USB 2 days, when the reliability and speed of USB made any USB-based solution a no-go. Times have definitely changed.

Much like fabian noted, your best bet is probably not backing up directly to your external USB drives, but backing up to a main datastore that is always online and setting your USB drives up as sync targets. This would require more storage.

But another option may be to have one USB drive connected to one PBS guest and the other connected to a second PBS guest. Set up separate backups to each PBS. Only power up the PBS whose USB drive is available. I'm fairly sure the backups should fail gracefully while the PBS of the offsite drive is off. Very little manual intervention beyond powering up the right PBS for the drive you have at home, but you would have to deal with failed backups showing in the log.
 
Hi,

I would use a single PBS datastore and a single backup task. On each USB disk I would create the same zpool (maybe it is possible to create a zpool with the same pool name on both USB disks - I do not know if it is possible).

When the PBS VM starts, run zpool import from a custom service or rc.local.

You could also create a script that, when it sees one of the two USB HDDs, attaches it automatically to the PBS VM.
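
A minimal sketch of the boot-time import idea, as a systemd oneshot unit instead of rc.local (the unit name is made up and this is untested):

```
# /etc/systemd/system/import-backup-pool.service  (example name)
[Unit]
Description=Import whichever rotated backup zpool is attached
After=zfs-import.target
Before=proxmox-backup-proxy.service

[Service]
Type=oneshot
# Import any pool visible on the attached disks; do not fail the boot if none is present
ExecStart=/bin/sh -c 'zpool import -a -d /dev/disk/by-id || true'

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable import-backup-pool.service.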

Good luck / Bafta !
 
To follow up on this, I've settled on the following:
  • [If running pbs in vm] Set USB pass through to use a specific USB port (use this same port for all backup drives)
  • Create separate data stores (one for each external drive)
  • Update the following in /etc/default/zfs if you have physical control of your server; otherwise, look for a different solution (see the excerpt after this list):
    - Set ZPOOL_IMPORT_ALL_VISIBLE="yes"
    - Uncomment ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
  • Shutdown VM
  • Remove disk and take off site
  • Bring different disk onsite and attach to server (It will automatically be passed through to VM)
  • Restart VM
  • Run Backups :)
The zpool will automatically be mounted when the VM starts. I only need to manually turn on/off the VM when swapping drives.
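
For reference, the relevant part of /etc/default/zfs ends up looking roughly like this (only these two lines are touched):

```
# /etc/default/zfs (excerpt)
# Import every pool found on attached disks at boot
ZPOOL_IMPORT_ALL_VISIBLE="yes"
# Uncommented so pools are searched for under stable /dev/disk paths
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
```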
 
Hi meichthys,
Can I get an update on your setup? Are you still using it as described?
I'm 1/2 way through setting up my second PBS to do the same thing with offsite drives.
I would like to know if it is still working for you and whether you have made any changes.

Thanks.

NIvin37...
 
Yes I'm still using this setup with a few very minor tweaks. Let me save you some hassle and paste my notes here :)

Disk Setup

Wipe Disk

A disk can be wiped in the proxmox (not proxmox backup server) GUI, by simply navigating to: node > Disks > Select Disk > Wipe Disk. Alternatively, the disk can be wiped from a terminal:

  1. List the available disks
    • fdisk -l
  2. Run fdisk with the disk you want to wipe (be sure it's not the host os disk)
    • fdisk /dev/<disk name>
  3. Wipe partitions
    • d then <ENTER>
    • Repeat above until all partitions are removed
  4. Commit changes
    • w then <ENTER>
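If you prefer a non-interactive wipe, something roughly equivalent is sketched below (double-check the device name first; /dev/sdX is only a placeholder and this is destructive):

```
# DESTRUCTIVE: make absolutely sure /dev/sdX is the external backup disk, not the OS disk
wipefs --all /dev/sdX        # remove filesystem/RAID/partition-table signatures
sgdisk --zap-all /dev/sdX    # clear GPT and MBR structures (from the gdisk package)
```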
Initialize Disk

  1. In proxmox backup server navigate to: Storage / Disks
  2. Select the disk you want to initialize
  3. Click Initialize Disk with GPT
Create ZFS Pool

  1. In proxmox backup server navigate to: Storage/Disks > ZFS > Create ZFS
  2. Give the zfs pool a name (we've been using a format like: 'backups-2022-1')
  3. Check Add as Datastore to initialize the zpool as a backup directory.
    • Later we will remove the temporary datastore and change the mountpoint on all drives to /mnt/datastore/backup (this will allow us to expose all our backup drives as a single storage device in proxmox)
    • If you skip this step you will likely get a permissions error when trying to make backups since the .chunks directory won't exist on the backup drive.
  4. Select the disk(s) to include
  5. Choose RAID Level: Generally we use the Single Disk RAID level when setting up offsite backup drives, so you shouldn't need to change this.
  6. Click OK to create ZFS pool
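The same steps can roughly be done from the PBS CLI instead of the GUI; the device name below is a placeholder, so check proxmox-backup-manager disk list and the --help output for your version before running it:

```
# Roughly equivalent to the GUI steps above: single-disk pool, added as a datastore
proxmox-backup-manager disk zpool create backups-2022-1 \
    --devices sdX --raidlevel single --add-datastore true
```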
Configure Rotated External USB Disks

In order to implement a rotating disks backup plan, we need to set up each disk to mount automatically to the same mountpoint when the proxmox backup server boots. Simply run the following for each disk's zpool while the disk is connected:

  • zfs set mountpoint=/mnt/datastore/backup <zpool_name>
  • zpool export <zpool_name>
  • zpool import <zpool_name>
  • Verify that the datastore is accessible via the web interface and that it shows the full storage amount of the disk.
Cleanup

Repeat the steps above (Wipe disk, initialize, create zfs pool, and configure) for each backup disk, then:

  • rm -rf /mnt/datastore/backups-2022-1
    • This is only to remove the temporary mountpoints that were initially created when setting up the zfs pools - we will mount at /mnt/datastore/backup from now on.
  • For each drive, remove its temporary datastore by running the following:
    • proxmox-backup-manager datastore remove backups-<year>-<drive_number>
Once finished, you should have a zpool on each disk, and they all mount to the same /mnt/datastore/backup location.

Datastores

A datastore is simply a path on the proxmox backup server that is used to store backups. To create a datastore, just click Add Datastore and provide a name and path and click Add.

I currently use a single datastore that points to /mnt/datastore/backup; each external drive mounts at that same location.
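
Or, roughly, from the CLI (the datastore name 'backup' is just what I use in these examples):

```
# CLI equivalent of clicking "Add Datastore" in the GUI
proxmox-backup-manager datastore create backup /mnt/datastore/backup
```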



Drive Rotation

To swap out drives:

  • Shutdown proxmox backup server
  • Swap drives
  • Restart proxmox backup server
  • Ensure the backup datastore shows as available in proxmox
Note: if a drive is not mounted when a backup is scheduled, the backup will write to the local disk instead, fill it up very fast, and may cause OS issues due to low storage. If you can still log into the terminal, unmount the zpool (if one is mounted), remove the stray backup directory with rm -rf /mnt/datastore/backup to free up space on the local disk, and then re-import the backup zpool: zpool import <pool_name>.
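
A small guard against that failure mode could be a pre-check run shortly before the scheduled backup window - purely a sketch, using the same path as above:

```
#!/bin/bash
# Sketch: warn (and exit non-zero) if no pool is mounted at the datastore path,
# so a scheduled backup doesn't silently fill the local disk instead.
if ! mountpoint -q /mnt/datastore/backup; then
    echo "No backup pool mounted at /mnt/datastore/backup - check the drive!" >&2
    exit 1
fi
```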
 
