I am new to home lab stuff and to zfs, but I am loving learning it. I am wanting to create a rotating offsite backup solution, I purchased two big ole HDDs that I plan to insert into my hotswap bays, run my import script to bring in the zfs pools nicely and replicate my backup pool to this drive and eject the other drive (which was there all month staying up to date) which will then be shipped off again to be stored in another location. After a month its mailed back to me, I plug it in and run the script again and send the other one away so the drives ware evenly.
Hardware: One PBS host (bare metal) running zfs pool called "backups" (its mirrored 18TB, with hot spare) is in charge of the datastore in PBS gui also called backups. I have Three other Proxmox VE hosts that run various VMs and containers. All use proxmox interface to send backups to the PBS datastore automatically like every hour. Basically if I mess up a VM I just roll back an hour and try again. I like the way this works currently. So my data is stored on local disks on the proxmox ve hosts, then backed up to backups datastore which is on the PBS host. Thus I want to create this rotating insurance policy 3rd copy of my data stored encrypted offsite.
The script chatgpt and google gemini helped me create imports my rotating disks correctly, prompts me for unlocking encryption, but when I try to update the zfs snapshots with zfs send / receive commands there is something wrong in the script and says "warning: cannot send 'backups@20240821-1733': not an earlier snapshot from the same fs". I have a simple crontab -e job running time stamped snapshots on the backups pool. I was able to zfs send an initial snapshot called "initial_replicate" from backups to offsite-a/backup-archive dataset, but now my zfs send is says "not an earlier snapshot from the same fs". I am just confused.
Here is the best script AI could come up with: (Offsite-a and Offsite-b are the rotating disks, I am sure you get the point by now what I am going for. There is a script for importing each when i plug in the drive to my bay.)
saved as import_offsite-a.sh, chmod +x to make it executable all done as root...(i know not best practice, but i am a home lab)
```
```
Thanks in advance if you made it this far...
Hardware: One PBS host (bare metal) running zfs pool called "backups" (its mirrored 18TB, with hot spare) is in charge of the datastore in PBS gui also called backups. I have Three other Proxmox VE hosts that run various VMs and containers. All use proxmox interface to send backups to the PBS datastore automatically like every hour. Basically if I mess up a VM I just roll back an hour and try again. I like the way this works currently. So my data is stored on local disks on the proxmox ve hosts, then backed up to backups datastore which is on the PBS host. Thus I want to create this rotating insurance policy 3rd copy of my data stored encrypted offsite.
The script chatgpt and google gemini helped me create imports my rotating disks correctly, prompts me for unlocking encryption, but when I try to update the zfs snapshots with zfs send / receive commands there is something wrong in the script and says "warning: cannot send 'backups@20240821-1733': not an earlier snapshot from the same fs". I have a simple crontab -e job running time stamped snapshots on the backups pool. I was able to zfs send an initial snapshot called "initial_replicate" from backups to offsite-a/backup-archive dataset, but now my zfs send is says "not an earlier snapshot from the same fs". I am just confused.
Here is the best script AI could come up with: (Offsite-a and Offsite-b are the rotating disks, I am sure you get the point by now what I am going for. There is a script for importing each when i plug in the drive to my bay.)
saved as import_offsite-a.sh, chmod +x to make it executable all done as root...(i know not best practice, but i am a home lab)
```
Code:
#!/bin/bash
OFFSITE_A_POOL="offsite-a"
OFFSITE_B_POOL="offsite-b"
BACKUP_POOL="backups"
ARCHIVE_DATASET="backup-archive"
# Function to check if the pool is already imported
is_pool_imported() {
zpool list | grep -q "${1}" # Pass the pool name as an argument
}
# Function to handle the import process
import_pool() {
echo "Attempting to import pool ${1}..." # Pass the pool name as an argument
zpool import -f -N ${1}
if is_pool_imported "${1}"; then # Check if the specific pool is imported
echo "Pool ${1} imported successfully!"
else
echo "Failed to import pool ${1}. Please check the passphrase and disk status."
exit 1
fi
}
# Main script Logic
# Import offsite-a pool
if ! is_pool_imported "${OFFSITE_A_POOL}"; then
read -s -p "Enter passphrase for pool ${OFFSITE_A_POOL}: " passphrase
echo ""
export ZFS_PASSPHRASE=$passphrase
import_pool "${OFFSITE_A_POOL}"
unset ZFS_PASSPHRASE
else
echo "Pool ${OFFSITE_A_POOL} is already imported."
fi
# Get the latest snapshot name from the backups pool
LATEST_SNAPSHOT=$(zfs list -t snapshot -o name -s creation -H -r "${BACKUP_POOL}" | tail -n 1)
# Check if any snapshots exist in the offsite-a/backup-archive dataset
if zfs list -t snapshot -H -r "${OFFSITE_A_POOL}/${ARCHIVE_DATASET}" | grep -q '.'; then
# Get the previous snapshot name from the offsite-a pool
PREVIOUS_SNAPSHOT=$(zfs list -t snapshot -o name -s creation -H -r "${OFFSITE_A_POOL}/${ARCHIVE_DATASET}" | tail -n 1)
# Perform the incremental send/receive to offsite-a
zfs send -v -i "${PREVIOUS_SNAPSHOT}" "${LATEST_SNAPSHOT}" | zfs receive -F "${OFFSITE_A_POOL}/${ARCHIVE_DATASET}"
else
# If no snapshots exist in offsite-a, check if "initial_replication" exists
if zfs list -t snapshot -H -r "${BACKUP_POOL}" | grep -q 'initial_replication'; then
# Send the "initial_replication" snapshot to offsite-a to establish the baseline
zfs send -v "${BACKUP_POOL}@initial_replication" | zfs receive -F "${OFFSITE_A_POOL}/${ARCHIVE_DATASET}"
else
# If "initial_replication" doesn't exist, perform a full send of the latest snapshot
zfs send -v "${LATEST_SNAPSHOT}" | zfs receive -F "${OFFSITE_A_POOL}/${ARCHIVE_DATASET}"
fi
fi
# Export the offsite-b pool
if is_pool_imported "${OFFSITE_B_POOL}"; then
zpool export "${OFFSITE_B_POOL}"
echo "Pool ${OFFSITE_B_POOL} exported."
else
echo "Pool ${OFFSITE_B_POOL} is not imported."
fi
echo "Incremental backup to ${OFFSITE_A_POOL} completed."
Thanks in advance if you made it this far...
Last edited: