ZFS Hotswap and Hot-Spare on PBS?

LBX_Blackjack

New Member
Jul 24, 2020
Hello Proxmox,
I have a server running PBS on two NVMe drives in RAIDZ-1 and four enterprise HDDs in hot-swap bays for storing the backups. I want HDDs 1-3 in RAIDZ-1 with the ability to hot-swap for cold storage, and HDD 4 as a hot spare in case of a failure. To set this up, I have taken the following steps:

  1. Enabled hotswap in BIOS
  2. Initialized the HDDs with GPT partition tables via fdisk
  3. Added HDDs 1-3 to a RAIDZ-1 pool via the web portal
  4. Ran zpool add storage spare /dev/sdd
  5. Ran zpool set autoreplace=on storage
Everything looks right, but since this is my first time using ZFS I'm not sure, and I don't want to find out when it's too late. Did I do anything wrong/stupid? Any missing steps?
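For reference, steps 3-5 above sketched as commands, plus a status check at the end. The pool name `storage` comes from the post; the device paths are illustrative only (using `/dev/disk/by-id/...` paths is generally more robust than `/dev/sdX`):

```shell
# Step 3: create the RAIDZ-1 pool from HDDs 1-3
# (the PBS web portal does this for you; device names are examples)
zpool create storage raidz1 /dev/sda /dev/sdb /dev/sdc

# Step 4: attach HDD 4 as a hot spare
zpool add storage spare /dev/sdd

# Step 5: let ZFS automatically swap in the spare for a faulted disk
zpool set autoreplace=on storage

# Verify: the spare should be listed under a "spares" section as AVAIL
zpool status storage
```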
 

LBX_Blackjack

From the lack of responses, I'm guessing this is an eye-roller and I'm probably fine. So I have a question: how can I prevent the hot spare from coming online if I manually remove a drive? I only want it to engage when there is an actual failure, not when I'm swapping out a drive for cold storage.
 

dcsapak

Proxmox Staff Member
Vienna
My guess would be to run

Code:
zpool set autoreplace=off POOL

before removing the drive?

Also, what exactly do you mean by cold storage? Do you basically replace all the drives of the zpool with new ones and make a new pool?

If that's the case, you have to do a 'zpool export' first, and I think ZFS does not autoreplace disks on pools that are not imported anyway.

In any case, I would take a few hours and properly test these things before putting them into production.
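A sketch of that planned-swap procedure (pool name `storage` from the thread; disk paths are illustrative). Whether `autoreplace=off` alone keeps the spare idle during a manual pull is exactly the kind of thing worth testing first, as suggested above:

```shell
# Disable automatic replacement before a planned swap,
# so the hot spare does not kick in for the pulled disk
zpool set autoreplace=off storage

# Take the disk offline cleanly before physically pulling it
zpool offline storage /dev/sdc

# ...physically swap the disk, then resilver onto the new one...
zpool replace storage /dev/sdc /dev/sde

# Re-enable automatic replacement once the swap is complete
zpool set autoreplace=on storage
```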
 

LBX_Blackjack

My server has four hot-swap bays, and I have six drives for them. The idea is to have one stay as a hot spare, one stay as an online backup, and two be exchanged weekly for onsite and offsite cold-storage offline backups. I'm trying to set it up to be as automated as possible, ideally being able to just pull two drives and *poof*, encrypted cold offline backups. Then drop two drives in the vacant slots so the mirror can rebuild and be ready for next week.

If running a script each time is as automated as I can get, that's fine. It's still better than what I've been doing. I'm sorry if these are basic questions. This is my first experience with ZFS, PBS, and PVE, as well as still being fairly green with Linux in general. I appreciate the help.
 

dcsapak

Caution: you cannot remove 2 drives from a RAIDZ-1 without data loss. You would have to first remove one disk, let ZFS rebuild, and then remove the second disk. But then, with those 2 disks, you will not be able to import the pool again, since they are out of sync.

Easier would be to have a separate pool that you zfs send/recv to, and export/import that pool with some rotating disk sets.
Alternatively, you can have a second PBS instance and pull a datastore over the network for offsite storage.

Also, there is tape backup support planned for the future (no timeframe though).
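The send/recv rotation described above might look roughly like this. Pool and dataset names (`backuppool`, `storage/pbs-data`) and the device path are made up for illustration:

```shell
# One-time: create a backup pool on the rotating disk (example device)
zpool create backuppool /dev/sde

# Each rotation: snapshot the live data and send it to the backup pool
zfs snapshot storage/pbs-data@weekly
zfs send storage/pbs-data@weekly | zfs recv -F backuppool/pbs-data

# Export the backup pool so the disk can be pulled safely
zpool export backuppool
```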
 

LBX_Blackjack

I was a bit confused about why three disks in RAIDZ-1 cannot rebuild after two losses, so I did some more research. Please correct me if I'm still wrong, but from what I understand now, "RAIDZ-1" is not the same as a "ZFS mirror." My drives are set up in a mirror, as shown when setting up a ZFS pool via the PBS web portal. Does this change things?
 

che

Member
Jul 10, 2020
Yes, RAIDZ-1 and a mirror are not the same. But what you are trying to do is not a good idea: removing a single drive from a mirror and declaring it a backup is not what a mirror is intended for. A mirror is intended as failure protection, not as a way to back up data. You would have to resilver the zpool each time a new drive is added, which is not only I/O intensive, but also puts your data at risk during the time when only one copy of the data is on disk.

The right way to go is as suggested by @dcsapak.
Insert the disk/disks you want to have your backup on, create a zpool on them, create a snapshot of the pool you want to back up, and send the ZFS datasets to the backup zpool using send/recv. You can then export the backup zpool and store it as an offline backup.

This will also give you the possibility to do incremental backups later on by sending only the data changed between snapshots, speeding up backups by a lot.

This might help you further understand ZFS https://www.servethehome.com/an-introduction-to-zfs-a-place-to-start/
and hot spares https://www.thegeekdiary.com/solaris-zfs-how-to-designate-hot-spares-in-a-storage-pool/
and backups with zfs send / zfs recv https://xai.sh/2018/08/27/zfs-incremental-backups.html

Edit: This can of course all be automated, so that your backups are ready when you want them and you can take the disks without waiting.
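The incremental variant mentioned above, sketched with hypothetical snapshot and pool names: after the initial full send, only the delta between two snapshots travels to the backup pool.

```shell
# Import the rotating backup pool after inserting its disk
zpool import backuppool

# Take a new snapshot and send only the changes since the previous one
zfs snapshot storage/pbs-data@week2
zfs send -i storage/pbs-data@week1 storage/pbs-data@week2 \
    | zfs recv backuppool/pbs-data

# Export again before pulling the disk
zpool export backuppool
```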
 

LBX_Blackjack

I see, thank you.

So for my case, having four bays, it would be prudent to have two contain disks in a pool together (allowing for redundancy in case of failure), and the other two set up as pools to export to. Awesome, thank you!
 

LBX_Blackjack

So I'm testing this ZFS setup out in an Ubuntu VM hosted on Proxmox. Is there a way to simulate hot-swapping the drives?
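One way to practice this without real hot-swap hardware is to build a throwaway pool on file-backed vdevs inside the VM and fail/replace them by hand. A sketch (pool and file names are arbitrary; requires root and the ZFS utilities installed):

```shell
# Create four sparse files to stand in for disks
for i in 1 2 3 4; do truncate -s 1G /tmp/disk$i.img; done

# Build a test mirror plus a hot spare from the files
zpool create testpool mirror /tmp/disk1.img /tmp/disk2.img \
    spare /tmp/disk3.img

# Simulate pulling a drive
zpool offline testpool /tmp/disk1.img

# Simulate inserting a replacement, then inspect the result
zpool replace testpool /tmp/disk1.img /tmp/disk4.img
zpool status testpool

# Clean up when done
zpool destroy testpool
```

Since the host is Proxmox VE, another option is to hot-add and hot-remove virtual disks on the running VM from the host side, which exercises the guest's hotplug path more realistically than file vdevs do.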
 
