[SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

It will work ... but slower than before once the expansion (which is itself really slow) is done. Remember you are talking about extending a raidz2 pool by a single disk, so I assume you have a single vdev. PBS needs IOPS, which is not what a raidz(*) vdev is good at, and with more capacity in the vdev you get even more chunk files while the IOPS stay at roughly those of a single disk in that vdev ... your IOPS would need to scale with the growing number of chunk files, but they won't!
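To get a feeling for the scale involved, you can count the chunk files in the datastore's .chunks directory (the path below is only an example, adjust it to your datastore) - garbage collection and verify jobs ultimately have to touch every one of these files:
Code:
find /path/to/datastore/.chunks -type f | wc -l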
 
Yes, you're right, it's a single vdev. Of course more vdevs give you better performance - and are more expensive when achieving the same level of redundancy.
In real life we're quite happy with our backup storage performance. We write backups at about 1 GB/s and read (i.e. restore) them at about 2 GB/s, in a datastore containing about 50 TB of backup data.
 
My news on this: It worked like a charm!
Code:
zpool attach <poolname> <raidz vdev name, e.g. raidz2-0> <new disk as in /dev/disk/by-id>
For example (with random disk id):
Code:
zpool attach my-pool raidz2-0 nvme-WUS4EB076B7P3E3_B0626C3A
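If you are unsure about the second argument, the vdev name (raidz2-0 above) can be read from zpool status; the same command should also show the expansion progress and, later, the scrub progress (pool name again just an example):
Code:
zpool status -v my-pool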

The expansion and the subsequent scrub took a lot of time, but the filesystem was usable during the whole process.

Thanks! :)
 
Are you using HDDs or SSDs at the moment? For an HDD-based pool you can speed up parts of the access times (especially garbage collection) with a special device: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_special_device

It's basically a mirror of SSDs (capacity should be around 2% of your HDD pool capacity) which is then used to store metadata and, optionally, small blocks (configurable with the special_small_blocks property). The number of SSDs should match the redundancy of the HDD vdev, and you will need to rewrite the data with zfs send/receive to get the already existing metadata onto the SSDs.
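As a rough sketch (the pool name and device paths are placeholders), adding such a special mirror and enabling small-block storage on it could look like this:
Code:
# add a mirrored special vdev for metadata (and optionally small blocks)
zpool add my-pool special mirror /dev/disk/by-id/<ssd-1> /dev/disk/by-id/<ssd-2>
# optionally store blocks up to 4K on the special vdev as well
zfs set special_small_blocks=4K my-pool

Keep in mind that on a pool with raidz vdevs a special vdev cannot be removed again, so choose its redundancy as carefully as that of the data vdevs.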
 