There is a script that does what's needed at github.com/markusressel/zfs-inplace-rebalancing.
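If it helps, the basic usage (per that repo's README) is just to clone it and point it at a dataset path; the path below is a placeholder and the optional flags are documented in the README:

git clone https://github.com/markusressel/zfs-inplace-rebalancing.git
cd zfs-inplace-rebalancing
# rewrites each file in place so its blocks get reallocated across all vdevs, including the new one
./zfs-inplace-rebalancing.sh /ZFS1/some-dataset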
When I do potentially problematic things like changing the topology (and the current situation allows me to), I set a global checkpoint to have a way...
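For anyone following along, the checkpoint itself is plain zpool (pool name ZFS1 taken from later in this thread):

zpool checkpoint ZFS1        # take the checkpoint before the risky change
zpool checkpoint -d ZFS1     # discard it once you are satisfied everything is fine
# to roll back instead, export and re-import the pool rewound to the checkpoint:
zpool export ZFS1
zpool import --rewind-to-checkpoint ZFS1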
@UdoB agreed, I am in no way treating the raidz2 array as a backup at all. I only mean that it gives me resilience against two disks failing before data loss occurs in the array. The 3-2-1 backup strategy is what I would use...
You have a single vdev and you are adding a new, empty, second one. No resilvering and no rebalancing will happen. Everything is fine as it is.
As already said: all data is on the old vdev, at first. Technically that is fine! Seen from a...
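If you want to watch how the data is distributed, the per-vdev columns tell the story (ZFS1 being the pool name used in this thread):

zpool list -v ZFS1      # ALLOC/FREE per vdev: the new raidz2 vdev starts out essentially empty
zpool iostat -v ZFS1    # per-vdev I/O; new writes will favour the emptier vdev over time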
Thanks for the insight! Yeah, as I've been thinking about it, I actually feel like two 12-disk vdevs in the pool would be better than one. If I simply create the new vdev and add it to the pool, should I do anything after that to optimize the new...
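Adding the second raidz2 vdev is a single zpool add; the disk IDs below are placeholders for the 12 new drives, and -n first does a dry run so you can confirm the resulting layout before committing:

# dry run: show the layout that would be created without changing the pool
zpool add -n ZFS1 raidz2 /dev/disk/by-id/disk13 /dev/disk/by-id/disk14 ... /dev/disk/by-id/disk24
# same command without -n to actually add the vdev, then verify the new layout
zpool add ZFS1 raidz2 /dev/disk/by-id/disk13 /dev/disk/by-id/disk14 ... /dev/disk/by-id/disk24
zpool status ZFS1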
No, no, no - do not do this.
I see that this is tempting, but a 12-disk-wide vdev is already considered "wide".
Additionally: two vdevs will roughly double IOPS, which is always a good idea ;-)
Disclaimer: I've never set up a 24-disk-wide vdev, not even...
Oh nice, I wasn't aware that was needed. I went ahead and ran the command zpool upgrade ZFS1 and it looks like it enabled the feature raidz_expansion, but that isn't strictly the vdev expansion feature (or so it seems). Do you know if the vdev...
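For what it's worth, you can query the flag directly, and the actual raidz expansion (if you ever wanted it) is driven through zpool attach against the existing raidz vdev; the vdev name raidz2-0 and the disk path below are placeholders, the real vdev name comes from zpool status:

zpool get feature@raidz_expansion ZFS1    # should read "enabled", or "active" once it has been used
zpool status ZFS1                         # note the raidz2 vdev's name, e.g. raidz2-0
# raidz expansion adds one new disk at a time to that existing vdev:
zpool attach ZFS1 raidz2-0 /dev/disk/by-id/new-disk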
Hello,
So I recently performed my upgrade from Proxmox VE 8 to 9, and with it I was excited to see that v9 now includes a form of ZFS expansion:
"ZFS now supports adding new devices to existing RAIDZ pools with minimal downtime."
I currently...