Thanks for this hint! It's a bunch of NVMe drives (Western Digital Ultrastar DC SN640). It took so long because it was an expansion from about 60 TB to about 70 TB.
My news on this: It worked like a charm!
zpool attach <poolname> <existing raidz vdev> <new disk as in /dev/disk/by-id>
For example (with a random disk ID):
zpool attach my-pool raidz2-0 nvme-WUS4EB076B7P3E3_B0626C3A
The expansion and the scrub took a long time...
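For anyone following along, here is a minimal sketch of how the progress can be watched (pool name taken from the example above, adjust to your setup):

# raidz expansion progress is reported in the status output
zpool status my-pool
# once the expansion has finished, a scrub verifies the pool
zpool scrub my-pool
zpool status my-pool   # shows scrub progress and any errors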
Yes, we can.
We also ran into this issue: 3-node PVE/Ceph cluster (8.4.14) with a dedicated PBS. After upgrading PBS to 4.1, backup tasks randomly slowed down and VMs froze with 100 % CPU load. Aborting the backup tasks and stopping and starting the...
Yes, you're right, it's a single vdev. Of course, more vdevs give you better performance, and they are more expensive at the same level of redundancy (e.g. twelve disks as one raidz2 vdev lose two disks to parity, while the same twelve disks as two raidz2 vdevs lose four).
In real life we're quite happy with our backup storage performance. We write backups...
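For comparison, a pool with two raidz2 vdevs would be created by listing the raidz2 group twice (disk names are placeholders):

# writes are striped across both raidz2 vdevs, at the cost of four parity disks
zpool create my-pool raidz2 disk1 disk2 disk3 disk4 disk5 disk6 raidz2 disk7 disk8 disk9 disk10 disk11 disk12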
Try it! Use the OpenZFS feature:
# https://openzfs.org/w/images/5/5e/RAIDZ_Expansion_2023.pdf
# https://freebsdfoundation.org/blog/openzfs-raid-z-expansion-a-new-era-in-storage-flexibility/
You can set up a file-backed ZFS raidz2 pool and then...
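A minimal sketch of such a test, using sparse files as stand-in disks (all paths and the pool name are made up for illustration; this needs an OpenZFS release with raidz expansion, i.e. 2.3 or later):

# create five 1 GiB sparse files to act as disks
truncate -s 1G /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4 /tmp/zd5
# build a raidz2 pool from the first four files
zpool create testpool raidz2 /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
# live-expand the raidz2 vdev with the fifth file
zpool attach testpool raidz2-0 /tmp/zd5
# watch the expansion finish, then clean up
zpool status testpool
zpool destroy testpool
rm /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4 /tmp/zd5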
Hi there,
does PBS 4 include a ZFS version that allows live expansion of a raidz2 pool with an additional disk? If so, has anyone successfully tried this yet?
Thanks and greetings
Stephan