[SOLVED] ZFS raidz1: Expanding not possible?

Hi there,

a few months ago we built a new PBS with a bunch of U.2 NVMe drives. I naively thought ZFS must be the way to go for creating the datastore's storage. So I created this pool:

Code:
# zpool status nvme-pool
  pool: nvme-pool
 state: ONLINE
  scan: scrub repaired 0B in 04:34:39 with 0 errors on Sun May 14 04:58:41 2023
config:

        NAME                                           STATE     READ WRITE CKSUM
        nvme-pool                                      ONLINE       0     0     0
          raidz1-0                                     ONLINE       0     0     0
            nvme-eui.02000000000000000014xxxx  ONLINE       0     0     0
            nvme-eui.02000000000000000014xxxx  ONLINE       0     0     0
            nvme-WUSxxxx                       ONLINE       0     0     0
            nvme-eui.02000000000000000014xxxx  ONLINE       0     0     0
            nvme-eui.03000000000000000014xxxx  ONLINE       0     0     0
            nvme-eui.02000000000000000014xxxx  ONLINE       0     0     0
            nvme-eui.02000000000000000014xxxx  ONLINE       0     0     0
            nvme-eui.02000000000000000014xxxx  ONLINE       0     0     0

Code:
# zpool list nvme-pool
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
nvme-pool  55.9T  51.7T  4.21T        -         -    54%    92%  1.00x    ONLINE  -

As you can see, we have almost run out of space, so I just want to add another NVMe drive. I tried to figure out how to do it - and it seems that this is not possible. Is that correct? What options do I have?

Thanks and greets
Stephan
 
Jup, not possible yet. Usually you would do one of the following (a rough sketch of both is below):
1.) add another vdev (the best case would be an identical one, but I guess you don't want to buy 8 more U.2 disks...) and stripe it, so you get no downtime
2.) migrate all data to another storage, destroy the pool, create a new one with one disk more, and move the data back
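For illustration only - device names and the tmp-pool name below are placeholders, not your actual layout:

Code:
# Option 1: stripe a second, identical raidz1 vdev into the pool (done online, no downtime)
zpool add nvme-pool raidz1 \
    /dev/disk/by-id/nvme-NEW1 /dev/disk/by-id/nvme-NEW2 \
    /dev/disk/by-id/nvme-NEW3 /dev/disk/by-id/nvme-NEW4 \
    /dev/disk/by-id/nvme-NEW5 /dev/disk/by-id/nvme-NEW6 \
    /dev/disk/by-id/nvme-NEW7 /dev/disk/by-id/nvme-NEW8

# Option 2: replicate everything away, rebuild the pool, replicate back
zfs snapshot -r nvme-pool@migrate
zfs send -R nvme-pool@migrate | zfs receive -F tmp-pool/nvme-pool
zpool destroy nvme-pool
# ...recreate nvme-pool as a 9-disk raidz1, then:
zfs send -R tmp-pool/nvme-pool@migrate | zfs receive -F nvme-pool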
 
wow, that's... challenging.
Option no. 1 is no option, quite a bit too expensive.
So I will try to find a temporary storage that is large enough for option no. 2.
But: what are my future storage options? I need scalability, so raidz1 is out. Linux software RAID isn't supported by Proxmox. Can I do some RAID5-ish stuff with LVM? btrfs? I'm baffled... :(
 
Depends on how much space you want to lose to parity overhead. With smaller vdevs (3-disk raidz1 or 2-disk mirrors) it would be easier to add more disks, as you could buy 2 or 3 new disks and add a new vdev to the pool. But your U.2 disks are probably fast enough that you don't need the additional IOPS performance, and less parity overhead might be more important.

Raidz expansion is on the horizon, but it sounds like you need that space now. And temporarily adding another small vdev (like a 2-disk mirror) isn't really an option, as you won't be able to remove a vdev from a pool that contains a raidz1/2/3 vdev.
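Just to illustrate the mechanics (hypothetical disk names):

Code:
# Adding a small 3-disk raidz1 vdev is a one-liner:
zpool add nvme-pool raidz1 nvme-NEW1 nvme-NEW2 nvme-NEW3
# But removing a top-level vdev later would fail, because device removal
# is not supported in pools that contain a raidz vdev:
zpool remove nvme-pool mirror-1   # -> error, cannot remove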
 
Sorry, I forgot: Thanks so much for your fast replies! :)
Well, basically I like the idea of a RAID 5: some minor redundancy (it's "just" a backup storage) and a high percentage of usable storage - all the more important because these WD SN640 7.68 TB drives are not cheap. The loss in write performance I can tolerate easily, because the overall performance of these disks is still impressive.
 
so... if I want to stay with ZFS, I have to stripe several raidz1 vdevs, correct? For example:

3-disk raidz1 vdevs: 12 disks for 8 disks of usable space
4-disk raidz1 vdevs: 12 disks for 9 disks of usable space
5-disk raidz1 vdevs: 10 disks for 8 disks of usable space

And this list goes from "lower usable-space ratio, but smaller steps to expand" to "higher usable-space ratio, but larger steps to expand". Do I get this right?
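To spell out my arithmetic (each n-disk raidz1 vdev loses one disk to parity):

Code:
usable disks = vdevs x (disks per vdev - 1)
3-disk vdevs: 4 x (3-1) = 8 of 12 disks usable (~67%)
4-disk vdevs: 3 x (4-1) = 9 of 12 disks usable (75%)
5-disk vdevs: 2 x (5-1) = 8 of 10 disks usable (80%)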
 
Yes, you got that right. Also check how many NVMe SSDs your server can actually fit.
 
Linux software RAID isn't supported by Proxmox. Can I do some RAID5-ish stuff with LVM?
Why not? Proxmox supports everything it can mount, and LVM on top of an md RAID5 or RAID6 is not that hard to set up. (I personally prefer that over using LVM to do the RAID part as well.)
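A minimal sketch of that stack, assuming four disks and placeholder device/VG/LV names:

Code:
# RAID5 across four disks with mdadm:
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
# LVM on top:
pvcreate /dev/md0
vgcreate backup-vg /dev/md0
lvcreate -n datastore -l 100%FREE backup-vg
mkfs.ext4 /dev/backup-vg/datastore
# Growing later: add a disk, reshape, then extend PV/LV/filesystem:
mdadm --add /dev/md0 /dev/nvme4n1
mdadm --grow /dev/md0 --raid-devices=5
pvresize /dev/md0
lvextend -l +100%FREE /dev/backup-vg/datastore
resize2fs /dev/backup-vg/datastore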
 
And the linked https://pve.proxmox.com/wiki/ZFS_on_Linux goes on to say that raidz is slow.

There is nothing a hardware RAID controller inherently does better than mdraid. mdraid is very capable of checking for bit rot; see https://www.thomas-krenn.com/en/wiki/Mdadm_checkarray_function. And I can't take seriously someone who recommends a 4-disk RAID10 over a raidz2/RAID6 because it is faster while leaving the same usable space, disregarding that in a raidz2 or RAID6 any two drives may fail, while a RAID10 is dead when the wrong two drives fail.
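For reference, the check described in that article can be run by hand (md0 as an example array name):

Code:
# Debian/Proxmox ship a helper script with the mdadm package:
/usr/share/mdadm/checkarray /dev/md0
# or use the kernel interface directly:
echo check > /sys/block/md0/md/sync_action
# inconsistencies found during the check are counted here:
cat /sys/block/md0/md/mismatch_cnt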
 
I heard you can also swap one drive at a time for a bigger one and wait for the resilver. Once you have done that with all drives, the pool will be larger.
This is something I like to do with a RAID1, where it is also feasible in terms of cost.
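With ZFS the rough sequence would be (placeholder device names; autoexpand must be enabled for the pool to grow after the last replace):

Code:
zpool set autoexpand=on nvme-pool
# Replace one disk at a time; wait for each resilver to finish:
zpool replace nvme-pool nvme-OLD1 nvme-BIG1
zpool status nvme-pool   # repeat the replace for every disk once resilvered
# After the last disk is replaced, the extra capacity becomes available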
 
