Adding new disks to Raidz

ipete
New Member
Mar 8, 2022
So I'm trying to do something I thought was simple. I have an SSD pool with 4 identical 1.6 TB Dell enterprise drives, and I want to add two more. Here's the output of the pool status for that pool:

pool: fastsas5
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
  fastsas5 ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
      scsi-35001e82002774394 ONLINE 0 0 0
      scsi-35001e8200276d37c ONLINE 0 0 0
      scsi-35001e8200276f06c ONLINE 0 0 0
      scsi-35001e8200276ee24 ONLINE 0 0 0

errors: No known data errors

The two drives I'm trying to add are scsi-35001e8200276d3f4 and scsi-35001e8200276d068, using this command:
zpool add fastsas5 raidz /dev/disk/by-id/scsi-35001e8200276d3f4 /dev/disk/by-id/scsi-35001e8200276d068

but that gives the error:
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses 4-way raidz and new vdev uses 2-way raidz

Where am I going wrong?
 
How do you want to add them? As a new vdev or to make the 4 disk raidz1 a 6 disk raidz1?
 
Sorry, yes. I want to expand it as extra storage. I won't need more than one drive for parity, so I just want to add the new disks as extra capacity.
 
Since last year it is possible to expand a raidz, but if you want the maximum possible capacity you still need to rewrite all the existing data on that pool, because otherwise the old data still uses the old data-to-parity ratio. And if you run VMs on that pool you might need to recreate the guests as well, since with more disks you possibly need to increase your volblocksize so you don't lose too much capacity to padding overhead.
So adding new disks can still mean a lot of work.
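To give a rough idea of what "rewriting the data" can look like: one common approach (sketched here with placeholder dataset names, not anything from this thread) is to copy a dataset with zfs send/receive so its blocks get written again with the current pool geometry, then swap it into place:
Code:
# placeholder names: fastsas5/vmdata stands for an example dataset/zvol
zfs snapshot fastsas5/vmdata@rewrite
zfs send fastsas5/vmdata@rewrite | zfs receive fastsas5/vmdata_new
# verify the copy and stop anything using the old dataset, then swap:
zfs destroy -r fastsas5/vmdata
zfs rename fastsas5/vmdata_new fastsas5/vmdata
zfs destroy fastsas5/vmdata@rewrite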
 
Sorry, yes. I want to expand it as extra storage. I won't need more than one drive for parity, so I just want to add the new disks as extra capacity.
Then you should look at the zpool attach command, as the zpool add command will create an additional vdev. What you want, IIUC, is
Code:
NAME STATE READ WRITE CKSUM
  fastsas5 ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
      scsi-35001e82002774394 ONLINE 0 0 0
      scsi-35001e8200276d37c ONLINE 0 0 0
      scsi-35001e8200276f06c ONLINE 0 0 0
      scsi-35001e8200276ee24 ONLINE 0 0 0
      newdisk1
      newdisk2
and not
Code:
NAME STATE READ WRITE CKSUM
  fastsas5 ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
      scsi-35001e82002774394 ONLINE 0 0 0
      scsi-35001e8200276d37c ONLINE 0 0 0
      scsi-35001e8200276f06c ONLINE 0 0 0
      scsi-35001e8200276ee24 ONLINE 0 0 0
    new raidz
      newdisk1
      newdisk2
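To end up with that first layout, assuming your ZFS version already ships the raidz expansion feature (so zpool attach accepts a raidz vdev as the target), the two new disks from this thread would be attached to the existing raidz1-0 vdev one at a time, roughly like this:
Code:
# requires raidz expansion support; attach one disk, let the expansion finish, then the next
zpool attach fastsas5 raidz1-0 /dev/disk/by-id/scsi-35001e8200276d3f4
# watch "zpool status fastsas5" until the expansion completes, then:
zpool attach fastsas5 raidz1-0 /dev/disk/by-id/scsi-35001e8200276d068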

Also keep in mind what @Dunuin mentioned. Performance will be a bit worse this way.
 
I clearly need to rethink how this works, coming from Unraid. Much simpler process for adding drives ;) OK, then I think I will wait for my new drive cage/caddies to arrive, as I will be adding 8 more of these drives then. What would be the correct way of adding these drives and rewriting the data/parity?
 
Also keep in mind that adding more disks won't increase your IOPS performance when using any kind of raidz. A 12 disk raidz won't be faster for random reads/writes than a single disk. So if you need some IOPS, or don't want to wait days/weeks for a resilver to finish, it might be better to stripe several smaller raidz1s. And the bigger your pool gets, the more likely it is that a disk fails and you need to resilver, and the longer that resilver might take (up to months!!! during which your pool would be basically unusable). So it might be better to use raidz2 to be on the safe side and to stripe several smaller raidzs together so the resilver will be faster.

For example something like this (4x raidz1 of 3 disks striped together), which would give you 4 times the IOPS, where 1 to 4 disks may fail, with 33% parity loss:
Code:
NAME STATE READ WRITE CKSUM
  fastsas5 ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
      disk1
      disk2
      disk3
    raidz1-1 ONLINE 0 0 0
      disk4
      disk5
      disk6
    raidz1-2 ONLINE 0 0 0
      disk7
      disk8
      disk9
    raidz1-3 ONLINE 0 0 0
      disk10
      disk11
      disk12

Or 3x raidz1 of 4 disks striped together, which would give you 3 times the IOPS, where 1 to 3 disks may fail, with 25% parity loss:
Code:
NAME STATE READ WRITE CKSUM
  fastsas5 ONLINE 0 0 0
    raidz1-0 ONLINE 0 0 0
      disk1
      disk2
      disk3
      disk4
    raidz1-1 ONLINE 0 0 0
      disk5
      disk6
      disk7
      disk8
    raidz1-2 ONLINE 0 0 0
      disk9
      disk10
      disk11
      disk12
In this case you could keep your existing raidz1 and add one or two more raidz1s of 4 disks later if you need more capacity.
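For illustration, adding another 4 disk raidz1 vdev to the existing pool would be a plain zpool add; the disk names below are placeholders for the real /dev/disk/by-id paths:
Code:
# adds a second 4 disk raidz1 vdev, which gets striped with the existing raidz1-0
zpool add fastsas5 raidz1 \
  /dev/disk/by-id/newdisk1 /dev/disk/by-id/newdisk2 \
  /dev/disk/by-id/newdisk3 /dev/disk/by-id/newdisk4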

Or 2x raidz2 of 6 disks striped together, which would give you 2 times the IOPS, where 2 to 4 disks may fail, with 33% parity loss:
Code:
NAME STATE READ WRITE CKSUM
  fastsas5 ONLINE 0 0 0
    raidz2-0 ONLINE 0 0 0
      disk1
      disk2
      disk3
      disk4
      disk5
      disk6
    raidz2-1 ONLINE 0 0 0
      disk7
      disk8
      disk9
      disk10
      disk11
      disk12
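Just as a sketch, creating such a 2x raidz2 layout from scratch (placeholder pool and disk names; zpool create only works on empty disks, so this is not something to run against the existing fastsas5 pool) would look like:
Code:
# a new pool made of two striped 6 disk raidz2 vdevs
zpool create newpool \
  raidz2 disk1 disk2 disk3 disk4 disk5 disk6 \
  raidz2 disk7 disk8 disk9 disk10 disk11 disk12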
 
Wow, great answers @aaron and @Dunuin! Thanks so much for bearing with me while I get used to Proxmox. I think that keeping this raidz and adding two more identical ones is the best way forward, as in your suggestion #2. I guess what you are saying is that the way I set up my RAIDZ now gives me 3x IOPS and 25% parity loss on my existing RAIDZ?
 
Right now with your 4 disk raidz1 you got 25% parity loss and just 1x IOPS performance. When striping together multiple 4 disk raidz1s you multiply your capacity, IOPS performance and throughput by the number of raidz1s your pool consists of. So adding another 4 disks as a raidz1 would double capacity/performance, adding 4 more disks after that would triple it, and so on.

Code:
                   12 disk raidz1   3x 4 disk raidz1 striped
Parity loss:       8.3 %            25 %
Throughput:        11x              9x
IOPS:              1x               3x
Drives may fail:   1                1 - 3 (it depends which disks fail... if two disks of the same raidz fail, all data of all 12 drives is lost)
Resilver time:     bad              ok
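To put rough numbers on that with the 1.6 TB disks from this thread (ignoring metadata and padding overhead):
Code:
12 disk raidz1:            11 x 1.6 TB = 17.6 TB usable  (1 of 12 disks for parity ->  8.3 % lost)
3x 4 disk raidz1 striped:   9 x 1.6 TB = 14.4 TB usable  (3 of 12 disks for parity -> 25 % lost)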
 
What's the performance difference for large/small file reads/writes between the above two setups (12 disk raidz1 vs 3x 4 disk raidz1 striped)? I'm just so used to throughput being positively correlated with IOPS, but apparently that intuition fails here.
 
For small files you need IOPS, for big files throughput. With "3x 4 disk raidz1 striped" you get 3 small raids that can work in parallel, so you get better IOPS. With "12 disk raidz1" you just get one big raid which can only do one thing at a time. So for IOPS you basically want to stripe as many small raidzs or mirrors as possible.
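If you want to see the difference on your own pool, here is a quick sketch with fio (the file path and sizes are just placeholders, and note that the ZFS ARC cache can skew read results):
Code:
# small random reads: limited by IOPS
fio --name=randread4k --filename=/fastsas5/fio.test --size=4G --bs=4k --rw=randread \
    --ioengine=libaio --iodepth=16 --runtime=60 --time_based --group_reporting
# large sequential reads: limited by throughput
fio --name=seqread1m --filename=/fastsas5/fio.test --size=4G --bs=1M --rw=read \
    --ioengine=libaio --iodepth=8 --runtime=60 --time_based --group_reporting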
 