Adding 2 new drives to expand a raidz2 pool

dominusdj

New Member
Aug 22, 2025
Hi everyone,
I need to expand the current zpool created in RAIDZ2 with two new hard drives.

What's the correct procedure?

Thanks a lot.


1757097351726.png
1757097303254.png
 
Hello, thanks for the reply!
I use the latest version of Proxmox.

1757107016500.png

Tomorrow I'll read carefully what you wrote.
This is the main server of the company where I work.
I have to be sure of what I do :)
 
> This is the main server of the company where I work. I have to be sure of what I do

MAKE SURE YOU HAVE BACKUPS.

I'm not sure whether you can add two drives to an existing RAIDZ2 even with ZFS 2.3, but as @news posted, adding another RAIDZ2 vdev with the same number of drives (the drives can be the same size as those in the existing vdev, or larger) has been supported for years.

If you only have +2 drives and not 6, you could create a separate pool and mirror them.
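A separate two-drive mirror pool could be created along these lines. This is only a sketch: the pool name "mirrorpool" and the device names are hypothetical placeholders, so check your real device names under /dev/disk/by-id/ first.

```shell
# Create a new pool "mirrorpool" from two whole disks, referenced by
# their stable by-id names (ashift=12 assumes 4K-sector drives).
# WARNING: this wipes both disks - double-check the device names!
zpool create -o ashift=12 mirrorpool mirror \
  /dev/disk/by-id/ata-EXAMPLE_SERIAL_1 \
  /dev/disk/by-id/ata-EXAMPLE_SERIAL_2
```

A mirror gives you half the raw capacity but survives the loss of one of the two drives.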
 
I ran some checks for you on a Proxmox VE server, version 9.0.6.

Create some file-based drives (run this inside /rpool/test, since the zpool commands below reference that path):
Code:
#!/bin/bash
# work in /rpool/test so the paths match the zpool commands below
mkdir -p /rpool/test
cd /rpool/test || exit 1

# drivea: six 1 GB sparse files
for n in 0 1 2 3 4 5; do
  fallocate --length 1GB drivea$n
done

# driveb: six 1 GB sparse files
for n in 0 1 2 3 4 5; do
  fallocate --length 1GB driveb$n
done

Create a new ZFS pool, datapool:
Code:
zpool create -o ashift=12 datapool raidz2 /rpool/test/drivea0 /rpool/test/drivea1 /rpool/test/drivea2 /rpool/test/drivea3 /rpool/test/drivea4 /rpool/test/drivea5

Check the pool datapool:
Code:
$ zpool status datapool
# report
 pool: datapool
 state: ONLINE
config:
    NAME                     STATE     READ WRITE CKSUM
    datapool                 ONLINE       0     0     0
      raidz2-0               ONLINE       0     0     0
        /rpool/test/drivea0  ONLINE       0     0     0
        /rpool/test/drivea1  ONLINE       0     0     0
        /rpool/test/drivea2  ONLINE       0     0     0
        /rpool/test/drivea3  ONLINE       0     0     0
        /rpool/test/drivea4  ONLINE       0     0     0
        /rpool/test/drivea5  ONLINE       0     0     0
#
$ zpool list datapool -v
# report
NAME                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
datapool                 5.50G  1.30M  5.50G        -         -     0%     0%  1.00x    ONLINE  -
  raidz2-0               5.50G  1.30M  5.50G        -         -     0%  0.02%      -    ONLINE
    /rpool/test/drivea0   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea1   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea2   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea3   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea4   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea5   954M      -      -        -         -      -      -      -    ONLINE

Expand the pool datapool:
Code:
# add a new vdev zfs raidz2 with 6 drives
zpool add datapool raidz2 /rpool/test/driveb0 /rpool/test/driveb1 /rpool/test/driveb2 /rpool/test/driveb3 /rpool/test/driveb4 /rpool/test/driveb5

Code:
$ zpool status datapool
# report
 pool: datapool
 state: ONLINE
config:
    NAME                     STATE     READ WRITE CKSUM
    datapool                 ONLINE       0     0     0
      raidz2-0               ONLINE       0     0     0
        /rpool/test/drivea0  ONLINE       0     0     0
        /rpool/test/drivea1  ONLINE       0     0     0
        /rpool/test/drivea2  ONLINE       0     0     0
        /rpool/test/drivea3  ONLINE       0     0     0
        /rpool/test/drivea4  ONLINE       0     0     0
        /rpool/test/drivea5  ONLINE       0     0     0
      raidz2-1               ONLINE       0     0     0
        /rpool/test/driveb0  ONLINE       0     0     0
        /rpool/test/driveb1  ONLINE       0     0     0
        /rpool/test/driveb2  ONLINE       0     0     0
        /rpool/test/driveb3  ONLINE       0     0     0
        /rpool/test/driveb4  ONLINE       0     0     0
        /rpool/test/driveb5  ONLINE       0     0     0

#
$ zpool list datapool -v
# report
NAME                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
datapool                   11G  1.79M  11.0G        -         -     0%     0%  1.00x    ONLINE  -
  raidz2-0               5.50G  1.64M  5.50G        -         -     0%  0.02%      -    ONLINE
    /rpool/test/drivea0   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea1   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea2   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea3   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea4   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/drivea5   954M      -      -        -         -      -      -      -    ONLINE
  raidz2-1               5.50G   156K  5.50G        -         -     0%  0.00%      -    ONLINE
    /rpool/test/driveb0   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/driveb1   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/driveb2   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/driveb3   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/driveb4   954M      -      -        -         -      -      -      -    ONLINE
    /rpool/test/driveb5   954M      -      -        -         -      -      -      -    ONLINE

Done. It works as I wrote.


I'm not sure I understand what I need to do; unfortunately, I'm not that good with command-line tools, but I can try.

Could you explain better, please? I know it's a lot to ask, but I don't want to make any mistakes.

Thanks.
 
> > This is the main server of the company where I work. I have to be sure of what I do
>
> MAKE SURE YOU HAVE BACKUPS.
>
> I'm not sure whether you can add two drives to an existing RAIDZ2 even with ZFS 2.3, but as @news posted, adding another RAIDZ2 vdev with the same number of drives has been supported for years.
>
> If you only have +2 drives and not 6, you could create a separate pool and mirror them.

Luckily, Proxmox OS is installed on two mirrored disks, so I can afford to make a mistake with the virtual machine pool since I have backups.

The problem is that if I make a mistake, it takes me two days to restore everything.



Yes, I only have 2 drives to add to an existing pool of 6 drives in raidz2

What should I do with the method you mentioned?

Thanks
 
And your example is not right: ultimately we would be touching your real file system, and if we do something bad, it cannot be reversed. You must have backups. For all data, you must have backups.
Yes, you're right. If I can't understand the various steps perfectly, maybe I should destroy the pool and recreate a new one in raidz2 with all the disks, including the new ones.
 
Please read the OpenZFS manual. I showed you some tests that pose no risk to your real system: files are used to simulate real drives, and the example commands in my post will work as shown. At some point you will have to adapt some commands to add your two discs to the ZFS raidz2 vdev. These are also only files, each one gigabyte in size.
Yes, I know, I just need to understand which commands I need to use in my case.

Unfortunately, I don't have time to read the manual right now. I need to take action because I find myself in this situation:


1757145841795.png

thanks again
 
> # add a new vdev zfs raidz2 with 6 drives
> zpool add datapool raidz2 /rpool/test/driveb0 /rpool/test/driveb1 /rpool/test/driveb2 /rpool/test/driveb3 /rpool/test/driveb4 /rpool/test/driveb5
You add another 6 drives, but is it possible to add only 2, as in my case, to a raidz2 pool made of 6 drives, for a total of 8?
 
> Unfortunately, I don't have time to read the manual right now.
If you read the manual you'll find that it is possible to add a drive (or two, one after the other) to an existing RaidZ2 vdev. The search term is RaidZ expansion.

This is possible since ZFS version 2.3 and as already mentioned that's the version used in current Proxmox systems.

I won't show the relevant command here - you need to read "the manual" to know what you are doing!

Start here: man zpool-attach and there: https://openzfs.org/w/images/5/5e/RAIDZ_Expansion_2023.pdf
 
> If you read the manual you'll find that it is possible to add a drive (or two, one after the other) to an existing RaidZ2 vdev. The search term is RaidZ expansion.
>
> This is possible since ZFS version 2.3, and as already mentioned, that's the version used in current Proxmox systems.
>
> I won't show the relevant command here - you need to read "the manual" to know what you are doing!
>
> Start here: man zpool-attach and there: https://openzfs.org/w/images/5/5e/RAIDZ_Expansion_2023.pdf

Wow... this PDF is very clear!
How can I add the first drive if my output is this?
1757151120508.png

I don't have /var/.... like in the guide


1757151277636.png



1757151203512.png



Sorry for the stupid question; I'm a beginner with Proxmox and ZFS :(
 

I gave you advice, with some code, for testing these things with ZFS without any possible damage to your system!
So let the job be done by someone who wants to learn something more.
Speak to someone who knows the system and ZFS.

See my post #4 above.

The person who managed the system has left, and now, unfortunately, I find myself in this situation and want to learn and try to solve the problem.
Currently, if I don't do it, no one can.

I don't want to waste anyone's time and patience; I'm just having trouble and looking for help in the community.
 
> Wow... this PDF is very clear!
Well, it is exhaustive. And yes, there are more details in it than you need to know. You may prefer to read @news's link in #4.

> How can I add the first drive if my output is this?
man zpool-attach shows the required command in a very compact way.

To know which devices you have, look here (and/or in the neighbor-folders): ls -Al /dev/disk/by-id/ . You'll recognize your "ata-KINGSTON..." and also the new drives.
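Putting those pieces together, the expansion could be sketched roughly as follows. This is only a sketch under assumptions: "tank" and the serial numbers are hypothetical placeholders, and you should read man zpool-attach and verify your own pool and vdev names (from zpool status) before running anything on a production system.

```shell
# List stable device names; your existing "ata-KINGSTON..." drives
# and the new, still-unpartitioned drives will all show up here.
ls -Al /dev/disk/by-id/

# RAIDZ expansion (ZFS >= 2.3): attach ONE new drive to the existing
# raidz2 vdev. "tank" and "raidz2-0" are placeholders - use your pool
# name and the vdev name shown by "zpool status".
zpool attach tank raidz2-0 /dev/disk/by-id/ata-EXAMPLE_SERIAL_1

# Wait for the expansion to finish (watch "zpool status"),
# then repeat for the second drive.
zpool attach tank raidz2-0 /dev/disk/by-id/ata-EXAMPLE_SERIAL_2
```

Using the by-id names keeps the pool's member names stable across reboots, unlike short sdX names.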


Edit/added: as already mentioned, make sure to have backups and know how to restore them! I test things like this before I run them on a productive system. In my homelab(!) I have a virtual cluster for "dangerous" experiments...
 
It works! Thanks everybody!

The highlighted error is my mistake. Unfortunately, with the first drive I entered "zpool attach test raidz2-0 sdb" instead of the disk name
"ata-KINGSTON_SEDC600M1920G_xxx", as I did with the second drive.

1757169140150.png


I tried using the "remove" function and then the "replace" function to fix it and make it match the others, but I didn't succeed.
I don't think it affects functionality, so I think I'll keep it as is.


The thing I can't explain is this:

1757169300806.png

The pool was immediately full again; now I'm waiting for the "trim" to finish before running a "scrub" to see if the situation improves.

If anyone has any suggestions, please let me know.

Thanks again
 
This is no problem; the naming can easily be fixed later.
Please show the full output of zpool status <pool> and zpool list -v <pool>.
With lsblk -o+FSTYPE,MODEL and ls -lA /dev/disk/by-id/ you can see that there are several names for one real device.

Here is the screen you requested :)

1757170059905.png


OK, I know the disk ID, but I don't know which command replaces "wwn-0x50026b76872d9748" with "ata-KINGSTON_SEDC600M1920G_50026B76872D9748".
 
I followed what you told me, but the server keeps saying "pool is busy." I probably need to stop other services manually.

If having a different name for that disk doesn't affect its current and/or future operation, perhaps it's best to avoid the risk and keep it as is.

Do you have any ideas for the abnormal space consumption on the RAIDZ2 pool?
 
1757176440232.png
After that, the pool size remains the same.




The pool size is more or less right (in my opinion).

This is before adding the 2 new drives

1757176496950.png
about 1.9 TB × 4 ≈ 7.6 TB

This is after adding the 2 new drives

1757176744546.png
about 1.9 TB × 6 ≈ 11.4 TB


There is a little less total space than there should be, but the strangest thing is the usage, which has increased a lot without me adding any VMs or data.
 
> I followed what you told me, but the server keeps saying "pool is busy." I probably need to stop other services manually.
>
> If having a different name for that disk doesn't affect its current and/or future operation, perhaps it's best to avoid the risk and keep it as is.
>
> Do you have any ideas for the abnormal space consumption on the RAIDZ2 pool?
https://github.com/kneutron/ansitest/blob/master/drivemap.sh

The difference between wwn and the actual "disk name" is just a symlink in /dev; the important thing is not to have the pool using short disknames. Which as mentioned above, is fixed with a simple export/import.

Run the above script and it will show you all possible names for e.g. /dev/sdc
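The export/import fix for short disknames could look something like this ("tank" is a placeholder pool name; the pool is briefly unavailable during the export, so stop the VMs and services using it first):

```shell
# Re-import the pool using stable /dev/disk/by-id names instead of
# short sdX names. Nothing on the disks is changed - only the names
# the pool uses to reference its member devices.
zpool export tank
zpool import -d /dev/disk/by-id tank

# The vdev members should now show their by-id names
zpool status tank
```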

PROTIP - when you get a maintenance window, shutdown the system and take pictures of the physical disk labels and document what slot they're in using a spreadsheet; when a disk fails you'll need to know which one to replace right away. You can also print paper labels and put them outside the case matching the inside disk positions for easy reference.

An example of the above is given here: https://github.com/kneutron/largefiles

You've got a lot to learn about ZFS in a short time, so I would recommend reading the last 30 days of the Reddit ZFS forum to start with. Also standup a VM at home and play with it in a safe environment where you have snapshots.

After that, for proxmox learning I would recommend reading the last 30 days of both this and the Reddit forum. It may seem like a bit of a load, but you'll get through it and the education will be invaluable.

https://github.com/kneutron/ansitest

There are a lot of useful admin scripts in the ZFS and proxmox sections as well. Feel free to PM me or reply here if you have questions.

There are 2 vital things you should know:

o Use ashift=12 when creating a pool (unless you're SURE it will always be 512-sectors on the backing storage and NO replacement disks will be 4k-sector)

o If you're building a new pool, start with raidz2 (or mirrors; the important thing to remember = skip raidz1 with modern large disk sizes) and don't go above ~10 disks per vdev unless you're doing DRAID.

https://wintelguy.com/zfs-calc.pl
^ check this out before you build any new pools

Try deleting old snapshots to free up disk space.

https://github.com/kneutron/ansitest/tree/master/ZFS
( search page for "snap" )
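Checking for and cleaning up old snapshots can be sketched like this (the dataset and snapshot names are placeholders; zfs destroy is irreversible, so always dry-run it first):

```shell
# List all snapshots in the pool and the space each one holds
zfs list -t snapshot -o name,used,creation -r datapool

# Break down where each dataset's space goes:
# live data vs. snapshots vs. children vs. refreservation
zfs list -o space -r datapool

# Dry-run the destroy first (-n: no-op, -v: show what would be removed)
zfs destroy -nv datapool/vm-100-disk-0@old-snapshot

# Then destroy for real once you're sure
zfs destroy datapool/vm-100-disk-0@old-snapshot
```

The `zfs list -o space` output is often the fastest way to see whether snapshots, and not the VM disks themselves, are eating the pool.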
 
Thanks for your valuable advice and for the links that you wrote here.

I'll take some time to speed up my training; that's already on the agenda.

Okay, about the "symlink" issue: since it's not critical, I'll put it on hold for now.

More importantly, adding two drives to the RAIDZ2 didn't free up space; usage immediately increased without explanation.

As you can see, I don't have snapshots, and the sum of the virtual disks in each VM doesn't explain the total space used.



1757186691617.png


There must be something wrong, and I need to figure out what's causing it.

Thanks again.
 