How to add new mirrored disks to an existing zpool?

waffleireon
New Member
Nov 25, 2022
I'm currently running Proxmox on two 8TB drives using ZFS, and they're mirrored. My `zpool status` looks like this:

Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 1 days 03:13:29 with 0 errors on Mon Nov 14 03:37:30 2022
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            ata-ST8000VN004-2M2101_WSDOTAER-part3  ONLINE       0     0     0
            ata-ST8000VN004-2M2101_WSDOUING-part3  ONLINE       0     0     0

errors: No known data errors

I'm looking at getting some 16 or 18TB IronWolf Pro disks in the Black Friday sales, but before I do, I want to make sure that what I'm planning is actually possible. Ideally I'd get two 18TB drives, mirror them as another pair, and end up with 8 + 18 = 26TB of total storage. Then I could go to OMV, for example, and increase its disk size from my current 4TB to 10TB or more.
 
Yep, correct.

But keep in mind that a ZFS pool should always keep about 20% free space. So it's more like (8 + 18) * 80% = 20.8TB of really usable storage.
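As a quick sanity check on that rule of thumb (the 80% figure is a guideline for keeping ZFS performance healthy, not a hard limit), the arithmetic works out like this:

```shell
# Rule-of-thumb usable capacity: raw mirror capacity * 80%
# (8TB mirror + 18TB mirror = 26TB raw; values in TB)
awk 'BEGIN { printf "%.1f\n", (8 + 18) * 0.8 }'
# -> 20.8
```

Note that drive vendors quote decimal terabytes, so the sizes ZFS reports (in TiB) will come out somewhat lower still.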
 
If I'm understanding it right, would I be adding to the rpool by adding another mirror, most likely mirror-1? Do you know where I can find instructions to do this?
See the "zpool add" command: https://openzfs.github.io/openzfs-docs/man/8/zpool-add.8.html
It also gives an example:

Example 1: Adding a Mirror to a ZFS Storage Pool

The following command adds two mirrored disks to the pool tank, assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool.
# zpool add tank mirror sda sdb
 
Do I need to do anything special before running the "zpool add" command? For example, do I need to pre-mirror the two new drives, or will they be mirrored by "zpool add"? And since the existing pool is the boot pool, do I need to do anything for that?

Or is it really as simple as just doing "zpool add" and it should all work?
 
Do I need to do anything special before running the "zpool add" command? For example, do I need to pre-mirror the two new drives, or will they be mirrored by "zpool add"?
If you use the correct command, then no. "zpool add tank mirror sda sdb" will already add the two disks as a mirror; that is what the "mirror sda sdb" part means.
And since the existing pool is the boot pool, do I need to do anything for that?
It depends. If you want to be able to boot from the new disks as well, you would need to manually partition the new disks first, write the bootloader to them, and so on. If you don't care about that, because you already have enough disks with a bootloader, then you can just run the zpool add command.
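For reference, a sketch of those boot-preparation steps on a Proxmox VE system, using the standard Proxmox layout (partition 2 as the ESP, partition 3 for ZFS). The device names /dev/sda, /dev/sdb (new disks) and /dev/sdc (existing boot disk) are assumptions taken from this thread; verify yours with lsblk first, since sgdisk -R overwrites the target's partition table:

```shell
# DANGER: destructive. /dev/sda = new disk, /dev/sdc = existing boot disk (assumed).
# Copy the partition layout from an existing boot disk to the new one:
sgdisk /dev/sdc -R /dev/sda
# Give the new disk its own random GUIDs so it doesn't clash with the source:
sgdisk -G /dev/sda
# Format and register the new ESP (partition 2) with the Proxmox boot tool:
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
# Repeat the three steps above for /dev/sdb, then add the *third* partitions
# (not the whole disks) to the pool:
zpool add rpool mirror /dev/sda3 /dev/sdb3
```

This mirrors the procedure the Proxmox admin guide describes for replacing a failed boot disk; skipping it and adding whole disks works too, those disks just won't be bootable.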
 
Ok, that makes sense. One last thing... I noticed the existing disks in the existing mirror have long names (as seen in my zpool status in my original post), whereas the new disks I'd be adding are simply "sda" and "sdb"; is there a different ID I should be using to add the new disks? These are the first few lines of "lsblk":

Code:
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0 14.6T  0 disk
sdb        8:16   0 14.6T  0 disk
sdc        8:32   0  7.3T  0 disk
├─sdc1     8:33   0 1007K  0 part
├─sdc2     8:34   0  512M  0 part
└─sdc3     8:35   0  7.3T  0 part
sdd        8:48   0  7.3T  0 disk
├─sdd1     8:49   0 1007K  0 part
├─sdd2     8:50   0  512M  0 part
└─sdd3     8:51   0  7.3T  0 part

sda and sdb are the new disks, sdc and sdd are the existing ones. There's a whole bunch of zdNNpY disks under that too.
 
Yes, it would be best to add them with unique IDs, like the ones shown in your first post. You can run "ls -la /dev/disk/by-id" to see the disk IDs, and then use the IDs that refer to sda and sdb.

So something like this:
zpool add rpool mirror /dev/disk/by-id/IdOfYourSda /dev/disk/by-id/IdOfYourSdb
 
Aha, I see... makes sense that the existing disks have 'part3' on them since they're partitioned, but the new ones aren't. I'm guessing that's not an issue though.
 
Yep, the existing disks use the 1st and 2nd partitions for booting, and only the third partitions are used for ZFS.
But you don't have to partition the disks you want to add. If you leave them unpartitioned and just tell ZFS to use the whole disk, ZFS will partition them on its own, creating a 1st and a 9th partition, where the 1st partition is used for the ZFS pool.
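Putting the thread together, the whole-disk route is just the earlier command plus a verification step (the IdOfYourSda/IdOfYourSdb names are placeholders from earlier in the thread, not real serials — substitute your actual /dev/disk/by-id entries):

```shell
# Add the two new disks as a second mirror vdev (whole disks, not bootable):
zpool add rpool mirror /dev/disk/by-id/IdOfYourSda /dev/disk/by-id/IdOfYourSdb
# Verify that a new mirror-1 vdev appeared and capacity grew:
zpool status rpool
zpool list rpool
```

The extra space is available to all datasets in rpool immediately; no resilver is needed because the new mirror starts out empty.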
 
