ZFS - expanding pool after replacing with larger disks

wywywywy

Hi all,

Basically I had 2x 240GB SSDs in a ZFS mirror in my Proxmox 6.3 box, and I replaced them with 1x 500GB (/dev/sda) and 1x 750GB (/dev/sdb) SSDs, by replicating the partition tables, randomising the GUIDs, replacing the ZFS disks, then resilvering.

And that worked fine with no issue. System is up and running again.

But the total size of the pool did not expand - I was expecting it to be 500GB (the smaller of the two disks).

Am I right in saying that to expand the pool, I have to do the following?

1. Delete partition 9 from both /dev/sda and /dev/sdb
2. Expand partition 2 of /dev/sda (the 500GB SSD) to 499GB
3. Write down the total number of sectors in /dev/sda2 (see the sketch after this list)
4. Expand partition 2 of /dev/sdb (the 750GB SSD) to exactly the same number of sectors
5. Run zpool online -e on the pool for /dev/sda2, then for /dev/sdb2
6. Reboot for good measure
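
For step 3, I'd read off the exact sector counts with something like this (just a sketch - device names as above, and sgdisk comes from the gdisk package):

Code:
# print each partition table with exact start/end sectors
sgdisk -p /dev/sda
sgdisk -p /dev/sdb
# or just the size of a single partition in 512-byte sectors
blockdev --getsz /dev/sda2
blockdev --getsz /dev/sdb2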

Does this sound right? Can I do it online on the PVE box? Do I need to recreate partition 9 at all?

Thanks.
 
That sounds like a really complicated way of doing what you want.
Here is what I personally would do if I were in the state you are now:
Pull out one disk and clear it completely: no GPT, no partitions.
Put it back in, and use the string given by ls /dev/disk/by-id to replace the disk.
For that, just use zpool replace <poolname> <id of removed device> <id of new device>.
Let it resilver. Then do the same thing again for the other disk.
After that, just reboot.

If you can't reboot because it's mission-critical stuff, then try zpool online -e <poolname> <devicename> and do the same for the other SSD.

EDIT: And please do yourself a favour and don't use sdX or similar to identify disks in a pool. ZFS should take care of that, but better safe than sorry, so use the IDs of the disks, which are provided by ls /dev/disk/by-id as written above.
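
Roughly, the flow per disk would look something like this (just a sketch - the pool name and the by-id strings are placeholders you have to swap in from your own zpool status and ls /dev/disk/by-id output):

Code:
# identify the current pool member and the new disk by their stable IDs
zpool status <poolname>
ls -l /dev/disk/by-id/

# wipe any old signatures and the partition table from the new disk (destructive!)
wipefs -a /dev/disk/by-id/<id-of-new-disk>
sgdisk --zap-all /dev/disk/by-id/<id-of-new-disk>

# swap it into the mirror and watch the resilver
zpool replace <poolname> <id-of-old-device> <id-of-new-disk>
zpool status -v <poolname>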
 
Good point about using IDs, thank you. The installer originally used sdX and I've stuck with it, but using IDs is definitely way better.

As for clearing & replacing & resilvering again - why is that necessary? The disks have just been cleared & replaced & resilvered when upgrading from the old 240GB disks to these.
 
As for clearing & replacing & resilvering again - why is that necessary?
To have a "clean base", as I don't know what exactly you have done to the disks before.
You said you replicated the tables and such, so yeah... In my opinion that's completely unnecessary. If you want to expand, then just put clean disks in, with no GPT or anything on them, and let ZFS handle all of the partitioning. That's why I said: clean the first disk, replace it by its ID, clean as it is, and do the same again for the second drive after resilvering. ZFS takes care of the formatting and partition layout.

EDIT again: I do the "cleaning" on a Windows machine, as I know the commands for that. Just plug the drive in, open PowerShell as Admin and run:
Code:
diskpart
list disk
sel disk <number of the disk>
clean
exit

Shut down the machine, remove the drive and put it back into the server. Done.
Proceed with the instructions from above.
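
If you don't want to move the drive to a Windows box, a rough Linux equivalent would be something like this (assuming /dev/sdX is the drive to clear, it holds nothing you still need, and no partition on it is mounted or part of a pool):

Code:
# destructive: removes all filesystem signatures and the GPT/MBR from the disk
wipefs -a /dev/sdX
sgdisk --zap-all /dev/sdX
# verify that the disk now shows up empty
lsblk /dev/sdX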
 
I see. I'll give it a go, thank you.

I was cloning the partition table etc. because I was following the instructions from Proxmox. I thought it was Proxmox specific, as I have never had to do that on FreeNAS.

One potential issue with letting ZFS handle the formatting and partitioning is GRUB. I assume I can just grub-install as usual after resilvering, before rebooting?
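
Roughly what I have in mind for that step (just a sketch, assuming a legacy BIOS install booting via GRUB - UEFI/systemd-boot setups use Proxmox's own boot tooling instead, so check the docs for your version):

Code:
# after the resilver finishes, reinstall the boot loader onto each new disk
grub-install /dev/sda
grub-install /dev/sdb
update-grub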
 
Ohh, so you want to extend the rpool?
Well, I don't know if the instructions work as intended then. I thought you wanted to expand a separate pool besides rpool. That makes things complicated, at least for me, as I haven't done that at all.
 
I have already tried zpool online but that didn't work.

I'm quite sure that what I need to do is to increase the size of partition 2 on both disks. But I want to do it in a way that's safe and so that the partitions end up exactly the same size on both disks (which would be easier if they were the same model). Also, I don't know whether I need to re-create partition 9.

Here are all the outputs:

lsblk
Code:
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda         8:0    1 698.7G  0 disk
├─sda1      8:1    1  1007K  0 part
├─sda2      8:2    1 223.6G  0 part
└─sda9      8:9    1     8M  0 part
sdb         8:16   1 447.1G  0 disk
├─sdb1      8:17   1  1007K  0 part
├─sdb2      8:18   1 223.6G  0 part
└─sdb9      8:25   1     8M  0 part

zpool status
Code:
  pool: rpool
 state: ONLINE
  scan: resilvered 138G in 0 days 00:22:55 with 0 errors on Sat Feb 13 21:20:43 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

zpool list
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   222G   135G  86.6G        -         -    84%    60%  1.00x    ONLINE  -

fdisk /dev/sda
Code:
Disk /dev/sda: 698.7 GiB, 750156374016 bytes, 1465149168 sectors
Disk model: Crucial_CT750MX3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 47183FA4-DFF3-469F-9A8E-BE5C399B1DA3

Device         Start       End   Sectors   Size Type
/dev/sda1         34      2047      2014  1007K BIOS boot
/dev/sda2       2048 468845709 468843662 223.6G Solaris /usr & Apple ZFS
/dev/sda9  468845710 468862094     16385     8M Solaris reserved 1

fdisk /dev/sdb
Code:
Disk /dev/sdb: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Disk model: Crucial_CT480M50
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 66C5A709-9280-4886-8DB2-58B00E15CD24

Device         Start       End   Sectors   Size Type
/dev/sdb1         34      2047      2014  1007K BIOS boot
/dev/sdb2       2048 468845709 468843662 223.6G Solaris /usr & Apple ZFS
/dev/sdb9  468845710 468862094     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.
 
In theory (and that's all it is, a theory), you can just increase sda2 and sdb2 to your desired size and leave the rest alone. After that, try again with zpool online -e rpool sda2. Repeat the command for sdb2.
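
In shell terms that would look roughly like this (a sketch based on the outputs above, not a tested recipe - partition 9 sits right behind partition 2, so it has to be removed or moved before partition 2 can grow, and back everything up first):

Code:
# grow the ZFS partition on the first disk (interactive parted session)
parted /dev/sda
  (parted) rm 9              # remove the 8M Solaris reserved partition
  (parted) resizepart 2 100% # grow partition 2 to the end of the disk
  (parted) quit
partprobe /dev/sda           # make the kernel re-read the partition table

# tell ZFS to use the new space on that mirror member
zpool online -e rpool sda2

# repeat the same steps for /dev/sdb, then:
zpool online -e rpool sdb2
zpool list                   # SIZE should now reflect the smaller of the two disks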
 
Hello.

I have the same question.

I did a standard Proxmox 7.3 installation with ZFS RAID1 (on the boot disks, with the ZFS rpool as the system root) on two 250GB disks, and I've now swapped them both for 512GB disks. These are the system's boot disks: two of equal size, but larger than the previous ones.

But the rpool partitions remained small.

How is it done, then? Should one expand the partition on each disk manually first? What's the best way to expand partitions in Proxmox? Do I need to restart the nodes afterwards?

Thanks!
 
Yes, you need to expand the partitions first. And don't forget to back everything up before you start. I would try booting a GParted ISO to expand the partitions.
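
Also worth knowing: ZFS pools have an autoexpand property, so an alternative to running zpool online -e per device would be roughly this (a sketch, with rpool assumed as the pool name):

Code:
# with autoexpand=on the pool can grow by itself once every device
# in the vdev (here: both mirror partitions) has been enlarged
zpool set autoexpand=on rpool
zpool get autoexpand rpool   # verify the property
# after growing both partitions (e.g. from a GParted live ISO), check:
zpool list                   # SIZE should reflect the new space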
 
Thanks! Can I expand the partition on-the-fly in Proxmox? Is there any tool that lets me do this?
 
This is exactly what I also want to do. Has anyone done this? Can you explain the steps? Dealing with these disk tools is somewhat new to me.
 
