How to add a drive to a Proxmox Btrfs Pool

Tiscan

New Member
May 29, 2022
I just did this (and things appear to be working well) so I thought I would share. Feel free to provide critical feedback.

Also, full disclosure: I did get hints on this from Proxmox support, but they couldn't provide full assistance since I don't have that kind of license (the hints they gave were invaluable, though). Just wanted to call out how awesome that was. Buy a license if you are able; it's worth supporting this project.

This is all provided for informational purposes, so I encourage you to research the commands and what they do before executing them (aka, no warranty here ;)).

First of all, for the purposes of this guide we assume the pool currently consists of the following drives:

/dev/sda /dev/sdb /dev/sdc /dev/sdd

And that the new drive is

/dev/sde
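
If you are not sure which device name the new drive actually received, it is worth confirming that before running any of the commands below (the names used here are just the ones assumed for this example):

# List all block devices with size, model and serial, so the new empty drive is easy to spot
lsblk -o NAME,SIZE,MODEL,SERIAL

# The by-id links are another way to double-check you have the right /dev/sdX node
ls -l /dev/disk/by-id/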

First we are going to copy the partition table from one of the drives currently in the Pool to the new drive.

Format: sgdisk <source> -R <target>
Example: sgdisk /dev/sda -R /dev/sde
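
If you want to sanity-check the copy before going further, you can print both partition tables and compare them (device names as in the example above):

# Print the GPT of the source drive and the new drive; the partition layout should now match
sgdisk -p /dev/sda
sgdisk -p /dev/sde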

Now we need to randomize the GUIDs of the new drive. This is important: if we don't, both drives from the sgdisk copy will have the same GUIDs. Also, be very careful with the target of this command, because if you overwrite the GUIDs of a drive currently in your RAID your results would be... poor.

Format: sgdisk -G <target>
Example: sgdisk -G /dev/sde
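
To confirm the new drive now has its own identity, you can compare the disk GUIDs afterwards; they should no longer match:

# The "Disk identifier (GUID)" lines should now differ between the existing drive and the new one
sgdisk -p /dev/sda | grep -i "disk identifier"
sgdisk -p /dev/sde | grep -i "disk identifier"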

Next we probably want to make the new disk bootable.

Format: grub-install <target>
Example: grub-install /dev/sde
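
Whether this legacy GRUB install is what actually boots your host depends on whether it boots in BIOS or UEFI mode; a quick way to check (standard on any Linux system):

# If this directory exists the host booted via UEFI, otherwise it is using legacy BIOS boot
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"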

If you ever want to boot with UEFI in the future, these steps will be needed anyway, so we might as well do them now:

Format: proxmox-boot-tool format <new disk and partition> (this will always be partition 2, since we copied the partition table)
Format: proxmox-boot-tool init <new disk and partition> (this will always be partition 2)

Example: proxmox-boot-tool format /dev/sde2
Example: proxmox-boot-tool init /dev/sde2
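
Afterwards you can check that the new ESP was picked up:

# Lists the ESPs known to proxmox-boot-tool and whether they are set up for UEFI and/or GRUB
proxmox-boot-tool status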

Now we need to add the new drive to our Btrfs pool.

Format: btrfs device add <new drive and partition> <pool mount point> (because we copied the partition table, our Btrfs partition will always be partition 3)
Example: btrfs device add /dev/sde3 /
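
Before starting the balance you can verify the device really joined the pool (mount point / as in the example):

# Show all devices that are part of the Btrfs filesystem mounted at /
btrfs filesystem show /

# Per-device allocation; the new device will show almost nothing allocated until the balance runs
btrfs device usage /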

Now that our drive is added, we need to rebalance the data. Read to the end before running this: if you want to balance your metadata differently, there is another command you should use instead (see further below).

Format: btrfs balance start --full-balance <pool mount point>
Example: btrfs balance start --full-balance /

You can watch the status of the balance in a separate shell window via:

Format: btrfs filesystem usage -T <pool mount point>
Example: btrfs filesystem usage -T /
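
There is also a dedicated status command for a running balance, which pairs nicely with the usage overview above:

# Shows the progress of the currently running (or paused) balance on the filesystem mounted at /
btrfs balance status /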

Now here is where we can get fancy, assuming you have multiple drives and your pool is RAID10: we can change the way metadata is spread across the drives. You can run this instead of the command directly above, or run it afterwards. For the example here we have 5 drives, so I selected raid1c4 for the metadata.

Format: btrfs balance start -mconvert=<raid type> --full-balance <target pool>
Example: btrfs balance start -mconvert=raid1c4 --full-balance /
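
Once that balance finishes, you can confirm the metadata profile actually changed (in this example, Metadata should now report RAID1C4):

# Per-profile summary; look at the Data, Metadata and System lines
btrfs filesystem df /

# Or the more detailed per-device table
btrfs filesystem usage -T /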

That's all I have, hope it helps someone. I am sure there are things that could be done better, so feel free to call them out. :)
 
Be careful, ZFS has very different ways of managing pools and adding disks. I don’t think much here will be relevant for adding a disk to ZFS.
 
I just did this (and things appear to be working well) so I thought I would share. Feel free to provide critical feedback.

...

That's all I have, hope it helps someone. I am sure there are things that could be done better, so feel free to call them out. :)

My only criticism is: thank you very much for this information. Thanks to you, I will continue to explore Btrfs, because I thought I would have to solve this with mdadm.

The only thing I couldn't find was the sgcode command.

Paweł
 
Now we need to randomize the GUID of the new drive, this is important as if we don't do this both drives from the sgdisk command will have the same GUID. Also be very careful of the target for this command as if you overwrite the GUID of a drive currently in your RAID your results would be... poor.

Format: sgcode -G <target>
Example: sgcode -G /dev/sde
Hi! Thank you very much for this guide, but I can't find the sgcode program.
So I found another way to change the GUID.
Method 1.

root@pve:~# sgdisk --replicate=/dev/sda /dev/sdb
The operation has completed successfully.
root@pve:~# fdisk /dev/sda
Command (m for help): p
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B25A6EA-F285-4B38-BF99-533D76B53B9F

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 209715166 208664543 99.5G Solaris /usr & Apple ZFS

Command (m for help): q

root@pve:~# fdisk /dev/sdb

Command (m for help): p
Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B25A6EA-F285-4B38-BF99-533D76B53B9F

Device Start End Sectors Size Type
/dev/sdb1 34 2047 2014 1007K BIOS boot
/dev/sdb2 2048 1050623 1048576 512M EFI System
/dev/sdb3 1050624 209715166 208664543 99.5G Solaris /usr & Apple ZFS

Command (m for help): q

root@pve:~# sgdisk --randomize-guids /dev/sda
The operation has completed successfully.
root@pve:~# fdisk /dev/sda

Command (m for help): p
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 592B82D5-E9FB-4239-9463-AB5561F5BFAF

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 209715166 208664543 99.5G Solaris /usr & Apple ZFS
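
Note: sgdisk replicates the partition table of the main (positional) device onto the device given to -R/--replicate, so double-check the direction before running it. Written with short options, the same pattern as in the first post looks like this (placeholders, fill in your own devices):

# Copies the table of <source> onto <new-drive>, then gives the copy fresh GUIDs
sgdisk <source> -R <new-drive>
sgdisk -G <new-drive>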



Method 2.
A GUID can be generated on this site:
https://www.uuidgenerator.net/version4
After that, open the disk with fdisk and change the GUID to the one generated on the site. In my example the disk is /dev/sda.


root@pve:~# fdisk /dev/sda

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D2C7FC6A-8C81-474C-B6D1-B0D0F2557CC6

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 209715166 208664543 99.5G Linux filesystem

Command (m for help): x

Expert command (m for help): i

Enter new disk UUID (in 8-4-4-4-12 format): 34050E53-5681-4C8D-94D2-5194FDB2BE4F

Disk identifier changed from D2C7FC6A-8C81-474C-B6D1-B0D0F2557CC6 to 34050E53-5681-4C8D-94D2-5194FDB2BE4F.


Expert command (m for help): r

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@pve:~# fdisk /dev/sda

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 34050E53-5681-4C8D-94D2-5194FDB2BE4F

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 209715166 208664543 99.5G Linux filesystem

Command (m for help): q
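
If I remember the sgdisk options correctly, the same change can also be done without the website and without interactive fdisk (just a sketch, device name as in the example above):

# Generate a random UUID locally instead of using the website
NEW_GUID=$(uuidgen)

# Set only the disk GUID (partition GUIDs are left untouched), same effect as fdisk's expert 'i' command
sgdisk --disk-guid="$NEW_GUID" /dev/sda

# Or let sgdisk pick a random one itself
sgdisk --disk-guid=R /dev/sda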
 
Thank you for the info.
 
