[SOLVED] Adding more disks to an already existing ZFS RaidZ1

YaseenKamala

Dear All,

We are planning to increase the size of the disk array on our Proxmox server, that is, to add more disks. We already know which pool we want to grow; it's called "hddpool".

[attached screenshot]

Could you please tell me how this can be done and what the steps are? And what safety measures should we think of before starting the process, to avoid problems?

Thank you in advance for any help that you could provide...

Best regards,
Yaseen KAMALA
 
Expanding a pool can only be done by adding another vdev.
Since you already have a raidz1, you would add another raidz1 with zpool add hddpool raidz1 /dev/disk/by-id/...
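The suggestion above can be rehearsed safely first: `zpool add` supports a dry-run flag that prints the resulting layout without touching the pool. The disk ids below are placeholders, not real device names.

```shell
# Check the current layout first
zpool status hddpool

# -n shows the configuration that WOULD result, without changing anything.
# Replace the ata-NEWDISK* ids with your actual entries from /dev/disk/by-id/.
zpool add -n hddpool raidz1 \
    /dev/disk/by-id/ata-NEWDISK1 \
    /dev/disk/by-id/ata-NEWDISK2 \
    /dev/disk/by-id/ata-NEWDISK3

# If the proposed layout looks right, run the same command without -n.
```

Requires root and an existing pool, so treat it as a sketch rather than a copy-paste recipe.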
 

Hi,

As @ph0x said, it is possible, but you need to know the possible downsides:

Your pool will go from raid5-like (a single raidz1) to something closer to raid10 (a stripe of two raidz1 vdevs). It would be best to use 3 new disks of the same size as the disks that you already have. The extended pool will be unbalanced (your old raidz1 vdev will hold all the data and the new raidz1 vdev none), so only NEW data will land on both raidz vdevs.

Also keep in mind that your old raidz vdev will have a higher probability of failing than the new raidz vdev. For this reason I would start by replacing one old HDD in the existing raidz with a new disk (leaving 2 old disks and 1 new disk), and then add the new raidz vdev using the rest (2 new disks and 1 old disk).

Good luck / Bafta !
 
Thanks a lot for your advice. I have already bought 3 disks with exactly the same size as the old ones. I just want to be sure I understood correctly what you said.

You meant that after I install or connect the 3 new disks, I have to make the changes below:

  1. Replace 1 old disk with a new, empty disk.
  2. Use that old disk together with the 2 remaining new disks.
Below are my current disks and labels:

[attached screenshot]

Since my current disks are sda, sdb, sdc, sdd, sde, the new ones will be sdf, sdg, sdh.

Concerning the replacement, I found this case, which I think is the same as mine:
https://forum.proxmox.com/threads/how-do-i-replace-a-hard-drive-in-a-healthy-zfs-raid.64528/

and this one as well
https://docs.oracle.com/cd/E19120-01/open.solaris/817-2271/ghzvx/index.html

Lastly, why will it become raid10?

Thanking you in advance for your help and great support.

Best regards,
Yaseen
 
It would be better to define the disks by their id (/dev/disk/by-id/...) rather than /dev/sdX because then it doesn't matter in which order the OS places the disks.
It will become sort of a raid 10 because adding a vdev will stripe (new) data across the two vdevs, therefore it's not a real raidz anymore.
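Translating the by-id advice into commands: the by-id symlinks point at the sdX devices, so listing them shows which stable name belongs to each new disk. This is a sketch; the exact id strings depend on your drives.

```shell
# Each entry is a stable name (model + serial) symlinked to its current sdX
# device. Ignore the -partN entries; whole-disk names are what zpool wants.
ls -l /dev/disk/by-id/ | grep -v part

# Example output line (illustrative only):
# ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567 -> ../../sdf
```

Using these names means the pool survives disks being re-enumerated in a different sdX order after a reboot or cabling change.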
 
Noted!
 
You meant that after I install or connect the 3 new disks, I have to make the changes below:

  1. Replace 1 old disk with a new, empty disk.
  2. Use that old disk together with the 2 remaining new disks.

Hi again,


NOW: raidz1 with these HDDs: N1, N2, N3

You have 3 new HDDs: X1,X2,X3

Before you start, test X1, X2, X3 with badblocks for several days (3-4) in any OTHER system (preferably), so you can be reasonably safe (but not 100%) that your new HDDs will not break after a few days of use. During the same period, also run smartctl -t long /dev/X1,2,3 once per day.
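The burn-in described above might look like this on the spare machine. Note that `badblocks -w` is a destructive write test, and sdX here is a placeholder for each new disk.

```shell
# WARNING: -w overwrites the ENTIRE disk; only run on empty new drives,
# never on the production server.
badblocks -wsv /dev/sdX     # destructive 4-pattern write/read test

# Once per day during the burn-in period:
smartctl -t long /dev/sdX   # start a long SMART self-test in the background
smartctl -a /dev/sdX        # later: review attributes and self-test results
```

On multi-TB drives a single badblocks pass can take a day or more per disk, which is why the several-day window is realistic.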

Step 1: Stop the server.
Step 2: Keep N1 and N2 in the server and connect X3 (disconnect N3; leave X1 and X2 out of the server).
Step 3: Power on the server:
- you will see that your pool is degraded (N3 missing)
- replace N3 with X3:

zpool replace hddpool /dev/N3 /dev/disk/by-id/X3
- when the resilver is finished, run a scrub:
zpool scrub hddpool
- at the end, your hddpool will contain N1, N2 and X3

Step 4: Put N3 in another system and clear it from ZFS (sgdisk --randomize-guids /dev/N3... and so on).
Step 5: Power down the server, and connect all HDDs (N1,2,3 and X1,2,3).
Step 6: Power ON:
- extend your hddpool:

zpool add hddpool raidz /dev/disk/by-id/N3 /dev/disk/by-id/X2 /dev/disk/by-id/X1

- set autoexpand so the pool can expand:
zpool set autoexpand=on hddpool
- scrub the pool
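A small sketch of how the "wait for the resilver, then scrub" part of the steps above could be automated; the `resilver in progress` string is what `zpool status` prints while a resilver runs.

```shell
# Poll until the resilver from `zpool replace` has finished, then scrub.
while zpool status hddpool | grep -q 'resilver in progress'; do
    sleep 60
done
zpool scrub hddpool
zpool status hddpool   # verify the scrub started and the pool is ONLINE
```

Watching `zpool status` by hand works just as well; the loop only removes the need to keep checking.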

Good to have: note on paper the serial number of the HDD in EACH bay, so the next replacement will be easy (and take a photo of this paper).

LAST step: back to business and RELAX ;)

Very important NOTE:
- test all these steps on a separate PC, using plain files instead of real HDDs, and note each step on paper; see this:

https://alp-notes.blogspot.com/2011/09/adding-vdev-to-raidz-pool.html
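The file-backed rehearsal recommended above could be sketched like this (pool name, file names and sizes are arbitrary; run on a test machine with ZFS installed):

```shell
# Create six 1 GiB sparse files to stand in for the N and X disks
for f in n1 n2 n3 x1 x2 x3; do truncate -s 1G /tmp/$f.img; done

# Build a throwaway pool with the same layout as the real one
zpool create testpool raidz1 /tmp/n1.img /tmp/n2.img /tmp/n3.img

# Rehearse the replace step (resilver is instant on an empty pool)
zpool replace testpool /tmp/n3.img /tmp/x3.img

# Clear the old ZFS label from the "removed" disk before reusing it,
# then rehearse adding the second raidz1 vdev
zpool labelclear -f /tmp/n3.img
zpool add testpool raidz1 /tmp/n3.img /tmp/x2.img /tmp/x1.img
zpool status testpool

# Clean up
zpool destroy testpool
rm /tmp/{n1,n2,n3,x1,x2,x3}.img
```

This lets you walk through every command and note each step on paper, exactly as the note suggests, with zero risk to real data.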


Good luck / Bafta !
 
Indeed, I must be very careful, because the server I am going to work on is the core of our work! Thank you so much for your help and advice. :)
 
Welcome!

It was also very scary for me the first time I did it, but the next replacement was not so ugly ;) The first time, my blood pressure went way up!
Also be sure that you have a backup!

Good luck / Bafta!
 
And the best way to do it would be:

- send the entire pool via zfs send/receive (or pve-zsync) to a different host (DF) with ZFS
- export the zpool history to a file and keep it on another system
- destroy the hddpool
- put all HDDs in the server
- create the same pool layout (striped raidz / raid10-like) using all 6 HDDs
- send the data back from DF to this new pool
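The send/receive round-trip above might be sketched as follows; "df-host" and "backuppool" are placeholder names for the helper machine and its pool.

```shell
# Snapshot the whole pool recursively and replicate it to the helper host
zfs snapshot -r hddpool@migrate
zfs send -R hddpool@migrate | ssh df-host zfs receive -F backuppool/hddpool

# Keep the pool history somewhere safe, as suggested above
zpool history hddpool > /root/hddpool-history.txt

# ... destroy hddpool and recreate it with all 6 disks ...

# Then pull the data back onto the new, balanced pool
ssh df-host zfs send -R backuppool/hddpool@migrate | zfs receive -F hddpool
```

The -R flag replicates all descendant datasets and their properties, which is what makes the round-trip reproduce the pool contents faithfully.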


The advantages will be:
- you will have a balanced pool (all data will be split in half across both raidz vdevs)
- you will have lower fragmentation for the pool (see FRAG using zpool list -v)
- downtime for the PMX host will be lower
- in the end, all of this will give you better performance for the pool

Note: one of the best candidates for DF is PBS, if you have enough free space and use ZFS on it ;)


Good luck / Bafta !
 
@guletz and @ph0x In fact, we have decided to back up all the VMs and recreate the pool, because we don't want to use another disk for parity.

Now what I am looking for is a way to back up my VMs, each of which has 4 disks.

[attached screenshot]

For that reason I have added our NAS as NFS storage so that I can back up the VMs to it.

My question is: when we back up a VM by selecting the VM \ Backup \ Backup now \ ...

[attached screenshot]

Is it backing up the VM with all the disks connected/mounted to it, or only the VM itself?

Thanks,
Yaseen :oops:
 
