RAIDZ1 resizing

AxelTwin

Hi everybody,

I have a set of 4 SSD drives configured in raidz1.

Here is the config:
Code:
storage                         4.75T   411G      140K  /storage
storage/data                    4.75T   411G     1.44T  /storage/data
storage/data/subvol-100-disk-0  2.38T   411G     1.93T  /storage/data/subvol-100-disk-0
storage/data/subvol-171-disk-0   952G   411G      887G  /storage/data/subvol-171-disk-0


As we are running out of space, I thought I could add another drive to the raidz, but that doesn't seem to be the right thing to do.
I am thinking of taking one disk out of the raidz1 and creating a new mirror pool with an additional disk, as we only have one disk slot remaining on the server.

What would be the best solution? What are your suggestions?
 
You cannot take one disk out of a ZFS pool. If you want to increase the space, you have to add a new vdev with a minimum of 4 disks. With ZFS it's not easy to grow.
 
And keep in mind that your pool is already too full. You usually don't want to fill a ZFS pool to more than 80 or 90% for best performance. You are already over 90%, so your pool is operating in the slower space-saving allocation mode.
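You can see how full the pool is at a glance (pool name taken from your output above):
Code:
zpool list storage   # the CAP column shows the fill level, FRAG the fragmentation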

A feature to expand raidz is already in the testing phase, but not released yet (and even with that feature added, it would be more space-efficient and more performant to destroy the pool and recreate it with one more disk from scratch).

So you either:
A.) move all the data to another pool using "zfs send | zfs recv", destroy that 4-disk raidz, create a new 5-disk raidz and move the data back with "zfs send | zfs recv"
B.) add another vdev. It doesn't have to be an identical vdev (you could even stripe a 4-disk raidz and a 2-disk mirror), but it would be highly recommended to add an identical one. So in case option A is not possible, because you don't have another pool to move the data to or can't afford the downtime, you would usually add another 4 disks as an additional raidz and stripe those.
C.) replace all 4 SSDs with larger ones. You can do that one disk at a time, resilvering after each swap, but the pool will only grow after you have replaced all disks with bigger ones. Rough command sketches for all three options below.
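Purely as a sketch, the commands would look roughly like this (the disk names /dev/sd? and the target pool "backuppool" are placeholders you would have to adapt to your setup):
Code:
# Option A: move the data away, rebuild with 5 disks, move it back
zfs snapshot -r storage/data@migrate
zfs send -R storage/data@migrate | zfs recv -F backuppool/data
zpool destroy storage
zpool create storage raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
zfs send -R backuppool/data@migrate | zfs recv -F storage/data

# Option B: stripe a second raidz1 vdev into the existing pool
zpool add storage raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Option C: swap disks for bigger ones, one at a time
zpool set autoexpand=on storage
zpool replace storage /dev/sda /dev/sdX   # wait for the resilver to finish, then repeat for the next disk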
 
Ok, thanks for the explanation.
So I wanted to move all the data to another pool on another server using "zfs send | zfs recv", but when checking the space used on the target server, it seems it doesn't reflect reality.
Storage shows 3.12T used while there is less than 1T.
Could it be possible that the space used by deleted VMs/CTs is still counted as used?

 
Search this forum for "padding overhead". When storing a zvol with a too-low volblocksize on a raidz1/2/3 with too many disks, you get padding overhead and everything consumes more space.
And this volblocksize can only be set once, at creation of the zvol.
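If you want to check whether that applies here, something like this should show it (note that only VM disks are zvols; your subvol-* container datasets use recordsize instead of volblocksize):
Code:
zfs list -t volume -o name,volblocksize,used,referenced -r storage   # VM disk zvols, if any
zfs get recordsize -r storage/data                                   # the container datasets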
 
Could it be possible that the space used by deleted VMs/CTs is still counted as used?
Run "zpool list -v" and "zfs list -o space" to get a better view of the pool. If deleted stuff wouldn't be freed up yet, it should be showed as "usedsnap" or "usedrefreserv".
 
Ok, usedsnap was representing a lot of space. Deleting the snapshots did the job. Thanks.
 
