Raidz1 but all space available?

PetitChien
New Member · Nov 12, 2022
Hi everyone, long time reader, first time writer.

I run Proxmox v6.4 in my home lab and recently had to replace all three SSDs in my raidz1 pool; they were HP drives and failed at pretty much the same time. I replaced the 1TB SSDs with 2TB Samsung SSDs, because Samsung's 1TB models are slightly smaller than HP's, so ZFS rejected them.

In any case, the system is working but I am very surprised to see that the pool has a size of 5.45TB instead of 3.something since I'm using a raidz1 setup:
Code:
root@proxmox:~# zpool list rpool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  5.45T  2.09T  3.37T        -         -    31%    38%  1.00x    ONLINE  -

I confirmed that it's raidz1 with the status command:
Code:
root@proxmox:~# zpool status -v rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 557G in 00:22:21 with 0 errors on Fri Nov 11 18:33:38 2022
config:

	NAME                                                    STATE     READ WRITE CKSUM
	rpool                                                   ONLINE       0     0     0
	  raidz1-0                                              ONLINE       0     0     0
	    nvme-Samsung_SSD_980_PRO_2TB_S6B0NL0T508235Z-part3  ONLINE       0     0     0
	    nvme-Samsung_SSD_980_PRO_2TB_S6B0NL0T914793Y-part3  ONLINE       0     0     0
	    nvme-Samsung_SSD_980_PRO_2TB_S6B0NL0T914782B-part3  ONLINE       0     0     0

errors: No known data errors

Did replacing the disks somehow mess up raidz1?

Thanks in advance, I'm pretty worried that I have no redundancy
 
The zpool command reports the size as raw capacity (data + parity data). The zfs command reports "usable" space only (raw capacity minus parity data). You have 3x 2TB, so 6TB of raw capacity, and that is exactly what "5.45T" (5.45 TiB ≈ 6 TB) shows.
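The unit conversion above can be checked with a quick back-of-the-envelope sketch (the small remaining gap to the reported 5.45T comes from partition layout and rounding):

```python
# Drive vendors sell decimal terabytes (10**12 bytes);
# zpool list reports binary tebibytes (2**40 bytes).
raw_bytes = 3 * 2 * 10**12   # three 2TB drives, raw capacity including parity
raw_tib = raw_bytes / 2**40  # same capacity expressed in TiB

print(f"{raw_tib:.2f} TiB")  # -> 5.46 TiB, matching the ~5.45T zpool shows
```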

Also keep in mind to increase the block size of your ZFS pool storage in the PVE web UI to at least 16K. Otherwise only about 40% of those 6TB will be usable for virtual disks: with default values those 6TB will only result in roughly 2.4TB of real usable storage for VMs. About 33% of the raw capacity is lost to parity data, another 17% to padding overhead, and because a ZFS pool should always keep 20% free space, you lose another 20% of the remaining 50%.
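Those percentages compose like this (a rough approximation of the overheads described above, not exact ZFS accounting):

```python
raw_tb = 6.0                # 3x 2TB in raidz1
lost_parity_padding = 0.50  # ~33% parity + ~17% padding overhead with small block sizes
free_reserve = 0.20         # keep 20% of the pool free for healthy operation

usable = raw_tb * (1 - lost_parity_padding) * (1 - free_reserve)
print(f"{usable:.1f} TB usable for VMs")  # -> 2.4 TB usable for VMs
```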
 
Ah, thanks! I have 2x 1TB disks in a mirror, and zpool list showed ~1TB usable, so I assumed it showed usable space regardless of the disk arrangement.
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hdd     928G   120G   808G        -         -    12%    12%  1.00x    ONLINE  -


zfs list is indeed correct
Code:
root@proxmox:~# zfs list
NAME                           USED  AVAIL     REFER  MOUNTPOINT
...
rpool                         1.39T  2.13T      139K  /rpool
...

Thanks for the tip about the block size; I updated the storage default to 16K.
 
