Hi everyone, long time reader, first time writer.
I run Proxmox v6.4 in my home lab and recently had to replace all three SSDs in my raidz1 pool; they were HP drives and they all failed at pretty much the same time. I replaced the 1TB SSDs with 2TB Samsung SSDs, because Samsung's 1TB SSDs were slightly smaller than HP's and ZFS rejected them.
In any case, the system is working, but I was very surprised to see that the pool reports a size of 5.45T instead of 3.something, since I'm using a raidz1 setup:
root@proxmox:~# zpool list rpool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  5.45T  2.09T  3.37T        -         -    31%    38%  1.00x    ONLINE  -
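For reference, this is the back-of-the-envelope math I was doing (a rough Python sketch of my expectation; it ignores partition offsets and ZFS metadata overhead, so the numbers are only approximate):

```python
TB = 10**12   # drives are sold in decimal terabytes
TiB = 2**40   # zpool/zfs report binary units ("T" here meaning TiB)

disks = 3
disk_bytes = 2 * TB  # 2TB Samsung 980 PRO

# Total capacity across all three disks, converted to TiB
raw_tib = disks * disk_bytes / TiB

# What I expected as usable space: raidz1 spends one disk's worth on parity
usable_tib = (disks - 1) * disk_bytes / TiB

print(f"raw:    {raw_tib:.2f} TiB")     # about 5.46
print(f"usable: {usable_tib:.2f} TiB")  # about 3.64
```

So 3 × 2TB works out to roughly 5.46 TiB total, and with one disk's worth of parity I was expecting something around 3.64 TiB usable; that's the "3.something" I mean above.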
I confirmed that it's raidz1 with the status command:
root@proxmox:~# zpool status -v rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 557G in 00:22:21 with 0 errors on Fri Nov 11 18:33:38 2022
config:

        NAME                                                    STATE     READ WRITE CKSUM
        rpool                                                   ONLINE       0     0     0
          raidz1-0                                              ONLINE       0     0     0
            nvme-Samsung_SSD_980_PRO_2TB_S6B0NL0T508235Z-part3  ONLINE       0     0     0
            nvme-Samsung_SSD_980_PRO_2TB_S6B0NL0T914793Y-part3  ONLINE       0     0     0
            nvme-Samsung_SSD_980_PRO_2TB_S6B0NL0T914782B-part3  ONLINE       0     0     0

errors: No known data errors
Did replacing the disks somehow mess up raidz1?
Thanks in advance. I'm pretty worried that I have no redundancy.