Adding 2 new drives to expand a raidz2 pool

Post the output of 'zpool get all' and 'zfs get all' (I suggest you use e.g. a pastebin site, so you don't take up large chunks of text space here)
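A quick way to capture both outputs into files you can upload (DataZFS is the pool name that appears later in the thread; substitute your own):
[[
# dump pool-level and per-dataset properties to text files for pastebin
zpool get all DataZFS > zpool-get-all.txt
zfs get -r all DataZFS > zfs-get-all.txt
]]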
 
I just noticed that the same problem also occurs on the second node where I perform replications.

Master Node

[screenshot: pool usage on the master node]

Replication Node

[screenshot: pool usage on the replication node]



Always about 9.63 TB


So whatever is using up all this space in an abnormal way (about 3 TB more than it really should) is also replicated to the second node
 
I don't see anything out of the ordinary on the zpool side, but I recommend you set atime=off on all datasets to save on writes
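Something like this covers the whole pool, since child datasets inherit the value (DataZFS taken from your output):
[[
# disable access-time updates pool-wide; children inherit the setting
zfs set atime=off DataZFS
# confirm it propagated
zfs get -r atime DataZFS
]]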

DataZFS available 427G

^ That is a more accurate representation than the 'zpool' free, since the pool-level number doesn't account for raidz parity and reservations
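If you want to see where the space is actually going, the space breakdown splits each dataset's usage into live data, snapshots, refreservation and children:
[[
# AVAIL/USED plus USEDSNAP, USEDDS, USEDREFRESERV, USEDCHILD per dataset
zfs list -o space -r DataZFS
]]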

DataZFS recordsize 128K default

You can modify recordsize per dataset; if you have e.g. large media files, those datasets would benefit from recordsize=1M
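For example (DataZFS/media is only a placeholder name here, and recordsize only applies to data written after the change):
[[
# larger records suit big sequential files; existing data is not rewritten
zfs set recordsize=1M DataZFS/media
]]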

I thought you might have had 'copies=2' somewhere, but this is not the case.

The other usual suspect, dedup, is off everywhere.
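For reference, both can be double-checked in one pass:
[[
# confirm neither extra copies nor dedup is inflating usage anywhere
zfs get -r copies,dedup DataZFS
]]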

This stands out a bit:
[[
DataZFS/vm-103-disk-0 used 6.01T -
DataZFS/vm-103-disk-0 available 4.48T -
DataZFS/vm-103-disk-0 referenced 1.94T -
DataZFS/vm-103-disk-0 compressratio 1.01x -
DataZFS/vm-103-disk-0 reservation none default
DataZFS/vm-103-disk-0 volsize 4.00T local
DataZFS/vm-103-disk-0 volblocksize 16K default
DataZFS/vm-103-disk-0 checksum on default
DataZFS/vm-103-disk-0 compression on inherited from DataZFS
DataZFS/vm-103-disk-0 readonly off default
DataZFS/vm-103-disk-0 createtxg 32700 -
DataZFS/vm-103-disk-0 copies 1 default
DataZFS/vm-103-disk-0 refreservation 4.07T local
DataZFS/vm-103-disk-0 guid 18439402590032835261 -
DataZFS/vm-103-disk-0 primarycache all default
DataZFS/vm-103-disk-0 secondarycache all default
DataZFS/vm-103-disk-0 usedbysnapshots 1.03G -
DataZFS/vm-103-disk-0 usedbydataset 1.94T -
]]

You may want to move that disk to other storage (possibly XFS or lvm-thin) and see if it fixes the space discrepancy.
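Before moving it, it may be worth confirming how much of that 6.01T is the thick-provisioning reservation rather than actual data (all of these properties are already in the output you posted):
[[
# split the zvol's 'used' into live data, snapshots and reservation
zfs get used,referenced,usedbysnapshots,usedbyrefreservation,refreservation,volsize,volblocksize DataZFS/vm-103-disk-0
]]
With referenced at 1.94T and refreservation at 4.07T, the 6.01T 'used' is essentially the live data plus the space held back for the still-unwritten blocks of the 4T volume.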
 
OK, I set atime=off on DataZFS.

If you look at the pool before and after adding the two new drives to the raidz2, you can see that something must have expanded automatically, so there is definitely something wrong

Before
[screenshot: pool usage before adding the two drives]
After
[screenshot: pool usage after adding the two drives]
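If it helps to compare without screenshots, the per-vdev view shows capacity and allocation before/after the expansion (assuming the pool is DataZFS):
[[
# per-vdev size, allocation and free space, plus the current vdev layout
zpool list -v DataZFS
zpool status DataZFS
]]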


VM 103 is TrueNAS and this is the setting:


[screenshot: TrueNAS VM disk settings]
It may be because it's ZFS on ZFS, but I don't think that's the real reason.

It's probably this machine that's causing problems, especially since I can't explain this space usage.
I never told it to take up all these TB.


DataZFS/vm-103-disk-0 used 6.01T
DataZFS/vm-103-disk-0 available 4.48T



I expanded the pool to be able to take snapshots, and I'm in the same situation as before: all the space occupied and no snapshots :confused:
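If the culprit is the zvol's refreservation shown earlier, one option (just a sketch, not something suggested earlier in the thread) is to make the zvol sparse so the unwritten space is no longer pre-reserved; the trade-off is that the VM can then run the pool out of space if it fills its disk:
[[
# check the reservation currently held for the TrueNAS zvol
zfs get refreservation DataZFS/vm-103-disk-0
# optionally drop it to thin-provision the zvol
# (caution: the guest can then overcommit the pool)
zfs set refreservation=none DataZFS/vm-103-disk-0
]]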
 