Memory usage

Apr 5, 2023
Hi,

I've installed PBS on a physical machine. I've read the recommended requirements:

Recommended Server System Requirements

  • CPU: Modern AMD or Intel 64-bit based CPU, with at least 4 cores
  • Memory: minimum 4 GiB for the OS, filesystem cache and Proxmox Backup Server daemons. Add at least another GiB per TiB storage space.

I went for an EPYC 7313P with 256 GB of RAM installed.
I set up 2 datastores:
The 1st uses 2 vdevs in RAIDZ2 with 7 disks per vdev (8 TB SATA HDD) and 2 spares.
The 2nd uses 2 vdevs in RAIDZ1 with 6 disks per vdev (22 TB SATA HDD), a mirrored special device of 2 TB, and 2 spare HDDs attached.
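(For reference, a sketch of roughly what those layouts look like as zpool commands — the pool and device names are placeholders, not our actual disks:)

    # 1st datastore: 2x 7-disk RAIDZ2 (8 TB SATA) plus 2 spares
    zpool create tank1 \
        raidz2 sda sdb sdc sdd sde sdf sdg \
        raidz2 sdh sdi sdj sdk sdl sdm sdn \
        spare sdo sdp

    # 2nd datastore: 2x 6-disk RAIDZ1 (22 TB SATA), mirrored special device, 2 spares
    zpool create tank2 \
        raidz1 sdq sdr sds sdt sdu sdv \
        raidz1 sdw sdx sdy sdz sdaa sdab \
        special mirror nvme0n1 nvme1n1 \
        spare sdac sdad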

Obviously the 2nd pool is a lot faster for GC.
Trying to optimize GC on the 1st pool, I want to add a mirrored special device as well (sketched below).
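Something along these lines is what I have in mind (a sketch only — the pool and device names are placeholders for the real by-id paths):

    # attach a mirrored special vdev to the existing pool
    zpool add tank1 special mirror /dev/nvme0n1 /dev/nvme1n1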

I've got 2 questions:
1. After adding the special device to the 1st pool, is there a way to fill the special device with the metadata without having to copy all the data away and back again? What's the best approach?
2. The system is only using half the memory, probably because by default ZFS caps the ARC at half the RAM. Is it a good idea to raise that limit so it consumes more memory? If so, how, and by how much?
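From what I've read, the knob would be the zfs_arc_max module parameter (value in bytes); a minimal sketch, though I'm unsure what a sensible value would be:

    # runtime change, e.g. cap the ARC at 192 GiB (192 * 2^30 bytes)
    echo 206158430208 > /sys/module/zfs/parameters/zfs_arc_max

    # persist across reboots
    echo "options zfs zfs_arc_max=206158430208" > /etc/modprobe.d/zfs.conf
    update-initramfs -u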
[attached screenshot: memory usage history graph]

thanks for your advice.
 
Memory usage never exceeds 132 GB of the available 256 GB, according to the history graph.
I have now increased the GC cache capacity to 8388608; the way I set it is sketched below.
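In case it helps anyone: I set it through the datastore tuning options, roughly like this (the store name is a placeholder and the key name is from my notes, so verify it against the datastore tuning documentation of your PBS version):

    # cache up to 8388608 recently touched chunks during garbage collection
    # "store1" is a placeholder for the actual datastore name
    proxmox-backup-manager datastore update store1 --tuning 'gc-cache-capacity=8388608'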
ZFS mirroring would indeed be ideal. We chose to go another route to increase available space while still having some redundancy. This is our offsite backup, which doesn't get bombarded with writes from PVEs. It's a backup of the backup with longer retention, just in case.

Still, it would be beneficial to use the available memory, I presume.
 
Obviously the 2nd pool is a lot faster for GC.
Really?

Both have two vdevs = the IOPS of two spindles. RAIDZ1 vs. RAIDZ2 is irrelevant! Only the actually used capacity influences the duration. The second one (if it were without a special device) should take longer, assuming both were filled to e.g. 50%.

What's the best approach?
You need to read and rewrite the .chunks folder; during the rewrite, the metadata is separated out and placed onto the special device. One possible approach: https://github.com/markusressel/zfs-inplace-rebalancing. Expect it to run for a loooong time.
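Reduced to its core, what that script does per file is roughly this (a sketch with a hypothetical chunk path; the real script adds checksum verification, attribute handling and error checking):

    # rewrite a file so ZFS reallocates its blocks; with a special device present,
    # the freshly written metadata then lands on the special vdev
    cp --reflink=never -a /tank1/.chunks/0000/<digest> /tank1/.chunks/0000/<digest>.balance
    mv /tank1/.chunks/0000/<digest>.balance /tank1/.chunks/0000/<digest>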

Note that this is not a recommendation but only a random hint.

Good luck :-)
 
Really?

Both have two vdevs = the IOPS of two spindles. RAIDZ1 vs. RAIDZ2 is irrelevant! Only the actually used capacity influences the duration. The second one (if it were without a special device) should take longer, assuming both were filled to e.g. 50%.

Well, that's what I see in practice. The first pool is at ~78% and without a special device. The 2nd is at ~68% with a special device.

You need to read and rewrite the .chunks folder; during the rewrite, the metadata is separated out and placed onto the special device. One possible approach: https://github.com/markusressel/zfs-inplace-rebalancing. Expect it to run for a loooong time.

Note that this is not a recommendation but only a random hint.
Thanks for the hint. I'll have a look.