High RAM usage, slow system

kolt

New Member
Oct 25, 2023
So I put together a new server build (12900K, 64GB DDR5 ECC RAM, 4x 1TB NVMe drives) and planned on putting everything I need on it: game servers, CCTV NVR, router, Plex/Sonarr/Radarr. But I've come to the conclusion that I'm going to run into RAM issues, as I plan to add 2x 14TB drives and a single 20TB drive for media storage, which will eat my RAM. I only added the two 14TB drives today and made some VMs, and RAM usage was already super high... I found online that high RAM usage is normal with ZFS, but it was really slowing things down. Question is, what is the best way to mitigate this?

Thanks.
 
Hi kolt,

Welcome to the forum!
Question is, what is the best way to mitigate this?

Which 'this' do you want to mitigate?
I can think of a few more or less useful mitigations for "using ZFS with this amount of storage and this amount of RAM is slower than I have patience for":

  • Add more RAM, or
  • Buy smaller disks, or
  • Use another file system altogether, or
  • Exercise more patience
One does not exclude the other.

The third point would be the most serious suggestion. ZFS is great, but so is LVM. LVM offers most of the benefits of ZFS, with a much lower memory footprint.

For the functions that you plan to use your computer for, do you need specific benefits that ZFS would give you?
 
For the functions that you plan to use your computer for, do you need specific benefits that ZFS would give you?
Honestly, I couldn't tell you if ZFS would benefit me. I believe it was the only option in the installer, and people talk about how great it is. So my plan would be to set up my NVMe drives with ZFS, then use LVM for the larger disks. Is it possible to make an LVM mirror through the panel? I see the option to select a single disk, but can't seem to make a mirror.
 
Is it possible to make an LVM mirror through the panel? I see the option to select a single disk
Do you mean install-time or on the installed system?

For the installed system I actually don't know, and I have no unused disks to try it out on. I'm quite frugal with disks and mostly assign partitions instead of full disks (so I have not used the GUI for that very often).
 
I see the option select a single disk, but can't seem to make a mirror
The only software RAID mirrors PVE officially supports are ZFS and btrfs. When using hardware RAID you could of course use whatever you want, like LVM on top.
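If you want an LVM mirror anyway, it can be created on the CLI; a minimal sketch (the device names /dev/sdb and /dev/sdc and the volume group name vg_media are placeholders for your two 14TB disks):

# Initialize both disks as LVM physical volumes (this destroys existing data!)
pvcreate /dev/sdb /dev/sdc
# Create a volume group spanning both disks
vgcreate vg_media /dev/sdb /dev/sdc
# Create a RAID1 logical volume with one mirror copy (-m 1)
lvcreate --type raid1 -m 1 -L 13T -n media vg_media
# Put a filesystem on it, then add it to PVE as a directory storage
mkfs.ext4 /dev/vg_media/media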
And ZFS doesn't need that much RAM. By default it will use UP TO 50% of your host's RAM for caching, but you can limit that. A smaller ARC of course also means lower read performance of your ZFS pool. But with 34TB of usable capacity it should also work with something like 8-16GB of RAM instead of the 32GB.
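As a sketch, the ARC can be capped via the zfs_arc_max module parameter (the 16 GiB value below is just an example, adjust to your workload):

# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 16 GiB (16 * 1024^3 bytes)
options zfs zfs_arc_max=17179869184

To apply it immediately without a reboot (and run update-initramfs -u afterwards so the limit survives one):

echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max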
 
Hi! Limit the RAM for ZFS; otherwise it will always use up to 50% of the installed RAM.

When using ZFS on SSDs without power loss protection, TRIM must be run at least once a week, otherwise you will always suffer from very high IO delay.

The command is:

zpool trim rpool

which notifies the SSD of blocks that are no longer in use.
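To make it run weekly, a cron entry is one option (a sketch; the schedule and the pool name rpool are assumptions):

# /etc/cron.d/zfs-trim -- TRIM the pool every Sunday at 03:00
0 3 * * 0 root /usr/sbin/zpool trim rpool

Newer OpenZFS versions can also trim continuously via the pool's autotrim property:

zpool set autotrim=on rpool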
 
When using ZFS on SSDs without power loss protection, TRIM must be run at least once a week, otherwise you will always suffer from very high IO delay.
I had to think about this one. The rationale would be:

  • With power loss protection, ZFS will perform write-back, i.e., YOLO-writes to storage
  • Without power loss protection, ZFS will perform write-through writes to storage
  • Before running TRIM, there might not be enough free blocks available for the current write action
    • in that case, the SSD controller has to clear erase blocks while you are waiting for an I/O to complete
  • After running TRIM, there will be free blocks available, writes can go through directly, and the OS will get notified, completing I/O before long
Did I understand correctly?

Thanks for mentioning!
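Side note: if anyone wants to verify, the per-device trim status (and when the last TRIM completed) can be checked with:

zpool status -t rpool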
 
Did I understand correctly?
Yes, to put it in simple words: I have several servers with ZFS RAID 1 that suffered from the IO delay, and that was the only way I found to solve it.
 
