How to configure Zram on ZFS

davidecasalino25 · New Member · Jul 13, 2024
Hi PVE users,
I recently finished installing my services on the Proxmox server running on a ZimaBoard 432.
The installation is on two mirrored SATA disks with ZFS.
Since I have the version with 4 GB of soldered RAM, memory runs out immediately. At the moment I only run four containers, which take up little memory, but the ZFS ARC consumes almost all of it. I know the ARC will be released when the system needs more RAM, but I still wanted to set up zram. I only heard about it recently; as I understand it, it presents itself as swap space but actually keeps the swapped pages compressed in RAM.
I found the official Proxmox documentation, https://pve.proxmox.com/wiki/Zram, but it doesn't mention ZFS and uses udev to create a swap device, which I understand is not supported on ZFS.
Does anyone have advice on how best to configure zram on ZFS?
Can I use zram-generator, as recommended on the Arch Linux wiki (https://wiki.archlinux.org/title/Zram)?
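For what it's worth, if zram-generator does turn out to be usable here, the Arch wiki's approach boils down to a single config file. The values below are purely illustrative, not a tested recommendation for this board:

```ini
# /etc/systemd/zram-generator.conf
# Illustrative values only: half of RAM, capped at 2 GiB, zstd-compressed.
[zram0]
zram-size = min(ram / 2, 2048)
compression-algorithm = zstd
```

After installing the zram-generator package, a `systemctl daemon-reload` followed by `systemctl start systemd-zram-setup@zram0.service` should activate it. Since the zram device lives entirely in RAM, this sidesteps the "no swap on ZFS" problem, though it doesn't add any real memory.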
 
Is there a specific use case for which you need ZRAM? Because ZFS will compress a lot of the data that's on the ARC automatically by default.
 
Thank you all for your quick replies.
I'm already planning to install more services in containers, such as Immich and Nextcloud, and I have also set up a container with Borg backup that will have to back up fairly large amounts of data.
For this reason I thought 4 GB (3.68 usable) would be too little, and that's why I considered zram.
That said, I won't dismiss what you said about limiting the ARC, since it uses 1.8 GB out of 3.6, so half. Most of the time, with all services active, htop on the host shows only about 1.6 GB used, while the Proxmox dashboard always shows around 80-85%; that is surely the ARC.
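As a concrete sketch of the ARC-limiting idea: OpenZFS exposes the cap as the zfs_arc_max module parameter. The 1 GiB figure below is just an illustration for a 4 GB machine, not a tuned value; the commands that actually touch the system are left commented out.

```shell
# Illustrative: cap the ZFS ARC at 1 GiB (adjust to taste).
ARC_MAX=$((1 * 1024 * 1024 * 1024))   # 1 GiB in bytes
echo "zfs_arc_max = ${ARC_MAX} bytes"

# Apply immediately, no reboot needed (run as root on the host):
#   echo "${ARC_MAX}" > /sys/module/zfs/parameters/zfs_arc_max
# Persist across reboots:
#   echo "options zfs zfs_arc_max=${ARC_MAX}" > /etc/modprobe.d/zfs.conf
#   update-initramfs -u
```

With root on ZFS, the modprobe.d route plus an initramfs update is the one that survives reboots, since the module is loaded before the root filesystem is mounted.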
 
Immich alone recommends 6 GB of RAM, with a minimum of 4 [1]. And Nextcloud is barely usable with 1 GB. IMHO you need more RAM to properly run all those services.

[1] https://immich.app/docs/install/requirements/
 
I think your hardware is terrible for PVE, but as long as you see it as an adventure, I'll play along.

ZFS ARC is (due to unfortunate license conflicts) counted as used memory instead of cache memory. KSM and ballooning kick in at 80% host memory usage, so (when you use those, and you'll need them) that makes it impossible to actually use that last 20% of memory. Maybe use Btrfs (if you want the same bitrot detection) or LVM(-thin) with ext4 instead. Maybe also set up KSM to kick in at 50% instead. Given that Proxmox needs 2 GB (although it might run with less), that your integrated graphics takes some memory (best limit it to 32 MB or less), and assuming your chosen filesystem uses 20% (of only 4 GB) for cache, that leaves a little over 1 GB for VMs; better to use CTs, as they share the kernel and can run with less memory.
I would not go for the additional overhead of zram, as your CPU is weak and you have little memory to spare. Do use some swap space (on an enterprise SSD with PLP if you can) to leave the most memory for guests.
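A sketch of the "KSM at 50%" suggestion: on PVE the knob is KSM_THRES_COEF in /etc/ksmtuned.conf, the percentage of free memory below which ksmtuned starts merging pages. The example writes to a temp file instead of the real config, so it is harmless to run as-is.

```shell
# Sketch, assuming the stock Proxmox ksmtuned setup; a temp copy is
# used here so the example does not touch the live config.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# KSM_THRES_COEF is the percentage of *free* memory below which
# ksmtuned starts merging pages; 50 here means "kick in at 50% used".
KSM_THRES_COEF=50
EOF
grep 'KSM_THRES_COEF=' "$CONF"

# On a real host, edit /etc/ksmtuned.conf instead and then:
#   systemctl restart ksmtuned
```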

Then again, it's terrible, and maybe you would be better off running Docker containers on a minimal Linux (like CoreOS or something) instead of trying to run a clustered enterprise hypervisor on underspecced consumer toys. But feel free to try, and please do share your experiences.
 
Thank you for your answer.
Yes, this is just an experiment, so I run everything on the ZimaBoard; losing this data or these containers would not mean work interruptions and so on.
For now I limit myself to the ZimaBoard because it was the only low-cost x86 hardware on which I could test Proxmox, but in the future it will be replaced; the main reason for buying it was school projects with Linux, for which it is well supported.
On the server I will build in the future, which I will also need for work and on which I will configure a 3-2-1 backup, it won't even cross my mind to use zram. The only concern there will be power consumption:
keep in mind that the ZimaBoard has a TDP of 6 W, while I can't yet estimate the actual consumption of a full PC/server.
I also wanted to mention that KSM sharing shows 0 B on my dashboard; I'm not sure what it is, or whether it matters right now.
 
If you run multiple VMs, then KSM can share identical memory pages (which are 4 KiB in size) between them. This works best with empty memory pages (all zeros) and Windows VMs. But does Windows nowadays run in 512 MB or less (otherwise, how would you run more than one)? Maybe stick to (two?) Linux containers instead.
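To see what that dashboard figure corresponds to: the kernel exposes KSM counters under /sys/kernel/mm/ksm, and "KSM sharing" is essentially pages_sharing multiplied by the page size. A harmless sketch (the /sys read is left commented out so it runs on any machine, KSM or not):

```shell
# Sketch: compute the "KSM sharing" figure from the kernel's counters.
PAGE_SIZE=$(getconf PAGE_SIZE)            # usually 4096 bytes (4 KiB)
echo "page size: ${PAGE_SIZE} bytes"

# On a PVE host with KSM active:
#   PAGES=$(cat /sys/kernel/mm/ksm/pages_sharing)
#   echo "KSM sharing: $((PAGES * PAGE_SIZE / 1024 / 1024)) MiB"
```

A reading of 0 B simply means no pages have been merged yet, which is expected when only containers (no VMs) are running, since CT memory is managed by the host kernel directly.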

EDIT: If you cannot run more than two containers anyway with Proxmox, maybe run some other Linux distribution and run your software on bare metal (or in containers, as any Linux supports that), as that may require less memory than PVE.
 
