Memory Management

RockNLol

Hi,
I recently set up my home server with PVE 6.2. The hardware is as follows:
Intel Xeon E3-1275v3
Supermicro X10SAE
32GB ECC Memory
2x Samsung 850 EVO SATA SSDs as ZFS-mirrored system drives for Proxmox on the board's 2-port ASM1061 SATA controller
3x Seagate 16TB drives on the C226 SATA controller, passed through to a TrueNAS Core VM

I don't know if it matters, but one of the Samsung SSDs is damaged and the ZFS pool state is DEGRADED (which explains the issues I had with my Windows Hyper-V server, which Proxmox has now replaced). A replacement SSD is in the mail.

I allocated 16 of the 32GB of RAM to the TrueNAS VM. Another 8GB went to my Windows Server VM and 1GB to my OPNsense VM; the rest is available for Proxmox and various small LXC containers running MySQL, openHAB, etc.
Over the last couple of days I noticed that the TrueNAS VM randomly crashed, so I kept an eye on it and found out that I sometimes run out of RAM, and Proxmox then kills the TrueNAS VM, which is arguably the most important one for me.
So I googled a bit and created an 8GB swap zvol using:
Code:
zfs create -V 8G -b $(getconf PAGESIZE) -o logbias=throughput -o sync=always -o primarycache=metadata -o com.sun:auto-snapshot=false rpool/swap
Before that, no swap partition or file had been created by the initial Proxmox setup routine (I guess because of ZFS?).
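For completeness: the zvol still has to be formatted and enabled as swap, which I did roughly like this (following the usual recipe for swap on a zvol):
Code:
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
echo '/dev/zvol/rpool/swap none swap discard 0 0' >> /etc/fstab   # make it persistent across reboots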
I also reduced the TrueNAS VM's allocated RAM to 12GB, leaving a little more headroom for Proxmox. Then I converted an old VHDX from my last server to qcow2 using qemu-img convert. This slowly filled the swap to the brim without touching the RAM, which ultimately crashed the whole server.
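The conversion itself was just the standard qemu-img invocation, something like this (file names here are just placeholders):
Code:
qemu-img convert -f vhdx -O qcow2 oldserver.vhdx oldserver.qcow2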

I guess something is fundamentally misconfigured here, as I am completely new to Proxmox and only have some experience with small Debian server VMs that I set up in the past on my Windows server.
How can I prevent Proxmox from killing a VM because of a lack of RAM (it doesn't even shut them down gracefully, it kills them outright)?
Why does the whole server even crash when swap is full but RAM isn't?
Is 8GB of swap too little? Hard drive space is not an issue at the moment.
And last but not least: where are the log files? /var/log/syslog tells me nothing about a lack of RAM, a VM being killed, or a full-on crash.

best regards,
RockNLol
 
Hi RockNLol

I'm quite new to Proxmox and ZFS, but I've read somewhere that running out of memory can be related to ZFS's RAM usage. By default, ZFS's memory usage (the ARC) is limited to half the installed RAM, in your case 16GB. You have assigned 24GB (16 + 8) to your VMs, so if the host needs more than 8GB of RAM for ZFS, it will probably run out of RAM for the VMs.

You can try limiting ZFS's RAM usage to 8GB (or less) by setting "zfs_arc_max" in "/etc/modprobe.d/zfs.conf" (https://pve.proxmox.com/wiki/ZFS_on_Linux) in order to avoid running out of memory. That said, I think expanding the RAM to 64GB would be the way to go if you plan to create additional VMs in the future.
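If I read the wiki correctly, the file should look something like this (the value is in bytes; 8GiB here):
Code:
# /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 8GiB
options zfs zfs_arc_max=8589934592
Since your root filesystem is on ZFS, the wiki also says to refresh the initramfs afterwards (update-initramfs -u) so the limit applies at boot.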

I hope this information helps you fix the issue.

Best regards,
Belegnor
 
ZFS on Linux grows to use half your RAM unless you limit it, as pointed out in the post above. Please note that using swap on ZFS might cause issues as well, and sync writes currently slow down all I/O there (which will be fixed soon). Regarding /var/log/syslog: try using journalctl instead on Proxmox/Debian.
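For example, OOM kills end up in the kernel log, which you can search with something like this (the second command assumes persistent journaling, so the previous boot is still readable after a crash):
Code:
journalctl -k | grep -i 'out of memory'   # kernel messages from the OOM killer
journalctl -b -1 -e                       # end of the journal from the previous boot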
 
Alright, that makes sense. Since I'm not doing a lot of reads/writes on the host, I'll cap ZFS at 2GB. I'd love to go for 64GB of RAM, but my motherboard is only rated for 32GB and I don't want to replace the whole server. I totally forgot about journalctl; I'll try that.

Edit: OK, new problem: udev fills up to 100% when I copy a 50GB image file from my NAS via SMB to local-zfs. The copy fails with "no space left on device", even though I'm copying to /rpool/data/ and not /dev. Why could that be?
 
Alright, that makes sense. Since I'm not doing a lot of reads/writes on the host, I'll cap ZFS at 2GB. I'd love to go for 64GB of RAM, but my motherboard is only rated for 32GB and I don't want to replace the whole server. I totally forgot about journalctl; I'll try that.

... 2GB could be a little low, but it should be OK if your host doesn't handle a big ZFS storage pool ...

You can optimize your RAM by assigning memory to the VMs dynamically (memory ballooning) instead of allocating it statically. In my case, I have a FreeNAS VM running with 8GB RAM (min. memory = 2GB, max. memory = 8GB) and a 15TB ZFS pool (5x 3TB virtual disks). I haven't experienced any performance issues yet ...

Edit: OK, new problem: udev fills up to 100% when I copy a 50GB image file from my NAS via SMB to local-zfs. The copy fails with "no space left on device", even though I'm copying to /rpool/data/ and not /dev. Why could that be?

I suppose you store the VMs on the root storage, since you pass the HDDs through to the NAS VM. How much disk space is allocated to the two VMs on the root disk? Do you store backups on the root disk? Do you keep snapshots of these VMs there? Even though snapshots don't use much disk space initially, they can grow over time. If you store all of this on your root pool, you can run into disk space issues ...

You can check the disk space using the df or zfs list commands, but zfs list will give you more specific information about the disk space usage ...
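For example, this breaks each dataset's usage down into snapshots, the dataset itself, and its children:
Code:
zfs list -o space -r rpool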
 
I don't really think I have a disk space problem.
zfs list gives me:
Code:
root@proxmox:~# zfs list
NAME                                     USED  AVAIL     REFER  MOUNTPOINT
rpool                                   48.8G   401G      104K  /rpool
rpool/ROOT                              1.22G   401G       96K  /rpool/ROOT
rpool/ROOT/pve-1                        1.22G   401G     1.22G  /
rpool/data                              39.0G   401G      112K  /rpool/data
rpool/data/subvol-105-disk-0             966M  7.10G      923M  /rpool/data/subvol-105-disk-0
rpool/data/subvol-106-disk-0             647M  7.37G      645M  /rpool/data/subvol-106-disk-0
rpool/data/subvol-107-disk-0             573M  7.44G      569M  /rpool/data/subvol-107-disk-0
rpool/data/subvol-110-disk-0             915M  7.15G      870M  /rpool/data/subvol-110-disk-0
rpool/data/subvol-111-disk-0            1.26G  6.80G     1.20G  /rpool/data/subvol-111-disk-0
rpool/data/vm-101-disk-0                10.1G   401G     9.47G  -
rpool/data/vm-101-state-SafetySnapshot   882M   401G      882M  -
rpool/data/vm-102-disk-0                12.5G   401G     12.5G  -
rpool/data/vm-103-disk-0                6.32G   401G     5.09G  -
rpool/data/vm-103-state-SafetySnapshot   569M   401G      569M  -
rpool/data/vm-104-disk-0                1.33G   401G     1.33G  -
rpool/data/vm-104-disk-1                  92K   401G       92K  -
rpool/data/vm-109-disk-0                2.94G   401G     2.94G  -
rpool/data/vm-112-disk-0                  56K   401G       56K  -
rpool/swap                              8.50G   409G       92K  -

df:
Code:
root@proxmox:~# df
Filesystem                     1K-blocks       Used   Available Use% Mounted on
udev                            16360156          0    16360156   0% /dev
tmpfs                            3283532      14980     3268552   1% /run
rpool/ROOT/pve-1               421460608    1281920   420178688   1% /
tmpfs                           16417640      43680    16373960   1% /dev/shm
tmpfs                               5120          0        5120   0% /run/lock
tmpfs                           16417640          0    16417640   0% /sys/fs/cgroup
rpool                          420178816        128   420178688   1% /rpool
rpool/data                     420178816        128   420178688   1% /rpool/data
rpool/ROOT                     420178816        128   420178688   1% /rpool/ROOT
rpool/data/subvol-106-disk-0     8388608     660096     7728512   8% /rpool/data/subvol-106-disk-0
rpool/data/subvol-111-disk-0     8388608    1257472     7131136  15% /rpool/data/subvol-111-disk-0
rpool/data/subvol-110-disk-0     8388608     890880     7497728  11% /rpool/data/subvol-110-disk-0
rpool/data/subvol-107-disk-0     8388608     583168     7805440   7% /rpool/data/subvol-107-disk-0
rpool/data/subvol-105-disk-0     8388608     945024     7443584  12% /rpool/data/subvol-105-disk-0
/dev/fuse                          30720         24       30696   1% /etc/pve
//192.168.1.5/PVE          1999297764 1089860012   909437752  55% /mnt/pve/PC
tmpfs                            3283528          0     3283528   0% /run/user/0
//192.168.1.201/Proxmox     18292320900  150335727 18141985173   1% /mnt/pve/TrueNAS

As you can see in the df output, nothing of udev is used at all. Only when I copy a 50GB qcow2 image file from my PC (/mnt/pve/PC) to /rpool/data/ does it fill up, and the whole system crashes shortly after.

I'll remove the swap again if it can cause issues; I hope that resolves this.

*edit: oh my, how embarrassing. The copy command I used wrote to /dev/rpool/data/, and of course /dev (a RAM-backed tmpfs) is going to run out of space....
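For reference, instead of copying the qcow2 around by hand, the cleaner route on ZFS-backed storage seems to be letting Proxmox import it as a zvol (the VMID 101 and file name are just examples):
Code:
qm importdisk 101 oldserver.qcow2 local-zfs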

But still, as soon as I hit the maximum RAM by starting all VMs, the whole server just crashes. Can I prevent this somehow?
 
[...]
*edit: oh my, how embarrassing. The copy command I used wrote to /dev/rpool/data/, and of course /dev (a RAM-backed tmpfs) is going to run out of space....

... At least one issue seems to be resolved ... ;)

But still, as soon as I hit the maximum RAM by starting all VMs, the whole server just crashes. Can I prevent this somehow?

It seems that the RAM usage of ZFS isn't limited, or the limit isn't active yet. Did you limit the ZFS memory usage?


After that, have you changed the memory settings of the VMs? I'd recommend dynamic memory assignment (ballooning) for both TrueNAS and Windows Server, so the VMs only use as much RAM as they really need. I think setting min. memory = 2GiB and max. memory = 8GiB in both cases should work without performance issues. As I mentioned yesterday, this is my setup for the FreeNAS VM (5x 3TB virtual HDDs), and it works fine for me ...

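In the VM's config file this corresponds to something like the following (a sketch; 102 is just an example VMID, values are in MiB):
Code:
# /etc/pve/qemu-server/102.conf (excerpt)
memory: 8192     # maximum memory
balloon: 2048    # minimum memory; a non-zero value enables ballooning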
 
...
But still, as soon as I hit the maximum RAM by starting all VMs, the whole server just crashes. Can I prevent this somehow?
Please make sure that you also set the minimum memory for ZFS. If you only set the maximum and it is lower than the automatically calculated minimum, ZFS will still use more memory.
You can use cat /proc/spl/kstat/zfs/arcstats to check whether c_min is less than c_max and whether your settings are applied correctly (the values are in bytes).
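A sketch of a zfs.conf that sets both, plus the check (the 1GiB/2GiB values below are just examples, in bytes):
Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=2147483648

# verify after updating the initramfs and rebooting:
grep -E '^c_(min|max)' /proc/spl/kstat/zfs/arcstats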
 
Thanks for all your help!
I disabled swap, set the ZFS min and max memory to 2GB so it always uses the same amount of RAM, and played around with the RAM allocation of the VMs.
Since TrueNAS seems to just take whatever it can get, I gave it a fixed 8GB without ballooning; it would take it anyway. The Windows Server now has your recommended 2-8GB with ballooning and runs fine.
Now, with all my current VMs running, I'm sitting at around 80% RAM usage, so that works out nicely.
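(In case anyone wonders, removing the swap zvol again boiled down to this, assuming it was created as rpool/swap and added to /etc/fstab:)
Code:
swapoff /dev/zvol/rpool/swap   # stop swapping to the zvol
# remove its line from /etc/fstab, then delete the zvol itself:
zfs destroy rpool/swap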

Is it normal, though, that Proxmox crashes or kills VMs when there is no more RAM available?

I really worry about that because I had trouble before with my (unprivileged) openHAB CT. openHAB runs on Java, and its process was using 11GB of RAM after a couple of days of uptime, even though I had limited the CT's RAM in Proxmox to 1GB. I hopefully fixed this by limiting the Java Xmx value in openHAB's start parameters to 512MB, but I don't know whether an openHAB update might break something again. If this Java process just blows itself up again at some point and there is no way of hard-capping it, it could potentially crash my server and endanger my files by killing the TrueNAS VM.
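For reference, this is roughly what the two caps look like (108 is a hypothetical CT ID, and the openHAB path assumes the standard openHAB 2 Debian packaging):
Code:
# hard cap on the container: 1GiB RAM plus 512MiB swap
pct set 108 --memory 1024 --swap 512

# Java heap cap inside the CT, e.g. in /etc/default/openhab2:
EXTRA_JAVA_OPTS="-Xmx512m"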
 
