[SOLVED] ZFS RAM peaks?

killmasta93

Hi,
I was wondering if someone could shed some light on an issue I'm having with RAM. I know ZFS consumes a lot of RAM, but what I don't get is why RAM usage peaks when the servers are idle and drops when they are actually working; in theory it should be the other way around. I did find that disabling ballooning helped a lot in keeping Proxmox from eating up the RAM. My question is: when a VM has a fixed amount of RAM, why does it still consume more from the host? I ran a stress test on the VM's RAM and it started consuming the host's RAM as well. My current setup is a RAID 1 ZFS pool for the Proxmox OS and a RAID 1 ZFS pool for vmdata. I also read about lowering the ARC, but I'm not sure what the rule of thumb is if I have 8 GB of RAM on a test server.

Thank you

https://imgur.com/a/uDkeG

Code:
cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min                           4    33554432
c_max                           4    4052455424
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    644899720
arc_meta_limit                  4    3039341568
arc_meta_max                    4    646538696
arc_meta_min                    4    16777216
arc_need_free                   4    0
arc_sys_free                    4    126636032

Code:
 arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
09:51:13    10     0      0     0    0     0    0     0    0   3.3G  3.3G

Code:
 zfs get all
NAME                  PROPERTY               VALUE                  SOURCE

vmdata                type                   filesystem             -
vmdata                creation               Fri Dec  8 13:36 2017  -
vmdata                used                   14.9G                  -
vmdata                available              884G                   -
vmdata                referenced             96K                    -
vmdata                compressratio          1.00x                  -
vmdata                mounted                yes                    -
vmdata                quota                  none                   default
vmdata                reservation            none                   default
vmdata                recordsize             128K                   default
vmdata                mountpoint             /vmdata                default
vmdata                sharenfs               off                    default
vmdata                checksum               on                     default
vmdata                compression            off                    default
vmdata                atime                  on                     default
vmdata                devices                on                     default
vmdata                exec                   on                     default
vmdata                setuid                 on                     default
vmdata                readonly               off                    default
vmdata                zoned                  off                    default
vmdata                snapdir                hidden                 default
vmdata                aclinherit             restricted             default
vmdata                canmount               on                     default
vmdata                xattr                  on                     default
vmdata                copies                 1                      default
vmdata                version                5                      -
vmdata                utf8only               off                    -
vmdata                normalization          none                   -
vmdata                casesensitivity        sensitive              -
vmdata                vscan                  off                    default
vmdata                nbmand                 off                    default
vmdata                sharesmb               off                    default
vmdata                refquota               none                   default
vmdata                refreservation         none                   default
vmdata                primarycache           all                    default
vmdata                secondarycache         all                    default
vmdata                usedbysnapshots        0                      -
vmdata                usedbydataset          96K                    -
vmdata                usedbychildren         14.9G                  -
vmdata                usedbyrefreservation   0                      -
vmdata                logbias                latency                default
vmdata                dedup                  off                    default
vmdata                mlslabel               none                   default
vmdata                sync                   disabled               local
vmdata                refcompressratio       1.00x                  -
vmdata                written                96K                    -
vmdata                logicalused            14.7G                  -
vmdata                logicalreferenced      40K                    -
vmdata                filesystem_limit       none                   default
vmdata                snapshot_limit         none                   default
vmdata                filesystem_count       none                   default
vmdata                snapshot_count         none                   default
vmdata                snapdev                hidden                 default
vmdata                acltype                off                    default
vmdata                context                none                   default
vmdata                fscontext              none                   default
vmdata                defcontext             none                   default
vmdata                rootcontext            none                   default
vmdata                relatime               off                    default
vmdata                redundant_metadata     all                    default
vmdata                overlay                off                    default

Thank you
 
With 8 GB of RAM you get a 4 GB ARC maximum by default (half of the installed RAM) if you do not change anything. As long as you do not swap a lot, it should be fine. If you're used to the "Windows way" of dealing with RAM, Linux handles it differently. RAM should be, and is, always used; even when the system is idle, an optimal system keeps as much RAM in use as possible, because only then can it be fast. The difference is that Windows counts RAM used for buffering as free, while Linux traditionally reports it as used. More on that topic can be found here: https://www.linuxatemyram.com/
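
To see where the RAM actually goes, you can compare the ARC size from arcstats against free. This is just a quick sanity check; the exact labels depend on your kernel and procps version:

Code:
# current ARC size ("size") vs. the configured maximum ("c_max"), in bytes
grep -wE 'size|c_max' /proc/spl/kstat/zfs/arcstats
# overall memory picture; note that the ARC is reported as "used", not as buffers/cache
free -h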
 
Thanks for the reply. Out of curiosity, I have read that ballooning on Windows is not so great; what do you recommend, should I enable or disable it? What also bugs me is how the VM takes more RAM from the host than its fixed allocation.

Thank you
 
Thanks for the reply, I will disable ballooning. Is there a way I can find out what is consuming RAM on the host (Proxmox)? I tried htop and top, but all I see are the KVM processes, which should be around 5 GB in total, yet the host is hitting 6-7 GB.

Thank you
 
You disabled sync on your pool but did not enable compression? Why? A compressed pool is faster than a non-compressed pool.

Your server has 8 GB of RAM and your KVMs hit 6-7 GB? That is a lot. Have you overprovisioned your memory?
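
Overprovisioned means you have assigned the guests more memory in total than the host can comfortably give them next to the ARC. You can quickly sum up what is assigned, for example like this (commands as on a default Proxmox VE install, adapt if your setup differs):

Code:
# the MEM(MB) column shows the configured memory per VM
qm list
# or read the memory setting straight from the VM config files
grep -H '^memory' /etc/pve/qemu-server/*.conf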
 
Thanks for the quick reply, you're right, I guess I forgot to enable compression on the vmdata pool :( I just did it right now with
Code:
zfs set compression=lz4 vmdata
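(From what I read, compression only applies to data written after the property is set, so existing blocks stay uncompressed; you can check it with)
Code:
zfs get compression,compressratio vmdata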
Also, I'm not really sure what you mean by overprovisioning memory?

Thank you
Pics
https://imgur.com/a/wzWUT
and this is the new data
Code:
vmdata                type                   filesystem             -
vmdata                creation               Fri Dec  8 13:36 2017  -
vmdata                used                   14.9G                  -
vmdata                available              884G                   -
vmdata                referenced             96K                    -
vmdata                compressratio          1.00x                  -
vmdata                mounted                yes                    -
vmdata                quota                  none                   default
vmdata                reservation            none                   default
vmdata                recordsize             128K                   default
vmdata                mountpoint             /vmdata                default
vmdata                sharenfs               off                    default
vmdata                checksum               on                     default
vmdata                compression            lz4                    local
vmdata                atime                  on                     default
vmdata                devices                on                     default
vmdata                exec                   on                     default
vmdata                setuid                 on                     default
vmdata                readonly               off                    default
vmdata                zoned                  off                    default
vmdata                snapdir                hidden                 default
vmdata                aclinherit             restricted             default
vmdata                canmount               on                     default
vmdata                xattr                  on                     default
vmdata                copies                 1                      default
vmdata                version                5                      -
vmdata                utf8only               off                    -
vmdata                normalization          none                   -
vmdata                casesensitivity        sensitive              -
vmdata                vscan                  off                    default
vmdata                nbmand                 off                    default
vmdata                sharesmb               off                    default
vmdata                refquota               none                   default
vmdata                refreservation         none                   default
vmdata                primarycache           all                    default
vmdata                secondarycache         all                    default
vmdata                usedbysnapshots        0                      -
vmdata                usedbydataset          96K                    -
vmdata                usedbychildren         14.9G                  -
vmdata                usedbyrefreservation   0                      -
vmdata                logbias                latency                default
vmdata                dedup                  off                    default
vmdata                mlslabel               none                   default
vmdata                sync                   disabled               local
vmdata                refcompressratio       1.00x                  -
vmdata                written                96K                    -
vmdata                logicalused            14.8G                  -
vmdata                logicalreferenced      40K                    -
vmdata                filesystem_limit       none                   default
vmdata                snapshot_limit         none                   default
vmdata                filesystem_count       none                   default
vmdata                snapshot_count         none                   default
vmdata                snapdev                hidden                 default
vmdata                acltype                off                    default
vmdata                context                none                   default
vmdata                fscontext              none                   default
vmdata                defcontext             none                   default
vmdata                rootcontext            none                   default
vmdata                relatime               off                    default
vmdata                redundant_metadata     all                    default
vmdata                overlay                off                    default
 
Looks fine. You have 8 GB of RAM, of which by default up to 4 GB can be used by the ZFS ARC. On top of that your VMs are assigned around 5 GB of RAM, not counting what the hypervisor itself uses, so your usage of 6-7 GB is totally normal. I'd actually like to see 8 GB in use for best performance. You can also see that you already use KSM (kernel samepage merging), which combines identical memory pages to reduce memory usage.
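
If you want to see how much KSM is actually merging, the kernel exposes counters for it; a quick check, assuming the standard Proxmox setup with ksmtuned running:

Code:
# number of memory pages currently shared between VMs (multiply by the 4 KiB page size for bytes)
cat /sys/kernel/mm/ksm/pages_sharing
# the daemon that tunes KSM on Proxmox
systemctl status ksmtuned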
 
Thanks for the reply. After many hours of troubleshooting I think it's finally good and stable. This is what I did, for anyone else in the same pickle.
For servers that only do AD/DC and file sharing and don't need much RAM, lower the ARC to a 2 GB maximum (with a 512 MB minimum, i.e. a 4:1 max:min ratio) in /etc/modprobe.d/zfs.conf:
Code:
# Min 512MB / Max 2048 MB Limit
options zfs zfs_arc_min=536870912
options zfs zfs_arc_max=2147483648
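
Also, since my Proxmox root is on ZFS, I believe the module options have to be baked into the initramfs before they apply at boot (at least that is what I had to do):
Code:
# rebuild the initramfs so the new zfs module options are picked up on the next boot
update-initramfs -u -k all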

then go to /etc/sysctl.conf and at the bottom add this
Code:
vm.min_free_kbytes = 262144
vm.dirty_ratio = 5
vm.dirty_background_ratio = 1
vm.swappiness = 1
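
(If you don't want to wait for a reboot, I think the sysctl values can also be reloaded right away, though I rebooted anyway to be safe:)
Code:
# re-read /etc/sysctl.conf and apply the values immediately
sysctl -p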

After rebooting I have a smooth 4-5 GB with the ballooning service on. The only thing I'm somewhat unsure about is why some users complain about ballooning. KSM is sharing around 3.5 GB, which is pretty good I guess.

Does anyone else have more tips, e.g. further changes to the ZFS config?

Hope this helps someone else.

Also, to refresh the ARC so it goes down, run this:

Code:
free && sync && echo 3 > /proc/sys/vm/drop_caches && free

but normally there is no need to run this unless you are really about to panic over the RAM.
 
Also, to refresh the ARC so it goes down, run this:

Code:
free && sync && echo 3 > /proc/sys/vm/drop_caches && free
but normally there is no need to run this unless you are really about to panic over the RAM.

Again: you really, absolutely do not want to have free RAM on a Linux box. Dropping all caches will, of course, free all buffers, but your system will be very, very slow until the buffers are filled with data again.
 
Thanks for the reply. I did read that there are some consequences, but to tell you the truth I have not noticed it being very slow. Of course this is only a test environment, so really it's just an AD/DC file share plus pfSense. I would only run this if Proxmox were at 99% RAM, as an extreme measure. But a curious question: if I reboot Proxmox, wouldn't I lose all the buffers anyway?
 
Thanks for the reply. I did read that there are some consequences, but to tell you the truth I have not noticed it being very slow. Of course this is only a test environment, so really it's just an AD/DC file share plus pfSense. I would only run this if Proxmox were at 99% RAM, as an extreme measure. But a curious question: if I reboot Proxmox, wouldn't I lose all the buffers anyway?
The system caches files, for example, and if you empty the cache it needs to fetch them from the HDD again. High RAM usage isn't always bad. Depending on the size of the ZFS pool you may even need more RAM. Do you have an L2ARC (cache) configured?
Note that flushing the cache does not affect the ZFS ARC: only the kernel page cache is freed, and the ARC is not part of it. So you would only free the RAM the system has cached, not the ZFS cache, which I believe is what you actually want to free, although I recommend not doing this. AFAIK you can clear the ZFS ARC (almost) completely by setting the maximum ARC size to a very low value.
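
Something along these lines should work at runtime on ZFS on Linux; untested here, and the values are just examples in bytes:

Code:
# temporarily shrink the ARC maximum to 512 MB; the ARC is then evicted down over time
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max
# restore the previous maximum (here 4 GB) afterwards
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max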
 
Thanks for those sysctl settings. I had that problem myself: some VMs grew to absurdly high RAM usage, like a pfSense VM that used 400 MB inside the guest but consumed 10 GB on the host.
 
Thank you for the tips, you're a savior.
 
