Hi,
I was wondering if someone could shed some light on an issue I'm having with RAM. I know ZFS consumes a lot of RAM, but what I don't get is why RAM usage peaks when the servers are idle and drops when they're actually working; in theory it should be the other way around. I found that disabling ballooning helped a lot in keeping Proxmox from sucking up the RAM. My question is: when a VM has fixed RAM, why does it still consume more from the host? I ran a stress test on the VM's RAM and it started eating into the RAM of the Proxmox host. My current setup is RAID 1 ZFS for the Proxmox OS and RAID 1 ZFS for vmdata. I also read about lowering the ARC, but I'm not sure what the rule of thumb is with 8 GB of RAM on a test server.
Thank you
https://imgur.com/a/uDkeG
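For context on the fixed-RAM question: a fixed-RAM VM under KVM only allocates host memory as the guest actually touches its pages, so a stress test inside the guest drives the host-side allocation up toward the configured maximum (plus some QEMU overhead), and with ballooning off that memory isn't handed back while the VM runs. Disabling ballooning can also be made permanent per VM; a minimal sketch, where VMID 100 is just a placeholder:
Code:
# Pin VM 100 (hypothetical VMID) at 2 GiB and disable the balloon device
qm set 100 --memory 2048 --balloon 0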
Code:
cat /proc/spl/kstat/zfs/arcstats | grep c_
c_min                 4  33554432
c_max                 4  4052455424
arc_no_grow           4  0
arc_tempreserve       4  0
arc_loaned_bytes      4  0
arc_prune             4  0
arc_meta_used         4  644899720
arc_meta_limit        4  3039341568
arc_meta_max          4  646538696
arc_meta_min          4  16777216
arc_need_free         4  0
arc_sys_free          4  126636032
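The c_max above (~3.8 GB) is the ZFS-on-Linux default of roughly half the physical RAM, which is why the ARC grows so large while the box is idle. On an 8 GB test server a common rule of thumb is to cap the ARC at around 1-2 GB and leave the rest to the VMs; a minimal sketch, assuming a 2 GiB cap (the figure is an example, not a hard rule):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (2 * 1024^3 bytes)
options zfs zfs_arc_max=2147483648
Then rebuild the initramfs so the limit applies at boot, and reboot:
Code:
update-initramfs -u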
Code:
arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
09:51:13    10     0      0     0    0     0    0     0    0   3.3G  3.3G
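To try a lower cap without rebooting, the module parameter can also be changed on the running system; the ARC then shrinks over time (or under memory pressure), which you can watch in the arcsz and c columns of arcstat:
Code:
# Apply a 2 GiB cap live (value in bytes; same example figure as above)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max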
Code:
zfs get all
NAME    PROPERTY              VALUE                  SOURCE
vmdata  type                  filesystem             -
vmdata  creation              Fri Dec 8 13:36 2017   -
vmdata  used                  14.9G                  -
vmdata  available             884G                   -
vmdata  referenced            96K                    -
vmdata  compressratio         1.00x                  -
vmdata  mounted               yes                    -
vmdata  quota                 none                   default
vmdata  reservation           none                   default
vmdata  recordsize            128K                   default
vmdata  mountpoint            /vmdata                default
vmdata  sharenfs              off                    default
vmdata  checksum              on                     default
vmdata  compression           off                    default
vmdata  atime                 on                     default
vmdata  devices               on                     default
vmdata  exec                  on                     default
vmdata  setuid                on                     default
vmdata  readonly              off                    default
vmdata  zoned                 off                    default
vmdata  snapdir               hidden                 default
vmdata  aclinherit            restricted             default
vmdata  canmount              on                     default
vmdata  xattr                 on                     default
vmdata  copies                1                      default
vmdata  version               5                      -
vmdata  utf8only              off                    -
vmdata  normalization         none                   -
vmdata  casesensitivity       sensitive              -
vmdata  vscan                 off                    default
vmdata  nbmand                off                    default
vmdata  sharesmb              off                    default
vmdata  refquota              none                   default
vmdata  refreservation        none                   default
vmdata  primarycache          all                    default
vmdata  secondarycache        all                    default
vmdata  usedbysnapshots       0                      -
vmdata  usedbydataset         96K                    -
vmdata  usedbychildren        14.9G                  -
vmdata  usedbyrefreservation  0                      -
vmdata  logbias               latency                default
vmdata  dedup                 off                    default
vmdata  mlslabel              none                   default
vmdata  sync                  disabled               local
vmdata  refcompressratio      1.00x                  -
vmdata  written               96K                    -
vmdata  logicalused           14.7G                  -
vmdata  logicalreferenced     40K                    -
vmdata  filesystem_limit      none                   default
vmdata  snapshot_limit        none                   default
vmdata  filesystem_count      none                   default
vmdata  snapshot_count        none                   default
vmdata  snapdev               hidden                 default
vmdata  acltype               off                    default
vmdata  context               none                   default
vmdata  fscontext             none                   default
vmdata  defcontext            none                   default
vmdata  rootcontext           none                   default
vmdata  relatime              off                    default
vmdata  redundant_metadata    all                    default
vmdata  overlay               off                    default
Thank you