arcstat.py confusing output

WhiteStarEOF

Hello,

I'm running into some confusion with the output of arcstat.py. As I understand it, in the last two columns of the output, arcsz is the current size of the ARC and c is the ARC maximum size.

Code:
root@myprox:~# arcstat.py 1
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
17:31:19     3     3    100     3  100     0    0     1  100   3.7G  3.7G
17:31:20  2.3K   467     20   434   18    33   76    10   58   3.7G  3.7G
17:31:21  1.1K   791     71   502   62   289   98     8   38   3.7G  3.7G
17:31:22  1.3K   832     63   468   50   364   92    38   20   3.7G  3.7G
17:31:23  1.9K  1.2K     66   643   51   598   97    28   11   3.5G  3.5G
17:31:24  2.2K   749     34   515   27   234   72    33   26   3.5G  3.5G
17:31:25  2.1K  1.0K     50   446   31   595   93    32    8   3.5G  3.5G
17:31:26  1.6K   880     54   862   53    18   66    34   82   3.5G  3.5G
17:31:27  2.8K  1.5K     53   663   37   824   83    31    7   3.5G  3.5G
17:31:28  2.8K  1.4K     50   460   26   971   86    41    6   3.5G  3.5G
17:31:29  1.1K   775     68   697   66    78   93    31   58   3.5G  3.5G
17:31:30  1.8K   733     41   519   34   214   84    23   17   3.5G  3.5G
17:31:31  2.7K   825     30   467   22   358   59    24    8   3.5G  3.5G
17:31:32  1.5K   722     47   615   49   107   38    16   11   3.5G  3.5G
17:31:33  2.1K  1.2K     57   450   34   761   95    20    4   3.6G  3.6G

arcsz is never larger than c, which leads me to believe that c is the max. But I don't understand what would make the ARC max change size. This becomes a problem because that number gets as low as 50M, and I need to figure out why.

This is Proxmox 4.4-18.
 
Hi,

arcsz is the ARC size.
c is the ARC target size, not the max.
c_max is the max, and it is static.

It would be interesting to know what your c_min and c_max are.
I personally prefer a static ZFS ARC size, to avoid the overhead of the ARC constantly growing and shrinking.
It also makes it easier to calculate the memory usage of the system.
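
For what it's worth, a static ARC is usually done by pinning zfs_arc_min and zfs_arc_max to the same value. A minimal sketch, using 4000000000 bytes purely as an example value:

Code:
# /etc/modprobe.d/zfs.conf -- pin the ARC by giving min and max the same value
# (4000000000 bytes is just an example; size it for your workload)
options zfs zfs_arc_min=4000000000
options zfs zfs_arc_max=4000000000

# if the root filesystem is on ZFS, the module is loaded from the initramfs,
# so rebuild it afterwards for the options to apply at the next boot:
update-initramfs -u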
 
It looks like I don't have c_min recorded. I know we set "options zfs zfs_arc_max=4000000000" in /etc/modprobe.d/zfs.conf, but it never takes. We have a cron entry @reboot to echo 4000000000 into /sys/module/zfs/parameters/zfs_arc_max. Does that not set it to a static size?
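
As a quick sanity check, the live values can be read back to see whether the setting actually took (paths as already used above; note that zfs_arc_max only sets the upper limit c_max, so the target c can still be shrunk below it):

Code:
# runtime value of the module parameter
cat /sys/module/zfs/parameters/zfs_arc_max

# current target (c) and the limits (c_min, c_max) as the ARC sees them
grep -E '^(c|c_min|c_max|size) ' /proc/spl/kstat/zfs/arcstats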
 
Please send the output of:

Code:
cat /proc/spl/kstat/zfs/arcstats
 
Here it is, though unfortunately on a clean boot, so c still shows 3.7G. I will keep an eye on it and repost whenever c changes value.

Code:
# cat /proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 2442007230 349203140231
name                            type data
hits                            4    51445
misses                          4    14800
demand_data_hits                4    38775
demand_data_misses              4    1881
demand_metadata_hits            4    11171
demand_metadata_misses          4    10518
prefetch_data_hits              4    49
prefetch_data_misses            4    257
prefetch_metadata_hits          4    1450
prefetch_metadata_misses        4    2144
mru_hits                        4    15690
mru_ghost_hits                  4    0
mfu_hits                        4    34256
mfu_ghost_hits                  4    0
deleted                         4    67
mutex_miss                      4    0
evict_skip                      4    5887
evict_not_enough                4    0
evict_l2_cached                 4    0
evict_l2_eligible               4    529408
evict_l2_ineligible             4    8192
evict_l2_skip                   4    0
hash_elements                   4    4968
hash_elements_max               4    4969
hash_collisions                 4    10
hash_chains                     4    8
hash_chain_max                  4    1
p                               4    2000000000
c                               4    4000000000
c_min                           4    33554432
c_max                           4    4000000000
size                            4    158319816
hdr_size                        4    2029392
data_size                       4    104048640
metadata_size                   4    47247872
other_size                      4    4993912
anon_size                       4    90112
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    95644672
mru_evictable_data              4    53079552
mru_evictable_metadata          4    31628800
mru_ghost_size                  4    0
mru_ghost_evictable_data        4    0
mru_ghost_evictable_metadata    4    0
mfu_size                        4    55561728
mfu_evictable_data              4    50969088
mfu_evictable_metadata          4    1614848
mfu_ghost_size                  4    0
mfu_ghost_evictable_data        4    0
mfu_ghost_evictable_metadata    4    0
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_lock_retry            4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_cdata_free_on_write          4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_compress_successes           4    0
l2_compress_zeros               4    0
l2_compress_failures            4    0
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    0
memory_indirect_count           4    0
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    54271176
arc_meta_limit                  4    3000000000
arc_meta_max                    4    54272384
arc_meta_min                    4    16777216
arc_need_free                   4    0
arc_sys_free                    4    261963776
 
Here we go. I tried to capture arcstat.py output to go along with this, but I didn't realize until after I disconnected that the generated file was 0 bytes. Oops. I can go back and get that if it's important. Currently arcsz and c are hanging out around 600 MB. Here is the output of /proc/spl/kstat/zfs/arcstats.

Edit:
Also, right now we have the ARC max at 5000000000, because updating the ARC max fixed an issue where txg_group processes (one for each VM) would consume an entire core and ARC activity would drop to all zeros. Changing the ARC max seemed to kick things into gear and got transaction groups processing again.

Code:
name                            type data
hits                            4    15303689
misses                          4    60285975
demand_data_hits                4    2346852
demand_data_misses              4    47045508
demand_metadata_hits            4    12579198
demand_metadata_misses          4    12852666
prefetch_data_hits              4    105526
prefetch_data_misses            4    375774
prefetch_metadata_hits          4    272113
prefetch_metadata_misses        4    12027
mru_hits                        4    6758971
mru_ghost_hits                  4    201189
mfu_hits                        4    8167153
mfu_ghost_hits                  4    5181
deleted                         4    2060393
mutex_miss                      4    154
evict_skip                      4    620030
evict_not_enough                4    49500
evict_l2_cached                 4    0
evict_l2_eligible               4    25508987392
evict_l2_ineligible             4    2241914880
evict_l2_skip                   4    0
hash_elements                   4    60092
hash_elements_max               4    489293
hash_collisions                 4    416986
hash_chains                     4    471
hash_chain_max                  4    4
p                               4    631904972
c                               4    684983144
c_min                           4    33554432
c_max                           4    5000000000
size                            4    650997976
hdr_size                        4    23970528
data_size                       4    308298240
metadata_size                   4    293523456
other_size                      4    25205752
anon_size                       4    12797440
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    408603136
mru_evictable_data              4    307830784
mru_evictable_metadata          4    60514816
mru_ghost_size                  4    143155200
mru_ghost_evictable_data        4    20586496
mru_ghost_evictable_metadata    4    122568704
mfu_size                        4    180421120
mfu_evictable_data              4    156160
mfu_evictable_metadata          4    175513600
mfu_ghost_size                  4    13008896
mfu_ghost_evictable_data        4    0
mfu_ghost_evictable_metadata    4    13008896
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_lock_retry            4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_cdata_free_on_write          4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_compress_successes           4    0
l2_compress_zeros               4    0
l2_compress_failures            4    0
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    2
memory_indirect_count           4    64238
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    342699736
arc_meta_limit                  4    3000000000
arc_meta_max                    4    908804536
arc_meta_min                    4    16777216
arc_need_free                   4    0
arc_sys_free                    4    525123584
 
