zfs - arc_summary -> VDEV cache disabled - but the cache exists?!

djdomi

Hi,

I'm a bit confused about my cache setup.

I have two SSDs here acting as cache devices, and according to zpool iostat they are working nicely:

Bash:
root@pve:/opt# zpool iostat
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       82.9G  3.54T      3     82  48.2K   749K
root@pve:/opt# zpool iostat -v
                                                      capacity     operations     bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                               82.9G  3.54T      3     82  48.2K   749K
  raidz2-0                                          82.9G  3.54T      3     82  48.1K   742K
    ata-WDC_WD10JFCX-68N6GN0_WD-WXD1A25L2PLF-part3      -      -      0     23  12.0K   185K
    ata-WDC_WD10JFCX-68N6GN0_WD-WX11A354V7EP-part3      -      -      0     23  12.0K   185K
    ata-ST1000NM0011_Z1N3V2V0-part3                     -      -      0     17  12.0K   186K
    ata-ST1000NM0011_Z1N4L3LC-part3                     -      -      0     17  12.2K   186K
logs                                                    -      -      -      -      -      -
  sda2                                               384K  49.5G      0      0     56  3.48K
  sdb2                                                36K  49.5G      0      0     56  3.38K
cache                                                   -      -      -      -      -      -
  sda3                                              21.0G  32.2G      1      0  10.1K  18.1K
  sdb3                                              21.1G  32.2G      1      0  9.92K  19.9K
--------------------------------------------------  -----  -----  -----  -----  -----  -----
errors: No known data errors

Code:
Nachtrag für mr44er:
cat /proc/spl/kstat/zfs/arcstats |grep "l2_"
evict_l2_cached                 4    0
evict_l2_eligible               4    940032
evict_l2_eligible_mfu           4    196608
evict_l2_eligible_mru           4    743424
evict_l2_ineligible             4    8192
evict_l2_skip                   4    0
l2_hits                         4    23616
l2_misses                       4    25770
l2_prefetch_asize               4    905216
l2_mru_asize                    4    43358922752
l2_mfu_asize                    4    1760623104
l2_bufc_data_asize              4    42974898688
l2_bufc_metadata_asize          4    2145552384
l2_feeds                        4    13019
l2_rw_clash                     4    0
l2_read_bytes                   4    168286208
l2_write_bytes                  4    401588224
l2_writes_sent                  4    3160
l2_writes_done                  4    3160
l2_writes_error                 4    0
l2_writes_lock_retry            4    1
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    62823760384
l2_asize                        4    45120451072
l2_hdr_size                     4    91861824
l2_log_blk_writes               4    36
l2_log_blk_avg_asize            4    13968
l2_log_blk_asize                4    13810176
l2_log_blk_count                4    1026
l2_data_to_meta_ratio           4    497
l2_rebuild_success              4    2
l2_rebuild_unsupported          4    0
l2_rebuild_io_errors            4    0
l2_rebuild_dh_errors            4    0
l2_rebuild_cksum_lb_errors      4    0
l2_rebuild_lowmem               4    0
l2_rebuild_size                 4    63706043392
l2_rebuild_asize                4    45487608832
l2_rebuild_bufs                 4    1011780
l2_rebuild_bufs_precached       4    22143
l2_rebuild_log_blks             4    990
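
Side note on reading those counters: l2_hits and l2_misses alone already show the L2ARC is being used, and a rough hit ratio can be pulled out with a one-liner like the one below (only a sketch; the field positions assume the stock arcstats layout shown above):

Bash:
# rough L2ARC hit ratio computed from l2_hits and l2_misses
awk '/^l2_hits / {h=$3} /^l2_misses / {m=$3} END {printf "L2ARC hit ratio: %.1f%%\n", 100*h/(h+m)}' /proc/spl/kstat/zfs/arcstats

With the numbers above that works out to 23616 / (23616 + 25770) ≈ 47.8 %.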

However, when I run arc_summary

Code:
VDEV cache disabled, skipping section

This confuses me, because judging by other screenshots I've found on Google, there should actually be something like "cache (healthy)" or similar in there.




Hence the question: how do I get it enabled, or is this simply an error in my thinking?
 
Check with cat /proc/spl/kstat/zfs/arcstats whether you have hits and misses listed under l2_*.
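
Spelled out (minimal sketch, assuming the standard Linux kstat path):

Bash:
cat /proc/spl/kstat/zfs/arcstats | grep "l2_"
# non-zero l2_hits / l2_misses means the L2ARC is being fed and read from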
 
Isn't the vdev cache the special device?

No:
cache
A device used to cache storage pool data. A cache device cannot be configured as a mirror or raidz group. For more information, see the Cache Devices section.
https://openzfs.github.io/openzfs-docs/man/7/zpoolconcepts.7.html#cache

special
A device dedicated solely for allocating various kinds of internal metadata, and optionally small file blocks. The redundancy of this device should match the redundancy of the other normal devices in the pool. If more than one special device is specified, then allocations are load-balanced between those devices.

For more information on special allocations, see the Special Allocation Class section.
https://openzfs.github.io/openzfs-docs/man/7/zpoolconcepts.7.html#special
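
To make the difference concrete, this is roughly how the two vdev types get attached to a pool (just a sketch; the device paths are placeholders, not the ones from this thread):

Bash:
# L2ARC read cache -- needs no redundancy, its contents can always be re-read from the pool
zpool add rpool cache /dev/disk/by-id/ssd-for-l2arc-part3

# special allocation class -- should match the pool's redundancy, e.g. as a mirror
zpool add rpool special mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b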
 
You're mixing things up there, though. If I look at the source code, vdev_cache is exactly that: vdev_cache_stats. It does NOT refer to the L2ARC.


Edit:
Code:
SECTION_PATHS = {'arc': 'arcstats',
                 'dmu': 'dmu_tx',
                 'l2arc': 'arcstats',  # L2ARC stuff lives in arcstats
                 'vdev': 'vdev_cache_stats',
                 'xuio': 'xuio_stats',
                 'zfetch': 'zfetchstats',
                 'zil': 'zil'}
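
For what it's worth, the kstat that section reads, and the tunable that keeps it disabled, can be checked directly (paths assume a Linux system whose OpenZFS build still ships the vdev cache):

Bash:
# the kstat behind arc_summary's "VDEV cache" section -- not the L2ARC
cat /proc/spl/kstat/zfs/vdev_cache_stats

# the section is skipped because this tunable defaults to 0
cat /sys/module/zfs/parameters/zfs_vdev_cache_size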
 
You're mixing things up there, though.

That may well be, yes. :eek:
I was only referring to the two different VDEV types in general and didn't realize that you were explicitly referring to:
However, when I run arc_summary

Code:
VDEV cache disabled, skipping section

But are you sure that this refers to the/a special device?
According to this:
Code:
def section_vdev(kstats_dict):
    """Collect information on VDEV caches"""

    # Currently [Nov 2017] the VDEV cache is disabled, because it is actually
    # harmful. When this is the case, we just skip the whole entry. See
    # https://github.com/openzfs/zfs/blob/master/module/zfs/vdev_cache.c
    # for details
    tunables = get_vdev_params()

    if tunables[VDEV_CACHE_SIZE] == '0':
        print('VDEV cache disabled, skipping section\n')
        return
https://github.com/openzfs/zfs/blob/master/cmd/arc_summary#L962
Code:
 * Virtual device read-ahead caching.
 *
 * This file implements a simple LRU read-ahead cache.
https://github.com/openzfs/zfs/blob/master/module/zfs/vdev_cache.c#L37
Code:
 * TODO: Note that with the current ZFS code, it turns out that the
 * vdev cache is not helpful, and in some cases actually harmful.  It
 * is better if we disable this.  Once some time has passed, we should
 * actually remove this to simplify the code.  For now we just disable
 * it by setting the zfs_vdev_cache_size to zero.  Note that Solaris 11
 * has made these same changes.
https://github.com/openzfs/zfs/blob/master/module/zfs/vdev_cache.c#L79
it doesn't sound like that to me.

But to be honest, at this point I have no idea. Sorry for the noise.
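
That said, if the goal is only to see the L2ARC statistics from arc_summary, the l2arc section can apparently be requested directly (per the SECTION_PATHS mapping quoted above it is fed from arcstats; treat the exact flag as an assumption taken from arc_summary's help output):

Bash:
arc_summary -s l2arc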
 
