Hello.
I'm starting to update various servers from PVE 5.0 to 5.1. On most of them I have ZFS with an L2ARC cache device on SSD or NVMe, and I usually check the l2_hdr_size field in arcstats to see whether the L2ARC headers are using too much of my ARC memory.
With the new ZFS packages and kernel module version 0.7.2-1 (from /sys/module/zfs/version), I have to report that l2_hdr_size is always 0, even though the cache device is in use.
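For reference, here is how I'm reading the version on this box (modinfo should report the same value as the sysfs file):
Code:
$ cat /sys/module/zfs/version
0.7.2-1
$ modinfo zfs | grep ^version:
version:        0.7.2-1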
The cache device is enabled and in use:
Code:
$ zpool iostat -v
               capacity     operations     bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
rpool        18.3G  3.61T      0     25  2.98K   309K
  mirror     9.22G  1.80T      0     11  1.53K   126K
    sda2         -      -      0      5    795  63.2K
    sdb2         -      -      0      5    771  63.2K
  mirror     9.05G  1.80T      0     14  1.45K   183K
    sdc2         -      -      0      7    750  91.3K
    sdd2         -      -      0      7    736  91.3K
cache            -      -      -      -      -      -
  nvme0n1p3   569M  79.4G      0      1     26   116K
-----------  -----  -----  -----  -----  -----  -----
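To be sure writes keep going to the cache device, I also watch a few 5-second samples (zpool iostat takes an interval and a count):
Code:
$ zpool iostat -v rpool 5 3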
The ARC is also in use; utilization is low because this is a test server running only an idle Windows VM:
Code:
$ arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
18:38:59     0     0      0     0    0     0    0     0    0   1.4G   24G
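The same numbers can be read straight from the kstats when arcstat isn't handy (size is the current ARC size, c the target, both in bytes):
Code:
$ awk '$1 == "size" || $1 == "c" {print $1, $3}' /proc/spl/kstat/zfs/arcstats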
The contents of arcstats show l2_hdr_size stuck at 0:
Code:
$ cat /proc/spl/kstat/zfs/arcstats | grep l2_
evict_l2_cached              4    0
evict_l2_eligible            4    199680
evict_l2_ineligible          4    2048
evict_l2_skip                4    0
l2_hits                      4    0
l2_misses                    4    679711
l2_feeds                     4    423232
l2_rw_clash                  4    0
l2_read_bytes                4    0
l2_write_bytes               4    2450306048
l2_writes_sent               4    66737
l2_writes_done               4    66737
l2_writes_error              4    0
l2_writes_lock_retry         4    13
l2_evict_lock_retry          4    0
l2_evict_reading             4    0
l2_evict_l1cached            4    125079
l2_free_on_write             4    6135
l2_abort_lowmem              4    0
l2_cksum_bad                 4    0
l2_io_error                  4    0
l2_size                      4    875388416
l2_asize                     4    597329920
l2_hdr_size                  4    0
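Side note: l2_size vs l2_asize above shows the cached data is compressed (roughly 1.47:1), so blocks are clearly flowing into L2ARC, and every buffer held only in L2ARC still needs a small header in RAM. That's why a zero l2_hdr_size looks wrong to me. Quick ratio check over the same kstat file:
Code:
$ awk '$1 == "l2_size" {s = $3} $1 == "l2_asize" {a = $3} END {printf "%.2f\n", s / a}' /proc/spl/kstat/zfs/arcstats
1.47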
Has something changed from older versions of ZFS?
I'm on the latest updates:
Code:
$ pveversion -v
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-25
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-2-pve: 4.10.17-20
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90