> I have 2 Proxmox hosts, each with local ZFS: a draid2 of 8 SAS disks, 2 SAS/NVMe SSDs for the ZFS cache (L2ARC) and another 2 SAS/NVMe SSDs for the ZFS log (SLOG). The utilization of the ZFS cache is very low: only 458G out of 1.5TB. What can I do?

Regarding Proxmox and VMs: use stripes or mirrors instead of (d)RAIDZ(1/2/3) if you want better IOPS. This is not Proxmox-specific, so do more research on ZFS and learn that L2ARC and SLOG don't usually help with IOPS. Maybe a special device would help?

How much RAM cache is displayed in arc_summary?
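To answer that, you can read the ARC numbers either from arc_summary or straight from the kernel stats. A sketch of the relevant commands (the output values are host-specific; these commands only inspect, they change nothing):

```shell
# Human-readable ARC report (size, target, hit rates, L2ARC stats):
arc_summary | head -n 40

# Or pull the raw kstats directly: current size and the min/max targets.
awk '/^size|^c_max|^c_min/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Current module limit in bytes (0 means "use the built-in default"):
cat /sys/module/zfs/parameters/zfs_arc_max
```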
Really depends on what you want or what use case you have.

> I only use it for VMs, so must I add a special mirror device?
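For reference, adding a special vdev looks like this. The pool name `tank` and the device paths are placeholders, not taken from the thread; adjust them to your setup before running anything:

```shell
# A special vdev stores pool metadata (and optionally small blocks),
# which can help VM workloads on wide parity vdevs like draid2.
# It MUST be redundant: losing the special vdev loses the pool.
zpool add tank special mirror \
    /dev/disk/by-id/nvme-EXAMPLE-A \
    /dev/disk/by-id/nvme-EXAMPLE-B

# Optionally route small blocks to the special vdev as well
# (per dataset/zvol; the threshold here is just an example):
zfs set special_small_blocks=16K tank
```

Note that unlike L2ARC or SLOG devices, a special vdev cannot simply be removed later from a pool with raidz/draid data vdevs, so size it generously up front.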
Yes, by default Proxmox allows only 16GB for the ARC, since version 8 I think.

> 16GB? My host has 1TB of RAM!

Your RAM cache is at 100%. ZFS recommends 1GB of RAM for every 1TB of data, but I found that this bottlenecks even with L2ARC or SLOG NVMe disks, so I personally use 4GB of RAM per 1TB of data. Try increasing the ARC max limit.
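As a sketch of that rule of thumb (4 GiB of ARC per 1 TiB of data), the limit for, say, 10 TiB of data works out like this; the 10 TiB figure is only an example:

```shell
# Rule of thumb from above: 4 GiB of ARC per 1 TiB of data.
data_tib=10
arc_gib=$(( data_tib * 4 ))
arc_bytes=$(( arc_gib * 1024 * 1024 * 1024 ))
echo "suggested zfs_arc_max: ${arc_bytes} bytes (${arc_gib} GiB)"
# This is the line that would go into /etc/modprobe.d/zfs.conf:
echo "options zfs zfs_arc_max=${arc_bytes}"
```

For 10 TiB of data that yields 40 GiB, i.e. `zfs_arc_max=42949672960`, which matches the value used further down in this thread.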
You could also add a bigger L2ARC disk, and you would benefit from a SLOG (dedicated ZIL) disk.

> L2 cached evictions: 16.2 GiB
> L2 eligible evictions: 12.0 TiB
options zfs zfs_arc_max=42949672960
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_min_prefetch_ms=12000
options zfs zfs_arc_min_prescient_prefetch_ms=10000
options zfs zfs_dirty_data_max_max=17179869184
options zfs zfs_dirty_data_max=8589934592
echo 67108864 > /sys/module/zfs/parameters/l2arc_write_max

That value is 64 MiB:

~$ echo $[ 64 * 1024 * 1024 ]
67108864
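The other byte values in the `options zfs` lines above decode the same way. A quick sketch (pure arithmetic, nothing system-specific):

```shell
# Decode the tunables above into human-readable sizes.
GiB=$(( 1024 * 1024 * 1024 ))
MiB=$(( 1024 * 1024 ))
echo "zfs_arc_max            = $(( 42949672960 / GiB )) GiB"   # 40 GiB
echo "zfs_arc_min            = $((  4294967296 / GiB )) GiB"   #  4 GiB
echo "zfs_dirty_data_max_max = $(( 17179869184 / GiB )) GiB"   # 16 GiB
echo "zfs_dirty_data_max     = $((  8589934592 / GiB )) GiB"   #  8 GiB
echo "l2arc_write_max        = $((    67108864 / MiB )) MiB"   # 64 MiB
```

To make the settings persistent across reboots, the `options zfs ...` lines belong in `/etc/modprobe.d/zfs.conf`; on Proxmox you then refresh the initramfs with `update-initramfs -u` so the limits apply at boot, whereas writes to `/sys/module/zfs/parameters/` take effect immediately but are lost on reboot.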
Four steps down, one to the left.

> Thanks for your help! And regarding the writes, what are your hints?