So here are the last 150 lines from /var/log/syslog: (see link)
(Apparently I can't post a comment with more than 16384 characters here, and there was no option to upload the text file with the last 150 lines directly, so pastebin.com it is.)
Here are my hardware specs:
Dual Xeon E5-2697A v4
Supermicro X10DRi-T4+ motherboard
256 GB DDR4-2400 ECC Registered RAM
16x HGST 10 TB + 8x HGST 6 TB for bulk storage (three raidz2 vdevs in one ZFS pool)
8x HGST 6 TB for VM disk storage (also raidz2)
4x HGST 3 TB in RAID6, handled by a Broadcom MegaRAID 12 Gbps SAS hardware RAID controller, for the Proxmox 7.3-3 OS
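To spell out the pool layout a bit more explicitly (the 8-wide split of the 10 TB drives below is just my shorthand for summarising it; the key point is three raidz2 vdevs in a single pool):
Code:
# bulk storage: one ZFS pool, three raidz2 vdevs, 24 drives total
#   raidz2 vdev 1: 8x HGST 10 TB
#   raidz2 vdev 2: 8x HGST 10 TB
#   raidz2 vdev 3: 8x HGST 6 TB
# VM storage:   separate pool, one raidz2 vdev of 8x HGST 6 TB
# Proxmox OS:   4x HGST 3 TB in hardware RAID6 (not ZFS)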
Here is the output of free -g:
Code:
root@pve:/var/log# free -g
               total        used        free      shared  buff/cache   available
Mem:             251          93         125          31          32         124
Swap:              7           7           0
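Just to show my working on why I think there is plenty of headroom, here is how I read those numbers (my understanding of the free columns, so please correct me if I am misreading them):
Code:
# rounded to GiB, from the output above:
#   used + free + buff/cache  =  93 + 125 + 32  =  250  ~  251 total
#   'available' (124)         ~  free, plus whatever cache the kernel thinks it
#                                can reclaim, minus a small reserve (most of the
#                                buff/cache here is 'shared', so little of it is
#                                reclaimable)
# i.e. userspace should be able to allocate well over 100 GiB before the kernel
# is genuinely out of memory.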
And here is the output of ZFS arc_summary:
Code:
root@pve:/var/log# arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Mon Apr 24 23:44:24 2023
Linux 5.15.74-1-pve 2.1.6-pve1
Machine: pve (x86_64) 2.1.6-pve1
ARC status: HEALTHY
Memory throttle count: 0
ARC size (current): 9.1 % 11.4 GiB
Target size (adaptive): 9.2 % 11.5 GiB
Min size (hard limit): 6.2 % 7.9 GiB
Max size (high water): 16:1 125.9 GiB
Most Frequently Used (MFU) cache size: 44.5 % 4.9 GiB
Most Recently Used (MRU) cache size: 55.5 % 6.2 GiB
Metadata cache size (hard limit): 75.0 % 94.4 GiB
Metadata cache size (current): 0.9 % 875.0 MiB
Dnode cache size (hard limit): 10.0 % 9.4 GiB
Dnode cache size (current): 1.9 % 181.9 MiB
ARC hash breakdown:
Elements max: 1.6M
Elements current: 13.7 % 220.4k
Collisions: 7.6M
Chain max: 4
Chains: 778
ARC misc:
Deleted: 242.7M
Mutex misses: 28.8M
Eviction skips: 2.6G
Eviction skips due to L2 writes: 0
L2 cached evictions: 0 Bytes
L2 eligible evictions: 24.2 TiB
L2 eligible MFU evictions: 26.3 % 6.4 TiB
L2 eligible MRU evictions: 73.7 % 17.8 TiB
L2 ineligible evictions: 6.1 TiB
ARC total accesses (hits + misses): 2.8G
Cache hit ratio: 89.4 % 2.5G
Cache miss ratio: 10.6 % 292.8M
Actual hit ratio (MFU + MRU hits): 88.4 % 2.4G
Data demand efficiency: 69.6 % 487.5M
Data prefetch efficiency: 4.4 % 89.6M
Cache hits by cache type:
Most frequently used (MFU): 80.3 % 2.0G
Most recently used (MRU): 18.7 % 460.2M
Most frequently used (MFU) ghost: 0.7 % 17.1M
Most recently used (MRU) ghost: 1.8 % 44.6M
Cache hits by data type:
Demand data: 13.8 % 339.5M
Demand prefetch data: 0.2 % 4.0M
Demand metadata: 84.2 % 2.1G
Demand prefetch metadata: 1.9 % 46.5M
Cache misses by data type:
Demand data: 50.5 % 148.0M
Demand prefetch data: 29.3 % 85.7M
Demand metadata: 7.4 % 21.6M
Demand prefetch metadata: 12.8 % 37.5M
DMU prefetch efficiency: 359.8M
Hit ratio: 19.7 % 70.9M
Miss ratio: 80.3 % 288.8M
L2ARC not detected, skipping section
Solaris Porting Layer (SPL):
spl_hostid 0
spl_hostid_path /etc/hostid
spl_kmem_alloc_max 1048576
spl_kmem_alloc_warn 65536
spl_kmem_cache_kmem_threads 4
spl_kmem_cache_magazine_size 0
spl_kmem_cache_max_size 32
spl_kmem_cache_obj_per_slab 8
spl_kmem_cache_reclaim 0
spl_kmem_cache_slab_limit 16384
spl_max_show_tasks 512
spl_panic_halt 0
spl_schedule_hrtimeout_slack_us 0
spl_taskq_kick 0
spl_taskq_thread_bind 0
spl_taskq_thread_dynamic 1
spl_taskq_thread_priority 1
spl_taskq_thread_sequential 4
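For completeness: I have not set any ARC limits myself, so the min/max sizes above are the defaults. My understanding is that if I wanted to cap the ARC it would be something like the following in /etc/modprobe.d/zfs.conf (the 64 GiB figure is only an example, not something I am actually running):
Code:
# /etc/modprobe.d/zfs.conf -- example only, not currently applied on this system
# cap the ZFS ARC at 64 GiB (the value is in bytes)
options zfs zfs_arc_max=68719476736

# as far as I know the same limit can also be applied at runtime:
# echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max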
I am not sure why the system appears to be invoking the oom-killer when I have plenty of free RAM available. Any help in this regard would be greatly appreciated.
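If the pastebin is missing anything useful, I can also pull the full kernel OOM report; I assume something along these lines is what you would want to see:
Code:
# full oom-killer report from the kernel ring buffer, with readable timestamps
dmesg -T | grep -i -B 5 -A 40 'oom-killer'

# or from the journal / rotated syslogs
journalctl -k | grep -i -A 40 'out of memory'
zgrep -i -A 40 'oom-killer' /var/log/syslog*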
Thank you.