Hi,
The ZFS ARC grows without limit after updating from PVE 7.4 to PVE 8.1 while seeding torrents.
My hardware:
8 GB RAM
NVMe SSD for the hypervisor and VMs
2 TB USB HDD with ZFS for data and torrents
I have 1 VM (Home Assistant, 4 GB RAM) and 1 LXC (NAS: Transmission, Nextcloud, Samba; 2 GB RAM).
There were no problems on Proxmox 7.4.
Graph:
Blue: ARC current size
Purple: ARC target max size
Logs:
Code:
cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648
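(For reference, the value in my zfs.conf is the limit in bytes, exactly 2 GiB; a quick sanity check of the arithmetic:)

```python
# zfs_arc_max takes a byte count; 2147483648 is exactly 2 GiB.
gib = 1024 ** 3
print(2 * gib)  # 2147483648
```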
arcstat 5
    time   read  ddread  ddh%  dmread  dmh%  pread  ph%  size     c  avail
11:08:42      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:08:47      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:08:52      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:08:57      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:09:02      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:09:07      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:09:12      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:09:17      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
11:09:22     29       0     0      29    79      0    0  2.0G  2.0G   3.3G
11:09:27      0       0     0       0     0      0    0  2.0G  2.0G   3.3G
-----------Start torrent seeding-------------------------------------------
11:09:52   1.4K     105    98     869    52    440    0  2.0G  2.0G   3.3G
11:09:57      4       0     0       3   100      1    0  2.0G  2.0G   3.3G
11:10:02      9       2   100       6   100      1    0  2.0G  2.0G   3.3G
11:10:07     10       3   100       6   100      1    0  2.0G  2.0G   3.3G
11:10:12   3.4K     286    97    2.0K    26   1.1K    0  2.0G  2.0G   3.3G
11:10:17    755      75    96     455     9    223    1  2.0G  2.0G   3.3G
---------------------------------------------------------------------------
11:54:06    954     170    92     525    20    257    0  6.3G  595M    17M
11:54:11   1.6K     225    93     935    11    450    0  6.1G  595M    67M
11:54:16   2.3K     269    93    1.3K     6    645    0  6.2G  593M   -28M
11:54:21   1.7K     260    94     955    13    471    1  6.1G  591M   149M
11:54:26    954     193    94     507    57    249    0  6.0G  729M   281M
11:54:31   1.0K     245    94     527    20    241    0  6.0G  1.3G   262M
11:54:36   2.8K     212    94    1.7K     5    845    0  6.2G  1.9G    45M
11:54:41   2.5K     330    93    1.5K    28    727    2  6.2G  1.8G    26M
11:54:46   2.3K     318    92    1.4K     9    647    0  6.4G  1.6G   -16M
11:54:51   1.9K     309    92    1.0K    19    512    2  6.2G  1.5G    59M
11:54:56   1.3K     221    94     819    23    276    3  6.1G  1.7G    92M
11:55:01   2.2K     287    91    1.2K    29    668    0  6.0G  1.7G   130M
11:55:09   2.8K     228    93    1.8K    18    700    1  6.3G  1.6G   -10M
11:55:14    383      63    88      76    96    242    0  6.3G  1.5G    55M
11:55:19   1.6K     202    92     972    14    456    0  6.3G  1.4G    29M
11:55:24   3.0K     211    91    1.9K    10    911    1  6.2G  1.3G    16M
11:55:29   1.5K     276    92     906    19    271    1  6.2G  1.3G    18M
11:55:34   1.4K     281    93     672    22    483    1  6.0G  1.3G   182M
11:55:39   2.1K     159    91    1.3K     6    641    0  6.2G  2.0G   -53M
11:55:44   1.2K     298    93     617    31    303    3  6.1G  1.9G   119M
11:55:49    352     157    92     135    98     57   17  6.0G  2.0G   215M
11:55:54    777     339    94     379    43     57    0  5.8G  2.0G   347M
11:55:59   3.0K     118    93    2.0K     7    893    1  5.9G  2.0G   197M
11:56:04   3.1K     184    92    1.8K    31   1.1K    1  6.0G  2.0G    51M
11:56:09   2.2K     118    93    1.4K    10    693    1  6.2G  1.8G   -12M
11:56:14   2.2K     197    94    1.4K     8    658    0  6.3G  1.7G    17M
11:56:19   3.1K     153    94    2.0K    38    888    1  6.3G  1.7G  -3.0M
11:56:24   3.4K     248    93    2.0K    19   1.1K    1  6.3G  1.5G    20M
Connection closing...Socket close.
-------OOM kill...---------------------------------------------------------
arc_summary -s arc
------------------------------------------------------------------------
ZFS Subsystem Report Thu Mar 28 11:25:55 2024
Linux 6.5.13-3-pve 2.2.3-pve1
Machine: pve (x86_64) 2.2.3-pve1
ARC status: HEALTHY
Memory throttle count: 0
ARC size (current): 217.2 % 4.3 GiB
Target size (adaptive): 100.0 % 2.0 GiB
Min size (hard limit): 11.9 % 243.4 MiB
Max size (high water): 8:1 2.0 GiB
Anonymous data size: 0.0 % 0 Bytes
Anonymous metadata size: 0.0 % 0 Bytes
MFU data target: 70.8 % 3.1 GiB
MFU data size: 5.8 % 258.6 MiB
MFU ghost data size: 954.5 MiB
MFU metadata target: 5.4 % 240.0 MiB
MFU metadata size: 0.9 % 40.4 MiB
MFU ghost metadata size: 125.0 MiB
MRU data target: 18.5 % 822.7 MiB
MRU data size: 93.0 % 4.0 GiB
MRU ghost data size: 154.2 MiB
MRU metadata target: 5.2 % 230.8 MiB
MRU metadata size: 0.2 % 9.7 MiB
MRU ghost metadata size: 20.4 MiB
Uncached data size: 0.0 % 0 Bytes
Uncached metadata size: 0.0 % 0 Bytes
Bonus size: < 0.1 % 146.9 KiB
Dnode cache target: 10.0 % 204.8 MiB
Dnode cache size: 0.4 % 752.4 KiB
Dbuf size: < 0.1 % 430.3 KiB
Header size: 0.2 % 10.5 MiB
L2 header size: 0.0 % 0 Bytes
ABD chunk waste size: < 0.1 % 24.0 KiB
ARC hash breakdown:
Elements max: 66.2k
Elements current: 69.2 % 45.8k
Collisions: 50.6k
Chain max: 3
Chains: 1.0k
ARC misc:
Deleted: 1.6M
Mutex misses: 6.2k
Eviction skips: 2.9M
Eviction skips due to L2 writes: 0
L2 cached evictions: 0 Bytes
L2 eligible evictions: 29.8 GiB
L2 eligible MFU evictions: 42.0 % 12.5 GiB
L2 eligible MRU evictions: 58.0 % 17.3 GiB
L2 ineligible evictions: 167.3 GiB
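If it helps anyone poke at this, here is a small sketch that compares the ARC's current size against its target ("c") straight from the kstats. The /proc/spl/kstat/zfs/arcstats layout (two header lines, then "name type data" rows) is what my box shows; the sample values below are made up to match the summary above.

```python
# Sketch: report how far the ARC's current size overshoots its target.
# On a real host: text = open("/proc/spl/kstat/zfs/arcstats").read()

def parse_arcstats(text):
    """Return {name: value} for the integer kstat rows."""
    stats = {}
    for line in text.splitlines()[2:]:          # skip the two kstat header lines
        parts = line.split()
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    return stats

# Hypothetical sample, roughly matching the arc_summary output above
# (size ~4.3 GiB against a 2 GiB target).
sample = """\
13 1 0x01 123 33456 63266464046 577195201624
name                            type data
size                            4    4617089843
c                               4    2147483648
c_max                           4    2147483648
"""

s = parse_arcstats(sample)
overshoot = s["size"] - s["c"]
print(f"size={s['size']}  target={s['c']}  overshoot={overshoot}")
```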
Any ideas?
EDIT:
The problem occurs only under intensive random reads of highly fragmented data (torrent seeding, in my case).
Other workloads, such as copying files within the system, sequentially copying the same highly fragmented data (e.g. the torrents), or serving it over Samba, cause no problems.
Moreover, if I copy the downloaded torrents somewhere else and then move them back (eliminating the fragmentation), seeding them no longer causes the ARC overflow.
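For anyone who wants to mimic the access pattern without a torrent client, here is a rough sketch: many small reads at random offsets, like a seeder answering piece requests. The file name and sizes are made up, and writing a fresh file like this won't recreate the on-disk fragmentation, so it may well not trigger the bug by itself.

```python
# Rough sketch of the read pattern that triggers the ARC growth for me:
# lots of small reads at random offsets across a large file.
import os
import random

CHUNK = 16 * 1024            # typical size of a torrent piece sub-request
path = "testfile.bin"        # hypothetical test file on the ZFS dataset

# Build a small demo file so the sketch is self-contained
# (use an existing large, fragmented file for a real test).
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))

size = os.path.getsize(path)
random.seed(0)
read_total = 0
with open(path, "rb") as f:
    for _ in range(256):
        f.seek(random.randrange(0, size - CHUNK))
        read_total += len(f.read(CHUNK))

print(read_total)            # total bytes read across 256 random 16 KiB reads
os.remove(path)
```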