ZFS Memory issues (leak?)

Phinitris

Hello,
we have been facing serious issues with Proxmox since upgrading to PVE 5.1.
The host climbs above 95% memory usage and then swaps heavily, which causes ZFS to hang with blocked tasks ("task x blocked for more than 120 seconds"). This happens on a daily basis, even though the VMs themselves do not use much RAM and the ZFS ARC is limited to 3 GB (for testing).
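
For reference, an ARC limit like this is typically set on Proxmox/ZFS-on-Linux via the zfs_arc_max module parameter; a minimal sketch (the value below is just the 3 GB mentioned above):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 3 GiB (3 * 1024^3 bytes)
options zfs zfs_arc_max=3221225472
# rebuild the initramfs so the limit also applies while rpool is imported at early boot
update-initramfs -u
# after a reboot, verify the active value:
cat /sys/module/zfs/parameters/zfs_arc_max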

[Screenshots: Screenshot at Nov 03 23-32-33.png, Screenshot at Nov 03 23-33-35.png]

Code:
              total        used        free      shared  buff/cache   available
Mem:          96640       82148       14298          56         192       13888
Swap:          8191         400        7791

Code:
MemTotal:       98959416 kB
MemFree:        14505500 kB
MemAvailable:   14084660 kB
Buffers:               0 kB
Cached:           124884 kB
SwapCached:         3128 kB
Active:         18564224 kB
Inactive:        1284172 kB
Active(anon):   18530840 kB
Inactive(anon):  1265452 kB
Active(file):      33384 kB
Inactive(file):    18720 kB
Unevictable:      228432 kB
Mlocked:          228432 kB
SwapTotal:       8388604 kB
SwapFree:        7978492 kB
Dirty:                28 kB
Writeback:            24 kB
AnonPages:      19950472 kB
Mapped:            73668 kB
Shmem:             62212 kB
Slab:            2813728 kB
SReclaimable:      76060 kB
SUnreclaim:      2737668 kB
KernelStack:       19912 kB
PageTables:        84644 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    57868312 kB
Committed_AS:   68509212 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:   1759232 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      478892 kB
DirectMap2M:    19408896 kB
DirectMap1G:    80740352 kB

Code:
43624 root      20   0 2995380 2.030g   7996 S  67.0  2.2  16:12.18 kvm
  164 root      25   5       0      0      0 S  52.9  0.0 153:15.23 ksmd
64317 root      20   0 5136908 2.143g   8016 S  36.9  2.3  36:07.25 kvm
56653 root      20   0 3029508 1.080g   4764 S  30.7  1.1 348:01.25 kvm
48314 root      20   0 7276344 1.956g   6324 S  24.8  2.1 190:31.68 kvm
52790 root      20   0 2985336 970128   7912 S  16.3  1.0   3:02.96 kvm
48118 root      20   0 1882308 432452   7880 S   9.8  0.4   1:32.24 kvm
 6287 root      20   0 3036688 1.883g   4832 S   9.2  2.0 150:56.06 kvm
59919 root      20   0 5236348 3.904g   5124 S   7.5  4.1 145:19.38 kvm
 9324 root      20   0 3114608 1.081g   4984 S   5.6  1.1  62:30.45 kvm
 5872 root      20   0  316440  71324   7848 S   5.2  0.1   0:48.35 pvestatd
11425 root      20   0 1886420 363444   4932 S   4.9  0.4  48:06.05 kvm
47694 root      20   0 4247644 283364   4840 S   4.6  0.3  59:23.60 kvm
48049 root      20   0 5214828 1.733g   4892 S   4.6  1.8  67:22.25 kvm
56808 root      20   0 5203572 1.646g   7992 S   4.6  1.7   2:07.44 kvm
29439 root      20   0 2968040 1.219g   7984 S   3.3  1.3   2:22.91 kvm
54045 root      20   0 1941100 250880   4740 S   2.9  0.3  42:57.13 kvm
  441 root      20   0 2965740 773216   7848 S   2.0  0.8   0:50.43 kvm
47339 root      20   0 1871000 232064   4936 S   2.0  0.2  20:22.70 kvm
    2 root      20   0       0      0      0 S   1.3  0.0  13:55.21 kthreadd
  431 root      20   0       0      0      0 S   1.3  0.0   2:24.06 arc_reclaim
34552 root      10 -10 1877908 218236   8232 S   1.3  0.2  15:23.28 ovs-vswitchd
39637 root      20   0   45844   4192   2676 R   1.3  0.0   0:00.15 top
48454 root      20   0       0      0      0 S   1.0  0.0  15:24.69 vhost-48314
53439 root      20   0       0      0      0 S   1.0  0.0   0:06.50 vhost-52790
  418 root       0 -20       0      0      0 S   0.7  0.0   6:34.05 spl_dynamic_tas
 3919 root      20   0  317724  71240   6504 S   0.7  0.1   0:06.16 pve-firewall
17078 root      20   0 1881264 203200   7860 S   0.7  0.2   0:17.57 kvm
24515 root      20   0 3036688 465872   7956 S   0.7  0.5   1:09.30 kvm
46240 root      20   0 1892572 293304   4760 S   0.7  0.3   3:44.11 kvm
47079 www-data  20   0  545876 106096  10824 S   0.7  0.1   0:02.08 pveproxy worker
47686 www-data  20   0  545880 106716  11292 S   0.7  0.1   0:03.33 pveproxy worker
47742 root      20   0       0      0      0 S   0.7  0.0   8:35.77 vhost-47694
50676 root      20   0 1914096 1.016g   7912 S   0.7  1.1   1:07.20 kvm
    8 root      20   0       0      0      0 S   0.3  0.0   2:07.07 ksoftirqd/0
   41 root      20   0       0      0      0 S   0.3  0.0   1:10.84 ksoftirqd/5
  419 root       0 -20       0      0      0 S   0.3  0.0   1:08.40 spl_kmem_cache
  425 root       0 -20       0      0      0 S   0.3  0.0  10:44.95 zvol
 6580 root      20   0       0      0      0 S   0.3  0.0   0:00.29 kworker/22:1
 8545 root       1 -19       0      0      0 S   0.3  0.0   0:25.06 z_wr_iss
 8549 root       1 -19       0      0      0 S   0.3  0.0   0:24.78 z_wr_iss
 8560 root       1 -19       0      0      0 S   0.3  0.0   0:24.91 z_wr_iss
 8566 root       0 -20       0      0      0 S   0.3  0.0   0:30.17 z_wr_int_2
12531 root       0 -20       0      0      0 S   0.3  0.0   0:34.06 z_null_int
12532 root       0 -20       0      0      0 S   0.3  0.0   0:00.32 z_rd_iss
12533 root       0 -20       0      0      0 S   0.3  0.0   0:19.90 z_rd_int_0
12534 root       0 -20       0      0      0 S   0.3  0.0   0:20.24 z_rd_int_1
12536 root       0 -20       0      0      0 S   0.3  0.0   0:19.59 z_rd_int_3
12538 root       0 -20       0      0      0 S   0.3  0.0   0:19.47 z_rd_int_5

Code:
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hddpool  2.72T   428G  2.30T         -    15%    15%  1.00x  ONLINE  -
  mirror   928G   143G   785G         -    15%    15%
    ata-HGST_HTS721010A9E630_JR1004D31UZBAM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1004D31VDMKM      -      -      -         -      -      -
  mirror   928G   143G   785G         -    16%    15%
    ata-HGST_HTS721010A9E630_JR1020D31401XM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR10044M2LE1MM      -      -      -         -      -      -
  mirror   928G   142G   786G         -    16%    15%
    ata-HGST_HTS721010A9E630_JR1020D314GPDM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1020D3146ALM      -      -      -         -      -      -
log      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164230800081-part1  4.97G  3.39M  4.97G         -    63%     0%
cache      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164230800081-part2   228G  7.29G   221G         -     0%     3%
rpool   136G  17.5G   119G         -    45%    12%  1.00x  ONLINE  -
  mirror   136G  17.5G   119G         -    45%    12%
    sdi2      -      -      -         -      -      -
    sdj2      -      -      -         -      -      -
ssdpool   464G  46.9G   417G         -    28%    10%  1.00x  ONLINE  -
  mirror   464G  46.9G   417G         -    28%    10%
    ata-WDC_WDS500G1B0A-00H9H0_165161800154      -      -      -         -      -      -
    ata-WDC_WDS500G1B0A-00H9H0_165161800854      -      -      -         -      -      -
webhddpool  1.81T   409G  1.41T         -    29%    22%  1.00x  ONLINE  -
  mirror   928G   203G   725G         -    30%    21%
    ata-HGST_HTS721010A9E630_JR1020D314GAWM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1020D314RKGE      -      -      -         -      -      -
  mirror   928G   206G   722G         -    29%    22%
    ata-HGST_HTS721010A9E630_JR1020D310X1XN      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1020D315HP4E      -      -      -         -      -      -
log      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164304A00904-part1  4.97G  1.38M  4.97G         -    43%     0%
cache      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164304A00904-part2   228G   697M   227G         -     0%     0%
webssdpool   464G  36.7G   427G         -    33%     7%  1.00x  ONLINE  -
  mirror   464G  36.7G   427G         -    33%     7%
    ata-WDC_WDS500G1B0A-00H9H0_164501A01151      -      -      -         -      -      -
    ata-WDC_WDS500G1B0A-00H9H0_164401A02B23      -      -      -         -      -      -

ZFS Parameters: https://pastebin.com/1QuPwV9H

Memory usage is currently at 85% and still climbing. Maybe there is a memory leak in ZFS, as I cannot see correspondingly high memory usage in top.
 
Hi,

You restrict the ZFS ARC cache, but you also have an L2ARC, and that is not free.
Every block stored in the L2ARC needs a reference (header) kept in main memory, and that costs RAM.
Each L2ARC block needs about 440 bytes of memory, so with your 456 GB of L2ARC and an 8k block size that is roughly 57 million headers, i.e. on the order of 25 GB of memory.
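
A quick back-of-the-envelope check of that estimate (8k blocks and 440 bytes per header are the assumptions from above; the real per-header cost depends on the ZFS version and on the actual block sizes in the cache):
Code:
# headers kept in ARC ≈ (L2ARC capacity / block size) * per-block header size
echo "456 * 1024^3 / 8192 * 440 / 1024^3" | bc -l
# -> about 24.5, i.e. on the order of 25 GiB of ARC just for L2ARC headers once the cache fills up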
 
Hello @wolfgang,
thanks for clearing that up - that makes sense.
However, why does ZFS then report an L2ARC size of 10 GB with only 100 MB of headers?
Is the RAM not freed after I destroyed the cache device and re-added it?
 
Please send the raw stats and not the parsed ones, because otherwise you can't be sure there is no conversion error.

Code:
cat /proc/spl/kstat/zfs/arcstats
 
ARC Stats:
Code:
13 1 0x01 92 4416 5674521768 291213078098343
name                            type data
hits                            4    142724564
misses                          4    134614645
demand_data_hits                4    116295284
demand_data_misses              4    108563068
demand_metadata_hits            4    10520333
demand_metadata_misses          4    3921880
prefetch_data_hits              4    12558009
prefetch_data_misses            4    20980352
prefetch_metadata_hits          4    3350938
prefetch_metadata_misses        4    1149345
mru_hits                        4    109390602
mru_ghost_hits                  4    13135922
mfu_hits                        4    22098489
mfu_ghost_hits                  4    1868796
deleted                         4    124641260
mutex_miss                      4    13728
evict_skip                      4    7620982
evict_not_enough                4    43668
evict_l2_cached                 4    595766941184
evict_l2_eligible               4    566560142336
evict_l2_ineligible             4    61698627584
evict_l2_skip                   4    293
hash_elements                   4    15602660
hash_elements_max               4    15603295
hash_collisions                 4    198538230
hash_chains                     4    4007403
hash_chain_max                  4    9
p                               4    1771792542
c                               4    3221225472
c_min                           4    3166701312
c_max                           4    3221225472
size                            4    3166586864
compressed_size                 4    1356926976
uncompressed_size               4    2561638400
overhead_size                   4    162468864
hdr_size                        4    179436976
data_size                       4    1369269248
metadata_size                   4    150126592
dbuf_size                       4    7338560
dnode_size                      4    11134656
bonus_size                      4    3536960
anon_size                       4    13054464
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    1487015936
mru_evictable_data              4    1239550464
mru_evictable_metadata          4    46236160
mru_ghost_size                  4    1722805248
mru_ghost_evictable_data        4    100417536
mru_ghost_evictable_metadata    4    1622387712
mfu_size                        4    19325440
mfu_evictable_data              4    7088640
mfu_evictable_metadata          4    290816
mfu_ghost_size                  4    1390900736
mfu_ghost_evictable_data        4    991570432
mfu_ghost_evictable_metadata    4    399330304
l2_hits                         4    35108759
l2_misses                       4    99497743
l2_feeds                        4    303087
l2_rw_clash                     4    0
l2_read_bytes                   4    200322809344
l2_write_bytes                  4    319378724352
l2_writes_sent                  4    281935
l2_writes_done                  4    281935
l2_writes_error                 4    0
l2_writes_lock_retry            4    267
l2_evict_lock_retry             4    1
l2_evict_reading                4    17
l2_evict_l1cached               4    132527
l2_free_on_write                4    18036
l2_abort_lowmem                 4    19
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    126999335424
l2_asize                        4    107111065088
l2_hdr_size                     4    1445743872
memory_throttle_count           4    55
memory_direct_count             4    29373
memory_indirect_count           4    4477
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    1797317616
arc_meta_limit                  4    2415919104
arc_dnode_limit                 4    241591910
arc_meta_max                    4    2336638312
arc_meta_min                    4    16777216
sync_wait_for_async             4    618391
demand_hit_predictive_prefetch  4    11565161
arc_need_free                   4    0
arc_sys_free                    4    1583350656

I believe the important values are:
l2_asize: 100 GB
l2_hdr_size: 1.35 GB

Even if 100 GB of data is stored in the L2ARC, why does it report only 1.35 GB of headers held in the ARC (RAM)?
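
For reference, a minimal way to pull exactly these counters out of the raw arcstats and convert them to GiB (plain awk over the file shown above, nothing assumed beyond the field layout):
Code:
awk '/^l2_(size|asize|hdr_size) / {printf "%-12s %8.2f GiB\n", $1, $3/1024/1024/1024}' \
    /proc/spl/kstat/zfs/arcstats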

Current RAM usage:
Code:
              total        used        free      shared  buff/cache   available
Mem:          96640       93493        2935          66         211        2529
Swap:          8191        1134        7057

VM Stats:
[Screenshot: Screenshot at Nov 06 15-21-41.png]
 
Hello @wolfgang,
I have now upgraded the host to 144 GB of RAM, but the issue is still present.
I even limited the L2ARC to 2x 50 GB for our pools, and it still uses up all the available RAM within hours.
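
One way to see where the memory goes over time would be to log ARC size, unreclaimable slab and free memory periodically; a rough sketch (the interval and log path are arbitrary choices):
Code:
# log ARC size, SUnreclaim and MemFree once a minute
while true; do
    date
    awk '$1 == "size" {printf "ARC size: %.2f GiB\n", $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats
    grep -E 'SUnreclaim|MemFree' /proc/meminfo
    sleep 60
done >> /root/mem-growth.log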

[Screenshot: Screenshot at Nov 12 12-20-10.png]

ZFS Stats:
Code:
13 1 0x01 95 4560 35972193653 69701583200712
name                            type data
hits                            4    179711600
misses                          4    84624945
demand_data_hits                4    150392724
demand_data_misses              4    68455502
demand_metadata_hits            4    15977372
demand_metadata_misses          4    1882060
prefetch_data_hits              4    8626448
prefetch_data_misses            4    12415182
prefetch_metadata_hits          4    4715056
prefetch_metadata_misses        4    1872201
mru_hits                        4    141028886
mru_ghost_hits                  4    21145847
mfu_hits                        4    30199498
mfu_ghost_hits                  4    1113265
deleted                         4    72724419
mutex_miss                      4    38460
evict_skip                      4    3554459
evict_not_enough                4    22972
evict_l2_cached                 4    161214776320
evict_l2_eligible               4    657898092544
evict_l2_ineligible             4    70898956288
evict_l2_skip                   4    127
hash_elements                   4    7287870
hash_elements_max               4    7357933
hash_collisions                 4    30110390
hash_chains                     4    667931
hash_chain_max                  4    6
p                               4    4670776775
c                               4    9664421224
c_min                           4    4752147584
c_max                           4    17179869184
size                            4    9617609264
compressed_size                 4    7407116800
uncompressed_size               4    10815884800
overhead_size                   4    1057216512
hdr_size                        4    581684528
data_size                       4    6194549760
metadata_size                   4    2269783552
dbuf_size                       4    18763488
dnode_size                      4    11012352
bonus_size                      4    3627200
anon_size                       4    1192395264
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    3452561920
mru_evictable_data              4    3212173824
mru_evictable_metadata          4    61816320
mru_ghost_size                  4    4811801600
mru_ghost_evictable_data        4    3665674752
mru_ghost_evictable_metadata    4    1146126848
mfu_size                        4    3819376128
mfu_evictable_data              4    1957639168
mfu_evictable_metadata          4    1790018048
mfu_ghost_size                  4    1902993408
mfu_ghost_evictable_data        4    1615118336
mfu_ghost_evictable_metadata    4    287875072
l2_hits                         4    5959320
l2_misses                       4    74553741
l2_feeds                        4    75546
l2_rw_clash                     4    0
l2_read_bytes                   4    31319743488
l2_write_bytes                  4    98742455808
l2_writes_sent                  4    57027
l2_writes_done                  4    57027
l2_writes_error                 4    0
l2_writes_lock_retry            4    182
l2_evict_lock_retry             4    81
l2_evict_reading                4    0
l2_evict_l1cached               4    46952
l2_free_on_write                4    11378
l2_abort_lowmem                 4    95
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    52275206144
l2_asize                        4    46716516864
l2_hdr_size                     4    538188384
memory_throttle_count           4    56796
memory_direct_count             4    298786
memory_indirect_count           4    78750
memory_all_bytes                4    152068722688
memory_free_bytes               4    5156622336
memory_available_bytes          3    2780549120
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    3423059504
arc_meta_limit                  4    12884901888
arc_dnode_limit                 4    1288490188
arc_meta_max                    4    3842737552
arc_meta_min                    4    16777216
sync_wait_for_async             4    1144764
demand_hit_predictive_prefetch  4    6008880
arc_need_free                   4    0
arc_sys_free                    4    2376073792

Pool list:
Code:
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hddpool  2.72T   363G  2.36T         -    17%    13%  1.00x  ONLINE  -
  mirror   928G   121G   807G         -    16%    13%
    ata-HGST_HTS721010A9E630_JR1004D31UZBAM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1004D31VDMKM      -      -      -         -      -      -
  mirror   928G   121G   807G         -    18%    13%
    ata-HGST_HTS721010A9E630_JR1020D31401XM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR10044M2LE1MM      -      -      -         -      -      -
  mirror   928G   121G   807G         -    18%    13%
    ata-HGST_HTS721010A9E630_JR1020D314GPDM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1020D3146ALM      -      -      -         -      -      -
log      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164230800081-part1  4.97G   384K  4.97G         -     0%     0%
cache      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164230800081-part2  50.0G   895M  49.1G         -     0%     1%
rpool   136G  17.8G   118G         -    55%    13%  1.00x  ONLINE  -
  mirror   136G  17.8G   118G         -    55%    13%
    sda2      -      -      -         -      -      -
    sdj2      -      -      -         -      -      -
ssdpool   464G   199G   265G         -    44%    42%  1.00x  ONLINE  -
  mirror   464G   199G   265G         -    44%    42%
    ata-WDC_WDS500G1B0A-00H9H0_165161800154      -      -      -         -      -      -
    ata-WDC_WDS500G1B0A-00H9H0_165161800854      -      -      -         -      -      -
webhddpool  1.81T   445G  1.38T         -    32%    23%  1.00x  ONLINE  -
  mirror   928G   220G   708G         -    32%    23%
    ata-HGST_HTS721010A9E630_JR1020D314GAWM      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1020D314RKGE      -      -      -         -      -      -
  mirror   928G   224G   704G         -    32%    24%
    ata-HGST_HTS721010A9E630_JR1020D310X1XN      -      -      -         -      -      -
    ata-HGST_HTS721010A9E630_JR1020D315HP4E      -      -      -         -      -      -
log      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164304A00904-part1  4.97G     3M  4.97G         -     0%     0%
cache      -      -      -         -      -      -
  ata-WDC_WDS250G1B0A-00H9H0_164304A00904-part2  50.0G  42.6G  7.37G         -     0%    85%
webssdpool   464G  1.62G   462G         -    33%     0%  1.00x  ONLINE  -
  mirror   464G  1.62G   462G         -    33%     0%
    ata-WDC_WDS500G1B0A-00H9H0_164501A01151      -      -      -         -      -      -
    ata-WDC_WDS500G1B0A-00H9H0_164401A02B23      -      -      -         -      -      -

Do you have any suggestion as to what is causing this? The VMs are using about 50 GB of RAM in total.
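
A quick way to cross-check that figure from the host is to sum the resident set size of all kvm processes (ps reports RSS in KiB):
Code:
ps -C kvm -o rss= | awk '{sum += $1} END {printf "total KVM RSS: %.1f GiB\n", sum/1024/1024}'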
 
I have now found out that it's probably not caused by ZFS. I'm currently investigating this and will update this thread as soon as I know more.
It seems to be related to networking.
 
Well, it seems that I/O is the issue. A Graphite VM that writes a lot of metrics triggered the problem with only about 10 MB/s and roughly 200 write IOPS.
After turning off that VM, RAM usage still grows, but only slightly (about 200 MB per hour), so the bug is somehow still there. I suspect a memory leak in ZFS or in the kernel itself.

I'll try to debug this further.
 
@wolfgang: the ZFS pool is already upgraded to ZFS 0.7. I guess it's not possible to go back to the 4.10 kernel, is it?
 
You can go back, but only the latest pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb supports ZFS 0.7.
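
For reference, going back would roughly look like this (only a sketch; the package name is the one from the post above, and the older kernel still has to be selected in GRUB before rebooting):
Code:
# install the older kernel package alongside the current one
dpkg -i pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb
# choose the 4.10.17-5-pve entry in the GRUB menu (or adjust GRUB_DEFAULT), reboot, then verify:
uname -r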
 
I have finally tracked the issue down to a specific VM.
The VM receives traffic and inspects it to find patterns of DDoS attacks. Its network interface is configured as virtio, attached to the bridge.
RAM usage increases steadily. However, if I change the network device to e1000, the issue does not occur. It looks like a memory leak in the virtio network device; the KVM process itself is not using much RAM (~0.5 GB), so we can rule out a user-space leak, which means the leak can only be in the kernel.

Do you have any idea how to debug this further?

I'm not sure whether this is a rare memory leak or whether it occurs with any VM that uses virtio as its network device. The VM receives about 500 Mbit/s and 200,000 packets per second.
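
One way to confirm a kernel-side leak is to watch whether unreclaimable slab memory keeps growing while the VM is running; a sketch (kmemleak is only usable if the kernel was built with CONFIG_DEBUG_KMEMLEAK, which the stock PVE kernel may not be):
Code:
# watch unreclaimable kernel memory and the largest slab caches
watch -n 60 'grep SUnreclaim /proc/meminfo; slabtop -o -s c | head -15'
# with a CONFIG_DEBUG_KMEMLEAK kernel one could additionally do:
# echo scan > /sys/kernel/debug/kmemleak && cat /sys/kernel/debug/kmemleak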
 
Can you reproduce it in a generic way?
I mean without this specific VM, on another node?
 
Well, indeed. It seems that I can reproduce it on another node.

  1. Create a bridge
    Code:
    auto vmbr1
    iface vmbr1 inet manual
      bridge_fd 0
      bridge_ports none
    Code:
    ifup vmbr1

  2. Create two VMs with the following configuration.
    Code:
    net0: virtio=<MAC>,bridge=vmbr1

  3. Install an operating system (I used CentOS 7) on both VMs
  4. Assign a local IP to each VM (like 172.16.1.1 and 172.16.1.2)
  5. Install dsniff on the first VM (yum install dsniff)
  6. Run macof (from the dsniff package) in the first VM.
    Code:
    macof -i eth0 -s 172.16.2.1 -d 172.16.2.2
  7. Set the bridge to promiscuous mode on the Proxmox host so that all frames are forwarded
    Code:
    ip link set vmbr1 promisc on
  8. Make yourself a coffee and watch the RAM usage going up :)
It seems that this leak only occurs when a large number of Ethernet frames with different MAC addresses is received by the VM, as I could not reproduce the issue with a 200,000 pps single-MAC flood. However, with about 8,000 pps of random MACs I was able to leak 200 MB of memory within 20 minutes.
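
One thing that might be worth watching during such a flood is whether the host-side bridge accumulates learned MAC entries (just a guess at a related symptom, not a confirmed cause):
Code:
# count the MAC addresses currently in the test bridge's forwarding database
bridge fdb show br vmbr1 | wc -l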

Just an assumption:
If the MAC address of each Ethernet frame is not freed, a leak like this should grow at roughly the following rate:
Code:
leak rate [MB/hour] ≈ 6 bytes * pps * 3600 / 1024 / 1024

Which means that with 50,000 pps (packets with different MAC pairs on my production host) I would leak about 1 GB of memory per hour, which is indeed what I observe.

For my test host that means a leak of about 41 MB of memory per hour, as it is currently generating 2,000 packets per second.
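
A quick check of those two numbers under the assumption above:
Code:
echo "6 * 50000 * 3600 / 1024 / 1024" | bc    # ~1030 MB/h at 50,000 pps
echo "6 * 2000 * 3600 / 1024 / 1024" | bc     # ~41 MB/h at 2,000 pps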

I will let it run through the night and will report the memory usage tomorrow.
 
Hi,

I can't reproduce it here with Debian, but I will retry with CentOS 7 as you did.
 
I also cannot reproduce it with CentOS 7.
We use an Intel i350 NIC here.
Which network cards do you use?
 
Hello,
I have the same issue with ZFS on 5.1. I have gone back to pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb and now it seems to work properly: [Screenshot: 20171204-pve00-kashtan-memory.png]
 
