[SOLVED] Proxmox is using swap with lots of RAM available

batijuank

Member
Nov 16, 2017
Hello, my name is Juan Carlos and this is my first time writing to this mailing list. I'm testing Proxmox on an ASUS P6T with 24 GB RAM. Currently I have 4 KVM guests using up to 12 GB, which means I have 12 GB free for ZFS, yet every time I do a VM backup, an OS installation or any other I/O-heavy workload, my node ends up using swap even when it has 8 GB or more of available RAM. I read that with ZFS, nodes should avoid using swap. Moreover, since I set vm.swappiness to zero, I shouldn't see any swapping at all unless my node runs out of RAM. Could someone explain what's happening and how I can fix it?

These are the apps that get swapped the most: pve-ha-crm, pve-ha-lrm, pvedaemon and workers, spiceproxy and workers, pveproxy and workers, pve-firewall and pvestatd.
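One way to check which processes are sitting in swap (this is how the list above can be produced) is to read the kernel's per-process VmSwap counter from /proc; a minimal sketch, no extra tools assumed:

```shell
# List the top swap users by reading VmSwap from each process's
# /proc/<pid>/status (value is in kB; processes with no swap are skipped).
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {name=$2} /^VmSwap:/ {if ($2 > 0) print $2, "kB", name}' "$f" 2>/dev/null
done | sort -rn | head
```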

This is my node's current setup:

Code:
# cat /proc/sys/vm/swappiness
0

Code:
# pveversion -v 
proxmox-ve: 5.1-41 (running kernel: 4.13.13-6-pve) 
pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4) 
pve-kernel-4.13.13-6-pve: 4.13.13-41 
pve-kernel-4.13.13-2-pve: 4.13.13-33 
corosync: 2.4.2-pve3 
criu: 2.11.1-1~bpo90 
glusterfs-client: 3.8.8-1 
ksm-control-daemon: 1.2-2 
libjs-extjs: 6.0.1-2 
libpve-access-control: 5.0-8 
libpve-common-perl: 5.0-28 
libpve-guest-common-perl: 2.0-14 
libpve-http-server-perl: 2.0-8 
libpve-storage-perl: 5.0-17 
libqb0: 1.0.1-1 
lvm2: 2.02.168-pve6 
lxc-pve: 2.1.1-3 
lxcfs: 2.0.8-2 
novnc-pve: 0.6-4 
proxmox-widget-toolkit: 1.0-11 
pve-cluster: 5.0-20 
pve-container: 2.0-19 
pve-docs: 5.1-16 
pve-firewall: 3.0-5 
pve-firmware: 2.0-3 
pve-ha-manager: 2.0-5 
pve-i18n: 1.0-4 
pve-libspice-server1: 0.12.8-3 
pve-qemu-kvm: 2.9.1-9 
pve-xtermjs: 1.0-2 
qemu-server: 5.0-22 
smartmontools: 6.5+svn4324-1 
spiceterm: 3.0-5 
vncterm: 1.5-3 
zfsutils-linux: 0.7.6-pve1~bpo9

Code:
# arc_summary 

------------------------------------------------------------------------ 
ZFS Subsystem Report                Wed Mar 07 15:22:25 2018 
ARC Summary: (HEALTHY) 
    Memory Throttle Count:            0 

ARC Misc: 
    Deleted:                27 
    Mutex Misses:                0 
    Evict Skips:                1 

ARC Size:                1.57%    188.72    MiB 
    Target Size: (Adaptive)        100.00%    11.76    GiB 
    Min Size (Hard Limit):        6.25%    752.93    MiB 
    Max Size (High Water):        16:1    11.76    GiB 

ARC Size Breakdown: 
    Recently Used Cache Size:    45.62%    83.14    MiB 
    Frequently Used Cache Size:    54.38%    99.09    MiB 

ARC Hash Breakdown: 
    Elements Max:                3.28k 
    Elements Current:        99.97%    3.27k 
    Collisions:                2 
    Chain Max:                1 
    Chains:                    2 

ARC Total accesses:                    106.18k 
    Cache Hit Ratio:        86.80%    92.17k 
    Cache Miss Ratio:        13.20%    14.01k 
    Actual Hit Ratio:        86.35%    91.68k 

    Data Demand Efficiency:        94.31%    39.99k 
    Data Prefetch Efficiency:    8.47%    59 

    CACHE HITS BY CACHE LIST: 
      Anonymously Used:        0.53%    486 
      Most Recently Used:        28.81%    26.55k 
      Most Frequently Used:        70.67%    65.13k 
      Most Recently Used Ghost:    0.00%    0 
      Most Frequently Used Ghost:    0.00%    0 

    CACHE HITS BY DATA TYPE: 
      Demand Data:            40.92%    37.72k 
      Prefetch Data:        0.01%    5 
      Demand Metadata:        58.55%    53.96k 
      Prefetch Metadata:        0.52%    482 

    CACHE MISSES BY DATA TYPE: 
      Demand Data:            16.24%    2.28k 
      Prefetch Data:        0.39%    54 
      Demand Metadata:        80.68%    11.30k 
      Prefetch Metadata:        2.69%    377 


DMU Prefetch Efficiency:                    39.23k 
    Hit Ratio:            0.44%    174 
    Miss Ratio:            99.56%    39.05k 



ZFS Tunables: 
    dbuf_cache_hiwater_pct                            10 
    dbuf_cache_lowater_pct                            10 
    dbuf_cache_max_bytes                              104857600 
    dbuf_cache_max_shift                              5 
    dmu_object_alloc_chunk_shift                      7 
    ignore_hole_birth                                 1 
    l2arc_feed_again                                  1 
    l2arc_feed_min_ms                                 200 
    l2arc_feed_secs                                   1 
    l2arc_headroom                                    2 
    l2arc_headroom_boost                              200 
    l2arc_noprefetch                                  1 
    l2arc_norw                                        0 
    l2arc_write_boost                                 8388608 
    l2arc_write_max                                   8388608 
    metaslab_aliquot                                  524288 
    metaslab_bias_enabled                             1 
    metaslab_debug_load                               0 
    metaslab_debug_unload                             0 
    metaslab_fragmentation_factor_enabled             1 
    metaslab_lba_weighting_enabled                    1 
    metaslab_preload_enabled                          1 
    metaslabs_per_vdev                                200 
    send_holes_without_birth_time                     1 
    spa_asize_inflation                               24 
    spa_config_path /etc/zfs/zpool.cache 
    spa_load_verify_data                              1 
    spa_load_verify_maxinflight                       10000 
    spa_load_verify_metadata                          1 
    spa_slop_shift                                    5 
    zfetch_array_rd_sz                                1048576 
    zfetch_max_distance                               8388608 
    zfetch_max_streams                                8 
    zfetch_min_sec_reap                               2 
    zfs_abd_scatter_enabled                           1 
    zfs_abd_scatter_max_order                         10 
    zfs_admin_snapshot                                1 
    zfs_arc_average_blocksize                         8192 
    zfs_arc_dnode_limit                               0 
    zfs_arc_dnode_limit_percent                       10 
    zfs_arc_dnode_reduce_percent                      10 
    zfs_arc_grow_retry                                0 
    zfs_arc_lotsfree_percent                          10 
    zfs_arc_max                                       0 
    zfs_arc_meta_adjust_restarts                      4096 
    zfs_arc_meta_limit                                0 
    zfs_arc_meta_limit_percent                        75 
    zfs_arc_meta_min                                  0 
    zfs_arc_meta_prune                                10000 
    zfs_arc_meta_strategy                             1 
    zfs_arc_min                                       0 
    zfs_arc_min_prefetch_lifespan                     0 
    zfs_arc_p_aggressive_disable                      1 
    zfs_arc_p_dampener_disable                        1 
    zfs_arc_p_min_shift                               0 
    zfs_arc_pc_percent                                0 
    zfs_arc_shrink_shift                              0 
    zfs_arc_sys_free                                  0 
    zfs_autoimport_disable                            1 
    zfs_compressed_arc_enabled                        1 
    zfs_dbgmsg_enable                                 0 
    zfs_dbgmsg_maxsize                                4194304 
    zfs_dbuf_state_index                              0 
    zfs_deadman_checktime_ms                          5000 
    zfs_deadman_enabled                               1 
    zfs_deadman_synctime_ms                           1000000 
    zfs_dedup_prefetch                                0 
    zfs_delay_min_dirty_percent                       60 
    zfs_delay_scale                                   500000 
    zfs_delete_blocks                                 20480 
    zfs_dirty_data_max                                2526400921 
    zfs_dirty_data_max_max                            4294967296 
    zfs_dirty_data_max_max_percent                    25 
    zfs_dirty_data_max_percent                        10 
    zfs_dirty_data_sync                               67108864 
    zfs_dmu_offset_next_sync                          0 
    zfs_expire_snapshot                               300 
    zfs_flags                                         0 
    zfs_free_bpobj_enabled                            1 
    zfs_free_leak_on_eio                              0 
    zfs_free_max_blocks                               100000 
    zfs_free_min_time_ms                              1000 
    zfs_immediate_write_sz                            32768 
    zfs_max_recordsize                                1048576 
    zfs_mdcomp_disable                                0 
    zfs_metaslab_fragmentation_threshold              70 
    zfs_metaslab_segment_weight_enabled               1 
    zfs_metaslab_switch_threshold                     2 
    zfs_mg_fragmentation_threshold                    85 
    zfs_mg_noalloc_threshold                          0 
    zfs_multihost_fail_intervals                      5 
    zfs_multihost_history                             0 
    zfs_multihost_import_intervals                    10 
    zfs_multihost_interval                            1000 
    zfs_multilist_num_sublists                        0 
    zfs_no_scrub_io                                   0 
    zfs_no_scrub_prefetch                             0 
    zfs_nocacheflush                                  0 
    zfs_nopwrite_enabled                              1 
    zfs_object_mutex_size                             64 
    zfs_pd_bytes_max                                  52428800 
    zfs_per_txg_dirty_frees_percent                   30 
    zfs_prefetch_disable                              0 
    zfs_read_chunk_size                               1048576 
    zfs_read_history                                  0 
    zfs_read_history_hits                             0 
    zfs_recover                                       0 
    zfs_resilver_delay                                2 
    zfs_resilver_min_time_ms                          3000 
    zfs_scan_idle                                     50 
    zfs_scan_min_time_ms                              1000 
    zfs_scrub_delay                                   4 
    zfs_send_corrupt_data                             0 
    zfs_sync_pass_deferred_free                       2 
    zfs_sync_pass_dont_compress                       5 
    zfs_sync_pass_rewrite                             2 
    zfs_sync_taskq_batch_pct                          75 
    zfs_top_maxinflight                               32 
    zfs_txg_history                                   0 
    zfs_txg_timeout                                   5 
    zfs_vdev_aggregation_limit                        131072 
    zfs_vdev_async_read_max_active                    3 
    zfs_vdev_async_read_min_active                    1 
    zfs_vdev_async_write_active_max_dirty_percent     60 
    zfs_vdev_async_write_active_min_dirty_percent     30 
    zfs_vdev_async_write_max_active                   10 
    zfs_vdev_async_write_min_active                   2 
    zfs_vdev_cache_bshift                             16 
    zfs_vdev_cache_max                                16384 
    zfs_vdev_cache_size                               0 
    zfs_vdev_max_active                               1000 
    zfs_vdev_mirror_non_rotating_inc                  0 
    zfs_vdev_mirror_non_rotating_seek_inc             1 
    zfs_vdev_mirror_rotating_inc                      0 
    zfs_vdev_mirror_rotating_seek_inc                 5 
    zfs_vdev_mirror_rotating_seek_offset              1048576 
    zfs_vdev_queue_depth_pct                          1000 
    zfs_vdev_raidz_impl                               [fastest] original scalar sse2 ssse3 
    zfs_vdev_read_gap_limit                           32768 
    zfs_vdev_scheduler                                noop 
    zfs_vdev_scrub_max_active                         2 
    zfs_vdev_scrub_min_active                         1 
    zfs_vdev_sync_read_max_active                     10 
    zfs_vdev_sync_read_min_active                     10 
    zfs_vdev_sync_write_max_active                    10 
    zfs_vdev_sync_write_min_active                    10 
    zfs_vdev_write_gap_limit                          4096 
    zfs_zevent_cols                                   80 
    zfs_zevent_console                                0 
    zfs_zevent_len_max                                256 
    zil_replay_disable                                0 
    zil_slog_bulk                                     786432 
    zio_delay_max                                     30000 
    zio_dva_throttle_enabled                          1 
    zio_requeue_io_start_cut_in_line                  1 
    zio_taskq_batch_pct                               75 
    zvol_inhibit_dev                                  0 
    zvol_major                                        230 
    zvol_max_discard_blocks                           16384 
    zvol_prefetch_bytes                               131072 
    zvol_request_sync                                 0 
    zvol_threads                                      32 
    zvol_volmode                                      1

Code:
# dmesg -Hw --level err,warn,crit,emerg,alert 
[Mar 7 15:19] DMAR-IR: This system BIOS has enabled interrupt remapping 
              on a chipset that contains an erratum making that 
              feature unstable.  To maintain system stability 
              interrupt remapping is being disabled.  Please 
              contact your BIOS vendor for an update 
[  +0,043887] core: CPUID marked event: 'bus cycles' unavailable 
[  +0,000000]   #2  #3  #4  #5  #6  #7 
[  +0,017972] PCCT header not found. 
[  +0,030469] pmd_set_huge: Cannot satisfy [mem 0xe0000000-0xe0200000] with a huge-page mapping due to MTRR override. 
[  +0,053351] ACPI: \: failed to evaluate _DSM (0x1001) 
[  +0,000828] Expanded resource Reserved due to conflict with PCI Bus 0000:00 
[  +0,735938] r8169 0000:07:00.0: can't disable ASPM; OS doesn't have ASPM control 
[  +0,000405] r8169 0000:06:00.0: can't disable ASPM; OS doesn't have ASPM control 
[  +0,074693] ACPI Warning: SystemIO range 0x0000000000000400-0x000000000000041F conflicts with OpRegion 0x0000000000000400-0x000000000000040F (\SMRG) (20170531/utaddress-247) 
[  +3,731825] spl: loading out-of-tree module taints kernel. 
[  +0,002757] znvpair: module license 'CDDL' taints kernel. 
[  +0,000000] Disabling lock debugging due to kernel taint 
[  +7,656360] ATK0110 ATK0110:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info(). 
[  +0,067613] ACPI Warning: SystemIO range 0x0000000000000828-0x000000000000082F conflicts with OpRegion 0x0000000000000800-0x000000000000084F (\PMRG) (20170531/utaddress-247) 
[  +0,000009] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053F conflicts with OpRegion 0x0000000000000500-0x000000000000053F (\GPS0) (20170531/utaddress-247) 
[  +0,000005] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052F conflicts with OpRegion 0x0000000000000500-0x000000000000053F (\GPS0) (20170531/utaddress-247) 
[  +0,000005] lpc_ich: Resource conflict(s) found affecting gpio_ich 
[  +0,140678] kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround 
[  +0,014275] Error: Driver 'pcspkr' is already registered, aborting... 
[  +0,484725] CRAT table not found 
[  +3,880545] new mount options do not match the existing superblock, will be ignored
 
Same problem on our servers too. They start using swap after a day.
 
ZFS on Linux sets the maximum ARC size (c_max) by default to half of physical memory. This is quite good for I/O-intensive workloads, but can be a problem when hosting virtual machines. During a backup, for example, system processes also need memory, so 12 GB for the ARC plus 12 GB for the VMs = 24 GB -> no free memory left for other processes -> the machine has to swap.
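The arithmetic above can be written out as a quick sanity check (values in GiB, taken from this thread's setup):

```shell
# 24 GiB total RAM, 12 GiB committed to VMs, default ARC cap = half of RAM.
TOTAL=24
VMS=12
ARC_DEFAULT=$((TOTAL / 2))              # ZFS-on-Linux default c_max
HEADROOM=$((TOTAL - VMS - ARC_DEFAULT)) # what's left for everything else
echo "headroom for host processes: ${HEADROOM} GiB"   # prints 0 GiB
```

With zero headroom, any extra demand from vzdump or system daemons has to come from swap.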

Since ZoL version 0.6.2 you can set c_max at runtime, but you have to drop caches (or export the pool) to see the effect:

https://serverfault.com/questions/581669/why-isnt-the-arc-max-setting-honoured-on-zfs-on-linux

You can, for example, pull c_max down to 8 GB and see whether your workload still runs adequately.
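A minimal sketch of the runtime approach described above, assuming a ZFS host and root privileges (the 8 GiB figure is just the example value from this post):

```shell
# Cap the ARC at 8 GiB at runtime, then drop caches so the ARC
# actually shrinks toward the new limit.
ARC_MAX_BYTES=$((8 * 1024 * 1024 * 1024))
if [ -w /sys/module/zfs/parameters/zfs_arc_max ]; then
    echo "$ARC_MAX_BYTES" > /sys/module/zfs/parameters/zfs_arc_max
    # Ask the kernel to release page cache and slab objects:
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "zfs module parameters not writable (not root, or zfs not loaded)"
fi
```

Note that this runtime change does not survive a reboot; for that you need the modprobe option discussed further down the thread.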

For hosting VMs, and for ZFS (and even more so for both on one host): memory, memory, memory ...
 
OK, this is crazy: I just set vm.swappiness=90 (just for testing) and my node ignores it.
I did it like this
Code:
 sysctl -w vm.swappiness=90

And also by setting vm.swappiness=90 in /etc/sysctl.conf and then rebooting. Both ways are ignored. Also, my node keeps swapping every time I do a backup: the ARC size hits approximately 13-14 GB, and when it reaches that value it collapses to 752.93 MB (the default minimum size). I set zfs_arc_max=21474836480 and yet the ARC never went past 13-14 GB.
 
24 GB RAM - 12 GB for VMs, so you only have up to 12 GB available for the system, vzdump, other running processes and the ZFS ARC.
No wonder the system wants to swap. Don't raise the ARC; that will not help as long as you don't add physical memory. Lock the ARC down to 8 or 4 GB or it will keep swapping.
 
@Klaus Steinberger I followed your advice, set zfs_arc_max=8589934592, rebooted and ran the backup again without starting any VM or anything else, yet the same thing happened. I tried backing up a Windows KVM and a Linux KVM, same result: Proxmox started to swap at about 55% completion. I'm clueless. I also tried with 2 GB for zfs_arc_max and the same thing happened. Also, the Proxmox graph in the web interface shows that RAM usage never reaches 24 GB.
 
This is my zfs setup

Code:
# grep -H '' /sys/module/zfs/parameters/*
/sys/module/zfs/parameters/dbuf_cache_hiwater_pct:10
/sys/module/zfs/parameters/dbuf_cache_lowater_pct:10
/sys/module/zfs/parameters/dbuf_cache_max_bytes:67108864
/sys/module/zfs/parameters/dbuf_cache_max_shift:5
/sys/module/zfs/parameters/dmu_object_alloc_chunk_shift:7
/sys/module/zfs/parameters/ignore_hole_birth:1
/sys/module/zfs/parameters/l2arc_feed_again:1
/sys/module/zfs/parameters/l2arc_feed_min_ms:200
/sys/module/zfs/parameters/l2arc_feed_secs:1
/sys/module/zfs/parameters/l2arc_headroom:2
/sys/module/zfs/parameters/l2arc_headroom_boost:200
/sys/module/zfs/parameters/l2arc_noprefetch:1
/sys/module/zfs/parameters/l2arc_norw:0
/sys/module/zfs/parameters/l2arc_write_boost:8388608
/sys/module/zfs/parameters/l2arc_write_max:8388608
/sys/module/zfs/parameters/metaslab_aliquot:524288
/sys/module/zfs/parameters/metaslab_bias_enabled:1
/sys/module/zfs/parameters/metaslab_debug_load:0
/sys/module/zfs/parameters/metaslab_debug_unload:0
/sys/module/zfs/parameters/metaslab_fragmentation_factor_enabled:1
/sys/module/zfs/parameters/metaslab_lba_weighting_enabled:1
/sys/module/zfs/parameters/metaslab_preload_enabled:1
/sys/module/zfs/parameters/metaslabs_per_vdev:200
/sys/module/zfs/parameters/send_holes_without_birth_time:1
/sys/module/zfs/parameters/spa_asize_inflation:24
/sys/module/zfs/parameters/spa_config_path:/etc/zfs/zpool.cache
/sys/module/zfs/parameters/spa_load_verify_data:1
/sys/module/zfs/parameters/spa_load_verify_maxinflight:10000
/sys/module/zfs/parameters/spa_load_verify_metadata:1
/sys/module/zfs/parameters/spa_slop_shift:5
/sys/module/zfs/parameters/zfetch_array_rd_sz:1048576
/sys/module/zfs/parameters/zfetch_max_distance:8388608
/sys/module/zfs/parameters/zfetch_max_streams:8
/sys/module/zfs/parameters/zfetch_min_sec_reap:2
/sys/module/zfs/parameters/zfs_abd_scatter_enabled:1
/sys/module/zfs/parameters/zfs_abd_scatter_max_order:10
/sys/module/zfs/parameters/zfs_admin_snapshot:1
/sys/module/zfs/parameters/zfs_arc_average_blocksize:8192
/sys/module/zfs/parameters/zfs_arc_dnode_limit:0
/sys/module/zfs/parameters/zfs_arc_dnode_limit_percent:10
/sys/module/zfs/parameters/zfs_arc_dnode_reduce_percent:10
/sys/module/zfs/parameters/zfs_arc_grow_retry:0
/sys/module/zfs/parameters/zfs_arc_lotsfree_percent:10
/sys/module/zfs/parameters/zfs_arc_max:2147483648
/sys/module/zfs/parameters/zfs_arc_meta_adjust_restarts:4096
/sys/module/zfs/parameters/zfs_arc_meta_limit:0
/sys/module/zfs/parameters/zfs_arc_meta_limit_percent:75
/sys/module/zfs/parameters/zfs_arc_meta_min:0
/sys/module/zfs/parameters/zfs_arc_meta_prune:10000
/sys/module/zfs/parameters/zfs_arc_meta_strategy:1
/sys/module/zfs/parameters/zfs_arc_min:0
/sys/module/zfs/parameters/zfs_arc_min_prefetch_lifespan:0
/sys/module/zfs/parameters/zfs_arc_p_aggressive_disable:1
/sys/module/zfs/parameters/zfs_arc_pc_percent:0
/sys/module/zfs/parameters/zfs_arc_p_dampener_disable:1
/sys/module/zfs/parameters/zfs_arc_p_min_shift:0
/sys/module/zfs/parameters/zfs_arc_shrink_shift:0
/sys/module/zfs/parameters/zfs_arc_sys_free:0
/sys/module/zfs/parameters/zfs_autoimport_disable:1
/sys/module/zfs/parameters/zfs_compressed_arc_enabled:1
/sys/module/zfs/parameters/zfs_dbgmsg_enable:0
/sys/module/zfs/parameters/zfs_dbgmsg_maxsize:4194304
/sys/module/zfs/parameters/zfs_dbuf_state_index:0
/sys/module/zfs/parameters/zfs_deadman_checktime_ms:5000
/sys/module/zfs/parameters/zfs_deadman_enabled:1
/sys/module/zfs/parameters/zfs_deadman_synctime_ms:1000000
/sys/module/zfs/parameters/zfs_dedup_prefetch:0
/sys/module/zfs/parameters/zfs_delay_min_dirty_percent:60
/sys/module/zfs/parameters/zfs_delay_scale:500000
/sys/module/zfs/parameters/zfs_delete_blocks:20480
/sys/module/zfs/parameters/zfs_dirty_data_max:2526399283
/sys/module/zfs/parameters/zfs_dirty_data_max_max:4294967296
/sys/module/zfs/parameters/zfs_dirty_data_max_max_percent:25
/sys/module/zfs/parameters/zfs_dirty_data_max_percent:10
/sys/module/zfs/parameters/zfs_dirty_data_sync:67108864
/sys/module/zfs/parameters/zfs_dmu_offset_next_sync:0
/sys/module/zfs/parameters/zfs_expire_snapshot:300
/sys/module/zfs/parameters/zfs_flags:0
/sys/module/zfs/parameters/zfs_free_bpobj_enabled:1
/sys/module/zfs/parameters/zfs_free_leak_on_eio:0
/sys/module/zfs/parameters/zfs_free_max_blocks:100000
/sys/module/zfs/parameters/zfs_free_min_time_ms:1000
/sys/module/zfs/parameters/zfs_immediate_write_sz:32768
/sys/module/zfs/parameters/zfs_max_recordsize:1048576
/sys/module/zfs/parameters/zfs_mdcomp_disable:0
/sys/module/zfs/parameters/zfs_metaslab_fragmentation_threshold:70
/sys/module/zfs/parameters/zfs_metaslab_segment_weight_enabled:1
/sys/module/zfs/parameters/zfs_metaslab_switch_threshold:2
/sys/module/zfs/parameters/zfs_mg_fragmentation_threshold:85
/sys/module/zfs/parameters/zfs_mg_noalloc_threshold:0
/sys/module/zfs/parameters/zfs_multihost_fail_intervals:5
/sys/module/zfs/parameters/zfs_multihost_history:0
/sys/module/zfs/parameters/zfs_multihost_import_intervals:10
/sys/module/zfs/parameters/zfs_multihost_interval:1000
/sys/module/zfs/parameters/zfs_multilist_num_sublists:0
/sys/module/zfs/parameters/zfs_nocacheflush:0
/sys/module/zfs/parameters/zfs_nopwrite_enabled:1
/sys/module/zfs/parameters/zfs_no_scrub_io:0
/sys/module/zfs/parameters/zfs_no_scrub_prefetch:0
/sys/module/zfs/parameters/zfs_object_mutex_size:64
/sys/module/zfs/parameters/zfs_pd_bytes_max:52428800
/sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent:30
/sys/module/zfs/parameters/zfs_prefetch_disable:0
/sys/module/zfs/parameters/zfs_read_chunk_size:1048576
/sys/module/zfs/parameters/zfs_read_history:0
/sys/module/zfs/parameters/zfs_read_history_hits:0
/sys/module/zfs/parameters/zfs_recover:0
/sys/module/zfs/parameters/zfs_resilver_delay:2
/sys/module/zfs/parameters/zfs_resilver_min_time_ms:3000
/sys/module/zfs/parameters/zfs_scan_idle:50
/sys/module/zfs/parameters/zfs_scan_min_time_ms:1000
/sys/module/zfs/parameters/zfs_scrub_delay:4
/sys/module/zfs/parameters/zfs_send_corrupt_data:0
/sys/module/zfs/parameters/zfs_sync_pass_deferred_free:2
/sys/module/zfs/parameters/zfs_sync_pass_dont_compress:5
/sys/module/zfs/parameters/zfs_sync_pass_rewrite:2
/sys/module/zfs/parameters/zfs_sync_taskq_batch_pct:75
/sys/module/zfs/parameters/zfs_top_maxinflight:32
/sys/module/zfs/parameters/zfs_txg_history:0
/sys/module/zfs/parameters/zfs_txg_timeout:5
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit:131072
/sys/module/zfs/parameters/zfs_vdev_async_read_max_active:3
/sys/module/zfs/parameters/zfs_vdev_async_read_min_active:1
/sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent:60
/sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent:30
/sys/module/zfs/parameters/zfs_vdev_async_write_max_active:10
/sys/module/zfs/parameters/zfs_vdev_async_write_min_active:2
/sys/module/zfs/parameters/zfs_vdev_cache_bshift:16
/sys/module/zfs/parameters/zfs_vdev_cache_max:16384
/sys/module/zfs/parameters/zfs_vdev_cache_size:0
/sys/module/zfs/parameters/zfs_vdev_max_active:1000
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_inc:0
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_seek_inc:1
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_inc:0
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_inc:5
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_offset:1048576
/sys/module/zfs/parameters/zfs_vdev_queue_depth_pct:1000
/sys/module/zfs/parameters/zfs_vdev_raidz_impl:[fastest] original scalar sse2 ssse3
/sys/module/zfs/parameters/zfs_vdev_read_gap_limit:32768
/sys/module/zfs/parameters/zfs_vdev_scheduler:noop
/sys/module/zfs/parameters/zfs_vdev_scrub_max_active:2
/sys/module/zfs/parameters/zfs_vdev_scrub_min_active:1
/sys/module/zfs/parameters/zfs_vdev_sync_read_max_active:10
/sys/module/zfs/parameters/zfs_vdev_sync_read_min_active:10
/sys/module/zfs/parameters/zfs_vdev_sync_write_max_active:10
/sys/module/zfs/parameters/zfs_vdev_sync_write_min_active:10
/sys/module/zfs/parameters/zfs_vdev_write_gap_limit:4096
/sys/module/zfs/parameters/zfs_zevent_cols:80
/sys/module/zfs/parameters/zfs_zevent_console:0
/sys/module/zfs/parameters/zfs_zevent_len_max:256
/sys/module/zfs/parameters/zil_replay_disable:0
/sys/module/zfs/parameters/zil_slog_bulk:786432
/sys/module/zfs/parameters/zio_delay_max:30000
/sys/module/zfs/parameters/zio_dva_throttle_enabled:1
/sys/module/zfs/parameters/zio_requeue_io_start_cut_in_line:1
/sys/module/zfs/parameters/zio_taskq_batch_pct:75
/sys/module/zfs/parameters/zvol_inhibit_dev:0
/sys/module/zfs/parameters/zvol_major:230
/sys/module/zfs/parameters/zvol_max_discard_blocks:16384
/sys/module/zfs/parameters/zvol_prefetch_bytes:131072
/sys/module/zfs/parameters/zvol_request_sync:0
/sys/module/zfs/parameters/zvol_threads:32
/sys/module/zfs/parameters/zvol_volmode:1
 
Look at this line:
/sys/module/zfs/parameters/zfs_arc_max:2147483648

If you change the zfs.conf file you have to regenerate the initramfs, as the ZFS configuration is applied very early in the boot process (when only the initramfs is available).
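A sketch of the persistent variant, assuming a setup where the zfs module loads from the initramfs (the 8 GiB value is just an example; pick whatever cap fits your RAM):

```shell
# Persist the ARC cap as a module option, then rebuild the initramfs
# so the new value is picked up early at the next boot.
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_max=8589934592
EOF
update-initramfs -u   # then reboot
```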
 
@Klaus Steinberger, thanks for your quick response and for helping me. Yes, I did:

I added options zfs zfs_arc_max=2147483648 to /etc/modprobe.d/zfs.conf using nano and then ran update-initramfs -u && reboot. Also,
2147483648 = 2 GB, which is what I have right now. I tried 20 GB, 8 GB, 4 GB and 2 GB. Every time I do a backup I end up with the same issue: Proxmox swapping with lots of RAM available. Have you looked at any other parameter in my setup and compared it to yours? Maybe I'm running a different ZFS version? I'm still testing Proxmox, so I don't have a valid subscription.
 
Have you permanently set swappiness in /etc/sysctl.d/? Otherwise it's back to the default after a reboot.
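A minimal sketch of what that looks like (the drop-in filename 99-swappiness.conf and the value 10 are just example choices):

```shell
# Persist vm.swappiness via a sysctl drop-in, then apply it without
# rebooting.
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
sysctl -p /etc/sysctl.d/99-swappiness.conf
```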
 
Have you permanently set swappiness in /etc/sysctl.d/? Otherwise it's back to the default after a reboot.

Also yes. Just now I reinstalled Proxmox and the problem disappeared, so I think the problem is related to a package upgrade. I installed from the Proxmox VE 5.1 ISO installer (3rd ISO release), which has kernel 4.13.3 and ZFS 0.7.2, and there is an upgrade available for both, so maybe I have to take some precautions or steps before upgrading (my bet is on ZFS, but I don't know yet what I'd have to do). Any ideas?
 
OK, here's the thing: I read that one must do a zpool export before upgrading ZFS. I tried the command but got a "device busy" error. However, using zpool history I saw that an export was done at some point. How can I run zpool export without getting a "device busy" error?
 
OK, here's the thing: I read that one must do a zpool export before upgrading ZFS. I tried the command but got a "device busy" error. However, using zpool history I saw that an export was done at some point. How can I run zpool export without getting a "device busy" error?

You do not have to do a zpool export before upgrading ZFS. Just do a "apt-get update && apt-get dist-upgrade", then reboot.

Anyone that wants to argue the point should ask themselves how a ZFS root filesystem would get updated if it had to be exported (inaccessible) to be updated (requiring access).
 
You do not have to do a zpool export before upgrading ZFS. Just do a "apt-get update && apt-get dist-upgrade", then reboot.

Anyone that wants to argue the point should ask themselves how a ZFS root filesystem would get updated if it had to be exported (inaccessible) to be updated (requiring access).

You're right, now I understand. Thanks.
 
