[SOLVED] Proxmox Sysctl Tuneup

Haider Jarral

Well-Known Member
Aug 18, 2018
Hello everyone,

Just wanted to get feedback on using these tune-ups. Are all of these valid for any Proxmox version and any configuration? I don't understand most of them even after going through the links mentioned, so I wanted some insight on whether they are useful to apply on a production 4-node cluster with Ceph.

https://gist.github.com/sergey-dryabzhinsky/bcc1a15cb7d06f3d4606823fcc834824

Code:
# allow that much active connections
net.core.somaxconn = 256000

# swap less, but do not disable it entirely
vm.swappiness = 1

# allow applications to request more virtual memory
# than real RAM size (or OpenVZ/LXC limits)
vm.overcommit_memory = 1

net.core.netdev_max_backlog = 16000
net.ipv4.tcp_max_syn_backlog = 32000
net.ipv4.tcp_syncookies = 1

net.unix.max_dgram_qlen = 1024

# Don't need IPv6 for now
# If you use IPv6 - comment this line
net.ipv6.conf.all.disable_ipv6 = 1

# Flush TIME_WAIT connections faster
net.ipv4.tcp_fin_timeout = 10
# same for the nf_conntrack module
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 15

# Increase the ephemeral port range
net.ipv4.ip_local_port_range = 10240    61000

# https://www.serveradminblog.com/2011/02/neighbour-table-overflow-sysctl-conf-tunning/
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096

# http://www.opennet.ru/opennews/art.shtml?num=44945
net.ipv4.tcp_challenge_ack_limit = 9999

# https://major.io/2008/12/03/reducing-inode-and-dentry-caches-to-keep-oom-killer-at-bay/
vm.vfs_cache_pressure = 10000

##
# Adjust vfs cache
# https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
# Decrease dirty cache to flush to disk faster
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
# Only on Proxmox 3.x with OpenVZ
ubc.dirty_ratio = 20
ubc.dirty_background_ratio = 10

# Don't slow network - save congestion window after idle
# https://github.com/ton31337/tools/wiki/tcp_slow_start_after_idle---tcp_no_metrics_save-performance
net.ipv4.tcp_slow_start_after_idle = 0

# https://tweaked.io/guide/kernel/
# Don't migrate processes between CPU cores too often
kernel.sched_migration_cost_ns = 5000000
# Kernel >= 2.6.38 (ie Proxmox 4+)
kernel.sched_autogroup_enabled = 0
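
For reference, settings like these would normally go into a file under /etc/sysctl.d/ and be loaded with sysctl --system; the file name below is just an example:

Code:
# any *.conf file under /etc/sysctl.d/ is picked up; the name is arbitrary
cp tuneup.conf /etc/sysctl.d/99-tuneup.conf

# reload all sysctl config files and spot-check one value
sysctl --system
sysctl net.core.somaxconn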
 
# allow applications to request more virtual memory
# than real RAM size (or OpenVZ/LXC limits)
vm.overcommit_memory = 1

This can get some VMs/CTs OOM-killed quickly. Memory overcommitment is something I would heavily advise against; it can just get you into trouble and make the system slow.
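
Before turning that on, it is easy to check the current policy and how much memory is already committed; both counters below come straight from /proc/meminfo:

Code:
# current policy: 0 = heuristic (default), 1 = always overcommit, 2 = strict
sysctl vm.overcommit_memory vm.overcommit_ratio

# how much memory is committed versus the commit limit
grep -E 'CommitLimit|Committed_AS' /proc/meminfo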

net.core.netdev_max_backlog = 16000
net.ipv4.tcp_max_syn_backlog = 32000

A high backlog queue may give you negative effects, similar to bufferbloat. If you have no specific good reason, I'd not mess with that one. It's much better to reject/drop packets when we cannot even handle a queue of size 1000 than to let them pile up much further, only for the same thing to happen anyway if there's really that much traffic going on.
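
Before raising those values it's worth checking whether the defaults are even being overrun; a rough check on a Debian/Proxmox node (counter names come from iproute2's nstat) could look like this:

Code:
# current defaults
sysctl net.core.netdev_max_backlog net.ipv4.tcp_max_syn_backlog net.core.somaxconn

# listen-queue drops/overflows since boot
nstat -az TcpExtListenDrops TcpExtListenOverflows

# per-CPU softnet stats; the 2nd column counts drops caused by a full netdev backlog
cat /proc/net/softnet_stat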

# Only on Proxmox 3.x with OpenVZ
ubc.dirty_ratio = 20
ubc.dirty_background_ratio = 10

# Don't slow network - save congestion window after idle
# https://github.com/ton31337/tools/wiki/tcp_slow_start_after_idle---tcp_no_metrics_save-performance
net.ipv4.tcp_slow_start_after_idle = 0

# https://tweaked.io/guide/kernel/
# Don't migrate processes between CPU cores too often
kernel.sched_migration_cost_ns = 5000000
# Kernel >= 2.6.38 (ie Proxmox 4+)
kernel.sched_autogroup_enabled = 0

Mostly old, outdated stuff. Messing with the scheduler parameters is seldom a good idea on general-purpose computers or hypervisors. Disabling TCP slow start after idle can result in a lot of spikes, as the link shows - while it may seem to handle more, it does so at the cost of the rest of the network.
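
If these were already applied somewhere, checking and reverting is straightforward; note that on newer kernels sched_migration_cost_ns has moved out of sysctl into debugfs, so that key may simply not exist anymore:

Code:
# see what is currently set (missing keys just print an error)
sysctl kernel.sched_autogroup_enabled net.ipv4.tcp_slow_start_after_idle
sysctl kernel.sched_migration_cost_ns 2>/dev/null

# back to the kernel defaults
sysctl -w kernel.sched_autogroup_enabled=1
sysctl -w net.ipv4.tcp_slow_start_after_idle=1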

In general it would be good to only alter things which are really required, and then only one after the other, so that each effect can be measured independently.
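
One way to do that is to flip a single value at runtime first - sysctl -w changes are lost on reboot, so they are easy to back out - measure, and only then persist it. The setting and values below are just an example:

Code:
# change exactly one knob at runtime
sysctl -w net.ipv4.tcp_fin_timeout=10

# ... run the workload and measure ...

# revert to the kernel default if it didn't help
sysctl -w net.ipv4.tcp_fin_timeout=60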

The vm.swappiness one seems the most reasonable to me; its value really is a bit high by default. But hey, whatever works for your specific setup and HW.
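
To see where a node currently stands before lowering it, check the value and how much swap is actually in use:

Code:
sysctl vm.swappiness    # Debian/Proxmox default is 60
free -h                 # overall memory and swap usage
swapon --show           # configured swap devices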
 
