I am unable to get any tools to easily show per-VM disk usage; this is the "iotop" command on ZFS:
This does show VM 522 using 20% IO, but if I look at its graph, it is not the VM using the main resources.
None of the running KVM machines show correct disk read/write figures (always 0/0), whereas without ZFS, iotop...
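Since ZFS bypasses the page cache, per-process tools like iotop often show 0/0 for KVM processes. As a sketch of an alternative, `zpool iostat` at least shows which pool or vdev is busy (though not which VM is responsible):

```shell
# Watch pool-level read/write ops and bandwidth, refreshed every 2 seconds.
# Requires ZFS on the node; press Ctrl-C to stop.
zpool iostat -v 2
```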
I already did that inside the VM to test, with no results. As you can see from above, there is no caching, and the VM's RAM usage has gone below 1GB, but the node's total RAM usage is still near full, as if the VM were still using the full 20GB.
The issue with using minimum memory is that, in my experience, some applications show out-of-memory errors if I set it lower; also, the free -g command does not show accurate memory allocations.
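For reference, the balloon target can also be adjusted per VM from the node. A sketch using the Proxmox VE CLI (522 is the VMID from my screenshot; 4096 is an arbitrary example value in MB):

```shell
# Set the balloon target for VM 522 to 4096 MB; with the balloon driver
# loaded in the guest, Proxmox asks the guest to shrink toward this value.
qm set 522 --balloon 4096
```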
This is happening on another Proxmox node too, which is a shame. To correct my last message: it looks like the memory only changes on a complete stop/start of the VM.
https://i.gyazo.com/2dd8bb8effb41abc6819091676178e81.png - Before starting VM RAM usage...
I am unsure if this is normal behavior for a Linux ballooning + KSM VM, but I created a VM and ran the following command to consume nearly all of its free RAM:
stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1
The node's total RAM usage went up by 90GB. I then...
Something is not right. I just created a virtual machine with 100GB RAM and ran the following to reach ~100% usage:
stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1
The node's total usage went from 700GB to 800GB, but after turning the VPS off it's not...
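Since the node uses ZFS, one thing worth ruling out is the ARC holding on to that memory after the VM stops. As a sketch, the ARC can be capped via a module option (the 64 GiB value here is purely illustrative):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 64 GiB (value in bytes)
options zfs zfs_arc_max=68719476736
# Then regenerate the initramfs and reboot for it to take effect:
#   update-initramfs -u -k all
```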
Thanks for the reply and for clarifying KSM_THRES_COEF, but I noticed ksmd has been using 100% CPU non-stop since RAM usage reached around 200GB, which makes me think I need to optimize the settings?
KSM is constantly using 100% of one CPU; I am willing to increase that to 500% (5 vCPUs) if that will give better results.
Node settings: https://i.gyazo.com/32618f24be8a9fb282173cd1ebb9b740.png
Right now I have only KSM_THRES_COEF=70 set in /etc/ksmtuned.conf, but I am not sure if that means it...
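For reference, a sketch of the ksmtuned knobs I mean (values here are illustrative, not recommendations; as I understand it, KSM_THRES_COEF=70 makes ksmtuned start ksmd once free memory drops below 70% of total):

```shell
# /etc/ksmtuned.conf (illustrative overrides; every line is optional)
KSM_THRES_COEF=70      # start ksmd when free memory < 70% of total
KSM_SLEEP_MSEC=20      # how long ksmd sleeps between scan batches
KSM_NPAGES_MAX=1250    # upper bound on pages scanned per batch
```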
Some progress. This seems to be an issue with CentOS and OVH. After I noticed it was working on a Windows VM using DHCP, I tried the same on both CentOS 7 and Debian 10. Debian 10 and Windows both work flawlessly; however, CentOS seems to have an issue.
Debian 10:
ip a
1: lo...
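On CentOS 7, an OVH-style gateway that sits outside the assigned subnet usually needs an explicit host route before the default route will install. A sketch of what I would try in /etc/sysconfig/network-scripts/route-eth0 (eth0 is a placeholder for the guest's interface name):

```shell
# /etc/sysconfig/network-scripts/route-eth0 (ip-route syntax)
# Make the off-subnet gateway reachable as a directly connected host first,
# then point the default route at it.
100.64.0.1 dev eth0
default via 100.64.0.1 dev eth0
```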
It's not about being cheap; I am already using these IP blocks. It also doesn't explain why the VPN is not working on the node itself.
100.64.0.1 is the IP I got from booting the server into rescue mode. These new Scale range servers are different; the IPv6, for example, only works using gateway...
Thank you for the reply, but OVH vRack is not an option for me right now, as most of my IP blocks are /30 and OVH vRack would reserve two of those IPs?
ip r
default via 100.64.0.1 dev vmbr0 proto kernel onlink
IPBLOCK/27 dev vmbr0 scope link
IPBLOCK/30 dev vmbr0 scope link
192.168.0.0/16 dev...
No, the VPN client fails on the main node too.
tun0 is the VPN interface.
Fri May 21 12:47:31 2021 WARNING: file '/etc/openvpn/vpn/keys/vpn.key' is group or others accessible
Fri May 21 12:47:31 2021 WARNING: file '/etc/openvpn/vpn/keys/ta.key' is group or others accessible
Fri May 21 12:47:31 2021...
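The two WARNING lines are about permissions on the key files; OpenVPN complains when they are group- or world-readable. The fix is mode 600 on each key, shown here as a self-contained demo on a temporary file (apply the same chmod to the real paths from the log above):

```shell
# Real fix (paths from the log above):
#   chmod 600 /etc/openvpn/vpn/keys/vpn.key /etc/openvpn/vpn/keys/ta.key
# Demo on a temporary file so the effect is visible:
key=$(mktemp)
chmod 600 "$key"
stat -c '%a' "$key"   # prints 600: owner read/write only
rm -f "$key"
```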
1. I did stop it, but after 5-15 seconds of the VPN being established it stops working, with no mention of why in the logs.
2. The only other thing is that when a VM boots up, this is shown (VMID 100):
1. OpenVPN
2. Yes, they can ping each other before connecting.
3. Below is the connect log; I can't get the other end's.
Fri May 21 10:32:24 2021 WARNING: file '/etc/openvpn/wfvpn/keys/wfvpn.key' is group or others accessible
Fri May 21 10:32:24 2021 WARNING: file '/etc/openvpn/wfvpn/keys/ta.key' is group or...