@chrcoluk thanks, I'll read up on transparent_hugepage and txg.
For 5 months my servers have been running with an ugly workaround: a cron job that drops the memory cache every 15 minutes :rolleyes:
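For reference, the workaround described above can be expressed as a single crontab entry; a sketch only (the exact schedule and drop_caches value are assumptions, not from the post):

```
# /etc/crontab entry (sketch): flush pagecache, dentries and inodes every 15 minutes
# echo 3 frees all reclaimable caches; sync first so dirty pages are written out
*/15 * * * * root sync; echo 3 > /proc/sys/vm/drop_caches
```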
Hi,
I got the same issue on a second node: https://0bin.net/paste/++4Q1++1#Z9wbvOTP71W4pZylTMoWELlMJEAG0GWg5sLlz-tce0t
I think there is a problem with kernel version 5.4.103-1-pve. I'll try downgrading to 5.4.78-2-pve, since I have a node on that version without KVM crashes...
Hi,
My KVM guest hit an out-of-memory error 3 times since yesterday. Any ideas?
syslog: https://0bin.net/paste/9hcAzeJa#Wm976GOGKHVrMIZBk1yqz5ptcFqUHQ4824uwcQoxeNs
It works after adding "transport: udp"!
This option only works with "crypto_cipher: none" and "crypto_auth: none".
I think disabling cryptography is not a problem, since the nodes are connected through a dedicated VPN.
My final corosync.conf:
logging {
debug: off
to_syslog: yes
}
nodelist {
node {...
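To make the fix concrete, here is a hedged sketch of the totem section this implies (the cluster_name and version values are placeholders, not from the post; only the last three lines reflect the actual change):

```
totem {
  cluster_name: mycluster    # placeholder
  version: 2                 # placeholder
  transport: udp             # the fix described above
  crypto_cipher: none        # required when using transport: udp
  crypto_auth: none          # required when using transport: udp
}
```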
Average ping is about 14.5 ms through the tinc VPN and 13.5 ms without it.
The nodes are in different branch offices connected over an enterprise network, so it seems difficult to get LAN-level performance.
Currently, the easiest way to edit corosync.conf is to restore the cluster by downgrading to corosync...
After restarting lrm, crm and corosync:
https://hastebin.com/ebaxujaqed
After restarting pve-cluster:
https://hastebin.com/opedatudiq
Here is part of the second:
May 13 08:49:45 node3 pmxcfs[29854]: [status] notice: cpg_send_message retry 30
May 13 08:49:46 node3 corosync[27946]: [KNET ]...
@gradinaruvasile I tried but it didn't change anything
@t.lamprecht I have the same result on all nodes:
root@node1:~# md5sum /etc/pve/corosync.conf
0164366e7424ffcdc99c881ac5c7960d /etc/pve/corosync.conf
root@node1:~# md5sum /etc/corosync/corosync.conf
0164366e7424ffcdc99c881ac5c7960d...
I know it's an old topic, but this could help someone:
EXAMPLE: rename an LXC container from 100 to 101
1- Make sure there is NO VM or container with the target ID (here 101)
2- Please be CAREFUL: be sure you know what you are doing, and check that every line and name is correct (do not use the names in...
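The steps above roughly correspond to the following command sketch. This is an assumption-laden illustration, not the exact procedure from the (truncated) post: the pool name "rpool", the dataset path, and the single-disk layout are all placeholders you must adapt, and a backup of the config is strongly advised:

```shell
# Hedged sketch: rename LXC container 100 -> 101 on ZFS-backed storage
pct stop 100
mv /etc/pve/lxc/100.conf /etc/pve/lxc/101.conf
# rename the backing dataset (assumed path; check with `zfs list`)
zfs rename rpool/data/subvol-100-disk-0 rpool/data/subvol-101-disk-0
# update the volume reference inside the moved config
sed -i 's/subvol-100-disk-0/subvol-101-disk-0/' /etc/pve/lxc/101.conf
pct start 101
```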
Hi,
I have a Proxmox host with 3 containers and 2 virtual machines, all on ZFS storage. There is one thing that is automatic with containers and that I would like to do with VMs:
I create a virtual machine with a 500 GB disk. On the ZFS storage I enabled the "Thin provisioning" option, which means that...
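For context on what thin provisioning means at the ZFS level: it corresponds to a sparse zvol, which reserves no space up front. A hedged sketch (pool and dataset names are assumptions, not from the post):

```shell
# Assumed pool "rpool": -s creates a sparse (thin) zvol, so a 500 GB
# disk only consumes pool space as the guest actually writes data.
zfs create -s -V 500G rpool/data/vm-100-disk-0
zfs get refreservation rpool/data/vm-100-disk-0   # "none" for a thin volume
```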
I found a solution: edit the file /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm at line 672 (PVE 5.2-9):
replace
my $cmd = ['zfs', 'send', '-Rpv'];
by
my $cmd = ['zfs', 'send', '-RpvcD'];
It is not necessary to restart any service.
It's not a very clean solution (and it can possibly be...
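The one-line change above can also be applied with sed. The sketch below demonstrates the substitution on a scratch copy of the relevant line so it can be tried anywhere; on a real node the target file is /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm (keep a backup with cp first, and note the edit will be lost on the next package update):

```shell
# Demo of the edit on a scratch file containing the line from the post
FILE=/tmp/ZFSPoolPlugin-line.pm
printf "my \$cmd = ['zfs', 'send', '-Rpv'];\n" > "$FILE"
# swap -Rpv for -RpvcD (-c = compressed send, -D = deduplicated stream)
sed -i "s/'-Rpv'/'-RpvcD'/" "$FILE"
cat "$FILE"
```

On the real plugin file, point FILE at the path above instead of the scratch copy.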
Hi,
Is there a way in Proxmox (PVE 5) to replicate with compression?
Something like:
zfs send -Rpvi tank/vm-100-disk-0@__replicate_100-0_1538395933_ tank/vm-100-disk-0@__replicate_100-0_1538474876_ | bzip2 -c | ssh node2 "bzcat | zfs recv -F tank/vm-100-disk-0"