Hi.
After upgrading to the latest Proxmox VE 3.4 kernel (I believe that's the cause), some of our OpenVZ containers consume all available CT memory. I've never seen this behavior before:
Code:
top - 15:35:07 up 3 days, 13:34, 2 users, load average: 0.07, 0.05, 0.01
Tasks: 58 total, 1 running, 57 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2097152k total, 2097152k used, 0k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 455060k cached
The CT seems to be working fine and doesn't behave as if it were out of memory. The only issue at the moment is that our monitoring system (Nagios) raises an alarm about it, and I don't quite understand how to deal with it: disable the check (which is obviously a bad option), monitor free memory / memory consumption in some other way (how?), or somehow bring the reported memory usage back to normal without reverting to the old kernels.
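If I go the "monitor it some other way" route, I was thinking of something along these lines: read /proc/meminfo inside the CT and count Buffers/Cached as reclaimable, so the check only alarms when memory is really exhausted. This is just a rough sketch of the idea, not a finished plugin; the 80/90% thresholds and the MemFree + Buffers + Cached formula are my own assumptions:
Code:
#!/usr/bin/env python3
# Rough sketch of a memory check that treats buffers/cache as reclaimable,
# so a CT whose "free" is 0 only because of page cache doesn't trigger an alarm.
# The thresholds and the MemFree + Buffers + Cached formula are assumptions,
# not taken from an existing plugin.
import sys

WARN_PCT = 80   # warn above this % of "really used" memory
CRIT_PCT = 90   # critical above this %

def read_meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info

def main():
    mem = read_meminfo()
    total = mem["MemTotal"]
    # Treat buffers and page cache as available, since the kernel can reclaim them.
    available = mem["MemFree"] + mem.get("Buffers", 0) + mem.get("Cached", 0)
    used_pct = 100.0 * (total - available) / total

    msg = "memory used %.1f%% (total %d kB, reclaimable-free %d kB)" % (
        used_pct, total, available)
    # Standard Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL
    if used_pct >= CRIT_PCT:
        print("CRITICAL - " + msg)
        sys.exit(2)
    elif used_pct >= WARN_PCT:
        print("WARNING - " + msg)
        sys.exit(1)
    print("OK - " + msg)
    sys.exit(0)

if __name__ == "__main__":
    main()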
Any ideas?
# pveversion -v
proxmox-ve-2.6.32: 3.4-184 (running kernel: 2.6.32-48-pve)
pve-manager: 3.4-15 (running version: 3.4-15/e1daa307)
pve-kernel-2.6.32-48-pve: 2.6.32-184
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-46-pve: 2.6.32-177
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-20
qemu-server: 3.4-9
pve-firmware: 1.1-5
libpve-common-perl: 3.0-27
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-35
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-28
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1