Cannot allocate memory

Morell

Member
Dec 29, 2020
Hello, I have a machine with 32GB of RAM running Proxmox and 3 virtual machines: one uses 8GB and the other two 6GB each. From time to time, when one of these machines is turned off (due to some failure), it won't let me start it again and I get this error: "kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory",
which forces me to lower the RAM it uses even further.

I've looked at htop on the Proxmox host and it tells me I'm using 29GB of 32GB, but I can't see where that RAM is being used:

[screenshot: 65UUmXPiCW.png]


I need urgent help, thank you.
 
Well, you actually should know what type of storage you use ;)

You can provide a screenshot from the storage configuration or do a "zpool status" on the CLI.
 
This?
[screenshot: uVgUDncUea.png]


zpool status
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
 
That shows you are not using ZFS. ZFS uses a lot of memory, so it would have been one possible way to run out of it.

What does free -h show?
If you do not have swap, add a swap partition.
Use the top or htop commands to see which application is using the memory.
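For reference, the checks above can be done from the CLI like this; the swap-file commands in the comments are the usual recipe, but the 4G size and /swapfile path are just examples and creating swap requires root:

```shell
# Show memory and swap usage in human-readable units
free -h
# List active swap devices/files (reads /proc/swaps)
swapon --show
# To add a 4 GiB swap file instead of a partition (run as root):
#   fallocate -l 4G /swapfile && chmod 600 /swapfile
#   mkswap /swapfile && swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```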
 
This is the problem: htop only shows me about 63% usage across the 3 virtual machines, but the total indicates 90%.

Code:
:~# free -h
              total        used        free      shared  buff/cache   available
Mem:            30G         27G        2.7G        246M        417M        2.5G
Swap:          2.0G        2.0G         31M

Code:
top - 10:22:45 up 1121 days, 22:28,  2 users,  load average: 3.65, 3.35, 3.10
Tasks: 213 total,   1 running, 212 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.8 us,  5.0 sy,  0.0 ni, 86.1 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 91.7/32483132 [                                                                                                    ]
KiB Swap: 98.4/2095096  [                                                                                                    ]

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7935 root      20   0 7358716 6.035g   7284 S  94.7 19.5  43:15.45 kvm
   65 root      25   5       0      0      0 S  20.6  0.0 269153:30 ksmd
29288 root      20   0 8760664 5.545g   5156 S   4.0 17.9 111853:47 kvm
24650 root      20   0 9459568 7.885g   5248 S   3.7 25.5 259:16.14 kvm
 8117 root      20   0       0      0      0 S   1.7  0.0   0:37.69 vhost-7935
    8 root      20   0       0      0      0 S   0.3  0.0 572:03.19 rcu_sched
  387 root      20   0  148512  73272  73036 S   0.3  0.2 228:22.69 systemd-journal
 1617 root      20   0  315132  29492   5932 S   0.3  0.1   3036:32 pvestatd
 8161 root      20   0       0      0      0 S   0.3  0.0   0:06.82 vhost-7935
10663 www-data  20   0  544380 106904  12952 S   0.3  0.3   0:01.44 pveproxy worker
17917 root      20   0  536676  39464   6480 S   0.3  0.1   0:06.95 pvedaemon worke
24695 root      20   0       0      0      0 S   0.3  0.0  12:16.18 vhost-24650
    1 root      20   0   57684   5536   4048 S   0.0  0.0  51:55.38 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:21.01 kthreadd
    4 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    6 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 mm_percpu_wq
    7 root      20   0       0      0      0 S   0.0  0.0   7:06.25 ksoftirqd/0
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.03 rcu_bh
   10 root      rt   0       0      0      0 S   0.0  0.0   0:46.54 migration/0
   11 root      rt   0       0      0      0 S   0.0  0.0   1:08.10 watchdog/0
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/0
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/1
   14 root      rt   0       0      0      0 S   0.0  0.0   1:10.33 watchdog/1
   15 root      rt   0       0      0      0 S   0.0  0.0   0:57.49 migration/1
 
Make the swap bigger or add more RAM. Swap being full means all memory is in use, and even the swap that serves as additional memory is exhausted.
You could try modifying the VM memory settings; I think it is the ballooning device that makes a VM request only the memory it actually uses, so it should release some RAM when the VM is not very active.
Try sorting the top output by the %MEM column to see the memory hogs.
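A one-off equivalent of sorting top/htop by the %MEM column, assuming a standard Linux procps ps:

```shell
# List the ten processes with the largest memory share, highest first
# (header line plus ten rows; roughly what top sorted by %MEM shows)
ps aux --sort=-%mem | head -n 11
```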
 
I had ballooning enabled on all the virtual machines.

TOP by memory:
Code:
top - 10:34:35 up 1121 days, 22:40,  2 users,  load average: 2.77, 2.83, 2.94
Tasks: 216 total,   1 running, 215 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.3 us,  3.5 sy,  0.0 ni, 89.0 id,  0.1 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 32483132 total,  2885284 free, 29167236 used,   430612 buff/cache
KiB Swap:  2095096 total,    32588 free,  2062508 used.  2704560 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
24650 root      20   0 9459568 7.885g   5248 S   4.0 25.5 259:46.21 kvm
7935 root      20   0 7358716 6.035g   7284 S  96.7 19.5  54:13.87 kvm
29288 root      20   0 8760664 5.545g   5156 S   4.3 17.9 111854:22 kvm
11372 www-data  20   0  544544 106960  12868 S   0.0  0.3   0:01.53 pveproxy worker
15671 www-data  20   0  544252 105948  12128 S   0.3  0.3   0:00.24 pveproxy worker
17622 www-data  20   0  535980 102356   9788 S   0.0  0.3   0:00.00 pveproxy worker
3506 www-data  20   0  533644  95080   2692 S   0.0  0.3   0:00.79 pveproxy
3537 www-data  20   0  531860  92968   2380 S   0.0  0.3   0:00.93 spiceproxy work
3536 www-data  20   0  529416  90352      0 S   0.0  0.3   0:00.74 spiceproxy
  387 root      20   0  148512  74592  74356 S   0.0  0.2 228:22.99 systemd-journal
8374 root      20   0  536712  42160  11228 S   0.0  0.1   0:01.62 pvedaemon worke
17917 root      20   0  536676  40356   6556 S   0.3  0.1   0:07.63 pvedaemon worke
1606 root      20   0  901208  33096  24400 S   0.3  0.1 637:39.14 pmxcfs
16881 root      20   0  536168  32992  10748 S   0.0  0.1   0:00.03 pvedaemon worke
1617 root      20   0  315132  29492   5932 S   0.3  0.1   3036:33 pvestatd
1618 root      20   0  316512  23976   4592 S   0.0  0.1 814:42.93 pve-firewall
1647 root      20   0  324344  23256   2856 S   0.0  0.1 118:19.48 pve-ha-lrm
1632 root      20   0  525920  14088   2628 S   0.0  0.0  10:34.51 pvedaemon
1355 bind      20   0  735944   9136      0 S   0.0  0.0 179:53.09 named
18300 root      20   0   92720   6456   5544 S   0.0  0.0   0:00.00 sshd
1638 root      20   0  324708   6200   4252 S   0.0  0.0  74:13.62 pve-ha-crm
18307 root      20   0   69948   5708   4960 S   0.0  0.0   0:00.00 sshd
18308 root      20   0   69948   5600   4852 S   0.0  0.0   0:00.00 sshd
    1 root      20   0   57684   5536   4048 S   0.0  0.0  51:55.41 systemd
1440 root      10 -10   25488   4912   3664 S   0.0  0.0   0:00.00 iscsid
16292 root      20   0   45092   3864   3040 R   0.0  0.0   0:00.97 top
4049 root      20   0   19912   3388   2792 S   0.0  0.0   0:00.57 bash
18309 sshd      20   0   69948   3320   2556 S   0.0  0.0   0:00.00 sshd
18301 sshd      20   0   69948   3304   2536 S   0.0  0.0   0:00.00 sshd
18314 sshd      20   0   69948   3292   2528 S   0.0  0.0   0:00.00 sshd
  790 root      20   0   37984   2984   2856 S   0.0  0.0   5:42.09 systemd-logind
4043 root      20   0   92860   2816   1880 S   0.0  0.0   0:00.01 sshd
1595 root      20   0  950260   2768    732 S   0.0  0.0 198:51.41 rrdcached
1386 root      20   0   69948   2756   2624 S   0.0  0.0  22:35.80 sshd
18313 root      20   0   33072   2624   2308 S   0.0  0.0   0:00.00 showmount
4035 root      20   0   93164   2460   1240 S   0.0  0.0   0:00.44 sshd
  801 root      20   0   19708   2376   2156 S   0.0  0.0  39:10.13 ksmtuned
  803 message+  20   0   45120   2248   1984 S   0.0  0.0  11:33.53 dbus-daemon
  413 root      20   0   46496   2120   1960 S   0.0  0.0   0:39.87 systemd-udevd
1616 root      20   0   29668   2008   1884 S   0.0  0.0   4:24.61 cron
7007 postfix   20   0   83232   1908   1116 S   0.0  0.0   0:00.00 pickup
29304 statd     20   0   35528   1700   1536 S   0.0  0.0   0:00.02 rpc.statd
4052 root      20   0   12700   1588   1416 S   0.0  0.0   0:00.00 sftp-server
16832 root      20   0   12700   1524   1520 D   0.0  0.0   0:04.07 sftp-server
  576 root      20   0   15768   1516   1380 S   0.0  0.0   0:08.19 mdadm
17882 root      20   0   19928   1460   1448 D   0.0  0.0   0:00.00 bash
  804 root      20   0   25152   1420   1224 S   0.0  0.0   0:16.66 smartd
1588 root      20   0   14540   1180   1180 S   0.0  0.0   0:00.04 agetty
  797 root      20   0   35908   1060    908 S   0.0  0.0  50:52.01 irqbalance
  805 root      20   0  250992   1028      0 S   0.0  0.0  58:51.04 rsyslogd
1578 root      20   0   81168    980    868 S   0.0  0.0   2:23.57 master
  792 root      20   0    4052    972    952 S   0.0  0.0  22:12.53 watchdog-mux
1393 root      20   0   41640    852    852 S   0.0  0.0   0:00.00 lxc-monitord
18283 root      20   0    5844    720    644 S   0.0  0.0   0:00.00 sleep
1581 postfix   20   0   83396    660    468 S   0.0  0.0   0:43.51 qmgr
25479 vnstat    20   0    7344    588    476 S   0.0  0.0  68:58.01 vnstatd
  683 root      20   0   49944    448    372 S   0.0  0.0   1:04.94 rpcbind
  681 systemd+  20   0  127288    440    388 S   0.0  0.0   0:50.32 systemd-timesyn
8760 root      20   0   89900    188      0 S   0.0  0.0   0:02.08 pvefw-logger
1439 root      20   0   24984     88      0 S   0.0  0.0  14:01.89 iscsid
    2 root      20   0       0      0      0 S   0.0  0.0   0:21.01 kthreadd
    4 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    6 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 mm_percpu_wq
    7 root      20   0       0      0      0 S   0.0  0.0   7:06.26 ksoftirqd/0
    8 root      20   0       0      0      0 S   0.0  0.0 572:03.58 rcu_sched
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.03 rcu_bh
   10 root      rt   0       0      0      0 S   0.0  0.0   0:46.54 migration/0
   11 root      rt   0       0      0      0 S   0.0  0.0   1:08.10 watchdog/0
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/0
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/1
   14 root      rt   0       0      0      0 S   0.0  0.0   1:10.33 watchdog/1
   15 root      rt   0       0      0      0 S   0.0  0.0   0:57.49 migration/1
   16 root      20   0       0      0      0 S   0.0  0.0   3:40.12 ksoftirqd/1
   18 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0H
   19 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/2
   20 root      rt   0       0      0      0 S   0.0  0.0   1:08.82 watchdog/2
 
I did a lot of Google searches before posting on the forum.

Code:
~# pveversion -v
proxmox-ve: 5.1-27 (running kernel: 4.13.8-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.8-1-pve: 4.13.8-27
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9

[screenshot: gkrGkmfnGr.png]
 
Don't forget the host needs memory as well.
Filesystems use caches.
What we can see is that you are running out of memory, likely due to caches, since that is not shown in htop.
Try to find out how to reduce filesystem caching, or regularly drop that cache, so it does not blow up like this.
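One blunt way to drop the kernel's filesystem caches on demand (run as root; it is non-destructive, and the kernel will simply refill the caches as needed, at some temporary performance cost):

```shell
# Flush dirty pages to disk first so nothing is lost
sync
# Drop page cache, dentries and inodes (3 = all of them)
echo 3 > /proc/sys/vm/drop_caches
```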
 
I'm afraid it will break.
Then shut down the virtual machines, make a backup dump of all of them, and copy the dump files offsite. Then reboot the host.

Consider updating your Proxmox, 5.1 is very old and not supported anymore.

If it breaks, you have the virtual machine dumps. You can install a new Proxmox and copy the dump files back.
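The backup step above can be sketched with Proxmox's vzdump tool; the VMIDs and dump directory here are examples, and --mode stop backs each guest up while it is shut down:

```shell
# Dump each VM with the guest stopped; adjust VMIDs and target directory
for vmid in 100 101 102; do
    vzdump "$vmid" --mode stop --compress lzo --dumpdir /var/lib/vz/dump
done
# Then copy the resulting vzdump-* files offsite, e.g. with scp or rsync
```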
 
I'm afraid it will break.
Systems need maintenance, and a reboot from time to time is maintenance and helps.
It is also necessary to get security patches onto your system.

Honestly: I am not going to waste time trying to chase down something that is simply caused by an unmaintained system. That is a hunt for a ghost.
 
