proxmox eating RAM

reajin

New Member
Feb 4, 2017
Hello. I have a problem with RAM. I set up Proxmox Virtual Environment 4.4-5/c43015a5 on a ZFS file system with 8 GB of RAM and installed 3 VMs, all Debian, each allocated 1.25 GB of RAM. Right after boot Proxmox uses about 6 GB in total, but then it gradually consumes all the memory (within about 2 hours it reaches 8 GB). I tried to investigate with htop, but nothing there looks suspicious, so I don't even understand what the problem could be.
 
Are you familiar with why Linux handles memory differently than Windows? If not, please read this: http://www.linuxatemyram.com/
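A quick way to see this on any Linux box is to look at MemAvailable rather than MemFree: memory used for caches is reclaimable and still counts as available, so a "full" RAM bar is usually harmless. A minimal, Proxmox-independent check:

```shell
# MemAvailable already accounts for reclaimable page cache; MemFree does not.
# Values are in kB. If these two numbers differ a lot, caches are doing their job.
awk '/^MemFree|^MemAvailable/ {print $1, $2, $3}' /proc/meminfo
```

If MemAvailable stays healthy while MemFree shrinks, nothing is actually "eating" your RAM.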

ZFS will - if it can - consume up to 4 GB of your RAM in its default configuration for caching purposes (the ARC), which is also normal. You can reduce ZFS's RAM usage, but I'd rather have a fast system with no unused RAM than a slow system with RAM that sits idle and is practically not used at all.
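If you do want to cap the ARC, the usual mechanism on ZFS-on-Linux is the `zfs_arc_max` module parameter, which takes a value in bytes. A sketch, assuming you want a 2 GiB cap (the 2 GiB figure is just an example value):

```shell
# Example: cap the ZFS ARC at 2 GiB. zfs_arc_max takes bytes.
ARC_MAX=$((2 * 1024 * 1024 * 1024))
echo "$ARC_MAX"    # 2147483648

# To persist it (run as root), set a modprobe option and rebuild the initramfs
# so the limit applies at boot:
#   echo "options zfs zfs_arc_max=$ARC_MAX" > /etc/modprobe.d/zfs.conf
#   update-initramfs -u
```

On a running system you can also write the value to /sys/module/zfs/parameters/zfs_arc_max, but the modprobe.d entry is what survives a reboot.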
 
Footnote on this thread, for what it is worth / in case it isn't obvious from the prior reply -- RAM requirements for the Proxmox host will be lower if you don't use the ZFS filesystem. My stock deployment is on vanilla ext3/ext4, and when I need a 'not officially supported software RAID' setup, I first do a clean minimal Debian Jessie install with SW RAID (respecting the LVM layout that Proxmox expects), and then add Proxmox on top after the fact.

For added IO performance, if desired, I can recommend 'bcache' with an SSD; it is both stable and fast.

i.e., for example, install Debian Jessie with a SW RAID config onto a bare server:
-- a pair of SSD drives in RAID;
-- a pair of SATA drives (bulk block storage for bcache) in RAID;

then reserve a chunk of the SSD RAID space as the cache for the bcache layer.

And it works rather nicely, despite being low-cost commodity hardware.
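The wiring-up step above might look something like this. This is only a sketch -- /dev/md0p1 and /dev/md1 are assumed names for the SSD cache partition and the SATA backing array, and since the commands are destructive, the script prints them instead of running them:

```shell
# DRY_RUN=echo makes the script print each command; clear it to actually run them as root.
DRY_RUN=echo

$DRY_RUN make-bcache -C /dev/md0p1   # SSD RAID partition becomes the cache set
$DRY_RUN make-bcache -B /dev/md1     # SATA RAID array becomes the backing device

# Attach the cache set to the backing device; the UUID comes from
# `bcache-super-show /dev/md0p1` (CACHE_SET_UUID is a placeholder here):
$DRY_RUN sh -c 'echo CACHE_SET_UUID > /sys/block/bcache0/bcache/attach'
```

The resulting /dev/bcache0 device is then what you'd hand to LVM/Proxmox as storage.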

Of course, maybe you have specific reasons for using ZFS, in which case I might advise "get more RAM", since RAM is cheap, really, at least up to the ~32 GB/host price point assuming commodity DIMMs and consumer-channel parts (i.e., 4 x 8 GB modules).

Tim
 
Thanks for answering. How bad is it that I'm using ZFS? Would it be better to rebuild the server and use ext4?
 
It depends on your situation and your hardware. I think ZFS is a really good solution for everything -- I use it for virtualisation, backup, desktop... but yes, you need RAM. If you do not need the capabilities of ZFS and your hardware is not big enough, then you are better off using ext4 with a hardware RAID controller.
 
I would agree with the above suggestion :-) -- if you are not going to upgrade RAM: back up your VMs, reinstall the environment without ZFS, use ext4, and restore your VMs. If you have hardware RAID, good; if not, either go without RAID or do an unsupported SW-RAID Jessie -> Proxmox install. No RAID is fine for a temporary dev/test environment IMHO, but otherwise it is not really suitable for production, or for 'serious' testing either (which has a habit of becoming production sometimes).
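The backup/reinstall/restore round trip above can be done with Proxmox's own tools. A sketch, where VM ID 100 and the /mnt/backup path are assumed placeholders, printed rather than executed since these commands only make sense on a PVE host:

```shell
DRY_RUN=echo    # clear this to actually run the commands as root on the PVE host

# Before the reinstall: dump each VM to storage that survives the reinstall.
$DRY_RUN vzdump 100 --mode stop --compress lzo --dumpdir /mnt/backup

# After the ext4 reinstall: restore the VM from the dump archive under the same ID.
$DRY_RUN qmrestore /mnt/backup/vzdump-qemu-100.vma.lzo 100
```

Repeat for each VM ID; the dumpdir obviously needs to live on a disk or share that is not wiped by the reinstall.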

Tim
 
Thank you all for your answers. I reinstalled Proxmox with ext4 and everything is fine now :-)
 