Proxmox storage config, pve-root partition, SWAP, SSDs

irk (Member, joined Feb 19, 2022)
Hello community,
I've been playing around with and testing Proxmox for the last two months, and one question came up recently. I searched the forum for exact answers but only found general ones, so I'm writing this post, hoping some of you will share information from your experience.

So I have a Dell server with 32GB RAM and an H730 hardware RAID controller with 2 x 240GB SSDs in RAID-1. The standard Proxmox installation creates a 56GB pve-root partition, which is almost empty and leaves less space for VMs.

So my question here is: does Proxmox really need such a big partition? Does it create it for swap because of the 32GB of RAM I have, or does it need it for logging in the future? Can I reduce it to, let's say, 20GB, as it currently uses less than 3GB (this is a fresh installation)?

One more question if I may, regarding storage in general and in terms of performance: do you recommend having one RAID array just for Proxmox and another RAID array for the VMs? Will it matter for read/write performance compared to a single RAID array where both Proxmox and the VMs are placed?

Thank you all in advance!

Regards!
Irk
 
Hi,

I disabled swap because it also causes a lot of writes to my SSDs, but my servers have >200GB RAM, and for me it's working. With 32GB it could be better to use swap if your VMs will use a lot of that RAM. Earlier, when I used servers with 32GB RAM, I also had swap disabled, but I kept an eye on how much RAM the VMs were using. With low RAM and no swap you can quickly run into processes getting killed.
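For anyone who wants to try the same, a minimal sketch of disabling swap on the host; the fstab edit is the standard approach, but check your fstab before and after, as the exact entries depend on your installation:

```shell
# Disable all active swap immediately (requires root).
swapoff -a
# Comment out swap entries in /etc/fstab so it stays off after reboot
# (writes a .bak backup of the original first).
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# Verify: the Swap line should show 0B total.
free -h
```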

Root size also matters for temp files, e.g. when uploading files or creating backups, so setting it too small can cause problems. I had problems with <10GB; at the moment I use 20GB.
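To see how much of pve-root is actually in use before deciding to shrink it, a quick read-only check (the paths are the Proxmox defaults for logs and ISO/template storage):

```shell
# Overall usage of the root filesystem.
df -h /
# The usual space consumers on pve-root: logs and ISO/template storage.
du -sh /var/log /var/lib/vz 2>/dev/null || true
# LVM view of the root, swap and data volumes (if LVM tools are present).
command -v lvs >/dev/null && lvs || true
```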

Separate RAID arrays always bring more performance if they also use separate disks. Proxmox writes a lot of logs. I use RAID-1 with small 60GB SSDs for Proxmox itself, RAID-10 with bigger SSDs for VMs, and RAID-50 on slow SATA drives for bulk data. The hardware you use also matters: the hardware RAID controller must support FastPath (or something similar) for SSDs, otherwise you will get too few IOPS and software RAID could actually be faster. I don't know your controller.
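If you want to measure whether a separate array actually helps, a hedged fio sketch; run it against a file on each array and compare the reported IOPS (the mount point is a placeholder, and fio must be installed, e.g. via apt install fio):

```shell
# 4k random-write test for 30 seconds, direct I/O to bypass the page cache.
# /mnt/array1 is a placeholder; point --filename at the array under test.
fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --size=1G --runtime=30 --time_based \
    --filename=/mnt/array1/fio-testfile
```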
 
Hi,

In my opinion it is always useful to have some GBs of swap. If you don't want it to be used regularly, you can set the swappiness to 0 or 1. In that case Linux won't use the swap in normal operation (so your SSD won't see much additional wear) and will only use it in emergencies, so processes don't have to be killed to free up RAM. With 32GB RAM, 2 or 4GB of swap should be fine. I've got 64GB of swap, and with a swappiness of 1 the swap usage is below 10MB 99.99% of the time.
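Setting that up is only a couple of lines; a sketch (the sysctl.d filename is just a convention, and the commands need root):

```shell
# Apply the new swappiness immediately.
sysctl vm.swappiness=1
# Persist the setting across reboots.
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
# Verify the running value.
sysctl vm.swappiness
```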
Root size is also important for temp files like uploading files or creating backups. So setting this to less could make problems. I had problems with <10 GB. At the moment I use 20GB.
I wouldn't decrease it below 16GB. I'm using 20GB too. That space is also used for ISOs and LXC templates, and your logs can grow to several GBs.
 
Do you have VMs only? With LXC there are logs in a ramdisk which can grow and eat lots of RAM (https://forum.proxmox.com/threads/l...mory-leaks-killing-processes-wtf.26983/page-2). That was the case for me; I had to limit the size inside the LXC container itself. And I don't know if I'm right, but I read something about swappiness not working as expected in the newer versions, though perhaps only for some users.
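For reference, limiting such a ramdisk from inside the container usually means capping the tmpfs mount size; a hedged example /etc/fstab line (the mount point and size are assumptions, adjust them to where the logs actually land):

```
# Cap /run inside the container at 128M so runaway logs can't eat host RAM.
tmpfs /run tmpfs size=128M,nosuid,nodev 0 0
```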
 
