Sizing calculator

Lucky Man

New Member
Sep 23, 2025
Hi All,

We are trying to migrate from VMware. We have RVTools exports for around 9 servers, but I am not sure if there is any sizing tool to calculate how many servers we would need in Proxmox, or any best practice for this calculation.
 
There isn’t a single answer that fits all environments, or even a set of best practices you can apply formulaically. The sizing really comes down to:
  • The number of physical CPUs you have today
  • The number of virtual cores you’ve provisioned
  • Your overprovisioning ratio and actual utilization
  • RAM and disk usage
  • Some allowance for growth for all of the above
  • Your failover requirements (i.e., how many servers you can afford to lose before service degradation)
  • How (or whether) your existing storage integrates with PVE
Engaging with a seasoned PVE partner or architect might be one way forward.
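If you want a rough starting point from those RVTools exports, a back-of-the-envelope pass like the sketch below can produce a first-guess node count. The column names ("CPUs", "Memory") are what a typical RVTools vInfo CSV uses and may differ in your version, and the overcommit ratio, per-node specs, growth allowance, and N+1 spare are all illustrative assumptions, not recommendations:

Code:
import csv
import math

# Illustrative assumptions -- tune these to your environment:
VCPU_PER_PCORE = 4     # target vCPU:pCPU overcommit ratio
NODE_CORES = 32        # physical cores per planned PVE node
NODE_RAM_GB = 512      # RAM per planned PVE node
HEADROOM = 1.25        # 25% growth allowance
FAILOVER_SPARES = 1    # N+1: survive one node failure

total_vcpus = 0
total_ram_gb = 0.0

# RVTools vInfo export; the "CPUs" / "Memory" (MiB) column names may
# differ between RVTools versions -- check your CSV header.
with open("rvtools_vinfo.csv", newline="") as f:
    for row in csv.DictReader(f):
        total_vcpus += int(row["CPUs"])
        total_ram_gb += int(row["Memory"]) / 1024  # MiB -> GiB

nodes_for_cpu = math.ceil(total_vcpus * HEADROOM / (NODE_CORES * VCPU_PER_PCORE))
nodes_for_ram = math.ceil(total_ram_gb * HEADROOM / NODE_RAM_GB)

nodes = max(nodes_for_cpu, nodes_for_ram) + FAILOVER_SPARES
print(f"{total_vcpus} vCPUs, {total_ram_gb:.0f} GiB RAM provisioned")
print(f"Suggested node count (incl. {FAILOVER_SPARES} spare): {nodes}")

This only covers provisioned CPU and RAM; actual utilization, storage capacity, and IOPS deserve the same treatment.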

Good luck


Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks, may I ask: if we have 3 hosts using a SAN, should I use ZFS? In the documentation I saw, 1 GB of RAM per TB of data is recommended.

So assume my SAN has 80 TB and the 3 Proxmox hosts connect to it using FC. Would each node then need 80 GB of RAM for ZFS?

Thanks
 
That rule is long outdated and was not very good to begin with.
Just like the old, equally questionable rule of "don't fill your pool over 80%".

My problem with both rules is that they are way too general and ignore the actual use case. For some use cases the rules are way too strict, and for others they are way too lax. That is why both rules are pretty much useless IMHO.

The only hard rule you have to follow is having at least 16 GB of RAM.
But the more RAM you have, the more RAM you can use for the ARC, which in turn leads to faster read performance.
So having 512 GB of RAM instead of 16 GB would of course help performance-wise. But there is nothing stopping you from using only 16 GB.
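If you do want to keep the ARC from competing with RAM you have earmarked for VMs, you can cap it with the zfs_arc_max module parameter. A minimal sketch below; zfs_arc_max is a standard OpenZFS parameter (in bytes), but the 16 GiB cap is only an example value:

Code:
# Sketch: compute a zfs_arc_max value and print the matching
# /etc/modprobe.d/zfs.conf line. The 16 GiB cap is only an example.
ARC_MAX_GIB = 16
print(f"options zfs zfs_arc_max={ARC_MAX_GIB * 1024**3}")
# -> options zfs zfs_arc_max=17179869184
# After writing this line to /etc/modprobe.d/zfs.conf, rebuild the
# initramfs (update-initramfs -u) and reboot for it to take effect.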
 
Engaging with a seasoned PVE partner or architect might be one way forward.
Based on the questions you are asking, and since you have not described a clear performance goal, I would also recommend a partner.
Alternatively, get a system that is on the safe side, meaning at least the amount of RAM you have now and the CPU performance you have now, combined with three-way SSD mirrors as your ZFS storage pool. That way you are probably not looking at a performance downgrade (except maybe for sync write performance).
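To put rough numbers on the mirror suggestion: three-way mirrors give you one third of raw capacity. A quick illustration, with every figure made up:

Code:
import math

# Made-up example: SSD count for three-way ZFS mirrors. Each vdev's
# usable capacity is one SSD's worth; all values here are illustrative.
TARGET_USABLE_TB = 80
SSD_SIZE_TB = 7.68

vdevs = math.ceil(TARGET_USABLE_TB / SSD_SIZE_TB)
print(f"{vdevs} three-way mirror vdevs = {vdevs * 3} x {SSD_SIZE_TB} TB SSDs "
      f"(~{vdevs * SSD_SIZE_TB:.0f} TB usable)")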
 
Hi bbgeek17, the linked page mentions that ZFS over iSCSI supports shared storage. I thought that meant a cluster-aware filesystem. I am just worried whether it can work over Fibre Channel, or whether it has to use iSCSI.
"ZFS over iSCSI" is a special scheme that involves root SSH access into the storage appliance, requires the storage appliance to run ZFS internally, and directly manipulates said ZFS. And, of course, exporting the resulting ZFS volumes via iSCSI.

You did not specify what SAN appliance you are using. I am guessing its not one that adheres to above requirements. If I am correct, then ZFS/iSCSI is not for you.
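For reference, a ZFS over iSCSI storage definition in /etc/pve/storage.cfg has roughly the shape sketched below. The field names follow the PVE storage documentation; the portal, pool, and target values are placeholders, and iscsiprovider must match what your appliance actually runs (comstar, istgt, iet, or LIO):

Code:
# Sketch: the shape of a "ZFS over iSCSI" stanza in /etc/pve/storage.cfg.
# All values are placeholders; only the field names follow the PVE docs.
stanza = """\
zfs: my-zfs-san
    portal 192.0.2.10
    pool tank
    target iqn.2003-01.org.example:storage
    iscsiprovider LIO
    content images
    sparse 1
"""
print(stanza)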

You may want to read these articles that deal with most common block storage SANs:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage
https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lv
https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage

Cheers

Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks, we are using Hitachi storage with FC and 6 ESX servers, so my target is that even with 1 host down, VMs should automatically fail over to another node.
I built a test lab with VMs on top of ESX and could do that using ZFS HA/CIFS, but without testing a SAN connection.

From https://pve.proxmox.com/wiki/Storage , NFS/CIFS does not need a SAN.
It says LVM can be shared, but it seems only 1 host can access a LUN at a time, so if I have 6 servers, I need 6 LUNs. That is still OK; I just would like to understand the behavior.