About Proxmox VE & PBS features

zohaibqureshi

New Member
Oct 15, 2025
Hi everyone,
I’m looking into using Proxmox for a deployment and had a few questions about its capabilities. I’d really appreciate it if anyone could share their experience or insights:
  1. Scaling: Can Proxmox handle a cluster with 128 nodes or more? Are there any practical limitations or tips for large-scale setups?
  2. Hot-Swap Disks: Does Proxmox have a disk replacement wizard with a graphical interface that makes hot-swapping drives easy and reliable? How well does it work in real-world scenarios?
  3. Storage & Backups:
    • Can I set up internal storage, SAN, or NAS as a backup repository for VMs?
    • Does it support common storage protocols like iSCSI, FC, or NFS?
  4. Architecture & Hyper-Converged Support:
    • Can Proxmox run on both x86 and ARM servers?
    • Is it possible to mix x86 and ARM nodes in the same cluster for unified management?
Any tips, advice, or experiences would be super helpful.
Thank You
 
Hi zohaibqureshi, I'll try to answer your questions as best I can.
1. Scaling: You can refer to https://forum.proxmox.com/threads/t...ed-by-the-pve-host-cluster.123507/post-537308; without tuning, a cluster scales to roughly 25-50 nodes.
2. Hot-Swap Disks: If you just want a simple way to hot-swap drives, choose ext4 during installation and use your server's HW RAID; that will let you do it. For ZFS or Ceph, you will need to replace a failed disk via the CLI (see the sketch after this list).
3. Storage & Backups: The simple answer is yes. See https://pve.proxmox.com/wiki/Storage for the details.
4. Architecture & Hyper-Converged Support: To my knowledge, Proxmox VE currently doesn't officially support ARM, so I think the answer is no.
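For illustration, a CLI disk replacement typically looks roughly like this (just a rough sketch; the pool name "rpool", the OSD number, and the device paths are placeholders for your own setup):

  # ZFS: identify the failed disk, then replace it in place and watch the resilver
  zpool status rpool
  zpool replace rpool /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK
  zpool status rpool

  # Ceph: take the failed OSD out, purge it once it is down and the cluster has rebalanced,
  # then create a new OSD on the replacement disk
  ceph osd out osd.3
  ceph osd purge 3 --yes-i-really-mean-it
  pveceph osd create /dev/sdX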
 
Thanks, @david_tao, for your response to my queries.
 
  1. Scaling: Can Proxmox handle a cluster with 128 nodes or more? Are there any practical limitations or tips for large-scale setups?

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
2. Hot-Swap Disks: If you just want a simple way to hot-swap drives, choose ext4 during installation and use your server's HW RAID; that will let you do it. For ZFS or Ceph, you will need to replace a failed disk via the CLI.
One important thing to be aware of: since Ceph and ZFS are both basically a kind of software RAID (Ceph is actually even more than that, a distributed network storage with SW RAID features), they don't play nicely with HW RAID. So if you decide you want to test or use them (they have features that HW RAID lacks), you need to change the mode of your HW RAID controller to disable the RAID.
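If you are not sure whether the controller is really passing the physical disks through, a quick sanity check (assuming smartmontools is installed) is to verify that the OS sees the real drive models and serial numbers instead of the controller's virtual disks:

  lsblk -o NAME,MODEL,SERIAL,SIZE,TYPE
  smartctl -i /dev/sda        # should report the physical drive, not a RAID volume
  ls -l /dev/disk/by-id/      # stable per-disk IDs, which ZFS and Ceph prefer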
 
  2. Hot-Swap Disks: Does Proxmox have a disk replacement wizard with a graphical interface that makes hot-swapping drives easy and reliable? How well does it work in real-world scenarios?
No. That functionality depends on your hardware, and might need some intelligence/tooling built by the operator. PVE isn't a storage device.
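As a rough example of the kind of operator tooling meant here (assuming the ledmon package and a backplane with locate LEDs; the device paths and host number are placeholders):

  # blink the locate LED so you pull the right drive
  ledctl locate=/dev/disk/by-id/ata-FAILED_DISK
  # after inserting the new drive, rescan the SCSI bus if it doesn't show up on its own
  echo "- - -" > /sys/class/scsi_host/host0/scan
  ledctl locate_off=/dev/disk/by-id/ata-NEW_DISK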

Can I set up internal storage, SAN, or NAS as a backup repository for VMs?
Yes. As a matter of fact, ANY file store can be used for backup (a SAN would need a local resource to map a filesystem onto it).

    • Does it support common storage protocols like iSCSI, FC, or NFS?

Yes. FC is a little more fraught than the others, simply because it is much less used in 2025 than they are.
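As a concrete sketch (the storage IDs, addresses, export path and IQN below are made up), both NFS and iSCSI can be attached from the CLI as well as from the GUI:

  # NAS/NFS share as a backup target
  pvesm add nfs backup-nas --server 192.168.1.50 --export /export/pve-backups --content backup
  # SAN/iSCSI LUN (block storage; you would typically layer LVM on top for VM disks)
  pvesm add iscsi san1 --portal 192.168.1.60 --target iqn.2025-01.com.example:storage.lun1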

Architecture & Hyper-Converged Support:
No.
 
One important thing to be aware of: since Ceph and ZFS are both basically a kind of software RAID (Ceph is actually even more than that, a distributed network storage with SW RAID features), they don't play nicely with HW RAID. So if you decide you want to test or use them (they have features that HW RAID lacks), you need to change the mode of your HW RAID controller to disable the RAID.
Thanks @johannes, I forgot to mention one thing: normally HW RAID and non-RAID (passthrough) disks cannot coexist on the same RAID card, but Dell PERC is an exception! ^_^ I used that feature to protect the boot volume with RAID1 and created ZFS on the other non-RAID disks.
 
Thanks @johannes, I forgot to mention one thing: normally HW RAID and non-RAID (passthrough) disks cannot coexist on the same RAID card, but Dell PERC is an exception! ^_^ I used that feature to protect the boot volume with RAID1 and created ZFS on the other non-RAID disks.
That's interesting, do you have a reference? Up to now I had always assumed that Dell PERC behaves the same as other HW RAID controllers. I tried to Google a confirmation for your hint but couldn't find anything.
 
I use a Dell PERC H730P in a Proxmox Backup Server (Dell R530). I deleted the virtual disks before converting it to HBA-mode. It's also running the latest firmware from Dell. Proxmox uses the megaraid_sas driver.

As for the other Dells in production, I swapped out the PERCs for Dell HBA330s (with the latest firmware), which is a true IT/HBA-mode storage controller. Proxmox uses the much simpler mpt3sas driver. This controller was originally meant for VMware vSAN but obviously works fine under Proxmox. It uses the LSI 3008 chipset. Since it's not a RAID controller, you can't create virtual disks with it. No issues with ZFS & Ceph.
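For anyone who wants to check which controller/driver a node actually ended up with, something like this works (generic commands, nothing Proxmox-specific):

  lspci -nnk | grep -iA3 'raid\|sas'     # controller model plus the kernel driver in use
  lsmod | grep -E 'mpt3sas|megaraid_sas'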
 
The OG PERCs did. Ever since 13G, all PERCs, including the H7xx and H8xx series, offer naked passthrough mode, so anything with 12 Gb/s SAS or newer.
OK, then I misunderstood @david_tao; I thought he meant that one could combine HW RAID and ZFS on PERCs:

I forgot to mention one thing: normally HW RAID and non-RAID (passthrough) disks cannot coexist on the same RAID card, but Dell PERC is an exception! ^_^ I used that feature to protect the boot volume with RAID1 and created ZFS on the other non-RAID disks.

I already knew that modern controllers can switch between HBA and RAID mode.
 
Maybe the root cause is that the megaraid_sas driver is not able to use a "mixed-mode" configuration (RAID and HBA [passthrough] mode at the same time). No idea, since I don't use mixed mode.

I still stand by my recommendation to get a used Dell HBA330 controller. They are cheap; I got one for $15. From there, choose any number of drives for any ZFS RAID configuration and/or Ceph use.
 