Question about drive distribution for CEPH and HA.

rsr911

Member
Nov 16, 2023
In my first thread I had three Lenovo RD440 servers, two identical with 8 SSDs each. The third has 5 HDDs in RAID 5 with a hot spare and a few SSDs. It's the same otherwise but one step down on the CPUs. I decided to buy a fourth machine that matches the first two and do a true 3-node cluster. But after a lot of reading and videos this weekend I have more questions. Here are the options I'm thinking of:

1) Three identical servers, 8 TB of SSDs at 1 TB per drive, as a 3-node cluster. Fourth server to run HDDs for backups.

2) Go to four nodes. Put 6 SSDs in each as one pool for data. Put one HDD in each for use as a backup data pool. Use erasure coding etc. to give me (what I believe) will be something like hardware RAID 5.

Which option makes the most sense? I am leaning toward option 2, as it should give more usable space and server-level fault tolerance for my backup data. I would only have three monitors for quorum.

If option 2 makes the most sense, how do I set that up in terms of replicas and fault domains? Is it 2/4? I mean, how do I set the drives up as parity instead of mirrors?

Next, if I go with option 2, does it make sense to use a fifth low-power machine in another location to make the quorum odd? I have two buildings and will have two servers in each, meaning rack failure could be a problem. But if I've got a quorum machine on its own circuit and UPS, connected to the management network, does that give me the advantage I think it does?
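For reference, Proxmox VE supports exactly this kind of tie-breaker via an external QDevice, which only casts a quorum vote and runs no VMs. A minimal sketch, assuming the fifth machine runs Debian and sits at the placeholder address 192.168.1.150:

```shell
# On the small quorum machine (Debian/Ubuntu): install the QNet daemon
apt install corosync-qnetd

# On every cluster node: install the QDevice client
apt install corosync-qdevice

# From any one cluster node: register the external vote
# (192.168.1.150 is a placeholder for the quorum machine's address)
pvecm qdevice setup 192.168.1.150

# Verify that the extra vote is counted
pvecm status
```

The QDevice only needs reachability on the management network, so a low-power box on its own UPS works fine for this role.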
 
Hello rsr911,

1) I would go for a 3-node setup, maybe even with a full-mesh network for Ceph (what network cards do you have for Ceph?). I would use the fourth server for a Proxmox Backup Server.

Next, if I go with option 2, does it make sense to use a fifth low-power machine in another location to make the quorum odd? I have two buildings and will have two servers in each, meaning rack failure could be a problem. But if I've got a quorum machine on its own circuit and UPS, connected to the management network, does that give me the advantage I think it does?

If you have two buildings and you want to do Ceph replication across rooms/buildings, you need 4:2, i.e. 4 servers, and LOW latency between those two rooms (do you have any measurements?). If it's too high you won't have fun with it. And yes, going with 4:2 location-based replication you need a 5th node to keep quorum even when one room/location fails.

You should take care that latency is as low as possible. How far apart are those two rooms?
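For the record, spreading 4 replicas over two rooms is done with room buckets and a custom CRUSH rule. A sketch under the assumption of four hypothetical hosts node1..node4 and a pool named mypool:

```shell
# Place each host in a room bucket (node1..node4 and room names are placeholders)
ceph osd crush add-bucket room1 room
ceph osd crush add-bucket room2 room
ceph osd crush move room1 root=default
ceph osd crush move room2 root=default
ceph osd crush move node1 room=room1
ceph osd crush move node2 room=room1
ceph osd crush move node3 room=room2
ceph osd crush move node4 room=room2

# A custom CRUSH rule then picks 2 rooms and 2 hosts per room, e.g.:
#   step take default
#   step choose firstn 2 type room
#   step chooseleaf firstn 2 type host
# (edit via: ceph osd getcrushmap | crushtool -d - -o map.txt, recompile with crushtool -c)

# Finally run the pool with 4 replicas, tolerating the loss of one room
ceph osd pool set mypool crush_rule replicated_rooms
ceph osd pool set mypool size 4
ceph osd pool set mypool min_size 2
```

The min_size of 2 is what lets I/O continue when one entire room is down.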
 
Hello rsr911,

1) I would go for a 3-node setup, maybe even with a full-mesh network for Ceph (what network cards do you have for Ceph?). I would use the fourth server for a Proxmox Backup Server.



If you have two buildings and you want to do Ceph replication across rooms/buildings, you need 4:2, i.e. 4 servers, and LOW latency between those two rooms (do you have any measurements?). If it's too high you won't have fun with it. And yes, going with 4:2 location-based replication you need a 5th node to keep quorum even when one room/location fails.

You should take care that latency is as low as possible. How far apart are those two rooms?
Distance is 145 feet of OM fiber. I'm running Intel 10G SFP+ dual-port cards connected to an 8-port L2/L3 switch in building one. The management side of the network is a 10G switch with 4 SFP+ ports and 16 RJ45 ports. Everything is wired with Cat6a except for some legacy equipment connected to a 1G switch in building one. These switches are also L2/L3 smart switches. Currently the 10G switches are connected with 2 fiber connections. All office PCs are connected at 10G over Cat6a in each building. I'll do the same for "terminals" at production equipment; 1G is plenty there, but it's all wired for 10G.

So I suppose my main question is: if I go with 4 servers plus a quorum machine, do I gain anything, such as more usable space? The quorum machine was my witness node for vSphere. It's a Xeon workstation of the same generation as my servers. I am thinking with four plus a quorum I would gain space if I used erasure coding. Or am I just complicating things for no real gain?

What is the advantage of three nodes plus a backup server vs four nodes with backup server as a VM?

Latency shouldn't be an issue but I can run iperf to be certain.
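Note that iperf measures throughput, while Ceph mostly cares about round-trip latency. A quick check over the Ceph link might look like this (the far node's address is a placeholder):

```shell
# Throughput: run "iperf3 -s" on the far node first, then:
iperf3 -c 192.168.0.142 -t 30

# Latency: sustained small-packet round trips; the summary line shows
# min/avg/max RTT, which ideally stays well under a millisecond on 10G
ping -c 100 -i 0.2 192.168.0.142 | tail -n 2
```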

I'm a newbie, but I'm thinking that with backups running in a VM and one HDD per server for backups, I would also have HA for the backup solution. But I have not yet investigated Proxmox Backup. However, here I definitely don't want three copies of the data. I'd want the HDDs to function like a RAID 5 array. The HDDs could be a RAID-Z ZFS pool, I suppose. The VM boot disk could be on the SSD Ceph pool.

LnxBil, yes it will be Ceph and HA. Sorry, I should have made that clearer.
 
What is the advantage of three nodes plus a backup server vs four nodes with backup server as a VM?
You should not run your backup server ON the hardware that may need a restore. Remember the 3-2-1 rule of backups.

LnxBil, yes it will be Ceph and HA. Sorry, I should have made that clearer.
Okay, so you will have an SSD Ceph pool and an HDD Ceph pool for the backups? Do I interpret that correctly?
 
You should not run your backup server ON the hardware that may need a restore. Remember the 3-2-1 rule of backups.


Okay, so you will have an SSD Ceph pool and an HDD Ceph pool for the backups? Do I interpret that correctly?
I fixed the title of the thread to include Ceph and HA.

I have nothing set up yet; this is the planning stage. I will have four dual-CPU Lenovo 8-bay servers with 128 GB RAM and 32 logical processors each. I have all 10G cards, SFP+ and RJ45. The SFP+ cards are there to connect the back end together and to connect to the switches. I have one smart switch in each building, mixed SFP+ and RJ45. I have a dedicated 8-port SFP+ 10G smart switch for the back end.

I forgot about the 3-2-1 rule, so it makes sense to keep it as a three-node cluster and use the fourth server with its RAID card for the HDD backup array.

So now the question is this: say I have two nodes in building two and one node in building one, plus the backup server in building one. Is there any need to add a quorum machine? I mean, what happens if power is lost in building two and two servers are down? If I do that, don't I need the backup server to also be part of the quorum for an odd number? Or am I OK with just the three-node quorum?
 
Let me explain my layout and usage better.

Building one is the office building with the most users doing the most PC work.

Building two is the production building where primary data flow to the HA CEPH cluster will just be data entry into a database. Very little office work happens in building two.

Building one has the most secure server room. But I will be partitioning the room in building two so the servers are in a closet around the rack.

To that end I am thinking node one and backup server in the office building and nodes two and three in production.

For the public side, both buildings have a 10G mixed switch that is smart and multigig. These switches are connected together with a pair of fiber connections. The Ceph back end will also be connected with fiber on a dedicated smart switch to isolate Ceph traffic.

Right now the server rack in building two is just in the corner of our QC lab. I plan to wall around it and install a cooling fan, a dust filter, and double doors for easy access, but locked. In building one the rack is in the basement in a locked, cool, and dry room. I monitor temperature and humidity there.

I don't know if that extra info helps or not.
 
I am thinking with four plus a quorum I would gain space if I used erasure coding. Or I'm just complicating things for no real gain.
You should start with K3/M2, but you need at least 5 active storage nodes. If you want to distribute this redundantly over two buildings, that is already 10 servers + 3 monitors, each of which should also be in its own fire compartment. From my point of view, two locations are equivalent to one, because you will never reach a meaningful majority if the connection between the buildings is lost.

From my experience, however, even EC with 5 nodes is very far from triple replication in terms of performance. You'll also need a lot more processing power to be able to redistribute the chunks. From my point of view, it makes sense to start with 11 storage nodes and do K8/M3, which also results in a very high efficiency of 72.73% usable storage space.
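The usable-space fraction of an erasure-coded pool is simply K/(K+M); a quick check of the profiles mentioned above:

```shell
# Usable fraction of an EC profile is K/(K+M); compare K3/M2 vs K8/M3
for profile in "3 2" "8 3"; do
    set -- $profile
    awk -v k=$1 -v m=$2 'BEGIN { printf "K%d/M%d: %.2f%% usable\n", k, m, 100*k/(k+m) }'
done
# K3/M2: 60.00% usable
# K8/M3: 72.73% usable
```

For comparison, 3-way replication keeps only 33.33% usable, which is why EC is attractive for capacity despite the performance cost.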
 
You should start with K3/M2, but you need at least 5 active storage nodes. If you want to distribute this redundantly over two buildings, that is already 10 servers + 3 monitors, each of which should also be in its own fire compartment. From my point of view, two locations are equivalent to one, because you will never reach a meaningful majority if the connection between the buildings is lost.

From my experience, however, even EC with 5 nodes is very far from triple replication in terms of performance. You'll also need a lot more processing power to be able to redistribute the chunks. From my point of view, it makes sense to start with 11 storage nodes and do K8/M3, which also results in a very high efficiency of 72.73% usable storage space.
Do NOT mention the word "fire"!! :) We actually HAD a fire in early 2021. Our main warehouse burned. Thankfully I did have a single server in a fire container, and while it didn't see any fire, it did need a thorough professional cleaning with dry-ice blasting, which is kind of neat to watch.

Having said that, fire IS a reason for wanting a cluster. Previously I had ESXi on two hosts, each running a Windows Server VM as primary and secondary domain controllers, and a Linux VM running Amanda backing up to Buffalo TeraStations. We had two other TeraStations for data, one in each building, doing replication. It worked OK, but everything was on a 1G connection.

Given that 8 TB of storage is far more than enough for now, do you think my three-node setup will work OK? I assume you're adding nodes for speed and space? I'd be OK without EC if it eats up that many resources, at least for the time being. I suppose as I grow I can just add servers to my racks if need be.

Not sure what you mean that there are already 10 servers; I only have 4. Are you saying I need 6-7 more to do this right? Or can I get by with 3 nodes plus a backup server?

I never did ask if Proxmox ties into Smart-UPS systems to let a cluster know servers will be shut down, but that would be interesting if it did.
 
Or can I get by with 3 nodes plus a backup server?
From a base design and redundancy standpoint that's definitely fine; we know many such setups.

Still, depending on the use case more servers can help, or even be necessary, but mostly for getting even more redundancy (e.g., when doing maintenance) or scaling out.

8 TB of raw storage is not that much, and if you do not plan to have many (compute hungry) virtual guests, then going full hyper-converged can be totally fine too.
 
Do NOT mention the word "fire"!! :) We actually HAD a fire in early 2021. Our main warehouse burned. Thankfully I did have a single server in a fire container, and while it didn't see any fire, it did need a thorough professional cleaning with dry-ice blasting, which is kind of neat to watch.
Ouch, hopefully you were able to remember the new emergency number "0118 999 881 999 119 725 3" in time. :eek:

Not sure what you mean that there are already 10 servers; I only have 4. Are you saying I need 6-7 more to do this right? Or can I get by with 3 nodes plus a backup server?
You said you wanted to use EC. If you want to use EC seriously and safely, you have to choose at least K3/M2, so you need at least 5 storage nodes with identical equipment. However, I can also tell you from experience that you will not be satisfied with the performance of K3/M2.

I already tested this in October 2021 and came up with the following values (tested on 5x Dell PowerEdge R620, 2x E5-2630 v2, 384 GB DDR3, 2x 10 GbE, 6x PM883 per node):
EC read/write IOPS: 8776 / 2933
EC read/write MB/s: 35.1 / 11.7

RBD replica read/write IOPS: 21675 / 7243
RBD replica read/write MB/s: 86.7 / 28.9
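For reference, numbers like these are typically produced with fio's RBD engine against a test image. A hedged sketch (the pool name "bench" and image name "fio-test" are placeholders, and the image must be created beforehand):

```shell
# 4k random-write benchmark directly against an RBD image
# (requires fio built with rbd support and a pre-created test image)
fio --name=rbd-bench --ioengine=rbd --clientname=admin \
    --pool=bench --rbdname=fio-test \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based
```

Running the same job with --rw=randread, and against both an EC-backed and a replicated pool, gives directly comparable IOPS figures.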

So if you're really serious about using the two fire compartments, you need to make sure that you can lose a fire compartment at any time without losing data. The other question would then be whether you want to maintain operation in the event of a failure, or whether it is only important that the data remains intact. If you also want to ensure availability when a fire compartment fails, you have to think about how to place the 3 Ceph MONs so that quorum is guaranteed at all times. This usually doesn't work with only two fire compartments; you always have to put the majority in one compartment. So there is a 50% chance that the fire compartment that currently has 2 out of 3 MONs/PVE servers is the one that fails.

A possibly better solution would be to keep the production data and backups strictly separate. For example, you can set up replication in Ceph and replicate from the productive Ceph to a single-node Ceph. If you also use PBS and mirror the PBS into the other compartment, you can restore the systems, or simply restart them with a little effort.
In addition to the Ceph approach, you could also make a cluster of, say, 3 nodes, where the third node is a ZFS replication target of the other two. You just need to have the capacity of the first two nodes available in the third. If you want to be able to start everything in the event of a failure, it will of course also need CPU/RAM.
 
From a base design and redundancy standpoint that's definitely fine; we know many such setups.

Still, depending on the use case more servers can help, or even be necessary, but mostly for getting even more redundancy (e.g., when doing maintenance) or scaling out.

8 TB of raw storage is not that much, and if you do not plan to have many (compute hungry) virtual guests, then going full hyper-converged can be totally fine too.
I have 8 TB per server and, depending on configuration, up to 32 TB in the backup server.

Very few media files. The "meat and potatoes" of our data is less than a TB, mostly Word docs and Excel sheets. At most I'll have three VMs: two Windows domain controllers and, in time, an SQL server.

Right now I just have one Windows DC with a VMDK data drive attached as NTFS, running on a hopped-up workstation, and a Linux box doing backups. This is to limp along until the servers are back up. I don't know how to manage the data, though: leave it as a virtual drive on the Windows server, where I have access control and shadow copies, or move it directly onto Proxmox somehow as a share. But I need to read more about how to do that and keep it NTFS with all the permissions, because the virtual drive has 8 shares, a few of which are locked from other users, and I like shadow copies, a.k.a. "previous versions".

I don't even know what fully hyperconverged means. I'm coming from independent ESXi hosts, which I had planned to make into an HA cluster on VMware, but I hit every roadblock imaginable and finally gave up, which led me here. In one night I was able to stand up an Ubuntu VM on a three-node Ceph pool and make it HA. This is after months of fooling with VMware. I don't even want to describe the nightmare that was. Poorly planned by my previous IT guy. We first learned our RAID cards were too old, then needed special cache drives, etc. And configuration is not even remotely intuitive: get it wrong and you have to boot into something else to destroy partitions, etc.

Proxmox is Debian-based. I've been running Debian or Ubuntu for over 10 years at home and in my office. I'd dump Windows altogether if we weren't so tied to it by my users, accounting software, and some databases I've written over the years. When I saw the shell right in the main window of Proxmox, I was sold!
 
Ouch, hopefully you were able to remember the new emergency number "0118 999 881 999 119 725 3" in time. :eek:


You said you wanted to use EC. If you want to use EC seriously and safely, you have to choose at least K3/M2, so you need at least 5 storage nodes with identical equipment. However, I can also tell you from experience that you will not be satisfied with the performance of K3/M2.

I already tested this in October 2021 and came up with the following values (tested on 5x Dell PowerEdge R620, 2x E5-2630 v2, 384 GB DDR3, 2x 10 GbE, 6x PM883 per node):
EC read/write IOPS: 8776 / 2933
EC read/write MB/s: 35.1 / 11.7

RBD replica read/write IOPS: 21675 / 7243
RBD replica read/write MB/s: 86.7 / 28.9

So if you're really serious about using the two fire compartments, you need to make sure that you can lose a fire compartment at any time without losing data. The other question would then be whether you want to maintain operation in the event of a failure, or whether it is only important that the data remains intact. If you also want to ensure availability when a fire compartment fails, you have to think about how to place the 3 Ceph MONs so that quorum is guaranteed at all times. This usually doesn't work with only two fire compartments; you always have to put the majority in one compartment. So there is a 50% chance that the fire compartment that currently has 2 out of 3 MONs/PVE servers is the one that fails.

A possibly better solution would be to keep the production data and backups strictly separate. For example, you can set up replication in Ceph and replicate from the productive Ceph to a single-node Ceph. If you also use PBS and mirror the PBS into the other compartment, you can restore the systems, or simply restart them with a little effort.
In addition to the Ceph approach, you could also make a cluster of, say, 3 nodes, where the third node is a ZFS replication target of the other two. You just need to have the capacity of the first two nodes available in the third. If you want to be able to start everything in the event of a failure, it will of course also need CPU/RAM.
Perhaps not the most elegant solution, but I have a Nextcloud server at work and a backup server at home. Prior to my fire, I made a "backup" account and attached the Windows share to my Nextcloud server as external storage. That synchronized with home, which was nice for working from home. Later I attached the backup server in the same way and had it sync with my home backup server, which has an 8-HDD RAID 6 array. That handles my offsite needs. But I haven't decided on another storage medium for the 3-2-1 rule.

I think for now EC is too much to ask. My servers have 2x E5-2440 v2 CPUs, 128 GB RAM, and SanDisk 1 TB drives. So even with two more, I'm not going to get close to your five-server speed. I didn't realize EC was that resource-intensive. I was just hoping to pick up some space I don't really need that badly. What I need mostly is availability and redundancy, as trouble-free as possible.

I can easily place my backup server in another area of the office building. It doesn't need to live in the rack, or the server room for that matter. No reason it couldn't live in what amounts to another fire compartment, such as behind our research lab. Then continue with my Nextcloud sync to home.
 
I have 8tb per server and depending on configuration up to 32 in the backup server.
Ok, a bit more, but still far from huge.
I don't even know what fully hyperconverged means.
I mean, that terminology is marketing speech, so probably nobody knows, but what I meant is doing everything (i.e., mostly storage and virtualization) from the same nodes: no separate nodes that do just Ceph, or some that do just virtualization/containers but no Ceph.

And to be fair, I only jumped in to state that we have many users happily running 3-node clusters (nowadays also with an extra host for PBS); sb-jw's insights are much more detailed and cater to your use case.

I don't know how to manage the data, though: leave it as a virtual drive on the Windows server, where I have access control and shadow copies, or move it directly onto Proxmox somehow as a share. But I need to read more about how to do that and keep it NTFS with all the permissions, because the virtual drive has 8 shares, a few of which are locked from other users, and I like shadow copies, a.k.a. "previous versions".
Why not move it into a VM on Proxmox VE? Then you wouldn't change too much and would still have the existing benefits. Doing it on Proxmox VE directly would mean you need some custom solution anyway; we have no native file-share management support (some users run TrueNAS or the like in a VM).

W.r.t. setup possibilities, as sb-jw said, it really depends on what you are OK with in terms of service interruption, how recent the backed-up data must be, and the work needed to rebuild after a destructive event.

If, for the worst case, some restore time and data loss of a few hours (or at least minutes) is acceptable, then an extra PBS off-site (or at least in another fire compartment) would be all you need.
If you mostly run virtual machines on Proxmox VE, you can profit from our dirty-bitmap fast-backup feature, where we very efficiently read only those virtual disk blocks that really changed; thus even doing a backup every ten minutes can be possible. Syncing that off-site then ensures that you have your data (recent up to your backup interval) available to restore on new hardware.
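Sketching the off-site sync described above: on a second PBS, the primary can be registered as a remote and pulled on a schedule. All names, addresses, and credentials below are placeholders, so treat this as an outline rather than exact syntax:

```shell
# On the off-site PBS: register the primary PBS as a remote
# ("primary-pbs", the address, user, and fingerprint are placeholders)
proxmox-backup-manager remote create primary-pbs \
    --host 192.168.1.50 \
    --auth-id 'sync@pbs' \
    --password 'secret' \
    --fingerprint '<cert-fingerprint>'

# Pull the remote "main" datastore into the local "offsite" datastore hourly
proxmox-backup-manager sync-job create pull-main \
    --remote primary-pbs --remote-store main \
    --store offsite --schedule hourly
```

The same can be configured in the PBS web UI under Datastore sync jobs.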

If that doesn't cut it for you, then Ceph replication, off-site or into another fire compartment, could allow even faster recovery and a more up-to-date state of the data, but it certainly also means higher cost, most of it upfront, plus some more for periodic maintenance.
 
Why not move it into a VM on Proxmox VE? Then you wouldn't change too much and would still have the existing benefits. Doing it on Proxmox VE directly would mean you need some custom solution anyway; we have no native file-share management support (some users run TrueNAS or the like in a VM).
The plan is to migrate my current Windows Server VM off VMware and onto this Proxmox cluster, as well as a secondary domain controller and eventually an SQL server.

I was just wondering if I could move the virtual data drive attached to the Windows server directly onto the Ceph cluster as an NTFS share. But that's not a critical need, just a curiosity.

Critical is almost no downtime once we transition from paper records to the database for production, a solid backup solution (which Proxmox provides), and protection from fire; having had a fire, I never want to deal with another one. At the time of the fire we were doing replication with TeraStations in each building, which was painfully slow over 1G and HDDs. Back then I only had 50% storage usability. The added benefits of three nodes completely justify 33% usable space.

As we speak, my assistant is wiping out the RAID arrays, wiping the drives, and setting the cards to HBA mode. I'm waiting on my server to arrive; hopefully by tonight I'll have set up a cluster for testing. He's going to set up Proxmox Backup Server on one of the servers with HDDs and use RAID 5 with a hot spare and mirrored boot drives.

I suppose that presents a question: can (or should) the backup server be tied into the Ceph backend network, or should it only be on the management/public network?

I'm perfectly happy with Nextcloud as an offsite solution, given it's already backing up to my home server. Being able to access my work files from anywhere is really handy. The backup account will only back up the backups, first by syncing them to itself, then pushing them onto an array. By the time I'm done, Nextcloud will live on Proxmox VE as well. It's already a VM.
 
I don't know if this should be a new question or not. I have a question about how to configure the disks (8x 1 TB SSDs per server).

I have set up all three servers. I've made 3 Linux bridges with bonds on each server: one for management (two bonded ports), one for VM traffic (four bonded ports), and one for Ceph traffic (two bonded ports). All are 10 GbE. The Ceph traffic is on a separate switch not connected to the rest of the network.

Now I want to get the disks set up. Do I set up each as an OSD, then create a group on each server, then make the Ceph pool from that? I have a blank slate with the drives right now and want to know how to set them up, but most places I read just show how to do one drive per server.
 
I didn't reread the whole thread closely, but IIRC you now want only the SSDs to be part of the Ceph storage, as the HDDs are in an unrelated RAID for backup.

I assume you already created a Ceph monitor and Ceph manager, e.g., by using our Ceph installation wizard.
In that case you'd create an OSD per SSD (just for the record, if those are really fast NVMe drives it might even make sense to create more than one OSD per SSD) and then create a pool.

What do you mean by groups, then?

It would all stay quite simple if you do not want to do any tiering or the like, where, e.g., one has a Ceph setup with HDDs and SSDs and wants one pool that favors the SSDs, one that favors the HDDs, and one that doesn't care.
But as you have the same SSDs in each node, such tiering doesn't really make sense. So after creating the OSDs, and checking that the health status is alright, you can just create a pool and be basically done, at least if you want to use the pool storage only for VM and CT disks/volumes (block storage). If you need a file system from Ceph directly, you could also set up a CephFS now.

As it was never linked in this thread, checking out our ceph chapter in the docs might be useful too, if you haven't already:
https://pve.proxmox.com/pve-docs/chapter-pveceph.html
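With the wizard done, the whole disk setup boils down to a few pveceph commands. A sketch, assuming the blank SSDs show up as /dev/sdb through /dev/sdi (placeholder device names) and "vm-ssd" as a placeholder pool name:

```shell
# On each node: turn every blank SSD into one OSD
for dev in /dev/sd{b..i}; do
    pveceph osd create "$dev"
done

# Once all OSDs are up and health is OK, create a replicated pool (run once,
# on any node); --add_storages also registers it as PVE storage for VM disks
pveceph pool create vm-ssd --size 3 --min_size 2 --add_storages

# Check the result
ceph osd tree
ceph -s
```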
 
One for management (two bonded ports), one for VM traffic (four bonded ports), and one for Ceph traffic (two bonded ports). All are 10 GbE. The Ceph traffic is on a separate switch not connected to the rest of the network.
I think 6x 10 GbE per server is a bit much. Basically, you can put everything on one bond; from experience I can say that this doesn't cause any problems. You just have to keep an eye on the load and upgrade if necessary. If I were you, I would simply do 4x 10 GbE, 2x of which would terminate on a redundant pair of switches. Then you can also create two Corosync rings.

But pay attention to the network when creating the Ceph cluster. If you have a separate link, you must also state that explicitly.
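For example, the split can be declared when initializing Ceph; a sketch with placeholder subnets:

```shell
# Public network: where clients (VMs, monitors) reach Ceph
# Cluster network: the isolated backend for OSD replication traffic
pveceph init --network 192.168.1.0/24 --cluster-network 192.168.0.0/24
```

Both values end up in /etc/pve/ceph.conf as public_network and cluster_network and can also be set in the GUI wizard.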
 
Thanks. I haven't built the cluster or setup ceph yet.

These are SATA SSDs. So I assume I make each an OSD, like 1-8 on each server, then create the Ceph pool.

I'll be sure to have the separate network for Ceph when I make the pool/cluster.

I made one mistake in my last comment. For Ceph I have two dual-port SFP+ NICs. Each has its ports bonded and connected to a switch. I have another switch on the way so I can have redundant switches. My plan there is to bond the bonds and connect to Ceph using a Linux bridge. I previously stated I had four ports (2x 2-port cards) bonded for VMs; that was a mistake. I don't need more than two for VM traffic. But I do want a robust backend network for Ceph, completely isolated from the main network. If I ever need more ports for VMs, I could always enable the onboard 1 GbE ports and bond them for management, then bond the current 10G dual port to the other 10G card for more.

Setup: two dual-port SFP+ cards for the Ceph backend (cluster network).
Two dual-port 10G RJ45 cards, currently one for management and one for VMs. Heck, maybe it makes the most sense just to set up these RJ45 cards as a four-port bond, then use the onboard NICs for management. I already have the cards and cables. As far as I can tell, the management ports don't carry much traffic anyway.

Right now I'm waiting on some SFP+ modules and the other switch. For the cluster network, should the switches be isolated or should I uplink them together?

All of this is overkill for my needs but I already had most of the hardware.

Finally, yes, my fourth server has hardware RAID with HDDs for storage and a mirrored pair for boot. This will run Proxmox Backup Server. I'm contemplating running it on top of VE to make backing up the backup server easier; I've seen that done in a few videos. That machine has 12 cores, 24 threads, and 96 GB RAM. I have Optanes available that would be nice for caching the RAID, or the card itself will cache with SATA or SAS SSDs. Backups will run nightly and monthly. Eventually I'd like to archive annual backups onto another server.

I have read some of the link posted and will read more. I'm anxious to get it all working, as I did a test with just one drive per server and was happy with the results.

Long story short, I assume these are the steps:

1) Make the Proxmox cluster.
2) Install Ceph on all three nodes.
3) Set up each drive as an OSD.
4) Create the pool.
5) Make certain it's running on the right network for Ceph.
6) Set up CephFS.
7) Set up HA.
8) Migrate my VMs from VMware to the cluster.
9) Test the system.
10) Get to work on the PBS.
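Roughly, the first two steps (and the network binding from step 5) map to commands like these; hostnames, IPs, and subnets are placeholders:

```shell
# Step 1: create the cluster on the first node, then join the others
pvecm create mycluster              # on node1
pvecm add 192.168.1.141             # on node2 and node3, pointing at node1

# Steps 2 and 5: install Ceph on every node, then initialize it once,
# binding the public and isolated cluster networks explicitly
pveceph install
pveceph init --network 192.168.1.0/24 --cluster-network 192.168.0.0/24
pveceph mon create                  # repeat per node so each runs a monitor
```

The OSD and pool creation (steps 3-4), CephFS, and HA can then all be done from the GUI.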

These machines are all Lenovo RD440s, with dual 8-core Xeons and 128 GB RAM in each. The fourth machine, for the backup server, is the same, just with less RAM and less CPU (6 cores each).

Lastly, when I do get PBS up and running, should it connect to the Ceph network at all, or is the main network fine?
 
My cluster is built. Trying to set up Ceph with the GUI, I got the following error:

Multiple IPs for ceph public network '192.168.1.204/24' detected on host1: 192.168.1.141, 192.168.1.204. Use 'mon-address' to specify one of them. (500)

Networks: vmbr0 (192.168.1.141-143) for management, a bonded 1G pair on each server. vmbr1 (192.168.0.141-143), the isolated Ceph network, four bonded SFP+ ports. vmbr2, the VM "public" network, 10G, four bonded ports per server. I've run iperf3 and bound devices to test all three networks; they work at expected speeds.

Am I correct that the public network for Ceph should be the one for traffic to and from VMs (and possibly shares on the Ceph cluster), and the cluster network is the isolated, dedicated backend network for server-to-server Ceph traffic? I got a little confused on that reading the link above.
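That error means the host has two addresses matching the configured public network; a hedged fix is to pin the monitor's address explicitly (use whichever of the two addresses the monitor should listen on):

```shell
# Create the monitor with an explicit address inside the public network
pveceph mon create --mon-address 192.168.1.141

# Also worth double-checking /etc/pve/ceph.conf: public_network should be
# the subnet you intend (e.g. 192.168.1.0/24) and cluster_network the
# isolated backend (e.g. 192.168.0.0/24)
```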
 
