New to PVE - I have a few questions

laskolnyk

Member
Jul 5, 2022
Hello all,
I'm new here, so thanks in advance for your patience. I want to start a new project, and after years of using ESXi I'd like to try something new. My IT career currently doesn't leave much time to tinker with hardware and software; it pushes me into the middle ground between business and engineers, and I feel I need to keep my engineering spark alive. I'd like your opinion on my planned Proxmox configuration.

I intend to use 4 Dell micro systems to save physical space, since I don't have room for a rack. I am buying used (but still under warranty) 3080/3090 models and plan to build a Ceph cluster for my home services on them. Do the units need to have similar configurations? The 2 main units will be 6C/12T | 32GB RAM | 500GB-1TB NVMe for storage. The other two or three will be based on older i5 CPUs with half the RAM. Do you think this will work as a PVE cluster? Can I mix different configurations, including storage size? I may also have access to older i5-6500T-based machines, so the cluster could grow to 5+ nodes.
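For reference, my understanding from the docs is that mixed-spec nodes join a cluster the same way as identical ones, roughly like this (cluster name and IP below are just placeholders):

    # on the first node - create the cluster
    pvecm create homelab

    # on each additional node - join via the IP of an existing member
    pvecm add 192.168.1.10

    # check membership and quorum afterwards
    pvecm status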

The boot drives will be small 32-120GB SATA SSDs; storage will be consumer-grade NVMe. Will this cause a big issue? Are DRAM-less SSDs a valid option or a big no?

I am aware a single NIC is suboptimal, to put it mildly, so would adding a USB3 NIC to each node be a reasonable option? Currently I don't need to virtualize machines; I am focusing on containers. My primary server (Unraid) runs Docker, but I was also considering RancherOS, which I believe Proxmox can host as well. I intend to use my current Unraid machine as extra NFS storage.

This project is mostly about refreshing my knowledge, handling my home media management (currently stored on an old QNAP NAS), home automation, and de-Googling. There are some servers I want to try that can't simply run as containers, and having VMs would help close that gap.

Looking forward to hearing your opinion.

cheers
L
 
Recommended for Ceph would be enterprise SSDs and at least two NICs (better three), where one should be 10+ Gbit. The fast 10+ Gbit NIC is for Ceph traffic; a dedicated Gbit NIC is just for corosync, because low latency is required if you don't want your nodes to randomly reboot when they time out and lose quorum; and a third NIC handles everything else (for example the services your guests are running). And if you care about downtime, of course everything twice in a bond, so you can use stacked managed switches to eliminate the single point of failure in the network.
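Just to illustrate that separation (interface names and subnets below are made up, adjust to your hardware), /etc/network/interfaces on a node could look roughly like this:

    # dedicated Ceph network - ideally 10+ Gbit
    auto eno1
    iface eno1 inet static
        address 10.10.10.11/24

    # dedicated corosync network - latency matters more than bandwidth
    auto eno2
    iface eno2 inet static
        address 10.10.20.11/24

    # everything else: management + guest traffic
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

With redundancy in mind, each of those would sit on top of a bond instead of a single port.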

https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#_precondition:

Network

We recommend a network bandwidth of at least 10 GbE or more, which is used exclusively for Ceph. A meshed network setup [4] is also an option if there are no 10 GbE switches available.
The volume of traffic, especially during recovery, will interfere with other services on the same network and may even break the Proxmox VE cluster stack.
Furthermore, you should estimate your bandwidth needs. While one HDD might not saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth will ensure that this isn’t your bottleneck and won’t be anytime soon. 25, 40 or even 100 Gbps are possible.

https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark-2020-09:
Can I use consumer or pro-sumer SSDs, as these are much cheaper than enterprise-class SSDs?
No. Never. These SSDs won't provide the required performance, reliability or endurance. See the fio results from before and/or run your own fio tests.
 
Thanks @Dunuin. As I wrote, this will be my learning platform. I intend to keep my services primarily on Unraid and duplicate them on PVE over time. I am aware such a configuration would not work in an SMB environment; this is purely a homelab project.

On the other hand, I have had some experience with Scale Computing's smallest NUC-based solution, and I'm pretty sure they run Ceph on their H150 NUCs with a single gigabit Ethernet port and a single SSD.

So you're saying that for Proxmox I need enterprise-grade hardware, otherwise it's a no-go? I know you all have big racks with huge enterprise-grade shared storage, but what kind of equipment do you need to learn and play with PVE and all its options?
 
If it is really only a home testing and learning playground, just try it out and see if it suits your needs, or whether it works at all. :)

I would see it as the first learning/experience step. You will never know for sure if you do not test it yourself. :cool:
 
I really wouldn't run anything important on it. But if you just want to learn, insufficient hardware might not be that bad. You will probably run into more problems to diagnose and fix, but you can learn by doing that. And some disaster recovery experience can't hurt either. ;)
But I wouldn't pay much for it. If you just want to test a PVE HA cluster with Ceph for learning, you could also simply virtualize your PVE nodes. There it is easy and cheap to give each node multiple virtual 10 Gbit virtio NICs.
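Purely as a sketch (VM ID, storage name, bridges and ISO file below are placeholders), such a virtual test node could be created like this:

    # virtual PVE node with three virtio NICs on separate bridges
    qm create 9001 --name pve-test-1 --memory 8192 --cores 4 --cpu host \
        --scsihw virtio-scsi-pci --scsi0 local-lvm:40 \
        --net0 virtio,bridge=vmbr0 \
        --net1 virtio,bridge=vmbr1 \
        --net2 virtio,bridge=vmbr2 \
        --cdrom local:iso/proxmox-ve.iso --ostype l26

Nested virtualization has to be enabled on the physical host's CPU for the virtual nodes to run KVM guests of their own.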

But it sounded more like you are planning to run services on it that you actually want to use, not just some throwaway VMs for learning without practical use.
 
Ok, my first experience here. Currently a single node: 6500T/16GB/500GB Samsung 970 Evo Plus (NVMe) for storage and a 32GB SATA SSD for boot. Everything is plug and play and the web interface is snappy. I loaded 6 VMs (Linux, Windows 10 and Windows Server) and performance is surprisingly good. I'm connecting remotely to the Windows machines and it feels like a normal desktop, and the CPU has enough capacity to handle all the work easily. I am going to deploy many more containers (copying my Unraid server's Docker config) and see how it performs. This Dell is going to be the weakest configuration of the 4 nodes; my primary nodes are on the way with 10500T/32GB/500GB NVMe. The main NIC will be used for Ceph replication, while the USB3 (AX88179-based) NIC will be used for client-side access to the machines. The 3rd node is like the primary, but "only" with an 8500T/9500T (negotiations in progress).
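Since USB NIC names can change between reboots, I plan to pin the AX88179 to a fixed name with a systemd link file so the bridge config stays stable (MAC and name below are placeholders):

    # /etc/systemd/network/10-usb-nic.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=ethusb0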
 
