Server Suggestions

Hi @fanton,

A server's suitability for a task is not determined by who produced it. Any enterprise server is generally suitable for running Linux.

What you need to ask yourself:
- how many virtual machines will I need?
- how many virtual cores?
- how much RAM?
- how much disk space?
- how critical is parts support?
- how familiar are you with IPMI management and do you need advanced features?
- how many disks?
- how many NICs?

PVE is a Debian userland with an Ubuntu-derived kernel. Will your hardware vendor support you?
Is this going to be a cluster? What shared storage will you use? How many servers are you going to need? What is your budget?
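If you ever need to confirm exactly what you're running for a vendor-support conversation, PVE can report its package and kernel versions directly (read-only commands, nothing here is specific to your setup):

Code:
    # list PVE package versions, including the kernel in use
    pveversion -v
    # show the currently running kernel release
    uname -r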


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for the good advice.

I need 6 virtual machines. 2 to 4 virtual cores each. 64 GB of RAM. At least 4 terabytes of SSD shared storage. Support is very critical. I am familiar with HPE iLO for remote management. I would like my shared storage to be RAID 10. 4 NICs per server. I want to build a cluster for maximum uptime.

The reason I was asking is that I've read the docs and it seems that very old or very new hardware might have some driver support issues. I wanted to see what people have tried lately that will provide me with an easy install and management.

Thanks,
 
Most of the hardware complaints reported in the forum come from homelab users. A single-socket AMD server from any of the major vendors will do the job just fine. You will likely need at least two compute servers plus an additional node for quorum. Or getting 3 identical ones may be the easiest route and future-proofs your installation.
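If you go with two compute nodes, the third vote doesn't need to be a full server; a small QDevice box works. A minimal sketch, assuming a spare machine at 192.0.2.10 (placeholder IP) provides the vote:

Code:
    # on the quorum box (Debian/Ubuntu): install the vote daemon
    apt install corosync-qnetd
    # on every cluster node: install the qdevice client
    apt install corosync-qdevice
    # from one cluster node: register the external quorum device
    pvecm qdevice setup 192.0.2.10
    # verify expected vs. total votes
    pvecm status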

Make sure to use Enterprise NVMe disks. Mellanox is always a safe bet for networking.

RAID10 is a disk layout, not a shared storage configuration. The most common shared storage types are SAN, NAS, and distributed storage.
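Whichever type you choose, it ends up registered as a storage entry in PVE. As a sketch only, adding an NFS export (server address and paths are placeholders, not a recommendation of NFS over the alternatives):

Code:
    # register a shared NFS export for VM disk images
    pvesm add nfs vm-store --server 192.0.2.20 --export /export/proxmox --content images
    # check that every node sees it
    pvesm status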

You may want to engage one of the PVE Partners to assist with spec'ing out the infrastructure. Making good decisions on networking and storage is critical at your stage of design.



 
Migrating Dell VMware clusters at work to Proxmox. I just make sure all the hardware is the same (CPU, memory, storage, storage controller, networking, firmware, etc).

Swapped out Dell PERCs for Dell HBA330s, since ZFS & Ceph need direct access to the disks and shouldn't sit behind RAID controllers.
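A quick sanity check that the HBA is really passing disks through (assuming smartmontools is installed; device names are examples):

Code:
    # disks should report their real model/serial, not a PERC virtual disk
    lsblk -o NAME,MODEL,SERIAL,SIZE,TYPE
    # SMART should be readable directly, without megaraid passthrough options
    smartctl -i /dev/sda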

Standalone Dells are running ZFS, whereas clustered Dells are running Ceph (Ceph is like open-source vSAN, IMO). With Ceph, the faster the networking the better: minimum 10GbE (the Dells are using Intel NICs). I use 2 small drives to mirror the Proxmox install with ZFS RAID-1. Five nodes minimum for the Ceph cluster, so it can lose 2 nodes and still keep quorum.
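For reference, the dedicated Ceph network is set when initializing Ceph, and the installer-built boot mirror is easy to verify; the subnet below is a placeholder for your fast (10GbE+) segment:

Code:
    # initialize Ceph with a dedicated network for replication traffic
    pveceph init --network 10.10.10.0/24
    # check the ZFS RAID-1 boot mirror created by the PVE installer
    zpool status rpool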

No issues besides the typical storage drives and RAM going bad. Replacing storage under ZFS & Ceph is easy.

If going to use flash storage, make sure it's enterprise-grade (for endurance) and has PLP (power-loss protection). Consumer flash will cause havoc and chaos due to the lack of endurance and PLP.
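There's no single flag that reports PLP; the practical check is to read the exact model off the drive and confirm PLP in the vendor datasheet (smartmontools assumed installed; device name is an example):

Code:
    # identify the exact drive model/firmware, then verify PLP in the datasheet
    smartctl -i /dev/nvme0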
 
This is exactly what I was looking for. Real world advice.

I have a couple of HPE ProLiant DL360 Gen 9 servers and one HPE ProLiant DL360 Gen 10. I am not sure if I should try to use these or buy new servers.

Again, Thank you for your reply.
 

If it matters, the Dells in production are 13th-generation, with Intel Broadwell CPUs, and still firmware-supported by Dell. They will eventually get EOL'd, but then again, I have 10th-, 11th-, and 12th-gen Dells running Proxmox.

I recommend single-socket servers. If you want to save even more money, get older generation single-socket servers. Plenty of options.

Below are optimizations learned through trial-and-error. YMMV.

Code:
    Set SAS HDD Write Cache Enable (WCE) (sdparm -s WCE=1 -S /dev/sd[x])
    Set VM Disk Cache to None if clustered, Writeback if standalone
    Set VM Disk controller to VirtIO-Single SCSI controller and enable IO Thread & Discard option
    Set VM CPU Type to 'Host' for Linux and 'x86-64-v2-AES' on older CPUs/'x86-64-v3' on newer CPUs for Windows
    Set VM CPU NUMA
    Set VM Networking VirtIO Multiqueue to 1
    Install Qemu-Guest-Agent software in each guest (plus VirtIO drivers on Windows)
    Set VM IO Scheduler to none/noop on Linux
    Set Ceph RBD pool to use 'krbd' option
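Most of those map directly to qm/pvesm options; a hedged example for one VM (VMID 100 and the storage names are placeholders, adjust to your setup):

Code:
    # VirtIO SCSI single controller; IO thread + discard on the disk
    qm set 100 --scsihw virtio-scsi-single --scsi0 local-zfs:vm-100-disk-0,iothread=1,discard=on
    # host CPU type and NUMA for a Linux guest
    qm set 100 --cpu host --numa 1
    # enable the QEMU guest agent (the agent must also run inside the guest)
    qm set 100 --agent enabled=1
    # use the kernel RBD client for a Ceph RBD storage
    pvesm set ceph-vm --krbd 1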
 