Best way to install Proxmox on workstation hardware

rdue1992

New Member
Sep 30, 2023
Denmark
Hi everyone!

I'm currently planning to build my first Proxmox VE server. I already have the hardware I want to use, but I'm looking for some advice on the best way to install PVE on it.

The machine currently has the following hardware installed:

CPU: Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz (1 Socket)
RAM: 32GB DDR4 Memory
Storage:
- 1 x Western Digital Blue SN570 1TB (planning to use this one for the PVE OS, since it's an M.2 NVMe and I only have one M.2 slot)
- 4 x Kingston SSDNow A400 SSD - 960GB (planning to use these for some kind of disk pool)
NET:
- 1 x onboard Gigabit Ethernet (planning to use this port only for accessing the server if anything goes completely wrong with it)
- 1 x PCIe HP network card with 4 x 1G Ethernet ports (planning to pass all 4 ports directly to pfSense running in a VM)
MOBO: I can't currently remember the make or model of the motherboard; all I know is that it's a well-known gaming board.
PSU: A Corsair 600W (in the future I might swap the PCIe network card for a GPU instead, passed directly to the Windows client, so I can use Cura and Fusion 360 for my 3D-printing projects)

I'm planning to run the following on the server:
- pfSense
- Home Assistant OS
- AdGuard
- Ubuntu Server (for an SQL server and a Mosquitto broker, used by Home Assistant for logging and for driving my Zigbee network)
- TrueNAS (for some SMB shares to our workstations at home and some NFS shares for backups between PVE and TrueNAS)
- A Windows 10 client machine (only to be accessed from other locations, probably via a WireGuard VPN running on pfSense)

Things I already have in mind:
- Which kind of RAID setup should I choose for the disk pool? I have 4 SSDs and want the best performance, but also the most storage.
* I have no issue with a fault tolerance of one disk, as I have already bought 2 spares of these disks.
* I have heard a lot about ZFS, but people tell me that ZFS needs a lot of RAM from the hypervisor, and as you can see I only have 32GB and don't want the server to max out its RAM from the start. Can I create a RAID setup inside Proxmox that doesn't use ZFS, or should I use the onboard RAID controller to create the RAID before installing Proxmox?
- For pfSense, I plan to create 3 VLANs (USER, IOT and MGMT). I have a coax ISP and mobile broadband as backup for when the coax goes down, which happens a lot where I live. Which of these is best? (Before anyone says they don't understand what I want: I can draw the network setup for you, just ask.)
1 - Use the PCIe network card (port 1 = WAN 1, port 2 = WAN 2, port 3 = N/A, port 4 = trunk with the VLANs to the switch)
2 - Use the onboard network card (make a trunk with all networks, including both WAN connections)

Besides all this, I would like your opinion on this way of setting up PVE for home use. Is there anything else I need to consider, anything I should change, anything completely stupid about this setup, and so on? :)
 
First, this is in no way workstation hardware, it's consumer desktop hardware.

I have 4 SSDs and want the best performance, but also the most storage.
The only solution for your requirements is RAID0 and it has no redundancy at all. So no one would recommend that. You may want to use RAID5.

Can I create a RAID setup inside Proxmox that doesn't use ZFS, or should I use the onboard RAID controller to create the RAID before installing Proxmox?
No, ZFS is the only supported software RAID solution that you can create from the PVE installer. If you don't want it, go with a hardware RAID controller.
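If you do go the ZFS route, the installer's advanced options let you pick RAIDZ-1 directly, and the RAM worry can be handled by capping the ARC. A rough sketch of what that could look like if you keep the OS on the NVMe and build the pool from the four Kingston SSDs after installation (pool/storage names and device paths below are placeholders, check yours with lsblk first):

Code:
# RAIDZ1 over the four SSDs: roughly 3 disks of usable space, survives one disk failure
# (ideally use /dev/disk/by-id/... paths instead of sdX)
zpool create -o ashift=12 ssdpool raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# register the pool as VM storage in PVE
pvesm add zfspool ssd-zfs --pool ssdpool --content images,rootdir

# cap the ZFS ARC at e.g. 4 GiB so it doesn't eat into the 32 GB of RAM
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u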

For pfSense, I plan to create 3 VLANs (USER, IOT and MGMT). I have a coax ISP and mobile broadband as backup for when the coax goes down, which happens a lot where I live. Which of these is best? (Before anyone says they don't understand what I want: I can draw the network setup for you, just ask.)
1 - Use the PCIe network card (port 1 = WAN 1, port 2 = WAN 2, port 3 = N/A, port 4 = trunk with the VLANs to the switch)
2 - Use the onboard network card (make a trunk with all networks, including both WAN connections)
That depends on your switch's capabilities. If you already have a VLAN-aware switch and a lot of parallel workloads, LACP all the ports together and run the VLANs over the bond.
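For illustration, a trunk like option 2 (but over a bond) could look roughly like this in /etc/network/interfaces, assuming the four ports stay on the host instead of being passed through to pfSense (interface names and addresses are placeholders):

Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp2s0f0 enp2s0f1 enp2s0f2 enp2s0f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

pfSense and the other VMs then just get virtual NICs on vmbr0 with the right VLAN tag.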
 
First, this is in no way workstation hardware, it's consumer desktop hardware.
Bummer, I was wondering whether this counts as workstation or consumer hardware :)

The only solution for your requirements is RAID0 and it has no redundancy at all. So no one would recommend that. You may want to use RAID5.
OK, I see my explanation was a bit weak; I want the best balance of performance and redundancy. If RAID5 is the way to go, I'll go with that :)

No, ZFS is the only supported software RAID solution that you can create from the PVE installer. If you don't want it, go with a hardware RAID controller.
Is the motherboard's onboard RAID controller fine for this use, and can I get RAID alerts of some kind inside PVE with it?

That depends on your switch's capabilities. If you already have a VLAN-aware switch and a lot of parallel workloads, LACP all the ports together and run the VLANs over the bond.
Yes, I have 2 x 8-port VLAN-aware switches, one by the server and one by our TV furniture, as I run all our game consoles, TV, Apple TV, etc. over Ethernet, since there are a lot of Wi-Fi networks in the building where I live.

What is the benefit of running all 4 server ports in LACP down to the switch? Does it only give higher speed? Connected the way I imagine it, port redundancy doesn't really give any advantage, since there aren't 2 switches at the server and the server only has 1 NIC with 4 ports.
 
Is the motherboard's onboard RAID controller fine for this use, and can I get RAID alerts of some kind inside PVE with it?
You'll have to check whether it works, but those consumer RAID controllers are often only fake RAID, so they're not good at all. The RAID implementation then lives in the driver, which makes it useless compared to "real" RAID, where the implementation is in hardware.

Just to mention, for completeness: you could use Linux software RAID (mdadm), but you would need to install Debian Bookworm first and then install PVE on top of it. This is unsupported by Proxmox and you're on your own.
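A minimal sketch of that unsupported route, once Debian/PVE is running and assuming the four SSDs show up as /dev/sdb through /dev/sde (device names are placeholders):

Code:
apt install mdadm
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0   # add it to /etc/fstab to make it persistent
# then register it as a directory storage in PVE:
pvesm add dir md0-storage --path /mnt/md0 --content images,rootdir,backup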

Yes, I have 2 x 8-port VLAN-aware switches, one by the server and one by our TV furniture, as I run all our game consoles, TV, Apple TV, etc. over Ethernet, since there are a lot of Wi-Fi networks in the building where I live.

What is the benefit of running all 4 server ports in LACP down to the switch? Does it only give higher speed? Connected the way I imagine it, port redundancy doesn't really give any advantage, since there aren't 2 switches at the server and the server only has 1 NIC with 4 ports.
Yes, higher speed. Using only ONE trunk has the advantage that the networking is a bit simpler: you have only one bridge, and configuration is just a matter of setting VLAN tags. Otherwise you would have VLAN tags AND need to choose the correct bridge. You don't need to use all 4 ports, but having two is better than just one. You could also bond one onboard NIC and one port of the 4-port NIC together to get some kind of failover, but it doesn't matter that much.
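As an illustration of the "only VLAN tags" part: with a single VLAN-aware bridge, putting a VM into a VLAN is just one tag on its virtual NIC (VM ID and VLAN number here are made up):

Code:
# put VM 102's first NIC on vmbr0, tagged for VLAN 20 (e.g. IOT)
qm set 102 --net0 virtio,bridge=vmbr0,tag=20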
 
You'll have to check whether it works, but those consumer RAID controllers are often only fake RAID, so they're not good at all. The RAID implementation then lives in the driver, which makes it useless compared to "real" RAID, where the implementation is in hardware.

Just to mention, for completeness: you could use Linux software RAID (mdadm), but you would need to install Debian Bookworm first and then install PVE on top of it. This is unsupported by Proxmox and you're on your own.
Hmm, OK. Can you recommend an HBA or RAID controller that is supported?

Yes, higher speed. Using only ONE trunk has the advantage that the networking is a bit simpler: you have only one bridge, and configuration is just a matter of setting VLAN tags. Otherwise you would have VLAN tags AND need to choose the correct bridge. You don't need to use all 4 ports, but having two is better than just one. You could also bond one onboard NIC and one port of the 4-port NIC together to get some kind of failover, but it doesn't matter that much.
I see, maybe I should look into this a bit more :)
 
Hmm, OK. Can you recommend an HBA or RAID controller that is supported?
Anything enterprisey, e.g. MegaSAS/LSI-based stuff, HPE, Dell ... all the major players' controllers should work fine.
In general, the Ubuntu HCL is also a very good reference (PVE uses a kernel based on the Ubuntu LTS kernel, so that list matches well).
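If you want to check a specific card against such a list, something generic like this shows the controller model and which kernel driver picked it up (nothing model-specific assumed here):

Code:
lspci -nnk | grep -iEA3 'raid|sas|scsi'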

I see, maybe I should look into this a bit more :)
Yes, if your switch is capable of LACP, or at least a static LAG, then go for it.
 
