Home server running Proxmox - Suggestions for setup please

yancho

Member
Nov 4, 2021
Hi

Warning: newbie post below - trying to sort out my head on Proxmox

I am currently putting together the parts for my home server. These are the specs I am looking at (fyi: https://de.pcpartpicker.com/list/CrtmnL ):


  1. CPU: Intel Core i7-6700T 2.8 GHz Quad-Core OEM/Tray Processor
  2. Motherboard: Asus PRIME Q270M-C Micro ATX LGA1151 Motherboard
  3. Memory: G.Skill Ripjaws Series 32 GB (4 x 8 GB) DDR4-2400 CL15 Memory
  4. Storage: Patriot P300 256 GB M.2-2280 NVME Solid State Drive
  5. Storage: Patriot P300 512 GB M.2-2280 NVME Solid State Drive
  6. Storage: Seagate Barracuda 1 TB 3.5" 7200RPM Internal Hard Drive
  7. Storage: Seagate Barracuda 1 TB 3.5" 7200RPM Internal Hard Drive
  8. Power Supply: be quiet! System Power 9 400 W 80+ Bronze Certified ATX Power Supply
  9. Wired Network Adapter: TP-Link TG-3468 PCIe x1 1 Gbit/s Network Adapter
  10. Wired Network Adapter: TP-Link TG-3468 PCIe x1 1 Gbit/s Network Adapter
  11. M2 Card: Google M.2 Accelerator
  12. Case fan: Inter-Tech LC-CC-95
  13. PCI Card: KT016 x4 PCIe to M.2 (M key)
  14. Case: Inter-Tech Midi-Tower B42
  15. USB Zigbee Gate: Zigate USB Din
With regards to data storage I plan to:
  1. use ext4 on the 512GB M.2, which will hold the Proxmox install and all the VMs [512SSD]
  2. use BIOS software RAID for the 2 SATA drives as RAID 1 (mirrored)
  3. set up the software-RAIDed disks as LVM with ext4 for the data array [1TBDA]
  4. use the 2nd 256GB M.2 as LVM cache for the data array (3 above) as per: https://blog.jenningsga.com/lvm-caching-with-ssds/ [256SSD] - see the sketch after this list
  5. use 75% of the 1TBDA as the main data disk for:
    1. Seafile
    2. and Frigate
  6. use the remaining 25% of the 1TBDA as the disk onto which I save the weekly backups of the VMs. I do not believe the VMs will change often, so I do not think it should be a problem.
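
Roughly what I have in mind for steps 2-4, as commands (a minimal sketch only; the device names /dev/sda, /dev/sdb and /dev/nvme1n1 are guesses and will differ on the actual build):

# Assumed device names: /dev/sda and /dev/sdb are the two 1TB SATA drives,
# /dev/nvme1n1 is the 256GB M.2 that will become the cache.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# LVM on top of the mirror:
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -n lv_data -l 75%VG vg_data        # 75% for Seafile/Frigate
lvcreate -n lv_backup -l 100%FREE vg_data   # remaining 25% for VM backups

# Attach the 256GB NVMe as an LVM cache for the data LV (leave headroom for cache metadata):
pvcreate /dev/nvme1n1
vgextend vg_data /dev/nvme1n1
lvcreate --type cache-pool -L 220G -n lv_cache vg_data /dev/nvme1n1
lvconvert --type cache --cachepool vg_data/lv_cache vg_data/lv_data

# Filesystems:
mkfs.ext4 /dev/vg_data/lv_data
mkfs.ext4 /dev/vg_data/lv_backup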

I plan to install Proxmox as a hypervisor (installed on the 512GB M.2) and on it:
  1. pfSense (will have around 6-8 VLANs) - it will get the 2 PCIe NICs: 1 for WAN and 1 for LAN. Proxmox's onboard Ethernet will connect to the LAN switch after setup
    1. Will be installed on 512SSD
    2. Passthrough the 2 NICs
  2. Seafile (as a local cloud - most of my stuff is in OneDrive)
    1. Installed on the 512SSD
    2. Use 1TBDA as storage + 256SSD
    3. Use Motherboard's nic as Virtual NIC
  3. debian with docker + portainer:
    1. homeassistant
    2. unifi controller
    3. cups server
    4. mosquitto ++++ (other future containers)
    5. Hardware:
      1. Installed on 512SSD
      2. Passthrough the ZiGate USB (see the USB passthrough sketch after this list)
      3. Use Motherboard's nic as Virtual NIC
  4. UPS monitor - probably APC with own OVA
    1. Installed on 512SSD
    2. Passthrough the USB interface to the UPS
    3. Use Motherboard's nic as Virtual NIC
  5. Frigate for NVR which will take care of 12 Reolink cameras and record only on motion. The place is not very busy and at most 2 cameras will work simultaneously.
    1. Installed on 512SSD
    2. Use 1TBDA as storage + 256SSD
    3. Use Motherboard's nic as Virtual NIC
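
For the two USB passthrough items above, this is roughly what I understand the commands to be (the VMIDs and USB IDs are only examples; I would check lsusb on the host first):

# List USB devices on the host to find the vendor:product IDs:
lsusb

# Pass the ZiGate to the Docker VM (VMID 103 and the ID are examples):
qm set 103 -usb0 host=0403:6015

# Pass the APC UPS's USB interface to the UPS monitor VM (VMID 104, ID is an example):
qm set 104 -usb0 host=051d:0002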

I have a few questions on the above:

  1. Are you seeing any warnings which I might be overlooking, please?
  2. How do you suggest I set up the storage? Do you agree with my choices?
Note: my private cloud on Seafile would be synced to OneDrive, so I will not be too bothered if it is lost. I am very wary of the electricity cost, so I am steering away from having a number of hard drives spinning in vain.

Would appreciate any help I can get since this is all new to me :)

Many thanks
 
Hey,

You have a beautiful plan :D

Take time to analyse how many resources each VM needs at a minimum.

How do you suggest I set up the storage
For many reasons, I recommend installing your OS and data filesystems on ZFS (necessary for snapshot capabilities and for overall performance).
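
A minimal sketch of what that could look like for the two SATA drives (pool, dataset and device names are only examples):

# Mirror the two SATA drives in a pool called "tank":
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
zfs set compression=lz4 tank
zfs create tank/data

# Snapshots come for free:
zfs snapshot tank/data@before-change

# Register the pool as a Proxmox storage for VM disks:
pvesm add zfspool tank-vm --pool tank --content images,rootdir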

Passing through a network card isn't an easy solution. You can achieve the same thing with routing tables, firewalling on the host, and configuring virtual networking on your host.
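
For example, a VLAN-aware bridge on the host can carry all the VLANs to the pfSense VM over a single virtio NIC. A rough /etc/network/interfaces sketch (the interface name enp3s0 and the addresses are only examples):

auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# A VM NIC can then be attached to the bridge with a VLAN tag, e.g.:
# qm set 100 -net1 virtio,bridge=vmbr0,tag=20
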
Do you agree with my choices?
In computing there are 1001 different ways to successfully build a system, but what would YOU prioritise?

For networked VM services, I recommend checking how to boost the network performance of virtual NICs:
Multiqueue
If you are using the VirtIO driver, you can optionally activate the Multiqueue option. This option allows the guest OS to process networking packets using multiple virtual CPUs, providing an increase in the total number of packets transferred.

When using the VirtIO driver with Proxmox VE, each NIC network queue is passed to the host kernel, where the queue will be processed by a kernel thread spawned by the vhost driver. With this option activated, it is possible to pass multiple network queues to the host kernel for each NIC.

When using Multiqueue, it is recommended to set it to a value equal to the number of Total Cores of your guest. You also need to set in the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool command:

ethtool -L ens1 combined X

where X is the number of vCPUs of the VM.

You should note that setting the Multiqueue parameter to a value greater than one will increase the CPU load on the host and guest systems as the traffic increases. We recommend to set this option only when the VM has to process a great number of incoming connections, such as when the VM is running as a router, reverse proxy or a busy HTTP server doing long polling.
(source: local proxmox documentation of Virtual Network Configurator)
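
As a rough example (VMID 100 and the guest interface name are placeholders), on the host and inside a Linux guest with 4 vCPUs:

# On the host: give the router VM 4 queues on its virtio NIC:
qm set 100 -net0 virtio,bridge=vmbr0,queues=4

# Inside a Linux guest with 4 vCPUs, match the queue count:
ethtool -L ens18 combined 4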


Best practice for your backups: don't store them on the same machine as your host. At a minimum, get a USB HDD to store your backups on.
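
For example, roughly (the device name and mount point are only examples):

# Assuming the USB disk shows up as /dev/sdc1 and is mounted at /mnt/usb-backup:
mkfs.ext4 /dev/sdc1
mkdir -p /mnt/usb-backup
echo '/dev/sdc1 /mnt/usb-backup ext4 defaults,nofail 0 2' >> /etc/fstab
mount /mnt/usb-backup

# Register it as a backup target in Proxmox:
pvesm add dir usb-backup --path /mnt/usb-backup --content backup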
 
For many reasons, I recommend installing your OS and data filesystems on ZFS (necessary for snapshot capabilities and for overall performance).
For ZFS that would be a bad hardware choice. No ECC RAM, no enterprise SSDs with power-loss protection, additional CPU and RAM usage, not everything at least mirrored, ... if consumer hardware is to be used I would stick with LVM and onboard RAID.
 
For ZFS that would be a bad hardware choice. No ECC RAM, no enterprise SSDs with power-loss protection, additional CPU and RAM usage, not everything at least mirrored, ... if consumer hardware is to be used I would stick with LVM and onboard RAID.
Thanks for the clarification ^^
 
So given my hardware, it's best that I stick to LVM as I proposed originally, right?

With regards to the passthrough NICs, you suggest that I present them as virtual NICs instead. I am having second thoughts about whether I should go for an Intel NIC chipset, but that means more expense. Is it worth it? This particular chipset on the TP-Link (Realtek 8168) does not seem very well recommended on the Proxmox forum.

I've come across a dual Intel chipset (China knockoff) x1 card. It maxes out at 600MB/s (when both ports are used). I was thinking of using the onboard NIC as WAN, connecting one of the card's ports to the edge switch, and using the other for Proxmox VE's own connection to the internal network. Thoughts?
 
There can be a lot of differences between NICs of the same bandwidth. If you get a better Intel NIC made for servers, you will get more advanced features that no normal consumer computer would make use of, but which can be very important if you want to run a server. For example, the NIC may have a bootloader so it can boot from iSCSI, support SR-IOV so that you can pass it through to multiple VMs at the same time, or offer more advanced hardware offloading features.

Not every Intel NIC is great; there are cheaper ones with fewer features and better ones that have everything you might want (like an i350-T4 v2). But in general you get better driver support and fewer problems on Linux/Unix if you buy an Intel NIC. As you already said, if you buy them second hand you need to make sure you are not buying a fake one. I would guess half of the i350-T4s sold on eBay are actually fakes.

And for your chosen hardware... you want to buy $870 of consumer hardware to build a server. If you already had that hardware it would be fine, but if you need to buy it first, why not just buy a second-hand server? There you get much better and more professional hardware for less money. Something like a 12-core / 24-thread, 128GB RAM server can be had for around $500, and that is all enterprise-grade equipment specifically designed to run reliably and efficiently 24/7 as a server.
 
Thanks @Dunuin for your reply. I will look further into Intel NICs. It was suggested to me that it's better to use virtual NICs for pfSense rather than passthrough. That means SR-IOV is not strictly necessary, right?

I was thinking of swapping the motherboard for an Asus P10S-WS, which is a workstation board with a server chipset. It gives me more PCIe x4 slots and dual Intel NICs, and I can also add an Intel quad-port card to it.

RE: Hard drives, my sister got interested in the project and was thinking of shifting to this set up:

512 M.2 and 256 M.2 as above
1x 1TB WD Purple for Frigate NVR
2x 6TB Software Raid 1 for Seafile (5TB) and Proxmox Backup Server's VM (1TB)

I can set up a Raspberry Pi with a 6TB disk at my sister's house and sync the two 6TB drives (see the sketch below). She uses my Seafile as her private cloud, and that way I have data redundancy in case of a major failure.
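
Rough sketch of how I imagine the sync job (the hostname and paths are made up):

# One-off sync of the Seafile data to the Pi over SSH:
rsync -aAX --delete /mnt/data/seafile/ backup@sisters-pi.example.net:/mnt/6tb/seafile/

# Nightly cron entry on the server (02:30):
30 2 * * * rsync -aAX --delete /mnt/data/seafile/ backup@sisters-pi.example.net:/mnt/6tb/seafile/ >> /var/log/offsite-sync.log 2>&1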

Thoughts?
 
Thanks @Dunuin for your reply. I will look further into Intel NICs. It was suggested to me that it's better to use virtual NICs for pfSense rather than passthrough. That means SR-IOV is not strictly necessary, right?
In general PCI passthrough of NICs is faster because there is no virtualization overhead and you can make better use of hardware offloading. But for Gbit NICs this isn't that important (it matters more for 10/40/100 Gbit NICs), and if you want those NICs for OPNsense/pfSense only, hardware offloading should be disabled anyway. Most people just don't pass through NICs because it is harder to set up and because they can only pass the card through to a single VM, since they don't have decent NICs that support SR-IOV to allow the NIC to be shared across multiple VMs. But yes, for your use case SR-IOV wouldn't be that important.
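
As a rough illustration (the PCI addresses, interface name and VMID are just examples), passing a NIC or an SR-IOV virtual function through would look something like:

# Requires IOMMU to be enabled (e.g. intel_iommu=on on the kernel command line).
# Find the NIC's PCI address on the host:
lspci | grep -i ethernet

# Pass the whole NIC (address is an example) to the firewall VM, hypothetical VMID 100:
qm set 100 -hostpci0 0000:03:00.0

# With an SR-IOV capable NIC, create virtual functions first and pass one VF per VM:
echo 4 > /sys/class/net/enp3s0/device/sriov_numvfs
qm set 100 -hostpci0 0000:03:10.0
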
RE: Hard drives, my sister got interested in the project and was thinking of shifting to this set up:

512 M.2 and 256 M.2 as above
1x 1TB WD Purple for Frigate NVR
2x 6TB Software Raid 1 for Seafile (5TB) and Proxmox Backup Server's VM (1TB)
Make sure that you don't buy shingled magnetic recording (SMR) HDDs. They are always terrible and not suitable for any server workload.
 
Make sure that you don't buy shingled magnetic recording (SMR) HDDs. They are always terrible and not suitable for any server workload.
Thanks a lot for your patience and guidance. Do you think 5400 rpm NAS drives would be better than 7200 rpm desktop drives for my use case? Something like a WD Red Plus or Seagate IronWolf? They are both CMR.

With regards to the one at my sister's, I think that can be a desktop-class drive since it will be used for writes only and very rarely reads, so no need for parallel workloads, I guess. Something like a WD Red or Seagate Barracuda? Or should I go for a full-on desktop disk here?

Thanks loads!!
 
You should make use of vSwitches and get 10 Gig Ethernet switches. Would recommend Mikrotik switches.
UniFi is a waste of money to be honest, so the hundreds spent on UniFi could be re-allocated to your actual server and its I/O. If you have money to waste, donate to the poor & hungry. Learn Linux instead.

Your CPU choice is bad; there are better options such as Ryzen Zen 3 CPUs, combined with 3200-3600 MHz memory.
This will also make your life beautiful.

The storage architecture is non-existent, please review your storage choices.

Sorry dude
 
You should make use of vSwitches and get 10 Gig Ethernet switches. Would recommend Mikrotik switches.
UniFi is a waste of money to be honest, so the hundreds spent on UniFi could be re-allocated to your actual server and its I/O. If you have money to waste, donate to the poor & hungry. Learn Linux instead.

The UniFi controller is for the APs, not for the switches. Thanks for your suggestion. No, money is not in surplus; I just need to do the bare minimum and not waste any. For switches I am aiming for TP-Link - I've been a loyal customer of theirs and been very well served.


Your CPU choice is bad; there are better options such as Ryzen Zen 3 CPUs, combined with 3200-3600 MHz memory.
This will also make your life beautiful.

Too late. I already have the 6700T :(

The storage architecture is non-existent, please review your storage choices.
Why so? This is my current plan (updated with @Dunuin's recommendations):

Hardware:
  1. 1x 512GB M.2 Patriot P300 [512SSD]
  2. 1x 256GB M.2 Patriot P300 [256SSD]
  3. 2x 6TB WD Red Plus / Seagate Ironwolf SATA [6TB]
  4. 1x1TB WD Purple [1TB]
  5. 1x6TB WD Red / Red Plus / Desktop <not sure> - Offsite attached to a Raspberry Pi [Offsite]
Logical
  1. [512SSD] formatted as ext4 and will be used to install Proxmox and all the VMs
  2. the software raided (mirrored) 2x [6TB] disks will be set as LVM with ext4 for data array [FileDataArray]
    1. Partition the [FileDataArray]: 75% to Seafile and 25% to Backups
  3. use the [256SSD] as LVM Cache for the [FileDataArray]
  4. Use the [1TB] for Frigate NVR
What / where do I need to re-think, please?
 
What / where do I need to re-think, please?
Hi, sorry for the late reply but I've found these articles really useful.
Mostly got those by searching this forum and the web.

https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/
https://www.ilsistemista.net/index....s-on-red-hat-enterprise-linux-62.html?start=5
https://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/

They're all worth reading and studying

Good luck
 
Hey :)

You need to keep these things in mind:
- ZFS is a block filesystem. This point is really important.
- If you declare a ZFS pool (native or a dedicated pool) in your virtualization environment, Proxmox won't allow you to create a qcow2 virtual drive, because it is handled as a block filesystem.
- The only way to use a qcow2 file on a zvol is to mount it in a directory and declare the directory in the Proxmox WebUI. BUT that virtual drive store doesn't allow you to snapshot running machines, and your syslog will be flooded with I/O errors.

Hoping this helps you with your choices
 
Hey :)

You need to keep these things in mind:
- ZFS is a block filesystem. This point is really important.
- If you declare a ZFS pool (native or a dedicated pool) in your virtualization environment, Proxmox won't allow you to create a qcow2 virtual drive, because it is handled as a block filesystem.
- The only way to use a qcow2 file on a zvol is to mount it in a directory and declare the directory in the Proxmox WebUI. BUT that virtual drive store doesn't allow you to snapshot running machines, and your syslog will be flooded with I/O errors.

Hoping this helps you with your choices
That's not completely true. ZFS has zvols (block devices) and datasets (filesystems). You can create a dataset and add it as a directory storage to store qcow2 files on that dataset.

But that indeed doesn't make much sense because of the additional overhead. ZFS is already copy-on-write (CoW) and qcow2 is CoW too; CoW on top of CoW creates way too much overhead.
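
Roughly (the pool, dataset and storage names are just examples):

# Create a dataset and register it as a directory storage so qcow2 files can live on ZFS:
zfs create rpool/qcow2store
pvesm add dir zfs-qcow2 --path /rpool/qcow2store --content images

# The more common approach is to let Proxmox use zvols directly via a zfspool storage:
zfs create rpool/vmdata
pvesm add zfspool local-zfs-vm --pool rpool/vmdata --content images,rootdir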
 
As a follow-up, I've reworked the plan based on your suggestions.

I now believe offsite and remote backups are better than RAID 1. I've also increased the RAM to 64GB (non-ECC). I'm not sure whether to go for the ASUS thin client rather than the RPis.

Comments welcomed.
 

Attachments: Hard drives.jpg
