A Question about Installing on Weak Hardware (PVE and PBS)

Whitterquick

Hey all,

Quick question. I am thinking about setting up a secondary server for backup and testing and have been looking at some cheap NUCs to mess around with. I was wondering if a modern Celeron/Pentium would be fine for this and work well with PVE and PBS? They seem to be within the recommended requirements (quad-core, 8GB RAM, etc.) and have VT-d, but obviously won’t support ECC RAM or many of the bells and whistles of a typical server. I have installed various distros on older/weaker hardware before and they have worked fine, but is there any reason I might regret doing this? Anything that might not work?
 
It won't be very versatile. At least I can't think of using just a NUC without the possibility to add in and pass through PCIe cards. Right now I'm using 6 PCIe cards and would like to use 10, but would need to upgrade to an ATX or E-ATX mainboard first. PBS doesn't need great hardware (except for the SSDs), and for PVE it really depends on how much and what VMs you are planning to run on it. If you just want to run some LXCs it might be fine, but as soon as you want to run some Windows VMs you need some RAM and CPU performance.
 
It would literally just be for backups, testing, and moving the odd VM for maintenance. Not for production use.

6 PCIe cards, wow! What system are you running on? Are these NICs, HBA/RAID and GFX?
 
2x HBAs (PCIe 8x) for passthrough to NAS VMs (would like to use a 3rd one)
2x 10Gbit SFP+ NICs (PCIe 4x) for fast storage backend between the nodes (would like to use a 3rd one)
1x Quad port Gbit NIC (PCIe 4x) for OPNsense VM
1x GT710 GPU (PCIe 16x) for HTPC Win VM (would like to use 2 more GPUs for other VMs)
0x Dual M.2 adapter card (PCIe 8x) which I can't use because no PCIe slots are left, so I can't use my NVMe SSDs

Damn...so I actually would need 11 slots and not just 10 o_O
At least I don't want to use my home servers as workstations or capture boxes. The 11 slots are just for storage, NICs and GPUs. Other people might also need them for USB passthrough, sat receivers, capture cards, sound cards and whatever.
 
That’s quite a lot in one box (if it is all in one box); I would be scared of putting too many eggs in one basket, even with backups! I’m guessing you have HA enabled with all that?
 
No, it's across 3 servers.

2x HBAs + 1x 10Gbit SFP+ NIC is for the main NAS.
1x HBA + 1x 10Gbit SFP+ NIC would be for the backup NAS.
1x 10Gbit SFP+ NIC + 1x Quad port Gbit NIC + 3x GPU + 1x Dual M.2 adapter card would be for the Hypervisor.

But right now all motherboards are just mATX with 3 PCIe slots, so I would need to upgrade the hypervisor to ATX or E-ATX (which costs around 300€ even second hand) to be able to use all the hardware I would like to use.
 
Say I have two servers; Server1 has 2 additional disks attached, and Server2 has 1 additional disk attached. Is there a way I can share the second disk on Server1 with Server2 without it being an NFS/SMB share inside a VM?

Server1
-Boot Disk
-Disk1A (VMs, LXCs)
-Disk1B (ISOs, Backups, Snapshots)

Server2
-Boot Disk
-Disk2A (VMs, LXCs)
-Disk1B (ISOs, Backups, Snapshots) <—shared from Server1

Possible or no?
 
Is this all for home use? A fully 10Gbit setup costs a small fortune!
 
Is this all for home use?
Yup, just for me alone at home.
A fully 10Gbit setup costs a small fortune!
Wasn't cheap but not that bad:
- 180€ for a new, retail, power-efficient (9-22W) 24-port Gbit + 4-port 10Gbit SFP+ managed switch (works with tagged VLANs, LACP and all that nice stuff) [Aruba JL682A]
- 35€ each for the three second-hand 10Gbit NICs [Mellanox ConnectX-3 MCX311A-XCAT]
- 10€ each for the two second-hand 3m DACs [Cisco, don't remember the model]
- around 50€ for a new 15m fibre optic cable and two second-hand transceivers [Cisco, don't remember the model]
- maybe 15€ for some wire protection tubes so my cats won't kill the fibre optic cable
So all together around 370€.

Had a lot of fun segmenting my LAN into a dozen VLANs and bonding NICs, so it was worth it. And backups are so much faster now when you aren't bottlenecked by the 117 MB/s of a Gbit NIC. Now it's faster to read stuff from the NAS than from a local SATA SSD.
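In case someone wants to copy the bonding part, a minimal sketch of what an LACP bond could look like in /etc/network/interfaces on a PVE node (the interface names eno1/eno2, the bridge address and the 802.3ad mode are just example values here; the two switch ports have to be configured as a matching LACP/LAG group):

# nano /etc/network/interfaces

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# ifreload -a (with ifupdown2; otherwise restart networking or reboot)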
 
Last edited:
  • Like
Reactions: Whitterquick
I have been wondering if I should get a basic managed switch, but I'm not sure if it's really worth it if I only have 2 NICs.

Is a managed switch the only way VLANs are possible?
 
Yep; otherwise you can only use VLANs inside your PVE server, so you are very limited in what you can do with them.
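To give a rough idea, using VLANs inside PVE boils down to making the bridge VLAN-aware and tagging the guest NICs. A minimal sketch (vmbr0, eno1, the address and VLAN 43 are just example values; the bridge port could also be a bond):

# nano /etc/network/interfaces

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

- then tag a VM's virtual NIC, e.g. with VLAN 43:
- # qm set <vmid> -net0 virtio,bridge=vmbr0,tag=43

Without a managed switch those tags only mean something between guests on that host; as soon as the tagged traffic should leave the server, the switch has to understand VLANs too.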
 
Is there anything like an IPMI console for regular hardware to access a system before the OS boots, for example to enter encryption passwords? (Similar to what HP has.) I know Supermicro has one, but what about systems that don’t?
 
There are a lot of WebKVMs you can attach to any hardware, but these aren't cheap. Even if you get a do-it-yourself PiKVM v3 it will probably cost you 200-300$.

But if you just want to unlock your encryption you don't need a WebKVM. For that you can boot a Dropbear SSH server from the initramfs on the unencrypted part of your storage, SSH into it to unlock the encrypted storage that holds your OS and all other data, and then boot the now unlocked OS from there.
 
I do remember you mentioning dropbear before but I haven’t gotten round to looking into it yet. Any guides?

EDIT: I found this guide but let me know if you have a better one or if anything here is bad practice :)
 
From my notes while setting it up myself:
008.) Install dropbear-initramfs to unlock LUKS via SSH
- # apt install dropbear-initramfs dropbear busybox
- # nano /etc/dropbear-initramfs/authorized_keys
- put your RSA public key in there
- change SSH port:
# nano /etc/dropbear-initramfs/config
- replace the line "#DROPBEAR_OPTIONS" with "DROPBEAR_OPTIONS="-p 10022 -j -k -c cryptroot-unlock""
which will make Dropbear use port 10022 instead of 22 and then automatically ask for the LUKS password
- # nano /etc/initramfs-tools/initramfs.conf
- change the line "DEVICE=" to "DEVICE=eno2" so only the management NIC is used.
- there is also this line to set up the network configuration:
ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>
- so use something like: "ip=192.168.0.2::192.168.0.1:255.255.255.0:Hypervisor:eno2:off:192.168.0.1:192.168.0.1:"
- # update-initramfs -u
- reboot the server and check if unlocking via SSH works (make sure to connect to port 10022 and not the default port 22); a rough example session is below these notes
- looks like static IPs aren't working but DHCP does, so I set up the router to always give the Dropbear MAC the same IP
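For reference, a successful test from another machine on the same network should then look roughly like this (IP and port taken from the example above; the exact prompt text depends on your cryptsetup/Dropbear versions):
- # ssh -p 10022 root@192.168.0.2
- because of the "-c cryptroot-unlock" option Dropbear should immediately ask for the LUKS passphrase; if it doesn't, run it by hand:
- # cryptroot-unlock
- after entering the correct passphrase the initramfs continues booting the unlocked OS and the SSH session gets closed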
Edit:
I switched to VLANs and now everything only works over a VLAN trunk. Because initramfs by default can't use VLANs, you need to set up the hooks from this GitHub repo first: "https://github.com/skom91/initramfs-tools-network-hook". For that you need to do:
- # nano /etc/initramfs-tools/scripts/local-top/vlan
insert content of "https://github.com/skom91/initramfs...er/etc/initramfs-tools/scripts/local-top/vlan"
- # nano /etc/initramfs-tools/scripts/local-bottom/vlan
insert content of "https://github.com/skom91/initramfs...etc/initramfs-tools/scripts/local-bottom/vlan"
- # nano /etc/initramfs-tools/hooks/vlan
insert content of "https://github.com/skom91/initramfs-tools-network-hook/blob/master/etc/initramfs-tools/hooks/vlan"
- set permissions:
# chmod 755 /etc/initramfs-tools/scripts/local-top/vlan
# chmod 755 /etc/initramfs-tools/scripts/local-bottom/vlan
# chmod 755 /etc/initramfs-tools/hooks/vlan
- # nano /etc/initramfs-tools/initramfs.conf
Instead of these lines...

DEVICE=eno2
"ip=192.168.0.2::192.168.0.1:255.255.255.0:Hypervisor:eno2:off:192.168.0.1:192.168.0.1:"

... use these:

VLAN="ens5:43"
IP=192.168.0.2::192.168.0.1:255.255.255.0::ens5.43:off

With this, initramfs will use the interface ens5 with tagged VLAN ID 43, which results in an interface "ens5.43", and this will be assigned the static IP "192.168.0.2" with a subnet mask of "255.255.255.0" and "192.168.0.1" as gateway
- rebuild initramfs:
# update-initramfs -u
Didn't write it down, but you also need to create the cryptroot-unlock script as described here: https://github.com/ceremcem/unlock-luks-partition#3-create-the-unlock-script

There are some tutorials out there:
https://www.cyberciti.biz/security/how-to-unlock-luks-using-dropbear-ssh-keys-remotely-in-linux/
https://github.com/ceremcem/unlock-luks-partition

I think I followed the latter one.
 