Starting into Proxmox--Best Version for My Hardware?

SamirD

Oct 12, 2019
I've been computing since the days of DOS and now think that it makes sense to virtualize a lot of the boxes I have since their hardware can do much more than they are currently doing. I have the following servers I would like to put to use:
  • HP DL380 G5 dual x5460 16-32GB RAM (haven't upgraded it yet) with RAID controller (not going to do IT mode and just run raid1) and 8x 300-600GB sas disks
  • HP DL360 G5 dual x5430 16GB RAM with RAID controller (not going to do IT mode and just run raid1) and 6x 146-300GB sas disks
  • HP DL360 G5 dual x5160 16GB RAM with RAID controller (not going to do IT mode and just run raid1) and 6x 146-300GB sas disks
  • Dell R410 dual x5670 128GB RAM with RAID controller (not going to do IT mode and just run raid1) and 2x 500GB sata disks
  • Dell R710 dual x5670 384GB RAM with RAID controller (not going to do IT mode and just run raid1) and 8x 300GB sas disks
  • Dell 2950 dual x5470 16GB RAM with RAID controller (not going to do IT mode and just run raid1) and 2x 300GB sas disks
  • Dell 2950 dual x5160 32GB RAM with RAID controller (not going to do IT mode and just run raid1) and 2x 146GB sas disks
  • Dell CS24-SC dual L5420 24GB RAM with RAID controller (not going to do IT mode and just run raid1) and 2x sas/sata disks (haven't installed yet)
With the age of this hardware and the RAID controllers, which version of Proxmox will be the easiest to set up? I'm not worried about the latest and greatest as this will basically be a lab network, but I don't want the hardware to be overly taxed trying to run the newest (and possibly most demanding) version of Proxmox. Each server will basically just run Windows guests that would be remoted into via RDP from a thin client.

Any ideas, suggestions, and questions are welcome. I've got to get this right from the get-go as I won't have too much time to mess with it once it is up and running. Thank you in advance.
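From the reading I've done so far, spinning up one of those Windows guests from the Proxmox CLI would look roughly like this (just my understanding, not tested yet; the 'local-lvm' storage and the ISO name are placeholders for whatever I end up with, and the web UI can do the same thing):

  # Rough sketch: a Windows guest with VirtIO disk/NIC (VirtIO drivers get installed inside the guest)
  qm create 100 --name win-rdp-01 --ostype win10 \
    --cores 4 --memory 8192 \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:64 \
    --net0 virtio,bridge=vmbr0 \
    --cdrom local:iso/windows.iso
  qm start 100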
 

PVE's demands are really small - you can run it even on an N40L, given that you have enough RAM. The only thing I'd advise is that you reconsider flashing your disk controllers to IT mode, since both Ceph and ZFS will welcome that. Other than that… that's quite a bunch of HW you have there for a lab - you will probably be in for a lot of fun setting it all up (I really mean it! ;) )
 
Thank you for the reply! That's great to hear. These things are 10 years old and a lot has changed in that time. Would you recommend I go with the most current version then? Would an older version not be more 'age appropriate' for the hardware?

I don't want to risk bricking the controllers. I know that ZFS likes to have a direct link to the target drive, but if I made them all JBOD through the controller, would it still have a fit? I wasn't planning on housing much data on these since there's a file server that already handles that duty, but instead just using the storage for each virtual disk that's needed. Or am I missing something conceptually that might be better?

Yep, it's a TON of hardware! Almost literally, lol. And considering all that was under $600, it's pretty sweet. :) It's amazing how much computing power you can get cheap these days.
 
So I just discovered that I can put 256GB of LRDIMMs into my HP z420, so add that to the stack of hardware. :D

Still wondering if an older version would be better for my older hardware, though, especially with all the issues people seem to be having with the upgrade to 6.x.
 
My advice: Go with the latest and greatest.
Typically the hardware (even when old) will still benefit from newer kernel development.
I have an old Opteron 6276 based system and it runs just fine. It upgraded just fine as well from 5.4 to 6.1 but my setup is really simple.

Major release upgrades can be a hassle, therefore I would choose the newest version of the software in all cases. Issues after an upgrade typically come from the upgrade itself, which is much more error-prone than a fresh install.
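For what it's worth, the in-place 5.x to 6.x upgrade is roughly the following (a sketch based on the official upgrade notes - read them before touching a real box):

  # Checklist script that ships with PVE 5.4 - flags known blockers before you start
  pve5to6

  # Point the Debian base repos from stretch to buster
  sed -i 's/stretch/buster/g' /etc/apt/sources.list
  # ...and adjust the PVE repository file under /etc/apt/sources.list.d/ the same way

  apt update
  apt dist-upgrade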

To be honest, I would try to replace a lot of the older gear (2950, G5 HPs) with one larger server (R710, whatever). They will eat a lot of energy while not bringing much to the table, CPU- as well as memory-wise. In my opinion, anything below 64 GB is probably not worth the effort.
This statement assumes you are running VM workloads. Containers might be different (as they consume much less memory).
 
I would install the latest version of Proxmox. I run a small cluster at home with intel X56xx processors and everything runs really well.

Personally, if I had your gear, I would take the 3 to 5 newest machines and build out a cluster - maximizing RAM and drives per host. Unless your goal is to use every bit of gear you have.

Oh, and maybe you can make a thread about your journey over on Linux Kung-Fu. ;) (assuming your username checks out)
 
My advice: Go with the latest and greatest. […]
Good to hear that there's benefit and not bloat with newer versions. I've yet to see this in any OS development, but I'll give it a shot before trying a 5.4 install (since that worked for you).

I'm not worried about energy as I've pretty much got a nuclear plant in my back yard so energy is dirt cheap. Right now the systems are still on and being used, but not being utilized fully like they could with VMs, hence the move to VMs. But what is a VM workload and what is a container? Conceptually, that is? I've seen the terms several times in my research, but still can't wrap my head around them. :(
 
I would install the latest version of Proxmox. […]
Interesting idea. What would the benefits of a cluster be? Could I live migrate VMs from one machine to another? Or will they automatically load balance between each other? It would be nice to just have a 'pool' of servers that can serve out whatever VMs I want as much as I want. :D Unlimited power!! :D

Most of the systems aside from the R410 and R710 are maxed out on RAM or close to it. Once I find a nice deal on RAM for the R410 and R710, I'll max them out too (to 256GB and 384GB, respectively). I recently found out my HP z420 with an e5-2630L can be upgraded to 256GB, so that system is going to be used for Proxmox as well.

The goal is to use every bit of gear. I plan to virtualize systems that I'm currently using bare metal, so I'd just be logging into the same virtual systems except they would be running on proxmox--at least that's the idea. And the concept would be that I could add as many virtual systems as proxmox would allow me to. :D

I have no Linux Kung-Fu, so that can't be me. I wish it was!
 
Energy dirt cheap? Sounds like a dream to me. Putting that aside, I still wouldn't mess with resources. But anyway, that's my take on it.

This is not technically correct, but it might help: containers are basically VMs without the nasty part: the OS.
VM OSes add a lot of overhead (memory, disk space, services, etc.), so the "newest kid in the datacenter" is containers.
Those are typically stateless and run just the application on an instantiated base OS.

Containers have less security isolation than VMs, but require far fewer resources, which means density goes up. However, you need an application that can be containerized, and not all apps can do that. I thought about moving to containers, but skipped it - several topics weren't ready yet. I'll have a look at it again in a few years (all home stuff).
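In Proxmox terms that's the difference between an LXC container (pct) and a QEMU VM (qm). A rough sketch to show the contrast (the template, storage and bridge names are just examples, not necessarily what you will have):

  # LXC container: shares the host kernel, so a small rootfs and a few hundred MB of RAM are enough
  pct create 200 local:vztmpl/debian-10-standard_amd64.tar.gz \
    --hostname ct-demo --memory 512 --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

  # Full VM: brings its own kernel and OS, so it needs far more memory and disk
  qm create 201 --name vm-demo --memory 4096 \
    --scsi0 local-lvm:32 --net0 virtio,bridge=vmbr0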

HTH
 
It is--gas bill is 4x higher than electricity which is about the same as water, which the last time I checked was 10 cents for 1000 gallons. :D

Thank you for the brief on containers--they almost sound like a snap-in of sorts versus a full blown VM, which would make sense why they're lighter. :)
 
Your CPUs should support virtualization too, and you'll need lots of RAM.
Yep, I've been reading about the RAM and CPU requirements. All of my equipment supports virtualization on the cpu on different levels, and as long as I have enough ram to run more than what the machines are doing now I'll be happy. The sas storage is nice and snappy with cached controllers hitting basically nvme speeds while in the cache. :)
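For anyone following along, the quick check I ran for the CPU flags is simply:

  # Non-zero output means the CPUs advertise VT-x (vmx) or AMD-V (svm);
  # if KVM still refuses to start guests, check that virtualization is enabled in the BIOS
  egrep -c '(vmx|svm)' /proc/cpuinfo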
 
SamirD
The reason to run as a cluster is to have a single user interface for all the Proxmox machines.
The ability to migrate machines between nodes should it be needed (i.e. keep a guest machine up while you restart a host machine).
The ability to manually load/performance balance guests between different hardware.
The ability to migrate machines to a newly added machine.
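Setting that up is only a couple of commands (a sketch - the cluster name, IP, node name and VM ID below are placeholders):

  # On the first node: create the cluster
  pvecm create labcluster

  # On every additional node: join it via the IP of an existing member
  pvecm add 192.168.1.10

  # Live-migrate a running VM to another node (easiest with shared storage)
  qm migrate 100 node2 --online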
 
[…] The sas storage is nice and snappy with cached controllers hitting basically nvme speeds while in the cache. :)

Well then don't use either ZFS or Ceph with that.
 
The reason to run as a cluster is to have a single user interface for all the Proxmox machines. […]
Ooooo...I like these features. Sounds like I will like clustering. :D
 
Probably won't need to unless I have to. What's the deal with ZFS and needing direct target control again? I remember reading about it a long time ago, but don't know how it is involved with Proxmox.

Well… ZFS - and that goes for Ceph as well - both ensure that the data has made it onto the disks correctly. If you have a (caching) HW RAID controller between the disks and ZFS/Ceph, it may be that some operations are flagged as being on stable storage when they in fact aren't. It may not be of much importance in a lab setup, but I'd never trust my data to any HW RAID controller these days. Pressure on price/performance has been so great that the vendors seem to have settled on shady compromises regarding safe programming. Especially a series of LSI controllers has been infamous for causing all sorts of trouble, and LSI is what still drives most of the HBAs.
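To make the "direct link to the disks" point concrete: with an HBA in IT mode (or true pass-through), you hand ZFS the raw disks and let it do the redundancy itself - a sketch, with the device IDs as placeholders for your own disks:

  # RAID1-equivalent mirror built by ZFS directly on two raw disks, addressed by stable IDs
  zpool create tank mirror \
    /dev/disk/by-id/scsi-35000c500aaaaaaaa /dev/disk/by-id/scsi-35000c500bbbbbbbb

  # ZFS should see whole disks here, not one logical volume exported by a RAID controller
  zpool status tank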
 
