Before buying some servers, thoughts?

saphirblanc

Well-Known Member
Jul 4, 2017
Hi,

We're going to purchase some refurbished servers, and this one is our preferred choice for now:
  • HP ProLiant DL360p Gen8 (SFF)
    • 2 x Intel Xeon E5-2697 v2
    • 256GB Registered ECC DDR3
    • HP Smart Array P420i 2GB Cache
    • 1 x HP 1Gbit Ethernet 4-Port 331FLR FlexibleLOM Adapter 634025-001
    • 1 x LAN Controller 10 Gigabit CX311A Single Port RDMA RoCE - 1x 10GbE SFP+ (Server 2016 / VMware 6.7 compatible) FP
    • 2 x 2.5" 1.8 TB 10k SAS 12Gb/s RAID Enterprise Storage 24/7 HDD (RAID 1), for larger VMs needing less I/O
    • 6 x Samsung SSD PM863a OEM Enterprise 960 GB (RAID 5)
Obviously, we'd like to use ZFS instead of the P420i HW RAID controller, and then benefit from the replication feature Proxmox offers, replicating to a third system.

However, I've read online that it's kind of hard to make this work, even using HBA mode: boot seems to fail after the installation. Therefore, before buying anything and spending $15k (3 servers), I'd like to get your thoughts and recommendations. Should we pick something other than HPE?

Thank you very much.
 
Hi,
HPE has the disadvantage that you only get updates with a support contract!

If you buy a Dell server, like the R630 or R730, you can use the disks in non-RAID mode (like an HBA).
Unfortunately, with the older ones (R620, R720, ...) you have the problem that you don't get direct access to the disks (RAID0 on each single disk works, but that's ugly).

Udo
 

Hi!

Thanks for your message.

About the updates: all of them will be installed, and I'll get onsite support for 3 more years, so I'm not so worried about this.
My main concern is buying hardware on which I won't be able to use ZFS and its advantages. Of course, if really needed, I could use the hardware RAID with ZFS on top, but that's not recommended.

Yann
 
As a total Proxmox newbie, I've taken the decision to stick to the hardware RAID controller, despite the disadvantages of not being able to use ZFS and therefore replication, which I really, really wanted. So if you do end up with hardware RAID, you won't be alone.

Your choice to use RAID 5 on the SSDs did raise an eyebrow with me, though. I'd be more comfortable with RAID 10, despite the loss of 50% of available space, plus it would give a bit of a performance boost. That's just my extremely risk-averse opinion though :)
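The capacity trade-off between the two layouts is easy to put in numbers; a quick sketch for the six 960 GB SSDs from the original spec:

```shell
# Rough usable-capacity comparison for 6 x 960 GB SSDs.
n=6; size=960
raid5=$(( (n - 1) * size ))    # one disk's worth of parity
raid10=$(( n / 2 * size ))     # mirrored pairs: half the raw capacity
echo "RAID5:  ${raid5} GB usable, survives any 1 disk failure"
echo "RAID10: ${raid10} GB usable, survives 1 failure per mirror pair"
```

So RAID10 gives up roughly 1.9 TB of usable space here in exchange for better write performance and rebuild behavior.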
 
Just out of curiosity, did you end up using HW RAID with ext4, or ZFS on top of it? Thanks!
 
I've played with some DL360 Gen8 servers with the P420i. You can force it into HBA mode from a hidden CLI: starting with SPP 2017.04, press CTRL+ALT and, keeping them pressed, type the sequence x, d, b; a CLI appears and you can force HBA mode.

The problem is that it is not able to boot from any of the ports if you set it to HBA mode. You have two solutions:

  • convert the CD-ROM bay to a disk bay with an adapter and boot from an HDD there (but then you have no redundancy on the Proxmox install disk)
  • remove the P420i and use a real HBA like the H240.
A Proxmox version tuned to run from a USB drive or SD card would be perfect here.
 
There is a third option, which we are using with our dl380 servers with p420i in HBA mode: Install the grub bootloader on a small USB stick
I've tried this solution for 3 days and something like 50 reboots, so please tell us how you do it ;)
 
Hi,
the GRUB bootloader is only used during boot; it can't be the reason for reboots.

Sounds more like a BIOS setting for a watchdog or something else? But I don't have much HP experience.

Udo
No no, I mean 50 reboots across all the tests I've made trying to boot from USB.

So, if possible, I'm interested in the procedure to make the USB drive.
For now I've created a RAID0 for every single disk, and that's not the best solution.
 
There is a third option, which we are using with our dl380 servers with p420i in HBA mode: Install the grub bootloader on a small USB stick

Did you follow some guide? Because I've googled, but I can't find how to make the USB drive.
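In the absence of a guide, a GRUB-on-USB setup of the kind described above typically looks something like the sketch below. This is only an outline under assumptions: /dev/sdX stands for the real USB device, the Proxmox root is on ZFS, and the exact steps on a P420i system may differ.

```shell
# Sketch: keep /boot and GRUB on a USB stick so the server can boot
# even though the P420i in HBA mode offers no bootable port.
# /dev/sdX is a placeholder -- replace it with the real USB device!

# 1. Partition and format the stick:
sgdisk --zap-all /dev/sdX
sgdisk -n1:0:0 -t1:8300 /dev/sdX
mkfs.ext4 -L usbboot /dev/sdX1

# 2. Copy the existing /boot onto the stick, then mount it over /boot
#    (add it to /etc/fstab by LABEL or UUID so it persists):
mount /dev/sdX1 /mnt
cp -a /boot/. /mnt/
umount /mnt
mount LABEL=usbboot /boot

# 3. Install GRUB onto the stick and regenerate its config:
grub-install /dev/sdX
update-grub
```

The server's BIOS is then set to boot from the USB stick, which hands over to the ZFS root on the P420i disks.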
 
In the end, I'm leaning towards replacing the HP servers with Supermicro servers with the following (x2):
  • 1029P-WTRT
  • 2 x Intel Xeon Silver 4114
  • Samsung 256GB DDR4 2666MHz ECC RDIMM
  • 2 Port SFP+ 10GbE Adapter
  • 3 x Seagate 2TB Enterprise SATA 6Gb/s 2.5" (RAID 5) > rpool-hdd
  • 5 x Samsung SM863a 960GB 2.5" Enterprise SATA SSD (RAID 6) > rpool
And the backup node:
  • 5019P-MR
  • 1 x Intel Xeon Silver 4112
  • Samsung 128GB DDR4 2666MHz ECC RDIMM
  • 2 Port SFP+ 10GbE Adapter
  • 4 x HGST Ultrastar He10 3.5" 8TB 6Gb/s SATA HDD (RAID 5)
I'm still thinking about the ZFS pools, as the pool names need to be exactly the same on all three nodes to use the ZFS replication feature. I might therefore change the backup node to a 2U chassis to be able to add more disks and, obviously, matching ZFS pools.

What do you think?

Thanks again for your replies :).
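For reference, if those RAID levels are done in ZFS rather than on a controller, RAID5 maps to raidz1 and RAID6 to raidz2, and the two pools would be created roughly like this (a sketch; the device names are placeholders, and /dev/disk/by-id paths are preferable in practice):

```shell
# Hypothetical pool layout matching the Supermicro spec above.
# 3x 2TB SATA HDDs as the equivalent of RAID5:
zpool create rpool-hdd raidz1 /dev/sda /dev/sdb /dev/sdc

# 5x SM863a SSDs as the equivalent of RAID6:
zpool create rpool raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Verify both pools are ONLINE:
zpool status
```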
 
Hi,
why should the pools be exactly the same?

I have such a scenario running for a customer with two locations.

Server A has an "a-pool" with SSDs and a "b-pool" with HDDs (striped mirrors). Server B has the b-pool with SSDs and the a-pool with HDDs.
Every 15 minutes the disks are synced between the servers with pve-zsync, and every 6 hours with znapzend to the other location (for disaster recovery).
Sometimes I have to step in on the znapzend task, but overall it works well.

The HDD pools are not for production work! If one node fails, the images from the HDD pool are migrated to a shared SAS storage.

Udo
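A pve-zsync job like the one described above would be set up roughly as follows; a hedged sketch in which the VMID, IP address, pool name, and job name are all made up:

```shell
# Hypothetical: sync VM 100 to pool "b-pool" on the other server,
# keeping the last 7 snapshots (pve-zsync runs from cron, every 15
# minutes by default).
pve-zsync create --source 100 --dest 192.0.2.10:b-pool --name a-to-b --maxsnap 7

# List configured jobs and check their state:
pve-zsync list
```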
 
Hi Udo,

I've tested in my lab replicating A and B to C. In case A or B goes down, I should be able to resume normal (or nearly normal) operation on C, and move the VMs back to A or B when the failed node comes back.

However, with ZFS I've seen that I'm unable to create a replication task to node C if it does not have a pool with the same name. As some VMs will be hosted on rpool-hdd, I would need the same pool on C. Am I right?

Thanks again for sharing!

Yann
 
Hi,
yes, with the built-in sync from PVE you need the same pool name. With znapzend you can use different names.

Udo
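For the record, the built-in PVE replication is driven by the pvesr tool; a sketch with a made-up VM ID and node name:

```shell
# Hypothetical: replicate VM 100 to node "nodeC" every 15 minutes.
# This only works if nodeC has a ZFS-backed storage with the same name
# as the source (hence the matching pool names discussed above).
pvesr create-local-job 100-0 nodeC --schedule "*/15"

# Show configured replication jobs and their last/next sync times:
pvesr list
pvesr status
```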
 
Here we go! I bought the three servers and installed Proxmox with ZFS (two ZFS pools on each system).

I only have one issue (if we can call it an issue): it takes about 12 minutes from power-on until GRUB shows up. The Supermicro BMC takes about 2 minutes, and then it's like a blackout until GRUB appears.

What I thought first was that it was booting via EFI, so I tried to manually adjust this setting in the BIOS to use Legacy BIOS only; unfortunately, it did not help. The behavior is the same on the two main servers, while the backup system, with less RAM and only HDDs (8), works normally and takes around 3-5 minutes to show GRUB. The BIOS settings are exactly the same on all nodes.

Do you have any guess where I should look? Perhaps it's completely normal that it takes 10-15 minutes, but I find it weird.

Thanks!
 
There is a third option, which we are using with our dl380 servers with p420i in HBA mode: Install the grub bootloader on a small USB stick
Hi, I too am interested if you could write some kind of how-to for your solution when you have some time. Thanks!
 
