Best Storage config for 4-disk server

stefanzman

I have some DELL 1U servers with 4 SAS drives and recently discovered that the built-in RAID is just a firmware (not hardware) implementation. I subsequently decided to try configuring the 4 disks as a Proxmox ZFS RAID 10 array and proceed with installation.
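
(For reference, what the installer calls ZFS RAID 10 is a stripe of two mirror vdevs. Built by hand it would look roughly like the sketch below; the pool name and disk paths are placeholders.)

Code:
    # ZFS "RAID 10": two mirrors striped together (pool name and disks are placeholders)
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    zpool status tank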

This appeared to work fine, but I am having second thoughts.

Would it be better to install the PVE OS on a separate disk, perhaps SSD, and use the ZFS volume for VM storage?

If so, I would need to figure out a good way to add a 5th drive. These 1U DELL chassis have only 4 hot-swap drive bays, so the additional drive would need to be shoehorned in somehow. My understanding is that the PVE OS, unlike VMware, does not run well from a USB stick. Is there another way?

Thanks for any suggestions.

Stefan
 
For what purpose would you install the PVE OS on a separate disk?
Why don't you want to install PVE (via the ISO image) on RAIDZ1 or RAIDZ2?
 
Gosha - I suppose it is just from our experiences with ESXi and Microsoft Hyper-V.

For these solutions, it is always recommended to separate the host OS from the VMs and general storage. I assume this makes disaster recovery easier and quicker, as the host machine and its basic config can be back up and running immediately. The VMs can then be restored in sequence.

Is this concept not applicable to PVE?
 
Hi!
For these solutions, it is always recommended to separate the host OS from the VMs and general storage. I assume this makes disaster recovery easier and quicker, as the host machine and its basic config can be back up and running immediately. The VMs can then be restored in sequence.
Is this concept not applicable to PVE?

I prefer, for VMs and CTs:
1st level - RAIDZn (for PVE without HA) or Ceph (for cluster shared storage)
2nd level - backups
3rd level - backups
:)

For the PVE system itself: removing the node from the cluster and reinstalling it (without restoring VMs and CTs from backups) takes about 10 minutes.
:)
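
(A rough sketch of that removal step, run from a node that remains in the cluster; the node name "pve2" is a placeholder:)

Code:
    # on a surviving cluster member, drop the failed node before reinstalling it
    pvecm delnode pve2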

Best regards,
Gosha
 
I guess the question still remaining is whether it makes sense, or is advisable, to physically separate the PVE OS storage from the VM storage? This is standard practice for VMware and Microsoft Hyper-V.

This is a good idea on any system (Proxmox included), especially in an enterprise environment.
 
Yes, I agree, but we are in a bit of a quandary with these DELL C1100 1U servers. It is difficult to find a workable secondary storage device for the PVE OS itself, as the 1U chassis provides limited options.

I may be able to get a PCI Express SSD (NVMe) installed with a riser card, but this looks unlikely. There is no CD caddy, and the USB ports are not v3.0. We could otherwise break the 4-disk array into two separate RAID1 mirrors, but this would be far from ideal. Searching for other ideas.

Assuming we have backed up all the VMs externally off the ZFS RAID10, how easy would it be to recover the system in the event of serious hardware failure? Can the system support an immediate cold migration to another PVE host? Can a new server be quickly set up with the PVE CD and the VMs imported intact?
 
I may be able to get a PCI Express SSD (NVMe) installed with a riser card, but this looks unlikely.

Then buy a cheap SATA controller (PCIe) and connect two SSDs to it. Install the Proxmox OS on those two SSDs (as a ZFS mirror, if you want), and after the install create the ZFS pool for storage (4 x HDD).
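
(A rough sketch of that post-install step; the pool name "tank", the storage ID "vm-store", and the disk paths are placeholders:)

Code:
    # create the data pool on the four HDDs as striped mirrors
    zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    # register the pool with PVE as a VM storage backend
    pvesm add zfspool vm-store --pool tank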

Can the system support an immediate cold migration to another PVE host? Can a new server be quickly set up with the PVE CD and the VMs imported intact?

Yes on both questions.
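
(A minimal sketch of the restore path, assuming the backup archive is reachable from the new host; the VMID 100, the storage IDs, and the archive name are placeholders:)

Code:
    # back up the VM from the original host (or use an existing scheduled backup)
    vzdump 100 --storage backup-store --mode snapshot
    # on the freshly installed host, restore it intact
    qmrestore /mnt/backup/vzdump-qemu-100.vma.zst 100 --storage local-zfs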
 
OK. I had forgotten about the mezzanine card option in the C1100. Thanks much for the reminder, Digitaldaz!

So, doing some more research, I came upon this very helpful thread - https://community.spiceworks.com/topic/316943-raid-controller-for-dell-poweredge-c1100?page=1

Basically it said this - "C1100s are a little more tricky, as you need to scour google for a C1100 power adapter to get a molex power connection inside the server. Couple that with a molex to 2x sata power splitter and purchase 2x msata SSDs + msata to sata converter mounts and you are in business!"

So, I know the right path for a separate RAID1 for boot & OS. But should I do it? Do I need it? Given that PVE and its VMs are so easily installed, restored, and moved around, it seems like an absolute necessity is questionable. We could have several of these C1100s up and running with ZFS RAID10 storage alone and relocate VMs as needed.

Thoughts?
 
One other option - use a couple of the available SATA ports on the DELL C1100 mainboard to connect two SSDs, and have PVE handle the RAID1 on those as well as the ZFS RAID 10 on the four 500 GB HDDs in the drive bays. Would that be too taxing for the system?
 
So, I know the right path for a separate RAID1 for boot & OS. But should I do it? Do I need it?

It is up to you. If something goes wrong with the boot/OS and you use ZFS on both (OS and data), it is simple to reinstall the OS and then import your data pool.
If you use the same disks for data and OS, recovery will take much longer: recover the data, move it to another server, reinstall the OS, then copy the data back.
So it is up to you to decide what is best for your case.
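
(A minimal sketch of that reinstall-and-import path; "tank" and "vm-store" are placeholder names:)

Code:
    # after reinstalling PVE on the boot disks, the data pool is still intact
    zpool import tank    # add -f if it complains the pool was in use by another system
    pvesm add zfspool vm-store --pool tank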
 
OK. This was the basis of my other question about moving VMs around easily.

Since I will have several of these servers running PVE, I was thinking it would be as follows:

1 - Server or storage crash
2 - Restore backups of VMs to a different PVE server (a few min, depending on size?)
3 - VMs up and running again
4 - Work on repairing the crashed server without impact on operation
 
Your plan is OK in theory :). You must test it and see if it actually works. By the way, you need two (different) plans, in case the primary plan does not work. On one occasion, my well-tested primary plan failed, but luckily my secondary plan succeeded. So keep two plans and write the steps down on paper. Test both from time to time, because your landscape is dynamic.
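
(One cheap way to test the restore step, sketched below; the archive path and the spare VMID 999 are placeholders. --unique regenerates MAC addresses so the test copy can coexist with the original:)

Code:
    # periodically test-restore a backup to an unused VMID, boot it, then clean up
    qmrestore /mnt/backup/vzdump-qemu-100.vma.zst 999 --unique --storage local-zfs
    qm start 999
    qm stop 999 && qm destroy 999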
 
OK. This was the basis of my other question about moving VMs around easily.

Since I will have several of these servers running PVE, I was thinking it would be as follows:

1 - Server or storage crash
2 - Restore backups of VMs to a different PVE server (a few min, depending on size?)
3 - VMs up and running again
4 - Work on repairing the crashed server without impact on operation

Do you plan to use ZFS? If so, then you must understand that ZFS is not shared cluster storage.
To get favorable answers to your questions, you need shared storage in the cluster (Ceph, for example).
See Table 1, column "Shared", in https://pve.proxmox.com/wiki/Storage
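
(If you go the cluster route, registering a Ceph pool as shared VM storage looks roughly like this; the storage ID, pool name, and monitor addresses are placeholders, and an external cluster also needs its keyring copied to /etc/pve/priv/ceph/<STORAGE_ID>.keyring:)

Code:
    # register a Ceph RBD pool as shared storage for VM disks
    pvesm add rbd ceph-vm --pool vms --monhost "10.0.0.1 10.0.0.2 10.0.0.3" --content images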

Best regards,
Gosha
 
I had not planned to use a cluster initially, but that is a very good point.

I did find that additional SATA connections can be added to this server with an adapter. Along with a Molex to SATA power splitter,
this would allow me to add an SSD connected to the available ports on the mainboard. I can then optionally use this drive for installing the PVE OS.
 
