[SOLVED] Setup/Config Recommendations

May 18, 2019
Los Angeles, CA USA
This is my first time running my own virtualization environment. PVE looks like the best option. I'd like to get some advice on setting it up.

I have a Supermicro box (HW RAID 1 available) with the following config:
  • 2x WD Red 4TB drives
  • 1x 2TB NVME SSD
  • 1x Xeon E 2136
  • 2X 16GB RAM
  • 2X 1Gbps NICs
  • 5 static IPs
  • Dedicated IPMI port
Intended usage
  1. I need to run about half a dozen VMs
  2. Guest VMs will be Ubuntu, mostly Bionic, maybe 1 Xenial.
  3. Guests need access that is as close to native hardware as possible, in order to use security keys and HSMs such as the YubiHSM 2
  4. Guests do not need to be hard constrained on resources, as I will be the single admin on all guests.
  5. Guests constantly read from and write to disk. A data dir on each guest needs to be backed up daily with the highest possible consistency; the system/application dir should be backed up once a week. Uptime is of utmost importance, but guests can pause operation for a few seconds and pick up where they left off. I can back up the entire guest if this is easier and won't increase downtime much (for example, with incremental backups)
  6. Guests ideally share IPs: no two guests will bind to the same port (for SSH I can use a different port for each guest). I would like to avoid NAT so I don't have to manage port forwarding.
  7. PVE and guest remote access would be via an SSH tunnel or VPN.
  8. I foresee adding another NVME drive in the future, and possibly 2 more HDs. Currently I use LVM to add more space to the system.
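For point 6, Proxmox normally attaches guests to a Linux bridge so they sit directly on the public network with no NAT. A minimal sketch of /etc/network/interfaces under that assumption (the addresses below are placeholders from the documentation range, not your actual statics, and `eno1` stands in for whichever NIC you uplink):

```
# /etc/network/interfaces (sketch; 203.0.113.x and eno1 are placeholders)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/29     # one of the host's static IPs
    gateway 203.0.113.9
    bridge-ports eno1           # enslave the physical NIC to the bridge
    bridge-stp off
    bridge-fd 0
```

One caveat: with plain bridging there is no NAT, but there is also no true IP sharing — each guest attached to vmbr0 configures its own address, so in practice each guest would take one of the five statics.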
Questions
  1. Which filesystem(s) do you recommend?
  2. Should I rely on HW RAID for the HDs or should I use PVE's RAID manager? Or maybe I can RAID 1 between the SSD and one of the HDs, leaving the other HD for backups.
  3. Anything you foresee I should handle in advance given my intended usage?
  4. If this increases complexity, never mind, but it would be ideal to have each guest's system/app dir on HD and its data dir on NVMe (only the data needs NVMe speeds)
  5. It would be great to have the swapfile on RAID 0 across the HDs (I guess this throws HW RAID out the window, but that's no problem)
Depending on this I will go ahead with PVE, in which case I will surely subscribe for support. Thank you!
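For the daily backups in point 5, PVE's vzdump can take snapshot-mode backups while guests keep running. A hedged sketch of a cron entry (the VM IDs, storage name, and schedule are placeholders, and the same job can be defined in the GUI under Datacenter > Backup):

```
# /etc/cron.d/pve-backups (sketch; VMIDs 101-103 and "backupstore" are placeholders)
# Daily 03:00 snapshot-mode backup; guests are not stopped.
0 3 * * * root vzdump 101 102 103 --mode snapshot --storage backupstore --compress lzo --quiet 1
```

Snapshot mode gives crash-consistent images; for the in-guest data dirs, the qemu-guest-agent can additionally quiesce filesystems at snapshot time.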
 

atom70

New Member
Nov 7, 2018
Hello Gaia,

I have been using Proxmox (the open-source version) for over 3 years now for various virtualization projects. I had several problems with software RAID 1 because I did not have a UPS when the power went out while I was doing heavy writes to my disks. I have always used the ext4 file system, so I can't really guide you on which file system to use.

I'd guess that if you're planning a lot of reads/writes, you'd probably be better off with hardware RAID, especially if your RAID card has a battery-backed cache. If, on the other hand, your infrastructure is secured by a UPS, software RAID could be your solution, but I still think hardware RAID is more effective if you have a RAID card.

I mainly use software RAID when I have no choice, for example when my server only has a plain SAS card.
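For completeness, a software mirror with Linux md is only a few commands. This is a sketch with placeholder device names; note that mdadm is not what the PVE installer sets up (ZFS RAID1, selectable at install time, is the supported software-mirror path there):

```
# Sketch: mirror two disks with Linux md (device names are placeholders).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0        # matches the ext4 setup described above
mdadm --detail /dev/md0   # verify both members show as active sync
```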

Also, don't forget to check whether your RAID card properly supports NVMe SSDs, because some cards do not and will quickly wear out the NVMe disk.
 
Gaia
I hear you. The machine will be at a Tier 4 DC, so I don't expect power problems. I might do SW RAID because of the flexibility.

So is it possible to have RAID 1 between disks of widely varying speed (NVMe and HD)? Does it slow the NVMe down to match the HD, or does it use some type of buffering?
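(For reference, Linux md can mirror a fast and a slow member and mark the slow one "write-mostly", so reads are served from the NVMe; writes must still land on both disks, though a write-behind bitmap lets the HDD lag by a bounded amount. A sketch with placeholder device names:)

```
# Sketch: RAID 1 of NVMe + HDD, reads favored from the NVMe.
# --write-mostly marks the HDD so it is only read as a last resort;
# --write-behind lets writes to it lag (requires a bitmap).
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=4096 \
      /dev/nvme0n1 --write-mostly /dev/sda
```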


atom70
I have never done that, but normally software RAID 1 works with two disks of the same size, so I imagine it is possible with an NVMe and an HDD of the same size.

As for the 'varying' speed, I don't think it will work well. Maybe someone else in the community will know how to answer you.

Out of curiosity, which datacenter will your server be located in? :)
 
Gaia
http://pcad.lib.washington.edu/building/9947/

