Beginner needs advice for host SSD / HDD configuration

bezse

New Member
Nov 12, 2020
Hello there,

I have searched and read all the related configuration topics, but I would like to ask for a little help too.

HP ProLiant DL380e Gen8 / 2 x Intel Xeon 8-core E5-2450L @ 1.8GHz, 20MB cache / 16GB DDR3 RAM / Intel C600 chipset / 14 x 3.5" HDD bays / P420 RAID controller with 1GB RAM / 4 x 1GbE NICs

I'd like to build a great host for the following services:
-Plex Media Server as KVM or LXC (haven't decided yet)
-Roon ROCK as KVM (already found a good article about it)
-Xpenology (I'm lost, but trying to find answers)

I wonder what the optimal configuration for this purpose is?

My thoughts are:
-(SAS) SSD HW RAID1 array for the host and VMs
-HW RAID5 array for media and documents (shared through Xpenology, accessed by the two other VMs too)

Is this configuration best practice for our needs, or do I need to reconsider something?

I still don't know how to use those 4 x 1GbE NICs :(

Any advice, ideas, or thoughts are welcome!

Would SAS HDDs in a HW RAID1 array for the host and VMs be worse, the same, or better?
 
Ideally you would use enterprise-grade SSDs for the VMs. I personally wouldn't use RAID5; make sure that you know about its pros and cons in contrast to other levels like RAID10.

I still don't know how to use those 4 x 1GbE NICs :(
Do you plan to have a single host attached to your router?
 
Ideally you would use enterprise-grade SSDs for the VMs. I personally wouldn't use RAID5; make sure that you know about its pros and cons in contrast to other levels like RAID10.

Do you plan to have a single host attached to your router?

Thanks for your reply!

I'm planning to use enterprise-grade SSDs for the VMs.
My question is whether it is a good scenario to share the same SSD mirror between the host (Proxmox) and the VMs.

Losing a single drive is an acceptable risk for this data, but maybe I'll consider RAID6.

This server connects to a Gigabit switch, which connects to another Gigabit (PoE) switch with wired clients and PoE APs.
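
From what I've read, one common way to put all four NICs to work in this topology is an LACP bond underneath the Proxmox bridge. A minimal sketch for /etc/network/interfaces, assuming the switch supports 802.3ad; the interface names and addresses are placeholders:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0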
 
Your biggest issue is the P420 RAID controller. For maximum flexibility it needs to operate in HBA mode, i.e. non-RAID, especially if you are contemplating running a virtual Xpenology NAS (FreeNAS might be a better option), where I presume you will be looking to pass the hard drives through to the virtual machine?

However, it's not trivial to configure the P420 to operate in this mode, and I've seen comments that the driver performance is not great, as it's not a 'mainstream' product. You should be looking at an LSI controller or an OEM-compatible version - see here for more info: https://www.servethehome.com/buyers...s-freenas-nas-servers/top-picks-freenas-hbas/
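
To see what the controller is actually doing, HPE's ssacli tool can report its status and configuration. Whether a given P420 firmware accepts the HBA-mode switch at all is something to verify first; the commands below are a sketch, with slot 0 as an assumed slot number:

Code:
# show controller status and the current logical/physical drive layout
ssacli ctrl all show status
ssacli ctrl all show config detail

# on controllers/firmware that support it (verify for the P420 first!)
ssacli ctrl slot=0 modify hbamode=on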
 
Your biggest issue is the P420 RAID controller. For maximum flexibility it needs to operate in HBA mode, i.e. non-RAID, especially if you are contemplating running a virtual Xpenology NAS (FreeNAS might be a better option), where I presume you will be looking to pass the hard drives through to the virtual machine?

However, it's not trivial to configure the P420 to operate in this mode, and I've seen comments that the driver performance is not great, as it's not a 'mainstream' product. You should be looking at an LSI controller or an OEM-compatible version - see here for more info: https://www.servethehome.com/buyers...s-freenas-nas-servers/top-picks-freenas-hbas/

Thanks for your reply!

Why do I have to put it into HBA mode if I'd like to use it with an Xpenology VM? I thought the host (Proxmox) deals with the volume itself and I'd just set a mount point for Xpenology.

I've got some experience with FreeNAS, but I'd like to use Xpenology instead of it (mobile apps etc.).
 
FreeNAS uses ZFS, and ZFS can't use disks attached to HW RAID. Also, you can't just use mount points: FreeNAS inside a VM needs direct access to the drives, so you need to PCIe-passthrough your HBA to the VM. Not sure about Xpenology, but I would bet you also want to pass through your HBA for less overhead.
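
Roughly, the passthrough steps on the Proxmox side look like this; the PCI address and VM ID are examples, not values from this thread:

Code:
# 1. enable the IOMMU (Intel CPU): add intel_iommu=on to
#    GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub && reboot

# 2. find the HBA's PCI address
lspci -nn | grep -i -e lsi -e raid

# 3. pass the whole controller through to the VM (ID 100 here)
qm set 100 -hostpci0 0000:05:00.0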
 
FreeNAS uses ZFS, and ZFS can't use disks attached to HW RAID. Also, you can't just use mount points: FreeNAS inside a VM needs direct access to the drives, so you need to PCIe-passthrough your HBA to the VM.

Thanks for your reply!

As I mentioned earlier, I would not like to use FreeNAS; I've stuck with Xpenology because it is more (user-)friendly.

If it isn't necessary, I'd rather not use SW RAID either, as there is a HW controller available.
 
HW RAID isn't always better. You are screwed if your controller fails and you can't find a replacement anymore, and a HW RAID really should use a battery backup unit. It really depends on the situation. But it looks like it is possible to use HW RAID with Xpenology.
 
As far as I know, Xpenology is a clone of the OS on Synology NAS units, and I would imagine that it too will require either direct access to the physical drives or some form of multi-drive emulation in order to function.

Personally, I would either use Proxmox as a fileshare host using Samba and NFS (which is what I do on my system, because you can then leverage mount points to provide shared media libraries to virtual hosts such as Plex or Emby, and the whole pool's storage is still accessible for VM disks, backups, or whatever else I might need), or I would run TrueNAS (FreeNAS) if I wanted the nice GUI for managing file shares and doing things like AFP and WebDAV. A sketch of the mount-point approach follows below.
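
As a minimal sketch of that mount-point approach (pool path, container ID, and share name are all examples):

Code:
# bind-mount a host directory into an LXC container, e.g. for Plex
pct set 101 -mp0 /tank/media,mp=/mnt/media

# and share the same directory from the Proxmox host via Samba
# (append to /etc/samba/smb.conf)
[media]
        path = /tank/media
        read only = no
        valid users = family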
 
HW RAID isn't always better. You are screwed if your controller fails and you can't find a replacement anymore, and a HW RAID really should use a battery backup unit. It really depends on the situation. But it looks like it is possible to use HW RAID with Xpenology.

I can't argue with you on the HW vs SW RAID topic; I just want to use the most painless solution, as this whole project is aimed at a family server.
 
As far as I know, Xpenology is a clone of the OS on Synology NAS units, and I would imagine that it too will require either direct access to the physical drives or some form of multi-drive emulation in order to function.

Personally, I would either use Proxmox as a fileshare host using Samba and NFS (which is what I do on my system, because you can then leverage mount points to provide shared media libraries to virtual hosts such as Plex or Emby, and the whole pool's storage is still accessible for VM disks, backups, or whatever else I might need), or I would run TrueNAS (FreeNAS) if I wanted the nice GUI for managing file shares and doing things like AFP and WebDAV.

I have to, and I will, read more about how I can reach the volume from Xpenology. Maybe the Proxmox file share is a great solution for us.
 
How many disks are you intending to install in the server? Have you got the drive caddies or are they just blanks?
 
How many disks are you intending to install in the server? Have you got the drive caddies or are they just blanks?

We currently have a working Xpenology unit with 6 spinning disks in a RAID 5 config, but it is almost full; that is why I bought a server with more bays. We run PMS on it and have a separate computer for Roon ROCK.

I would like to expand the capacity (I've got 6 caddies but can buy 8 more) and consolidate these services.
 
So I see why you are so keen on Xpenology, and I assume you would like to try to 'import' the existing drives into the new server? IMO that could be risky, and I'd only attempt it if you have a backup of all your data, just in case anything goes wrong.

Much better to create a new server and then copy the data across (see the rsync sketch below).

BTW, are you sure the DL380 has 14 bays? I thought they only had 12 bays?
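
For the "copy the data across" step, rsync over SSH is the usual tool; the hostname and paths below are examples (Synology/Xpenology shares typically live under /volume1):

Code:
# pull the media share from the old Xpenology box onto the new array
rsync -avh --progress admin@oldnas:/volume1/media/ /tank/media/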
 
So I see why you are so keen on Xpenology, and I assume you would like to try to 'import' the existing drives into the new server? IMO that could be risky, and I'd only attempt it if you have a backup of all your data, just in case anything goes wrong.

Much better to create a new server and then copy the data across.

BTW, are you sure the DL380 has 14 bays? I thought they only had 12 bays?

No, I'll manage the data transfer via the wired network instead. But we like the DS apps on iOS; that is why we've stuck with Xpenology.

This version has 2 bays on the back too.
 
...but my question still stands: is it a good scenario to share the same SSD mirror between the host (Proxmox) and the VMs?
 
Hi.
Sharing the drives between the Proxmox install and the VMs is simple and works well.
I'd ditch hardware RAID and set up Proxmox using ZFS RAID1. Once installed, you will have a local-zfs datastore with the remaining space not used by the Proxmox install, on which you can set up your VMs.
If ZFS RAID doesn't work for you because of the RAID controller, then sharing a RAID1 volume for the OS install should be fine. You'll have to figure out a method for sharing the data drives, though, and if hardware RAID is the only option there, I'd probably forget about passing the volume through to a VM. Also, personal preference here, but I'd use RAID10 over RAID5 or 6.

Thanks.
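
For reference, after such an install the mirror and the datastore can be checked like this ("rpool" is the installer's default pool name):

Code:
# the installer creates a mirrored pool named rpool
zpool status rpool

# the leftover space backs the local-zfs storage (rpool/data)
pvesm status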
 
Hi.
Sharing the drives between the Proxmox install and the VMs is simple and works well.
I'd ditch hardware RAID and set up Proxmox using ZFS RAID1. Once installed, you will have a local-zfs datastore with the remaining space not used by the Proxmox install, on which you can set up your VMs.
If ZFS RAID doesn't work for you because of the RAID controller, then sharing a RAID1 volume for the OS install should be fine. You'll have to figure out a method for sharing the data drives, though, and if hardware RAID is the only option there, I'd probably forget about passing the volume through to a VM. Also, personal preference here, but I'd use RAID10 over RAID5 or 6.

Thanks.

Thank you very much!
 
...but my question still stands: is it a good scenario to share the same SSD mirror between the host (Proxmox) and the VMs?
Your virtual machines and Proxmox will work if they are on the same drive, and they will also work on separate drives. If you select a disk in the Proxmox VE ISO installer, it will create multiple storages on that disk for you; one of them is certainly suitable for storing virtual machines. You can find some information about the default storages in our administration guide.
If you have enough disks, it does make sense, in my opinion, to use a separate drive for Proxmox VE. For example, in case something goes wrong, it is then easier to just wipe the whole disk and reinstall Proxmox VE.
If you want separate disks, you can just click through the installer for the OS disk and then add additional storages/disks in the GUI of Proxmox VE.

Concerning RAID, I'll just quote the documentation:
VM storage: For local storage use either a hardware RAID with battery backed write cache (BBU) or non-RAID for ZFS and Ceph. Neither ZFS nor Ceph are compatible with a hardware RAID controller.
I'd generally consider ZFS RAID a good choice. However, depending on how much data you have, 16GB RAM is really not much.
ZFS works best with a lot of memory. If you intend to use ZFS make sure to have enough RAM available for it. A good calculation is 4GB plus 1GB RAM for each TB RAW disk space (documentation)
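
As a worked example of that rule (disk sizes assumed for illustration): six 4TB disks are 24TB of raw space, suggesting roughly 4GB + 24GB = 28GB of RAM for ZFS alone. If RAM is tight, the ARC can be capped, for example at 4GiB:

Code:
# cap the ZFS ARC at 4GiB (value in bytes), then rebuild the initramfs
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
update-initramfs -u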

And about virtual FreeNAS: I am pretty sure that the screenshot shows a FreeNAS VM with only virtual disks, no passthrough of anything.

[Screenshot: freenas.png - a FreeNAS VM booted with only virtual disks attached]

Admittedly, it might not make much sense to have ZFS (or FreeNAS) on a virtual disk, but it does start.


Anyway, you can install Proxmox VE itself in a virtual machine (and it will also allow guests - nested virtualization) on whatever computer you're currently reading this on. That way you can see how the ISO storage setup works and how you can install FreeNAS or Xpenology without spending a single Euro/Dollar/whatever.
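
If the outer machine runs a KVM-based hypervisor itself, nested virtualization has to be switched on there first; a sketch for an Intel host (the module and option names differ on AMD):

Code:
# allow guests of this host to run their own KVM guests
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# reload the module (with no VMs running), then verify
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should print Y or 1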
 
