Yet another RaidZx and VM setup question! 36 disk system...

CelticWebs

Member
Mar 14, 2023
I've been trawling through the forums trying to answer my own questions and I'm finding conflicting information, so I thought I'd post here and see what people's thoughts are.

I've had a Synology NAS for many years and it's been perfect for my use case. I have a 5-bay main unit as pool 1, to which I later added a 5-bay expansion as pool 2. I use it primarily for general file backup, web server backups and media storage. It is accessed by 6 people, though admittedly we're rarely all working at the same time. I've got to the point where expansion is no longer financially viable and we're running out of space.

Initially I looked at larger Synology units, but they were either really low spec or extremely expensive. After some deliberation, I bought a 36-bay Supermicro rack server with the intention of setting up Proxmox and a suitable RAID layout to give us the space we need, with the option of upgrading to larger disks in the future.

Here's what I've got to build my new Proxmox server.

System Disks
2 x 4TB SAS in a mirror as the boot disks.
4 x 500GB NVMe, striped/mirrored, as VM storage (sketched just after the disk lists).

Storage Disks
24 x 4TB SAS disks (existing older server disks that have been in storage for a while)
5 x 14TB SATA disks (the Synology pool 1 disks)
5 x 12TB SATA disks (the Synology pool 2 disks)
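
For the NVMe VM storage, my understanding is that "striped/mirrored" works out as a pool of two mirrors (ZFS's equivalent of RAID10). A rough sketch of what I mean, with a made-up pool name and placeholder device paths rather than my actual drives:

Code:
# Two mirrored NVMe pairs striped together for VM storage (device paths are placeholders)
zpool create -o ashift=12 vmdata \
    mirror /dev/disk/by-id/nvme-drive1 /dev/disk/by-id/nvme-drive2 \
    mirror /dev/disk/by-id/nvme-drive3 /dev/disk/by-id/nvme-drive4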

Considerations for the new setup
  • I had 2 separate pools on the Synology. The issue that often came up was that there'd be space on one pool and none on the other, and to work around this I'd end up having to move or delete large amounts of data to give space where it was required. For this reason, I'd like to have one large pool with just folders within it.

  • When multiple users tried to access large files, transfers would become slow due to disk access and network speeds. To combat this, the new system has 10Gb network capability, and I'm hoping to set up the disks in a way that gives good read and write speeds.

  • Finally, data security. As you can imagine, it's a lot of data and I don't want to risk it unnecessarily. While we could rebuild most of it if we really had to, it would be a very laborious process, so reasonable redundancy without losing half the storage space seems sensible.

From my investigations, it appears I'd be looking at creating a pool with multiple vdevs from the available disks. The initial plan is to set up the 24 x 4TB SAS disks as a couple of vdevs in one pool. That will let me transfer all of the data from one bank of disks on the Synology, add that bank of disks as a new vdev to the same pool on Proxmox, transfer the final bank's worth of data, then add those disks as another vdev to the existing pool. I'd eventually end up with all 34 storage disks in one large pool that has parity on each of the vdevs.

The plan once the disks are set up in the pool:

There will be a number of VMs running on Proxmox. My intention is that all VMs will have access to the same data, possibly using NFS to access the pool?
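
If NFS is the route, my rough idea is a shared dataset on the big pool, exported to the VM network and mounted by each VM. Something like this sketch, where the pool name, dataset name and subnet are just placeholders:

Code:
# Create a shared dataset and export it over NFS (names and subnet are made up)
zfs create tank/shared
zfs set sharenfs="rw=@192.168.10.0/24,no_root_squash" tank/shared
showmount -e localhost        # confirm the export is visible
# Each VM would then mount it with something like:
#   mount -t nfs <host-ip>:/tank/shared /mnt/shared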

Assuming this is all workable, my intention would be as follows:

2 x 11 x 4TB SAS disks in RaidZ2 with 1 x hot spare (or am I better off just doing Z3?) as 2 vdevs creating the initial pool
Then 5 x 14TB disks in RaidZ1 added as a vdev to the main pool
Finally 5 x 12TB disks in RaidZ1 added as a vdev into the pool
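
To make that concrete, I imagine the commands would look roughly like this. It's only a sketch of the layout above; the device names are placeholders and I'd use /dev/disk/by-id paths for real:

Code:
# Initial pool: two 11-disk RaidZ2 vdevs from the 4TB SAS drives, plus a hot spare
zpool create -o ashift=12 tank \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
    raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu sdv \
    spare sdw

# Once the first Synology bank is emptied, add its 5 x 14TB disks as a new vdev
# (zpool will warn about the mismatched raidz levels and want -f here)
zpool add tank raidz1 sdx sdy sdz sdaa sdab

# Finally, once the second bank is emptied, add the 5 x 12TB disks
zpool add tank raidz1 sdac sdad sdae sdaf sdag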

What are everyone's thoughts on the above? Is it all doable? Is it a sensible way to do it? Does anyone have a better suggestion? Perhaps more, smaller vdevs making up the pool?

I've only been using Proxmox for a couple of months for my web hosting on another server that I run, so this is still all quite new to me. I'm more than happy to listen to any suggestions stating the pros and cons of what I'm doing vs other possible setups.
 
2 x 11 x 4TB SAS disks in RaidZ2 with 1 x hot spare (or am I better off just doing Z3?) as 2 vdevs creating the initial pool
Then 5 x 14TB disks in RaidZ1 added as a vdev to the main pool
Finally 5 x 12TB disks in RaidZ1 added as a vdev into the pool
Doesn't make much sense to mix raidz1 and raidz2. Lose a vdev and you lose the whole pool, so by striping a raidz1 with a raidz2 you basically lower the whole pool's reliability down to raidz1 level.
I would also add SSDs as special devices. And with that number of disks, dRAID might be a better choice.
 
Thanks, the only reason I did RaidZ2 on the 4TB disks was that they're older, so I considered them higher risk. I did look a little at dRAID but not enough to understand the real differences; I'll have to go look again.

I can add more NVMe on PCIe cards; I've got quite a few 500GB drives available. What do you mean by special devices?
 
I can add more NVMe on PCIe cards; I've got quite a few 500GB drives available.
Then remember that you shouldn't use consumer SSDs with ZFS. ;) Enterprise SSDs with proper DWPD and power-loss protection are highly recommended.

What do you mean by special devices?
Do some research about "special" vdevs. It would be a waste not to store the pool's metadata on SSDs:
https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954
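
As a rough example of what that looks like (pool name and device paths are made up here, and the special vdev needs its own redundancy, because losing it loses the whole pool):

Code:
# Add a mirrored special vdev for metadata (use a 3-way mirror if you want to match raidz2)
zpool add tank special mirror /dev/disk/by-id/nvme-ssd1 /dev/disk/by-id/nvme-ssd2
# Optionally also store small blocks on the SSDs, per dataset:
zfs set special_small_blocks=64K tank/somedataset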

The only reason I did RaidZ2 on the 4TB disks was that they're older, so I considered them higher risk.
Yes, age increases the risk of failure. But bigger disks mean longer resilvering times, so a higher chance of losing the whole pool. With disks this big you really shouldn't use raidz1. It would be better to buy a 6th disk and set them up as raidz2 too.

And keep in mind that RAID is not a backup. I hope you've still got 2 additional copies of everything you care about.
 
Then remember that you shouldn't use consumer SSDs with ZFS. ;) Enterprise SSDs with proper DWPD and power-loss protection are highly recommended.
They are enterprise drives; they're from a data centre.

Do some research about "special" vdevs. It would be a waste not to store the pool's metadata on SSDs:
https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954

thanks
Yes, age increases the risk of failure. But bigger disks mean longer resilvering times, so a higher chance of losing the whole pool. With disks this big you really shouldn't use raidz1. It would be better to buy a 6th disk and set them up as raidz2 too.
That makes sense. I could reduce the number of 4TB disks to 22 and add another 14TB and another 12TB, I suppose, just to bring them all up to Z2 or whatever turns out to be the worthwhile structure.

That would then make:
6 x 14TB
6 x 12TB
2 @ 11 x 4TB

Losing 2 from each vdev for parity still leaves me with about 176TB before any filesystem overhead. I'll look further into dRAID to see how this could work.
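
From what I've picked up so far, the 24 x 4TB set could become a single dRAID vdev instead of two RaidZ2 vdevs plus spares. Purely a sketch while I keep reading, with placeholder device names:

Code:
# draid2:9d:24c:2s = double parity, 9 data disks per redundancy group,
# 24 children in total, 2 distributed spares (rebuilds use all disks, not one idle spare)
zpool create -o ashift=12 tank draid2:9d:24c:2s /dev/sd{a..x}
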
And keep in mind that RAID is not a backup. I hope you've still got 2 additional copies of everything you care about.

We do indeed have external backups of anything important, stored in long-term storage on S3 Glacier. Hence we could restore, but I don't really want to have to do that unnecessarily.
 
@Dunuin, I’ve been attempting to imprve my knowledge of draid after your suggestion. It’s taking a bit of getting my head aground. As far as I can work out, I can put all the 4tb in one draid, and have it laid out liek multiple vdevs woud be. Rather than clutter this post with explanations of how dRaid works, I'm going to start a dRaid post, then come back to this one to further discuss the VM setup and pool access methods.

In fact, I think I'm better off splitting the RAID setup vs pool access for VMs into 2 separate threads.
 
