Setup suggestions/guidance please

shuhdonk

Member
Dec 15, 2020
Hey all, I am new to Proxmox and semi-new to Linux as well. My current server is a Windows Server 2019 box that is mostly used for network shares and playing around with VMs. I just built a new server and am considering using Proxmox. The new server is an Epyc 7302P CPU on a Supermicro H12SSL-NT board with 256GB of RAM. For storage I have six 4TB WD Blue SSDs, plus two mirrored 120GB SSDs that Proxmox is currently installed on. I also have a 1TB Inland Pro NVMe, a 512GB Samsung 970 Evo Plus NVMe, and a 16TB Exos SATA drive I want to use for backups.

My current 2019 server (a Ryzen 3950X CPU with 64GB of ECC unregistered RAM) has the six 4TB WD Blue SSDs in RAID 5, connected to an LSI SAS3 HBA in IT mode. One of its issues is that it is very slow: if I copy a large file over the network it starts off at a fast/"normal" speed, then drops to zero for a few seconds, climbs back to a decent speed for a few seconds, drops to zero again, and repeats. I want to avoid this behavior on the new setup and am hoping to sustain 200-400 MB/s transfer rates, or better if possible.

I want to use the six 4TB SSDs as network storage. I am not sure which drives the VMs should live on, or how they should be set up (the NVMes or the network-share drives?). How should I go about setting up the network share with Proxmox? Is TrueNAS in a VM (or something similar) the way to go? I have never used TrueNAS before. If I were to run TrueNAS (Core or Scale?), how do I pass the drives through to the VM? They will be plugged directly into the motherboard via its Supermicro SlimSAS connection. I also want to do nightly incremental backups to the 16TB Exos drive; should that drive be part of TrueNAS? Can I, or should I, run an NVMe drive as a cache or log device? I am completely new to ZFS, so I'm not sure how it all should be set up for speed and reliability. If need be I can add more SSDs too.

Any suggestions would be appreciated, thanks!
 
My current 2019 server (a Ryzen 3950X CPU with 64GB of ECC unregistered RAM) has the six 4TB WD Blue SSDs in RAID 5, connected to an LSI SAS3 HBA in IT mode. One of its issues is that it is very slow: if I copy a large file over the network it starts off at a fast/"normal" speed, then drops to zero for a few seconds, climbs back to a decent speed for a few seconds, drops to zero again, and repeats. I want to avoid this behavior on the new setup and am hoping to sustain 200-400 MB/s transfer rates, or better if possible.

I want to use the six 4TB SSDs as network storage. I am not sure which drives the VMs should live on, or how they should be set up (the NVMes or the network-share drives?).
You bought consumer SSDs. In general, enterprise SSDs are recommended because they survive much longer and the performance drop when the cache fills up isn't as big, so you get better performance for sustained writes.
How should I go about setting up the network share with Proxmox? Is TrueNAS in a VM (or something similar) the way to go? I have never used TrueNAS before.
PVE doesn't offer any NAS functionality, but it's a complete Linux OS based on Debian, so everything you can do with a Debian server you can do with PVE. You could either install an SMB/NFS server directly on the host or use a VM/LXC as a NAS.
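If you go the "directly on the host" route, a bare-bones Samba setup on the Debian base could look roughly like this. It's only a sketch: the pool/dataset name "tank/share" and the user "shareuser" are placeholders.

# Samba on the PVE host (it is a normal Debian system underneath)
apt update && apt install samba

# assuming a ZFS dataset mounted at /tank/share (placeholder names)
zfs create tank/share

# add a share definition to /etc/samba/smb.conf, for example:
# [share]
#    path = /tank/share
#    read only = no
#    valid users = shareuser

# create the user, set a Samba password and restart the service
adduser --no-create-home --disabled-login shareuser
smbpasswd -a shareuser
systemctl restart smbd

The downside of doing it on the host is that the file server then lives outside any VM/LXC, so it isn't covered by guest backups/snapshots the way a NAS VM would be.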
If I were to run TrueNAS (Core or Scale?), how do I pass the drives through to the VM?
If only your WD Blues are connected to the HBA, I would use PCI passthrough to pass the complete HBA, with all disks attached to it, into the TrueNAS VM. That's the only way TrueNAS can physically and directly access the drives without an additional virtualization layer or extra overhead in between.
Another option would be to use "qm set" for disk passthrough, but keep in mind that this is not physical passthrough and your TrueNAS VM would still only see virtual disks.
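As a rough sketch of both options (the VM ID 100, the PCI address 01:00.0 and the disk ID are placeholders; look up your own with lspci and ls /dev/disk/by-id):

# Option 1: pass the whole HBA through (IOMMU must be enabled; that usually
# means "iommu=pt" on the kernel cmdline and the vfio modules in /etc/modules)
lspci -nn | grep -i LSI              # find the HBA's PCI address, e.g. 01:00.0
qm set 100 -hostpci0 01:00.0         # attach the whole controller to VM 100

# Option 2: per-disk passthrough with "qm set" (the guest still sees virtual disks)
ls /dev/disk/by-id/                  # pick the stable by-id names of the WD Blues
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WDS400T2B0A-<serial>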
They will be plugged directly into the motherboard via its Supermicro SlimSAS connection. I also want to do nightly incremental backups to the 16TB Exos drive; should that drive be part of TrueNAS?
You could also add the 16TB HDD to that TrueNAS VM as a second pool and then use local ZFS replication to keep an exact copy of the SSD pool, including all snapshots, for versioning.
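On the command line that replication boils down to snapshot + send/receive; TrueNAS can also schedule the same thing through its periodic snapshot and replication tasks in the GUI. The pool names "tank" and "backup" below are made up:

# first run: recursive snapshot and a full replication stream
zfs snapshot -r tank@nightly-1
zfs send -R tank@nightly-1 | zfs receive -F backup/tank

# later runs: only send the increment between the last and the new snapshot
zfs snapshot -r tank@nightly-2
zfs send -R -i tank@nightly-1 tank@nightly-2 | zfs receive -F backup/tank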
 
Hey Dunuin, thanks for the reply. Would I suffer any performance issues running all my SATA drives off the HBA instead of the motherboard's SAS port(s)? If performance wouldn't suffer at all, I can do that, since passing the whole controller through sounds better than using qm set like you explained (if I ran TrueNAS in a VM, for instance). I have an LSI 9207-8i HBA (IT mode) controller. I could also get a different controller if need be; I just don't want the controller to be a bottleneck, I'm hoping the drives themselves will be the bottleneck.

Also, I understand they are not enterprise SSDs, but I am able to copy large files over the network to other PCs that have standard SSDs in them and get a consistent 200+ MB/s (2.5Gb LAN). I am hoping to get that performance on the server with them running ZFS and some flavor of RAID (preferably not RAID 10).

Thanks again!
 
In your current server config, are you using an onboard consumer RAID controller to achieve RAID 5? That may be part of the problem if it doesn't have a proper cache built in, which can cause high IO delay. It would be solved by running HBA -> ZFS with a large amount of RAM.

I just finished setting up a TrueNAS VM in Proxmox with an LSI 9207-8i using PCI passthrough. I only have HDDs connected, but it easily maxes out a 1-gig connection with little to no slowdown on file transfers.

Another thing to consider: if you are using a regular desktop case there may not be enough airflow to cool the HBA heatsink. I attached a fan to mine to avoid any overheating concerns.
 

My current server has the six SSDs connected to the LSI 9207-8i HBA, which is in IT mode; the RAID is done in Windows Server 2019. The current server sits in a rack-mount case in a server rack with lots of fans and airflow through it, so that should not be an issue. The new server will be going into the same case.

Good to hear your new server is performing well; that is what I am hoping for as well. My LAN will be a combination of 10Gb, 2.5Gb and 1Gb connections, and any large file transfers will be between 10Gb- or 2.5Gb-connected PCs/devices, so I'm hoping to set up the network storage in a way that sustains 200-300 MB/s transfer speeds without any stalling or slowdowns.
 

Okay, so it sounds like you'd be going from Storage Spaces to ZFS. The only benchmark comparison I could find was this one, which shows that on some workloads ZFS is much faster than Storage Spaces.
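If you want to sanity-check the pool itself before blaming the network, a quick fio run directly on the server is one way. This is just a sketch, and "/tank/share" is a placeholder for whatever dataset you end up exporting:

# rough sequential-write test on the dataset itself
fio --name=seqwrite --directory=/tank/share --rw=write --bs=1M \
    --size=10G --numjobs=1 --ioengine=libaio --end_fsync=1 --group_reporting

If that sustains well above your 200-300 MB/s target, any remaining stalls are more likely network or SMB tuning than the disks.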
 
Hey Dunuin, thanks for the reply. Would I suffer any performance issues running all my SATA drives off the HBA instead of the motherboard's SAS port(s)? If performance wouldn't suffer at all, I can do that, since passing the whole controller through sounds better than using qm set like you explained (if I ran TrueNAS in a VM, for instance). I have an LSI 9207-8i HBA (IT mode) controller. I could also get a different controller if need be; I just don't want the controller to be a bottleneck, I'm hoping the drives themselves will be the bottleneck.
That HBA uses PCIe 3.0 x8, so if you put it into an electrically x8 PCIe 3.0 (or faster) slot it can handle a throughput of roughly 7.9 GB/s (8 GT/s x 8 lanes with 128b/130b encoding), which shouldn't bottleneck six SATA/SAS SSDs (about 6 x 550 MB/s = 3.3 GB/s) and should be faster than the chipset's onboard ports, which possibly have lower bandwidth.
Also, I understand they are not enterprise SSDs, but I am able to copy large files over the network to other PCs that have standard SSDs in them and get a consistent 200+ MB/s (2.5Gb LAN). I am hoping to get that performance on the server with them running ZFS and some flavor of RAID (preferably not RAID 10).
At least they are TLC and not QLC SSDs, so they should be able to handle 200 MB/s for big sequential writes. But for small random sync writes your performance will be lower than 200 MB/s, especially if you don't want to use a striped mirror (aka RAID 10). Using raidz1 (aka RAID 5) or raidz2 (aka RAID 6) is fine for big sequential reads/writes but bad for IOPS.
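For reference, the three layouts being compared would be created roughly like this; "tank" and the disk names are placeholders (use the /dev/disk/by-id paths of your WD Blues), and the PVE GUI or TrueNAS can of course do the same thing for you:

# striped mirror (raid10-like): three 2-disk mirrors
zpool create -o ashift=12 tank mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6

# raidz1 (raid5-like): one disk worth of parity
zpool create -o ashift=12 tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6

# raidz2 (raid6-like): two disks worth of parity
zpool create -o ashift=12 tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6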
 
Okay, perfect, the HBA would be plugged into a PCIe 4.0 x8 slot, so bandwidth should be good then. I would like to use RAID 10 but need the drive space. Would I get just as good or better performance than the 4TB SSDs if I replaced them with something like six 12-16TB Exos drives in RAID 10?
 
No, using HDDs instead of SSDs would just make it way worse.

Theoretical Performance, reliability and usable capacity to be expected:
All three layouts use the 6 disks at ashift=12; volblocksize is 8K for the striped mirror, 32K for raidz1 and 16K for raidz2.

                                         striped mirror (raid10)    raidz1 (raid5)               raidz2 (raid6)
Real usable capacity (all zvols):        40% = 9.6 TB (8.73 TiB)    64% = 15.36 TB (13.97 TiB)   53% = 12.67 TB (11.52 TiB)
Real usable capacity (all datasets):     40% = 9.6 TB (8.73 TiB)    66% = 15.94 TB (14.5 TiB)    53% = 12.67 TB (11.52 TiB)
Drives that may fail:                    1-3                        1                            2
Random IOPS @ 4K:                        1.5x                       0.125x                       0.25x
Random IOPS @ 8K:                        3x                         0.25x                        0.5x
Random IOPS @ 16K:                       3x                         0.5x                         1x
Random IOPS @ 32K+:                      3x                         1x                           1x
Sequential read/write throughput @ 4K:   3x / 1.5x                  0.625x / 0.625x              1x / 1x
Sequential read/write throughput @ 8K:   6x / 3x                    1.25x / 1.25x                2x / 2x
Sequential read/write throughput @ 16K:  6x / 3x                    2.5x / 2.5x                  4x / 4x
Sequential read/write throughput @ 32K+: 6x / 3x                    5x / 5x                      4x / 4x
But sequential read/write throughput at the small block sizes might not be that bad in reality because of caching.
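Where those numbers get applied in practice: ashift is fixed when the pool is created, volblocksize only matters for zvols (VM disks) and, as far as I know, is taken from the blocksize option of the PVE "zfspool" storage if PVE manages the pool, while datasets (e.g. an SMB share) use recordsize instead. A rough sketch with placeholder names ("tank", "tank-vm", "tank/share"):

# ashift is set once, at pool creation (4K-sector drives -> ashift=12)
zpool create -o ashift=12 tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6

# volblocksize for newly created VM disks on a PVE "zfspool" storage
pvesm set tank-vm --blocksize 32k

# datasets use recordsize instead of volblocksize
zfs set recordsize=1M tank/share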
 
So six 4TB SSDs in RAID 5 would perform better than six 14TB Exos drives in RAID 10? I know RAID 10 performs better than RAID 5, but that performance gain would be negated by using Exos drives instead of SSDs?
 
Yup, SSDs in raidz1 should still be way faster than HDDs in a striped mirror.
As long as most of your operations/files are 32KB or bigger, raidz1 should be fine. But I wouldn't run a database off it because of the bad sync IOPS.
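If you want to see that sync-write weakness yourself, a small random-write fio job with an fsync after every write is a reasonable stand-in for database behaviour; "/tank/share" is again just a placeholder path:

# 4K random writes with an fsync per write, roughly what a DB does
fio --name=syncwrite --directory=/tank/share --rw=randwrite --bs=4k \
    --size=2G --numjobs=1 --ioengine=psync --fsync=1 --runtime=60 --time_based \
    --group_reporting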
 