RAID with NVMe and SATA SSD?

Ejo2001

New Member
Feb 5, 2021
Hello!



I'm considering picking up an Asus Barebone PN52 for a low-power homelab (I have an enormous homelab, but electricity prices have gone up, so I can't keep it running at the moment). It can fit an NVMe drive as well as a 2.5" drive. I am considering getting a 1TB SATA SSD (Samsung 870 EVO 1TB) and a 1TB NVMe SSD (Kingston NV2 Gen 4 1TB). I plan on using the built-in software RAID in a RAID1 configuration. Is this possible/recommended?



Thanks in advance

Ejo
 
Are you sure this will work? At least with the PCs I've used, you are limited to either NVMe RAID or SATA RAID when using the onboard RAID; you can't create a RAID array out of an NVMe and a SATA disk. If you want to use ZFS for software RAID, creating a mirror from an NVMe and a SATA drive shouldn't be a problem, but consumer SSDs are not recommended for that. ZFS has a very high overhead and can kill consumer SSDs quite fast, especially when your workloads have a lot of sync writes, for example if your guests run some kind of database.

And a fast PCIe Gen4 SSD would be wasted money, as the mirror would be no faster than the slower SATA SSD.
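For reference, a ZFS mirror across an NVMe and a SATA disk can be created with a single zpool command. This is only a sketch: the pool name "tank" and the device names are examples, and in practice you should use stable /dev/disk/by-id paths rather than /dev/sdX names.

```shell
# Example device names - substitute your own (by-id paths are safer,
# as sdX/nvmeXnY names can change between boots).
# Create a ZFS mirror ("tank") from one NVMe and one SATA SSD.
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/sda

# Verify that both members show up as ONLINE in the mirror vdev.
zpool status tank
```

Note that, as with any RAID1, the pool's usable size is that of the smaller member, and write performance is bounded by the slower (SATA) device.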
 
I agree that ZFS RAID over consumer disks is a bad idea. I tried it and the wear-out was massive.

So I went back to my "old" install of Proxmox on top of a Debian RAID1 with mdadm, using an NVMe disk and a SATA one.

The cool thing with it is the "write-mostly" option: https://raid.wiki.kernel.org/index.php/Write-mostly

When you write, you write to both drives, so write speed is that of the slowest one.

But when you read, you read only from the faster one, i.e. the NVMe drive.

And you can build the RAID from partitions (even from LVM logical volumes).
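A minimal sketch of how such an array might be created with mdadm; the device names are examples and must match your own partition layout:

```shell
# Build a RAID1 where the SATA partition is marked write-mostly,
# so the kernel serves reads from the NVMe member whenever possible.
# --write-mostly applies to the devices listed after it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1p2 --write-mostly /dev/sda2

# The flag can also be toggled on a running array via sysfs:
echo writemostly > /sys/block/md0/md/dev-sda2/state
```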

Code:
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 dm-2[1](W) dm-0[0]
      157154304 blocks super 1.2 [2/2] [UU]
      bitmap: 2/2 pages [8KB], 65536KB chunk

md0 : active raid1 sda2[1](W) nvme0n1p2[0]
      19513344 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda1[1] nvme0n1p1[0]
      96192 blocks [2/2] [UU]

You can see the "(W)" flag on the SATA drive's members, marking them as write-mostly.
Code:
hdparm -t /dev/md0

/dev/md0:
 Timing buffered disk reads: 3496 MB in  3.00 seconds = 1164.72 MB/sec
 
Had a homelab maintenance today and replaced two consumer SSDs that ZFS killed in the last 2 weeks. And 2 months ago I replaced a third consumer SSD that ZFS had also killed. So yeah, not really recommended... very annoying, but at least no lost data or downtime when running a mirror. ;)
Got way fewer problems with the much older enterprise SSDs.

SSDs will always die sooner or later, so I would really suggest using the NVMe and SATA in RAID1, even if you lose a lot of performance.
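If you go this route, it's worth keeping an eye on wear so you can swap a drive before it dies. Assuming smartmontools is installed, smartctl reports wear indicators for both drive types (device names here are examples):

```shell
# NVMe: the SMART/Health Information log includes "Percentage Used",
# an estimate of how much of the rated endurance has been consumed.
smartctl -a /dev/nvme0

# SATA SSD: check the vendor wear attribute in the SMART table
# (e.g. Wear_Leveling_Count on Samsung drives).
smartctl -a /dev/sda
```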
 
Last edited:
I also believe consumer drives are eaten for breakfast by virtualization and ZFS. Allow me to tell you about my storage, even though it might not apply to your use case.

I'm running an unbalanced ZFS stripe of mirrors (RAID10): two partitions of a Samsung 970 Evo Plus NVMe, each mirrored with a Toshiba Q300 Pro SATA drive (half the size). It gives me good performance and low wear after disabling some Proxmox services (corosync, pve-ha-crm, pve-ha-lrm), even though these are consumer TLC drives.
I assume this setup is uncommon, and maybe my consumer/noob use case of running some service VMs, some disposable VMs and two desktop VMs with GPUs on Proxmox is not what it was designed for, but it works fine. Disabling the services I don't need and making sure the system logs are not filled with unnecessary lines every second (by fixing configuration issues) really made a difference: 2% of NVMe life spent in the first couple of months, and only +1% spent in the year after making the changes.
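The service changes described above amount to something like the following standard systemd commands. This is only sensible on a standalone node that does not use clustering or HA:

```shell
# Stop and disable the cluster/HA services that a single standalone
# node doesn't need - they produce frequent small sync writes.
systemctl disable --now pve-ha-crm pve-ha-lrm corosync
```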

Anyway: either get the good enterprise stuff or see if you can configure your setup to work well for your particular workload (but be prepared to spend money to replace things). Also note that I don't know the bare-metal speed, as I have been running everything on Proxmox for years. Maybe I took a big performance hit when I started; it has improved with new hardware as expected, and I get the benefits of ZFS and PBS.
 
