NVMe BTRFS RAID0 vs ZFS Stripe

What do you think is best performance-wise for a system with 3x NVMe Samsung 970 Pro 1TB drives? It's for a home setup, so I don't really care about data recovery, just the convenience and performance of one 'single' disk. The downtime to restore from a backup doesn't matter much to me. I think a nightly backup job to another machine (plus a cloud backup) should be enough, or do I really need a RAIDZ ZFS setup for more protection, so that silent corruption can't end up in my backups?

The machine is a NUC9VXQNX (the mobile 8-core Xeon version) with 64 GB of ECC RAM, 3x Samsung 970 Pro 1TB, and an Intel X550-T2 10GbE NIC as an add-on card.
 
You don't need ZFS for safety in a homelab, so go with btrfs - lower memory consumption.
But you should still do backups - also in a homelab.
How to use the 3 SSDs depends on your wishes:
if you want maximum storage and speed, make a RAID0 stripe over all 3, but I think your CPU/lanes might be the bottleneck with 3 NVMe drives.
BUT: have a look at --> https://linustechtips.com/topic/768...etter-real-world-performance-or-not-how-much/
Maybe you should go with "single" instead (both options are sketched below).
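For reference, a minimal sketch of both layouts with mkfs.btrfs; the /dev/nvme* device names are assumptions, check yours with lsblk first:

```
# RAID0: data striped across all three drives for maximum space and
# sequential speed; metadata kept as raid1 so a bad metadata copy on
# one drive doesn't take the whole filesystem down.
mkfs.btrfs -L fast -d raid0 -m raid1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# "single": data placed on one drive at a time (no striping), still
# pooled into one filesystem. Per-file throughput is that of one drive.
mkfs.btrfs -L fast -d single -m raid1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```

With raid0 data, losing any one drive loses the whole filesystem; with single, only the data that happened to sit on the failed drive is gone.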
 
Thanks for your answer. I have been reading up a bit and tried both.

The NUC has a daughterboard with dedicated x16 bifurcation to the 4x NVMe slot, an x8 slot, and an x4 slot. So all NVMe drives have a dedicated x4 connection: two directly to the CPU and one through the chipset's DMI 3.0 link. All chipset functions are disabled (USB, onboard NICs, etc.), so I think I'm good for the full x4 bandwidth. The 970 Pro's read/write latency should be higher than that of the DMI 3.0 link, so the chipset hop shouldn't slow down the RAID, I think. A single drive would perform fine, but I like to tinker with these things :).
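If you want to double-check that, the negotiated PCIe link of each drive can be read out with lspci; a quick sketch (the 01:00.0 address is just a placeholder, look up your own with the first command):

```
# Find the PCI addresses of the NVMe controllers:
lspci | grep -i nvme

# Compare the negotiated link (LnkSta) against the capability (LnkCap);
# substitute your own address for 01:00.0:
sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
# Both lines should report something like "Speed 8GT/s, Width x4" for a
# full Gen3 x4 link.
```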

I tried ZFS and BTRFS, but I have a strange issue with BTRFS: the disks are slowly filling up while the containers' raw disks aren't that full. These should be 'thin', right? The backup size is way smaller than what the disk usage shows. Maybe they are slowly growing toward fully 'thick' raw disks. I still have to find out what's causing it.
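One common cause of that pattern (just a guess, not confirmed from your setup): without discard/TRIM, blocks deleted inside the guest are never released, so a sparse raw image only ever grows toward its full 'thick' size. A sketch of what I'd check, assuming a Proxmox LXC container with VMID 100 on default directory storage; the VMID and the image path are assumptions:

```
# Trim the container's filesystems so freed blocks are punched back
# out of the sparse raw image:
pct fstrim 100

# On the host, compare apparent size vs. actually allocated size of
# the image (path depends on your storage layout):
ls -lh /var/lib/vz/images/100/vm-100-disk-0.raw
du -h /var/lib/vz/images/100/vm-100-disk-0.raw
```

If du stays close to ls after trimming, the images really are thin and the growth is coming from somewhere else (snapshots, logs, etc.).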

ZFS worked fine but seems a tad slower when benchmarking with fio. It uses up to 50% of the RAM, but this seems to be normal behaviour.
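On Linux the ZFS ARC does default to a ceiling of roughly half the physical RAM, and it shrinks under memory pressure, so the 50% is expected. If you want to cap it anyway, a sketch (the 8 GiB figure is an arbitrary example, pick what fits your workload):

```
# Persist an 8 GiB ARC cap (value is in bytes):
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
# If your root filesystem is on ZFS, also refresh the initramfs:
update-initramfs -u

# Apply immediately without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# Check the current ARC size and cap:
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```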
 
