Intel NUC single SSD - best filesystem?

Leon Roy

New Member
Mar 1, 2019
We're planning on using a few Intel NUCs in lab conditions with Proxmox. They take a single M.2 NVMe device. What would be the best filesystem to use in this case?

Also, if using ZFS, would a RAID1 or RAID0 array be the correct option?
 
I don't think you can do any RAID setup with a single disk. Traditionally, every RAID level requires more than one disk; that's the whole point of RAID.

 
But ZFS "RAID0" is possible with one disk, and then you'll be able to use all of the fancy ZFS features (snapshots, compression, ARC) without losing anything compared to any other filesystem on a single disk. So yes, if you have plenty of RAM, stick to ZFS.
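If you set the pool up by hand rather than through the installer, a single-disk pool is simply created without any vdev keyword. A rough sketch (device, pool and dataset names are only placeholders, and zpool create wipes the device):

```
# create a pool on the single NVMe drive (destroys existing data on it)
zpool create -o ashift=12 tank /dev/nvme0n1

# turn on the "fancy" features mentioned above
zfs set compression=lz4 tank
zfs create tank/containers
zfs snapshot tank/containers@clean-install   # instant, nearly free snapshot
```

The Proxmox installer does essentially the equivalent when you pick ZFS (RAID0) with a single disk selected.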
 
thanks - the disk will be a single 1TB SSD. It will only have about 100-200GB of containers on it.

Do you know if ZFS on Linux’s lack of support for TRIM will be a problem?
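For what it's worth, TRIM support did land in ZFS on Linux 0.8, so once you're on a PVE release that ships it you can trim the pool. A sketch, assuming the default pool name rpool:

```
zpool set autotrim=on rpool   # continuous, low-priority TRIM of freed blocks
zpool trim rpool              # or kick off a full manual TRIM pass
zpool status -t rpool         # shows per-vdev TRIM state and progress
```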
 
We've been running lots of servers like this for years without problems.

But please be warned: make sure it's some kind of professional SSD (datacenter, server, ...). Normal consumer SSDs wear out quickly, and PVE writes a lot of data (logs, transactions, ...). You can search the forum for problems (and solutions) like this.

Example: https://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZ1LB960HAJQ/
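If you want to keep an eye on wear once the box is running, the drive reports it itself. A quick sketch (the device name is just an example):

```
# NVMe SMART data: "Percentage Used" is the drive's own wear estimate,
# "Data Units Written" lets you compare against the rated TBW
smartctl -a /dev/nvme0n1

# the same via nvme-cli
nvme smart-log /dev/nvme0
```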
 
@morph027 so I did a quick calculation:

Purchased - Samsung 970 Evo Plus - it quotes a TBW of 600.

Ideal - Samsung PM983 Enterprise - it quotes a DWPD of 1.3 (over 3 years).
Converted to TBW, that gives 1367.
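For anyone checking the maths, DWPD converts to TBW as capacity × DWPD × 365 × warranty years (0.96 TB usable and the 3-year warranty term are the PM983 datasheet figures used here):

```
# TBW ≈ capacity (TB) × DWPD × 365 × warranty years
awk 'BEGIN { print 0.96 * 1.3 * 365 * 3 }'   # prints 1366.56, i.e. ~1367 TBW
```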

So approximately twice the longevity for about £50 more...hmmm, tempted. Pity it seems to be out of stock everywhere in the UK.

TBH, for my FreeNAS ZFS servers I purchased Intel S3700 SSDs for the SLOG way back when, at great expense... those are at 1% usage even after 4+ years of service, so I think I can get away with the Evo for home use. But yep, at work or in the enterprise it's definitely worth paying the 50% premium for an enterprise SSD.
 
Also, if using ZFS, would a RAID1 or RAID0 array be the correct option?

No. It would be a mistake, even with some data center SSD. If you get checksum errors, the ZFS pool can become unusable; I have seen a few Intel DC-series SSDs with checksum errors. It is safer to set copies=2 as a ZFS property (this means each piece of ZFS data is written in 2 copies, excluding metadata). But even this option is not 100% safe.
 
thanks @guletz, food for thought.

Some interesting analysis on the subject here suggests that whilst the `copies` property is nowhere near as good as two separate devices, a single device with `copies=2` can increase the odds of recovering from corruption enough to be worth doing on important pools:
testing the resiliency of zfs set copies=n
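It's a per-dataset property and only applies to blocks written after it's set, so it makes sense to enable it right when the dataset is created. A rough sketch with placeholder names:

```
# keep two copies of every data block in this dataset (space usage roughly doubles)
zfs set copies=2 rpool/data/important
zfs get copies rpool/data/important

# note: existing data only gets the second copy when it is rewritten
```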

One thing I can't figure out is what to set the ashift value to. Is there any harm in setting it to 13 vs 12?

Seems unclear whether the Samsung 970 Evo Plus uses 4k or 8k page sizes.
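Either way, ashift is fixed per vdev at pool creation, so it has to be passed to zpool create (or chosen in the installer's advanced ZFS options, if I remember right). A sketch with placeholder names:

```
# ashift=12 -> 4 KiB blocks, ashift=13 -> 8 KiB blocks; it cannot be changed later
zpool create -o ashift=13 tank /dev/nvme0n1

# verify what an existing pool was created with
zdb -C tank | grep ashift
```

From what I've read, going too high mainly wastes some space on small blocks, while going too low causes read-modify-write amplification, so erring on the high side seems the safer mistake.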
 
And yes, you can't correct checksum errors... but you also can't with any other filesystem on a single disk. So no real loss.

It is not like that. On any FS (excluding ZFS), if you have some corrupted files you can still access the rest of the FS, or replace/delete them using a live CD. But on ZFS you cannot do this (only in some lucky cases can you recover your pool, and it takes a lot of work and time).
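If you do hit this, a scrub at least shows exactly which files carry permanent errors, so you know what has to come from backup. For example (rpool being the default PVE pool name):

```
zpool scrub rpool       # re-read and verify every block against its checksum
zpool status -v rpool   # "-v" lists files with permanent (unrecoverable) errors
```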
 
Can the drives in an Intel NUC be RAID1, or ZFS RAID1? I have a NUC7i7DNHE with a 1 TB NVMe and a 1 TB 2.5" SATA drive. I would like to have them in a RAID, but so far it seems I can RAID the PCIe drives or the SATA drives, not both? Can anyone confirm this?
Their specifications show that RAID 0 and 1 are supported, so is it only possible with dual SATA drives, or dual PCIe drives?
Thanks.
 
Hi,

As a general rule, any RAID setup (0, 1, whatever) can be used if ALL disks:
- use the same type of disk controller/interface (SATA, SAS, PCIe, etc.)
- have similar specifications (speed, I/O, block size, etc.)

So I guess you cannot use hardware RAID (which has poor performance on SOHO systems anyway) with PCIe + SATA!

Good luck!
 
