What filesystem should I use for my setup?

scootnoot

New Member
Mar 18, 2024
Hey all,

Kinda new to this and was wondering what file system I should use. I'm not too worried about RAID at the moment, so I can probably do with or without it.

1 x 500GB SSD

1 x 3TB HDD

128GB RAM
 
That's a short question and a small set of hardware. Nevertheless the answer could fill a chapter in a book easily...

Without knowing your preferences or plans (do you want to separate OS and data? Speed requirements? More disks to come? Is reliability really not a concern? Build a cluster later on? Which VMs will you install? ...?), I can only guess.

A ZFS fanboy would create a ZFS pool with the hard disk as a normal/single vdev and add the SSD as a "Special Device", carefully adjusting parameters like "special_small_blocks". I am not sure if the installer can do that, though. If either device fails, all data is gone - there is no device redundancy, which is a no-go for me personally.

I am not sure if that is really the best option for you. But it gives you the best filesystem available :-)
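
Roughly, on the command line that approach would look something like this (device names are just placeholders, and whether the installer can set it up for you is another question):

    # hypothetical device names: sda = the 3TB HDD, sdb = the 500GB SSD
    zpool create tank /dev/sda               # single-disk pool, no redundancy
    zpool add tank special /dev/sdb          # the SSD becomes the "Special Device"
    zfs set special_small_blocks=64K tank    # data blocks up to 64K land on the SSD

The 64K value is only a guess here - the whole point of "carefully adjusting" is to keep the 500GB special vdev from filling up.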
 
As an installation takes just a few minutes, why not try them all and give every filesystem a small hardcore test: e.g. run a fio write test and, WHILE it is running, pull the power cord (or hold the on/off button until it powers off, or hard-reset it from the console). Try to boot and see which filesystem is able to keep running; probably the best filesystem will fail hard here, but if reliability isn't on your list it may be an option :)
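
If you want to script the torture part, a fio run along these lines (all parameters just an example) keeps writes going while you yank the power:

    # write continuously to the filesystem mounted at /mnt/test for 10 minutes
    fio --name=powerfail --directory=/mnt/test --rw=write --bs=1M \
        --size=10G --ioengine=libaio --direct=1 --numjobs=4 \
        --time_based --runtime=600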
 
probably the best filesystem will fail hard here
Of course you are not thinking of ZFS here ;-)

In my personal experience, the era of damaged filesystems - when ext2fs was state of the art - is long gone. And yes, it was bad every time.

Any journaling filesystem should survive an unexpected power cut. (Possibly with data "in transit" lost..., but the filesystem itself stays intact - always.)
 
Right, but ZFS isn't a journaling filesystem - it's a copy-on-write one, which (at least in theory) should do even better, but in reality everybody should try for themselves what the theory is worth.
ZFS is the most feature-rich filesystem, but also the most sensitive.
 
I would like to add the result of an experiment I made just now:

My personal bad-on-purpose example: I have a Fantec USB3 case, some years old, with four WD Red drives in a RAIDZ2. When I pull the power cord (of the USB device, not the computer) I can create some distinct error situations. I repeated this several times, always writing data at full throttle to a test file, and each pass required manual intervention afterwards.

In all situations the pool got "suspended" and I needed to fight with "zpool import" for several minutes to get it imported again. It is not obvious whether I have to reboot the computer or not. Of course rebooting is the cleaner way - who knows the state of the OS-internal buffers after disconnecting/reconnecting a USB device like this? Sometimes one of those four disks is reported as "removed", sometimes none. A "zpool scrub" is always necessary, as "zpool status" reports several CHKSUM errors. The test files I was actively writing to got damaged - the data "in flight" was not written successfully, and those files were reported like "errors: Permanent errors have been detected in the following files: /tank/test/myfilename".

The pool was never lost - and this is the important fact!

Of course ZFS on such a USB device is NOT recommended. I do not use it actively; this is/was just a test setup. I was fine with the behaviour, and I would not expect any internal SATA drive or NVMe to behave worse than this fragile USB construct. Nevertheless...: YMMV!
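
For the record, the manual intervention was roughly along these lines each time (the exact sequence depends on the error state):

    zpool status -v tank    # pool reported as suspended, CHKSUM errors, damaged files listed
    zpool clear tank        # try to resume I/O once the USB disks are visible again
    zpool import -f tank    # after a reboot the pool had to be imported again
    zpool scrub tank        # then scrub and re-check "zpool status"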
 
I like the features which aren't integrated in other filesystems like ...

  1. Two dRAID spares for one vdev - https://github.com/openzfs/zfs/issues/16547
  2. Zpool spare in faulted state, but also online - https://github.com/openzfs/zfs/discussions/16524
  3. Single Spare disk attached in two places (used twice simultaneously) - https://github.com/openzfs/zfs/issues/16398
  4. Mirror seeing half the write IOPS on one disk than the other - https://discourse.practicalzfs.com/...n-one-disk-than-the-other-is-this-normal/1814
  5. The HDD volumes can significantly affect the write performance of SSD volumes on the same server node - https://github.com/openzfs/zfs/issues/16518
  6. simple rename of file changes performance/access behaviour - https://github.com/openzfs/zfs/issues/16490
  7. Issue importing pool - https://www.reddit.com/r/zfs/commen...ting_pool_cannot_import_ontario_io/?rdt=41808
  8. I/O error, unable to import pool - https://www.reddit.com/r/zfs/comments/1emf5xt/io_error_unable_to_import_pool/
  9. ZFS pool disappear after reboot - https://serverfault.com/questions/1163505/zfs-pool-disappear-after-reboot
  10. Truenas pool unable to import - https://www.reddit.com/r/truenas/comments/1ek4hop/truenas_pool_unable_to_import/
  11. zfs pool wont mount, all disks healthy - https://forum.proxmox.com/threads/zfs-pool-wont-mount-all-disks-healthy.152175/
  12. "No such pool or dataset" after one disk of a mirror has failed - https://www.reddit.com/r/zfs/comments/1ejsryg/no_such_pool_or_dataset_after_one_disk_of_a/
  13. zpool disappeared at reboot - https://superuser.com/questions/1850173/zpool-disappeared-at-reboot
Without ZFS the life of a unix/linux admin is much more boring, but with it you always get a funny and interesting one :)
 
When I pull the power cord (of the USB device, not the computer) I can create some distinct error situations.

I needed to fight with "zpool import" for several minutes to get it imported again.

This is the reason why I would suggest that no one run ZFS as their root. It is completely fine to have to intervene manually, but if it has to be at the filesystem level, I would like to already be booting into some OS. It's much nicer to only find some files corrupt on the host (those can easily be recovered from a backup on a whim). The patient should be the data pool, not the OS, as much as possible.

It is not obvious whether I have to reboot the computer or not. Of course rebooting is the cleaner way - who knows the state of the OS-internal buffers after disconnecting/reconnecting a USB device like this? Sometimes one of those four disks is reported as "removed", sometimes none.

There's also a weird way in which ZFS looks for (replaced) disks, depending on how they were referenced:
https://openzfs.github.io/openzfs-d...electing-dev-names-when-creating-a-pool-linux
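
For small pools the usual recommendation there is to reference disks by stable /dev/disk/by-id paths instead of /dev/sdX - a quick sketch (the IDs below are made up):

    zpool create tank mirror \
        /dev/disk/by-id/ata-WDC_WD30EFRX_SERIAL1 \
        /dev/disk/by-id/ata-WDC_WD30EFRX_SERIAL2
    zpool status -P tank    # -P prints the full device paths actually in use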

The pool was never lost - and this is the important fact!

On a long enough timeline ...
 
I like the features which aren't integrated in other filesystems like ...

And I thought I was the only one keeping lists.

Without ZFS the life of a unix/linux admin is much more boring, but with it you always get a funny and interesting one :)

I just feel like this forum is hard pro-ZFS based on anecdotal evidence, but how would we know of all the issues if we never used it actively ourselves?

(And I am already restraining myself from asking how come PLP did not save the scrub...)
 
(And I am already restraining myself from asking how come PLP did not save the scrub...)
Haha :) Still doing a weekly scrub on every RAID, regardless of whether it's on a HW controller, mdadm or a zpool.
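
The weekly routine is basically the same idea on both stacks, e.g. (md device and pool name are just examples):

    zpool scrub tank                              # ZFS: re-read all data and verify checksums
    zpool status tank                             # check scrub progress / errors
    echo check > /sys/block/md0/md/sync_action    # mdadm: kick off a consistency check
    cat /proc/mdstat                              # watch the check progress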
how would we know of all the issues if we never used it actively ourselves?
Yes, that's the reason to try every one of them, to know its pros and cons and how to work around them.
The way to know which one fits which use case (given the requirements, too) best is to run most of the common filesystems, as no single one fits all.
 
Without knowing your preferences or plans (do you want to separate OS and data? Speed requirements? More disks to come? ...)
I wanna install the Proxmox OS itself on the SSD, and maybe 1 or 2 VMs on it as well, with the rest on the 3TB HDD. I don't care about RAID at the moment but will implement it in the future when I get more drives.
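
From what I've read so far I guess the HDD side would end up something like this after the install (device name is a guess, no idea if this is the right way):

    zpool create hddpool /dev/sdb    # single-disk pool for now, no redundancy
    pvesm add zfspool hdd-storage --pool hddpool --content images,rootdir
    # later, when a second 3TB drive shows up, turn it into a mirror:
    # zpool attach hddpool /dev/sdb /dev/sdc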
 
