Right filesystem for mini PC (home lab)

crc-error-79

Hello everybody,
I need help choosing the right file system for a new testing cluster.

I am going to install Proxmox on 3 mini PCs (1 NVMe each, with standard non-ECC DDR4 memory).
This cluster will mainly run some VMs to learn Kubernetes (RKE2 and Longhorn) and, since (at least at the beginning) I will make a lot of mistakes, the entire system will have daily backups and many, many snapshots.
So nothing mission-critical.

Which file system should I consider?

- ext4?
- zfs?
- btrfs?

I am using the last one on my notebook and have had no issues, but I am not sure how well it works with Proxmox.
 
When running a cluster you probably want a shared filesystem (NFS/Ceph) or a replicated local filesystem (ZFS) to be able to easily migrate the VMs and/or use HA.
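For example, with local ZFS the replication is configured per guest; a rough sketch using pvesr (VMID 100 and the target node name "pve2" are placeholders):

    # replicate guest 100 to node pve2 every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule "*/15" --comment "lab replication"

    # check the replication state across the cluster
    pvesr status

With that in place, migration between those nodes only has to transfer the delta since the last replication run.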
 
Thank you, I hadn't considered that option.
And for the Proxmox installation itself? Is ext4 OK, or should I consider a more "sophisticated" filesystem like ZFS (even though I don't have ECC memory) or Btrfs?
 
Btrfs support is still experimental, so it wouldn't be my first choice. Whether you want ext4+LVM-Thin or ZFS depends on how much you care about your data and which features you need. ZFS would allow you to use replication, but it also has far more overhead, which means higher SSD wear and lower performance. Enterprise SSDs with power-loss protection would be highly recommended, as well as the ECC memory you already mentioned.
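If you do go the ZFS route on consumer NVMe, it helps to watch the wear indicators and to cap the ARC so it doesn't take RAM away from the VMs. A minimal sketch (the device name and the 2 GiB limit are just examples, adjust for your hardware):

    # check NVMe wear indicators (needs the smartmontools package)
    smartctl -a /dev/nvme0 | grep -iE "percentage used|data units written|available spare"

    # cap the ZFS ARC at 2 GiB (value in bytes)
    echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
    update-initramfs -u

The ARC limit takes effect after a reboot (the initramfs refresh matters if the root filesystem itself is on ZFS).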
 
That's why I went the ZFS route: it has the best feature set and it counts as "shared", at least in the PVE context. But it comes with some drawbacks!
That doesn't contradict anything I said. I was replying to someone who said they were just using LVM+ext4, and I pointed out that with such a configuration they would not be able to use HA.
I'm using ZFS myself for the shared storage, but not for the root OS file system. In fact, I started questioning my decision, and that is how I ended up in this forum thread. I question whether I'm using it properly, because the reality is that none of my LXCs properly fail over when a node has a problem, so I may have the overhead of ZFS and none of the high-availability features.
 
I question whether I'm using it properly, because the reality is that none of my LXCs properly fail over when a node has a problem, so I may have the overhead of ZFS and none of the high-availability features.
This sounds like something warranting a new thread for troubleshooting. LXCs should be restarted on another host in case of an HA event.
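As a first check, make sure the containers are actually registered as HA resources; a minimal sketch (VMID 101 is a placeholder):

    # list the HA resources the cluster currently manages
    ha-manager status

    # register container 101 so it gets restarted on another node
    ha-manager add ct:101 --state started --max_restart 2 --max_relocate 2

Keep in mind that failover of a guest on local ZFS only works if a replication job to the other nodes exists, and an unplanned failover loses whatever changed since the last replication run.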