Help choosing filesystem

toxic

Active Member
Aug 1, 2020
Hello,
I have a small homelab cluster running a single HA VM for Home Assistant, as my lights depend on its zigbee2mqtt plugin. Besides that one VM I mostly run CTs, but nothing of critical importance.
I had a Ceph setup, but on my commodity hardware and gigabit network it was dead slow, even for Home Assistant alone. As my hardware is reliable enough, I don't really need HA, only the ability to migrate a VM so I can take a node down for some downtime when I need to work on it or on the electrical system...

So I ditched Ceph and was considering lukewarm replication to hopefully speed up migration. I reinstalled the cluster on btrfs, just to discover that the replication feature isn't available for btrfs.
I also discovered something very strange with btrfs: my IP camera recorder (Frigate) is set up to store all its recordings on a 250 GB disk image on the local-btrfs storage, and the recordings rotate frequently. I already had several instances of / filling up on my PVE host. Looking with btdu, I found 500 GB of unreachable space occupied by that disk image, where Frigate rotates its recordings... The only solution for me was to delete some files on the host and then run btrfs defrag on the disk image, which gave me back hundreds of GB of free space.
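In case it helps others hitting the same issue, the recovery steps above could look roughly like this. The image path and mount point are illustrative (local-btrfs usually lives under /var/lib/pve/local-btrfs, but check your storage config):

```shell
# Interactively profile what is actually occupying space on the btrfs mount
btdu /

# Quick non-interactive view of allocated vs. used space
btrfs filesystem df /

# Defragment the disk image to release space pinned by stale CoW extents.
# Note: if snapshots reference the file, defrag can temporarily *increase*
# usage by breaking shared extents.
btrfs filesystem defragment -v \
    /var/lib/pve/local-btrfs/images/100/vm-100-disk-0.raw
```

A commonly suggested mitigation for VM images on btrfs is disabling CoW for the images directory (chattr +C on an empty directory, affecting newly created files only), at the cost of losing checksumming for those files.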

In the past I didn't observe this behavior on ext4.

Now I'm considering reinstalling the cluster with ZFS to finally get the lukewarm replication for my Home Assistant VM, but I'm a bit afraid it would have the same lost-space issue for this disk image whose content rotates frequently.

I've made peace with running ZFS on commodity hardware and SSDs. I will monitor the TBW, and it probably won't be much worse than btrfs, especially with frequent defrags...
I will also not do RAID or redundancy, as I spent my money and drives on backups instead of RAID/HA that I barely need.

I'm looking for advice, as I'm still wondering whether ext4 isn't the best option after all, even if migrating a VM to do maintenance on a node will take ages over gigabit without lukewarm replication...

Any opinion is welcome, and thanks a lot for reading!
 
With ZFS, to get thin provisioning working you must tick the "Thin provision" box under Datacenter → Storage → local-zfs. Then make sure your VM uses the VirtIO SCSI single controller and that Discard is checked on every disk of the VM. That should help keep / from filling up.
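The same settings can be applied from the CLI. A sketch, assuming VMID 100, storage local-zfs, and a disk named vm-100-disk-0 on scsi0 (adjust to your setup):

```shell
# Enable thin provisioning (sparse zvols) on the ZFS storage
pvesm set local-zfs --sparse 1

# Switch VM 100 to the VirtIO SCSI single controller
qm set 100 --scsihw virtio-scsi-single

# Re-attach the disk with discard enabled
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Inside the guest, trim freed blocks so the zvol shrinks
fstrim -av
```

With discard enabled and periodic fstrim in the guest (most distros ship an fstrim.timer), space freed by rotating recordings should be returned to the pool instead of accumulating.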
 
