Proxmox simple with BTRFS

allebone

Hello,

I am new to Proxmox but have used btrfs extensively, so I wanted to use btrfs for all volumes everywhere. I'm a simple homelabber with old PCs and old disks lying around, so I wanted a solution that simply works with old disks and Proxmox. The reason for selecting btrfs is that disks can be of any size and type, mismatched, even plugged in via USB, and btrfs will happily use them in any way it can, no problem. The amount of usable space in btrfs RAID1 is simply the total size of all the mismatched disks * 0.5 (as long as the largest disk is no bigger than the rest combined, since every chunk needs a second copy on a different device).
As I had 3 disks for backup purposes (1TB, 512GB and 512GB), the total usable size is 1TB. I also wanted to use compression so backups save disk space beyond the chunk deduplication PBS already does.
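
If you want to sanity-check that maths, here is a trivial sketch (the disk sizes are just my set; swap in your own):

Code:
#!/bin/bash
# btrfs RAID1 stores two copies of every chunk on two different devices,
# so usable space is roughly half the raw total, provided the largest
# disk is no bigger than all the others combined.
disks=(1024 512 512)   # sizes in GB: 1TB + 512GB + 512GB
total=0
for d in "${disks[@]}"; do total=$((total + d)); done
echo "Raw: ${total}GB, usable as RAID1: $((total / 2))GB"   # 2048GB raw, 1024GB usable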

Here is what I found with regard to Proxmox. Some of its default settings are not ideal and should be changed when using btrfs.

Setup: A NAS that holds my VMs, connected over 2.5G RJ45 networking. This is used for shared storage and runs a btrfs RAID1 pool of 3 disks. This is the shared storage the Proxmox nodes connect to to find and run the VMs. It can be any NAS: OMV, Unraid, Debian with SMB and so on. It simply needs to be accessible from all nodes.
As the network carrying the shared storage is only 2.5 gigabit, SMB shares were used instead of iSCSI or NFS. There is no benefit IMHO otherwise, and SMB is simple and permissions based, which I prefer. Setting up a NAS is pretty simple and I expect most of you already have one.
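
If you prefer the shell over the GUI, adding the SMB share to PVE looks roughly like this (the storage name, IP, share and credentials here are placeholders for my setup, adjust to yours):

Code:
# register the NAS SMB share as shared storage for VM disks and ISOs
pvesm add cifs nas-vms --server 192.168.1.10 --share vms \
    --username proxmox --password 'yourpassword' --content images,iso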

Nodes: I have only 1 node for now, an old 2013 Mac Pro with a Xeon and 64GB of ECC RAM. cat /proc/cpuinfo identifies the CPU as "Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz". This is adequate for my current needs of running 3 VMs on my network.

During installation of Proxmox I selected the entire built-in disk (a 256GB NVMe) and btrfs.
On completion of the install, the command

Code:
btrfs filesystem usage -T /

showed suboptimal settings. Metadata was "single", which means that a metadata error, which can happen in various ways, can corrupt the entire filesystem and potentially leave it unmountable. It is not advisable to leave it configured this way; metadata on btrfs should be set to DUP at a bare minimum so a second copy exists to repair from.
I converted the entire disk to DUP, as the local disk has no job other than booting Proxmox VE, so space is no concern:

Code:
btrfs balance start -dconvert=dup -mconvert=dup /

Running the btrfs filesystem usage command again will now show DUP, crucially for metadata, which is the important part. I would recommend changing the Proxmox installer to make metadata DUP by default if btrfs continues to be supported.

Once this was done I added the 3 disks in RAID1 to give me a place to back up Proxmox to, and installed PBS with apt-get install proxmox-backup proxmox-backup-server. I had to add a couple of repositories to sources.list to complete this, but by and large the setup was pretty basic: a normal Proxmox VE setup with PBS running on the same box, plus the shared SMB location for the VMs.
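
For reference, the extra repository I mean is the PBS no-subscription repo, along these lines (this assumes a Debian 12 "bookworm" based install; adjust the codename to your version):

Code:
# /etc/apt/sources.list.d/pbs.list
deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription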

For the backup location I listed the 3 disks with lsblk, then wiped them and created the pool:

Code:
mkfs.btrfs -m raid1 -d raid1 -L My-Storage -f /dev/sdb /dev/sdc /dev/sdd

Then I created a mount point and read the UUID off the first disk (every device in the pool carries the same filesystem UUID, so one disk is enough):

Code:
mkdir /mnt/my-storage
blkid /dev/sdb

and mounted the pool via fstab:

Code:
UUID=TypeUUIDFromblkidHere /mnt/my-storage        btrfs   defaults,compress=lzo,discard=async,space_cache=v2      0       2

followed by mount -a && systemctl daemon-reload to pick up the change.
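
A quick check that the pool is mounted with the options you expect:

Code:
findmnt /mnt/my-storage
btrfs filesystem usage -T /mnt/my-storage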

For backup you just log in to PBS and add a datastore with the absolute path, e.g. /mnt/my-storage, set prune options and so on.
Add this datastore to PVE as backup storage and then set a schedule as normal.
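
The datastore can also be created from the shell with proxmox-backup-manager if you prefer (the name "my-storage" is just what I called mine):

Code:
proxmox-backup-manager datastore create my-storage /mnt/my-storage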

Compression success varies, but I have a total disk allocation of 92G + 166G + 76G + 41G = 375GB across the virtual disks of the 3 VMs. The backups consumed 280GB after a few test runs, roughly a 25% saving, which is in line with expected results.

Regarding maintaining btrfs (a failed disk in the backup set and so on), replacing a disk is simple, e.g.:

Code:
btrfs device remove /dev/sdc /mnt/my-storage/

(power off the server and swap the disk)

Code:
btrfs device add -f /dev/sdc /mnt/my-storage/

then run a balance and a scrub.
If the disk is completely dead, use btrfs replace instead of remove and add. And if the pool has only 2 disks you cannot use remove and add at all, since removing one would drop below the minimum RAID1 device count; replace is the only option there.
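
For that dead-disk case, the replace syntax looks like this (device names are examples; if the failed disk no longer shows up at all, reference it by its devid from btrfs filesystem show instead of the device path):

Code:
# rebuild onto the new disk from the surviving RAID1 copies
btrfs replace start /dev/sdc /dev/sde /mnt/my-storage
# or by devid if /dev/sdc is gone entirely, e.g.: btrfs replace start 2 /dev/sde /mnt/my-storage
btrfs replace status /mnt/my-storage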

For maintenance of btrfs volumes I have a weekly cron job which runs this script:
Code:
#!/bin/bash
LOG=/mnt/my-storage/logs/BalanceScrub.log
mkdir -p /mnt/my-storage/logs

# Start a fresh log. The scrub status here reports the outcome of
# LAST week's scrub, because the scrub started further down runs in
# the background and will not have finished by the end of this script.
date > "$LOG"
btrfs scrub status /mnt/my-storage >> "$LOG"
btrfs fi show /mnt/my-storage >> "$LOG"
btrfs filesystem usage -T /mnt/my-storage >> "$LOG"

# Rewrite data/metadata chunks that are under 75% full, turning
# half-empty chunks back into unallocated space.
echo ----startbalance---- >> "$LOG"
btrfs balance start -v -dusage=75 -musage=75 /mnt/my-storage >> "$LOG"
echo ----endbalance---- >> "$LOG"

# Kick off this week's scrub; it runs asynchronously.
echo ----startscrub---- >> "$LOG"
btrfs scrub start /mnt/my-storage >> "$LOG"
echo ----endscrub---- >> "$LOG"

btrfs fi show /mnt/my-storage >> "$LOG"
btrfs filesystem usage -T /mnt/my-storage >> "$LOG"
date >> "$LOG"
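
To schedule it weekly I just drop a line into root's crontab; the script path and time below are placeholders, put it wherever you keep your scripts:

Code:
# m h dom mon dow  command  (every Sunday at 03:00)
0 3 * * 0 /root/BalanceScrub.sh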

This ensures any bitrot or corrupted files are self healed. Note that scrub can only repair corruption when a redundant copy exists (RAID1 data, DUP or RAID1 metadata), which is why the profiles above matter. You can also configure your scripts to send a mail when they run if you prefer.
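
For example, if the node already has working outbound mail, a line like this at the end of the script would do it (the address is a placeholder):

Code:
mail -s "BalanceScrub report" admin@example.com < /mnt/my-storage/logs/BalanceScrub.log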

Just thought I would share this in case anyone else wants to use btrfs and needs help. If you have any questions please ask :)

-P
 
Looks good, but I always use "compress=zstd" because it has a better compression ratio than lzo and is faster too. :)
 

Yes, the choice depends on need. I use lzo as its memory requirement is lower and I prefer to conserve memory for the VMs (I installed PBS on the same box as PVE). You can choose a different option if it fits your use better. I should note this is also an additional reason for using btrfs: it has lower memory requirements than ZFS, so there are no unexpected results in a simpler setup.
 