pfSense VM with UFS: disk fills up until reboot

rakali

Hello,

I recently moved to running pfSense in a VM on pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve). My Proxmox storage is ZFS, so using VirtIO block storage for the VM creates a zvol. Inside pfSense I use UFS as the filesystem, to avoid ZFS-on-ZFS and the write amplification that comes with it.

This leads to a relatively fast exhaustion of available storage, but with no visible files accounting for it. Rebooting the VM frees the space until it builds up again.

Currently my files use 4.8G of disk space, but df reports 26G used. That number keeps growing until pfSense breaks because it can no longer write to disk. The VM's disk is assigned 128G.

pfSense:
Code:
[2.7.2-RELEASE][root@pfSense.spacelab]/root: du -h -d1 /
4.0K    /.snap
101M    /boot
4.5K    /dev
 16M    /rescue
4.0K    /proc
 96M    /root
2.1G    /var
4.0K    /media
 12K    /conf.default
4.0K    /mnt
 13M    /tmp
4.6M    /sbin
 17M    /lib
4.0K    /net
 15M    /cf
164K    /libexec
8.0M    /etc
1.4M    /bin
1.8G    /usr
 20K    /home
1.1M    /tftpboot
4.8G    /


Code:
[2.7.2-RELEASE][root@pfSense.spacelab]/root: df -hi
Filesystem                     Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/ufsid/659ec5d8d33f9720    120G     26G     85G    23%     55k   16M    0%   /
devfs                          1.0K      0B    1.0K     0%       0     0     -   /dev
/dev/vtbd0p1                   260M    1.3M    259M     1%       2   510    0%   /boot/efi
tmpfs                          4.0M    192K    3.8M     5%      56   14k    0%   /var/run
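
The du/df gap above is typical of files that were unlinked while a process still holds them open: du no longer sees a directory entry, but df keeps counting the blocks until the last descriptor closes, which a reboot forces. A minimal, portable sketch of the mechanism (not pfSense-specific, just illustrating the effect):

```shell
# Create a file, hold it open, then unlink it.
tmpf=$(mktemp)
dd if=/dev/zero of="$tmpf" bs=1048576 count=8 2>/dev/null
exec 3<"$tmpf"      # descriptor 3 keeps the inode alive
rm "$tmpf"          # directory entry gone: du can no longer see it
[ ! -e "$tmpf" ] && echo "unlinked, but blocks still charged until the fd closes"
exec 3<&-           # closing the descriptor finally releases the space
```

On FreeBSD/pfSense, `fstat` (in the base system) or `lsof +L1` (after `pkg install lsof`; `+L1` filters for link count below 1, i.e. unlinked-but-open files) can help identify which process is holding such files.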

Proxmox:

Code:
root@pvepbs:~# zfs get all rpool/data/vm-100-disk-1
NAME                      PROPERTY              VALUE                     SOURCE
rpool/data/vm-100-disk-1  type                  volume                    -
rpool/data/vm-100-disk-1  creation              Mon Jan  8 13:18 2024     -
rpool/data/vm-100-disk-1  used                  33.5G                     -
rpool/data/vm-100-disk-1  available             684G                      -
rpool/data/vm-100-disk-1  referenced            33.5G                     -
rpool/data/vm-100-disk-1  compressratio         3.58x                     -
rpool/data/vm-100-disk-1  reservation           none                      default
rpool/data/vm-100-disk-1  volsize               128G                      local
rpool/data/vm-100-disk-1  volblocksize          16K                       default
rpool/data/vm-100-disk-1  checksum              on                        default
rpool/data/vm-100-disk-1  compression           on                        inherited from rpool
rpool/data/vm-100-disk-1  readonly              off                       default
rpool/data/vm-100-disk-1  createtxg             2282                      -
rpool/data/vm-100-disk-1  copies                1                         default
rpool/data/vm-100-disk-1  refreservation        none                      default
rpool/data/vm-100-disk-1  guid                  7625546012721003177       -
rpool/data/vm-100-disk-1  primarycache          all                       default
rpool/data/vm-100-disk-1  secondarycache        all                       default
rpool/data/vm-100-disk-1  usedbysnapshots       0B                        -
rpool/data/vm-100-disk-1  usedbydataset         33.5G                     -
rpool/data/vm-100-disk-1  usedbychildren        0B                        -
rpool/data/vm-100-disk-1  usedbyrefreservation  0B                        -
rpool/data/vm-100-disk-1  logbias               latency                   default
rpool/data/vm-100-disk-1  objsetid              2013                      -
rpool/data/vm-100-disk-1  dedup                 off                       default
rpool/data/vm-100-disk-1  mlslabel              none                      default
rpool/data/vm-100-disk-1  sync                  standard                  inherited from rpool
rpool/data/vm-100-disk-1  refcompressratio      3.58x                     -
rpool/data/vm-100-disk-1  written               33.5G                     -
rpool/data/vm-100-disk-1  logicalused           119G                      -
rpool/data/vm-100-disk-1  logicalreferenced     119G                      -
rpool/data/vm-100-disk-1  volmode               default                   default
rpool/data/vm-100-disk-1  snapshot_limit        none                      default
rpool/data/vm-100-disk-1  snapshot_count        none                      default
rpool/data/vm-100-disk-1  snapdev               hidden                    default
rpool/data/vm-100-disk-1  context               none                      default
rpool/data/vm-100-disk-1  fscontext             none                      default
rpool/data/vm-100-disk-1  defcontext            none                      default
rpool/data/vm-100-disk-1  rootcontext           none                      default
rpool/data/vm-100-disk-1  redundant_metadata    all                       default
rpool/data/vm-100-disk-1  encryption            off                       default
rpool/data/vm-100-disk-1  keylocation           none                      default
rpool/data/vm-100-disk-1  keyformat             none                      default
rpool/data/vm-100-disk-1  pbkdf2iters           0                         default
rpool/data/vm-100-disk-1  snapshots_changed     Mon Feb 12  9:32:58 2024  -
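
One sanity check on the numbers above: logicalused of 119G against used of 33.5G is just ZFS compression at work, and the ratio matches the reported compressratio of 3.58x to within rounding:

```shell
# logicalused / used should roughly reproduce compressratio
echo "119 33.5" | awk '{printf "%.2fx\n", $1/$2}'
```

So the zvol accounting itself is internally consistent; the open question is why its logical usage is so far above what the guest's du reports.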
Code:
root@pvepbs:~# cat /etc/pve/qemu-server/100.conf
balloon: 0
bios: ovmf
boot: order=ide2;virtio0
cores: 6
cpu: host,flags=+aes
efidisk0: local-zfs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:01:00,pcie=1
hostpci1: 0000:03:00,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 16384
meta: creation-qemu=8.1.2,ctime=1704716297
name: pfsense
numa: 0
onboot: 1
ostype: other
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=dd5d2b62-e3a4-4da9-b796-e8c97bfb8358
sockets: 1
vga: qxl
virtio0: local-zfs:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=128G
vmgenid: babe5976-8e38-42cb-9d4b-4f14f6e2cdfa
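
One thing possibly worth checking, given discard=on on virtio0 above: the zvol only shrinks when the guest actually issues TRIM, and on FreeBSD UFS that requires the TRIM flag to be set on the filesystem. A hedged sketch (the ufsid device is the one from my df output; tunefs cannot modify a filesystem mounted read-write, so the enable step would have to be done from single-user mode or with / mounted read-only):

```shell
# Show current UFS parameters; look for the "trim: (-t)" line
tunefs -p /dev/ufsid/659ec5d8d33f9720

# Enable TRIM so freed blocks are passed down to the zvol
tunefs -t enable /dev/ufsid/659ec5d8d33f9720
```

Note this only affects host-side space reclamation on the zvol, not the guest-side du/df gap.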


Code:
root@pvepbs:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     37.1G   684G   104K  /rpool
rpool/ROOT                2.46G   684G    96K  /rpool/ROOT
rpool/ROOT/pve-1          2.46G   684G  2.46G  /
rpool/data                33.4G   684G    96K  /rpool/data
rpool/data/vm-100-disk-0    84K   684G    84K  -
rpool/data/vm-100-disk-1  33.4G   684G  33.4G  -
rpool/pbs_datastore         96K   600G    96K  /rpool/pbs_datastore
rpool/var-lib-vz          1.13G   684G  1.13G  /var/lib/vz

I have been discussing this on the pfSense forum (thread below), but perhaps an experienced person here can offer a suggestion or two?
https://forum.netgate.com/topic/186...comes-full-please-help-identify-the-culprit/8

Thanks!
 
