discard inside LXCs?

Dunuin
Distinguished Member
Hi,

Just tried fstrim -a as root inside my LXC, with this result:
Code:
fstrim: /: FITRIM ioctl failed: Operation not permitted

So it looks like doing a TRIM inside an unprivileged LXC isn't working. Do I need to set up discard/fstrim inside my LXC guests at all, or will PVE run fstrim for all LXC datasets at the host level?
All my LXCs are ZFS datasets on a thin-provisioned ZFS SSD pool or LVs on a thin-provisioned LVM-thin, so some kind of TRIM/discard should be required.

Edit:
Looks like fstrim -a is working on another unprivileged LXC. Can someone point me in the right direction as to why it isn't working with my other LXC?

Edit:
The LXC that isn't working uses LVM-thin; the one where it's working is stored on a ZFS pool.

Edit:
config of the LXC where fstrim won't work:
Code:
root@Hypervisor:~# pct config 121
arch: amd64
cores: 2
features: nesting=1
hostname: GraylogLXC
memory: 4096
nameserver: 192.168.42.1
net0: name=eth0,bridge=vmbr42,firewall=1,gw=192.168.42.1,hwaddr=XX:XX:XX:XX:XX:XX,ip=192.168.42.72/24,type=veth
ostype: debian
rootfs: LVMthin:vm-121-disk-0,mountoptions=noatime,size=100G
swap: 1024
unprivileged: 1

config of the LXC where fstrim works:
Code:
root@Hypervisor:~# pct config 126
arch: amd64
cores: 1
features: nesting=1
hostname: YoutubeDL
memory: 256
mp0: /media/YoutubeDL,mp=/mnt/YoutubeDL
nameserver: 192.168.42.1
net0: name=eth0,bridge=vmbr42,firewall=1,gw=192.168.42.1,hwaddr=XX:XX:XX:XX:XX:XX,ip=192.168.42.75/24,type=veth
net1: name=eth1,bridge=vmbr45,firewall=1,hwaddr=XX:XX:XX:XX:XX:XX,ip=192.168.45.15/24,type=veth
ostype: debian
rootfs: VMpool_VLT_VM:subvol-126-disk-0,mountoptions=noatime,size=8G
swap: 128
unprivileged: 1
lxc.idmap: u 0 100000 1103
lxc.idmap: g 0 100000 1103
lxc.idmap: u 1103 1103 1
lxc.idmap: g 1103 1103 1
lxc.idmap: u 1104 101104 64432
lxc.idmap: g 1104 101104 64432
 
hi,

have you also tried pct fstrim CTID from the PVE host?

here it works fine when i try trimming a disk on thin lvm.
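For what it's worth, the host-side trim suggested above can be looped over every container. A minimal sketch, assuming `pct list` prints a header row followed by one row per CT with the VMID in the first column:

```shell
# Trim every container from the PVE host in one pass.
# `awk 'NR>1 {print $1}'` skips the header row and keeps the VMID column.
for ctid in $(pct list | awk 'NR>1 {print $1}'); do
    pct fstrim "$ctid"
done
```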
 
LXCs 100/101/126/133 are unprivileged Debian 11 LXCs on a thin-provisioned ZFS pool.
LXC 121 is an unprivileged Debian 10 LXC on LVM-thin.
LXCs 100/126 have bind mounts with edited user remapping.

Running "pct fstrim CTID" on the host worked for the one LXC, 121, that is stored on the LVM-thin, but failed for all the LXCs stored on ZFS.
Code:
root@Hypervisor:~# pct fstrim 100
fstrim: /var/lib/lxc/100/rootfs/: the discard operation is not supported
command 'fstrim -v /var/lib/lxc/100/rootfs/' failed: exit code 1

root@Hypervisor:~# pct fstrim 101
fstrim: /var/lib/lxc/101/rootfs/: the discard operation is not supported
command 'fstrim -v /var/lib/lxc/101/rootfs/' failed: exit code 1

root@Hypervisor:~# pct fstrim 121
/var/lib/lxc/121/rootfs/: 81.2 GiB (87184261120 bytes) trimmed

root@Hypervisor:~# pct fstrim 126
fstrim: /var/lib/lxc/126/rootfs/: the discard operation is not supported
command 'fstrim -v /var/lib/lxc/126/rootfs/' failed: exit code 1

root@Hypervisor:~# pct fstrim 133
fstrim: /var/lib/lxc/133/rootfs/: the discard operation is not supported
command 'fstrim -v /var/lib/lxc/133/rootfs/' failed: exit code 1
So I guess fstrim inside the guest hadn't been working for LXC 121 for a while, if 81.2 GiB of a 100 GiB virtual disk got trimmed.

Running fstrim -v / inside the guests I get this:

LXC 100:
Code:
root@DockerDMZ:~# fstrim -v /
fstrim: /: the discard operation is not supported

LXC 101:
Code:
root@DockerIntranet:~# fstrim -v /
fstrim: /: the discard operation is not supported

LXC 121:
Code:
root@GraylogLXC:~# fstrim -v /
fstrim: /: FITRIM ioctl failed: Operation not permitted

LXC 126:
Code:
root@YoutubeDL:~# fstrim -v /
fstrim: /: the discard operation is not supported

LXC 133:
Code:
root@DokuWiki:~# fstrim -v /
fstrim: /: the discard operation is not supported

Shouldn't the datasets that the LXCs use as virtual disks support discard? At least the ZFS pool the datasets are stored on is using thin-provisioning.
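One possible reading of the "not supported" errors, not confirmed anywhere in this thread: ZFS datasets are not block devices and do not implement the FITRIM ioctl that fstrim relies on, so fstrim can never succeed on them; with ZFS, TRIM is issued at the pool level instead. A sketch, using the pool name VMpool from the thread:

```shell
# ZFS trims at pool level, not per dataset. Either trim manually...
zpool trim VMpool
# ...or let ZFS issue discards continuously as space is freed:
zpool set autotrim=on VMpool
# Show per-vdev TRIM progress/state:
zpool status -t VMpool
```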

Here are the dataset attributes of LXC 133's root virtual disk:
Code:
root@Hypervisor:~# zfs get all VMpool/VLT/VM/subvol-133-disk-1
NAME                             PROPERTY              VALUE                             SOURCE
VMpool/VLT/VM/subvol-133-disk-1  type                  filesystem                        -
VMpool/VLT/VM/subvol-133-disk-1  creation              Mon Nov 22 17:35 2021             -
VMpool/VLT/VM/subvol-133-disk-1  used                  918M                              -
VMpool/VLT/VM/subvol-133-disk-1  available             15.1G                             -
VMpool/VLT/VM/subvol-133-disk-1  referenced            918M                              -
VMpool/VLT/VM/subvol-133-disk-1  compressratio         2.01x                             -
VMpool/VLT/VM/subvol-133-disk-1  mounted               yes                               -
VMpool/VLT/VM/subvol-133-disk-1  quota                 none                              default
VMpool/VLT/VM/subvol-133-disk-1  reservation           none                              default
VMpool/VLT/VM/subvol-133-disk-1  recordsize            128K                              default
VMpool/VLT/VM/subvol-133-disk-1  mountpoint            /VMpool/VLT/VM/subvol-133-disk-1  default
VMpool/VLT/VM/subvol-133-disk-1  sharenfs              off                               default
VMpool/VLT/VM/subvol-133-disk-1  checksum              on                                default
VMpool/VLT/VM/subvol-133-disk-1  compression           lz4                               inherited from VMpool
VMpool/VLT/VM/subvol-133-disk-1  atime                 off                               inherited from VMpool
VMpool/VLT/VM/subvol-133-disk-1  devices               on                                default
VMpool/VLT/VM/subvol-133-disk-1  exec                  on                                default
VMpool/VLT/VM/subvol-133-disk-1  setuid                on                                default
VMpool/VLT/VM/subvol-133-disk-1  readonly              off                               default
VMpool/VLT/VM/subvol-133-disk-1  zoned                 off                               default
VMpool/VLT/VM/subvol-133-disk-1  snapdir               hidden                            default
VMpool/VLT/VM/subvol-133-disk-1  aclmode               discard                           default
VMpool/VLT/VM/subvol-133-disk-1  aclinherit            restricted                        default
VMpool/VLT/VM/subvol-133-disk-1  createtxg             3113446                           -
VMpool/VLT/VM/subvol-133-disk-1  canmount              on                                default
VMpool/VLT/VM/subvol-133-disk-1  xattr                 sa                                local
VMpool/VLT/VM/subvol-133-disk-1  copies                1                                 default
VMpool/VLT/VM/subvol-133-disk-1  version               5                                 -
VMpool/VLT/VM/subvol-133-disk-1  utf8only              off                               -
VMpool/VLT/VM/subvol-133-disk-1  normalization         none                              -
VMpool/VLT/VM/subvol-133-disk-1  casesensitivity       sensitive                         -
VMpool/VLT/VM/subvol-133-disk-1  vscan                 off                               default
VMpool/VLT/VM/subvol-133-disk-1  nbmand                off                               default
VMpool/VLT/VM/subvol-133-disk-1  sharesmb              off                               default
VMpool/VLT/VM/subvol-133-disk-1  refquota              16G                               local
VMpool/VLT/VM/subvol-133-disk-1  refreservation        none                              default
VMpool/VLT/VM/subvol-133-disk-1  guid                  18337437784834349132              -
VMpool/VLT/VM/subvol-133-disk-1  primarycache          all                               default
VMpool/VLT/VM/subvol-133-disk-1  secondarycache        all                               default
VMpool/VLT/VM/subvol-133-disk-1  usedbysnapshots       0B                                -
VMpool/VLT/VM/subvol-133-disk-1  usedbydataset         918M                              -
VMpool/VLT/VM/subvol-133-disk-1  usedbychildren        0B                                -
VMpool/VLT/VM/subvol-133-disk-1  usedbyrefreservation  0B                                -
VMpool/VLT/VM/subvol-133-disk-1  logbias               latency                           default
VMpool/VLT/VM/subvol-133-disk-1  objsetid              93844                             -
VMpool/VLT/VM/subvol-133-disk-1  dedup                 off                               default
VMpool/VLT/VM/subvol-133-disk-1  mlslabel              none                              default
VMpool/VLT/VM/subvol-133-disk-1  sync                  standard                          inherited from VMpool
VMpool/VLT/VM/subvol-133-disk-1  dnodesize             legacy                            default
VMpool/VLT/VM/subvol-133-disk-1  refcompressratio      2.01x                             -
VMpool/VLT/VM/subvol-133-disk-1  written               918M                              -
VMpool/VLT/VM/subvol-133-disk-1  logicalused           1.34G                             -
VMpool/VLT/VM/subvol-133-disk-1  logicalreferenced     1.34G                             -
VMpool/VLT/VM/subvol-133-disk-1  volmode               default                           default
VMpool/VLT/VM/subvol-133-disk-1  filesystem_limit      none                              default
VMpool/VLT/VM/subvol-133-disk-1  snapshot_limit        none                              default
VMpool/VLT/VM/subvol-133-disk-1  filesystem_count      none                              default
VMpool/VLT/VM/subvol-133-disk-1  snapshot_count        none                              default
VMpool/VLT/VM/subvol-133-disk-1  snapdev               hidden                            default
VMpool/VLT/VM/subvol-133-disk-1  acltype               posix                             local
VMpool/VLT/VM/subvol-133-disk-1  context               none                              default
VMpool/VLT/VM/subvol-133-disk-1  fscontext             none                              default
VMpool/VLT/VM/subvol-133-disk-1  defcontext            none                              default
VMpool/VLT/VM/subvol-133-disk-1  rootcontext           none                              default
VMpool/VLT/VM/subvol-133-disk-1  relatime              off                               default
VMpool/VLT/VM/subvol-133-disk-1  redundant_metadata    all                               default
VMpool/VLT/VM/subvol-133-disk-1  overlay               on                                default
VMpool/VLT/VM/subvol-133-disk-1  encryption            aes-256-gcm                       -
VMpool/VLT/VM/subvol-133-disk-1  keylocation           none                              default
VMpool/VLT/VM/subvol-133-disk-1  keyformat             passphrase                        -
VMpool/VLT/VM/subvol-133-disk-1  pbkdf2iters           350000                            -
VMpool/VLT/VM/subvol-133-disk-1  encryptionroot        VMpool/VLT                        -
VMpool/VLT/VM/subvol-133-disk-1  keystatus             available                         -
VMpool/VLT/VM/subvol-133-disk-1  special_small_blocks  0                                 default

And the config of LXC 133:
Code:
root@Hypervisor:~# pct config 133
arch: amd64
cores: 1
features: nesting=1
hostname: DokuWiki
memory: 512
nameserver: 192.168.43.1
net0: name=eth0,bridge=vmbr43,firewall=1,gw=192.168.43.1,hwaddr=XX:XX:XX:XX:XX:XX,ip=192.168.43.69/24,type=veth
ostype: debian
rootfs: VMpool_VLT_VM:subvol-133-disk-1,size=16G
swap: 512
unprivileged: 1

Edit:
I looked at the LVM-thin storage graphs, and now after the manual...
Code:
root@Hypervisor:~# pct fstrim 121
/var/lib/lxc/121/rootfs/: 81.2 GiB (87184261120 bytes) trimmed
...the LVM-thin usage is down from 100 to 85 GiB. According to the graphs there was never that much free space during the last month, so I guess there is no automatic trimming at all.

All my LXCs run this script daily via cron, which worked fine for all my Linux VMs:
Code:
cat /etc/cron.daily/daily_trim.sh
#!/bin/sh
#
# To find which filesystems support TRIM, we check that DISC-MAX (discard max
# bytes) is greater than zero. Check the discard_max_bytes documentation at
# https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt
#
# Copy script to /etc/cron.daily or /etc/cron.weekly
#
for fs in $(lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE | grep -E '^/.* [1-9]+.* ' | awk '{print $1}'); do
        fstrim "$fs"
done
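The script's detection pipeline can be sanity-checked against canned `lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE` output (the device rows below are made up); only mountpoints whose DISC-MAX is non-zero should survive the filter. Note that ZFS datasets are not block devices and never appear in lsblk at all, so the script silently skips them:

```shell
# Dry run of the cron script's filter on canned lsblk output.
sample='MOUNTPOINT DISC-MAX FSTYPE
/          512M     ext4
/boot      0B       ext4
/data      2G       xfs'
# The grep keeps rows that start with a mountpoint and have a non-zero
# DISC-MAX; the awk prints just the mountpoint column.
echo "$sample" | grep -E '^/.* [1-9]+.* ' | awk '{print $1}'
# prints "/" and "/data"; /boot (DISC-MAX 0B) is filtered out
```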

So what would be the correct way to get discard working for my LXCs?
I never had any problem with my VMs.
 
