Is a Discard/Trim Issued When Removing an LV from a ThinPool?

On PVE 8.1, I have a thin pool on NVMe with a few containers.

I deleted one of the containers (via the GUI) and now I am not sure whether the space used by the LV was discarded/trimmed. I can't issue an `fstrim` unless it is against a directory, nor can I issue a `pct fstrim` against the CT, because it has been deleted.

lvm.conf is default:
Code:
issue_discards = 0
thin_pool_discards = "passdown"
thin_disabled_features = [ "discards", "block_size" ]

I suspect the trim wasn't issued, because I'm seeing degraded performance on the NVMe (everything is fine according to smartctl), and I deleted the CT precisely because the thin pool had filled up.

Does `lvremove` issued by pve handle discarding/trimming? If not, what is the solution to trim the unused space in the thin pool?
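For reference, this is roughly how I've been checking whether the space at least made it back to the pool; the VG/pool names below are placeholders for my setup, and I'm not 100% sure I'm reading the lvs fields right:
Code:
# Did the thin pool's data usage drop after the lvremove?
# ("pve" and "data" are placeholder VG / thin pool names)
lvs -a -o lv_name,lv_size,data_percent,metadata_percent pve

# Show the discards mode set on the thin pool (expecting "passdown")
lvs -o+discards pve/data

# For containers that still exist, trim from the host side
pct fstrim <vmid>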
 
Does `lvremove` issued by pve handle discarding/trimming? If not, what is the solution to trim the unused space in the thin pool?
I'd be interested to know this as well. I have a LVM storage backed by an iSCSI LUN on a Synology target that supports space reclamation and I don't see any evidence that deletions are being passed.
 
I'd be interested to know this as well. I have a LVM storage backed by an iSCSI LUN on a Synology target that supports space reclamation and I don't see any evidence that deletions are being passed.
For QEMU VMs, I've seen a commit which does just that. For LXC CTs, one day we will get a reply.
 
I'm kind of wondering why issue_discards = 0 by default. The documentation (the comments in lvm.conf) doesn't say what the possible values mean, but my assumption is that 0 means lvremove will NOT issue a discard for the removed volume's blocks, which matches what I think I'm seeing.

I'd be a bit nervous about just changing this to see what happened, though.
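For what it's worth, my reading of man lvm.conf (which could be off) is that issue_discards only controls whether LVM sends discards to the underlying PVs when an LV stops using that space, e.g. on lvremove or lvreduce, with 0 = off and 1 = on. Enabling it would look something like this:
Code:
# /etc/lvm/lvm.conf
devices {
    # Send discards to an LV's underlying PVs when the LV no longer
    # uses the space, e.g. after lvremove / lvreduce.
    # 0 = don't issue discards (default), 1 = issue discards
    issue_discards = 1
}

My understanding (happy to be corrected) is that this only matters when PV extents are actually freed, e.g. when removing a regular LV or the whole pool; removing a thin LV just hands its blocks back to the pool, so issue_discards wouldn't change anything there.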
 
I'm kind of wondering why issue_discards = 0 by default. The documentation (the comments in lvm.conf) doesn't say what the possible values mean, but my assumption is that 0 means lvremove will NOT issue a discard for the removed volume's blocks, which matches what I think I'm seeing.

I'd be a bit nervous about just changing this to see what happened, though.
Discarding is different from trimming. I don't think issue_discards = 0 is related; in fact, discarding is very rarely recommended, while trimming is nearly always recommended.
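Roughly what I mean by the distinction (the paths and device names here are just examples):
Code:
# "Trim" = filesystem level: tell the device which blocks the mounted
# filesystem no longer uses (safe, non-destructive)
fstrim -v /mnt/mydata

# "Discard" at the block level: tell the device an entire device/range is
# unneeded -- this throws away any data still on it, so only for unused LVs
blkdiscard /dev/myvg/old_lv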
 
I guess I don't really understand the terminology being used: "trim", "discard" and "unmap" all seem to be used semi-interchangeably.

In my case, I'm expecting SCSI "unmap" operations to be sent over iSCSI for a LUN region when a logical volume is deleted at the Proxmox level, and I don't think that's happening. The documentation seems to say this is controlled by issue_discards in lvm.conf, but for some reason that's not enabled by default.

There's another layer: a logical volume mapped as a VM or container disk can have "discard" enabled, which results in a lot of traffic when individual files are deleted in the guest, because it sends individual trim/unmap operations for every file operation. I'm not looking at that for now, but it seems reasonable that it should be disabled and replaced by periodic "fstrim" runs that batch up the trim/unmap operations sent to the underlying storage. In any case, that by definition isn't going to affect lvremove operations performed by the host.

Does that sound right?
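For concreteness, by periodic "fstrim" I mean something like the following (command names assume the standard PVE tooling and a systemd-based guest, so adjust as needed):
Code:
# Inside a VM or CT: weekly batched trim instead of continuous discard traffic
systemctl enable --now fstrim.timer

# From the Proxmox host, for containers
pct fstrim <vmid>

# From the Proxmox host, for VMs with the QEMU guest agent running
qm guest cmd <vmid> fstrim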
 