[SOLVED] LVM `issue_discards` is disabled after upgrade from 7 to 8.

Hi,

I'm not sure if this is an "issue" per se, but I noticed that during the upgrade the `issue_discards = 1` config line was removed from lvm.conf and the new default is `issue_discards = 0`. I also could not find the setting anywhere else in /etc/lvm, for example in lvmlocal.conf.

Is this an issue and/or intended? In most clusters I don't imagine it'll cause an immediate problem, but it could become one long-term on SSDs.
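
As a side note, a quick way to check whether the underlying disks even advertise discard support (non-zero DISC-GRAN / DISC-MAX columns) should be something like:

Bash:
lsblk --discard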

These questions are also likely relevant:
1) Has anyone else encountered this? It's possible this is unique to me.
2) Is this file the same on a fresh 8.x install, or is this unique to upgrades? If someone has a fresh install to look at, it might save some work; otherwise I might try a fresh install later.
3) Is there something else going on that might make this a non-issue? (i.e., an LVM behavior change that's not documented in lvm.conf)

Config file diff during `dist-upgrade`:

Diff:
        # Configuration option devices/issue_discards.
        # Issue discards to PVs that are no longer used by an LV.
@@ -286,7 +379,8 @@
        # benefit from discards, but SSDs and thinly provisioned LUNs
        # generally do. If enabled, discards will only be issued if both the
        # storage and kernel provide support.
-       issue_discards = 1
+       # This configuration option has an automatic default value.
+       # issue_discards = 0

I've attached full config files for reference:
- lvm.conf - the LVM config installed as part of the upgrade (I opted to install the package maintainer's version)
- lvm.conf.bak - the LVM config from prior to the upgrade.
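
For anyone wanting to reproduce a similar comparison on their own box, something along these lines should do (lvm.conf.bak is just my manual backup of the pre-upgrade file):

Bash:
diff -u /etc/lvm/lvm.conf.bak /etc/lvm/lvm.conf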
 
did you by chance get prompted about config file changes and select to use the maintainer's version?
 
I had a chance to do a fresh Proxmox-VE install (inside a VM, if that makes any difference). The default is to leave issue_discards set to 0 (off). The install used proxmox-ve-8.0-2.iso.

Bash:
root@pve-test:~# lvmconfig --typeconfig full | grep issue_discards
        issue_discards=0
       
root@pve-test:~# grep issue_discards /etc/lvm/lvm.conf
        # Configuration option devices/issue_discards.
        # issue_discards = 0
 
You know... when I installed 7.4 (from proxmox-ve_7.4-1.iso), the default was to set issue_discards to 0 as well. This system was originally a 5.x install and I've done several upgrades since, so I'm wondering if it was a setting from a long time ago. I don't believe it's anything I tuned myself.
 
yes, it defaults to 0. I am not sure whether the Debian package had a different default at some point in the past, that would be possible. for PVE it doesn't make much difference - the option is for discarding segments when a regular LV is removed (and thus, space on a PV becomes "free" again). this rarely happens in PVE, unless you are using regular LVM and not LVM thin.
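
as a concrete (hypothetical) example of the case it covers - removing a regular, non-thin LV frees extents on a PV, and only then would LVM send discards for them (and only if issue_discards = 1 and the device supports it):

Bash:
# hypothetical VG/LV names, just to illustrate
lvcreate -L 10G -n testlv somevg
# freeing these extents on the PV is what issue_discards acts on
lvremove somevg/testlv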
 
For completeness, a fresh install from the 6.4-1 ISO (which installed the proxmox-ve:6.3-1 package) had issue_discards set to 1.

Bash:
root@pve-test:~# lvmconfig --typeconfig full | grep issue_discards
        issue_discards=1

root@pve-test:~# grep issue_discards /etc/lvm/lvm.conf
        # Configuration option devices/issue_discards.
        issue_discards = 1
 
I have the same problem. What solution did you find, please?
I determined that it wasn't necessary to set this value. Most people use LVM thin provisioning.

To quote @fabian,
for PVE it doesn't make much difference - the option is for discarding segments when a regular LV is removed (and thus, space on a PV becomes "free" again). this rarely happens in PVE, unless you are using regular LVM and not LVM thin.

If you do use regular (non-thin) LVM, you can always edit /etc/lvm/lvm.conf and re-insert this line:

Code:
issue_discards = 1
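
For reference, the option lives in the devices { } section of lvm.conf, so the edit would look roughly like this:

Code:
devices {
        # Issue discards to PVs that are no longer used by an LV
        # (only takes effect if both the storage and the kernel support discards).
        issue_discards = 1
}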
 
The LVM volume is displayed with a question mark after updating, and the output of the pvdisplay command is:

Code:
root@pve2:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               278.86 GiB / not usable <1.28 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              71389
  Free PE               4095
  Allocated PE          67294
  PV UUID               L1v4sc-a1Bk-NNoI-KFb7-98jl-aB4P-HUHhAp

  "/dev/sdb" is a new physical volume of "44.88 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name
  PV Size               44.88 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               eQyJrR-qRck-55nT-FC3i-Thd9-M3au-amt55M
 
What "pvdisplay" shows is a physical volumes and doesn't show whether or not you're using volume thin provisioning.

What does "lvdisplay" show? This will show if you're using lvm thin provisioning or not
 
I may have the wrong discussion thread, but my problem is that all my VMs are on an LVM volume which is connected over iSCSI. After the upgrade from 7 to 8, my LVM storage is marked with a question mark and is no longer accessible, and my VMs no longer start either. Here is the error message when I try to start a VM:

Code:
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Volume group "bnetd-stockage" not found
TASK ERROR: can't activate LV '/dev/bnetd-stockage/vm-250-disk-0': Cannot process volume group bnetd-stockage

I have nearly 110 VMs running in my environment and they're mounted on this volume.
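
In case it helps anyone reading along, a minimal checklist for this kind of symptom (assuming plain LVM on top of iSCSI; bnetd-stockage is the VG name from the error above) would be something like:

Bash:
iscsiadm -m session            # is the iSCSI session still logged in?
pvscan --cache                 # rescan block devices for LVM metadata
vgscan                         # does the VG show up again?
vgchange -ay bnetd-stockage    # try to activate the VG by hand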
 
create storage failed: command '/sbin/pvs --separator : --noheadings --units k --unbuffered --nosuffix --options pv_name,pv_size,vg_name,pv_uuid /dev/disk/by-id/scsi-36006016024984e00cf23665e71e47afa' failed: exit code 5 (500)

This happens when I want to create a new LVM storage.
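
Re-running the exact pvs call from that error in a shell (the device path is copied from the message above) should print the underlying LVM error rather than just the exit code:

Bash:
/sbin/pvs --separator : --noheadings --units k --unbuffered --nosuffix \
    --options pv_name,pv_size,vg_name,pv_uuid \
    /dev/disk/by-id/scsi-36006016024984e00cf23665e71e47afa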
 
yes, it defaults to 0. I am not sure whether the Debian package had a different default at some point in the past, that would be possible. for PVE it doesn't make much difference - the option is for discarding segments when a regular LV is removed (and thus, space on a PV becomes "free" again). this rarely happens in PVE, unless you are using regular LVM and not LVM thin.
Just curious... why isn't it necessary to set issue_discards=1 when using PVE with LVM thin provisioning? Surely this is still needed when a thin volume is deleted (as there's always stuff left lying around in the volume, etc.)?
 
issue_discards just controls whether the PV extents are discarded when an LV is removed. with a thin pool, the full size of the pool itself is allocated on the VG/PV and belongs to the pool, and removing a thin LV within the pool has no effect outside of the pool, no physical space becomes free as a result, so no discard can happen.

for thin pool there is a separate mechanism for preventing data leakage of previously used extents - since all the extents belong to the thin pool, whenever an extent is first assigned for usage by a thin LV, it is written over with zeroes. you can see this in effect when doing a live migration with a big thin volume - there will be a pretty hefty I/O spike at the start of the storage migration when the full image is overwritten on the target side. this behaviour is controlled by "thin_pool_zero" in /etc/lvm/lvm.conf. additionally, you can also control what happens with discard/trim on a thin LV (e.g., like a guest OS might issue on a guest volume) - this is "thin_pool_discards". the default here is to mark discarded extents for re-use in the pool *and* also issue a discard on the underlying device.
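
to make that concrete, both knobs live in the allocation { } section of /etc/lvm/lvm.conf (the values below should be the upstream defaults):

Code:
allocation {
        # zero newly provisioned thin pool chunks before first use
        thin_pool_zero = 1
        # "passdown" marks discarded extents reusable in the pool *and*
        # passes the discard down to the underlying device
        # (alternatives: "ignore", "nopassdown")
        thin_pool_discards = "passdown"
}

you can check what is currently in effect with "lvmconfig --typeconfig full allocation/thin_pool_zero allocation/thin_pool_discards".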

"man lvmthin" contains a lot more information about thin pools and the various tuning options. just be careful - it's easy to misconfigure a pool and lose all your data!