[SOLVED] HELP! Cancel lvresize ? --> just wait ....

wbk

Renowned Member
Oct 27, 2019
Hi all,

Sorry for crossposting

My main container (mail, messaging, calendar, phone backup) has been offline for some 10 hours due to resizing its LV from 3.6 TB to 0.4 TB.

This is an LV on a thin pool; it has previously been filled to about 3 TB before I moved most data to other storage. In effect:
  • < 400 GB of extents holding real data
  • ~2600 GB of extents whose data was since removed (and trimmed with fstrim)
  • ~600 GB of extents never used
I think shrinking the LV by this much means that nearly all extents need to be remapped. I only realized this, and its implications for the duration, after the transaction had not completed within half an hour.
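The numbers in the log below can be sanity-checked: 400 GiB at 4 KiB per block is exactly the 104857600-block target that resize2fs reports. A minimal sketch (the arithmetic only; the commented resize2fs call uses the device path from my log purely as an example):

```shell
# Sanity-check the resize2fs target size: 400 GiB expressed in 4 KiB blocks.
target_gib=400
block_size=4096
target_blocks=$(( target_gib * 1024 * 1024 * 1024 / block_size ))
echo "target blocks: $target_blocks"

# Every in-use block currently mapped above this boundary must be relocated
# below it before the filesystem can be truncated; that is the slow part.
# resize2fs can estimate the minimum viable size beforehand (requires a clean,
# unmounted filesystem; device path shown only as an example):
#   resize2fs -P /dev/mapper/allerlei-vm--104--disk--0
```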

Has anyone had experience with such a case and hurriedly cancelled the transaction with non-catastrophic results?

I have a backup from ~12 hours before the transaction started, mostly covering nighttime, which saw few interactions with the system. Restoring it will take quite a while, so I don't know whether it would be faster to restore the backup and retry the resize a bit less aggressively, or to let the current resize run its course.

Code:
# lvresize -vr -L 400G /dev/mapper/allerlei-vm--104--disk--0  
  Executing: /sbin/fsadm --verbose check /dev/allerlei/vm-104-disk-0
fsadm: "ext4" filesystem found on "/dev/mapper/allerlei-vm--104--disk--0".
fsadm: Executing fsck -p /dev/mapper/allerlei-vm--104--disk--0
fsck from util-linux 2.38.1
/dev/mapper/allerlei-vm--104--disk--0: clean, 5204892/244908032 files, 113766239/979632128 blocks
  Executing: /sbin/fsadm --verbose resize /dev/allerlei/vm-104-disk-0 419430400K
fsadm: "ext4" filesystem found on "/dev/mapper/allerlei-vm--104--disk--0".
fsadm: Device "/dev/mapper/allerlei-vm--104--disk--0" size is 4012573196288 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/allerlei-vm--104--disk--0"
fsadm: Resizing filesystem on device "/dev/mapper/allerlei-vm--104--disk--0" to 429496729600 bytes (979632128 -> 104857600 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/allerlei-vm--104--disk--0 104857600
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/mapper/allerlei-vm--104--disk--0 to 104857600 (4k) blocks.

The transaction locks LVM, so I can't get current details at the moment.
 
Solved! "DELLE", as I recall the solution is called in German :p

After waiting for half a day and then fretting for hours.

With the help of an AI, I got some idea of the state of the process, but things were not actually moving much. Until they did; then, all of a sudden, the operation was over.

Out of an urge to micromanage, I had `iostat -h 10` running for the last hour or so and fed its output to the AI to have it identify the different stages of the process. For the benefit of others running into the same issue, I'll paste the output here.

[Graph: iostat read/write throughput and iowait over the last hour of the resize, showing four distinct phases]
Four distinct phases are visible:

Phase 1 (samples 1–72, ~12 min): Pure metadata writes at ~10 MB/s, near-zero reads, iowait steady ~41%. resize2fs scanning the filesystem and writing journal/bitmap updates.
Phase 2 (samples 73–136, ~11 min): Nearly idle — writes drop to ~1 MB/s, iowait collapses to ~1–3%. resize2fs building its relocation plan internally, minimal disk activity.
Phase 3 (samples 137–230, ~16 min): Active block relocation — writes jump to ~35 MB/s sustained, reads appear for the first time (~5–15 MB/s), iowait climbs to 40–65%. This is where the actual data movement happened.
Phase 4 (samples 231–236, ~1 min): Completion — reads suddenly dominate over writes, then both drop. resize2fs verifying the relocated blocks and updating the superblock.
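The phase boundaries above can be picked out mechanically from the samples. A rough classifier sketch; the thresholds (in kB/s) are illustrative values eyeballed from the phase description, not derived from resize2fs internals, and the input lines are made-up samples in the shape sample_no read_kBps write_kBps:

```shell
# Classify 10-second iostat samples into the four phases described above.
printf '%s\n' \
  '1 0 10240' \
  '80 0 1024' \
  '150 10240 35840' \
  '233 20480 5120' |
awk '{
  r = $2; w = $3
  if      (w > 20000 && r > 1000) phase = "3: block relocation"
  else if (r > w)                 phase = "4: verification"
  else if (w > 5000)              phase = "1: metadata writes"
  else                            phase = "2: planning, near idle"
  printf "sample %s -> phase %s\n", $1, phase
}'
```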

with:
  • sdc: root with ZFS on (slow) SSD (Samsung 860 QVO)
  • sdb: LVM thin pool on 8 TB HDD (WDC WD80EDAZ)
In addition to the graph and the generated commentary: the complete transaction took about 12 hours, during which sdb and sdc were writing constantly at 4-20 MB/s. Afterwards, `iostat`'s summary (statistics since boot) read:
Code:
# iostat -h 10 /dev/sd[abc]
Linux 6.8.12-19-pve (verjaardag)        03/14/2026      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
    4.8%    0.0%    3.5%   13.5%    0.0%   78.1%

      tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd Device
     4.99        74.0k       577.6k         0.0k      53.1G     414.7G       0.0k sda
    42.95         5.7M       627.0k         0.0k       4.1T     450.2G       0.0k sdb
   178.76         2.5M         1.6M         0.0k       1.8T       1.2T       0.0k sdc

sda was added to offload data from sdb to make the shrink possible; sdc has seen relatively few writes besides those caused by shrinking the volume.

It may well be that this action has killed my SSD: it had negligible wearout previously, and is at 99% now...

Code:
# lvresize -vr -L 400G /dev/mapper/allerlei-vm--104--disk--0
  Executing: /sbin/fsadm --verbose check /dev/allerlei/vm-104-disk-0
fsadm: "ext4" filesystem found on "/dev/mapper/allerlei-vm--104--disk--0".
fsadm: Executing fsck -p /dev/mapper/allerlei-vm--104--disk--0
fsck from util-linux 2.38.1
/dev/mapper/allerlei-vm--104--disk--0: clean, 5204892/244908032 files, 113766239/979632128 blocks
  Executing: /sbin/fsadm --verbose resize /dev/allerlei/vm-104-disk-0 419430400K
fsadm: "ext4" filesystem found on "/dev/mapper/allerlei-vm--104--disk--0".
fsadm: Device "/dev/mapper/allerlei-vm--104--disk--0" size is 4012573196288 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/allerlei-vm--104--disk--0"
fsadm: Resizing filesystem on device "/dev/mapper/allerlei-vm--104--disk--0" to 429496729600 bytes (979632128 -> 104857600 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/allerlei-vm--104--disk--0 104857600
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/mapper/allerlei-vm--104--disk--0 to 104857600 (4k) blocks.
The filesystem on /dev/mapper/allerlei-vm--104--disk--0 is now 104857600 (4k) blocks long.

  Reducing logical volume allerlei/vm-104-disk-0 to 400.00 GiB
  Size of logical volume allerlei/vm-104-disk-0 changed from <3.65 TiB (956672 extents) to 400.00 GiB (102400 extents).
  Archiving volume group "allerlei" metadata (seqno 284).
  Loading table for allerlei-dunnedata_tdata_corig (252:4).
  Suppressed allerlei-dunnedata_tdata_corig (252:4) identical table reload.
  Loading table for allerlei-dunnecache_cvol (252:1).
  Suppressed allerlei-dunnecache_cvol (252:1) identical table reload.
  Loading table for allerlei-dunnecache_cvol-cdata (252:2).
  Suppressed allerlei-dunnecache_cvol-cdata (252:2) identical table reload.
  Loading table for allerlei-dunnecache_cvol-cmeta (252:3).
  Suppressed allerlei-dunnecache_cvol-cmeta (252:3) identical table reload.
  Loading table for allerlei-dunnedata_tdata (252:5).
  Suppressed allerlei-dunnedata_tdata (252:5) identical table reload.
  Loading table for allerlei-dunnedata_tmeta (252:0).
  Suppressed allerlei-dunnedata_tmeta (252:0) identical table reload.
  Loading table for allerlei-dunnedata-tpool (252:6).
  Suppressed allerlei-dunnedata-tpool (252:6) identical table reload.
  Loading table for allerlei-vm--104--disk--0 (252:8).
  Not monitoring allerlei/dunnedata with libdevmapper-event-lvm2thin.so
  Unmonitored LVM-8tJDvh9EYY4EM3ShrQPIqeEmeiR5bGvFH03GcrRybkkNwye0HZsHMVzTq3SzvQpy-tpool for events
  Suspending allerlei-vm--104--disk--0 (252:8)
  Suspending allerlei-dunnedata-tpool (252:6)
  Suspending allerlei-dunnedata_tdata (252:5)
  Suspending allerlei-dunnedata_tmeta (252:0)
  Suspending allerlei-dunnedata_tdata_corig (252:4)
  Suspending allerlei-dunnecache_cvol-cdata (252:2)
  Suspending allerlei-dunnecache_cvol-cmeta (252:3)
  Suspending allerlei-dunnecache_cvol (252:1)
  Loading table for allerlei-dunnedata_tdata_corig (252:4).
  Suppressed allerlei-dunnedata_tdata_corig (252:4) identical table reload.
  Loading table for allerlei-dunnecache_cvol (252:1).
  Suppressed allerlei-dunnecache_cvol (252:1) identical table reload.
  Loading table for allerlei-dunnecache_cvol-cdata (252:2).
  Suppressed allerlei-dunnecache_cvol-cdata (252:2) identical table reload.
  Loading table for allerlei-dunnecache_cvol-cmeta (252:3).
  Suppressed allerlei-dunnecache_cvol-cmeta (252:3) identical table reload.
  Loading table for allerlei-dunnedata_tdata (252:5).
  Suppressed allerlei-dunnedata_tdata (252:5) identical table reload.
  Loading table for allerlei-dunnedata_tmeta (252:0).
  Suppressed allerlei-dunnedata_tmeta (252:0) identical table reload.
  Loading table for allerlei-dunnedata-tpool (252:6).
  Suppressed allerlei-dunnedata-tpool (252:6) identical table reload.
  Resuming allerlei-dunnecache_cvol (252:1).
  Resuming allerlei-dunnedata_tdata_corig (252:4).
  Resuming allerlei-dunnecache_cvol-cdata (252:2).
  Resuming allerlei-dunnecache_cvol-cmeta (252:3).
  Resuming allerlei-dunnedata_tdata (252:5).
  Resuming allerlei-dunnedata_tmeta (252:0).
  Resuming allerlei-dunnedata-tpool (252:6).
  Resuming allerlei-vm--104--disk--0 (252:8).
  Monitored LVM-8tJDvh9EYY4EM3ShrQPIqeEmeiR5bGvFH03GcrRybkkNwye0HZsHMVzTq3SzvQpy-tpool for events
  Logical volume allerlei/vm-104-disk-0 successfully resized.
  Creating volume group backup "/etc/lvm/backup/allerlei" (seqno 285).