Understanding how snapshots work with LVM thin volumes

BigBob

Jan 9, 2019
Hello,

I work with Proxmox 5.2 and LVM thin volumes. I created a couple of snapshots of a Windows test VM and am wondering how this works.

A normal thin snapshot under LVM2 has the same virtual size as its origin thin volume. In my case the thin LV is 32 GiB, so the snapshot is also 32 GiB.

But the Proxmox snapshot feature also creates another thin volume with a size of 8.49 GiB. What is this volume for?
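For comparison, this is how a plain thin snapshot can be created manually with LVM2 (a sketch only; the VG/LV names are taken from my output below, the snapshot name `snap_manual` is made up, and the commands need root on a host with that thin pool):

```shell
# Thin snapshot of the 32 GiB thin LV: no size argument is given,
# because a thin snapshot lives in the same pool as its origin and
# inherits the origin's virtual size (32 GiB here).
lvcreate -s -n snap_manual pve/vm-100-disk-0

# The snapshot reports the same 32 GiB LSize as vm-100-disk-0;
# only blocks that later diverge consume extra space in the "data" pool.
lvs pve/snap_manual
```

So from plain LVM2 I would only expect the 32 GiB snapshot volume, not a second, smaller one.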

Here is an example of the configuration:
Code:
--- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                pfsrcH-edM9-tFPo-enss-ieLA-7cRk-eQo9iE
  LV Write Access        read/write
  LV Creation host, time pve1, 2019-01-07 17:32:53 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Mapped size            68.20%
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/pve/vm-100-state-test
  LV Name                vm-100-state-test
  VG Name                pve
  LV UUID                nlLYpH-dxbP-vXX6-SQVc-N7zB-5ngF-KY6ny3
  LV Write Access        read/write
  LV Creation host, time pve1, 2019-01-08 12:22:58 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                8.49 GiB
  Mapped size            35.76%
  Current LE             2173
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/pve/snap_vm-100-disk-0_test
  LV Name                snap_vm-100-disk-0_test
  VG Name                pve
  LV UUID                sDKTHl-eGol-yZI9-DNME-Nl5A-RHaP-KfuYdg
  LV Write Access        read only
  LV Creation host, time pve1, 2019-01-08 12:23:18 +0100
  LV Pool name           data
  LV Thin origin name    vm-100-disk-0
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

Code:
root@pve1:~# lvs
  LV                       VG  Attr       LSize   Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  SNAP                     pve Vwi---tz-k  10.00g data tv_test
  SNAP2                    pve Vwi---tz-k  10.00g data tv_test
  data                     pve twi-aotz-- 415.27g                    14.17  0.81
  root                     pve -wi-ao----  15.00g
  snap_vm-100-disk-0_test  pve Vri---tz-k  32.00g data vm-100-disk-0
  snap_vm-100-disk-0_test2 pve Vri---tz-k  32.00g data vm-100-disk-0
  snap_vm-100-disk-0_test3 pve Vri---tz-k  32.00g data vm-100-disk-0
  swap                     pve -wi-ao----  10.00g
  tv_test                  pve Vwi-a-tz--  10.00g data               87.12
  vm-100-disk-0            pve Vwi-a-tz--  32.00g data               87.41
  vm-100-disk-1            pve Vwi-a-tz--  10.00g data               84.88
  vm-100-state-test        pve Vwi-a-tz--   8.49g data               35.76
  vm-100-state-test2       pve Vwi-a-tz--   8.49g data               46.08
  vm-100-state-test3       pve Vwi-a-tz--   8.49g data               46.97

Can anyone explain why Proxmox creates two thin volumes of different sizes?