Is there a way to reduce the space taken by snapshots?

euler001

New Member
Jan 21, 2020
Hi,

We have a Proxmox lab server showing the alarm "Sum of all thin volume sizes (7.83 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (7.27 TiB)!".
The output of two `lvs` commands is below. It looks like those snapshots take too much disk space. Is there a way to reduce the snapshot space, or to better manage snapshots?

thanks,



root@fqa:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 7.13t 2.80 0.77
root pve -wi-ao---- 96.00g
snap_vm-101-disk-0_stress_b240 pve Vri---tz-k 296.00m data
snap_vm-101-disk-1_stress_b240 pve Vri---tz-k 250.00g data
snap_vm-102-disk-0_lab_b242 pve Vri---tz-k 296.00m data
snap_vm-102-disk-1_lab_b242 pve Vri---tz-k 250.00g data
snap_vm-103-disk-0_S20201109 pve Vri---tz-k 150.00g data vm-103-disk-0
snap_vm-105-disk-0_Mantis671538_2 pve Vri---tz-k 296.00m data vm-105-disk-0
snap_vm-105-disk-0_v64_b343 pve Vri---tz-k 296.00m data
snap_vm-105-disk-0_v6_b228 pve Vri---tz-k 296.00m data
snap_vm-105-disk-1_Mantis671538_2 pve Vri---tz-k 250.00g data vm-105-disk-1
snap_vm-105-disk-1_v64_b343 pve Vri---tz-k 250.00g data
snap_vm-105-disk-1_v6_b228 pve Vri---tz-k 250.00g data
snap_vm-106-disk-0_Mantis671538_2 pve Vri---tz-k 296.00m data vm-106-disk-0
snap_vm-106-disk-0_v64_b343 pve Vri---tz-k 296.00m data
snap_vm-106-disk-0_v6_b228 pve Vri---tz-k 296.00m data
snap_vm-106-disk-1_Mantis671538_2 pve Vri---tz-k 250.00g data vm-106-disk-1
snap_vm-106-disk-1_v64_b343 pve Vri---tz-k 250.00g data
snap_vm-106-disk-1_v6_b228 pve Vri---tz-k 250.00g data
snap_vm-108-disk-0_v60_b228 pve Vri---tz-k 296.00m data vm-108-disk-0
snap_vm-108-disk-1_v60_b228 pve Vri---tz-k 250.00g data vm-108-disk-1
snap_vm-109-disk-0_v60_b228 pve Vri---tz-k 296.00m data vm-109-disk-0
snap_vm-109-disk-0_v60_b243 pve Vri---tz-k 296.00m data vm-109-disk-0
snap_vm-109-disk-1_v60_b228 pve Vri---tz-k 250.00g data vm-109-disk-1
snap_vm-109-disk-1_v60_b243 pve Vri---tz-k 250.00g data vm-109-disk-1
snap_vm-110-disk-0_CustomerConfig_v53b264 pve Vri---tz-k 296.00m data vm-110-disk-0
snap_vm-110-disk-0_v53b264blank pve Vri---tz-k 296.00m data vm-110-disk-0
snap_vm-110-disk-1_CustomerConfig_v53b264 pve Vri---tz-k 250.00g data vm-110-disk-1
snap_vm-110-disk-1_v53b264blank pve Vri---tz-k 250.00g data vm-110-disk-1
swap pve -wi-ao---- 8.00g
vm-101-disk-0 pve Vwi-aotz-- 296.00m data snap_vm-101-disk-0_stress_b240 99.32
vm-101-disk-1 pve Vwi-aotz-- 250.00g data snap_vm-101-disk-1_stress_b240 2.67
vm-101-state-stress_b240 pve Vwi-a-tz-- 32.49g data 11.84
vm-102-disk-0 pve Vwi-aotz-- 296.00m data snap_vm-102-disk-0_lab_b242 99.32
vm-102-disk-1 pve Vwi-aotz-- 250.00g data snap_vm-102-disk-1_lab_b242 10.76
vm-102-state-lab_b242 pve Vwi-a-tz-- 32.49g data 25.62
vm-103-disk-0 pve Vwi-aotz-- 150.00g data 16.06
vm-103-state-S20201109 pve Vwi-a-tz-- 32.49g data 48.10
vm-104-disk-0 pve Vwi-aotz-- 320.00g data 10.89
vm-105-disk-0 pve Vwi-a-tz-- 296.00m data snap_vm-105-disk-0_v64_b343 99.32
vm-105-disk-1 pve Vwi-a-tz-- 250.00g data snap_vm-105-disk-1_v64_b343 2.37
vm-105-state-Mantis671538_2 pve Vwi-a-tz-- 32.49g data 4.94
vm-105-state-v64_b343 pve Vwi-a-tz-- 32.49g data 4.37
vm-105-state-v6_b228 pve Vwi-a-tz-- 32.49g data 3.67
vm-106-disk-0 pve Vwi-a-tz-- 296.00m data snap_vm-106-disk-0_v64_b343 99.32
vm-106-disk-1 pve Vwi-a-tz-- 250.00g data snap_vm-106-disk-1_v64_b343 2.36
vm-106-state-Mantis671538_2 pve Vwi-a-tz-- 32.49g data 4.29
vm-106-state-v64_b343 pve Vwi-a-tz-- 32.49g data 4.33
vm-106-state-v6_b228 pve Vwi-a-tz-- 32.49g data 3.53
vm-107-disk-0 pve Vwi-a-tz-- 296.00m data 99.32
vm-107-disk-1 pve Vwi-a-tz-- 250.00g data 2.44
vm-108-disk-0 pve Vwi-a-tz-- 296.00m data 60.07
vm-108-disk-1 pve Vwi-a-tz-- 250.00g data 2.25
vm-108-state-v60_b228 pve Vwi-a-tz-- 32.49g data 3.58
vm-109-disk-0 pve Vwi-a-tz-- 296.00m data 99.32
vm-109-disk-1 pve Vwi-a-tz-- 250.00g data 2.37
vm-109-state-v60_b228 pve Vwi-a-tz-- 32.49g data 4.34
vm-109-state-v60_b243 pve Vwi-a-tz-- 32.49g data 6.03
vm-110-disk-0 pve Vwi-aotz-- 296.00m data 48.29
vm-110-disk-1 pve Vwi-aotz-- 250.00g data 2.87
vm-110-state-CustomerConfig_v53b264 pve Vwi-a-tz-- 32.49g data 8.40
vm-110-state-v53b264blank pve Vwi-a-tz-- 32.49g data 3.18
vm-200-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-200-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-201-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-201-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-202-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-202-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-203-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-203-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-204-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-204-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-205-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-205-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-206-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-206-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-207-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-207-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-208-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-208-disk-1 pve Vwi-a-tz-- 250.00g data 0.00
vm-209-disk-0 pve Vwi-a-tz-- 296.00m data 49.32
vm-209-disk-1 pve Vwi-a-tz-- 250.00g data 0.00

root@fvcqa:~# lvs --units g --nosuffix --separator \| | cut -d \| -f 4,5 | grep "|data" | cut -f 1 -d \| | awk '{s+=$1} END {print s}'
8833.85
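For reference, the pipeline above extracts the size column of every thin volume in the `data` pool and sums it with `awk`. The summation step can be sketched in isolation on hypothetical sample data (the LV names and sizes below are made up for illustration):

```shell
# Sum the sizes (in GiB) from a sample of pre-filtered "name|size" lines.
printf 'vm-101-disk-1|250.00\nvm-102-disk-1|250.00\nvm-103-disk-0|150.00\n' \
  | cut -d '|' -f 2 \
  | awk '{s+=$1} END {print s}'
# prints 650
```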
 
Hi,

I am wondering if there is a way to reduce the snapshot space.
A snapshot needs the space it needs. The only way to get more space is to delete snapshots.
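On a Proxmox host, snapshots can be listed and deleted per VM with the `qm` CLI; deleting a snapshot also removes its backing thin-LVM volumes. A sketch, assuming VM 105 and the snapshot name `v6_b228` from the listing above:

```shell
# List the snapshots of VM 105, then delete one that is no longer needed.
qm listsnapshot 105
qm delsnapshot 105 v6_b228

# Confirm the matching snap_vm-105-* and vm-105-state-v6_b228 LVs are gone.
lvs | grep -- 105
```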
 
Thanks. I actually have a feeling this alarm can be ignored. I use thin LVM, not thick LVM. From the GUI, I can see that real space usage is less than 5%. Even though the sum of the thin volumes is larger than the physical volume, I am still able to create and run new VMs. So my understanding is that this alarm can be ignored. Am I right?
 
Yes, with thin LVM only the actual space usage matters. The warning means exactly what it says: the sum of all the volume sizes is more than your total capacity, so if you eventually write a lot of data to your disks, usage could exceed it. You can always add more physical disks to your volume group, though.
With thin provisioning, only the actually used space is occupied.
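To see how much of the pool is actually in use (the number that matters with thin provisioning), you can query the pool directly; this is a read-only check:

```shell
# Show allocated data and metadata percentages of the thin pool pve/data.
lvs -o lv_name,lv_size,data_percent,metadata_percent pve/data
```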
 
You can always add more physical disks to your volume group, though.
With thin provisioning, only the actually used space is occupied.
To clarify this: don't extend your LVM-thin pool. The metadata LV will not be resized, and you will end up with a full metadata LV and a broken pool. It is better to create a new storage and move some of the VMs/CTs.
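A sketch of that approach, assuming a new disk at /dev/sdb (the device name, VG name, pool size, storage ID, and VM/disk IDs are all assumptions):

```shell
# Create a second thin pool on a new physical disk.
pvcreate /dev/sdb
vgcreate pve2 /dev/sdb
lvcreate -L 500G -T pve2/data2

# Register it as an LVM-thin storage in Proxmox.
pvesm add lvmthin data2 --vgname pve2 --thinpool data2

# Move one VM disk to the new storage, freeing space in pve/data.
qm move_disk 105 scsi1 data2
```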
 
To clarify this: don't extend your LVM-thin pool. The metadata LV will not be resized, and you will end up with a full metadata LV and a broken pool. It is better to create a new storage and move some of the VMs/CTs.
Really?
Why does this happen with LVM-thin pool?
 
Why does this happen with LVM-thin pool?
LVM normally allocates blocks when you create a volume. LVM-thin pools instead allocate blocks when they are written to. This behavior is called thin provisioning, because volumes can be much larger than the physically available space. Writes are distributed across the data pool, so each written block needs to be kept in an "index"; that's where the metadata pool comes into play. As more data is written, this index grows. When a pool is extended, the size of the metadata pool can't be increased, and the space for the metadata becomes too small to hold the index.
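The metadata volume is a hidden LV inside the pool, so it doesn't appear in a plain `lvs` listing; its size and fill level can be inspected with `lvs -a`:

```shell
# -a includes hidden LVs such as [data_tmeta], the pool's metadata index.
lvs -a -o lv_name,lv_size,metadata_percent pve
```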
 
