Good morning!
I need some help calculating the maximum "size:" value I can assign in vzdump.conf to get my backups working again.
I know this could be resolved with an upgrade, but things have been running so well that I'm too chicken to risk it.
I'm having a problem with snapshot backups of a KVM guest "stalling" here:
INFO: adding '/mnt/vzsnap0/images/103/vm-103-disk-1.raw' to archive ('vm-disk-ide0.raw')
I'm pretty sure it's because the logical volume for the snapshot is running out of space. Traditionally, when this happens, I bump up the "size:" value in vzdump.conf and I'm good to go again. Currently the size: value is set to 7168 (bumped up from 6144), but it's still failing.
Combing through old threads, I see that the lvdisplay and vgdisplay output, as well as pvdisplay, are important pieces. I've included all three below, taken while the backup was still running. Can anyone help me determine the maximum size: value I can assign, and whether there are any negative impacts to increasing it?
Thanks!
The outputs:
vgdisplay
Code:
root@proxvs1:~# vgdisplay
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283759104: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283816448: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 0: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 4096: Input/output error
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 19706
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.64 TiB
PE Size 4.00 MiB
Total PE 428703
Alloc PE / Size 426400 / 1.63 TiB
Free PE / Size 2303 / 9.00 GiB
VG UUID lENCBy-879J-R6Np-s22Z-TI8O-HNp0-imdBbx
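If it helps frame the question: my understanding (which may well be wrong) is that the snapshot COW volume is carved out of the free space in the VG, so the ceiling on size: would be Free PE times PE Size from the vgdisplay above. A rough sketch of that arithmetic, assuming size: is in MiB:

```shell
# Assumption: vzdump's size: is in MiB and the snapshot LV can use at most
# the VG's remaining free space. Values copied from the vgdisplay output above.
free_pe=2303        # "Free PE / Size  2303 / 9.00 GiB"
pe_size_mib=4       # "PE Size  4.00 MiB"
max_size_mib=$(( free_pe * pe_size_mib ))
echo "$max_size_mib"    # 9212 MiB, i.e. the ~9 GiB of free space
```

So by that reading, 7168 should still fit, which is why I'm confused that it fails.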
lvdisplay
Code:
root@proxvs1:~# lvdisplay
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283759104: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283816448: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 0: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 4096: Input/output error
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID aJ0zA3-DKMb-zTWD-Oc3h-dmfW-fDxG-O8ei6S
LV Write Access read/write
LV Creation host, time proxmox, 2012-08-03 10:41:04 -0400
LV Status available
# open 1
LV Size 62.00 GiB
Current LE 15872
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID QuO2hP-m7HU-msVz-4L3j-R16l-QXjE-jhtCo4
LV Write Access read/write
LV Creation host, time proxmox, 2012-08-03 10:41:04 -0400
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/pve/data
LV Name data
VG Name pve
LV UUID L0BisB-W2tx-YzXe-0ztb-0prU-6jFA-7oqsnq
LV Write Access read/write
LV Creation host, time proxmox, 2012-08-03 10:41:04 -0400
LV snapshot status source of
vzsnap-proxvs1-0 [INACTIVE]
LV Status available
# open 1
LV Size 1.47 TiB
Current LE 384160
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/pve/vzsnap-proxvs1-0
LV Name vzsnap-proxvs1-0
VG Name pve
LV UUID uSXgKj-aqmY-X7ON-MyEM-gaRU-MN07-nLjywI
LV Write Access read/write
LV Creation host, time proxvs1, 2015-06-08 22:30:02 -0400
LV snapshot status INACTIVE destination for data
LV Status available
# open 1
LV Size 1.47 TiB
Current LE 384160
COW-table size 7.00 GiB
COW-table LE 1792
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
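As a sanity check on my reading of the lvdisplay output, the snapshot's COW-table LE times the PE size does seem to match the size: value I set (this is just my own arithmetic, not anything official):

```shell
# Assumption: COW-table LE * PE Size equals the size: value vzdump requested.
cow_table_le=1792   # "COW-table LE  1792" from the lvdisplay output above
pe_size_mib=4       # "PE Size  4.00 MiB" from vgdisplay
echo $(( cow_table_le * pe_size_mib ))   # 7168 -> matches size: 7168
```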
pvdisplay
Code:
root@proxvs1:~# pvdisplay
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283759104: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283816448: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 0: Input/output error
/dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 4096: Input/output error
--- Physical volume ---
PV Name /dev/sda2
VG Name pve
PV Size 1.64 TiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 428703
Free PE 2303
Allocated PE 426400
PV UUID yS3gGf-67jl-Gmvy-fSVS-8Df9-mDnf-M4glE
Thanks again for any help you can provide.