qmrestore bwlimit problem

I have some servers with Proxmox, and when I use qmrestore with the -bwlimit option, there is a strange problem.
In version 6.1-7 it works correctly:

Bash:
restore vma archive: lzop -d -c /mnt/pve/nfs/vdska_templates/vzdump-qemu-130-2020_01_01-01_01_01.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp8396.fifo - /var/tmp/vzdumptmp8396
CFG: size: 485 name: qemu-server.conf
DEV: dev_id=1 size: 21474836480 devname: drive-scsi0
CTIME: Mon Jul  6 00:15:05 2020
rate limit for storage ssd-dmitrov2: 20000 KiB/s
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "vm-4026-disk-0" created.
  WARNING: Sum of all thin volume sizes (910.00 GiB) exceeds the size of thin pool local-ssd2/data and the size of whole volume group (858.30 GiB).
new volume ID is 'ssd-dmitrov2:vm-4026-disk-0'
map 'drive-scsi0' to '/dev/local-ssd2/vm-4026-disk-0' (write zeros = 0)
progress 1% (read 214761472 bytes, duration 7 sec)

And in version 6.2-4 it doesn't work:
Bash:
restore vma archive: lzop -d -c /mnt/pve/nfs/vdska_templates/vzdump-qemu-120-2020_01_01-01_01_01.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp14798.fifo - /var/tmp/vzdumptmp14798
CFG: size: 466 name: qemu-server.conf
DEV: dev_id=1 size: 21474836480 devname: drive-scsi0
CTIME: Mon Jul  6 11:55:07 2020
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "vm-4020-disk-0" created.
  WARNING: Sum of all thin volume sizes (645.00 GiB) exceeds the size of thin pool local-ssd/data and the size of whole volume group (<465.76 GiB).
new volume ID is 'ssd-gagarin:vm-4020-disk-0'
map 'drive-scsi0' to '/dev/local-ssd/vm-4020-disk-0' (write zeros = 0)
progress 1% (read 214761472 bytes, duration 1 sec)
progress 2% (read 429522944 bytes, duration 4 sec)

I use the same command on both versions to restore the backup (the limit is given in KiB/s):
Bash:
qmrestore /mnt/pve/nfs/vdska_templates/vzdump-qemu-120-2020_01_01-01_01_01.vma.lzo 150 -bwlimit 20000 -storage ssd-kolomna

I tried adding
bwlimit: restore=20000,default=20000
to /etc/pve/datacenter.cfg, but that doesn't work either.
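A per-storage limit can also be configured; here is a sketch (this assumes the per-storage bwlimit option is supported by this PVE version, with the storage name taken from the command above):
Bash:
pvesm set ssd-kolomna --bwlimit restore=20000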
 
Could you please post the following?
Code:
pveversion -v
cat /etc/pve/datacenter.cfg
cat /etc/pve/storage.cfg
Maybe extract the lzo manually and then
Code:
vma config <your_extracted_.vma_file>
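For example, a minimal sketch using the paths from the log above (the temporary file name is just an example):
Code:
lzop -d -c /mnt/pve/nfs/vdska_templates/vzdump-qemu-120-2020_01_01-01_01_01.vma.lzo > /tmp/vzdump-qemu-120.vma
vma config /tmp/vzdump-qemu-120.vma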
 
Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

Code:
# cat /etc/pve/datacenter.cfg
bwlimit: restore=20000,default=20000
keyboard: en-us

Code:
# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content rootdir,vztmpl,backup,iso,images

lvmthin: ssd-gagarin
    thinpool data
    vgname local-ssd
    content rootdir,images
    nodes gagarin

dir: backup-gagarin
    path /root/backup
    content vztmpl,rootdir,snippets,images,iso,backup
    maxfiles 1
    nodes gagarin
    shared 0
 
Is this the complete storage.cfg?

1) ssd-kolomna is not in storage.cfg
2) Your file /mnt/pve/nfs/vdska_templates/vzdump-qemu-120-2020_01_01-01_01_01.vma.lzo does not appear to be on a PVE-managed storage?
 
Yes, on the first server (with the old Proxmox) it's OK:
rate limit for storage ssd-dmitrov2: 20000 KiB/s
Code:
# dpkg -l | grep proxmox-backup
ii  libproxmox-backup-qemu0              0.1.3-1                      amd64        Proxmox Backup Server client library for QEMU
 
I think this really is a bug. Thank you!

You can add yourself to the CC list of the Bugzilla entry if you want to receive updates.
 
Hi, I just encountered this issue when restoring a VM from PBS. I tried to limit it to 1 MiB/s (--bwlimit=1024), but got a speed of over 150 MB/s:

Code:
# qmrestore backup:backup/vm/134/2021-04-05T05:48:05Z 135 --storage rpool1 --bwlimit 1024
new volume ID is 'rpool1:vm-135-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository admin@pbs@x.x.x.x:backup vm/134/2021-04-05T05:48:05Z drive-scsi0.img.fidx /dev/zvol/rpool1/vm-135-disk-0 --verbose --format raw --skip-zero
connecting to repository 'admin@pbs@x.x.x.x:backup'
open block backend for target '/dev/zvol/rpool1/vm-135-disk-0'
starting to restore snapshot 'vm/134/2021-04-05T05:48:05Z'
download and verify backup index
progress 1% (read 432013312 bytes, zeroes = 2% (12582912 bytes), duration 4 sec)
progress 2% (read 859832320 bytes, zeroes = 1% (12582912 bytes), duration 8 sec)
...
progress 99% (read 42521853952 bytes, zeroes = 39% (16710107136 bytes), duration 263 sec)
progress 100% (read 42949672960 bytes, zeroes = 39% (16793993216 bytes), duration 268 sec)
restore image complete (bytes=42949672960, duration=268.15s, speed=152.75MB/s)
rescan volumes...
TASK OK
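For scale: at 1024 KiB/s, the 42949672960 bytes read here (= 41943040 KiB) should take about 41943040 / 1024 ≈ 40960 seconds, i.e. more than 11 hours, yet the restore finished in 268 seconds, so the limit is clearly not being applied.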
 
I can confirm the same bug.

Code:
root@max:~# qmrestore --bwlimit 10240 --force true melania:backup/vm/195/2021-04-08T14:34:02Z 195
  Logical volume "vm-195-cloudinit" successfully removed
  Logical volume "vm-195-disk-0" successfully removed
  Logical volume "vm-195-cloudinit" created.
new volume ID is 'nvme-ssd1:vm-195-cloudinit'
  Logical volume "vm-195-disk-0" created.
new volume ID is 'nvme-ssd1:vm-195-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository melania@pbs@localhost:melania vm/195/2021-04-08T14:34:02Z drive-scsi0.img.fidx /dev/vgnvme1/vm-195-disk-0 --verbose --format raw --skip-zero
connecting to repository 'melania@pbs@localhost:melania'
open block backend for target '/dev/vgnvme1/vm-195-disk-0'
starting to restore snapshot 'vm/195/2021-04-08T14:34:02Z'
download and verify backup index
progress 1% (read 272629760 bytes, zeroes = 56% (155189248 bytes), duration 0 sec)
progress 2% (read 545259520 bytes, zeroes = 28% (155189248 bytes), duration 3 sec)
......
progress 99% (read 26789019648 bytes, zeroes = 51% (13832814592 bytes), duration 350 sec)
progress 100% (read 27057455104 bytes, zeroes = 52% (14097055744 bytes), duration 350 sec)
restore image complete (bytes=27057455104, duration=350.72s, speed=73.58MB/s)
rescan volumes...

Note that the restore speed is low (73.58 MB/s) only because the target storage is on an HDD, on a Hetzner auction devel server.
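The same arithmetic applies here: at the requested 10240 KiB/s, reading 27057455104 bytes (= 26423296 KiB) should take about 26423296 / 10240 ≈ 2580 seconds rather than 350, so the limit is ignored in this case as well.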
 