PVE: very slow VM restore

Andrew31
I have a fresh install on two SATA SSDs with ZFS RAID1, which performs very slowly on restore from a NAS (via NFS).
The NAS is on a 100 Mbps link; even on such a link the backup takes about 3 minutes, but the restore takes about 30 minutes (the VM is new, the archive is about 1.7 GB).
My observation was that during the restore the NAS reported only 1 MB/s upload, instead of the 10-12 MB/s it showed during the backup (downstream, of course).
Please advise which logs to check to investigate this behavior.
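To narrow down whether the NFS read side or the local ZFS write side is the bottleneck, here is a rough sketch of what could be run at the console (the archive path is the one from the backup log below; the test file name is just an example and should be deleted afterwards):

# Read the backup archive from the NFS mount to /dev/null: measures pure NFS read speed
dd if=/mnt/pve/synology/dump/vzdump-qemu-100-2022_09_10-20_39_28.vma.zst of=/dev/null bs=1M status=progress

# Write random data to a file on the local ZFS pool: measures local write speed only
# (/dev/urandom is used so ZFS compression does not skew the result; example file, remove afterwards)
dd if=/dev/urandom of=/rpool/data/ddtest.bin bs=1M count=1024 conv=fdatasync status=progress
rm /rpool/data/ddtest.bin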

pve version:
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-10
pve-kernel-helper: 7.2-10
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

backup:
INFO: starting new backup job: vzdump 100 --compress zstd --node pve-iacobas1 --storage synology --mode snapshot --notes-template '{{guestname}}' --remove 0
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2022-09-10 20:39:28
INFO: status = running
INFO: VM Name: elastic
INFO: include disk 'sata0' 'local-zfs:vm-100-disk-0' 100G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: pending configuration changes found (not included into backup)
INFO: creating vzdump archive '/mnt/pve/synology/dump/vzdump-qemu-100-2022_09_10-20_39_28.vma.zst'
INFO: started backup task '09738f8c-f666-455a-8a4f-dcf26bca2339'
INFO: resuming VM again
INFO: 10% (10.5 GiB of 100.0 GiB) in 3s, read: 3.5 GiB/s, write: 74.4 MiB/s
INFO: 23% (23.0 GiB of 100.0 GiB) in 6s, read: 4.2 GiB/s, write: 25.7 MiB/s
INFO: 33% (33.6 GiB of 100.0 GiB) in 10s, read: 2.6 GiB/s, write: 35.8 MiB/s
INFO: 47% (47.4 GiB of 100.0 GiB) in 13s, read: 4.6 GiB/s, write: 12.3 MiB/s
INFO: 50% (51.0 GiB of 100.0 GiB) in 16s, read: 1.2 GiB/s, write: 164.7 MiB/s
INFO: 52% (52.1 GiB of 100.0 GiB) in 19s, read: 368.5 MiB/s, write: 214.4 MiB/s
INFO: 62% (62.9 GiB of 100.0 GiB) in 22s, read: 3.6 GiB/s, write: 65.8 MiB/s
INFO: 63% (63.6 GiB of 100.0 GiB) in 25s, read: 240.5 MiB/s, write: 218.6 MiB/s
INFO: 64% (64.1 GiB of 100.0 GiB) in 28s, read: 180.0 MiB/s, write: 127.0 MiB/s
INFO: 65% (65.0 GiB of 100.0 GiB) in 33s, read: 175.4 MiB/s, write: 126.3 MiB/s
INFO: 66% (66.8 GiB of 100.0 GiB) in 37s, read: 459.5 MiB/s, write: 120.2 MiB/s
INFO: 77% (77.4 GiB of 100.0 GiB) in 40s, read: 3.5 GiB/s, write: 33.2 MiB/s
INFO: 88% (88.4 GiB of 100.0 GiB) in 43s, read: 3.6 GiB/s, write: 11.0 MiB/s
INFO: 97% (97.9 GiB of 100.0 GiB) in 46s, read: 3.2 GiB/s, write: 113.3 KiB/s
INFO: 100% (100.0 GiB of 100.0 GiB) in 47s, read: 2.1 GiB/s, write: 0 B/s
INFO: backup is sparse: 96.00 GiB (95%) total zero data
INFO: transferred 100.00 GiB in 47 seconds (2.1 GiB/s)
INFO: archive file size: 1.59GB
INFO: adding notes to backup
INFO: Finished Backup of VM 100 (00:02:46)
INFO: Backup finished at 2022-09-10 20:42:14
INFO: Backup job finished successfully
TASK OK

restore:
restore vma archive: zstd -q -d -c /mnt/pve/synology/dump/vzdump-qemu-100-2022_09_10-20_39_28.vma.zst | vma extract -v -r /var/tmp/vzdumptmp526267.fifo - /var/tmp/vzdumptmp526267
CFG: size: 435 name: qemu-server.conf
DEV: dev_id=1 size: 107374182400 devname: drive-sata0
CTIME: Sat Sep 10 20:39:28 2022
new volume ID is 'local-zfs:vm-200-disk-0'
map 'drive-sata0' to '/dev/zvol/rpool/data/vm-200-disk-0' (write zeros = 0)
progress 1% (read 1073741824 bytes, duration 52 sec)
progress 2% (read 2147483648 bytes, duration 52 sec)
progress 3% (read 3221225472 bytes, duration 52 sec)
progress 4% (read 4294967296 bytes, duration 53 sec)
progress 5% (read 5368709120 bytes, duration 53 sec)
progress 6% (read 6442450944 bytes, duration 53 sec)
progress 7% (read 7516192768 bytes, duration 70 sec)
progress 8% (read 8589934592 bytes, duration 70 sec)
progress 9% (read 9663676416 bytes, duration 70 sec)
progress 10% (read 10737418240 bytes, duration 70 sec)
progress 11% (read 11811160064 bytes, duration 70 sec)
progress 12% (read 12884901888 bytes, duration 70 sec)
progress 13% (read 13958643712 bytes, duration 70 sec)
progress 14% (read 15032385536 bytes, duration 70 sec)
progress 15% (read 16106127360 bytes, duration 70 sec)
progress 16% (read 17179869184 bytes, duration 70 sec)
progress 17% (read 18253611008 bytes, duration 70 sec)
progress 18% (read 19327352832 bytes, duration 70 sec)
progress 19% (read 20401094656 bytes, duration 70 sec)
progress 20% (read 21474836480 bytes, duration 82 sec)
progress 21% (read 22548578304 bytes, duration 82 sec)
progress 22% (read 23622320128 bytes, duration 82 sec)
progress 23% (read 24696061952 bytes, duration 82 sec)
progress 24% (read 25769803776 bytes, duration 82 sec)
progress 25% (read 26843545600 bytes, duration 82 sec)
progress 26% (read 27917287424 bytes, duration 82 sec)
progress 27% (read 28991029248 bytes, duration 82 sec)
progress 28% (read 30064771072 bytes, duration 82 sec)
progress 29% (read 31138512896 bytes, duration 82 sec)
progress 30% (read 32212254720 bytes, duration 82 sec)
progress 31% (read 33285996544 bytes, duration 138 sec)
progress 32% (read 34359738368 bytes, duration 138 sec)
progress 33% (read 35433480192 bytes, duration 138 sec)
progress 34% (read 36507222016 bytes, duration 152 sec)
progress 35% (read 37580963840 bytes, duration 152 sec)
progress 36% (read 38654705664 bytes, duration 152 sec)
progress 37% (read 39728447488 bytes, duration 152 sec)
progress 38% (read 40802189312 bytes, duration 152 sec)
progress 39% (read 41875931136 bytes, duration 153 sec)
progress 40% (read 42949672960 bytes, duration 153 sec)
progress 41% (read 44023414784 bytes, duration 153 sec)
progress 42% (read 45097156608 bytes, duration 154 sec)
progress 43% (read 46170898432 bytes, duration 154 sec)
progress 44% (read 47244640256 bytes, duration 154 sec)
progress 45% (read 48318382080 bytes, duration 154 sec)
progress 46% (read 49392123904 bytes, duration 157 sec)
progress 47% (read 50465865728 bytes, duration 157 sec)
progress 48% (read 51539607552 bytes, duration 157 sec)
progress 49% (read 52613349376 bytes, duration 157 sec)
progress 50% (read 53687091200 bytes, duration 158 sec)
progress 51% (read 54760833024 bytes, duration 334 sec)
progress 52% (read 55834574848 bytes, duration 516 sec)
progress 53% (read 56908316672 bytes, duration 568 sec)
progress 54% (read 57982058496 bytes, duration 568 sec)
progress 55% (read 59055800320 bytes, duration 568 sec)
progress 56% (read 60129542144 bytes, duration 568 sec)
progress 57% (read 61203283968 bytes, duration 568 sec)
progress 58% (read 62277025792 bytes, duration 568 sec)
progress 59% (read 63350767616 bytes, duration 568 sec)
progress 60% (read 64424509440 bytes, duration 568 sec)
progress 61% (read 65498251264 bytes, duration 568 sec)
progress 62% (read 66571993088 bytes, duration 569 sec)
progress 63% (read 67645734912 bytes, duration 666 sec)
progress 64% (read 68719476736 bytes, duration 1053 sec)
progress 65% (read 69793218560 bytes, duration 1463 sec)
progress 66% (read 70866960384 bytes, duration 1734 sec)
progress 67% (read 71940702208 bytes, duration 1734 sec)
progress 68% (read 73014444032 bytes, duration 1734 sec)
progress 69% (read 74088185856 bytes, duration 1734 sec)
progress 70% (read 75161927680 bytes, duration 1734 sec)
progress 71% (read 76235669504 bytes, duration 1734 sec)
progress 72% (read 77309411328 bytes, duration 1735 sec)
progress 73% (read 78383153152 bytes, duration 1735 sec)
progress 74% (read 79456894976 bytes, duration 1735 sec)
progress 75% (read 80530636800 bytes, duration 1735 sec)
progress 76% (read 81604378624 bytes, duration 1735 sec)
progress 77% (read 82678120448 bytes, duration 1735 sec)
progress 78% (read 83751862272 bytes, duration 1788 sec)
progress 79% (read 84825604096 bytes, duration 1788 sec)
progress 80% (read 85899345920 bytes, duration 1788 sec)
progress 81% (read 86973087744 bytes, duration 1788 sec)
progress 82% (read 88046829568 bytes, duration 1788 sec)
progress 83% (read 89120571392 bytes, duration 1788 sec)
progress 84% (read 90194313216 bytes, duration 1788 sec)
progress 85% (read 91268055040 bytes, duration 1788 sec)
progress 86% (read 92341796864 bytes, duration 1788 sec)
progress 87% (read 93415538688 bytes, duration 1788 sec)
progress 88% (read 94489280512 bytes, duration 1788 sec)
progress 89% (read 95563022336 bytes, duration 1788 sec)
progress 90% (read 96636764160 bytes, duration 1788 sec)
progress 91% (read 97710505984 bytes, duration 1788 sec)
progress 92% (read 98784247808 bytes, duration 1788 sec)
progress 93% (read 99857989632 bytes, duration 1788 sec)
progress 94% (read 100931731456 bytes, duration 1788 sec)
progress 95% (read 102005473280 bytes, duration 1788 sec)
progress 96% (read 103079215104 bytes, duration 1788 sec)
progress 97% (read 104152956928 bytes, duration 1788 sec)
progress 98% (read 105226698752 bytes, duration 1788 sec)
progress 99% (read 106300440576 bytes, duration 1788 sec)
progress 100% (read 107374182400 bytes, duration 1788 sec)
total bytes read 107374182400, sparse bytes 103078354944 (96%)
space reduction due to 4K zero blocks 2.17%
rescan volumes...
TASK OK
 
I think both directions are slow. Test the storage speed at the console. Maybe you have a performance problem with the SSDs.
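For example, a couple of ways to test it at the console (a sketch; fio may need to be installed first with "apt install fio", and the test file path is just an example):

# PVE's built-in benchmark: reports buffered reads and fsyncs/second for the given path
pveperf /rpool

# Sequential write test with fio against a file on the local-zfs dataset
fio --name=seqwrite --filename=/rpool/data/fio-test.bin --size=4G --bs=1M --rw=write \
    --ioengine=libaio --iodepth=4 --end_fsync=1
rm /rpool/data/fio-test.bin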
 
I would also guess you maybe just got an SMR HDD or QLC SSD as PVE storage? Those have OK read performance but terrible write performance. In that case restoring a VM would be way slower than backing it up. What disk models are you using?
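For example (a sketch, assuming /dev/sda is one of the SSDs):

# List block devices with their model names (ROTA=1 would indicate a spinning disk)
lsblk -o NAME,MODEL,SIZE,ROTA

# SMART identity info for one of the disks
smartctl -i /dev/sda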
 
Two SATA Samsung 870 EVO 250 GB SSDs.
 
Please send the VM config!

boot: order=sata0
cores: 2
memory: 4096
meta: creation-qemu=6.2.0,ctime=1661082740
name: Linux-b12
net0: e1000=8E:11:4A:A1:27:43,bridge=vmbr0
net1: e1000=DA:ED:F3:B2:E3:2A,bridge=vmbr0,tag=2
numa: 0
onboot: 1
ostype: l26
sata0: local-zfs:vm-100-disk-0,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=202feace-9a89-9087-aee0-e16c1e664255
sockets: 1
vmgenid: de12b1d5-bd5d-153c-a42e-611ff2fb13b0
 
