Restoring a backup nearly brings other VMs to a standstill

achim22

Renowned Member
Hi everyone,
For testing purposes, I created a backup under /var/lib/vz/datensicherung.

When I create this backup, and especially when I restore it, the VMs that are not being backed up slow down so much that you can barely work with them anymore.
Is that normal? Oddly enough, I see hardly any load during the backup or restore.
The restore also takes quite a long time.

Regards
Achim

proxmox-ve: 5.1-32 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.13-2-pve: 4.13.13-32
pve-kernel-4.10.17-5-pve: 4.10.17-25
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9

restore vma archive: lzop -d -c /var/lib/vz/datensicherung/dump/vzdump-qemu-204-2017_12_27-18_03_22.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp24776.fifo - /var/tmp/vzdumptmp24776
CFG: size: 385 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-ide0
DEV: dev_id=2 size: 536870912000 devname: drive-virtio0
CTIME: Wed Dec 27 18:03:22 2017
Formatting '/var/lib/vz/images/204/vm-204-disk-1.qcow2', fmt=qcow2 size=34359738368 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'local:204/vm-204-disk-1.qcow2'
map 'drive-ide0' to '/var/lib/vz/images/204/vm-204-disk-1.qcow2' (write zeros = 0)
Formatting '/var/lib/vz/images/204/vm-204-disk-2.qcow2', fmt=qcow2 size=536870912000 encryption=off cluster_size=65536 preallocation=metadata lazy_refcounts=off refcount_bits=16
new volume ID is 'local:204/vm-204-disk-2.qcow2'
map 'drive-virtio0' to '/var/lib/vz/images/204/vm-204-disk-2.qcow2' (write zeros = 0)
progress 1% (read 5712314368 bytes, duration 7 sec)
progress 2% (read 11424628736 bytes, duration 7 sec)
progress 3% (read 17136943104 bytes, duration 8 sec)
progress 4% (read 22849257472 bytes, duration 8 sec)
progress 5% (read 28561571840 bytes, duration 9 sec)
progress 6% (read 34273886208 bytes, duration 9 sec)
progress 7% (read 39986200576 bytes, duration 95 sec)
progress 8% (read 45698514944 bytes, duration 247 sec)
progress 9% (read 51410763776 bytes, duration 377 sec)
progress 10% (read 57123078144 bytes, duration 527 sec)
progress 11% (read 62835392512 bytes, duration 654 sec)
progress 12% (read 68547706880 bytes, duration 773 sec)
progress 13% (read 74260021248 bytes, duration 926 sec)
progress 14% (read 79972335616 bytes, duration 1054 sec)
progress 15% (read 85684649984 bytes, duration 1182 sec)
progress 16% (read 91396964352 bytes, duration 1330 sec)
progress 17% (read 97109213184 bytes, duration 1453 sec)
progress 18% (read 102821527552 bytes, duration 1581 sec)
progress 19% (read 108533841920 bytes, duration 1714 sec)
progress 20% (read 114246156288 bytes, duration 1865 sec)
progress 21% (read 119958470656 bytes, duration 1991 sec)
progress 22% (read 125670785024 bytes, duration 2121 sec)
progress 23% (read 131383099392 bytes, duration 2249 sec)
progress 24% (read 137095413760 bytes, duration 2412 sec)
progress 25% (read 142807662592 bytes, duration 2533 sec)
progress 26% (read 148519976960 bytes, duration 2615 sec)
progress 27% (read 154232291328 bytes, duration 2683 sec)
progress 28% (read 159944605696 bytes, duration 2741 sec)
progress 29% (read 165656920064 bytes, duration 2805 sec)
progress 30% (read 171369234432 bytes, duration 2881 sec)
progress 31% (read 177081548800 bytes, duration 2958 sec)
progress 32% (read 182793863168 bytes, duration 3007 sec)
progress 33% (read 188506177536 bytes, duration 3051 sec)
progress 34% (read 194218426368 bytes, duration 3093 sec)
progress 35% (read 199930740736 bytes, duration 3134 sec)
progress 36% (read 205643055104 bytes, duration 3177 sec)
progress 37% (read 211355369472 bytes, duration 3217 sec)
progress 38% (read 217067683840 bytes, duration 3260 sec)
progress 39% (read 222779998208 bytes, duration 3308 sec)
progress 40% (read 228492312576 bytes, duration 3360 sec)
progress 41% (read 234204626944 bytes, duration 3412 sec)
progress 42% (read 239916875776 bytes, duration 3461 sec)
progress 43% (read 245629190144 bytes, duration 3511 sec)
progress 44% (read 251341504512 bytes, duration 3570 sec)
progress 45% (read 257053818880 bytes, duration 3622 sec)
progress 46% (read 262766133248 bytes, duration 3667 sec)
progress 47% (read 268478447616 bytes, duration 3719 sec)
progress 48% (read 274190761984 bytes, duration 3771 sec)
progress 49% (read 279903076352 bytes, duration 3823 sec)
progress 50% (read 285615325184 bytes, duration 3875 sec)
progress 51% (read 291327639552 bytes, duration 3928 sec)
progress 52% (read 297039953920 bytes, duration 3982 sec)
progress 53% (read 302752268288 bytes, duration 4037 sec)
progress 54% (read 308464582656 bytes, duration 4092 sec)
progress 55% (read 314176897024 bytes, duration 4147 sec)
progress 56% (read 319889211392 bytes, duration 4213 sec)
progress 57% (read 325601525760 bytes, duration 4265 sec)
progress 58% (read 331313840128 bytes, duration 4316 sec)
progress 59% (read 337026088960 bytes, duration 4374 sec)
progress 60% (read 342738403328 bytes, duration 4431 sec)
progress 61% (read 348450717696 bytes, duration 4487 sec)
progress 62% (read 354163032064 bytes, duration 4543 sec)
progress 63% (read 359875346432 bytes, duration 4602 sec)
progress 64% (read 365587660800 bytes, duration 4661 sec)
progress 65% (read 371299975168 bytes, duration 4719 sec)
progress 66% (read 377012289536 bytes, duration 4778 sec)
progress 67% (read 382724538368 bytes, duration 4836 sec)
progress 68% (read 388436852736 bytes, duration 4881 sec)
progress 69% (read 394149167104 bytes, duration 4929 sec)
progress 70% (read 399861481472 bytes, duration 4974 sec)
progress 71% (read 405573795840 bytes, duration 5019 sec)
progress 72% (read 411286110208 bytes, duration 5064 sec)
progress 73% (read 416998424576 bytes, duration 5110 sec)
progress 74% (read 422710738944 bytes, duration 5155 sec)
progress 75% (read 428422987776 bytes, duration 5199 sec)
progress 76% (read 434135302144 bytes, duration 5247 sec)
progress 77% (read 439847616512 bytes, duration 5294 sec)
progress 78% (read 445559930880 bytes, duration 5340 sec)
progress 79% (read 451272245248 bytes, duration 5387 sec)
progress 80% (read 456984559616 bytes, duration 5433 sec)
progress 81% (read 462696873984 bytes, duration 5479 sec)
progress 82% (read 468409188352 bytes, duration 5526 sec)
progress 83% (read 474121502720 bytes, duration 5577 sec)
progress 84% (read 479833751552 bytes, duration 5628 sec)
progress 85% (read 485546065920 bytes, duration 5680 sec)
progress 86% (read 491258380288 bytes, duration 5731 sec)
progress 87% (read 496970694656 bytes, duration 5781 sec)
progress 88% (read 502683009024 bytes, duration 5834 sec)
progress 89% (read 508395323392 bytes, duration 5888 sec)
progress 90% (read 514107637760 bytes, duration 5940 sec)
progress 91% (read 519819952128 bytes, duration 5991 sec)
progress 92% (read 525532200960 bytes, duration 6044 sec)
progress 93% (read 531244515328 bytes, duration 6096 sec)
progress 94% (read 536956829696 bytes, duration 6150 sec)
progress 95% (read 542669144064 bytes, duration 6241 sec)
progress 96% (read 548381458432 bytes, duration 6299 sec)
progress 97% (read 554093772800 bytes, duration 6354 sec)
progress 98% (read 559806087168 bytes, duration 6363 sec)
progress 99% (read 565518401536 bytes, duration 6363 sec)
progress 100% (read 571230650368 bytes, duration 6363 sec)
total bytes read 571230650368, sparse bytes 51082334208 (8.94%)
space reduction due to 4K zero blocks 0.0283%
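A quick sanity check on the totals in the log above: 571230650368 bytes read in 6363 seconds works out to roughly 85 MiB/s on average, which a single local disk can barely sustain while simultaneously writing the restored qcow2 images to the same volume:

```shell
# Average restore throughput from the totals in the log above
bytes=571230650368   # total bytes read
secs=6363            # total duration in seconds
echo "$(( bytes / secs / 1024 / 1024 )) MiB/s"   # prints "85 MiB/s"
```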
 

Attachment: last.png (34.2 KB)
That sounds like slow storage. During the restore, the backup is read from and written to the same storage at the same time.
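One common mitigation, not suggested in this thread but worth a try: vzdump's bandwidth can be capped via `bwlimit` in /etc/vzdump.conf (value in KiB/s), which takes I/O pressure off the shared storage at the cost of a longer backup window. The value 51200 below is only an illustrative choice:

```
# /etc/vzdump.conf -- cap backup bandwidth at ~50 MiB/s (value in KiB/s)
bwlimit: 51200
```

The more fundamental fix is to put the backup target on a different physical disk (or a network share) than the VM images, so backup and restore traffic no longer compete with guest I/O on the same spindles.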
