Is it possible to throttle backup and restore disk I/O?

Yes.
Code:
vzdump help
USAGE: vzdump help
USAGE: vzdump {<vmid>} [OPTIONS]
  <vmid>     <string>

             The ID of the guest system you want to backup.

  -all       <boolean>   (default=0)

             Backup all known guest systems on this host.

  -bwlimit   <integer> (0 - N)   (default=0)

             Limit I/O bandwidth (KBytes per second).
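
For example, to cap a backup of guest 100 at roughly 50 MB/s (the VMID and the 51200 KB/s value are only illustrations):

Code:
vzdump 100 -bwlimit 51200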
 
note that -bwlimit only works with the "cfq" I/O scheduler. the default is now "deadline", which performs much better IMHO, but it makes the -bwlimit option useless.
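
for reference, you can check and switch the scheduler per block device through sysfs ("sda" here is just an example device):

Code:
# show the available schedulers (the one in brackets is active)
cat /sys/block/sda/queue/scheduler
# switch to cfq so -bwlimit takes effect
echo cfq > /sys/block/sda/queue/scheduler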
 
Hi,

It's really a problem for us: we have technicians who sometimes have to move VMs and restore backups, and any of those operations kills I/O on the other VMs. We have a 10G network and RAID10 SSDs.

What command do you suggest?

Anyway, a GUI option when starting any I/O task would be highly appreciated.

Thank you!
 
Hi,

I use cstream as a workaround. With this command, I can restore a VM with a 30MB/s limit:

Code:
cstream -t 30000000 -i /mnt/pve/backups/dump/backupfile.vma.lzo | lzop -cd | qmrestore - newVMID --storage destinationstorage

I think it can also be used to migrate VMs between nodes, to mitigate high I/O.
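
For example (an untested sketch - the hostname and target path are made up), the same throttling idea could be applied to a manual copy of a backup file to another node:

Code:
# push a backup to another node at ~30 MB/s (-t takes bytes per second)
cstream -t 30000000 -i /mnt/pve/backups/dump/backupfile.vma.lzo \
  | ssh root@othernode 'cat > /var/tmp/backupfile.vma.lzo'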

Regards,
 
bwlimit support for other disk operations is in the works; if you are interested, check the pve-devel mailing list (archives).
 
bwlimit support for other disk operations is in the works; if you are interested, check the pve-devel mailing list (archives).

Is there any news on limiting the bandwidth of restore and migrate operations?

Due to the KVM CPU freeze bug in the kernel that I reported (and posted about several times), heavy disk writes not only slow down the other VMs' disk I/O but their network I/O as well, because the guest CPUs freeze, leaving them in an almost entirely frozen state for the duration of the operation.

Bug 1453 - CPU freezes on KVM guests during high IO load on host
https://bugzilla.proxmox.com/show_bug.cgi?id=1453

Since this bug is not getting any attention (I suspect it is beyond the scope of the Proxmox developers, being a kernel issue), we seriously need ways to limit the bandwidth of host disk writes during restores and migrations.
 
...

Since this bug is not getting any attention (I suspect it is beyond the scope of the Proxmox developers, being a kernel issue), we seriously need ways to limit the bandwidth of host disk writes during restores and migrations.

As Fabian wrote, our devs are working on that, and in fact it's already available - please test and give feedback.
 
Do you mean you have implemented the limit (which is just kind of a workaround), or have you solved the CPU freezes under high I/O?

both. there has been one issue relating to ZFS and qemu disk-caching which could cause severe performance problems on offline disk moving and qmrestore operations - that has been fixed. we also implemented bandwidth limits so that restore (and other operations) can be fine-tuned to not overwhelm the whole storage.
 
Oh, so this was "only" related to backup jobs. I have a machine with two storage pools: one for root plus multiple VMs, and a fast one for a single VM. I noticed that while the root pool is "under heavy load", the VM with its own storage has some freezes. So is my only option to avoid this to get an "extra" pool just for the root filesystem?
 
Oh, so this was "only" related to backup jobs. I have a machine with two storage pools: one for root plus multiple VMs, and a fast one for a single VM. I noticed that while the root pool is "under heavy load", the VM with its own storage has some freezes. So is my only option to avoid this to get an "extra" pool just for the root filesystem?

if you are talking about ZFS here, it would be interesting to see some monitoring data from "arcstat" and "arc_summary" before and while you are experiencing this issue.
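
something like this should do it (depending on the ZFS-on-Linux version the tools may be called arcstat.py / arc_summary.py; the 5-second interval is just an example):

Code:
arcstat 5      # ARC hit/miss statistics, printed every 5 seconds
arc_summary    # one-shot summary of ARC size, hit rates and tunables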
 
both. there has been one issue relating to ZFS and qemu disk-caching which could cause severe performance problems on offline disk moving and qmrestore operations - that has been fixed.
This is very interesting news; I would love to test it. Can you point me to the particular patch for this issue? Is it in ZFS or in PVE?

we also implemented bandwidth limits so that restore (and other operations) can be fine-tuned to not overwhelm the whole storage
So I suppose this is in 5.x pvetest? Also, is there going to be a system-wide, general maximum bandwidth setting for these operations, or can it only be set via the GUI when you launch a single restore/migrate?
 
This is very interesting news; I would love to test it. Can you point me to the particular patch for this issue? Is it in ZFS or in PVE?

the ZFS issue was fixed in pve-qemu-kvm (vma/qmrestore) and qemu-server (move disk/qemu-img convert).

So I suppose this is in 5.x pvetest? Also, is there going to be a system-wide, general maximum bandwidth setting for these operations, or can it only be set via the GUI when you launch a single restore/migrate?

yes. see "man datacenter.cfg" and "man pvesm". note that the actual rate-limiting is not yet implemented for anything except "qmrestore" and "pct restore" (and their respective API counterparts). the rest will follow (soon) though.
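
for reference, the settings could look something like this - the storage name and the KiB/s values are just examples, check the man pages for the exact syntax of your version:

Code:
# /etc/pve/datacenter.cfg - datacenter-wide defaults (KiB/s)
bwlimit: restore=51200,default=102400

# per-storage override
pvesm set local --bwlimit restore=51200

# one-off override for a single restore
qmrestore /mnt/pve/backups/dump/backup.vma.lzo 123 --bwlimit 51200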
 
if you are talking about ZFS here, it would be interesting to see some monitoring data from "arcstat" and "arc_summary" before and while you are experiencing this issue.

before:
https://pastebin.com/DPvREayk
meanwhile:
https://pastebin.com/AThQeRJy

Code:
free -m
              total        used        free      shared  buff/cache   available
Mem:          48308       32520         394         147       15392       15071
Swap:          8191         119        8072

I can see that the ARC has dropped somewhat, but even before I start the heavy I/O task I have 10G of free RAM.
 
