Backup, restore, and offline VM disk moves create high I/O load, and other VMs stop responding. Is it possible to limit resources for these tasks?
vzdump help
USAGE: vzdump help
USAGE: vzdump {<vmid>} [OPTIONS]
  <vmid>      <string>
              The ID of the guest system you want to backup.
  -all        <boolean>   (default=0)
              Backup all known guest systems on this host.
  -bwlimit    <integer> (0 - N)   (default=0)
              Limit I/O bandwidth (KBytes per second).
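A hedged example of a per-run limit using the option above (the VM ID, storage name, and the ~50 MB/s value are placeholders):

vzdump 101 --bwlimit 51200 --mode snapshot --storage local    # cap backup I/O at 51200 KiB/s (~50 MB/s)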
Note that -bwlimit works only with the "cfq" I/O scheduler; the default is now "deadline", which works much better IMHO but makes the bwlimit option useless.
What? bwlimit always works for backups.
Yes, sorry, I'm wrong; I confused it with ionice.
bwlimit support for other disk operations is in the works; if you are interested, check the pve-devel mailing list (archives).
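For backups specifically, both knobs can also be set as node-wide defaults; a minimal sketch of /etc/vzdump.conf, assuming the stock option names (the values are only examples, and ionice mainly matters with the cfq scheduler discussed above):

# /etc/vzdump.conf - node-wide backup defaults (example values)
bwlimit: 51200    # limit backup I/O to 51200 KiB/s (~50 MB/s)
ionice: 7         # lowest best-effort I/O priority for the dump process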
...
Since this bug is not getting any attention (I suspect it's beyond the scope of the Proxmox developers, being a kernel issue), we seriously need a way to limit the bandwidth of host disk writes during restores and migrations.
Do you mean you have implemented the limitation (which is really just a workaround), or have you solved the CPU freezes under high I/O?
Oh, so this was "only" related to backup jobs. I have a machine with two storage pools: one for root and multiple VMs, and one fast pool dedicated to a single VM. I noticed that while the root pool is under heavy load, the VM on its own storage still has some freezes. So is my only option to avoid this to get an "extra" pool just for the rootfs?
This is very interesting news, would love to test. Can you point me to the particular patch about this issue? Is it in ZFS or in PVE?

Both. There has been one issue relating to ZFS and qemu disk caching which could cause severe performance problems on offline disk moves and qmrestore operations; that has been fixed.

So I suppose this is in 5.x pvetest? Also, is there going to be a system-wide, general maximum bandwidth setting for these operations, or can it only be set via the GUI when you launch a single restore/migrate?

We also implemented bandwidth limits so that restore (and other operations) can be fine-tuned to not overwhelm the whole storage.
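For a system-wide default, newer PVE releases accept per-operation bandwidth limits in /etc/pve/datacenter.cfg; a sketch assuming that option exists in your release (values are in KiB/s and purely illustrative):

# /etc/pve/datacenter.cfg - cluster-wide I/O limit defaults (example values)
bwlimit: default=102400,restore=51200,migration=102400,move=51200,clone=51200

The limit can presumably also be passed per job, e.g. qmrestore vzdump-qemu-101.vma.lzo 101 --bwlimit 51200, if your qmrestore version supports the flag.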
If you are talking about ZFS here, it would be interesting to see some monitoring data from "arcstat" and "arc_summary" before and while you are experiencing this issue.
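A sketch of how that data could be collected while the heavy operation is running (the interval and pager are arbitrary choices):

arcstat 5           # ARC size and hit/miss statistics every 5 seconds
arc_summary | less  # one-shot summary of ARC size, hit ratios and tunables
zpool iostat -v 5   # per-vdev I/O of the pools while the backup/restore runs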
free -m
              total        used        free      shared  buff/cache   available
Mem:          48308       32520         394         147       15392       15071
Swap:          8191         119        8072