Possible to limit IO for disk move or live migration

mailinglists

Renowned Member
Mar 14, 2012
When I move a VM disk on ZFS, I see high IO wait.
Reasons aside, I want to reduce that IO wait by limiting the IO load of the disk-move operation.
I think the same logic is also used when doing live migration with local disks.

Is there a way to specify disk bandwidth for such operations, or maybe IO priority?

I guess in extreme cases I could use ionice on the already running process to reduce IO strain.
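For reference, ionice (from util-linux) can re-prioritize a process that is already running. A minimal sketch, using this shell's own PID as a stand-in for the migration worker's PID:

```shell
# Demote this shell's IO priority as a demo; in practice you would pass the
# PID of the running qemu-img / zfs send worker instead of $$.
ionice -c 2 -n 7 -p $$   # best-effort class, lowest priority (7)
ionice -p $$             # show the resulting class and priority

# Class 3 (idle) is more drastic: the process only gets disk time when no
# other process wants it. <worker-pid> is a placeholder:
#   ionice -c 3 -p <worker-pid>
```

Note that this only changes scheduling priority, not bandwidth; on an otherwise idle host the copy will still saturate the disk.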
 
The bwlimit: option in datacenter.cfg doesn't work. You can only limit disk copies with rsync's bwlimit, and only when the VM is stopped.

See other forum posts: moving disks with limits set in datacenter.cfg doesn't work. It just uses the full bandwidth available.
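For reference, the syntax under discussion in /etc/pve/datacenter.cfg looks like the fragment below (whether or not it actually throttles online moves on this version). Per the reference documentation the values are in KiB/s; the numbers here are only examples (51200 KiB/s = 50 MiB/s):

```
# /etc/pve/datacenter.cfg (example values, in KiB/s)
bwlimit: move=51200,migration=51200,clone=51200
```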
 
@check-ict I will test and report back. Somehow I doubt such a feature would not work.

@wolfgang can we set a limit for a single storage in storage.cfg, or must we use the global config, which applies to all our storages?
Also, the unit MiB/s is mebibytes per second, right?
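If per-storage limits are supported on this version, the storage.cfg override being asked about would look roughly like this. The storage name and values are hypothetical; per the reference documentation the unit is KiB/s (not MiB/s), and a per-storage value overrides the datacenter-wide default:

```
# /etc/pve/storage.cfg (hypothetical storage entry, limits in KiB/s)
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        bwlimit move=30720,migration=30720
```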
 
Em...? What gives?
Any links to bug reports or feature requests?
Why would the manual already document options that are not yet implemented, or that were removed?
 
I see. :-(
@wolfgang do you think adding ionice before qemu migrate, or whatever commands are used for disk moving, cloning, etc., would be an option for the Proxmox developers? We could probably get away with a static value, since these commands should never interfere with running VMs, yet they do. We could also limit bandwidth with tools like cstream or mbuffer.

Optionally, those commands could accept bandwidth and ionice settings as parameters.
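As a hedged sketch of the idea: dd below stands in for whatever copy command Proxmox would actually run (qemu-img convert, zfs send, ...), and the dataset name and 50M rate in the mbuffer comment are illustrative, not real values from this setup:

```shell
# Copy a scratch file under the idle IO class; a real wrapper would launch
# qemu-img convert or zfs send the same way.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=4 2>/dev/null
ionice -c 3 dd if="$src" of="$dst" bs=1M 2>/dev/null
cmp -s "$src" "$dst" && echo "copy ok"
rm -f "$src" "$dst"

# Bandwidth capping with mbuffer instead (illustrative, not run here):
#   zfs send rpool/data/vm-100-disk-0@mig | mbuffer -q -r 50M | \
#       ssh target "zfs receive rpool/data/vm-100-disk-0"
```

ionice addresses priority (yield to VM IO) while mbuffer/cstream address throughput; the two could be combined.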
 