I/O disk limit

We went with ZFS so that we could use replication; enabling replication and HA for several containers was easy in Proxmox with ZFS.

I will have to see if it's possible to use the replication feature if we move from ZFS to Ceph. (I have never used Ceph before, but if it is a better fit for us then I will research it and test implementing it!)

Thank you fabian, I really appreciate the info!
 
@fabian I was told I can change the I/O scheduler from what ZFS normally uses to CFQ, and that I would then be able to set blkio weighting using cgroups while still using ZFS. I am not sure what other side effects changing the scheduler might cause; for example, would the replication feature in Proxmox still work if I changed the ZFS scheduler to CFQ? (A rough sketch of what was suggested is below the links.)

https://www.reddit.com/r/Proxmox/comments/r3szt5/how_to_handle_io_saturation_of_zfs_pool_slowing/

https://askubuntu.com/questions/577098/cgroups-disk-io-throttling-with-zfs

https://unix.stackexchange.com/ques...eight-doesnt-seem-to-have-the-expected-effect
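
Something like this, if I understood it correctly. A sketch only, untested; /dev/sda and container ID 123 are placeholders, the cgroup paths assume a v1 layout, and CFQ itself was removed in kernel 5.0, so on newer kernels BFQ would be the weight-capable scheduler instead:

```bash
# Check which scheduler the pool's member disk is using (ZFS typically sets
# "none"/"noop" on disks it manages whole):
cat /sys/block/sda/queue/scheduler

# Switch to a weight-capable scheduler (cfq pre-5.0, bfq on newer kernels):
echo cfq > /sys/block/sda/queue/scheduler

# Lower the blkio weight for container 123 (CFQ weights range 10-1000;
# BFQ exposes blkio.bfq.weight instead):
echo 100 > /sys/fs/cgroup/blkio/lxc/123/blkio.weight
```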
 
Replication shouldn't be affected by the scheduler settings, but I am not sure what the performance implications are for your workload; you'd have to test that yourself. Also note that this still doesn't allow setting I/O limits, just weights.
 
AFAIK - no.
 
I've been trying what has been discussed here, though I don't seem to have the same setup as everyone else.
I'm using Proxmox 7.1-10.
Guessing something changed along the way: I've got blkio mounted to /sys/fs/bpf, and there is no "blkio" directory inside /sys/fs/cgroup/ or /sys/fs/cgroup/lxc/{id}/.
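
For anyone hitting the same thing, checking the filesystem type on the cgroup mount shows which hierarchy the host is on (on v2 there is a single unified tree, so the old per-controller directories like blkio are gone):

```bash
# Prints "cgroup2fs" on a unified cgroup v2 host, "tmpfs" on the old v1 layout
stat -fc %T /sys/fs/cgroup/

# On v2 the "io" controller replaces blkio; list what is enabled for the lxc tree
cat /sys/fs/cgroup/lxc/cgroup.controllers
```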
 
Neat, that works, so it's using cgroup v2 now.

All that has to be done is echo the correct configuration into `/sys/fs/cgroup/lxc/{id}/io.max`.
The only problem is that the configuration seems to be reset when the container is restarted, but oh well, nothing a quick simple script can't fix.
Thankfully the disk IDs don't change, so it's quite easy to do.
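
In case it's useful to anyone else, the line I echo looks roughly like this; 230:0 is just an example major:minor for a zvol (look yours up with lsblk), and the byte/IOPS values are placeholders:

```bash
# Cap container 123 at ~50 MB/s and 400 IOPS in each direction
echo "230:0 rbps=52428800 wbps=52428800 riops=400 wiops=400" \
    > /sys/fs/cgroup/lxc/123/io.max
```

And for the quick script, a Proxmox hookscript can reapply the limit on every start. A minimal sketch (device and limits are again placeholders):

```bash
#!/bin/bash
# /var/lib/vz/snippets/io-limit.sh - reapply the io.max limit after each start
vmid="$1"
phase="$2"
if [ "$phase" = "post-start" ]; then
    echo "230:0 rbps=52428800 wbps=52428800" > "/sys/fs/cgroup/lxc/${vmid}/io.max"
fi
```

Attach it with `pct set 123 --hookscript local:snippets/io-limit.sh` (the snippets content type has to be enabled on the storage).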
 