Limit IO (I/O) in LXC Container / ZFS

TheMrg

Active Member
Aug 1, 2019
Is there a way in Proxmox to limit the IO of an LXC container?
It is on ZFS.

Something like
lxc config device set ci root limits.read 30MB
lxc config device set ci root limits.write 10MB

does not work; there is no lxc command.
We did not find anything about this in the forum.

Thanks so much.
 
Have you had a look at cgroups?
 
Unfortunately, this is not trivial. What is written in this thread is still valid.
 
Thanks, we will read it.
We see there is no out-of-the-box solution.

Is it possible with a KVM VM?
 
Is there a tool which shows the actual disk IO on a per-LXC-container basis?
iotop shows it per process, so we have to do
cat /proc/NNNN/cgroup
Is there a tool which shows these values per container?
 
Is it possible with a KVM VM?
You can set various IO parameters during VM creation in the Hard Disk Tab if you tick the "Advanced" checkbox.
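If you prefer the command line, the same limits should be settable with qm set via the disk options (a rough sketch; the VM ID, storage and disk name here are only examples, adjust them to your setup):
Code:
# example: limit reads to 30 MB/s and writes to 10 MB/s on VM 101's scsi0 disk
qm set 101 --scsi0 local-zfs:vm-101-disk-0,mbps_rd=30,mbps_wr=10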

Is there a tool which shows the actual disk IO on a per-LXC-container basis?
You can use
Code:
pct status CTID --verbose
or scroll down in the Summary of the container in the GUI.
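If you want to look at the raw counters behind those values, they can also be read directly from the container's blkio cgroup (only a sketch, assuming cgroup v1 and container ID 100):
Code:
# cumulative bytes read/written per device for container 100 (cgroup v1)
cat /sys/fs/cgroup/blkio/lxc/100/blkio.throttle.io_service_bytes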
 
Sorry Dominic,

but did you read TheMrg's questions correctly? Regarding your answer: he did not ask about KVM settings, since he already knows that! He asked about LXC containers, especially on ZFS!

Dear TheMrg,

I used a solution like this under Proxmox 5.x and it worked back then, but I am not sure the syntax is entirely correct, and I have not tested it now under Proxmox 6.x, so take this only as an idea for a possible solution!

First you must list the block devices on your host:

lsblk

There you must find your ZFS drives, for example (output shortened); on another machine with different hardware you will have to adjust this again!

nvme0n1 259:0 0 1.8T 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
├─nvme0n1p2 259:2 0 1.8T 0 part
└─nvme0n1p9 259:3 0 8M 0 part
nvme1n1 259:4 0 1.8T 0 disk
├─nvme1n1p1 259:5 0 512M 0 part
├─nvme1n1p2 259:6 0 1.8T 0 part
└─nvme1n1p9 259:7 0 8M 0 part
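If it helps, lsblk can also print the major:minor numbers directly, so you do not have to pick them out of the default output (a small sketch):
Code:
lsblk -o NAME,MAJ:MIN,SIZE,TYPE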

Now you can use 259:0 and 259:4 (the mirrored disks) for the settings. If you have more disks, you must create more rules.

OK, then you must create scripts:

echo cfq > /sys/module/zfs/parameters/zfs_vdev_scheduler

echo 500 > /sys/fs/cgroup/blkio/lxc/node01/blkio.weight
echo 100 > /sys/fs/cgroup/blkio/lxc/node02/blkio.weight
echo 1000 > /sys/fs/cgroup/blkio/lxc/node04/blkio.weight
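Note that these echo settings are lost after a reboot, so to keep them you could collect them in a small script and run it once the containers are up (only a sketch; the script path is made up and the cgroup paths assume cgroup v1):
Code:
#!/bin/sh
# /usr/local/bin/apply-blkio-weights.sh (hypothetical path)
# re-apply the scheduler and blkio weights after the containers have started
echo cfq  > /sys/module/zfs/parameters/zfs_vdev_scheduler
echo 500  > /sys/fs/cgroup/blkio/lxc/node01/blkio.weight
echo 100  > /sys/fs/cgroup/blkio/lxc/node02/blkio.weight
echo 1000 > /sys/fs/cgroup/blkio/lxc/node04/blkio.weight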

Then, for the containers, you can test it out first:

cd /sys/fs/cgroup/blkio/lxc/100   # for container 100
echo "259:0 500" > blkio.throttle.write_bps_device
echo "259:0 500" > blkio.throttle.read_bps_device

This only overrides the values temporarily; later you can make them permanent in your LXC config file (/etc/pve/lxc/100.conf)

in this form:

lxc.cgroup.blkio.throttle.read_iops_device: 259:0 20
lxc.cgroup.blkio.throttle.write_iops_device: 259:0 10
lxc.cgroup.blkio.throttle.read_iops_device: 259:4 20
lxc.cgroup.blkio.throttle.write_iops_device: 259:4 10
for other drives:
lxc.cgroup.blkio.throttle.read_iops_device: 8:32 20
lxc.cgroup.blkio.throttle.write_iops_device: 8:32 10
lxc.cgroup.blkio.throttle.read_iops_device: 8:48 20
lxc.cgroup.blkio.throttle.write_iops_device: 8:48 10
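After restarting the container you could check that the limits were really applied by reading the throttle files back (a sketch, assuming container 100 and cgroup v1):
Code:
pct stop 100 && pct start 100
cat /sys/fs/cgroup/blkio/lxc/100/blkio.throttle.read_iops_device
cat /sys/fs/cgroup/blkio/lxc/100/blkio.throttle.write_iops_device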
 
Sorry Dominic,
but did you read TheMrg's questions correctly?
I hope so.


He did not ask about KVM settings
Is it possible with a KVM VM?



But he asked about LXC containers, especially on ZFS!
Which is why I provided a link where IO limits for containers are discussed.
What is written in this thread is still valid.
Admittedly, ZFS is not mentioned there. However, this doesn't change anything essential as far as I can see.




I'd be delighted if you could provide a reliable source proving that CFQ is advantageous in this situation.
echo cfq > /sys/module/zfs/parameters/zfs_vdev_scheduler
 
As far as I have tested, I think the schedulers for ZFS disk IO do not work in Proxmox 6?!

OK, CFQ or DEADLINE: after reading so many forum entries, nothing is completely clear, and the change can be made in two different places:

/sys/module/zfs/parameters/zfs_vdev_scheduler

OR

/sys/block/[diskdrive]/queue/scheduler
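To see what is actually active, you can read both locations back (a sketch; replace nvme0n1 with your own disk):
Code:
cat /sys/module/zfs/parameters/zfs_vdev_scheduler
cat /sys/block/nvme0n1/queue/scheduler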

In the first versions of Proxmox 4.x, the "Read IO" and "Write IO" statistics in the GUI were not filled in. Later, after a kernel bug was fixed, the statistics for ZFS containers showed up perfectly! Now in Proxmox 6.x it is as bad as in Proxmox 4.x before; it looks like we went 10 steps forward and 7 steps backwards!

And also, the IO limitations, as I have written, do not work in Proxmox 6.x, the same as before in Proxmox 4.x! It looks like the scheduler in Proxmox is switched off by default?!
 
