Has anybody else noticed that the I/O scheduler seems to perform very "unfairly"?
During a vzdump, the load average inside the OpenVZ containers and even on the host skyrockets to 5-20 with no actual load on the system. Everything that isn't the vzdump/tar process just sits waiting and waiting on I/O. vzctl set 0 --ioprio 2 and vzctl set 102 --ioprio 6 help a little, but not much; anything inside an OpenVZ container can pretty much forget about getting any I/O time. Even KVM suffers: the guest stops answering pings for about 10 seconds and then all the replies come back at once.
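For reference, this is roughly what I'm doing (102 is just my container ID; --save persists the setting across restarts, and the ionice line is only an idea that assumes the disks are on the CFQ scheduler and that the backup is started by hand rather than from the scheduler):

    # lower the host's (CT 0) I/O priority, raise the container's
    # ioprio ranges from 0 (lowest) to 7 (highest), default is 4
    vzctl set 0 --ioprio 2 --save
    vzctl set 102 --ioprio 6 --save

    # run the backup itself in the idle I/O class so everything else goes first
    ionice -c3 vzdump 102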