Chris Rivera
Guest
We are trying to figure out a way to properly limit the bandwidth to and from the VMs on our cloud, but this is not as simple as it sounds.
We have followed your instructions and added the limitation to outbound traffic, but this is where the issue lies.
############################
DEV=eth0
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 1024kbit allot 1500 prio 5 bounded isolated
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip src 199.***.***.*** flowid 1:1
tc qdisc add dev $DEV parent 1:1 sfq perturb 10
############################
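For reference, these are the counters we watch while the rules above are installed; the `-s` flag makes tc print per-qdisc and per-class statistics, which is where the drops show up (read-only diagnostic sketch, assuming the same eth0 device as above):

```shell
# Diagnostic sketch (read-only): inspect the qdisc/class/filter state
# installed by the script above. The device name is an assumption.
DEV=eth0
tc -s qdisc show dev $DEV    # per-qdisc "Sent ... (dropped ...)" counters
tc -s class show dev $DEV    # class 1:1 reports "dropped" and "overlimits"
tc filter show dev $DEV      # confirm the u32 src-IP match points at flowid 1:1
```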
Once we run this, connectivity to the node & the IP listed above becomes limited:
- if we run a ping, we get almost 50% dropped packets
- the clients cannot SSH to their VPS
- downloads from the VMs sit around 5MB and are not steady... they should be 12MB
As you may have noticed, we changed 256kbit to 1024kbit... all this really did was lower the ping times.
Example:
256k = 1500+ms ping time
512k = 1000+ms ping time
1024k = 500ms ping time
That still did not correct the packet-loss issue.
When i remove the rules by running:
DEV=eth0
tc qdisc del dev $DEV root
Ping times to the host node & the VM instantly drop back to 1ms. No dropped packets, fast SSH... no problems.
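As a side note, we run the teardown with the error suppressed, since `tc qdisc del` may print an RTNETLINK error and exit non-zero when no custom root qdisc is installed (small sketch, same device assumption as above):

```shell
# Idempotent teardown sketch: ignore the error tc may print when no
# custom root qdisc exists, so a calling script does not abort.
DEV=eth0
tc qdisc del dev $DEV root 2>/dev/null || true
```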
###############################
If I run just:
DEV=eth0
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 6144kbit allot 1500 prio 5 bounded isolated
the host node & VM lose packets and ping times rise.
###############################
Essentially, all we want to do is limit the CTs to 100Mbps. CT download speed tests before rate limiting max out at only 12.0M, which seems right, yet I have found VPSes using over 120MB, which is the maximum on the Proxmox MRTG graphs... and in Cacti I can see these VMs using over 800MB/s-2GB/s.
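One thing worth keeping straight when comparing these numbers: link rates (and tc rates like `100mbit`) are in bits per second, while download meters and many graphs report bytes per second, a factor of 8 apart. A quick sanity check:

```shell
# Bits vs bytes: a 100 Mbit/s cap corresponds to 100/8 = 12.5 MByte/s,
# which lines up with the ~12.0M we see in the CT download tests.
mbit=100
echo "${mbit} Mbit/s = $(( mbit / 8 )).$(( (mbit % 8) * 10 / 8 )) MByte/s"
```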
We need to be able to set these vms to a 100mbps limit.
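For comparison, here is the direction we are considering: an HTB sketch that caps a single VM at 100mbit. HTB takes absolute rates and does not need CBQ's link-bandwidth estimation parameters. The device and VM address below are placeholders (not our real values); does this look like the right approach?

```shell
# Sketch: cap traffic sourced from one VM at 100 Mbit/s with HTB.
# DEV and VM_IP are placeholders (VM_IP is an RFC 5737 documentation address).
DEV=eth0
VM_IP=198.51.100.10
tc qdisc del dev $DEV root 2>/dev/null
tc qdisc add dev $DEV root handle 1: htb default 10
tc class add dev $DEV parent 1: classid 1:10 htb rate 1gbit        # default class, unshaped
tc class add dev $DEV parent 1: classid 1:20 htb rate 100mbit ceil 100mbit
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
    match ip src $VM_IP/32 flowid 1:20
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
```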
Can you shed some light on how this can be properly tuned? Thanks.