author     Vasily Averin <vvs@parallels.com>                2013-04-01 03:01:32 +0000
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2013-05-01 08:56:36 -0700
commit     f7b8a0f5795aca696f78042db1b8c4b3d07e04c5 (patch)
tree       5a4725d65fbcebafbb6ab5bb9056892bd8a20446 /net
parent     9758b79c56ae6dc93f660928a0d389ba45e530ed (diff)
cbq: incorrect processing of high limits
[ Upstream commit f0f6ee1f70c4eaab9d52cf7d255df4bd89f8d1c2 ]

Currently CBQ works incorrectly for limits above 10% of the real link
bandwidth, and practically does not work for limits above 50% of the
real link bandwidth. Below are the results of experiments taken on a
1 Gbit link:

 In shaper | Actual Result
-----------+---------------
   100M    |   108 Mbps
   200M    |   244 Mbps
   300M    |   412 Mbps
   500M    |   893 Mbps

This happens because q->now changes incorrectly in cbq_dequeue(): when
it is called before the real end of packet transmission, L2T is greater
than the real time delay, so q->now gets an extra boost that is never
compensated.

To fix this problem we prevent changing q->now until it has
synchronized with real time.

Signed-off-by: Vasily Averin <vvs@openvz.org>
Reviewed-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
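For readers unfamiliar with the CBQ time integrator, the following is a
minimal userspace sketch of the drift described above. It is purely
illustrative and not part of the patch; the variables l2t and real_delta
are hypothetical stand-ins for the L2T() estimate and the real elapsed
time between dequeues.

/* Illustrative simulation only -- not kernel code. */
#include <stdio.h>

int main(void)
{
	const long l2t = 120;		/* estimated tx time per packet (too high)  */
	const long real_delta = 100;	/* real time that actually elapses          */
	long q_now, now;
	int i;

	/* Old behaviour: the artificial clock is always advanced by the
	 * estimate, so the surplus (l2t - real_delta) is never returned
	 * and q_now runs further ahead of real time on every dequeue.   */
	q_now = now = 0;
	for (i = 0; i < 5; i++) {
		now += real_delta;
		q_now += l2t;
		printf("old: drift after %d dequeues = %ld\n", i + 1, q_now - now);
	}

	/* Patched behaviour: outside an active transmission q_now is only
	 * pulled forward once real time has overtaken it, so the drift is
	 * bounded instead of growing without limit.                      */
	q_now = now = 0;
	for (i = 0; i < 5; i++) {
		now += real_delta;
		if (now > q_now)
			q_now = now;
		printf("new: drift after %d dequeues = %ld\n", i + 1, q_now - now);
	}
	return 0;
}

With the numbers above, the old path reports a drift that grows by 20
units per dequeue, while the clamped path stays at zero. In the real
scheduler the drift makes the shaper believe more time has passed than
really did, which is why the measured rates in the table exceed the
configured limits.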
Diffstat (limited to 'net')
-rw-r--r--  net/sched/sch_cbq.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 599f67a..b7cddb9 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -963,8 +963,11 @@ cbq_dequeue(struct Qdisc *sch)
 		cbq_update(q);
 		if ((incr -= incr2) < 0)
 			incr = 0;
+		q->now += incr;
+	} else {
+		if (now > q->now)
+			q->now = now;
 	}
-	q->now += incr;
 	q->now_rt = now;
 
 	for (;;) {