author:    Peter Zijlstra <a.p.zijlstra@chello.nl>  2008-10-24 11:06:12 +0200
committer: Ingo Molnar <mingo@elte.hu>  2008-10-24 12:50:59 +0200
commit:    01c8c57d668d94f1036d9ab11a22aa24ca16a35d (patch)
tree:      a5bad4df146982e55bdd9dba73912f6bace036df /kernel
parent:    8c82a17e9c924c0e9f13e75e4c2f6bca19a4b516 (diff)
sched: fix a find_busiest_group buglet
In one of the group load balancer patches:

	commit 408ed066b11cf9ee4536573b4269ee3613bd735e
	Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
	Date:   Fri Jun 27 13:41:28 2008 +0200
	Subject: sched: hierarchical load vs find_busiest_group

the following change:

	-	if (max_load - this_load + SCHED_LOAD_SCALE_FUZZ >=
	+	if (max_load - this_load + 2*busiest_load_per_task >=
				busiest_load_per_task * imbn) {

made the condition always true, because imbn is in [1, 2]. Therefore, remove
the 2* and give it a fair chance.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
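Why the old test was a tautology: with imbn at most 2, the added
2*busiest_load_per_task already covers the right-hand side whenever
max_load >= this_load, since max_load - this_load + 2*busiest_load_per_task
>= 2*busiest_load_per_task >= busiest_load_per_task * imbn. The standalone
C sketch below is not kernel code; the helper names old_test()/new_test()
and the sample load values are illustrative assumptions, only the two
inequalities are taken from the patch:

/* Standalone sketch, not kernel code: helpers and sample values are
 * hypothetical; only the two compared conditions come from the patch. */
#include <assert.h>
#include <stdio.h>

static int old_test(unsigned long max_load, unsigned long this_load,
                    unsigned long busiest_load_per_task, unsigned int imbn)
{
        /* pre-patch condition: always true when imbn <= 2 and max_load >= this_load */
        return max_load - this_load + 2 * busiest_load_per_task >=
                        busiest_load_per_task * imbn;
}

static int new_test(unsigned long max_load, unsigned long this_load,
                    unsigned long busiest_load_per_task, unsigned int imbn)
{
        /* post-patch condition: can be false, so the other branch is reachable */
        return max_load - this_load + busiest_load_per_task >=
                        busiest_load_per_task * imbn;
}

int main(void)
{
        unsigned long b = 1024;         /* one task's worth of load, arbitrary unit */

        /* No load gap between the groups, imbn == 2: */
        assert(old_test(2048, 2048, b, 2) == 1);                /* old test still fires */
        printf("new test: %d\n", new_test(2048, 2048, b, 2));   /* prints 0 */

        return 0;
}

With the 2* dropped, the comparison actually depends on the load gap between
the groups, which is what the changelog means by giving it a fair chance.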
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 6625c3c..12bc367 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3344,7 +3344,7 @@ small_imbalance:
 	} else
 		this_load_per_task = cpu_avg_load_per_task(this_cpu);
 
-	if (max_load - this_load + 2*busiest_load_per_task >=
+	if (max_load - this_load + busiest_load_per_task >=
 			busiest_load_per_task * imbn) {
 		*imbalance = busiest_load_per_task;
 		return busiest;