author    Tejun Heo <tj@kernel.org>    2015-03-04 10:37:43 -0500
committer Ben Hutchings <ben@decadent.org.uk>    2015-05-09 23:16:30 +0100
commit    fc2669222243f394bedd471aa04b4c70de50768c (patch)
tree      4e0a77cc8345342d08c773e793339ff54e0043c4 /mm
parent    bf21de36006d41407c179164387984374b846943 (diff)
writeback: add missing INITIAL_JIFFIES init in global_update_bandwidth()
commit 7d70e15480c0450d2bfafaad338a32e884fc215e upstream.

global_update_bandwidth() uses the static variable update_time as the
timestamp of the last update but forgets to initialize it to
INITIAL_JIFFIES.  This means that global_dirty_limit will be 5 mins into
the future on 32bit and some large number of jiffies into the past on
64bit.  This isn't critical as the only effect is that global_dirty_limit
won't be updated for the first 5 mins after booting on 32bit machines,
especially given the auxiliary nature of global_dirty_limit's role -
protecting against sudden dips of the global dirty threshold; however, it
does lead to unintended suboptimal behavior.  Fix it.

Fixes: c42843f2f0bb ("writeback: introduce smoothed global dirty limit")
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
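The ~5 minute figure follows from how the jiffies counter boots: INITIAL_JIFFIES
places it roughly 300*HZ ticks before the 32-bit wraparound so that wrap bugs
surface early.  A zero-initialized update_time therefore sits about five
minutes "ahead" of the clock at boot, and the wrap-safe time_before() check at
the top of global_update_bandwidth() keeps returning early until jiffies wraps
past zero.  Below is a minimal user-space sketch of that arithmetic (not part
of the patch): HZ = 100 and BANDWIDTH_INTERVAL = HZ/5 are assumed for the
demo, and time_before32() is a 32-bit stand-in for the kernel's time_before().

/*
 * User-space sketch of the 32-bit case described in the commit message.
 * Assumptions: HZ = 100, BANDWIDTH_INTERVAL = HZ/5.
 */
#include <stdio.h>
#include <stdint.h>

#define HZ                 100
#define BANDWIDTH_INTERVAL (HZ / 5)
/* jiffies starts ~5 minutes (300*HZ ticks) before the 32-bit wraparound */
#define INITIAL_JIFFIES    ((uint32_t)(-300 * HZ))

/* 32-bit stand-in for the kernel's wrap-safe time_before() comparison */
static int time_before32(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;   /* true if a is earlier than b */
}

int main(void)
{
	/* 200ms after boot: one BANDWIDTH_INTERVAL has elapsed */
	uint32_t now   = INITIAL_JIFFIES + BANDWIDTH_INTERVAL;
	uint32_t buggy = 0;               /* old code: zero-initialized */
	uint32_t fixed = INITIAL_JIFFIES; /* patched code               */

	/* Buggy: 0 sits ~300*HZ ticks "ahead" of now, so the lockless
	 * check keeps bailing out and the limit is never refreshed. */
	printf("buggy skips update: %d\n",
	       time_before32(now, buggy + BANDWIDTH_INTERVAL));

	/* Fixed: a full interval has passed since INITIAL_JIFFIES, so the
	 * update is allowed through. */
	printf("fixed skips update: %d\n",
	       time_before32(now, fixed + BANDWIDTH_INTERVAL));

	/* ~5 minutes later jiffies has wrapped past zero and the buggy
	 * variant finally catches up. */
	now += 300 * HZ;
	printf("buggy skips update after ~5 min: %d\n",
	       time_before32(now, buggy + BANDWIDTH_INTERVAL));
	return 0;
}

With the fix, only one BANDWIDTH_INTERVAL has to elapse after boot before the
first update goes through, which is the intended behaviour.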
Diffstat (limited to 'mm')
-rw-r--r--    mm/page-writeback.c    | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index aad22aa..98cd090 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -726,7 +726,7 @@ static void global_update_bandwidth(unsigned long thresh,
 				    unsigned long now)
 {
 	static DEFINE_SPINLOCK(dirty_lock);
-	static unsigned long update_time;
+	static unsigned long update_time = INITIAL_JIFFIES;
 
 	/*
 	 * check locklessly first to optimize away locking for the most time