author    Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>    2008-09-22 13:57:52 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>    2008-09-23 08:09:14 -0700
commit    a10cebf56ca7e7c034d1b6646230c6553e478967 (patch)
tree      80f92bd693b7a6079be2814b01c14649b5f82217 /mm
parent    b4d19cc84e8e6838f4aa0b26b3afcdc8c7f71505 (diff)
memcg: check under limit at shrink_usage
The current memory cgroup (both in mainline and -mm) doesn't account swap caches as memory (swap cache support is dropped temporarily now), so try_to_free_mem_cgroup_pages doesn't reflect the count of pages that have been moved to swap cache. This makes mem_cgroup_shrink_usage fail easily if most of the pages are anon/shmem, and then shmem_getpage returns -ENOMEM and the process is killed.

This patch adds a res_counter_check_under_limit check to avoid these cases.

BTW, even if swap cache support is enabled again, if a process is moved, between precharge and shrink_usage in shmem_getpage, to another cgroup that has only just been created, shrink_usage may fail simply because there are no pages to reclaim. So this change makes sense anyway.

Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--    mm/memcontrol.c    1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0f1f7a7..c0500e4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -806,6 +806,7 @@ int mem_cgroup_shrink_usage(struct mm_struct *mm, gfp_t gfp_mask)
 	do {
 		progress = try_to_free_mem_cgroup_pages(mem, gfp_mask);
+		progress += res_counter_check_under_limit(&mem->res);
 	} while (!progress && --retry);
 	css_put(&mem->css);
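
For context, here is a minimal stand-alone sketch (not kernel code) of the retry loop in mem_cgroup_shrink_usage() after this patch. reclaim_progress() and usage_under_limit() are hypothetical stand-ins for try_to_free_mem_cgroup_pages() and res_counter_check_under_limit(), and the toy usage/limit parameters are assumptions made to keep the model self-contained; the real functions operate on struct mem_cgroup / struct res_counter.

#include <stdio.h>

#define RECLAIM_RETRIES 5

/* Model of try_to_free_mem_cgroup_pages(): reclaim that only moves
 * anon/shmem pages to swap cache reports no progress here, because the
 * controller does not account swap cache as memory. */
static int reclaim_progress(void)
{
	return 0;	/* worst case described in the changelog */
}

/* Model of res_counter_check_under_limit(): nonzero when usage is at or
 * below the limit. */
static int usage_under_limit(long usage, long limit)
{
	return usage <= limit;
}

/* Model of the retry loop in mem_cgroup_shrink_usage() after the patch. */
static int shrink_usage(long usage, long limit)
{
	int retry = RECLAIM_RETRIES;
	int progress;

	do {
		progress = reclaim_progress();
		/* The added line: already being under the limit also counts
		 * as progress, so the loop exits instead of burning all the
		 * retries and reporting failure. */
		progress += usage_under_limit(usage, limit);
	} while (!progress && --retry);

	return retry ? 0 : -1;	/* -1 stands in for -ENOMEM */
}

int main(void)
{
	/* Usage already fits the limit: succeeds even with zero reclaim. */
	printf("under limit: %d\n", shrink_usage(100, 200));
	/* Over the limit with nothing reclaimable: still fails. */
	printf("over limit : %d\n", shrink_usage(300, 200));
	return 0;
}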