author    Dan Magenheimer <dan.magenheimer@oracle.com>  2012-01-25 16:58:46 -0800
committer Simon Shields <keepcalm444@gmail.com>  2016-06-12 21:20:21 +1000
commit    1e291c9a70e255278ddb451097f0d65126556b61 (patch)
tree      fbfa0000179bd3e5d59a4cd44463e661663077a7 /mm
parent    c72e09b31537cd8c558f70fd00d40c755e670114 (diff)
mm: implement WasActive page flag (for improving cleancache)
(Feedback welcome if there is a different/better way to do this without using a page flag!)

Since about 2.6.27, the page replacement algorithm has maintained an "active" bit to help decide which pages are most eligible to reclaim; see http://linux-mm.org/PageReplacementDesign

This "active" information is also useful to cleancache, but it is lost by the time cleancache has the opportunity to preserve the pageful of data.

This patch adds a new page flag, "WasActive", to retain that state. The flag may possibly be useful elsewhere.

It is up to each cleancache backend to utilize the bit as it desires. The matching patch for zcache is included here for clarification/discussion purposes, though it will need to go through GregKH and the staging tree.

The patch resolves issues reported with cleancache which occur especially during streaming workloads on older processors; see https://lkml.org/lkml/2011/8/17/351

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>

Change-Id: I0fcb2302a7b9c5e66db005229f679baee90f262f

Conflicts:
	include/linux/page-flags.h
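The SetPageWasActive() calls in the diff below depend on a flag declared in include/linux/page-flags.h, which this diffstat (limited to 'mm') does not show. A minimal sketch of what that declaration would look like, assuming the flag follows the kernel's standard PAGEFLAG macro convention (the exact enum position and comment are illustrative):

	enum pageflags {
		/* ... existing flags ... */
		PG_was_active,	/* page was on the active list; hint for cleancache */
		/* ... */
	};

	/* Generates PageWasActive(), SetPageWasActive(), ClearPageWasActive() */
	PAGEFLAG(WasActive, was_active)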
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmscan.c	4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9b72c26..c11955c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -624,6 +624,8 @@ void putback_lru_page(struct page *page)
 	int was_unevictable = PageUnevictable(page);
 
 	VM_BUG_ON(PageLRU(page));
 
+	if (active)
+		SetPageWasActive(page);
 redo:
 	ClearPageUnevictable(page);
@@ -1289,6 +1291,7 @@ unsigned long clear_active_flags(struct list_head *page_list,
 		if (PageActive(page)) {
 			lru += LRU_ACTIVE;
 			ClearPageActive(page);
+			SetPageWasActive(page);
 			nr_active += numpages;
 		}
 		if (count)
@@ -1710,6 +1713,7 @@ static void shrink_active_list(unsigned long nr_pages, struct zone *zone,
 		}
 
 		ClearPageActive(page);	/* we are de-activating */
+		SetPageWasActive(page);
 		list_add(&page->lru, &l_inactive);
 	}