path: root/drivers/staging/zcache
* initial merge with 3.2.72
    Wolfgang Wiedmeyer, 2015-10-23 (1 file, -0/+1986)
|\
| * staging: zcache: fix cleancache race condition with shrinker
    Seth Jennings, 2012-09-19 (1 file, -4/+3)
    commit 6d7d9798ad5c97ee4e911dd070dc12dc5ae55bd0 upstream.

    This patch fixes a race condition that results in memory corruption when
    using cleancache. The race exists between the zcache shrinker handler,
    shrink_zcache_memory(), and cleancache_get_page(). In most cases, the
    shrinker will both evict a zbpg from its buddy list and flush it from
    tmem before a cleancache_get_page() occurs on that page; a subsequent
    cleancache_get_page() will then fail in the tmem layer.

    In the rare case that the two occur together and the
    cleancache_get_page() path gets through the tmem layer before the
    shrinker path can flush tmem, zbud_decompress() checks whether the zbpg
    is a "zombie", i.e. not on a buddy list, which means the shrinker is in
    the process of reclaiming it. If the zbpg is a zombie, zbud_decompress()
    returns -EINVAL. However, this return code was being ignored by the
    caller, zcache_pampd_get_data_and_free(), so the caller of
    cleancache_get_page() believed the page had been properly retrieved when
    it had not.

    This patch modifies zcache_pampd_get_data_and_free() to convey the
    failure up the stack so that the caller of cleancache_get_page() knows
    the page retrieval failed.

    This needs to be applied to stable trees as well. zcache-main.c was
    named zcache.c before v3.1, so I'm not sure how you want to handle trees
    earlier than that.

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reviewed-by: Minchan Kim <minchan@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

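    A minimal sketch of the shape of this fix, with simplified, assumed
    signatures (the real zcache functions take tmem pool/object arguments,
    and zbud_free() here is an invented stand-in for the actual free path):
    the decompress result is returned instead of discarded.

        #include <stddef.h>

        struct page;

        /* returns 0 on success, -EINVAL if the zbpg is a "zombie" under reclaim */
        extern int zbud_decompress(struct page *page, void *pampd);
        extern void zbud_free(void *pampd);

        int zcache_pampd_get_data_and_free(char *data, size_t *bufsize,
                                           void *pampd, struct page *page)
        {
            int ret = zbud_decompress(page, pampd); /* was ignored before the fix */

            zbud_free(pampd);
            return ret; /* propagated, so cleancache_get_page() sees the failure */
        }
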
| * staging: zcache: avoid AB-BA deadlock condition
    Andrea Righi, 2012-04-02 (1 file, -2/+2)
    commit cfbc6a92212e74b07aa76c9e2f20c542e36077fb upstream.

    Commit 9256a47 fixed a deadlock condition, making sure that the buddy
    list spinlock is always taken before the page spinlock. However, in
    zbud_free_and_delist() the locking order is the opposite (page lock ->
    list lock).

    Possible unsafe locking scenario (reported by lockdep):

        CPU0                                  CPU1
        ----                                  ----
        lock(&(&zbpg->lock)->rlock);
                                              lock(zbud_budlists_spinlock);
                                              lock(&(&zbpg->lock)->rlock);
        lock(zbud_budlists_spinlock);

    Fix by grabbing the locks in the opposite order in
    zbud_free_and_delist().

    Signed-off-by: Andrea Righi <andrea@betterlinux.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

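    A compilable illustration of the corrected ordering, using POSIX
    spinlocks; the lock names mirror the commit, but the function body and
    struct layout are invented for the example.

        #include <pthread.h>

        /* pthread_spin_init() calls omitted for brevity */
        static pthread_spinlock_t zbud_budlists_spinlock;

        struct zbud_page {
            pthread_spinlock_t lock;
            /* ... buddy-list linkage, compressed data ... */
        };

        static void zbud_free_and_delist(struct zbud_page *zbpg)
        {
            /* list lock first, page lock second -- same order as every other path */
            pthread_spin_lock(&zbud_budlists_spinlock);
            pthread_spin_lock(&zbpg->lock);
            /* ... unlink zbpg from its buddy list and free its buddies ... */
            pthread_spin_unlock(&zbpg->lock);
            pthread_spin_unlock(&zbud_budlists_spinlock);
        }
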
| * zcache: fix deadlock condition
    Dan Magenheimer, 2012-02-13 (1 file, -4/+3)
    commit 9256a4789be3dae37d00924c03546ba7958ea5a3 upstream.

    I discovered this deadlock condition a while ago while working on
    RAMster, but it affects zcache as well. The list spinlock must be locked
    prior to the page spinlock and released after it. As a result, the page
    copy must also be done while both locks are held.

    Applies to 3.2. Konrad, please push (via GregKH?)... this is definitely
    a bug fix, so it need not be pushed during a -rc0 window.

    Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

| * zcache: Set SWIZ_BITS to 8 to reduce tmem bucket lock contention
    Dan Magenheimer, 2012-02-13 (1 file, -2/+2)
    commit e8b4553457e78bcff90f70a31212a40a8fd4f0db upstream.

    SWIZ_BITS > 8 results in a much larger number of "tmem_obj" allocations,
    likely one per page placed in frontswap. The tmem_obj is not huge
    (roughly 100 bytes), but it is large enough to add a not-insignificant
    memory overhead to zcache.

    SWIZ_BITS=8 achieves roughly the same reduction in lock contention
    without the space wastage. The effect of SWIZ_BITS can be thought of as
    "2^SWIZ_BITS is the number of unique oids that can be generated" (this
    concept is limited to frontswap's use of tmem).

    Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

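    A self-contained sketch of the idea; the macro names follow the commit,
    but the swizzle formula is a simplified assumption, not the real
    zcache-main.c code.

        #include <stdint.h>
        #include <stdio.h>

        #define SWIZ_BITS 8
        #define SWIZ_MASK ((1ULL << SWIZ_BITS) - 1)

        /* only the low SWIZ_BITS bits of the page offset feed the oid ... */
        static uint64_t oswiz(uint64_t offset)
        {
            return offset & SWIZ_MASK;
        }

        int main(void)
        {
            /*
             * ... so at most 2^8 = 256 distinct oids (and thus tmem_obj
             * allocations of ~100 bytes each, ~25 KiB total) exist per
             * pool, instead of one tmem_obj per frontswap page.
             */
            printf("oid(0x12345) = %llu\n", (unsigned long long)oswiz(0x12345));
            return 0;
        }
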
| * Merge branch 'staging-next' into Linux 3.1
    Greg Kroah-Hartman, 2011-10-25 (1 file, -33/+18)
| |\
    This was done to resolve a conflict in the
    drivers/staging/comedi/drivers/ni_labpc.c file: it reconciled a build
    bugfix in Linus's tree with a "better" bugfix in the staging-next tree
    that resolved the issue in a more complete manner.

    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

| | * staging: zcache: remove zcache_direct_reclaim_lock
    Seth Jennings, 2011-10-17 (1 file, -27/+4)
    zcache_do_preload() currently does a spin_trylock() on the
    zcache_direct_reclaim_lock. Holding this lock is intended to prevent
    shrink_zcache_memory() from evicting zbud pages as a result of a
    preload. However, it also prevents two threads from executing
    zcache_do_preload() at the same time: the first thread obtains the lock
    and the second thread's spin_trylock() fails (an aborted preload),
    causing the page to be either lost (cleancache) or pushed out to the
    swap device (frontswap). It also doesn't ensure that the call to
    shrink_zcache_memory() is on the same thread as the call to
    zcache_do_preload().

    Additionally, there is no need for this mechanism, because all
    zcache_do_preload() calls that come down from cleancache already have
    PF_MEMALLOC set in the process flags, which prevents direct reclaim in
    the memory manager. If the zcache_do_preload() call is done from the
    frontswap path, we _want_ reclaim to be done (which it isn't right now).

    This patch removes the zcache_direct_reclaim_lock and related statistics
    in zcache.

    Based on v3.1-rc8

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Reviewed-by: Dave Hansen <dave@linux.vnet.ibm.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

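    A sketch of the removed pattern in userspace terms, with invented
    surroundings, showing why a concurrent preload got aborted.

        #include <pthread.h>

        /* the lock this patch removes; pthread_spin_init() omitted for brevity */
        static pthread_spinlock_t zcache_direct_reclaim_lock;

        static int zcache_do_preload(void)
        {
            if (pthread_spin_trylock(&zcache_direct_reclaim_lock) != 0)
                return -1; /* second preloader aborts: page lost or swapped out */
            /* ... preload per-cpu objects and pages ... */
            pthread_spin_unlock(&zcache_direct_reclaim_lock);
            return 0;
        }
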
| | * staging: zcache: reduce tmem bucket lock contention
    Seth Jennings, 2011-10-12 (1 file, -1/+3)
    tmem uses hash buckets, each with its own rbtree and lock, to quickly
    look up tmem objects. tmem has TMEM_HASH_BUCKETS (256) buckets per pool.
    However, because of the way the tmem_oid is generated for frontswap
    pages, only 16 unique tmem_oids are being generated, so only 16 of the
    256 buckets are used. This causes high contention on the per-bucket
    locks.

    This patch changes SWIZ_BITS to include more bits of the offset. As a
    result, all 256 hash buckets are potentially used, yielding a 95% drop
    in hash bucket lock contention.

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

| | * staging: zcache: fix crash on cpu remove
    Seth Jennings, 2011-10-11 (1 file, -2/+8)
    In the case that a cpu is taken offline before zcache_do_preload() is
    ever called on that cpu, the per-cpu zcache_preloads structure will be
    uninitialized. In the CPU_DEAD case of zcache_cpu_notifier(), kp->obj is
    not checked before kmem_cache_free() is called on it; if it is NULL, a
    crash results.

    This patch ensures that both kp->obj and kp->page are not NULL before
    calling the respective free functions. In practice, checking just one or
    the other should be sufficient, since they are assigned together in
    zcache_do_preload(), but I check both for safety.

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Acked-by: Dave Hansen <dave@linux.vnet.ibm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

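    A kernel-style sketch of the defensive checks (it only builds in-tree);
    the struct layout and function name are assumptions based on the
    description above.

        #include <linux/slab.h>
        #include <linux/gfp.h>

        struct zcache_preload_sketch {
            void *obj;   /* from kmem_cache_alloc() */
            void *page;  /* from __get_free_page() */
        };

        static void zcache_cpu_dead_sketch(struct zcache_preload_sketch *kp,
                                           struct kmem_cache *obj_cache)
        {
            if (kp->obj) {                 /* NULL if the cpu never preloaded */
                kmem_cache_free(obj_cache, kp->obj);
                kp->obj = NULL;
            }
            if (kp->page) {
                free_page((unsigned long)kp->page);
                kp->page = NULL;
            }
        }
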
| | * Merge 3.1-rc4 into staging-next
    Greg Kroah-Hartman, 2011-08-29 (1 file, -3/+3)
| | |\
    This resolves a conflict with:
    drivers/staging/brcm80211/brcmsmac/types.h

    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

| | | * staging: zcache: fix typos
    Seth Jennings, 2011-08-23 (1 file, -2/+2)
    The patch fixes two typos in zcache-main.c.

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

| | | * staging: zcache: fix possible sleep under lock
    Seth Jennings, 2011-08-23 (1 file, -1/+1)
    zcache_new_pool() calls kmalloc() with GFP_KERNEL, which has __GFP_WAIT
    set. However, zcache_new_pool() gets called on a stack that holds the
    swap_lock spinlock, leading to a possible sleep-with-lock situation. The
    lock is obtained in enable_swap_info().

    The patch replaces GFP_KERNEL with GFP_ATOMIC.

    v2: replace with GFP_ATOMIC, not GFP_IOFS

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

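    A kernel-style sketch of the rule being applied; the pool type here is a
    stand-in for the real one.

        #include <linux/slab.h>

        struct tmem_pool_sketch { int dummy; /* stand-in for the real pool */ };

        static struct tmem_pool_sketch *zcache_new_pool_sketch(void)
        {
            /*
             * GFP_KERNEL implies __GFP_WAIT and may sleep, but the caller
             * reaches here holding the swap_lock spinlock, so the allocation
             * must be GFP_ATOMIC (never sleeps, may fail instead).
             */
            return kmalloc(sizeof(struct tmem_pool_sketch), GFP_ATOMIC);
        }
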
| * | | staging: zcache: fix cleancache crash
    Seth Jennings, 2011-09-20 (1 file, -1/+1)
| |/ /
    After commit c5f5c4db3938 ("staging: zcache: fix crash on high memory
    swap"), cleancache crashes on the first successful get. This was caused
    by a remaining virt_to_page() call in zcache_pampd_get_data_and_free()
    that only runs in the cleancache path.

    The patch converts the virt_to_page() call to a struct page cast, as was
    done for the other instances in c5f5c4db3938.

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

| * | Staging: zcache: signedness bug in tmem_get()
    Dan Carpenter, 2011-08-23 (1 file, -1/+1)
    "ret" needs to be signed for the error handling to work properly.

    Signed-off-by: Dan Carpenter <error27@gmail.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

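    The bug class, as a self-contained, runnable sketch (the stub stands in
    for tmem_get(), whose real signature takes pool/oid/index arguments):

        #include <stdio.h>

        /* stand-in for tmem_get(): returns 0 or a negative errno */
        static int tmem_get_stub(void) { return -22; /* -EINVAL */ }

        int main(void)
        {
            unsigned int uret = tmem_get_stub();
            int ret = tmem_get_stub();

            if (uret < 0)   /* BUG: never true for an unsigned type */
                puts("never printed");
            if (ret < 0)    /* fixed: "ret" declared signed */
                puts("error handled");
            return 0;
        }
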
| * | staging: zcache: fix crash on high memory swap
    Seth Jennings, 2011-08-23 (1 file, -4/+4)
| |/
    zcache_put_page() was modified to pass page_address(page) instead of the
    actual page structure. In combination with the function signature
    changes to tmem_put() and zcache_pampd_create(), zcache_pampd_create()
    tries to (re)derive the page structure from the virtual address.
    However, if the original page is a high memory page (or any unmapped
    page), this virt_to_page() fails because the page_address() in
    zcache_put_page() returned NULL.

    This patch changes zcache_put_page() and zcache_get_page() to pass the
    page structure instead of the page's virtual address, which may or may
    not exist.

    Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

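    A kernel-style sketch of why the old calling convention crashed;
    signatures are simplified assumptions (the real tmem_put() also takes a
    pool, an oid, and an index).

        #include <linux/highmem.h>

        extern int tmem_put_page(struct page *page); /* invented helper */

        static int zcache_put_page_sketch(struct page *page)
        {
            /*
             * Broken shape: page_address(page) is NULL for an unmapped
             * highmem page, and a later virt_to_page() on that NULL cannot
             * recover the page structure:
             *
             *     return tmem_put(page_address(page), ...);
             *
             * Fixed shape: pass the struct page itself, which always
             * exists, and map it only for the duration of the copy.
             */
            return tmem_put_page(page);
        }
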
| * zcache: Fix build error when sysfs is not defined
    Nitin Gupta, 2011-08-08 (1 file, -1/+1)
    Signed-off-by: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

| * zcache: Use div_u64 for 64-bit division
    Thadeu Lima de Souza Cascardo, 2011-08-08 (1 file, -2/+5)
    xv_get_total_size_bytes() returns a u64 value that is used in a
    division. This causes build failures on 32-bit architectures, as
    reported by Randy Dunlap.

    Reported-by: Randy Dunlap <rdunlap@xenotime.net>
    Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
    Cc: Stephen Rothwell <sfr@canb.auug.org.au>
    Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Acked-by: Randy Dunlap <rdunlap@xenotime.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

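    The portable idiom, sketched with invented variable names:

        #include <linux/types.h>
        #include <linux/math64.h>

        /*
         * On 32-bit architectures a plain "/" on a u64 makes gcc emit a
         * call to __udivdi3(), which the kernel does not link against --
         * hence the build failure.  div_u64() is the supported replacement.
         */
        static u64 avg_compressed_size(u64 total_size_bytes, u32 nr_pages)
        {
            if (!nr_pages)
                return 0;
            return div_u64(total_size_bytes, nr_pages); /* not: total / nr_pages */
        }
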
| * staging: zcache: include module.h for MODULE_LICENSE
    Thadeu Lima de Souza Cascardo, 2011-08-03 (1 file, -0/+1)
    The upcoming cleanup of module.h usage requires explicitly including
    module.h where it was previously being pulled in indirectly; without
    this, zcache will fail to build.

    Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

| * staging: zcache: module is GPL
    Thadeu Lima de Souza Cascardo, 2011-08-02 (1 file, -0/+3)
    This avoids tainting the kernel as if a proprietary module were loaded.
    The kernel will still be tainted because this is a staging driver.

    Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

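    Together with the module.h inclusion from the previous entry, the
    resulting boilerplate is the standard pair below; module.h must be
    included explicitly rather than inherited through other headers.

        #include <linux/module.h>  /* provides the MODULE_LICENSE() macro */

        MODULE_LICENSE("GPL");     /* without this, loading sets the proprietary taint */
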
| * staging: fix zcache building
    Thadeu Lima de Souza Cascardo, 2011-08-02 (2 files, -1/+1)
    zcache was only building tmem.c, not zcache.c. To keep the module name
    while also linking tmem.o, zcache.c must be renamed if the symbols from
    tmem.c are to remain unexported.

    Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
    Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

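    A hypothetical kbuild sketch of why the rename is needed (the real
    Makefile may differ): a multi-object module cannot contain an object
    named after the module itself, so the "zcache" module links
    zcache-main.o plus tmem.o.

        # module "zcache" built from two objects; zcache.c becomes
        # zcache-main.c so its object does not clash with zcache.o itself
        zcache-y := zcache-main.o tmem.o
        obj-$(CONFIG_ZCACHE) += zcache.o
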
| * staging: zcache: support multiple clients, prep for KVM and RAMster
    Dan Magenheimer, 2011-07-08 (3 files, -117/+523)
|/
    This is version 3 of an update to zcache, incorporating feedback from
    the list. This patch adds support to the in-kernel transcendent memory
    ("tmem") code and the zcache driver for multiple clients, which will be
    needed for both RAMster and KVM support. It also adds additional tmem
    callbacks to support RAMster, with corresponding no-op stubs in the
    zcache driver.

    In v2, I've also taken the liberty of adding some additional sysfs
    variables to both surface information and allow policy control. Those
    experimenting with zcache should find them useful. V3 clarifies some
    code that walks and declares arrays.

    Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    [v3: error27@gmail.com: fix array bounds/walking]
    [v2: konrad.wilk@oracle.com: fix bools, add check for NULL, fix a comment]
    [v2: sjenning@linux.vnet.ibm.com: add info/tunables for poor compression]
    [v2: marcusklemm@googlemail.com: add tunable for max persistent pages]
    Acked-by: Dan Carpenter <error27@gmail.com>
    Cc: Nitin Gupta <ngupta@vflare.org>
    Cc: linux-mm@kvack.org
    Cc: kvm@vger.kernel.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

* vmscan: change shrinker API by passing shrink_control struct
    Ying Han, 2011-05-25 (1 file, -1/+4)
    Change each shrinker's API by consolidating the existing parameters into
    a shrink_control struct. This will simplify adding further features
    without touching each shrinker's file.

    [akpm@linux-foundation.org: fix build]
    [akpm@linux-foundation.org: fix warning]
    [kosaki.motohiro@jp.fujitsu.com: fix up new shrinker API]
    [akpm@linux-foundation.org: fix xfs warning]
    [akpm@linux-foundation.org: update gfs2]
    Signed-off-by: Ying Han <yinghan@google.com>
    Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Cc: Minchan Kim <minchan.kim@gmail.com>
    Acked-by: Pavel Emelyanov <xemul@openvz.org>
    Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Acked-by: Rik van Riel <riel@redhat.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Dave Hansen <dave@linux.vnet.ibm.com>
    Cc: Steven Whitehouse <swhiteho@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

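    A kernel-style sketch of the consolidated API as this commit describes
    it (the struct matches v3.0-era include/linux/mm.h; the zcache handler
    body is invented):

        struct shrink_control {
            gfp_t gfp_mask;            /* allocation context of the reclaim */
            unsigned long nr_to_scan;  /* 0 means "only report the count" */
        };

        /* previously the parameters were passed to the callback individually */
        static int shrink_zcache_memory(struct shrinker *shrink,
                                        struct shrink_control *sc)
        {
            if (sc->nr_to_scan)
                ; /* evict up to sc->nr_to_scan ephemeral zbud pages */
            return 0; /* count of reclaimable objects remaining */
        }
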
* staging: Allow sharing xvmalloc for zram and zcache
    Nitin Gupta, 2011-02-23 (1 file, -1/+3)
    Both zram and zcache use the xvmalloc allocator. If xvmalloc is compiled
    separately into both of them, we get a linker error when both are
    selected as "built-in". We can also get a linker error about missing
    xvmalloc symbols if zram is not built. So we now compile xvmalloc
    separately and export its symbols, which are then used by both zram and
    zcache.

    Signed-off-by: Nitin Gupta <ngupta@vflare.org>
    Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

* staging: zcache: fix memory leak
    Vasiliy Kulikov, 2011-02-18 (1 file, -0/+1)
    obj is not freed if __get_free_page() fails.

    Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

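    The shape of the leak and the one-line fix, sketched kernel-style with
    assumed names and surroundings:

        #include <linux/slab.h>
        #include <linux/gfp.h>
        #include <linux/errno.h>

        static int alloc_obj_and_page_sketch(struct kmem_cache *obj_cache)
        {
            void *obj = kmem_cache_alloc(obj_cache, GFP_ATOMIC);
            unsigned long page;

            if (!obj)
                return -ENOMEM;
            page = __get_free_page(GFP_ATOMIC);
            if (!page) {
                kmem_cache_free(obj_cache, obj); /* the missing free */
                return -ENOMEM;
            }
            /* ... use obj and page ... */
            return 0;
        }
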
* staging: zcache: misc build/config
    Dan Magenheimer, 2011-02-09 (2 files, -0/+14)
    [PATCH V2 3/3] drivers/staging: zcache: misc build/config

    Makefiles and Kconfigs to build zcache in drivers/staging. There is a
    dependency on xvmalloc.*, which in 2.6.37 resides in
    drivers/staging/zram. Should this move or disappear, some
    Makefile/Kconfig changes will be required.

    Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

* staging: zcache: host services and PAM services
    Dan Magenheimer, 2011-02-09 (1 file, -0/+1657)
    [PATCH V2 2/3] drivers/staging: zcache: host services and PAM services

    Zcache provides host services (memory allocation) for tmem, a "shim" to
    interface cleancache and frontswap to tmem, and two different
    page-addressable memory implementations using lzo1x compression. The
    first, "compression buddies" ("zbud"), compresses pairs of pages and
    supplies a shrinker interface that allows entire pages to be reclaimed.
    The second is a shim to xvMalloc, which is more space-efficient but less
    receptive to page reclamation. The first is used for ephemeral pools and
    the second for persistent pools. All ephemeral pools share the same
    memory; that is, even pages from different pools can share the same
    page.

    Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Nitin Gupta <ngupta@vflare.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

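    A conceptual sketch of the "compression buddies" idea with invented
    struct names, not the real zbud code: two compressed pages share one
    physical page, so reclaiming that single page evicts at most two
    ephemeral objects.

        struct zbud_hdr_sketch {
            unsigned int size;  /* compressed length of this buddy */
        };

        struct zbud_page_sketch {
            struct zbud_hdr_sketch buddy[2]; /* two compressed pages paired
                                                into one physical page */
            /* the rest of the page holds the two compressed payloads */
        };
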
* staging: zcache: in-kernel tmem code
    Dan Magenheimer, 2011-02-09 (2 files, -0/+905)
    [PATCH V2 1/3] drivers/staging: zcache: in-kernel tmem code

    Transcendent memory ("tmem") is a clean API/ABI that provides for
    efficient address translation and a set of highly concurrent access
    methods to copy data between a page-oriented data source (e.g.
    cleancache or frontswap) and a page-addressable memory ("PAM") data
    store. Of critical importance, the PAM data store is of unknown (and
    possibly varying) size, so any individual access may succeed or fail as
    defined by the API/ABI.

    Tmem exports a basic set of access methods (e.g. put, get, flush, flush
    object, new pool, and destroy pool) which are normally called from a
    "host" (e.g. zcache). To be functional, two sets of "ops" must be
    registered by the host: one to provide "host services" (memory
    allocation) and one to provide page-addressable memory ("PAM") hooks.

    Tmem supports one or more "clients", each of which can provide a set of
    "pools" to partition pages. Each pool contains a set of "objects"; each
    object holds pointers to some number of PAM page descriptors ("pampd"),
    indexed by an "index" number. This triple <pool id, object id, index> is
    sometimes referred to as a "handle". Tmem's primary function is
    essentially to provide address translation of handles into pampds and
    move data appropriately. As an example, for cleancache, a pool maps to a
    filesystem, an object maps to a file, and the index is the page offset
    into the file. And in this patch, zcache is the host, and each PAM
    descriptor points to a compressed page of data.

    Tmem supports two kinds of pages: "ephemeral" and "persistent".
    Ephemeral pages may be asynchronously reclaimed "bottoms up", so the
    data structures and concurrency model must allow for this. For example,
    each pampd must retain sufficient information to invalidate tmem's
    handle-to-pampd translation and its containing object so that, on
    reclaim, all tmem data structures can be made consistent.

    Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

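    A sketch of these concepts as C declarations -- invented, simplified
    versions of what tmem.h would contain, not the actual interface:

        #include <stdint.h>

        struct tmem_obj;
        struct page;

        /* the <pool id, object id, index> "handle" */
        struct tmem_handle_sketch {
            uint32_t pool_id;  /* cleancache: which filesystem */
            uint64_t obj_id;   /* cleancache: which file */
            uint32_t index;    /* cleancache: page offset within the file */
        };

        /* host services: memory allocation, registered by e.g. zcache */
        struct tmem_hostops_sketch {
            struct tmem_obj *(*obj_alloc)(void);
            void (*obj_free)(struct tmem_obj *obj);
        };

        /* page-addressable memory ("PAM") hooks, also registered by the host */
        struct tmem_pamops_sketch {
            void *(*create)(struct page *page);  /* e.g. compress into a pampd */
            int (*get_data)(struct page *page, void *pampd); /* may fail by design */
            void (*free)(void *pampd);
        };
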