path: root/fs/dcache.c
...
* fs: scale inode alias list (Nick Piggin, 2011-01-07, 1 file, -8/+58)
    Add a new lock, dcache_inode_lock, to protect the inode's i_dentry list
    from concurrent modification. d_alias is also protected by d_lock.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
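    A minimal sketch of the locking order this describes, with a
    hypothetical helper name (the patch itself reworks the existing
    alias-list call sites rather than adding such a function):

        /* dcache_inode_lock nests outside d_lock when linking a dentry
         * onto its inode's i_dentry alias list. */
        static void link_alias(struct dentry *dentry, struct inode *inode)
        {
                spin_lock(&dcache_inode_lock);
                spin_lock(&dentry->d_lock);
                list_add(&dentry->d_alias, &inode->i_dentry);
                spin_unlock(&dentry->d_lock);
                spin_unlock(&dcache_inode_lock);
        }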
* fs: dcache scale subdirs (Nick Piggin, 2011-01-07, 1 file, -74/+174)
    Protect d_subdirs and d_child with d_lock, except in filesystems that
    aren't using dcache_lock for these anyway (eg. using i_mutex).
    Note: if we change the locking rule in future so that ->d_child
    protection is provided only with ->d_parent->d_lock, it may allow us
    to reduce some locking. But it would be an exception to an otherwise
    regular locking scheme, so we'd have to see some good results.
    Probably not worthwhile.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: dcache scale d_unhashed (Nick Piggin, 2011-01-07, 1 file, -24/+50)
    Protect the d_unhashed(dentry) condition with d_lock. This means
    keeping the DCACHE_UNHASHED bit in sync with hash manipulations.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: dcache scale dentry refcount (Nick Piggin, 2011-01-07, 1 file, -24/+82)
    Make d_count non-atomic and protect it with d_lock. This allows us to
    ensure a 0 refcount dentry remains 0 without dcache_lock. It is also
    fairly natural when we start protecting many other dentry members with
    d_lock.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
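    A sketch of the resulting refcount rule, assuming the post-patch field
    types (illustrative, not the verbatim diff):

        /* d_count is now a plain integer; every modification holds d_lock,
         * so a dentry seen with d_count == 0 under d_lock stays at 0. */
        static inline void __dget_dlock(struct dentry *dentry)
        {
                dentry->d_count++;      /* caller holds dentry->d_lock */
        }

        static inline void __dget(struct dentry *dentry)
        {
                spin_lock(&dentry->d_lock);
                __dget_dlock(dentry);
                spin_unlock(&dentry->d_lock);
        }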
* fs: dcache scale lru (Nick Piggin, 2011-01-07, 1 file, -28/+84)
    Add a new lock, dcache_lru_lock, to protect the dcache LRU list from
    concurrent modification. d_lru is also protected by d_lock, which
    allows LRU lists to be accessed without the lru lock, using RCU in
    future patches.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: dcache scale hash (Nick Piggin, 2011-01-07, 1 file, -11/+62)
    Add a new lock, dcache_hash_lock, to protect the dcache hash table from
    concurrent modification. d_hash is also protected by d_lock.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* hostfs: simplify locking (Nick Piggin, 2011-01-07, 1 file, -2/+13)
    Remove dcache_lock locking from hostfs filesystem, and move it into
    dcache helpers. All that is required is a coherent path name.
    Protection from concurrent modification of the namespace after path
    name generation is not provided in current code, because dcache_lock
    is dropped before the path is used.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: change d_hash for rcu-walk (Nick Piggin, 2011-01-07, 1 file, -1/+1)
    Change d_hash so it may be called from lock-free RCU lookups. See the
    similar patch for d_compare for details.
    For in-tree filesystems, this is just a mechanical change.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: change d_compare for rcu-walk (Nick Piggin, 2011-01-07, 1 file, -1/+3)
    Change d_compare so it may be called from lock-free RCU lookups. This
    does put significant restrictions on what may be done from the
    callback, however there don't seem to have been any problems with
    in-tree fses. If some strange use case pops up that _really_ cannot
    cope with the rcu-walk rules, we can just add new rcu-unaware
    callbacks, which would cause name lookup to drop out of rcu-walk mode.
    For in-tree filesystems, this is just a mechanical change.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
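    For reference, the rcu-walk capable callback takes only stable,
    caller-provided arguments, roughly of this shape (a sketch; the exact
    parameter list is an assumption here):

        /* May be called with no locks held, under rcu_read_lock() only,
         * so it must not block or dereference anything beyond its
         * arguments. */
        int (*d_compare)(const struct dentry *parent, const struct inode *pinode,
                         const struct dentry *dentry, const struct inode *inode,
                         unsigned int len, const char *str, const struct qstr *name);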
* fs: name case update method (Nick Piggin, 2011-01-07, 1 file, -0/+27)
    smbfs and ncpfs want to update a live dentry name in-place. Rather
    than have them open-code the locking, provide a documented dcache API.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
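    A sketch of what such a documented API plausibly looks like (helper
    name and locking details are assumptions based on the description):

        /* Update the name bytes of a live dentry in place; only valid for
         * a case-change, so the length must not change. */
        void dentry_update_name_case(struct dentry *dentry, struct qstr *name)
        {
                BUG_ON(dentry->d_name.len != name->len);

                spin_lock(&dcache_lock);
                spin_lock(&dentry->d_lock);
                memcpy((unsigned char *)dentry->d_name.name,
                       name->name, name->len);
                spin_unlock(&dentry->d_lock);
                spin_unlock(&dcache_lock);
        }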
* fs: change d_delete semantics (Nick Piggin, 2011-01-07, 1 file, -2/+0)
    Change d_delete from a dentry deletion notification to a dentry caching
    advice, more like ->drop_inode. Require it to be constant and
    idempotent, and not take d_lock. This is how all existing filesystems
    use the callback anyway. This makes fine grained dentry locking of
    dput and dentry lru scanning much simpler.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: use fast counters for vfs caches (Nick Piggin, 2011-01-07, 1 file, -6/+13)
    The percpu_counter library generates quite nasty code, so unless you
    need to dynamically allocate counters or take a fast approximate
    value, a simple per-cpu set of counters is much better.

    percpu_counter can never be made to work as well, because it has an
    indirection from pointer to percpu memory, and it can't use the direct
    this_cpu_inc interfaces because it doesn't use static PER_CPU data, so
    the code will always be worse.

    In the fastpath, it is the difference between this:

        incl %gs:nr_dentry      # nr_dentry

    and this:

        movl percpu_counter_batch(%rip), %edx   # percpu_counter_batch,
        movl $1, %esi   #,
        movq $nr_dentry, %rdi   #,
        call __percpu_counter_add       # (plus I clobber registers)

    __percpu_counter_add:
        pushq %rbp      #
        movq %rsp, %rbp #,
        subq $32, %rsp  #,
        movq %rbx, -24(%rbp)    #,
        movq %r12, -16(%rbp)    #,
        movq %r13, -8(%rbp)     #,
        movq %rdi, %rbx # fbc, fbc
    #APP
    # 216 "/home/npiggin/usr/src/linux-2.6/arch/x86/include/asm/thread_info.h" 1
        movq %gs:kernel_stack,%rax      #, pfo_ret__
    # 0 "" 2
    #NO_APP
        incl -8124(%rax)        # <variable>.preempt_count
        movq 32(%rdi), %r12     # <variable>.counters, tcp_ptr__
    #APP
    # 78 "lib/percpu_counter.c" 1
        add %gs:this_cpu_off, %r12      # this_cpu_off, tcp_ptr__
    # 0 "" 2
    #NO_APP
        movslq (%r12),%r13      #* tcp_ptr__, tmp73
        movslq %edx,%rax        # batch, batch
        addq %rsi, %r13 # amount, count
        cmpq %rax, %r13 # batch, count
        jge .L27        #,
        negl %edx       # tmp76
        movslq %edx,%rdx        # tmp76, tmp77
        cmpq %rdx, %r13 # tmp77, count
        jg .L28 #,
    .L27:
        movq %rbx, %rdi # fbc,
        call _raw_spin_lock     #
        addq %r13, 8(%rbx)      # count, <variable>.count
        movq %rbx, %rdi # fbc,
        movl $0, (%r12) #,* tcp_ptr__
        call _raw_spin_unlock   #
    .L29:
    #APP
    # 216 "/home/npiggin/usr/src/linux-2.6/arch/x86/include/asm/thread_info.h" 1
        movq %gs:kernel_stack,%rax      #, pfo_ret__
    # 0 "" 2
    #NO_APP
        decl -8124(%rax)        # <variable>.preempt_count
        movq -8136(%rax), %rax  #, D.14625
        testb $8, %al   #, D.14625
        jne .L32        #,
    .L31:
        movq -24(%rbp), %rbx    #,
        movq -16(%rbp), %r12    #,
        movq -8(%rbp), %r13    #,
        leave
        ret
        .p2align 4,,10
        .p2align 3
    .L28:
        movl %r13d, (%r12)      # count,*
        jmp .L29        #
    .L32:
        call preempt_schedule   #
        .p2align 4,,6
        jmp .L31        #
        .size __percpu_counter_add, .-__percpu_counter_add
        .p2align 4,,15

    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
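    In C terms, the change amounts to this kind of counter (a sketch using
    the names from the message; the hypothetical count_one_dentry() stands
    in for the d_alloc() call site):

        static DEFINE_PER_CPU(unsigned int, nr_dentry);

        /* Fast path: compiles down to the single incl %gs:nr_dentry above. */
        static void count_one_dentry(void)
        {
                this_cpu_inc(nr_dentry);
        }

        /* Slow path, for reporting to userspace: sum all CPUs. */
        static int get_nr_dentry(void)
        {
                int i, sum = 0;

                for_each_possible_cpu(i)
                        sum += per_cpu(nr_dentry, i);
                return sum < 0 ? 0 : sum;
        }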
* vfs: revert per-cpu nr_unused counters for dentry and inodes (Nick Piggin, 2011-01-07, 1 file, -11/+5)
    The nr_unused counters count the number of objects on an LRU, and as
    such they are synchronized with LRU object insertion, removal and
    scanning, and protected under the LRU lock. Making them per-cpu does
    not actually get any concurrency improvements because of this lock,
    and summing the counter is much slower; incrementing/decrementing it
    costs more code size and is slower too. These counters should stay
    per-LRU, which currently means global.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: d_validate fixes (Nick Piggin, 2011-01-07, 1 file, -18/+7)
    d_validate has been broken for a long time. kmem_ptr_validate does not
    guarantee that a pointer can be dereferenced if it can go away at any
    time. Even rcu_read_lock doesn't help, because the pointer might be
    queued in RCU callbacks but not executed yet. So the parent cannot be
    checked, nor the name hashed. The dentry pointer cannot be touched
    until it can be verified under lock. Hashing simply cannot be used.
    Instead, verify the parent/child relationship by traversing the
    parent's d_child list. It's slow, but only ncpfs and the destaged
    smbfs care about it, at this point.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
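    A sketch of the traversal described, under the then-current locking
    (an approximation of the fix, not the exact diff):

        /* Only dereference the candidate dentry after proving it really
         * is a child of dparent, while holding the lock. */
        int d_validate(struct dentry *dentry, struct dentry *dparent)
        {
                struct dentry *child;

                spin_lock(&dcache_lock);
                list_for_each_entry(child, &dparent->d_subdirs, d_u.d_child) {
                        if (dentry == child) {
                                __dget_locked(dentry);
                                spin_unlock(&dcache_lock);
                                return 1;
                        }
                }
                spin_unlock(&dcache_lock);
                return 0;
        }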
* Revert "fs: use RCU read side protection in d_validate"Nick Piggin2011-01-051-12/+19
| | | | | | | | | This reverts commit 3825bdb7ed920845961f32f364454bee5f469abb. You cannot dget() a dentry without having a reference, or holding a lock that guarantees it remains valid. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
* fs: use RCU read side protection in d_validate (Christoph Hellwig, 2010-10-25, 1 file, -19/+12)
    d_validate does a purely read lookup in the dentry hash, so use RCU
    read side locking instead of dcache_lock.
    Split out from a larger patch by Nick Piggin <npiggin@suse.de>.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: clean up dentry lru modification (Christoph Hellwig, 2010-10-25, 1 file, -26/+23)
    Always do a list_del_init on the LRU to make sure the list_empty
    invariant for not being on the LRU always holds true, and fold
    dentry_lru_del_init into dentry_lru_del.
    Replace the dentry_lru_add_tail primitive with a dentry_lru_move_tail
    operation that is simpler when the dentry is already on the list,
    which it always is. Move the list_empty check into dentry_lru_add to
    fit the scheme of the other lru helpers, and simplify locking once we
    move to a separate LRU lock.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: split __shrink_dcache_sb (Christoph Hellwig, 2010-10-25, 1 file, -60/+67)
    Currently __shrink_dcache_sb has an extremely awkward calling
    convention because it tries to please very different callers. Split
    out the main loop into a shrink_dentry_list helper, which gets called
    directly from shrink_dcache_sb for the cases where all dentries need
    to be pruned, or from __shrink_dcache_sb for pruning only a certain
    number of dentries.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: improve DCACHE_REFERENCED usage (Nick Piggin, 2010-10-25, 1 file, -3/+6)
    The dentry referenced bit is only set when installing the dentry back
    onto the LRU. However, with lazy LRU, the dentry can already be on the
    LRU list at dput time, thus missing out on setting the referenced bit.
    Fix this.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: use percpu counter for nr_dentry and nr_dentry_unused (Christoph Hellwig, 2010-10-25, 1 file, -19/+32)
    The nr_dentry stat is a globally touched cacheline and atomic
    operation twice over the lifetime of a dentry. It is used for the
    benefit of userspace only. Turn it into a per-cpu counter and always
    decrement it in d_free instead of doing various batching operations
    to reduce lock hold times in the callers.
    Based on an earlier patch from Nick Piggin <npiggin@suse.de>.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: simplify __d_free (Christoph Hellwig, 2010-10-25, 1 file, -9/+5)
    Remove d_callback and always call __d_free with an RCU head.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: take dcache_lock inside __d_path (Christoph Hellwig, 2010-10-25, 1 file, -2/+4)
    All callers take dcache_lock just around the call to __d_path, so move
    taking the lock into it, in preparation for getting rid of
    dcache_lock.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: brlock vfsmount_lock (Nick Piggin, 2010-08-18, 1 file, -5/+6)
    Use a brlock for the vfsmount lock. It must be taken for write
    whenever modifying the mount hash or associated fields, and may be
    taken for read when performing mount hash lookups. A new lock is added
    for the mnt-id allocator, so it doesn't need to take the heavy
    vfsmount write-lock.
    The number of atomics should remain the same for fastpath rlock cases,
    though the code will be slightly slower due to per-cpu access.
    Scalability is not much improved in common cases yet, due to other
    locks (ie. dcache_lock) getting in the way. However, path lookups
    crossing mountpoints should be one case where scalability is improved
    (currently requiring the global lock). The slowpath is slower due to
    use of brlock.
    On a 64 core, 64 socket, 32 node Altix system (high latency to remote
    nodes), a simple umount microbenchmark (mount --bind mnt mnt2; umount
    mnt2, looped 1000 times) took 6.8s before this patch and 7.1s
    afterwards, about 5% slower.
    Cc: Al Viro <viro@ZenIV.linux.org.uk>
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs: remove extra lookup in __lookup_hash (Nick Piggin, 2010-08-18, 1 file, -25/+35)
    Optimize lookup for create operations, where finding no dentry is
    often the common case. In cases where it is not, such as unlink, the
    added overhead is much smaller than the removed.
    Also, move comments about __d_lookup raciness to the __d_lookup call
    site. d_lookup is intuitive; __d_lookup is what needs commenting. So
    in that same vein, add kerneldoc comments to __d_lookup and clean up
    some of the comments:
    - We are interested in how the RCU lookup works here, particularly
      with renames. Make that explicit, and point to the document where
      it is explained in more detail.
    - RCU is pretty standard now, and macros make implementations pretty
      mindless. If we want to know about RCU barrier details, we look in
      RCU code.
    - Delete some boring legacy comments because we don't care much about
      how the code used to work, more about the interesting parts of how
      it works now. So comments about lazy LRU may be interesting, but
      would better be done in the LRU or refcount management code.
    Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs/dcache: fix function param name in kernel-doc (Randy Dunlap, 2010-08-14, 1 file, -1/+1)
    Fix parameter name in kernel-doc notation (causes a warning).
    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vfs: show unreachable paths in getcwd and proc (Miklos Szeredi, 2010-08-11, 1 file, -4/+50)
    Prepend "(unreachable)" to path strings if the path is not reachable
    from the current root. Two places are updated:
    - the return string from getcwd()
    - symlinks under /proc/$PID
    Other uses of d_path() are left unchanged (we know that some old
    software crashes if /proc/mounts is changed).
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: only add " (deleted)" where necessary (Miklos Szeredi, 2010-08-11, 1 file, -10/+22)
    __d_path() has 4 callers:
    - d_path()
    - sys_getcwd()
    - seq_path_root()
    - tomoyo_realpath_from_path2()
    Of these, the only one which needs the " (deleted)" ending is
    d_path(). sys_getcwd() checks for existence before calling __d_path().
    seq_path_root() is used to show the mountpoint path in
    /proc/PID/mountinfo, which is always a positive dentry. And tomoyo
    doesn't want the deleted ending.
    Create a helper "path_with_deleted()", as subsequent patches will need
    this in multiple places.
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
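    A sketch of the helper, assuming the prepend()/prepend_path()
    interfaces of the surrounding patches:

        static int path_with_deleted(const struct path *path,
                                     const struct path *root,
                                     char **buf, int *buflen)
        {
                prepend(buf, buflen, "", 1);    /* NUL terminator */
                if (d_unlinked(path->dentry)) {
                        int error = prepend(buf, buflen, " (deleted)", 10);
                        if (error)
                                return error;
                }
                return prepend_path(path, root, buf, buflen);
        }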
* vfs: add prepend_path() helper (Miklos Szeredi, 2010-08-11, 1 file, -36/+58)
    Split off prepend_path() from __d_path(). This new helper takes an
    end-of-buffer pointer and buffer-length pointer just like the other
    prepend_* functions. Move the " (deleted)" postfix out to __d_path().
    This patch doesn't change any functionality but paves the way for the
    following patches.
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
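    The prepend_* convention these helpers share: write backwards into the
    buffer while shrinking the remaining length (a sketch of the tiny core
    helper):

        static int prepend(char **buffer, int *buflen, const char *str,
                           int namelen)
        {
                *buflen -= namelen;
                if (*buflen < 0)
                        return -ENAMETOOLONG;
                *buffer -= namelen;
                memcpy(*buffer, str, namelen);
                return 0;
        }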
* vfs: __d_path: don't prepend the name of the root dentry (Miklos Szeredi, 2010-08-11, 1 file, -3/+9)
    In the old times, pseudo-filesystems set the name of the root dentry
    to some prefix like "pipe:" and the name of the child dentry to
    "[123]", and relied on a hack in __d_path() to replace the preceding
    slash with the root's name to get "pipe:[123]". Then the d_dname()
    dentry operation was introduced, which solved the same problem without
    having to pre-fill the name in each dentry.
    Currently the following pseudo filesystems exist in the kernel:
    - perfmon
    - mtd
    - anon_inode
    - bdev
    - pipe
    - socket
    Of these, only perfmon, anon_inode, pipe and socket create
    sub-dentries, all of which have now been switched to using d_dname().
    bdev and mtd only create inodes.
    This means that the hack to overwrite the slash can now be removed,
    so for unreachable paths (e.g. within a detached mount) the path
    string won't be polluted with garbage. For these cases a subsequent
    patch will add a prefix, indicating that the path is unreachable.
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: add helpers to get root and pwd (Miklos Szeredi, 2010-08-11, 1 file, -10/+2)
    Add three helpers that retrieve a refcounted copy of the root and cwd
    from the supplied fs_struct:
    - get_fs_root()
    - get_fs_pwd()
    - get_fs_root_and_pwd()
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
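    A sketch of one of the helpers as described (the exact lock type on
    fs_struct is an assumption):

        static inline void get_fs_pwd(struct fs_struct *fs, struct path *pwd)
        {
                spin_lock(&fs->lock);
                *pwd = fs->pwd;
                path_get(pwd);          /* take a reference before unlock */
                spin_unlock(&fs->lock);
        }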
* no need for list_for_each_entry_safe()/resetting with superblock list (Al Viro, 2010-08-09, 1 file, -5/+7)
    Just delay __put_super() a bit.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* new helper: __dentry_path() (Al Viro, 2010-08-09, 1 file, -5/+22)
    Builds a path relative to the fs root, called under dcache_lock;
    doesn't append any nonsense to unlinked ones.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* mm: add context argument to shrinker callback (Dave Chinner, 2010-07-19, 1 file, -1/+1)
    The current shrinker implementation requires the registered callback
    to have global state to work from. This makes it difficult to shrink
    caches that are not global (e.g. per-filesystem caches). Pass the
    shrinker structure to the callback so that users can embed the
    shrinker structure in the context the shrinker needs to operate on,
    and get back to it in the callback via container_of().
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
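    The pattern this enables, with hypothetical my_cache/my_shrink names:

        struct my_cache {
                struct shrinker shrinker;
                /* ... per-filesystem state ... */
        };

        static int my_shrink(struct shrinker *shrink, int nr_to_scan,
                             gfp_t gfp_mask)
        {
                struct my_cache *cache =
                        container_of(shrink, struct my_cache, shrinker);

                /* shrink only objects belonging to this cache */
                return 0;       /* remaining object count */
        }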
* fs: fix superblock iteration race (npiggin@suse.de, 2010-06-29, 1 file, -0/+2)
    list_for_each_entry_safe is not suitable to protect against concurrent
    modification of the list. 6754af6 introduced a race in sb walking.
    list_for_each_entry can use the trick of pinning the current entry in
    the list before we drop and retake the lock, because it subsequently
    follows cur->next. However, list_for_each_entry_safe saves
    n=cur->next for following before entering the loop body, so when the
    lock is dropped, n may be deleted.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: John Stultz <johnstul@us.ibm.com>
    Cc: Frank Mayhar <fmayhar@google.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
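    An illustrative (hypothetical) walker showing the difference:

        static void walk_supers(void)
        {
                struct super_block *sb, *n;

                spin_lock(&sb_lock);
                list_for_each_entry_safe(sb, n, &super_blocks, s_list) {
                        sb->s_count++;          /* pin the current entry */
                        spin_unlock(&sb_lock);
                        /* ... work that may sleep ... */
                        spin_lock(&sb_lock);
                        /* BUG: n was cached before the unlock and may have
                         * been freed meanwhile. list_for_each_entry would
                         * instead re-read sb->s_list.next here, which the
                         * pin keeps valid. */
                }
                spin_unlock(&sb_lock);
        }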
* fix prune_dcache()/umount() race (Al Viro, 2010-05-21, 1 file, -11/+6)
    ... and get rid of the last __put_super_and_need_restart() caller.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Leave superblocks on s_list until the end (Al Viro, 2010-05-21, 1 file, -0/+2)
    We used to remove from s_list and s_instances at the same time. So
    let's *not* do the former, and skip superblocks that have empty
    s_instances in the loops over s_list. The next step, of course, will
    be to get rid of the rescan logic in those loops.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* clean DCACHE_CANT_MOUNT in d_delete() (Al Viro, 2010-05-21, 1 file, -0/+1)
    We set the "it's dead, don't mount on it" flag _and_ do not remove it
    if we turn the damn thing negative and leave it around. And if it goes
    positive afterwards, well... Fortunately, there's only one place where
    that needs to be caught: only d_delete() can turn the sucker negative
    without immediately freeing it; all other places that can lead to a
    ->d_iput() call are followed by unconditionally freeing the struct
    dentry in question. So the fix is obvious.
    Addresses https://bugzilla.kernel.org/show_bug.cgi?id=16014
    Reported-by: Adam Tkac <vonsch@gmail.com>
    Tested-by: Adam Tkac <vonsch@gmail.com>
    Cc: <stable@kernel.org> [2.6.34.x]
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fix race in d_splice_alias() (Al Viro, 2010-03-03, 1 file, -1/+0)
    Rehashing the negative placeholder opens a race with d_lookup(); we
    unhash it almost immediately (by d_move()), but the race window is
    there. Since d_move() doesn't rely on the target being hashed, we
    don't need that d_rehash() at all.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* New helper: path_is_under(path1, path2) (Al Viro, 2010-03-03, 1 file, -0/+24)
    Analog of is_subdir() for (vfsmount, dentry) pairs, moved from
    audit_tree.c.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
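    A sketch of the helper: climb path1's mount stack until the mounts
    match, then fall back to is_subdir() (locking details assumed):

        int path_is_under(struct path *path1, struct path *path2)
        {
                struct vfsmount *mnt = path1->mnt;
                struct dentry *dentry = path1->dentry;
                int res;

                spin_lock(&vfsmount_lock);
                while (mnt != path2->mnt) {
                        if (mnt == mnt->mnt_parent) {   /* hit the root */
                                spin_unlock(&vfsmount_lock);
                                return 0;
                        }
                        dentry = mnt->mnt_mountpoint;
                        mnt = mnt->mnt_parent;
                }
                res = is_subdir(dentry, path2->dentry);
                spin_unlock(&vfsmount_lock);
                return res;
        }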
* fs/dcache.c: CodingStyle cleanup (H Hartley Sweeten, 2010-03-03, 1 file, -23/+22)
    Clean up EXPORT* macros according to Documentation/CodingStyle: move
    EXPORT* macros to the line immediately after the closing function
    brace.
    Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* libfs: move EXPORT_SYMBOL for d_alloc_name (H Hartley Sweeten, 2009-12-16, 1 file, -0/+1)
    The EXPORT_SYMBOL for d_alloc_name is in fs/libfs.c but the function
    is in fs/dcache.c. Move the EXPORT_SYMBOL to the line immediately
    after the closing function brace line in fs/dcache.c, as mentioned in
    Documentation/CodingStyle.
    Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* sched: Pull up the might_sleep() check into cond_resched() (Frederic Weisbecker, 2009-07-18, 1 file, -0/+1)
    might_sleep() is called late-ish in cond_resched(), after the
    need_resched()/preempt enabled/system running tests are checked. It's
    better to check the sleep-while-atomic case earlier, and not depend on
    environment data that reduces the chances of detecting a problem.
    Also define the cond_resched_*() helpers as macros, so that the
    FILE/LINE reported in the sleeping-while-atomic warning displays the
    real origin and not sched.h.
    Changes in v2:
    - Call __might_sleep() directly instead of might_sleep(), which may
      call cond_resched()
    - Turn cond_resched() into a macro so that the file:line couple
      reported refers to the caller of cond_resched() and not
      __cond_resched() itself
    Changes in v3:
    - Also propagate this __might_sleep() pull-up to cond_resched_lock()
      and cond_resched_softirq()
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1247725694-6082-6-git-send-email-fweisbec@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
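    The macro shape described in v2/v3 (a sketch; the exact arguments to
    __might_sleep() are an assumption):

        /* file:line now report the caller, not sched.h */
        #define cond_resched() ({                       \
                __might_sleep(__FILE__, __LINE__, 0);   \
                _cond_resched();                        \
        })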
* dcache: extract and use d_unlinked() (Alexey Dobriyan, 2009-06-11, 1 file, -4/+3)
    d_unlinked() will be used in the middle term to ban checkpointing when
    an opened but unlinked file is detected, and in the long term to
    detect such a situation and special-case it.
    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
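    The extracted helper, as its name suggests (a sketch): unhashed but
    not a filesystem root.

        static inline int d_unlinked(struct dentry *dentry)
        {
                return d_unhashed(dentry) && !IS_ROOT(dentry);
        }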
* fs: dcache fix LRU ordering (npiggin@suse.de, 2009-05-09, 1 file, -1/+1)
    Fix the ordering of the LRU when moving referenced dentries to the
    head of the list: they should go to the head of the list in the same
    order as they were found from the tail, rather than in reverse order.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* No need for crossing to mountpoint in audit_tag_tree() (Al Viro, 2009-04-20, 1 file, -1/+0)
    is_under() will DTRT anyway. And yes, is_subdir() behaviour is
    intentional.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Trim includes of fdtable.h (Al Viro, 2009-03-31, 1 file, -1/+0)
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Get rid of indirect include of fs_struct.h (Al Viro, 2009-03-31, 1 file, -0/+1)
    Don't pull it in via sched.h; very few files actually need it, and
    those can include it directly. sched.h itself only needs a forward
    declaration of struct fs_struct.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* cleanup d_add_ci (Christoph Hellwig, 2009-03-27, 1 file, -30/+18)
    Make sure that comments describe what's going on and not how, and
    always use __d_instantiate instead of two separate branches, one with
    d_instantiate and one with __d_instantiate.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* EXPORT_SYMBOL(d_obtain_alias) rather than EXPORT_SYMBOL_GPL (Benny Halevy, 2009-02-27, 1 file, -1/+1)
    Commit 4ea3ada2955e4519befa98ff55dd62d6dfbd1705 declares
    d_obtain_alias() as EXPORT_SYMBOL_GPL, where it's supposed to replace
    d_alloc_anon, which was previously declared as EXPORT_SYMBOL and thus
    available to any loadable module. This patch reverts that.
    Signed-off-by: Benny Halevy <bhalevy@panasas.com>
    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: "J. Bruce Fields" <bfields@fieldses.org>
    Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
    Acked-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [CVE-2009-0029] System call wrappers part 20 (Heiko Carstens, 2009-01-14, 1 file, -1/+1)
    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>