author	Linus Torvalds <torvalds@linux-foundation.org>	2013-04-13 15:15:30 -0700
committer	Ben Hutchings <ben@decadent.org.uk>	2013-04-25 20:25:39 +0100
commit	c6680a1301578e09b51a5ad68f9c00cb23d28fa3 (patch)
tree	4b86dd3586a399fa4f78e7fbaec8552275650436 /lib
parent	3fa8ee5fafec620e0dadb3ce226124a75d599288 (diff)
kobject: fix kset_find_obj() race with concurrent last kobject_put()
commit a49b7e82cab0f9b41f483359be83f44fbb6b4979 upstream.

Anatol Pomozov identified a race condition that hits module unloading
and re-loading.  To quote Anatol:

 "This is a race condition that exists between kset_find_obj() and
  kobject_put(). kset_find_obj() might return a kobject that has a
  refcount equal to 0 if this kobject is being freed by kobject_put()
  in another thread.

  Here is the timeline for the crash, in the case where kset_find_obj()
  searches for an object that nobody holds and another thread is doing
  kobject_put() on the same kobject:

    THREAD A (calls kset_find_obj())     THREAD B (calls kobject_put())
    spin_lock()
                                         atomic_dec_return(kobj->kref), counter gets zero here
                                         ... starts kobject cleanup ....
                                         spin_lock() // WAIT thread A in kobj_kset_leave()
    iterate over kset->list
    atomic_inc(kobj->kref) (counter becomes 1)
    spin_unlock()
                                         spin_lock() // taken
                                         // it does not know that thread A increased counter so it
                                         // removes obj from the list
                                         spin_unlock()
                                         vfree(module) // frees module object with containing kobj

    // kobj points to freed memory area!!
    kobject_put(kobj) // OOPS!!!!

  The race above happens because module.c tries to use kset_find_obj()
  when somebody unloads a module.  The module.c code was introduced in
  commit 6494a93d55fa"

Anatol supplied a patch specific to module.c that worked around the
problem by simply not using kset_find_obj() at all, but rather than
make a local band-aid, this just fixes kset_find_obj() to be
thread-safe using the proper model of refusing to get a new reference
if the refcount has already dropped to zero.

See examples of this proper refcount handling not only in the kref
documentation, but in various other equivalent uses of this pattern by
grepping for atomic_inc_not_zero().

[ Side note: the module race does indicate that module loading and
  unloading is not properly serialized wrt sysfs information using the
  module mutex.  That may require further thought, but this is the
  correct fix at the kobject layer regardless. ]

Reported-analyzed-and-tested-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
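For readers unfamiliar with the pattern, here is a minimal userspace
sketch of the "get unless zero" idea using C11 atomics.  This is
illustrative only, not the kernel's kref implementation; the struct
and function names are made up for the example:

	#include <stdatomic.h>
	#include <stdbool.h>

	struct refcount {
		atomic_int count;
	};

	/* Take a reference only if the object is still live (count > 0).
	 * Once the count has hit zero the object is mid-teardown and must
	 * not be revived, so refuse and let the caller treat it as gone. */
	static bool refcount_get_unless_zero(struct refcount *r)
	{
		int old = atomic_load(&r->count);

		while (old != 0) {
			/* CAS: bump only if the count is still the non-zero
			 * value we saw; on failure 'old' is reloaded, retry. */
			if (atomic_compare_exchange_weak(&r->count, &old, old + 1))
				return true;
		}
		return false;	/* already zero: the lookup should return NULL */
	}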
Diffstat (limited to 'lib')
-rw-r--r--	lib/kobject.c	9
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/lib/kobject.c b/lib/kobject.c
index 640bd98..83bd5b3 100644
--- a/lib/kobject.c
+++ b/lib/kobject.c
@@ -531,6 +531,13 @@ struct kobject *kobject_get(struct kobject *kobj)
 	return kobj;
 }
 
+static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
+{
+	if (!kref_get_unless_zero(&kobj->kref))
+		kobj = NULL;
+	return kobj;
+}
+
 /*
  * kobject_cleanup - free kobject resources.
  * @kobj: object to cleanup
@@ -785,7 +792,7 @@ struct kobject *kset_find_obj_hinted(struct kset *kset, const char *name,
 slow_search:
 	list_for_each_entry(k, &kset->list, entry) {
 		if (kobject_name(k) && !strcmp(kobject_name(k), name)) {
-			ret = kobject_get(k);
+			ret = kobject_get_unless_zero(k);
 			break;
 		}
 	}
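
For completeness, a hedged sketch of caller-side behavior after this
change; example_lookup() and its error handling are hypothetical and
not part of this patch:

	#include <linux/kobject.h>
	#include <linux/errno.h>

	/* Hypothetical caller: after this fix, kset_find_obj() returns NULL
	 * both when no object has the given name and when the matching
	 * object's refcount had already reached zero (it is being freed). */
	static int example_lookup(struct kset *kset, const char *name)
	{
		struct kobject *kobj = kset_find_obj(kset, name);

		if (!kobj)
			return -ENOENT;	/* absent or dying: "not found" either way */

		/* The successful lookup took a reference for us. */
		kobject_put(kobj);
		return 0;
	}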