path: root/kernel/workqueue_sched.h
author    Tejun Heo <tj@kernel.org>  2010-06-29 10:07:14 +0200
committer Tejun Heo <tj@kernel.org>  2010-06-29 10:07:14 +0200
commit    e22bee782b3b00bd4534ae9b1c5fb2e8e6573c5c (patch)
tree      9854d22294699d9ec27e28f70c05f479e5640abd /kernel/workqueue_sched.h
parent    d302f0178223802a1e496ba90c66193b7721c9c1 (diff)
workqueue: implement concurrency managed dynamic worker pool
Instead of creating a worker for each cwq and putting it into the shared pool, manage per-cpu workers dynamically.

Works aren't supposed to be cpu cycle hogs, and maintaining just enough concurrency to prevent work processing from stalling due to lack of processing context is optimal. gcwq keeps the number of concurrent active workers to a minimum, but no less. As long as there's one or more running workers on the cpu, no new worker is scheduled, so that works can be processed in batches as much as possible; but when the last running worker blocks, gcwq immediately schedules a new worker so that the cpu doesn't sit idle while there are works to be processed.

gcwq always keeps at least a single idle worker around. When a new worker is necessary and the worker is the last idle one, the worker assumes the role of "manager" and manages the worker pool - ie. creates another worker. Forward-progress is guaranteed by having dedicated rescue workers for workqueues which may be necessary while creating a new worker. When the manager is having problems creating a new worker, the mayday timer activates and rescue workers are summoned to the cpu to execute works which might be necessary to create new workers.

Trustee is expanded to serve the role of manager while a CPU is being taken down and stays down. As no new works are supposed to be queued on a dead cpu, it just needs to drain all the existing ones. Trustee continues trying to create new workers and summon rescuers as long as there are pending works. If the CPU is brought back up while the trustee is still trying to drain the gcwq from the previous offlining, the trustee kills all idle workers, tells workers which are still busy to rebind to the cpu, and passes control over to gcwq, which assumes the manager role as necessary.

The concurrency managed worker pool reduces the number of workers drastically. Only workers which are necessary to keep the processing going are created and kept. It also reduces cache footprint by avoiding unnecessary context switching between different workers.

Please note that this patch does not increase max_active of any workqueue. All workqueues can still only process one work per cpu.

Signed-off-by: Tejun Heo <tj@kernel.org>
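
The message above describes one core scheduling rule: wake an extra worker only when the last running worker on a cpu is about to block and works are still pending. Below is a minimal, standalone userspace sketch of that bookkeeping, not kernel code; every identifier here (toy_gcwq, toy_worker, the toy_* hooks) is invented for illustration and only loosely mirrors what wq_worker_waking_up()/wq_worker_sleeping() do in the real patch.

/*
 * Standalone userspace sketch of the per-cpu concurrency bookkeeping
 * described in the commit message.  All names are made up; this is an
 * illustration, not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_WORKERS 4

struct toy_worker {
	int id;
	bool idle;
};

struct toy_gcwq {
	struct toy_worker workers[MAX_WORKERS];
	int nr_workers;
	int nr_running;		/* workers currently executing works */
	int nr_pending;		/* queued works waiting for a context */
};

/* Mirrors the idea of wq_worker_waking_up(): a worker became runnable. */
static void toy_worker_waking_up(struct toy_gcwq *gcwq, struct toy_worker *w)
{
	if (!w->idle)
		gcwq->nr_running++;
}

/*
 * Mirrors the idea of wq_worker_sleeping(): a worker is about to block.
 * If it was the last running one and works are pending, hand back an
 * idle worker to wake so the cpu keeps processing; otherwise NULL.
 */
static struct toy_worker *toy_worker_sleeping(struct toy_gcwq *gcwq,
					      struct toy_worker *w)
{
	(void)w;		/* the real hook derives the pool from the task */
	gcwq->nr_running--;

	if (gcwq->nr_running > 0 || gcwq->nr_pending == 0)
		return NULL;

	for (int i = 0; i < gcwq->nr_workers; i++) {
		struct toy_worker *idle = &gcwq->workers[i];

		if (idle->idle) {
			idle->idle = false;
			gcwq->nr_running++;
			return idle;
		}
	}
	return NULL;		/* the manager would create a new worker here */
}

int main(void)
{
	struct toy_gcwq gcwq = {
		.workers = { { .id = 0 }, { .id = 1, .idle = true } },
		.nr_workers = 2,
		.nr_running = 1,	/* worker 0 is executing a work */
		.nr_pending = 3,
	};
	struct toy_worker *wake;

	/* worker 0 blocks on I/O: the pool hands back idle worker 1 */
	wake = toy_worker_sleeping(&gcwq, &gcwq.workers[0]);
	printf("wake worker: %d\n", wake ? wake->id : -1);

	/* worker 0's I/O completes and it becomes runnable again */
	toy_worker_waking_up(&gcwq, &gcwq.workers[0]);
	printf("nr_running: %d\n", gcwq.nr_running);
	return 0;
}
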
Diffstat (limited to 'kernel/workqueue_sched.h')
-rw-r--r--  kernel/workqueue_sched.h | 13
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/kernel/workqueue_sched.h b/kernel/workqueue_sched.h
index af040ba..2d10fc9 100644
--- a/kernel/workqueue_sched.h
+++ b/kernel/workqueue_sched.h
@@ -4,13 +4,6 @@
* Scheduler hooks for concurrency managed workqueue. Only to be
* included from sched.c and workqueue.c.
*/
-static inline void wq_worker_waking_up(struct task_struct *task,
-                                       unsigned int cpu)
-{
-}
-
-static inline struct task_struct *wq_worker_sleeping(struct task_struct *task,
-                                                     unsigned int cpu)
-{
- return NULL;
-}
+void wq_worker_waking_up(struct task_struct *task, unsigned int cpu);
+struct task_struct *wq_worker_sleeping(struct task_struct *task,
+                                      unsigned int cpu);
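
The diff replaces the inline no-op stubs with real declarations, so workqueue.c can implement the hooks and the scheduler can call them when a worker wakes up or is about to block. The snippet below is an illustrative, hypothetical caller only; apart from the two hooks declared in this header and PF_WQ_WORKER/wake_up_process(), nothing here is taken from the actual sched.c change in this series, and the real call sites may differ.

/*
 * Hypothetical caller, for illustration only -- not the actual sched.c
 * change from this series.  Shows the intended shape: when a workqueue
 * worker is about to block, ask the workqueue core whether another
 * worker should be woken to keep the cpu busy.
 */
#include <linux/sched.h>
#include "workqueue_sched.h"

static void hypothetical_worker_about_to_block(struct task_struct *prev,
					       unsigned int cpu)
{
	struct task_struct *to_wakeup;

	/* only workqueue workers take part in concurrency management */
	if (!(prev->flags & PF_WQ_WORKER))
		return;

	/* notify the workqueue core; it may hand back an idle worker ... */
	to_wakeup = wq_worker_sleeping(prev, cpu);

	/* ... which is then woken so pending works keep getting processed */
	if (to_wakeup)
		wake_up_process(to_wakeup);
}

/* the wakeup path would similarly call wq_worker_waking_up(task, cpu) */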