path: root/block
* block: fail SCSI passthrough ioctls on partition devices (Paolo Bonzini, 2012-01-25; 1 file, -0/+45)

commit 0bfc96cb77224736dfa35c3c555d37b3646ef35e upstream.

[ Changes with respect to 3.3: return -ENOTTY from scsi_verify_blk_ioctl and -ENOIOCTLCMD from sd_compat_ioctl. ]

Linux allows executing the SG_IO ioctl on a partition or LVM volume, and will pass the command to the underlying block device. This is well-known, but it is also a large security problem when (via Unix permissions, ACLs, SELinux or a combination thereof) a program or user needs to be granted access only to part of the disk.

This patch lets partitions forward a small set of harmless ioctls; others are logged with printk so that we can see which ioctls are actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred. Of course it was being sent to a (partition on a) hard disk, so it would have failed with ENOTTY and the patch isn't changing anything in practice. Still, I'm treating it specially to avoid spamming the logs.

In principle, this restriction should include programs running with CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and /dev/sdb, it still should not be able to read/write outside the boundaries of /dev/sda2 independent of the capabilities. However, for now programs with CAP_SYS_RAWIO will still be allowed to send the ioctls. Their actions will still be logged.

This patch does not affect the non-libata IDE driver. That driver however already tests for bd != bd->bd_contains before issuing some ioctl; it could be restricted further to forbid these ioctls even for programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO.

Cc: linux-scsi@vger.kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Cc: James Bottomley <JBottomley@parallels.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[ Make it also print the command name when warning - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
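The whitelisting is easiest to picture in code. Below is a minimal sketch only, borrowing the scsi_verify_blk_ioctl() name from the bracketed backport note; the exact set of forwarded ioctls and the logging call are approximations, not the verbatim patch:

    /* Sketch: partitions may forward only a few harmless ioctls;
     * everything else fails with -ENOTTY.  CAP_SYS_RAWIO still
     * bypasses the check for now, as discussed above. */
    static int scsi_verify_blk_ioctl(struct block_device *bd,
                                     unsigned int cmd)
    {
        if (bd && bd == bd->bd_contains)
            return 0;               /* whole device: anything goes */

        switch (cmd) {
        case SCSI_IOCTL_GET_IDLUN:
        case SCSI_IOCTL_GET_BUS_NUMBER:
        case SG_GET_VERSION_NUM:
        case CDROM_GET_CAPABILITY:
            return 0;               /* harmless, forward it */
        default:
            break;
        }

        if (capable(CAP_SYS_RAWIO))
            return 0;               /* allowed for now, but logged */

        /* log so we can see which ioctls are actually sent */
        printk_ratelimited(KERN_WARNING
                           "%s: sending ioctl %x to a partition!\n",
                           current->comm, cmd);
        return -ENOTTY;
    }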
* block: add and use scsi_blk_cmd_ioctl (Paolo Bonzini, 2012-01-25; 1 file, -0/+7)

commit 577ebb374c78314ac4617242f509e2f5e7156649 upstream.

Introduce a wrapper around scsi_cmd_ioctl that takes a block device. The function will then be enhanced to detect partition block devices and, in that case, subject the ioctls to whitelisting.

Cc: linux-scsi@vger.kernel.org
Cc: Jens Axboe <axboe@kernel.dk>
Cc: James Bottomley <JBottomley@parallels.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
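A plausible shape for the wrapper is sketched below; the name follows this subject line and the signature is an assumption modeled on scsi_cmd_ioctl()'s existing queue/disk arguments:

    /* Sketch: take the block device so the follow-up patch (the
     * entry above) can detect partitions via bd != bd->bd_contains
     * and whitelist their ioctls. */
    int scsi_blk_cmd_ioctl(struct block_device *bd, fmode_t mode,
                           unsigned int cmd, void __user *arg)
    {
        /* the partition check hooks in here */
        return scsi_cmd_ioctl(bd->bd_disk->queue, bd->bd_disk,
                              mode, cmd, arg);
    }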
* block: initialize request_queue's numa node during initialization (Mike Snitzer, 2012-01-06; 1 file, -11/+3)

commit 5151412dd4338b273afdb107c3772528e9e67d92 upstream.

struct request_queue is allocated with __GFP_ZERO, so its "node" field is zero before initialization. This causes an oops if node 0 is offline in the page allocator because its zonelists are not initialized. From Dave Young's dmesg:

    SRAT: Node 1 PXM 2 0-d0000000
    SRAT: Node 1 PXM 2 100000000-330000000
    SRAT: Node 0 PXM 1 330000000-630000000
    Initmem setup node 1 0000000000000000-000000000affb000
    ...
    Built 1 zonelists in Node order, mobility grouping on.
    ...
    BUG: unable to handle kernel paging request at 0000000000001c08
    IP: [<ffffffff8111c355>]  __alloc_pages_nodemask+0xb5/0x870

and __alloc_pages_nodemask+0xb5 translates to a NULL pointer on zonelist->_zonerefs.

The fix is to initialize q->node at the time of allocation so the correct node is passed to the slab allocator later. Since blk_init_allocated_queue_node() is no longer needed, merge it with blk_init_allocated_queue().

[rientjes@google.com: changelog, initializing q->node]
Reported-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
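The essence of the fix is a single assignment at allocation time; a simplified sketch (the function is abridged, only the node handling is shown):

    struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
    {
        struct request_queue *q;

        q = kmem_cache_alloc_node(blk_requestq_cachep,
                                  gfp_mask | __GFP_ZERO, node_id);
        if (!q)
            return NULL;

        /* was left 0 by __GFP_ZERO before the fix; now the correct
         * node reaches the slab allocator later on */
        q->node = node_id;

        /* ... rest of queue initialization ... */
        return q;
    }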
* cfq-iosched: fix cfq_cic_link() race condition (Yasuaki Ishimatsu, 2012-01-06; 1 file, -2/+9)

commit 5eb46851de3904cd1be9192fdacb8d34deadc1fc upstream.

cfq_cic_link() has a race condition. When several processes sharing an ioc issue I/O to the same block device simultaneously, cfq_cic_link() sometimes returns -EEXIST. The race can stall I/O through the following steps:

step 1: Process A: Issue an I/O to /dev/sda
step 2: Process A: Get an ioc (iocA here) in get_io_context() which is not yet linked with a cic for the device
step 3: Process A: Get a new cic for the device (cicA here) in cfq_alloc_io_context()
step 4: Process B: Issue an I/O to /dev/sda
step 5: Process B: Get iocA in get_io_context() since processes A and B share the same ioc
step 6: Process B: Get a new cic for the device (cicB here) in cfq_alloc_io_context() since iocA has not been linked with a cic for the device yet
step 7: Process A: Link cicA to iocA in cfq_cic_link()
step 8: Process A: Dispatch I/O to the driver and finish it
step 9: Process B: Try to link cicB to iocA in cfq_cic_link(). This fails with the "cfq: cic link failed!" kernel message, since iocA was already linked with cicA at step 7.
step 10: Process B: Wait for the I/O to finish in get_request_wait(). The function never wakes up, because there is no outstanding I/O to the device.

When cfq_cic_link() returns -EEXIST, it means the ioc has already been linked with a cic. So when cfq_cic_link() returns -EEXIST, retry cfq_cic_lookup().

Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
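The retry reads roughly as below inside cfq_get_io_context(); this is a simplified sketch (the labels and error handling belong to the surrounding function), with helper names taken from the steps above:

    retry:
        cic = cfq_cic_lookup(cfqd, ioc);
        if (cic)                        /* another task linked one */
            goto out;

        cic = cfq_alloc_io_context(cfqd, gfp_mask);
        if (!cic)
            goto err;

        if (cfq_cic_link(cfqd, ioc, cic, gfp_mask) == -EEXIST) {
            /* we lost the race: drop ours, the lookup now succeeds */
            cfq_cic_free(cic);
            goto retry;
        }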
* cfq-iosched: free cic_index if blkio_alloc_blkg_stats fails (majianpeng, 2012-01-06; 1 file, -0/+5)

commit 2984ff38ccf6cbc02a7a996a36c7d6f69f3c6146 upstream.

If we fail allocating the blkg stats, we free cfqd and cfqg. But we need to free the IDA cfqd->cic_index as well.

Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* block: Always check length of all iov entries in blk_rq_map_user_iov() (Ben Hutchings, 2011-11-21; 1 file, -3/+4)

commit 6b76106d8ef31111d6fc469564b83b5f5542794f upstream.

Even after commit 5478755616ae2ef1ce144dded589b62b2a50d575 ("block: check for proper length of iov entries earlier ...") we still won't check for zero-length entries after an unaligned entry. Remove the break-statement, so all entries are checked.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
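After the change, the validation loop examines every iovec entry; a sketch of the corrected loop:

    for (i = 0; i < iov_count; i++) {
        unsigned long uaddr = (unsigned long) iov[i].iov_base;

        if (!iov[i].iov_len)
            return -EINVAL;         /* zero-length entry */

        /* keep going so we check the length of all segments */
        if (uaddr & queue_dma_alignment(q))
            unaligned = 1;          /* note it, don't break */
    }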
* block: make gendisk hold a reference to its queue (Tejun Heo, 2011-11-11; 1 file, -0/+8)

commit f992ae801a7dec34a4ed99a6598bbbbfb82af4fb upstream.

The following command sequence triggers an oops.

    # mount /dev/sdb1 /mnt
    # echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/delete
    # umount /mnt

    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:
    Pid: 791, comm: umount Not tainted 3.1.0-rc3-work+ #8 Bochs Bochs
    RIP: 0010:[<ffffffff810d0879>]  [<ffffffff810d0879>] __lock_acquire+0x389/0x1d60
    ...
    Call Trace:
     [<ffffffff810d2845>] lock_acquire+0x95/0x140
     [<ffffffff81aed87b>] _raw_spin_lock+0x3b/0x50
     [<ffffffff811573bc>] bdi_lock_two+0x5c/0x70
     [<ffffffff811c2f6c>] bdev_inode_switch_bdi+0x4c/0xf0
     [<ffffffff811c3fcb>] __blkdev_put+0x11b/0x1d0
     [<ffffffff811c4010>] __blkdev_put+0x160/0x1d0
     [<ffffffff811c40df>] blkdev_put+0x5f/0x190
     [<ffffffff8118f18d>] kill_block_super+0x4d/0x80
     [<ffffffff8118f4a5>] deactivate_locked_super+0x45/0x70
     [<ffffffff8119003a>] deactivate_super+0x4a/0x70
     [<ffffffff811ac4ad>] mntput_no_expire+0xed/0x130
     [<ffffffff811acf2e>] sys_umount+0x7e/0x3a0
     [<ffffffff81aeeeab>] system_call_fastpath+0x16/0x1b

This is because the bdev holds on to the disk but the disk doesn't pin the associated queue. If a SCSI device is removed while the device is still open, the sdev puts the base reference to the queue on release. When the bdev is finally released, the associated queue is already gone along with the bdi, and bdev_inode_switch_bdi() ends up dereferencing the already-freed bdi.

Even if it were not for this bug, a disk not holding onto the associated queue is very unusual and error-prone. Fix it by making add_disk() take an extra reference to its queue and put it on disk_release(), and by ensuring that the disk and its fops owner are put in that order after all accesses to the disk and queue are complete.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
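In code, the fix amounts to a get/put pair tied to the gendisk's lifetime; a condensed sketch of the two ends:

    void add_disk(struct gendisk *disk)
    {
        /* ... */
        WARN_ON_ONCE(!blk_get_queue(disk->queue)); /* pin the queue */
        /* ... */
    }

    static void disk_release(struct device *dev)
    {
        struct gendisk *disk = dev_to_disk(dev);

        /* ... free partition table, event state, ... */
        if (disk->queue)
            blk_put_queue(disk->queue);    /* drop the pin */
        kfree(disk);
    }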
* block: Free queue resources at blk_release_queue() (Hannes Reinecke, 2011-10-03; 2 files, -7/+11)

commit 777eb1bf15b8532c396821774bf6451e563438f5 upstream.

A kernel crash is observed when a mounted ext3/ext4 filesystem is physically removed. The problem is that blk_cleanup_queue() frees up some resources, e.g. by calling elevator_exit(), which are not checked for in normal operation. So we should rather move these calls to the destructor function blk_release_queue(), as at that point all remaining references are gone. However, in doing so we have to ensure that any externally supplied queue_lock is disconnected, as the driver might free up the lock after the call of blk_cleanup_queue().

Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* blk-cgroup: be able to remove the record of unplugged device (Wanlong Gao, 2011-10-03; 1 file, -21/+16)

commit d11bb4462c4cc6ddd45c6927c617ad79fa6fb8fc upstream.

The bug is that we're not able to remove the device from the blkio cgroup's per-device control files if it gets unplugged. To reproduce the bug:

    # mount -t cgroup -o blkio xxx /cgroup
    # cd /cgroup
    # echo "8:0 1000" > blkio.throttle.read_bps_device
    # unplug the device
    # cat blkio.throttle.read_bps_device
    8:0     1000
    # echo "8:0 0" > blkio.throttle.read_bps_device
    -bash: echo: write error: No such device

After patching, the device removal will succeed. Thanks for the comments of Paul, Zefan, and Vivek.

Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Paul Menage <paul@paulmenage.org>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* fix crash in scsi_dispatch_cmd() (James Bottomley, 2011-08-04; 2 files, -0/+10)

commit bfe159a51203c15d23cb3158fffdc25ec4b4dda1 upstream.

USB surprise removal of sr is triggering an oops in scsi_dispatch_command(). What seems to be happening is that USB is hanging on to a queue reference until the last close of the upper device, so the crash is caused by surprise removal of a mounted CD followed by an attempted unmount.

The problem is that USB doesn't issue its final commands as part of the SCSI teardown path, but on last close when the block queue is long gone. The long-term fix is probably to make sr do the teardown in the same way as sd (so remove all the lower bits on ejection, but keep the upper disk alive until last close of user space). However, the current oops can be simply fixed by not allowing any commands to be sent to a dead queue.

Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* cfq-iosched: make code consistent (Shaohua Li, 2011-06-27; 1 file, -1/+2)

ioc->ioc_data is RCU protected, so use the correct API to access it. This doesn't change any behavior; it just makes the code consistent.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: stable@kernel.org # after ab4bd22d
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: fix a rcu warning (Shaohua Li, 2011-06-27; 1 file, -1/+4)

I got an RCU warning at boot: ioc->ioc_data is rcu_dereference()d without holding rcu_read_lock().

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: stable@kernel.org # after ab4bd22d
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: Add __attribute__((format(printf...) and fix fallout (Joe Perches, 2011-06-13; 2 files, -7/+8)

Use the compiler to verify format strings and arguments. Fix the fallout.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: make disk_block_events() properly wait for work cancellation (Tejun Heo, 2011-06-09; 1 file, -0/+10)

disk_block_events() should guarantee that the event work is not in flight on return, and once blocked it shouldn't issue further cancellations. Because there was no synchronization between the first blocker doing cancel_delayed_work_sync() and the following blockers, the following blockers could finish before cancellation was complete, which broke both guarantees: event work could be in flight and cancellation could happen after return.

This bug triggered the WARN_ON_ONCE() in disk_clear_events() reported in bug #34662.

https://bugzilla.kernel.org/show_bug.cgi?id=34662

Fix it by adding an outer mutex which protects both block count manipulation and work cancellation.

-v2: Use outer mutex instead of bit waitqueue per Linus.

Signed-off-by: Tejun Heo <tj@kernel.org>
Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Reported-by: Borislav Petkov <bp@alien8.de>
Reported-by: Meelis Roos <mroos@linux.ee>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
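A condensed sketch of the v2 scheme: the spinlock still protects the counter, while the new outer mutex serializes blockers so a later caller cannot return before the first caller's synchronous cancel has finished (field names are assumptions based on the description):

    void disk_block_events(struct gendisk *disk)
    {
        struct disk_events *ev = disk->ev;
        unsigned long flags;
        bool cancel;

        if (!ev)
            return;

        mutex_lock(&ev->block_mutex);    /* the new outer mutex */

        spin_lock_irqsave(&ev->lock, flags);
        cancel = !ev->block++;           /* only the first blocker cancels */
        spin_unlock_irqrestore(&ev->lock, flags);

        if (cancel)
            cancel_delayed_work_sync(&ev->dwork);

        mutex_unlock(&ev->block_mutex);
    }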
* block: remove non-syncing __disk_block_events() and fold it into disk_block_events() (Tejun Heo, 2011-06-09; 1 file, -31/+24)

After the previous update to disk_check_events(), nobody is using the non-syncing __disk_block_events(). Remove @sync and, as this makes __disk_block_events() virtually identical to disk_block_events(), remove the underscore-prefixed version.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: don't use non-syncing event blocking in disk_check_events() (Tejun Heo, 2011-06-09; 1 file, -3/+11)

This patch is part of the fix for the triggering of WARN_ON_ONCE() in disk_clear_events() reported in bug #34662.

https://bugzilla.kernel.org/show_bug.cgi?id=34662

disk_clear_events() blocks events, schedules and flushes the event work. It expects the work to have started execution on schedule and finished on return from flush. WARN_ON_ONCE() triggers if the event work hasn't executed as expected. This problem happens because __disk_block_events() fails to guarantee, in a race-free manner, that the event work item is not in flight on return from the function.

The problem is two-fold and this patch addresses one part of it. When __disk_block_events() is called with @sync == %false, it bumps the event block count, calls cancel_delayed_work() and returns. This makes it impossible to guarantee that event polling is not in flight on return from a syncing __disk_block_events(): if the first blocker was non-syncing, polling could still be in progress and later syncing ones would assume that the first blocker already canceled it. Making __disk_block_events() cancel synchronously regardless of block count isn't feasible either, as it may race with forced event checking in disk_clear_events().

As disk_check_events() is the only user of the non-syncing __disk_block_events(), updating it to directly cancel and schedule event work is the easiest way to solve the issue.

Note that there's another bug in __disk_block_events() and this patch doesn't fix the issue completely. A later patch will fix the other bug.

Signed-off-by: Tejun Heo <tj@kernel.org>
Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Reported-by: Borislav Petkov <bp@alien8.de>
Reported-by: Meelis Roos <mroos@linux.ee>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: fix locking around ioc->ioc_data assignment (Jens Axboe, 2011-06-06; 1 file, -1/+4)

Since we are modifying this RCU pointer, we need to hold the lock protecting it around the modification. This fixes a potential reuse and double free of a cfq io_context structure. The bug has been in CFQ for a long time; it hit very few people, but those it did hit seemed to see it a lot.

Tracked in RH bugzilla here:

https://bugzilla.redhat.com/show_bug.cgi?id=577968

Credit goes to Paul Bolle for figuring out that the issue was around the one-hit ioc->ioc_data cache. Thanks to his hard work the issue is now fixed.

Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
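The shape of the fix, simplified: clear the one-hit cache only under ioc->lock instead of test-and-assigning the RCU pointer with no lock held. (The two cfq-iosched entries above, tagged "after ab4bd22d", later adjust exactly this spot to silence an RCU warning.)

    if (rcu_dereference(ioc->ioc_data) == cic) {
        spin_lock(&ioc->lock);
        rcu_assign_pointer(ioc->ioc_data, NULL);
        spin_unlock(&ioc->lock);
    }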
* block: Use hlist_entry() for io_context.cic_list.first (Paul Bolle, 2011-06-02; 1 file, -2/+2)

list_entry() and hlist_entry() are both simply aliases for container_of(), but since io_context.cic_list.first is an hlist_node one should at least use the correct alias.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Remove bogus check in queue_fail path (Paul Bolle, 2011-06-02; 1 file, -3/+0)

queue_fail can only be reached if cic is NULL, so its check for cic must be bogus.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* CFQ: Fix typo and remove unnecessary semicolon (Kyungmin Park, 2011-05-31; 1 file, -4/+4)

Fix a comment typo and remove an unnecessary semicolon in a macro.

Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block (Linus Torvalds, 2011-05-27; 2 files, -4/+3)

* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
    loop: export module parameters
    block: export blk_{get,put}_queue()
    block: remove unused variable in bio_attempt_front_merge()
    block: always allocate genhd->ev if check_events is implemented
    brd: export module parameters
    brd: fix comment on initial device creation
    brd: handle on-demand devices correctly
    brd: limit 'max_part' module param to DISK_MAX_PARTS
    brd: get rid of unused members from struct brd_device
    block: fix oops on !disk->queue and sysfs discard alignment display
| * block: export blk_{get,put}_queue() (Jens Axboe, 2011-05-27; 1 file, -0/+2)

We need them in SCSI to fix a bug, but currently they are not exported to modules. Export them.

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: remove unused variable in bio_attempt_front_merge() (Luca Tettamanti, 2011-05-26; 1 file, -3/+0)

sector is never read inside the function.

Signed-off-by: Luca Tettamanti <kronos.it@gmail.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: always allocate genhd->ev if check_events is implemented (Tejun Heo, 2011-05-26; 1 file, -1/+1)

9fd097b149 ("block: unexport DISK_EVENT_MEDIA_CHANGE for legacy/fringe drivers") removed DISK_EVENT_MEDIA_CHANGE from legacy/fringe block drivers which have inadequate ->check_events(). Combined with the earlier change 7c88a168da ("block: don't propagate unlisted DISK_EVENTs to userland"), this enables using ->check_events() for internal processing while avoiding enabling in-kernel block event polling, which can lead to an infinite event loop.

Unfortunately, this left many drivers, including floppy, without any bit set in disk->events and ->async_events, in which case disk_add_events() simply skipped allocation of disk->ev, which disables the whole event handling. As ->check_events() is still used during open processing for revalidation, this can lead to open failure.

This patch always allocates disk->ev if ->check_events is implemented. In the long term, it would make sense to simply include the event structure inline into genhd, as it's now used by virtually all block devices.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Ondrej Zary <linux@rainbow-software.org>
Reported-by: Alex Villacis Lasso <avillaci@ceibo.fiec.espol.edu.ec>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cgroups: add per-thread subsystem callbacks (Ben Blum, 2011-05-26; 1 file, -12/+6)

Add cgroup subsystem callbacks for per-thread attachment in atomic contexts.

Add can_attach_task(), pre_attach(), and attach_task() as new callbacks for cgroups's subsystem interface. Unlike can_attach and attach, these are for per-thread operations, to be called potentially many times when attaching an entire threadgroup. Also, the old "bool threadgroup" interface is removed, as it is replaced by this.

All subsystems are modified for the new interface. Of note is cpuset, which requires the from/to nodemasks for attach to be globally scoped (though per-cpuset would work too) to persist from its pre_attach to attach_task and attach.

This is a pre-patch for cgroup-procs-writable.patch.

Signed-off-by: Ben Blum <bblum@andrew.cmu.edu>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Matt Helsley <matthltc@us.ibm.com>
Reviewed-by: Paul Menage <menage@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* cfq-iosched: free cic_index if cfqd allocation fails (Namhyung Kim, 2011-05-24; 1 file, -1/+5)

When the struct cfq_data allocation fails, cic_index needs to be freed.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: remove unused 'group_changed' in cfq_service_tree_add() (Namhyung Kim, 2011-05-24; 1 file, -2/+1)

The 'group_changed' variable is initialized to 0 and never changed, so checking the variable is meaningless. It is a leftover from 0bbfeb832042 ("cfq-iosched: Always provide group isolation."). Let's get rid of it.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Cc: Justin TerAvest <teravest@google.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: reduce bit operations in cfq_choose_req() (Namhyung Kim, 2011-05-24; 1 file, -9/+5)

Reduce the number of bit operations in cfq_choose_req() on average (and worst) cases.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: algebraic simplification in cfq_prio_to_maxrq() (Namhyung Kim, 2011-05-24; 1 file, -1/+1)

Simplify the calculation in cfq_prio_to_maxrq(), and replace CFQ_PRIO_LISTS with IOPRIO_BE_NR since they are the same, and IOPRIO_BE_NR looks more reasonable in this context IMHO.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* blk-cgroup: Initialize ioc->cgroup_changed at ioc creation time (Vivek Goyal, 2011-05-23; 1 file, -0/+3)

If we don't explicitly initialize it to zero, CFQ might think that the cgroup of the ioc has changed and generate lots of unnecessary calls to call_for_each_cic(changed_cgroup). Fix it.

    cfq_get_io_context()
      cfq_ioc_set_cgroup()
        call_for_each_cic(ioc, changed_cgroup)

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: call elv_bio_merged() when merged (Vivek Goyal, 2011-05-23; 1 file, -0/+2)

Commit 73c101011926 ("block: initial patch for on-stack per-task plugging") removed calls to elv_bio_merged() when @bio merged with @req. Re-add them. This in turn will update the merged stats in the associated group. That should be safe as long as the request has got a reference to the blkio_group.

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Cc: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Make IO merge related stats per cpu (Vivek Goyal, 2011-05-23; 2 files, -7/+19)

Make BLKIO_STAT_MERGED per-cpu, hence getting rid of the need to take blkg->stats_lock.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Fix a memory leak of per cpu stats for root group (Vivek Goyal, 2011-05-23; 1 file, -0/+5)

We allocated a per-cpu stats struct for the root group but did not free it. Fix it.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: get rid of on-stack plugging debug checks (Jens Axboe, 2011-05-20; 2 files, -31/+0)

We don't need them anymore, so kill:

- REQ_ON_PLUG checks in various places
- !rq_mergeable() check in plug merging

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* Merge branch 'for-linus' into for-2.6.40/core (Jens Axboe, 2011-05-20; 1 file, -1/+3)

This patch merges in a fix that missed 2.6.39 final.

Conflicts:
    block/blk.h
| * block: add proper state guards to __elv_next_request (James Bottomley, 2011-05-18; 1 file, -1/+2)

blk_cleanup_queue() calls elevator_exit() and, after this, we can't touch the elevator without oopsing. __elv_next_request() must check for this state because, in the refcounted queue model, we can still call it after blk_cleanup_queue() has been called.

This was reported as causing an oops attributable to scsi.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
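A sketch of the guard, assuming the QUEUE_FLAG_DEAD flag set by blk_cleanup_queue() and eliding this era's flush handling: once the queue is dead we stop dispatching instead of touching the already-exited elevator.

    static inline struct request *__elv_next_request(struct request_queue *q)
    {
        struct request *rq;

        while (1) {
            if (!list_empty(&q->queue_head)) {
                rq = list_entry_rq(q->queue_head.next);
                return rq;
            }

            /* state guard: the elevator may already be gone */
            if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)) ||
                !q->elevator->ops->elevator_dispatch_fn(q, 0))
                return NULL;
        }
    }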
* | blk-throttle: Make no throttling rule group processing lockless (Vivek Goyal, 2011-05-20; 1 file, -4/+49)

Currently we take a queue lock on each bio to check if there are any throttling rules associated with the group, and also to update the stats. Now access the group under rcu and update the stats without taking the queue lock. The queue lock is taken only if there are throttling rules associated with the group. So in the common case of the root group, where there are no rules, we save unnecessary pounding of the request queue lock.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
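The fast path in the bio submission hook then reads roughly as below; this is a sketch, and the helper names (throtl_find_tg, tg_no_rule_group, blkiocg_update_dispatch_stats) are approximations of this era's blk-throttle code, shown only to illustrate the locking split:

    rcu_read_lock();
    blkcg = task_blkio_cgroup(current);
    tg = throtl_find_tg(td, blkcg);
    if (tg && tg_no_rule_group(tg, rw)) {
        /* common case: no rules, update per-cpu stats locklessly */
        blkiocg_update_dispatch_stats(&tg->blkg, bio->bi_size,
                                      rw, bio->bi_rw & REQ_SYNC);
        rcu_read_unlock();
        return 0;                    /* queue lock never taken */
    }
    rcu_read_unlock();

    /* slow path: throttling rules exist, take the queue lock */
    spin_lock_irq(q->queue_lock);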
* | blk-cgroup: Make cgroup stat reset path blkg->lock free for dispatch stats (Vivek Goyal, 2011-05-20; 1 file, -0/+28)

Now the dispatch stats update is lock free. But reset of these stats still takes blkg->stats_lock and is dependent on that. As the stats are per cpu, we should be able to just reset the stats on each cpu without any locks (at least on a 64bit arch).

On a 32bit arch there is a small race where 64bit updates are not atomic. The result of this race can be that, in the presence of other writers, one might not get a 0 value after reset of a stat and might see something intermediate.

One can write more complicated code to cover this race, like sending an IPI to other cpus to reset stats and, for offline cpus, resetting them directly. Right now I am not taking that path because stats reset is more of a debug feature, it can happen only on 32bit arches, and the possibility of it happening is small. Will fix it if it becomes a real problem. For the time being, going for code simplicity.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-cgroup: Make 64bit per cpu stats safe on 32bit arch (Vivek Goyal, 2011-05-20; 2 files, -5/+24)

Some of the stats are 64bit and their update will be non-atomic on 32bit architectures. Use sequence counters on 32bit arches to make reading of the stats safe.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
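The scheme follows the kernel's u64_stats_sync pattern: the writer bumps a sequence counter around the 64-bit update, and readers retry until they observe a stable value; on 64-bit arches the counter compiles away. A sketch, with the struct and field names assumed for illustration:

    #include <linux/u64_stats_sync.h>

    struct blkio_group_stats_cpu {
        u64                   sectors;
        struct u64_stats_sync syncp;    /* no-op on 64-bit arches */
    };

    static void blkio_add_stat_cpu(struct blkio_group_stats_cpu *s, u64 v)
    {
        u64_stats_update_begin(&s->syncp);
        s->sectors += v;
        u64_stats_update_end(&s->syncp);
    }

    static u64 blkio_read_stat_cpu(struct blkio_group_stats_cpu *s)
    {
        unsigned int start;
        u64 v;

        do {
            start = u64_stats_fetch_begin(&s->syncp);
            v = s->sectors;
        } while (u64_stats_fetch_retry(&s->syncp, start));

        return v;
    }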
* | blk-throttle: Make dispatch stats per cpu (Vivek Goyal, 2011-05-20; 4 files, -42/+147)

Currently we take the blkg stats lock even for updating the stats. So even if a group has no throttling rules (the common case for the root group), we end up taking blkg_lock just to update the stats.

Make the dispatch stats per cpu so that they can be updated without taking the blkg lock.

If a cpu goes offline, these stats simply disappear. No protection has been provided for that yet. Do we really need anything for that?

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-throttle: Free up a group only after one rcu grace period (Vivek Goyal, 2011-05-20; 1 file, -1/+21)

Soon we will allow accessing a throtl_grp under rcu_read_lock(). Hence start freeing up a throtl_grp only after one rcu grace period.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
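A sketch of the deferred free, assuming an rcu_head and a reference count embedded in struct throtl_grp; readers under rcu_read_lock() are then guaranteed the group stays valid until the grace period ends:

    static void throtl_free_tg(struct rcu_head *head)
    {
        struct throtl_grp *tg;

        tg = container_of(head, struct throtl_grp, rcu_head);
        kfree(tg);
    }

    static void throtl_put_tg(struct throtl_grp *tg)
    {
        /* last reference gone: defer the free past the grace period */
        if (atomic_dec_and_test(&tg->ref))
            call_rcu(&tg->rcu_head, throtl_free_tg);
    }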
* | blk-throttle: Use helper function to add root throtl group to lists (Vivek Goyal, 2011-05-20; 1 file, -3/+1)

Use the same helper function for the root group as we use with dynamically allocated groups to add it to the various lists.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-throttle: Introduce a helper function to fill in device details (Vivek Goyal, 2011-05-20; 1 file, -12/+23)

A helper function for code which is used at 2-3 places. Makes reading the code a little easier.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-throttle: Dynamically allocate root group (Vivek Goyal, 2011-05-20; 1 file, -15/+12)

Currently, we allocate the root throtl_grp statically. But as we will be introducing per cpu stat pointers, which will be allocated dynamically even for the root group, we might as well make the whole root throtl_grp allocation dynamic and treat it in the same manner as the other groups.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-cgroup: Allow sleeping while dynamically allocating a group (Vivek Goyal, 2011-05-20; 3 files, -67/+205)

Currently, all the cfq_group or throtl_group allocations happen while we are holding ->queue_lock, and sleeping is not allowed. Soon, we will move to per cpu stats and will also need to allocate the per group stats. As one cannot call alloc_percpu() from atomic context, because it can sleep, we need to drop ->queue_lock, allocate the group, retake the lock and continue processing.

In the throttling code, I check the queue DEAD flag again to make sure that the driver did not call blk_cleanup_queue() in the meantime.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
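The drop-allocate-retake pattern, as a hedged sketch: error handling and the lookup-after-retake (needed because another CPU may have allocated the group while we slept) are elided, and the struct tg_stats_cpu and throtl_free_tg_now names are hypothetical.

    spin_unlock_irq(q->queue_lock);

    tg = kzalloc_node(sizeof(*tg), GFP_KERNEL, q->node);
    if (tg)
        tg->stats_cpu = alloc_percpu(struct tg_stats_cpu); /* may sleep */

    spin_lock_irq(q->queue_lock);

    /* the driver may have torn the queue down while we slept */
    if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
        throtl_free_tg_now(tg);         /* hypothetical helper */
        return NULL;
    }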
* | cfq-iosched: Fix a possible race with cfq cgroup removal code (Vivek Goyal, 2011-05-20; 1 file, -12/+36)

blkg->key = cfqd is an rcu protected pointer, and hence we used to do call_rcu(cfqd->rcu_head) to free up cfqd after one rcu grace period. The problem here is that even though cfqd is around, there are no guarantees that the associated request queue (td->queue) or q->queue_lock is still around. A driver might have called blk_cleanup_queue() and released the lock. It might happen that after freeing up the lock we take blkg->key->queue->queue_lock and crash. This is possible in the following path.

    blkiocg_destroy()
      blkio_unlink_group_fn()
        cfq_unlink_blkio_group()

Hence, wait for an rcu period if there are groups which have not been unlinked from blkcg->blkg_list. That way, any groups which are taking the cfq_unlink_blkio_group() path can safely take the queue lock.

This is how we have taken care of the race in the throttling logic also.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cfq-iosched: Get rid of redundant function parameter "create" (Vivek Goyal, 2011-05-20; 1 file, -9/+9)

Nobody seems to be using the cfq_find_alloc_cfqg() function parameter "create". Get rid of it.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-cgroup: move some fields of unaccounted_time file under right config option (Vivek Goyal, 2011-05-20; 2 files, -3/+6)

The cgroup unaccounted_time file is created only if CONFIG_DEBUG_BLK_CGROUP=y, but there are some fields which are outside this config option. Fix that.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-throttle: Do the new group initialization with the help of a function (Vivek Goyal, 2011-05-20; 1 file, -29/+35)

Group initialization code currently sits at two places: root group initialization in blk_throtl_init() and dynamically allocated groups in throtl_find_alloc_tg(). Create a common function and use it at both places.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* Merge commit 'v2.6.39' into for-2.6.40/core (Jens Axboe, 2011-05-20; 9 files, -44/+43)

Since for-2.6.40/core was forked off the 2.6.39 devel tree, we've had churn in the core area that makes it difficult to handle patches for e.g. cfq or blk-throttle. Instead of requiring that they be based on older versions with bugs that have been fixed later in the rc cycle, merge in 2.6.39 final.

Also fixes up conflicts in the below files.

Conflicts:
    drivers/block/paride/pcd.c
    drivers/cdrom/viocd.c
    drivers/ide/ide-cd.c

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>