path: root/block
Commit message  (Author, Age, Files, Lines)
...
* | blk-cgroup: Allow sleeping while dynamically allocating a group  (Vivek Goyal, 2011-05-20, 3 files, -67/+205)
    Currently, all the cfq_group and throtl_group allocations happen while we are holding ->queue_lock, so sleeping is not allowed. Soon we will move to per-cpu stats and will also need to allocate the per-group stats. Since alloc_percpu() cannot be called from atomic context (it can sleep), we need to drop ->queue_lock, allocate the group, retake the lock and continue processing. In the throttling code, I check the queue DEAD flag again after retaking the lock to make sure the driver did not call blk_cleanup_queue() in the meantime.

    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
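A minimal sketch of that drop-lock/allocate/retake pattern with the DEAD recheck; the function name here is an illustrative stand-in, not the actual cfq/throttle diff:

    static struct throtl_grp *throtl_alloc_tg_sleeping(struct request_queue *q)
    {
            struct throtl_grp *tg;

            /* drop the lock so a GFP_KERNEL (sleeping) allocation is legal */
            spin_unlock_irq(q->queue_lock);
            tg = kzalloc_node(sizeof(*tg), GFP_KERNEL, q->node);
            spin_lock_irq(q->queue_lock);

            /* did the driver run blk_cleanup_queue() while we slept? */
            if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
                    kfree(tg);
                    return NULL;
            }
            return tg;
    }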
* | cfq-iosched: Fix a possible race with cfq cgroup removal code  (Vivek Goyal, 2011-05-20, 1 file, -12/+36)
    blkg->key = cfqd is an rcu protected pointer and hence we used to do call_rcu(cfqd->rcu_head) to free up cfqd after one rcu grace period. The problem here is that even though cfqd is around, there are no guarantees that the associated request queue (td->queue) or q->queue_lock is still around. A driver might have called blk_cleanup_queue() and released the lock. It might happen that after freeing up the lock we dereference blkg->key->queue->queue_lock and crash. This is possible in the following path:

        blkiocg_destroy()
         blkio_unlink_group_fn()
          cfq_unlink_blkio_group()

    Hence, wait for an rcu period if there are groups which have not been unlinked from blkcg->blkg_list. That way, any groups which are taking the cfq_unlink_blkio_group() path can safely take the queue lock. This is how we have taken care of the race in the throttling logic also.

    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
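The shape of the fix, as a sketch only (the condition and free callback names are hypothetical, not the exact cfq code):

    /*
     * Before handing cfqd to call_rcu(), wait one grace period if any group
     * is still linked on the blkcg side, so a concurrent
     * cfq_unlink_blkio_group() that looked up blkg->key under
     * rcu_read_lock() can still safely take the queue lock.
     */
    if (groups_still_linked)                        /* hypothetical condition */
            synchronize_rcu();

    call_rcu(&cfqd->rcu_head, cfq_cfqd_free);       /* hypothetical callback */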
* | cfq-iosched: Get rid of redundant function parameter "create"  (Vivek Goyal, 2011-05-20, 1 file, -9/+9)
    Nobody seems to be using cfq_find_alloc_cfqg() function parameter "create". Get rid of that.

    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-cgroup: move some fields of unaccounted_time file under right config option  (Vivek Goyal, 2011-05-20, 2 files, -3/+6)
    The cgroup unaccounted_time file is created only if CONFIG_DEBUG_BLK_CGROUP=y, but some of the fields backing it are outside this config option. Fix that.

    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-throttle: Do the new group initialization with the help of a function  (Vivek Goyal, 2011-05-20, 1 file, -29/+35)
    Group initialization code currently lives in two places: root group initialization in blk_throtl_init() and dynamically allocated groups in throtl_find_alloc_tg(). Create a common function and use it in both places.

    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | Merge commit 'v2.6.39' into for-2.6.40/core  (Jens Axboe, 2011-05-20, 9 files, -44/+43)
|\ \
| |/
    Since for-2.6.40/core was forked off the 2.6.39 devel tree, we've had churn in the core area that makes it difficult to handle patches for e.g. cfq or blk-throttle. Instead of requiring that they be based on older versions with bugs that have been fixed later in the rc cycle, merge in 2.6.39 final.

    Also fixes up conflicts in the below files.

    Conflicts:
        drivers/block/paride/pcd.c
        drivers/cdrom/viocd.c
        drivers/ide/ide-cd.c

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: don't delay blk_run_queue_async  (Shaohua Li, 2011-05-18, 1 file, -1/+3)
    Let's check a scenario:

        1. blk_delay_queue(q, SCSI_QUEUE_DELAY);
        2. blk_run_queue_async();

    The second call becomes a no-op, because q->delay_work already has WORK_STRUCT_PENDING_BIT set, so the delayed work will still only run after SCSI_QUEUE_DELAY. But blk_run_queue_async actually expects the delayed work to run immediately. Fix this by cancelling any potentially pending delayed work before queueing an immediate run of the workqueue.

    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
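Roughly what that fix looks like in blk-core.c, as a sketch based on the description above rather than the verbatim diff:

    void blk_run_queue_async(struct request_queue *q)
    {
            if (likely(!blk_queue_stopped(q))) {
                    /*
                     * Cancel a pending delayed run (e.g. queued by
                     * blk_delay_queue()) so the zero-delay kick below is not
                     * ignored because the work is already marked pending.
                     */
                    __cancel_delayed_work(&q->delay_work);
                    queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
            }
    }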
| * blk-throttle: Use task_subsys_state() to determine a task's blkio_cgroup  (Vivek Goyal, 2011-05-16, 4 files, -11/+19)
    Currently we first map the task to a cgroup and then the cgroup to the blkio_cgroup. There is a more direct way to get to the blkio_cgroup from the task using task_subsys_state(). Use that.

    The real reason for the fix is that it also avoids a race in generic cgroup code. During remount/umount rebind_subsystems() is called and it can do the following without any rcu protection:

        cgrp->subsys[i] = NULL;

    That means if somebody got hold of the cgroup under rcu and then tried to go through cgroup->subsys[] to get to the blkio_cgroup, it would get NULL, which is wrong. I was running into this race condition with ltp running on an upstream-derived kernel and that led to a crash.

    So ideally we should also fix the generic cgroup code to wait for an rcu grace period before setting the pointer to NULL. Li Zefan is not very keen on introducing synchronize_wait() as he thinks it will slow down mount/remount/umount operations. So for the time being at least, fix the kernel crash by taking a more direct route to the blkio_cgroup.

    One tester had reported a crash while running LTP on a derived kernel, and with this fix the crash is no longer seen while the test has been running for over 6 days.

    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
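The direct route is essentially a container_of() on the task's css; a sketch of such a helper (treat the exact naming as approximate):

    struct blkio_cgroup *task_blkio_cgroup(struct task_struct *tsk)
    {
            /* go straight from the task to its blkio css, no cgrp->subsys[] deref */
            return container_of(task_subsys_state(tsk, blkio_subsys_id),
                                struct blkio_cgroup, css);
    }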
| * block: don't propagate unlisted DISK_EVENTs to userland  (Tejun Heo, 2011-04-21, 1 file, -2/+6)
    DISK_EVENT_MEDIA_CHANGE is used both for the userland-visible event and for the internal event that triggers revalidation of removable devices. Some legacy drivers don't implement proper event detection and continuously generate events under certain circumstances. For example, ide-cd generates media-changed events continuously if there's no media in the drive, which can lead to an infinite loop of events jumping back and forth between the driver and the userland event handler.

    This patch updates the disk event infrastructure so that it never propagates events not listed in disk->events to userland. Those events are processed the same way for internal purposes but uevent generation is suppressed. This also ensures that userland only gets events which are advertised in the @events sysfs node, lowering the risk of confusion.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * elevator: check for ELEVATOR_INSERT_SORT_MERGE in !elvpriv case too  (Jens Axboe, 2011-04-21, 1 file, -1/+2)
    The sort insert is the one that goes to the IO scheduler. With the SORT_MERGE addition, we could bypass IO scheduler setup but still ask the IO scheduler to insert the request. This would cause an oops on switching IO schedulers through the sysfs interface, unless the disk just happened to be idle while it occurred.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: Remove the extra check in queue_requests_store  (Tao Ma, 2011-04-19, 1 file, -2/+2)
    In queue_requests_store, the code looks like this:

        if (rl->count[BLK_RW_SYNC] >= q->nr_requests) {
                blk_set_queue_full(q, BLK_RW_SYNC);
        } else if (rl->count[BLK_RW_SYNC]+1 <= q->nr_requests) {
                blk_clear_queue_full(q, BLK_RW_SYNC);
                wake_up(&rl->wait[BLK_RW_SYNC]);
        }

    If we don't satisfy the "if" condition, we know that rl->count[BLK_RW_SYNC] < q->nr_requests, which is the same as rl->count[BLK_RW_SYNC]+1 <= q->nr_requests. So every path that reaches the "else" already satisfies the "else if" check, and the check isn't actually needed.

    Signed-off-by: Tao Ma <boyu.mt@taobao.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
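With the redundant condition dropped, the SYNC branch reduces to a plain if/else (shown here for illustration; the ASYNC side simplifies the same way):

    if (rl->count[BLK_RW_SYNC] >= q->nr_requests) {
            blk_set_queue_full(q, BLK_RW_SYNC);
    } else {
            blk_clear_queue_full(q, BLK_RW_SYNC);
            wake_up(&rl->wait[BLK_RW_SYNC]);
    }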
| * block, blk-sysfs: Fix an err return path in blk_register_queue()  (Liu Yuan, 2011-04-19, 1 file, -1/+3)
    We do not call blk_trace_remove_sysfs() in the error return path if kobject_add() fails. This patch fixes it.

    Cc: stable@kernel.org
    Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: remove stale kerneldoc member from __blk_run_queue()  (Jens Axboe, 2011-04-19, 1 file, -1/+0)
    We don't pass in a 'force_kblockd' anymore, get rid of the stale comment.

    Reported-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: get rid of QUEUE_FLAG_REENTER  (Jens Axboe, 2011-04-19, 2 files, -10/+2)
    We are currently using this flag to check whether it's safe to call into ->request_fn(). If it is set, we punt to kblockd. But we get a lot of false positives and excessive punts to kblockd, which hurts performance.

    The only real abuser of this infrastructure is SCSI. So export the async queue run and convert SCSI over to use that. There's room for improvement in that SCSI need not always use the async call, but this fixes our performance issue and they can fix that up in due time.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * cfq-iosched: read_lock() does not always imply rcu_read_lock()  (Jens Axboe, 2011-04-19, 1 file, -14/+6)
    For some configurations of CONFIG_PREEMPT that is not true. So get rid of __call_for_each_cic() and always use the explicitly rcu_read_lock()-protected call_for_each_cic() instead.

    This fixes a potential bug related to IO scheduler removal or online switching.

    Thanks to Paul McKenney for clarifying this.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: kill blk_flush_plug_list() export  (Jens Axboe, 2011-04-18, 1 file, -1/+0)
    With all drivers and file systems converted, we only have in-core use of this function. So remove the export.

    Reported-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | block: Fix discard topology stacking and reporting  (Martin K. Petersen, 2011-05-18, 2 files, -2/+4)
    In some cases we would end up stacking discard_zeroes_data incorrectly. Fix this by enabling the feature by default for stacking drivers and clearing it for low-level drivers. Incorporating a device that does not support dzd will then cause the feature to be disabled in the stacking driver.

    Also ensure that the maximum discard value does not overflow when exported in sysfs and return 0 in the alignment and dzd fields for devices that don't support discard.

    Reported-by: Lukas Czerner <lczerner@redhat.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Acked-by: Mike Snitzer <snitzer@redhat.com>
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blkdev: Do not return -EOPNOTSUPP if discard is supported  (Lukas Czerner, 2011-05-06, 1 file, -7/+2)
    Currently we return -EOPNOTSUPP in blkdev_issue_discard() if any of the bios fails due to the underlying device not supporting discard requests. However, if the device is, for example, a dm device composed of devices some of which support discard and some of which do not, it is fine for some bios to fail with EOPNOTSUPP; it does not mean that discard is not supported at all.

    This commit removes the check for bios failed with EOPNOTSUPP and changes blkdev_issue_discard() to return "operation not supported" if and only if the device does not actually support it, not just part of the device as some bios might indicate.

    This change also fixes a problem with the BLKDISCARD ioctl(), which now works correctly on such dm devices.

    Signed-off-by: Lukas Czerner <lczerner@redhat.com>
    CC: Jens Axboe <jaxboe@fusionio.com>
    CC: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blkdev: Simple cleanup in blkdev_issue_zeroout()  (Lukas Czerner, 2011-05-06, 1 file, -14/+0)
    In blkdev_issue_zeroout() we are submitting regular WRITE bios, so we do not need to check for -EOPNOTSUPP specifically in case of error. Also there is no need for the submit: label, because there is no way to jump out of the while loop without an error, and in that case we really want to exit rather than try again. And also remove the check for (sz == 0), since at that point sz can never be zero.

    Signed-off-by: Lukas Czerner <lczerner@redhat.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    CC: Dmitry Monakhov <dmonakhov@openvz.org>
    CC: Jens Axboe <jaxboe@fusionio.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blkdev: Submit discard bio in batches in blkdev_issue_discard()  (Lukas Czerner, 2011-05-06, 1 file, -40/+29)
    Currently we wait for every submitted REQ_DISCARD bio separately, which can have the unwanted consequence of repeatedly flushing the queue. Instead, submit the bios in batches and wait for the entire batch, hence narrowing the window for other ios going in.

    Use bio_batch_end_io() and struct bio_batch for that purpose, the same as is used by blkdev_issue_zeroout(). Also change bio_batch_end_io() so we always clear BIO_UPTODATE in the case of error, and remove the check for bb, since we are the only user of this function and we always set it.

    Remove bio_get()/bio_put() from blkdev_issue_discard(), since bio_alloc() and bio_batch_end_io() are doing the same thing, hence it is not needed anymore.

    I have done simple dd testing with surprising results. The script I have used is:

        for i in $(seq 10); do
                echo $i
                dd if=/dev/sdb1 of=/dev/sdc1 bs=4k &
                sleep 5
        done
        /usr/bin/time -f %e ./blkdiscard /dev/sdc1

    Running time of BLKDISCARD on the whole device:

        with patch      without patch
        0.95            15.58

    So we can see that in this artificial test the kernel with the patch applied is approx. 16x faster at discarding the device.

    Signed-off-by: Lukas Czerner <lczerner@redhat.com>
    CC: Dmitry Monakhov <dmonakhov@openvz.org>
    CC: Jens Axboe <jaxboe@fusionio.com>
    CC: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
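For reference, the bio_batch pattern in blk-lib.c looks roughly like this (a sketch of the shared helper, not the verbatim code):

    struct bio_batch {
            atomic_t                done;
            unsigned long           flags;
            struct completion       *wait;
    };

    static void bio_batch_end_io(struct bio *bio, int err)
    {
            struct bio_batch *bb = bio->bi_private;

            if (err)
                    clear_bit(BIO_UPTODATE, &bb->flags);    /* record any failure */
            if (atomic_dec_and_test(&bb->done))
                    complete(bb->wait);                     /* last bio of the batch */
            bio_put(bio);
    }

The submitter initializes done to 1, bumps it once per submitted bio, and does a single atomic_dec_and_test()/wait_for_completion() after the loop, so the whole batch is waited for exactly once instead of per bio.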
* | block: hold queue if flush is running for non-queueable flush drive  (shaohua.li@intel.com, 2011-05-06, 2 files, -6/+31)
    In some drives, flush requests are non-queueable. When a flush request is running, normal read/write requests can't run. If the block layer dispatches such a request anyway, the driver can't handle it and requeues it. Tejun suggested we can hold the queue while a flush is running. This avoids the unnecessary requeue, and it can also improve performance. For example, say we have requests flush1, write1, flush2. flush1 is dispatched, then the queue is held, so write1 isn't inserted into the queue. After flush1 is finished, flush2 will be dispatched. Since the disk cache is already clean, flush2 will finish very soon, so it looks as if flush2 is folded into flush1.

    In my test, the queue holding completely solves a regression introduced by commit 53d63e6b0dfb95882ec0219ba6bbd50cde423794:

        block: make the flush insertion use the tail of the dispatch list

        It's not a preempt type request, in fact we have to insert it
        behind requests that do specify INSERT_FRONT.

    which causes about a 20% regression running a sysbench fileio workload.

    Stable: 2.6.39 only

    Cc: stable@kernel.org
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | block: add a non-queueable flush flag  (shaohua.li@intel.com, 2011-05-06, 1 file, -0/+6)
    Flush requests aren't queueable in some drives. Add a flag to let the driver notify the block layer about this. We can optimize flush performance with that knowledge.

    Stable: 2.6.39 only

    Cc: stable@kernel.org
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | iosched: remove redundant sprintf  (Kees Cook, 2011-05-05, 1 file, -6/+1)
    After the anticipatory scheduler was dropped, there was no need to special-case the request_module string. As such, drop the redundant sprintf and stack variable.

    Signed-off-by: Kees Cook <kees.cook@canonical.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | block: Remove 'plug/unplug' comment in blk_execute_rq_nowait  (Tao Ma, 2011-05-05, 1 file, -1/+1)
|/
    unplug is replaced with blk_run_queue now in blk_execute_rq_nowait, so change the comment accordingly.

    Signed-off-by: Tao Ma <boyu.mt@taobao.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: add blk_run_queue_async  (Christoph Hellwig, 2011-04-18, 6 files, -20/+33)
    Instead of overloading __blk_run_queue to force an offload to kblockd, add a new blk_run_queue_async helper to do it explicitly. I've kept the blk_queue_stopped check for now, but I suspect it's not needed, as the check we do when the workqueue item runs should be enough.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
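The helper is essentially a kblockd punt; a sketch of its original shape (the "block: don't delay blk_run_queue_async" commit above later adds a cancel of any already-pending delayed work):

    void blk_run_queue_async(struct request_queue *q)
    {
            /* run the queue from kblockd context instead of inline */
            if (likely(!blk_queue_stopped(q)))
                    queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
    }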
* block: blk_delay_queue() should use kblockd workqueue  (Jens Axboe, 2011-04-18, 1 file, -1/+2)
    Reported-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: drop queue lock before calling __blk_run_queue() for kblockd punt  (Jens Axboe, 2011-04-18, 1 file, -8/+25)
    If we know we are going to punt to kblockd, we can drop the queue lock before calling into __blk_run_queue(), since it only does a safe bit test and a workqueue call. Since kblockd needs to grab this very lock as one of the first things it does, it's a good optimization to drop the lock before waking kblockd.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
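The shape of the optimization in the plug-flush path, as a sketch only (the from_schedule naming is illustrative):

    if (from_schedule) {
            /*
             * Punting to kblockd anyway: release the lock first so the
             * worker does not immediately contend on it. The async run is
             * only a bit test plus a workqueue call, so it is safe to do
             * without the lock held.
             */
            spin_unlock(q->queue_lock);
            blk_run_queue_async(q);
    } else {
            __blk_run_queue(q);
            spin_unlock(q->queue_lock);
    }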
* Revert "block: add callback function for unplug notification"Jens Axboe2011-04-182-19/+0
| | | | | | | | | | | | | | | MD can't use this since it really requires us to be able to keep more than a single piece of state for the unplug. Commit 048c9374 added the required support for MD, so get rid of this now unused code. This reverts commit f75664570d8b75469cc468f23c2b27220984983b. Conflicts: block/blk-core.c Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: Enhance new plugging support to support general callbacks  (NeilBrown, 2011-04-18, 1 file, -0/+20)
    md/raid requires an unplug callback, but as it does not use requests, the current code cannot provide one. So allow arbitrary callbacks to be attached to the blk_plug.

    Signed-off-by: NeilBrown <neilb@suse.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
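A sketch of the callback hook this adds (field names are approximate, not necessarily the exact diff):

    struct blk_plug_cb {
            struct list_head list;                  /* linked on the plug's callback list */
            void (*callback)(struct blk_plug_cb *); /* invoked when the plug is flushed */
    };

A user such as md embeds this struct in its own state, adds it to the plug's callback list while plugged, and the plug flush walks that list and invokes each callback before dispatching the queued requests.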
* block: make unplug timer trace event correspond to the schedule() unplug  (Jens Axboe, 2011-04-16, 1 file, -6/+12)
    It's a pretty close match to what we had before - the timer triggering would mean that nobody unplugged the plug in due time; in the new scheme this matches very closely what the schedule() unplug now is. It's essentially the difference between an explicit unplug (IO unplug) and an implicit unplug (timer unplug, we scheduled with pending IO queued).

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: only force kblockd unplugging from the schedule() path  (Jens Axboe, 2011-04-15, 1 file, -6/+7)
    For the explicit unplugging, we'd prefer to kick things off immediately and not pay the penalty of the latency to switch to kblockd. So let blk_finish_plug() do the run inline, while the implicit-on-schedule-out unplug will punt to kblockd.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: cleanup the block plug helper functions  (Christoph Hellwig, 2011-04-15, 1 file, -18/+6)
    It's a bit of a mess currently. task->plug is being cleared and reset in __blk_finish_plug(), and blk_finish_plug() is testing for a NULL plug, which cannot happen even from schedule() anymore since it uses blk_needs_flush_plug() to determine whether to call into this function at all.

    So get rid of some of the cruft.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block, blk-sysfs: Use the variable directly instead of a function call  (Liu Yuan, 2011-04-13, 1 file, -2/+1)
    In the function blk_register_queue(), the variable _dev_ is already assigned by disk_to_dev(). So use it directly instead of calling disk_to_dev() again.

    Signed-off-by: Liu Yuan <tailai.ly@taobao.com>

    Modified by me to delete an empty line in the same function while in there anyway.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: move queue run on unplug to kblockd  (Jens Axboe, 2011-04-12, 1 file, -1/+1)
    There are worries that we are now consuming a lot more stack in some cases, since we potentially call into IO dispatch from schedule() or io_schedule(). We can reduce this problem by moving the running of the queue to kblockd, like the old plugging scheme did as well.

    This may or may not be a good idea from a performance perspective, depending on how many tasks have queue plugs running at the same time. For even the slightly contended case, doing just a single queue run from kblockd instead of multiple runs directly from the unpluggers will be faster.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: kill queue_sync_plugs()  (Jens Axboe, 2011-04-12, 1 file, -14/+0)
    The original use for this dates back to when we had to track write requests for serializing around barriers. That's not needed anymore, so kill it.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: readd plug trace event  (Jens Axboe, 2011-04-12, 1 file, -1/+9)
    This was removed with the queue plug state. But we can easily readd it by checking if this is the first request going to this queue. It's good information to have when tracing to see how effective the plugging is.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: add callback function for unplug notification  (Jens Axboe, 2011-04-12, 2 files, -0/+19)
    MD would like to know when a queue is unplugged, so it can flush its bitmap writes. Add such a callback.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: add comment on why we save and disable interrupts in flush_plug_list()  (Jens Axboe, 2011-04-12, 1 file, -0/+5)
    It's done at the top to avoid doing it for every queue we unplug.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: fixup block IO unplug trace call  (Jens Axboe, 2011-04-12, 1 file, -2/+13)
    It was removed with the on-stack plugging, readd it and track the depth of requests added when flushing the plug.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: splice plug list to local context  (NeilBrown, 2011-04-11, 1 file, -5/+9)
    If the request_fn ends up blocking, we could be re-entering the plug flush. Since the list is protected by explicitly not allowing schedule events, this isn't a terribly good idea.

    Additionally, it can cause us to recurse. As request_fn called by __blk_run_queue is allowed to 'schedule()' (after dropping the queue lock of course), it is possible to get a recursive call:

        schedule -> blk_flush_plug -> __blk_finish_plug -> flush_plug_list
                 -> __blk_run_queue -> request_fn -> schedule

    We must make sure that the second schedule does not call into blk_flush_plug again. So instead of leaving the list of requests on blk_plug->list, move them to a separate list, leaving blk_plug->list empty.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
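A minimal sketch of the splice-to-local-list idea (not the full flush function):

    static void flush_plug_list(struct blk_plug *plug)
    {
            LIST_HEAD(list);

            /*
             * Take everything off plug->list first. If request_fn schedules
             * and the flush is re-entered, it finds an empty plug list and
             * returns immediately instead of recursing over the same
             * requests.
             */
            list_splice_init(&plug->list, &list);

            /* ... sort and dispatch the requests now sitting on "list" ... */
    }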
* Merge branch 'for-linus2' of git://git.profusion.mobi/users/lucas/linux-2.6  (Linus Torvalds, 2011-04-07, 6 files, -7/+7)
|\
    * 'for-linus2' of git://git.profusion.mobi/users/lucas/linux-2.6:
        Fix common misspellings
| * Fix common misspellings  (Lucas De Marchi, 2011-03-31, 6 files, -7/+7)
    Fixes generated by 'codespell' and manually reviewed.

    Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
* | block: fix request sorting at unplug  (Konstantin Khlebnikov, 2011-04-05, 1 file, -1/+1)
    The comparison function for list_sort() must be anticommutative, otherwise it is not sorting in the ordinary meaning. But fortunately list_sort() always checks ((*cmp)(priv, a, b) <= 0), i.e. it does not distinguish between negative and zero, so the comparison function can implement just less-or-equal instead of a full three-way comparison.

    Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
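What a less-or-equal comparator looks like for the plug sort, as a sketch (the real code keys on the request's queue pointer; exact details may differ):

    static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
    {
            struct request *rqa = container_of(a, struct request, queuelist);
            struct request *rqb = container_of(b, struct request, queuelist);

            /* return 0 (i.e. <= 0) when rqa should sort before or equal to rqb */
            return !(rqa->q <= rqb->q);
    }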
* | dm: improve block integrity support  (Mike Snitzer, 2011-04-05, 1 file, -1/+11)
    The current block integrity (DIF/DIX) support in DM is verifying that all devices' integrity profiles match during DM device resume (which is past the point of no return). To some degree that is unavoidable (stacked DM devices force this late checking). But for most DM devices (which aren't stacking on other DM devices) the ideal time to verify all integrity profiles match is during table load.

    Introduce the notion of an "initialized" integrity profile: a profile that was blk_integrity_register()'d with a non-NULL 'blk_integrity' template. Add blk_integrity_is_initialized() to allow checking if a profile was initialized.

    Update DM integrity support to:
    - check that all devices with _initialized_ integrity profiles match during table load; uninitialized profiles (e.g. for underlying DM device(s) of a stacked DM device) are ignored.
    - disallow a table load that would result in an integrity profile that conflicts with a DM device's existing (in-use) integrity profile
    - avoid clearing an existing integrity profile
    - validate that all integrity profiles match during resume; but if they don't, all we can do is report the mismatch (during resume we're past the point of no return)

    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Cc: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | blk-throttle: don't call xchg on bool  (Andreas Schwab, 2011-04-05, 1 file, -2/+2)
    xchg() does not work portably on types smaller than 32 bits.

    Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | block: make the flush insertion use the tail of the dispatch list  (Jens Axboe, 2011-04-05, 1 file, -2/+2)
    It's not a preempt type request, in fact we have to insert it behind requests that do specify INSERT_FRONT.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | block: get rid of elv_insert() interface  (Jens Axboe, 2011-04-05, 2 files, -22/+17)
    Merge it with __elv_add_request(), it's pretty pointless to have a function with only two callers. The main interface is elv_add_request()/__elv_add_request().

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | block: dump request state on seeing a corrupted request completion  (Jens Axboe, 2011-04-05, 1 file, -1/+1)
|/
    Currently we just dump a non-informative 'request botched' message. Let's actually try and print something sane to help debug issues around this.

    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: fix issue with calling blk_stop_queue() from the request_fn handler  (Jens Axboe, 2011-03-25, 1 file, -1/+1)
    When the queue work handler was converted to delayed work, the stopping was inadvertently made sync as well. Change this back to being async stop, using __cancel_delayed_work() instead of cancel_delayed_work().

    Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Reported-by: Chris Mason <chris.mason@oracle.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: fix bug with inserting flush requests as sort/merge  (Jens Axboe, 2011-03-25, 1 file, -1/+4)
    With the introduction of the on-stack plugging, we would assume that any request being inserted was a normal file system request. As flush/fua requires a special insert mode, this caused problems. Fix this up by checking for this in flush_plug_list() and using the appropriate insert mechanism.

    Big thanks go to Markus Trippelsdorf for tirelessly testing patches, and to Sergey Senozhatsky for helping find the real issue.

    Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>