From ab0fd1debe730ec9998678a0c53caefbd121ed10 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Fri, 3 Jul 2009 12:56:18 +0200 Subject: block: don't merge requests of different failfast settings The block layer used to merge requests and bios with different failfast settings. This caused regular IOs to fail prematurely when they were merged into failfast requests for readahead. Niel Lambrechts could trigger the problem semi-reliably on ext4 when resuming from STR. ext4 uses readahead when reading inodes and, combined with the deterministic extra SATA PHY exception cycle during resume on the specific configuration, a non-readahead inode read would fail, causing ext4 errors. Please read the following thread for details. http://lkml.org/lkml/2009/5/23/21 This patch makes the block layer reject merging if the failfast settings don't match. This is correct but likely to lower IO performance by preventing regular IOs from mingling into surrounding readahead requests. Changes to allow such mixed merges and handle errors correctly will be added later. Signed-off-by: Tejun Heo Reported-by: Niel Lambrechts Cc: Theodore Tso Signed-off-by: Jens Axboe --- block/blk-merge.c | 6 ++++++ block/elevator.c | 8 ++++++++ 2 files changed, 14 insertions(+) (limited to 'block') diff --git a/block/blk-merge.c b/block/blk-merge.c index 39ce644..e199967 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -350,6 +350,12 @@ static int attempt_merge(struct request_queue *q, struct request *req, if (blk_integrity_rq(req) != blk_integrity_rq(next)) return 0; + /* don't merge requests of different failfast settings */ + if (blk_failfast_dev(req) != blk_failfast_dev(next) || + blk_failfast_transport(req) != blk_failfast_transport(next) || + blk_failfast_driver(req) != blk_failfast_driver(next)) + return 0; + /* * If we are allowed to merge, then append bio list * from next to rq and release next. merge_requests_fn diff --git a/block/elevator.c b/block/elevator.c index ca86192..6f23753 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -100,6 +100,14 @@ int elv_rq_merge_ok(struct request *rq, struct bio *bio) if (bio_integrity(bio) != blk_integrity_rq(rq)) return 0; + /* + * Don't merge if failfast settings don't match + */ + if (bio_failfast_dev(bio) != blk_failfast_dev(rq) || + bio_failfast_transport(bio) != blk_failfast_transport(rq) || + bio_failfast_driver(bio) != blk_failfast_driver(rq)) + return 0; + if (!elv_iosched_allow_merge(rq, bio)) return 0; -- cgit v1.1 From 76da03467a1a78811777561bbb1fa56175ee4778 Mon Sep 17 00:00:00 2001 From: FUJITA Tomonori Date: Thu, 9 Jul 2009 09:48:28 +0200 Subject: block: call blk_scsi_ioctl_init() Currently, blk_scsi_ioctl_init() is not called since it lacks an initcall marking. This causes the command table to be uninitialized, hence some commands are blocked when they should not be.
This fixes a regression introduced by commit 018e0446890661504783f92388ecce7138c1566d. Signed-off-by: FUJITA Tomonori Signed-off-by: Jens Axboe --- block/scsi_ioctl.c | 1 + 1 file changed, 1 insertion(+) (limited to 'block') diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c index f0e0ce0..e5b1001 100644 --- a/block/scsi_ioctl.c +++ b/block/scsi_ioctl.c @@ -680,3 +680,4 @@ int __init blk_scsi_ioctl_init(void) blk_set_cmd_filter_defaults(&blk_default_cmd_filter); return 0; } +fs_initcall(blk_scsi_ioctl_init); -- cgit v1.1 From 32f2e807a3938b24d0831211e6094f9e44b2fc83 Mon Sep 17 00:00:00 2001 From: Vivek Goyal Date: Thu, 9 Jul 2009 22:13:16 +0200 Subject: cfq-iosched: reset oom_cfqq in cfq_set_request() In case memory is scarce, we now default to oom_cfqq. Once memory is available again, we should allocate a new cfqq and stop using oom_cfqq for a particular io context. Once a new request comes in, check if we are using oom_cfqq, and if yes, try to allocate a new cfqq. Tested the patch by forcing the use of oom_cfqq; upon the next request, the thread realized it was using oom_cfqq and allocated a new cfqq. Signed-off-by: Vivek Goyal Signed-off-by: Jens Axboe --- block/cfq-iosched.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'block') diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c index 87276eb..fd7080e 100644 --- a/block/cfq-iosched.c +++ b/block/cfq-iosched.c @@ -2311,7 +2311,7 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask) goto queue_fail; cfqq = cic_to_cfqq(cic, is_sync); - if (!cfqq) { + if (!cfqq || cfqq == &cfqd->oom_cfqq) { cfqq = cfq_get_queue(cfqd, is_sync, cic->ioc, gfp_mask); cic_set_cfqq(cic, cfqq, is_sync); } -- cgit v1.1 From 0a09f4319c6d88c732ed46735f8584bbb95cac65 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Thu, 16 Jul 2009 15:26:55 +0900 Subject: block: fix failfast merge testing in elv_rq_merge_ok() Commit ab0fd1debe730ec9998678a0c53caefbd121ed10 tries to prevent merge of requests with different failfast settings. In elv_rq_merge_ok(), it compares the new bio's failfast flags against the merge target request's. However, the flag testing accessors for bio and blk don't return a boolean but the tested bit value directly, and the FAILFAST bit positions on bio and blk don't match, so directly comparing them with == results in false negatives, unnecessarily preventing merges of readahead requests. This patch converts the results to booleans by negating them before comparison. Signed-off-by: Tejun Heo Cc: Jens Axboe Cc: Boaz Harrosh Cc: FUJITA Tomonori Cc: James Bottomley Cc: Jeff Garzik --- block/elevator.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) (limited to 'block') diff --git a/block/elevator.c b/block/elevator.c index 6f23753..2d511f9 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -101,11 +101,16 @@ int elv_rq_merge_ok(struct request *rq, struct bio *bio) return 0; /* - * Don't merge if failfast settings don't match + * Don't merge if failfast settings don't match. + * + * FIXME: The negation in front of each condition is necessary + * because bio and request flags use different bit positions + * and the accessors return those bits directly. This + * ugliness will soon go away. 
*/ - if (bio_failfast_dev(bio) != blk_failfast_dev(rq) || - bio_failfast_transport(bio) != blk_failfast_transport(rq) || - bio_failfast_driver(bio) != blk_failfast_driver(rq)) + if (!bio_failfast_dev(bio) != !blk_failfast_dev(rq) || + !bio_failfast_transport(bio) != !blk_failfast_transport(rq) || + !bio_failfast_driver(bio) != !blk_failfast_driver(rq)) return 0; if (!elv_iosched_allow_merge(rq, bio)) -- cgit v1.1 From 9cb308ce8d32a1fb3600acab6034e19a90228743 Mon Sep 17 00:00:00 2001 From: Xiaotian Feng Date: Fri, 17 Jul 2009 15:26:26 +0800 Subject: block: sysfs fix mismatched queue_var_{store,show} in 64bit kernel In blk-sysfs.c, queue_var_store uses unsigned long to store data, but queue_var_show uses unsigned int to show data. This causes: # echo 70000000000 > /sys/block/<dev>/queue/read_ahead_kb # cat /sys/block/<dev>/queue/read_ahead_kb => reads back the wrong value Fix it by using unsigned long. While at it, convert queue_rq_affinity_show() such that it uses a bool variable instead of explicit != 0 testing. Signed-off-by: Xiaotian Feng Signed-off-by: Tejun Heo --- block/blk-sysfs.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) (limited to 'block') diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index b1cd040..418d636 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -16,9 +16,9 @@ struct queue_sysfs_entry { }; static ssize_t -queue_var_show(unsigned int var, char *page) +queue_var_show(unsigned long var, char *page) { - return sprintf(page, "%d\n", var); + return sprintf(page, "%lu\n", var); } static ssize_t @@ -77,7 +77,8 @@ queue_requests_store(struct request_queue *q, const char *page, size_t count) static ssize_t queue_ra_show(struct request_queue *q, char *page) { - int ra_kb = q->backing_dev_info.ra_pages << (PAGE_CACHE_SHIFT - 10); + unsigned long ra_kb = q->backing_dev_info.ra_pages << + (PAGE_CACHE_SHIFT - 10); return queue_var_show(ra_kb, (page)); } @@ -189,9 +190,9 @@ static ssize_t queue_nomerges_store(struct request_queue *q, const char *page, static ssize_t queue_rq_affinity_show(struct request_queue *q, char *page) { - unsigned int set = test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags); + bool set = test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags); - return queue_var_show(set != 0, page); + return queue_var_show(set, page); } static ssize_t -- cgit v1.1 From a4e7d46407d73f35d217013b363b79a8f8eafcaa Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Tue, 28 Jul 2009 09:07:29 +0200 Subject: block: always assign default lock to queues Move the assignment of a default lock below blk_init_queue() to blk_queue_make_request(), so we also get to set the default lock for ->make_request_fn() based drivers. This is important since the queue flag locking requires a lock to be in place. 
Signed-off-by: Jens Axboe --- block/blk-core.c | 7 ------- block/blk-settings.c | 7 +++++++ 2 files changed, 7 insertions(+), 7 deletions(-) (limited to 'block') diff --git a/block/blk-core.c b/block/blk-core.c index 4b45435..a0c340d 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -575,13 +575,6 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id) return NULL; } - /* - * if caller didn't supply a lock, they get per-queue locking with - * our embedded lock - */ - if (!lock) - lock = &q->__queue_lock; - q->request_fn = rfn; q->prep_rq_fn = NULL; q->unplug_fn = generic_unplug_device; diff --git a/block/blk-settings.c b/block/blk-settings.c index bd582a7..8a3ea3b 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -165,6 +165,13 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn) blk_set_default_limits(&q->limits); /* + * If the caller didn't supply a lock, fall back to our embedded + * per-queue locks + */ + if (!q->queue_lock) + q->queue_lock = &q->__queue_lock; + + /* * by default assume old behaviour and bounce for any highmem page */ blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH); -- cgit v1.1 From 3839e4b29b4385e4b31075e7805683e2aa2a8103 Mon Sep 17 00:00:00 2001 From: Xiaotian Feng Date: Tue, 28 Jul 2009 09:11:14 +0200 Subject: block: fix improper kobject release in blk_integrity_unregister blk_integrity_unregister should use kobject_put to release the kobject, otherwise after bi is freed, memory of bi->kobj->name is leaked. Signed-off-by: Xiaotian Feng Signed-off-by: Jens Axboe --- block/blk-integrity.c | 1 + 1 file changed, 1 insertion(+) (limited to 'block') diff --git a/block/blk-integrity.c b/block/blk-integrity.c index 73e28d3..15c6308 100644 --- a/block/blk-integrity.c +++ b/block/blk-integrity.c @@ -379,6 +379,7 @@ void blk_integrity_unregister(struct gendisk *disk) kobject_uevent(&bi->kobj, KOBJ_REMOVE); kobject_del(&bi->kobj); + kobject_put(&bi->kobj); kmem_cache_free(integrity_cachep, bi); disk->integrity = NULL; } -- cgit v1.1 From 56ad1740d9a8dc271e71fee234be662638c64458 Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Tue, 28 Jul 2009 22:11:24 +0200 Subject: block: make the end_io functions be non-GPL exports Prior to the change for more sane end_io functions, we exported the helpers with the normal EXPORT_SYMBOL(). That got changed to _GPL() for the new interface. Revert that particular change, on the basis that this is basic functionality and doesn't dip into internal structures. If these exports can't be non-GPL, then we may as well make EXPORT_SYMBOL() imply GPL for everything. Signed-off-by: Jens Axboe --- block/blk-core.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'block') diff --git a/block/blk-core.c b/block/blk-core.c index a0c340d..e3299a7 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -2136,7 +2136,7 @@ bool blk_end_request(struct request *rq, int error, unsigned int nr_bytes) { return blk_end_bidi_request(rq, error, nr_bytes, 0); } -EXPORT_SYMBOL_GPL(blk_end_request); +EXPORT_SYMBOL(blk_end_request); /** * blk_end_request_all - Helper function for drives to finish the request. @@ -2157,7 +2157,7 @@ void blk_end_request_all(struct request *rq, int error) pending = blk_end_bidi_request(rq, error, blk_rq_bytes(rq), bidi_bytes); BUG_ON(pending); } -EXPORT_SYMBOL_GPL(blk_end_request_all); +EXPORT_SYMBOL(blk_end_request_all); /** * blk_end_request_cur - Helper function to finish the current request chunk. 
@@ -2175,7 +2175,7 @@ bool blk_end_request_cur(struct request *rq, int error) { return blk_end_request(rq, error, blk_rq_cur_bytes(rq)); } -EXPORT_SYMBOL_GPL(blk_end_request_cur); +EXPORT_SYMBOL(blk_end_request_cur); /** * __blk_end_request - Helper function for drivers to complete the request. @@ -2194,7 +2194,7 @@ bool __blk_end_request(struct request *rq, int error, unsigned int nr_bytes) { return __blk_end_bidi_request(rq, error, nr_bytes, 0); } -EXPORT_SYMBOL_GPL(__blk_end_request); +EXPORT_SYMBOL(__blk_end_request); /** * __blk_end_request_all - Helper function for drives to finish the request. @@ -2215,7 +2215,7 @@ void __blk_end_request_all(struct request *rq, int error) pending = __blk_end_bidi_request(rq, error, blk_rq_bytes(rq), bidi_bytes); BUG_ON(pending); } -EXPORT_SYMBOL_GPL(__blk_end_request_all); +EXPORT_SYMBOL(__blk_end_request_all); /** * __blk_end_request_cur - Helper function to finish the current request chunk. @@ -2234,7 +2234,7 @@ bool __blk_end_request_cur(struct request *rq, int error) { return __blk_end_request(rq, error, blk_rq_cur_bytes(rq)); } -EXPORT_SYMBOL_GPL(__blk_end_request_cur); +EXPORT_SYMBOL(__blk_end_request_cur); void blk_rq_bio_prep(struct request_queue *q, struct request *rq, struct bio *bio) -- cgit v1.1 From fef246672b009cf3f7a74e2fc9a76932ef2eeed2 Mon Sep 17 00:00:00 2001 From: "Martin K. Petersen" Date: Fri, 31 Jul 2009 11:49:10 -0400 Subject: block: Make blk_queue_stack_limits use the new stacking interface blk_queue_stack_limits() has been superseded by blk_stack_limits() and disk_stack_limits(). Wrap the function call for now; we'll deprecate it later. Signed-off-by: Martin K. Petersen Signed-off-by: Jens Axboe --- block/blk-settings.c | 22 +--------------------- 1 file changed, 1 insertion(+), 21 deletions(-) (limited to 'block') diff --git a/block/blk-settings.c b/block/blk-settings.c index 8a3ea3b..8e86e2d 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -433,27 +433,7 @@ EXPORT_SYMBOL(blk_queue_io_opt); **/ void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b) { - /* zero is "infinity" */ - t->limits.max_sectors = min_not_zero(queue_max_sectors(t), - queue_max_sectors(b)); - - t->limits.max_hw_sectors = min_not_zero(queue_max_hw_sectors(t), - queue_max_hw_sectors(b)); - - t->limits.seg_boundary_mask = min_not_zero(queue_segment_boundary(t), - queue_segment_boundary(b)); - - t->limits.max_phys_segments = min_not_zero(queue_max_phys_segments(t), - queue_max_phys_segments(b)); - - t->limits.max_hw_segments = min_not_zero(queue_max_hw_segments(t), - queue_max_hw_segments(b)); - - t->limits.max_segment_size = min_not_zero(queue_max_segment_size(t), - queue_max_segment_size(b)); - - t->limits.logical_block_size = max(queue_logical_block_size(t), - queue_logical_block_size(b)); + blk_stack_limits(&t->limits, &b->limits, 0); if (!t->queue_lock) WARN_ON_ONCE(1); -- cgit v1.1 From 7c958e32649e0c35801762878fb0b6da8c55a515 Mon Sep 17 00:00:00 2001 From: "Martin K. Petersen" Date: Fri, 31 Jul 2009 11:49:11 -0400 Subject: block: Add a wrapper for setting minimum request size without a queue Introduce blk_limits_io_min() and make blk_queue_io_min() call it. Signed-off-by: Mike Snitzer Signed-off-by: Martin K. 
Petersen Signed-off-by: Jens Axboe --- block/blk-settings.c | 31 ++++++++++++++++++++++++------- 1 file changed, 24 insertions(+), 7 deletions(-) (limited to 'block') diff --git a/block/blk-settings.c b/block/blk-settings.c index 8e86e2d..1f71974 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -384,6 +384,29 @@ void blk_queue_alignment_offset(struct request_queue *q, unsigned int offset) EXPORT_SYMBOL(blk_queue_alignment_offset); /** + * blk_limits_io_min - set minimum request size for a device + * @limits: the queue limits + * @min: smallest I/O size in bytes + * + * Description: + * Some devices have an internal block size bigger than the reported + * hardware sector size. This function can be used to signal the + * smallest I/O the device can perform without incurring a performance + * penalty. + */ +void blk_limits_io_min(struct queue_limits *limits, unsigned int min) +{ + limits->io_min = min; + + if (limits->io_min < limits->logical_block_size) + limits->io_min = limits->logical_block_size; + + if (limits->io_min < limits->physical_block_size) + limits->io_min = limits->physical_block_size; +} +EXPORT_SYMBOL(blk_limits_io_min); + +/** * blk_queue_io_min - set minimum request size for the queue * @q: the request queue for the device * @min: smallest I/O size in bytes @@ -396,13 +419,7 @@ EXPORT_SYMBOL(blk_queue_alignment_offset); */ void blk_queue_io_min(struct request_queue *q, unsigned int min) { - q->limits.io_min = min; - - if (q->limits.io_min < q->limits.logical_block_size) - q->limits.io_min = q->limits.logical_block_size; - - if (q->limits.io_min < q->limits.physical_block_size) - q->limits.io_min = q->limits.physical_block_size; + blk_limits_io_min(&q->limits, min); } EXPORT_SYMBOL(blk_queue_io_min); -- cgit v1.1 From 70dd5bf3b99964d52862ad2810c24cc32a553535 Mon Sep 17 00:00:00 2001 From: "Martin K. Petersen" Date: Fri, 31 Jul 2009 11:49:12 -0400 Subject: block: Stack optimal I/O size When stacking block devices, ensure that the optimal I/O size is scaled accordingly. Signed-off-by: Martin K. Petersen Reviewed-by: Mike Snitzer Signed-off-by: Jens Axboe --- block/blk-settings.c | 11 +++++++++++ 1 file changed, 11 insertions(+) (limited to 'block') diff --git a/block/blk-settings.c b/block/blk-settings.c index 1f71974..e1327dd 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -7,6 +7,7 @@ #include <linux/bio.h> #include <linux/blkdev.h> #include <linux/bootmem.h> /* for max_pfn/max_low_pfn */ +#include <linux/gcd.h> #include "blk.h" @@ -520,6 +521,16 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, return -1; } + /* Find lcm() of optimal I/O size */ + if (t->io_opt && b->io_opt) + t->io_opt = (t->io_opt * b->io_opt) / gcd(t->io_opt, b->io_opt); + else if (b->io_opt) + t->io_opt = b->io_opt; + + /* Verify that optimal I/O size is a multiple of io_min */ + if (t->io_min && t->io_opt % t->io_min) + return -1; + return 0; } EXPORT_SYMBOL(blk_stack_limits); -- cgit v1.1 From 7e5f5fb09e6fc657f21816b5a18ba645a913368e Mon Sep 17 00:00:00 2001 From: "Martin K. Petersen" Date: Fri, 31 Jul 2009 11:49:13 -0400 Subject: block: Update topology documentation Update topology comments and sysfs documentation based upon discussions with Neil Brown. Signed-off-by: Martin K. 
Petersen Signed-off-by: Jens Axboe --- block/blk-settings.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) (limited to 'block') diff --git a/block/blk-settings.c b/block/blk-settings.c index e1327dd..476d870 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -413,10 +413,13 @@ EXPORT_SYMBOL(blk_limits_io_min); * @min: smallest I/O size in bytes * * Description: - * Some devices have an internal block size bigger than the reported - * hardware sector size. This function can be used to signal the - * smallest I/O the device can perform without incurring a performance - * penalty. + * Storage devices may report a granularity or preferred minimum I/O + * size which is the smallest request the device can perform without + * incurring a performance penalty. For disk drives this is often the + * physical block size. For RAID arrays it is often the stripe chunk + * size. A properly aligned multiple of minimum_io_size is the + * preferred request size for workloads where a high number of I/O + * operations is desired. */ void blk_queue_io_min(struct request_queue *q, unsigned int min) { @@ -430,8 +433,12 @@ EXPORT_SYMBOL(blk_queue_io_min); * @opt: optimal request size in bytes * * Description: - * Drivers can call this function to set the preferred I/O request - * size for devices that report such a value. + * Storage devices may report an optimal I/O size, which is the + * device's preferred unit for sustained I/O. This is rarely reported + * for disk drives. For RAID arrays it is usually the stripe width or + * the internal track size. A properly aligned multiple of + * optimal_io_size is the preferred request size for workloads where + * sustained throughput is desired. */ void blk_queue_io_opt(struct request_queue *q, unsigned int opt) { -- cgit v1.1 From 14d9fa352592582e457cf75022202766baac1348 Mon Sep 17 00:00:00 2001 From: John Stoffel Date: Tue, 4 Aug 2009 22:10:17 +0200 Subject: Make SCSI SG v4 driver enabled by default and remove EXPERIMENTAL dependency, since udev depends on BSG Make Block Layer SG support v4 the default, since recent udev versions depend on this to access serial numbers and other low level info properly. This should be backported to older kernels as well, since most distros have enabled this for a long time. Signed-off-by: John Stoffel Cc: stable@kernel.org Signed-off-by: Jens Axboe --- block/Kconfig | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) (limited to 'block') diff --git a/block/Kconfig b/block/Kconfig index 95a86ad..9be0b56 100644 --- a/block/Kconfig +++ b/block/Kconfig @@ -48,9 +48,9 @@ config LBDAF If unsure, say Y. config BLK_DEV_BSG - bool "Block layer SG support v4 (EXPERIMENTAL)" - depends on EXPERIMENTAL - ---help--- + bool "Block layer SG support v4" + default y + help Saying Y here will enable generic SG (SCSI generic) v4 support for any block device. @@ -60,7 +60,10 @@ config BLK_DEV_BSG protocols (e.g. Task Management Functions and SMP in Serial Attached SCSI). - If unsure, say N. + This option is required by recent UDEV versions to properly + access device serial numbers, etc. + + If unsure, say Y. config BLK_DEV_INTEGRITY bool "Block layer data integrity support" -- cgit v1.1
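
The failfast merge fix above (commits ab0fd1d and 0a09f43) is easy to misread, so here is a minimal user-space sketch of the bug and the fix. The bit positions below are invented for illustration only; they stand in for the fact that the real bio and request FAILFAST flags occupied different bits, which is exactly why the raw comparison misfired:

#include <stdio.h>

/* Hypothetical bit positions -- chosen only so that they differ, as
 * the bio and request FAILFAST bits did at the time of these patches. */
#define BIO_FF_DEV (1 << 3) /* failfast-dev bit in a bio's flag word */
#define REQ_FF_DEV (1 << 7) /* failfast-dev bit in a request's flag word */

int main(void)
{
	unsigned long bio_flags = BIO_FF_DEV; /* failfast set on the bio */
	unsigned long rq_flags = REQ_FF_DEV;  /* failfast set on the request */

	/* The first patch compared the raw masked values: 0x08 vs 0x80.
	 * Both flags are set, yet the values differ, so the merge is
	 * rejected -- the false negative commit 0a09f43 describes. */
	if ((bio_flags & BIO_FF_DEV) != (rq_flags & REQ_FF_DEV))
		printf("raw compare: settings look different (wrong)\n");

	/* The fix: '!' collapses each tested bit to a clean boolean
	 * (0 or 1) before comparing, so only set-ness is compared. */
	if (!(bio_flags & BIO_FF_DEV) == !(rq_flags & REQ_FF_DEV))
		printf("normalized compare: settings match (correct)\n");

	return 0;
}

Double negation (or a single '!' on each side, as in the patch) is the standard C idiom for comparing flags stored at different bit positions.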
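
Likewise, the io_opt stacking rule from commit 70dd5bf (the combined optimal I/O size is the lcm of the top and bottom values, and must remain a multiple of io_min) can be checked with a small stand-alone program. The device sizes are made-up examples, and gcd() here is a user-space stand-in for the kernel helper from <linux/gcd.h>:

#include <stdio.h>

/* Euclid's algorithm, standing in for the kernel's gcd() */
static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = b;
		b = a % b;
		a = t;
	}
	return a;
}

int main(void)
{
	unsigned int t_io_opt = 65536; /* top device: 64 KiB stripe width */
	unsigned int b_io_opt = 49152; /* bottom device: 48 KiB stripe width */
	unsigned int io_min = 4096;    /* minimum I/O size of the stack */

	/* lcm(a, b) = a * b / gcd(a, b), as in the patch; the patch also
	 * handles the case where only one side reports an io_opt. */
	unsigned int io_opt = (t_io_opt * b_io_opt) / gcd(t_io_opt, b_io_opt);

	printf("stacked io_opt = %u\n", io_opt); /* 196608 = 192 KiB */

	/* the patch returns -1 (misaligned) when this check fails */
	if (io_min && io_opt % io_min)
		printf("io_opt is not a multiple of io_min\n");

	return 0;
}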