Documentation/scsi/ChangeLog.megaraid  (+35 −0)

+Release Date    : Fri Nov 11 12:27:22 EST 2005 - Seokmann Ju <sju@lsil.com>
+Current Version : 2.20.4.7 (scsi module), 2.20.2.6 (cmm module)
+Older Version   : 2.20.4.6 (scsi module), 2.20.2.6 (cmm module)
+
+1.	Sorted out PCI IDs to remove megaraid support overlaps.
+	Based on the patch from Daniel, sorted out PCI IDs along with a
+	character node name change from 'megadev' to 'megadev_legacy' to
+	avoid conflict.
+	---
+	Hopefully we'll be getting the build restriction zapped much sooner,
+	but we should also be thinking about totally removing the hardware
+	support overlap in the megaraid drivers.
+
+	This patch pencils in a date of Feb 06 for this, and performs some
+	printk abuse in hope that existing legacy users might pick up on
+	what's going on.
+
+	Signed-off-by: Daniel Drake <dsd@gentoo.org>
+	---
+
+2.	Fixed an issue: megaraid always fails in the reset handler.
+	---
+	I found that the megaraid driver always fails to reset the
+	adapter with the following messages:
+		megaraid: resetting the host...
+		megaraid mbox: reset sequence completed successfully
+		megaraid: fast sync command timed out
+		megaraid: reservation reset failed
+	when the "Cluster mode" of the adapter BIOS is enabled.
+	So, whenever the reset occurs, the adapter goes offline and
+	just becomes unavailable.
+
+	Jun'ichi Nomura [mailto:jnomura@mtc.biglobe.ne.jp]
+	---
+
 Release Date	: Mon Mar 07 12:27:22 EST 2005 - Seokmann Ju <sju@lsil.com>
 Current Version : 2.20.4.6 (scsi module), 2.20.2.6 (cmm module)
 Older Version	: 2.20.4.5 (scsi module), 2.20.2.5 (cmm module)

Documentation/scsi/scsi_mid_low_api.txt  (+28 −9)

@@ -150,7 +150,8 @@ scsi devices of which only the first 2 respond:
     LLD                   mid level                    LLD
 ===-------------------=========--------------------===------
 scsi_host_alloc()  -->
-scsi_add_host()  --------+
+scsi_add_host()  ---->
+scsi_scan_host()  -------+
                          |
                     slave_alloc()
                     slave_configure() -->  scsi_adjust_queue_depth()

@@ -196,7 +197,7 @@ of the issues involved. See the section on reference counting below.
 
 The hotplug concept may be extended to SCSI devices. Currently, when an
-HBA is added, the scsi_add_host() function causes a scan for SCSI devices
+HBA is added, the scsi_scan_host() function causes a scan for SCSI devices
 attached to the HBA's SCSI transport. On newer SCSI transports the HBA
 may become aware of a new SCSI device _after_ the scan has completed.
 An LLD can use this sequence to make the mid level aware of a SCSI device:

@@ -372,7 +373,7 @@ names all start with "scsi_".
 Summary:
    scsi_activate_tcq - turn on tag command queueing
    scsi_add_device - creates new scsi device (lu) instance
-   scsi_add_host - perform sysfs registration and SCSI bus scan.
+   scsi_add_host - perform sysfs registration and set up transport class
    scsi_adjust_queue_depth - change the queue depth on a SCSI device
    scsi_assign_lock - replace default host_lock with given lock
    scsi_bios_ptable - return copy of block device's partition table

@@ -386,6 +387,7 @@ Summary:
    scsi_remove_device - detach and remove a SCSI device
    scsi_remove_host - detach and remove all SCSI devices owned by host
    scsi_report_bus_reset - report scsi _bus_ reset observed
+   scsi_scan_host - scan SCSI bus
    scsi_track_queue_full - track successive QUEUE_FULL events
    scsi_unblock_requests - allow further commands to be queued to given host
    scsi_unregister - [calls scsi_host_put()]

@@ -425,10 +427,10 @@ void scsi_activate_tcq(struct scsi_device *sdev, int depth)
  *      Might block: yes
  *
  *      Notes: This call is usually performed internally during a scsi
- *      bus scan when an HBA is added (i.e. scsi_add_host()). So it
+ *      bus scan when an HBA is added (i.e. scsi_scan_host()). So it
  *      should only be called if the HBA becomes aware of a new scsi
- *      device (lu) after scsi_add_host() has completed. If successful
- *      this call we lead to slave_alloc() and slave_configure() callbacks
+ *      device (lu) after scsi_scan_host() has completed. If successful
+ *      this call can lead to slave_alloc() and slave_configure() callbacks
  *      into the LLD.
  *
  *      Defined in: drivers/scsi/scsi_scan.c

@@ -439,7 +441,7 @@ struct scsi_device * scsi_add_device(struct Scsi_Host *shost,
 /**
- * scsi_add_host - perform sysfs registration and SCSI bus scan.
+ * scsi_add_host - perform sysfs registration and set up transport class
  * @shost:   pointer to scsi host instance
  * @dev:     pointer to struct device of type scsi class
  *

@@ -448,7 +450,11 @@ struct scsi_device * scsi_add_device(struct Scsi_Host *shost,
  *      Might block: no
  *
  *      Notes: Only required in "hotplug initialization model" after a
- *      successful call to scsi_host_alloc().
+ *      successful call to scsi_host_alloc().  This function does not
+ *      scan the bus; this can be done by calling scsi_scan_host() or
+ *      in some other transport-specific way.  The LLD must set up
+ *      the transport template before calling this function and may only
+ *      access the transport class data after this function has been called.
  *
  *      Defined in: drivers/scsi/hosts.c
  **/

@@ -559,7 +565,7 @@ void scsi_deactivate_tcq(struct scsi_device *sdev, int depth)
  *      area for the LLD's exclusive use.
  *      Both associated refcounting objects have their refcount set to 1.
  *      Full registration (in sysfs) and a bus scan are performed later when
- *      scsi_add_host() is called.
+ *      scsi_add_host() and scsi_scan_host() are called.
  *
  *      Defined in: drivers/scsi/hosts.c .
  **/

@@ -698,6 +704,19 @@ int scsi_remove_host(struct Scsi_Host *shost)
 void scsi_report_bus_reset(struct Scsi_Host * shost, int channel)
 
+/**
+ * scsi_scan_host - scan SCSI bus
+ * @shost: a pointer to a scsi host instance
+ *
+ *      Might block: yes
+ *
+ *      Notes: Should be called after scsi_add_host()
+ *
+ *      Defined in: drivers/scsi/scsi_scan.c
+ **/
+void scsi_scan_host(struct Scsi_Host *shost)
+
 /**
  * scsi_track_queue_full - track successive QUEUE_FULL events on given
  *                       device to determine if and when there is a need

block/ll_rw_blk.c  (+31 −9)

@@ -239,7 +239,7 @@ void blk_queue_make_request(request_queue_t * q, make_request_fn * mfn)
 	q->backing_dev_info.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
 	q->backing_dev_info.state = 0;
 	q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
-	blk_queue_max_sectors(q, MAX_SECTORS);
+	blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
 	blk_queue_hardsect_size(q, 512);
 	blk_queue_dma_alignment(q, 511);
 	blk_queue_congestion_threshold(q);

@@ -555,7 +555,12 @@ void blk_queue_max_sectors(request_queue_t *q, unsigned short max_sectors)
 		printk("%s: set to minimum %d\n", __FUNCTION__, max_sectors);
 	}
 
-	q->max_sectors = q->max_hw_sectors = max_sectors;
+	if (BLK_DEF_MAX_SECTORS > max_sectors)
+		q->max_hw_sectors = q->max_sectors = max_sectors;
+	else {
+		q->max_sectors = BLK_DEF_MAX_SECTORS;
+		q->max_hw_sectors = max_sectors;
+	}
 }
 
 EXPORT_SYMBOL(blk_queue_max_sectors);

@@ -657,8 +662,8 @@ EXPORT_SYMBOL(blk_queue_hardsect_size);
 void blk_queue_stack_limits(request_queue_t *t, request_queue_t *b)
 {
 	/* zero is "infinity" */
-	t->max_sectors = t->max_hw_sectors =
-		min_not_zero(t->max_sectors,b->max_sectors);
+	t->max_sectors = min_not_zero(t->max_sectors,b->max_sectors);
+	t->max_hw_sectors = min_not_zero(t->max_hw_sectors,b->max_hw_sectors);
 
 	t->max_phys_segments = min(t->max_phys_segments,b->max_phys_segments);
 	t->max_hw_segments = min(t->max_hw_segments,b->max_hw_segments);

@@ -1293,9 +1298,15 @@ static inline int ll_new_hw_segment(request_queue_t *q,
 static int ll_back_merge_fn(request_queue_t *q, struct request *req,
 			    struct bio *bio)
 {
+	unsigned short max_sectors;
 	int len;
 
-	if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
+	if (unlikely(blk_pc_request(req)))
+		max_sectors = q->max_hw_sectors;
+	else
+		max_sectors = q->max_sectors;
+
+	if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
 		req->flags |= REQ_NOMERGE;
 		if (req == q->last_merge)
 			q->last_merge = NULL;

@@ -1325,9 +1336,16 @@ static int ll_back_merge_fn(request_queue_t *q, struct request *req,
 static int ll_front_merge_fn(request_queue_t *q, struct request *req,
 			     struct bio *bio)
 {
+	unsigned short max_sectors;
 	int len;
 
-	if (req->nr_sectors + bio_sectors(bio) > q->max_sectors) {
+	if (unlikely(blk_pc_request(req)))
+		max_sectors = q->max_hw_sectors;
+	else
+		max_sectors = q->max_sectors;
+
+	if (req->nr_sectors + bio_sectors(bio) > max_sectors) {
 		req->flags |= REQ_NOMERGE;
 		if (req == q->last_merge)
 			q->last_merge = NULL;

@@ -2144,7 +2162,7 @@ int blk_rq_map_user(request_queue_t *q, struct request *rq, void __user *ubuf,
 	struct bio *bio;
 	int reading;
 
-	if (len > (q->max_sectors << 9))
+	if (len > (q->max_hw_sectors << 9))
 		return -EINVAL;
 	if (!len || !ubuf)
 		return -EINVAL;

@@ -2259,7 +2277,7 @@ int blk_rq_map_kern(request_queue_t *q, struct request *rq, void *kbuf,
 {
 	struct bio *bio;
 
-	if (len > (q->max_sectors << 9))
+	if (len > (q->max_hw_sectors << 9))
 		return -EINVAL;
 	if (!len || !kbuf)
 		return -EINVAL;

@@ -2306,6 +2324,8 @@ void blk_execute_rq_nowait(request_queue_t *q, struct gendisk *bd_disk,
 		generic_unplug_device(q);
 }
 
+EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
+
 /**
  * blk_execute_rq - insert a request into queue for execution
  * @q:		queue to insert the request in

@@ -2444,7 +2464,7 @@ void disk_round_stats(struct gendisk *disk)
 /*
  * queue lock must be held
  */
-static void __blk_put_request(request_queue_t *q, struct request *req)
+void __blk_put_request(request_queue_t *q, struct request *req)
 {
 	struct request_list *rl = req->rl;

@@ -2473,6 +2493,8 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
 	}
 }
 
+EXPORT_SYMBOL_GPL(__blk_put_request);
+
 void blk_put_request(struct request *req)
 {
 	unsigned long flags;

block/scsi_ioctl.c  (+1 −1)

@@ -233,7 +233,7 @@ static int sg_io(struct file *file, request_queue_t *q,
 	if (verify_command(file, cmd))
 		return -EPERM;
 
-	if (hdr->dxfer_len > (q->max_sectors << 9))
+	if (hdr->dxfer_len > (q->max_hw_sectors << 9))
 		return -EIO;
 
 	if (hdr->dxfer_len)

drivers/md/dm-table.c  (+1 −1)

@@ -638,7 +638,7 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 static void check_for_valid_limits(struct io_restrictions *rs)
 {
 	if (!rs->max_sectors)
-		rs->max_sectors = MAX_SECTORS;
+		rs->max_sectors = SAFE_MAX_SECTORS;
 	if (!rs->max_phys_segments)
 		rs->max_phys_segments = MAX_PHYS_SEGMENTS;
 	if (!rs->max_hw_segments)