
Commit 710027a4 authored by Randy Dunlap, committed by Jens Axboe

Add some block/ source files to the kernel-api docbook. Fix kernel-doc notation in them as needed. Fix changed function parameter names. Fix typos/spellos. In comments, change REQ_SPECIAL to REQ_TYPE_SPECIAL and REQ_BLOCK_PC to REQ_TYPE_BLOCK_PC.
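For context, kernel-doc comments use a small markup vocabulary that this patch normalizes: a "name - short description" summary line (dash, not colon), @arg for parameters, %CONST for constant values, and &struct foo for structure references. A minimal illustrative comment in that style (the function name blk_example_fn is hypothetical, not from the patch) would look like:

    /**
     * blk_example_fn - one-line summary, separated by a dash
     * @rq:    the &struct request being processed
     * @error: %0 for success, < %0 for error
     *
     * Return:
     *     %0 - we are done with this request
     *     %1 - still buffers pending for this request
     **/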

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
parent 5b99c2ff
+4 −0
@@ -364,6 +364,10 @@ X!Edrivers/pnp/system.c
 !Eblock/blk-barrier.c
 !Eblock/blk-tag.c
 !Iblock/blk-tag.c
+!Eblock/blk-integrity.c
+!Iblock/blktrace.c
+!Iblock/genhd.c
+!Eblock/genhd.c
 </chapter>
 
 <chapter id="chrdev">
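A brief note on the template directives being added here (my reading of the kernel-doc DocBook toolchain, not stated in the patch itself): a !E<file> line pulls in the kernel-doc comments of that source file's exported (EXPORT_SYMBOL) functions, while !I<file> pulls in the documentation of its internal, non-exported functions, which is why some files gain both lines. For example:

    !Eblock/blk-integrity.c   <!-- exported functions documented in blk-integrity.c -->
    !Iblock/blktrace.c        <!-- internal (non-exported) functions in blktrace.c -->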
+36 −36
@@ -531,7 +531,7 @@ EXPORT_SYMBOL(blk_alloc_queue_node);
  *    request queue; this lock will be taken also from interrupt context, so irq
  *    disabling is needed for it.
  *
- *    Function returns a pointer to the initialized request queue, or NULL if
+ *    Function returns a pointer to the initialized request queue, or %NULL if
  *    it didn't succeed.
  *
  * Note:
@@ -923,8 +923,8 @@ EXPORT_SYMBOL(blk_requeue_request);
  *    Many block devices need to execute commands asynchronously, so they don't
  *    block the whole kernel from preemption during request execution.  This is
  *    accomplished normally by inserting aritficial requests tagged as
- *    REQ_SPECIAL in to the corresponding request queue, and letting them be
- *    scheduled for actual execution by the request queue.
+ *    REQ_TYPE_SPECIAL in to the corresponding request queue, and letting them
+ *    be scheduled for actual execution by the request queue.
  *
  *    We have the option of inserting the head or the tail of the queue.
  *    Typically we use the tail for new ioctls and so forth.  We use the head
@@ -1322,7 +1322,7 @@ static inline int bio_check_eod(struct bio *bio, unsigned int nr_sectors)
 }
 
 /**
- * generic_make_request: hand a buffer to its device driver for I/O
+ * generic_make_request - hand a buffer to its device driver for I/O
  * @bio:  The bio describing the location in memory and on the device.
  *
  * generic_make_request() is used to make I/O requests of block
@@ -1480,13 +1480,13 @@ void generic_make_request(struct bio *bio)
 EXPORT_SYMBOL(generic_make_request);
 
 /**
- * submit_bio: submit a bio to the block device layer for I/O
+ * submit_bio - submit a bio to the block device layer for I/O
  * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
  * @bio: The &struct bio which describes the I/O
  *
  * submit_bio() is very similar in purpose to generic_make_request(), and
  * uses that function to do most of the work. Both are fairly rough
- * interfaces, @bio must be presetup and ready for I/O.
+ * interfaces; @bio must be presetup and ready for I/O.
  *
  */
 void submit_bio(int rw, struct bio *bio)
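To make the interface documented above concrete, here is a minimal sketch of a driver-side read using submit_bio(). It assumes the bio helpers of this kernel generation (bio_alloc(), bio_add_page()) and a hypothetical completion handler my_end_io; it is an illustration only, not part of the patch.

    /* Sketch only: read one page from sector 0 of bdev. */
    struct bio *bio = bio_alloc(GFP_KERNEL, 1);     /* room for one bio_vec */

    bio->bi_sector = 0;                             /* starting sector on the device */
    bio->bi_bdev = bdev;                            /* target block device */
    bio->bi_end_io = my_end_io;                     /* hypothetical completion handler */
    bio_add_page(bio, page, PAGE_SIZE, 0);          /* attach one page of memory */
    submit_bio(READ, bio);                          /* hand the bio to the block layer */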
@@ -1524,7 +1524,7 @@ EXPORT_SYMBOL(submit_bio);
 /**
  * __end_that_request_first - end I/O on a request
  * @req:      the request being processed
- * @error:    0 for success, < 0 for error
+ * @error:    %0 for success, < %0 for error
  * @nr_bytes: number of bytes to complete
  *
  * Description:
@@ -1532,8 +1532,8 @@ EXPORT_SYMBOL(submit_bio);
  *     for the next range of segments (if any) in the cluster.
  *
  * Return:
- *     0 - we are done with this request, call end_that_request_last()
- *     1 - still buffers pending for this request
+ *     %0 - we are done with this request, call end_that_request_last()
+ *     %1 - still buffers pending for this request
  **/
 static int __end_that_request_first(struct request *req, int error,
 				    int nr_bytes)
@@ -1544,7 +1544,7 @@ static int __end_that_request_first(struct request *req, int error,
 	blk_add_trace_rq(req->q, req, BLK_TA_COMPLETE);
 
 	/*
-	 * for a REQ_BLOCK_PC request, we want to carry any eventual
+	 * for a REQ_TYPE_BLOCK_PC request, we want to carry any eventual
 	 * sense key with us all the way through
 	 */
 	if (!blk_pc_request(req))
@@ -1810,11 +1810,11 @@ EXPORT_SYMBOL_GPL(blk_rq_cur_bytes);
 /**
  * end_queued_request - end all I/O on a queued request
  * @rq:		the request being processed
- * @uptodate:	error value or 0/1 uptodate flag
+ * @uptodate:	error value or %0/%1 uptodate flag
  *
  * Description:
  *     Ends all I/O on a request, and removes it from the block layer queues.
- *     Not suitable for normal IO completion, unless the driver still has
+ *     Not suitable for normal I/O completion, unless the driver still has
  *     the request attached to the block layer.
  *
  **/
@@ -1827,7 +1827,7 @@ EXPORT_SYMBOL(end_queued_request);
 /**
  * end_dequeued_request - end all I/O on a dequeued request
  * @rq:		the request being processed
- * @uptodate:	error value or 0/1 uptodate flag
+ * @uptodate:	error value or %0/%1 uptodate flag
  *
  * Description:
  *     Ends all I/O on a request. The request must already have been
@@ -1845,14 +1845,14 @@ EXPORT_SYMBOL(end_dequeued_request);
 /**
  * end_request - end I/O on the current segment of the request
  * @req:	the request being processed
- * @uptodate:	error value or 0/1 uptodate flag
+ * @uptodate:	error value or %0/%1 uptodate flag
  *
  * Description:
  *     Ends I/O on the current segment of a request. If that is the only
  *     remaining segment, the request is also completed and freed.
  *
- *     This is a remnant of how older block drivers handled IO completions.
- *     Modern drivers typically end IO on the full request in one go, unless
+ *     This is a remnant of how older block drivers handled I/O completions.
+ *     Modern drivers typically end I/O on the full request in one go, unless
  *     they have a residual value to account for. For that case this function
  *     isn't really useful, unless the residual just happens to be the
  *     full current segment. In other words, don't use this function in new
@@ -1870,12 +1870,12 @@ EXPORT_SYMBOL(end_request);
 /**
  * blk_end_io - Generic end_io function to complete a request.
  * @rq:           the request being processed
- * @error:        0 for success, < 0 for error
+ * @error:        %0 for success, < %0 for error
  * @nr_bytes:     number of bytes to complete @rq
  * @bidi_bytes:   number of bytes to complete @rq->next_rq
  * @drv_callback: function called between completion of bios in the request
  *                and completion of the request.
- *                If the callback returns non 0, this helper returns without
+ *                If the callback returns non %0, this helper returns without
  *                completion of the request.
  *
  * Description:
@@ -1883,8 +1883,8 @@ EXPORT_SYMBOL(end_request);
  *     If @rq has leftover, sets it up for the next range of segments.
  *
  * Return:
- *     0 - we are done with this request
- *     1 - this request is not freed yet, it still has pending buffers.
+ *     %0 - we are done with this request
+ *     %1 - this request is not freed yet, it still has pending buffers.
  **/
 static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
 		      unsigned int bidi_bytes,
@@ -1919,7 +1919,7 @@ static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
 /**
  * blk_end_request - Helper function for drivers to complete the request.
  * @rq:       the request being processed
- * @error:    0 for success, < 0 for error
+ * @error:    %0 for success, < %0 for error
  * @nr_bytes: number of bytes to complete
  *
  * Description:
@@ -1927,8 +1927,8 @@ static int blk_end_io(struct request *rq, int error, unsigned int nr_bytes,
  *     If @rq has leftover, sets it up for the next range of segments.
  *
  * Return:
- *     0 - we are done with this request
- *     1 - still buffers pending for this request
+ *     %0 - we are done with this request
+ *     %1 - still buffers pending for this request
  **/
 int blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
 {
@@ -1939,15 +1939,15 @@ EXPORT_SYMBOL_GPL(blk_end_request);
 /**
  * __blk_end_request - Helper function for drivers to complete the request.
  * @rq:       the request being processed
- * @error:    0 for success, < 0 for error
+ * @error:    %0 for success, < %0 for error
  * @nr_bytes: number of bytes to complete
  *
  * Description:
  *     Must be called with queue lock held unlike blk_end_request().
  *
  * Return:
- *     0 - we are done with this request
- *     1 - still buffers pending for this request
+ *     %0 - we are done with this request
+ *     %1 - still buffers pending for this request
  **/
 int __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
 {
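As a usage illustration of the completion helpers documented above — a sketch based only on the signatures and return conventions shown here, not taken from the patch — a driver's interrupt-time completion path might use __blk_end_request(), since the queue lock is already held there:

    /* Sketch: complete 'bytes' of rq from a context that holds the queue lock. */
    if (__blk_end_request(rq, error, bytes))
            return;         /* %1: buffers still pending, request not freed yet */
    /* %0: the whole request is done and has been freed by the block layer */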
@@ -1966,7 +1966,7 @@ EXPORT_SYMBOL_GPL(__blk_end_request);
 /**
  * blk_end_bidi_request - Helper function for drivers to complete bidi request.
  * @rq:         the bidi request being processed
- * @error:      0 for success, < 0 for error
+ * @error:      %0 for success, < %0 for error
  * @nr_bytes:   number of bytes to complete @rq
  * @bidi_bytes: number of bytes to complete @rq->next_rq
  *
@@ -1974,8 +1974,8 @@ EXPORT_SYMBOL_GPL(__blk_end_request);
  *     Ends I/O on a number of bytes attached to @rq and @rq->next_rq.
  *
  * Return:
- *     0 - we are done with this request
- *     1 - still buffers pending for this request
+ *     %0 - we are done with this request
+ *     %1 - still buffers pending for this request
  **/
 int blk_end_bidi_request(struct request *rq, int error, unsigned int nr_bytes,
 			 unsigned int bidi_bytes)
@@ -1987,11 +1987,11 @@ EXPORT_SYMBOL_GPL(blk_end_bidi_request);
 /**
  * blk_end_request_callback - Special helper function for tricky drivers
  * @rq:           the request being processed
- * @error:        0 for success, < 0 for error
+ * @error:        %0 for success, < %0 for error
  * @nr_bytes:     number of bytes to complete
  * @drv_callback: function called between completion of bios in the request
  *                and completion of the request.
- *                If the callback returns non 0, this helper returns without
+ *                If the callback returns non %0, this helper returns without
  *                completion of the request.
  *
  * Description:
@@ -2004,8 +2004,8 @@ EXPORT_SYMBOL_GPL(blk_end_bidi_request);
  *     Don't use this interface in other places anymore.
  *
  * Return:
- *     0 - we are done with this request
- *     1 - this request is not freed yet.
+ *     %0 - we are done with this request
+ *     %1 - this request is not freed yet.
  *          this request still has pending buffers or
  *          the driver doesn't want to finish this request yet.
  **/
+3 −3
@@ -16,7 +16,7 @@
 /**
  * blk_end_sync_rq - executes a completion event on a request
  * @rq: request to complete
- * @error: end io status of the request
+ * @error: end I/O status of the request
  */
 static void blk_end_sync_rq(struct request *rq, int error)
 {
@@ -41,7 +41,7 @@ static void blk_end_sync_rq(struct request *rq, int error)
  * @done:	I/O completion handler
  *
  * Description:
- *    Insert a fully prepared request at the back of the io scheduler queue
+ *    Insert a fully prepared request at the back of the I/O scheduler queue
  *    for execution.  Don't wait for completion.
  */
 void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
@@ -72,7 +72,7 @@ EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
  * @at_head:    insert request at head or tail of queue
  *
  * Description:
- *    Insert a fully prepared request at the back of the io scheduler queue
+ *    Insert a fully prepared request at the back of the I/O scheduler queue
  *    for execution and wait for completion.
  */
 int blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
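For context on blk_execute_rq(), documented above as inserting a fully prepared request and waiting for completion, here is a rough sketch of a synchronous pass-through command. The blk_get_request()/blk_put_request() calls and the trailing at_head argument are assumptions about this kernel generation's API rather than anything taken from the patch.

    /* Sketch: issue a prepared REQ_TYPE_BLOCK_PC request and wait for it. */
    struct request *rq = blk_get_request(q, READ, GFP_KERNEL);

    rq->cmd_type = REQ_TYPE_BLOCK_PC;            /* pass-through, not a fs request */
    /* ... fill in rq->cmd[], rq->timeout, and map any data buffers ... */
    err = blk_execute_rq(q, bd_disk, rq, 0);     /* 0 = insert at tail, wait */
    blk_put_request(rq);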
+2 −2
@@ -109,8 +109,8 @@ EXPORT_SYMBOL(blk_rq_map_integrity_sg);
 
 /**
  * blk_integrity_compare - Compare integrity profile of two block devices
- * @b1:		Device to compare
- * @b2:		Device to compare
+ * @bd1:	Device to compare
+ * @bd2:	Device to compare
  *
  * Description: Meta-devices like DM and MD need to verify that all
  * sub-devices use the same integrity format before advertising to
+8 −8
@@ -85,17 +85,17 @@ static int __blk_rq_map_user(struct request_queue *q, struct request *rq,
 }
 
 /**
- * blk_rq_map_user - map user data to a request, for REQ_BLOCK_PC usage
+ * blk_rq_map_user - map user data to a request, for REQ_TYPE_BLOCK_PC usage
  * @q:		request queue where request should be inserted
  * @rq:		request structure to fill
  * @ubuf:	the user buffer
  * @len:	length of user data
  *
  * Description:
- *    Data will be mapped directly for zero copy io, if possible. Otherwise
+ *    Data will be mapped directly for zero copy I/O, if possible. Otherwise
  *    a kernel bounce buffer is used.
  *
- *    A matching blk_rq_unmap_user() must be issued at the end of io, while
+ *    A matching blk_rq_unmap_user() must be issued at the end of I/O, while
  *    still in process context.
  *
  *    Note: The mapped bio may need to be bounced through blk_queue_bounce()
@@ -154,7 +154,7 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
 EXPORT_SYMBOL(blk_rq_map_user);
 
 /**
- * blk_rq_map_user_iov - map user data to a request, for REQ_BLOCK_PC usage
+ * blk_rq_map_user_iov - map user data to a request, for REQ_TYPE_BLOCK_PC usage
  * @q:		request queue where request should be inserted
  * @rq:		request to map data to
  * @iov:	pointer to the iovec
@@ -162,10 +162,10 @@ EXPORT_SYMBOL(blk_rq_map_user);
  * @len:	I/O byte count
  *
  * Description:
- *    Data will be mapped directly for zero copy io, if possible. Otherwise
+ *    Data will be mapped directly for zero copy I/O, if possible. Otherwise
  *    a kernel bounce buffer is used.
  *
- *    A matching blk_rq_unmap_user() must be issued at the end of io, while
+ *    A matching blk_rq_unmap_user() must be issued at the end of I/O, while
  *    still in process context.
  *
  *    Note: The mapped bio may need to be bounced through blk_queue_bounce()
@@ -224,7 +224,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
  * Description:
  *    Unmap a rq previously mapped by blk_rq_map_user(). The caller must
  *    supply the original rq->bio from the blk_rq_map_user() return, since
- *    the io completion may have changed rq->bio.
+ *    the I/O completion may have changed rq->bio.
  */
 int blk_rq_unmap_user(struct bio *bio)
 {
@@ -250,7 +250,7 @@ int blk_rq_unmap_user(struct bio *bio)
 EXPORT_SYMBOL(blk_rq_unmap_user);
 
 /**
- * blk_rq_map_kern - map kernel data to a request, for REQ_BLOCK_PC usage
+ * blk_rq_map_kern - map kernel data to a request, for REQ_TYPE_BLOCK_PC usage
  * @q:		request queue where request should be inserted
  * @rq:		request to fill
  * @kbuf:	the kernel buffer
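Tying the mapping helpers together, here is a rough sketch of the intended call sequence for a REQ_TYPE_BLOCK_PC request, based on the parameter docs above; it assumes the four-argument blk_rq_map_user() of this kernel generation, and the surrounding setup (q, rq, bd_disk, ubuf, len) is hypothetical. Map the user buffer, remember rq->bio, execute, then unmap from process context, as the comments above require.

    /* Sketch: map a user buffer into rq, execute, then unmap. */
    struct bio *orig_bio;
    int ret;

    ret = blk_rq_map_user(q, rq, ubuf, len);   /* zero-copy I/O if possible, else bounce */
    if (ret)
            return ret;
    orig_bio = rq->bio;                        /* unmap needs the original bio */

    blk_execute_rq(q, bd_disk, rq, 0);

    ret = blk_rq_unmap_user(orig_bio);         /* must still be in process context */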