
Commit 28a8f0d3 authored by Mike Christie, committed by Jens Axboe

block, drivers, fs: rename REQ_FLUSH to REQ_PREFLUSH



To avoid confusion between REQ_OP_FLUSH, which is handled by
request_fn drivers, and upper layers requesting the block layer
perform a flush sequence along with possibly a WRITE, this patch
renames REQ_FLUSH to REQ_PREFLUSH.
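For context, a minimal sketch of where each name lives after this series. This is illustrative only and assumes the interfaces as of this point in the series (single-argument submit_bio(), bio->bi_rw flags, req_op()); it is not taken from the patch itself.

	/* Upper layers (filesystems, remapping drivers): REQ_PREFLUSH is
	 * a flag OR'ed into a bio's r/w flags, optionally alongside data
	 * and REQ_FUA. */
	bio->bi_rw |= REQ_PREFLUSH | REQ_FUA;
	submit_bio(bio);

	/* request_fn drivers never see REQ_PREFLUSH: the block layer's
	 * flush machinery issues the preflush step as a separate, empty
	 * request whose operation is REQ_OP_FLUSH. */
	if (req_op(rq) == REQ_OP_FLUSH) {
		/* send the device-specific cache flush command */
	}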

Signed-off-by: Mike Christie <mchristi@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
parent a418090a
+11 −11
@@ -20,11 +20,11 @@ a forced cache flush, and the Force Unit Access (FUA) flag for requests.
 Explicit cache flushes
 ----------------------
 
-The REQ_FLUSH flag can be OR ed into the r/w flags of a bio submitted from
+The REQ_PREFLUSH flag can be OR ed into the r/w flags of a bio submitted from
 the filesystem and will make sure the volatile cache of the storage device
 has been flushed before the actual I/O operation is started.  This explicitly
 guarantees that previously completed write requests are on non-volatile
-storage before the flagged bio starts. In addition the REQ_FLUSH flag can be
+storage before the flagged bio starts. In addition the REQ_PREFLUSH flag can be
 set on an otherwise empty bio structure, which causes only an explicit cache
 flush without any dependent I/O.  It is recommend to use
 the blkdev_issue_flush() helper for a pure cache flush.
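As the hunk above notes, a pure cache flush should go through the helper. A minimal sketch of its use, assuming the three-argument blkdev_issue_flush() signature of this kernel generation:

	/* Flush the device's volatile write cache with no dependent I/O.
	 * The third argument can return the sector where an error hit. */
	int ret = blkdev_issue_flush(bdev, GFP_KERNEL, NULL);

	if (ret)
		pr_err("cache flush failed: %d\n", ret);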
@@ -41,21 +41,21 @@ signaled after the data has been committed to non-volatile storage.
 Implementation details for filesystems
 --------------------------------------
 
-Filesystems can simply set the REQ_FLUSH and REQ_FUA bits and do not have to
+Filesystems can simply set the REQ_PREFLUSH and REQ_FUA bits and do not have to
 worry if the underlying devices need any explicit cache flushing and how
-the Forced Unit Access is implemented.  The REQ_FLUSH and REQ_FUA flags
+the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
 may both be set on a single bio.
 
 
 Implementation details for make_request_fn based block drivers
 --------------------------------------------------------------
 
-These drivers will always see the REQ_FLUSH and REQ_FUA bits as they sit
+These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
 directly below the submit_bio interface.  For remapping drivers the REQ_FUA
 bits need to be propagated to underlying devices, and a global flush needs
-to be implemented for bios with the REQ_FLUSH bit set.  For real device
-drivers that do not have a volatile cache the REQ_FLUSH and REQ_FUA bits
-on non-empty bios can simply be ignored, and REQ_FLUSH requests without
+to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
+drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
+on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
 data can be completed successfully without doing any work.  Drivers for
 devices with volatile caches need to implement the support for these
 flags themselves without any help from the block layer.
@@ -65,8 +65,8 @@ Implementation details for request_fn based block drivers
 --------------------------------------------------------------
 
 For devices that do not support volatile write caches there is no driver
-support required, the block layer completes empty REQ_FLUSH requests before
-entering the driver and strips off the REQ_FLUSH and REQ_FUA bits from
+support required, the block layer completes empty REQ_PREFLUSH requests before
+entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
 requests that have a payload.  For devices with volatile write caches the
 driver needs to tell the block layer that it supports flushing caches by
 doing:
@@ -74,7 +74,7 @@ doing:
 	blk_queue_write_cache(sdkp->disk->queue, true, false);
 
 and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
-REQ_FLUSH requests with a payload are automatically turned into a sequence
+REQ_PREFLUSH requests with a payload are automatically turned into a sequence
 of an empty REQ_OP_FLUSH request followed by the actual write by the block
 layer.  For devices that also support the FUA bit the block layer needs
 to be told to pass through the REQ_FUA bit using:
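The hunk stops at its context boundary; for reference, FUA pass-through is enabled with the same helper, third argument set. A sketch based on the blk_queue_write_cache() call shown above, not quoted from the file:

	/* volatile write cache present and native FUA supported: the
	 * block layer passes REQ_FUA through instead of emulating it
	 * with a flush after the write */
	blk_queue_write_cache(sdkp->disk->queue, true, true);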
+5 −5
@@ -14,14 +14,14 @@ Log Ordering
 
 We log things in order of completion once we are sure the write is no longer in
 cache.  This means that normal WRITE requests are not actually logged until the
-next REQ_FLUSH request.  This is to make it easier for userspace to replay the
-log in a way that correlates to what is on disk and not what is in cache, to
-make it easier to detect improper waiting/flushing.
+next REQ_PREFLUSH request.  This is to make it easier for userspace to replay
+the log in a way that correlates to what is on disk and not what is in cache,
+to make it easier to detect improper waiting/flushing.
 
 This works by attaching all WRITE requests to a list once the write completes.
-Once we see a REQ_FLUSH request we splice this list onto the request and once
+Once we see a REQ_PREFLUSH request we splice this list onto the request and once
 the FLUSH request completes we log all of the WRITEs and then the FLUSH.  Only
-completed WRITEs, at the time the REQ_FLUSH is issued, are added in order to
+completed WRITEs, at the time the REQ_PREFLUSH is issued, are added in order to
 simulate the worst case scenario with regard to power failures.  Consider the
 following example (W means write, C means complete):
 
+6 −6
@@ -1029,7 +1029,7 @@ static bool blk_rq_should_init_elevator(struct bio *bio)
 	 * Flush requests do not use the elevator so skip initialization.
 	 * This allows a request to share the flush and elevator data.
 	 */
-	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA))
+	if (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA))
 		return false;
 
 	return true;
@@ -1736,7 +1736,7 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
 		return BLK_QC_T_NONE;
 	}
 
-	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
+	if (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA)) {
 		spin_lock_irq(q->queue_lock);
 		where = ELEVATOR_INSERT_FLUSH;
 		goto get_rq;
@@ -1968,9 +1968,9 @@ generic_make_request_checks(struct bio *bio)
 	 * drivers without flush support don't have to worry
 	 * about them.
 	 */
-	if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) &&
+	if ((bio->bi_rw & (REQ_PREFLUSH | REQ_FUA)) &&
 	    !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
-		bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
+		bio->bi_rw &= ~(REQ_PREFLUSH | REQ_FUA);
 		if (!nr_sectors) {
 			err = 0;
 			goto end_io;
@@ -2217,7 +2217,7 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 	 */
 	BUG_ON(blk_queued_rq(rq));
 
-	if (rq->cmd_flags & (REQ_FLUSH|REQ_FUA))
+	if (rq->cmd_flags & (REQ_PREFLUSH | REQ_FUA))
 		where = ELEVATOR_INSERT_FLUSH;
 
 	add_acct_request(q, rq, where);
@@ -3311,7 +3311,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 		/*
 		 * rq is already accounted, so use raw insert
 		 */
-		if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA))
+		if (rq->cmd_flags & (REQ_PREFLUSH | REQ_FUA))
 			__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
 		else
 			__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
+8 −8
@@ -10,8 +10,8 @@
  * optional steps - PREFLUSH, DATA and POSTFLUSH - according to the request
  * properties and hardware capability.
  *
- * If a request doesn't have data, only REQ_FLUSH makes sense, which
- * indicates a simple flush request.  If there is data, REQ_FLUSH indicates
+ * If a request doesn't have data, only REQ_PREFLUSH makes sense, which
+ * indicates a simple flush request.  If there is data, REQ_PREFLUSH indicates
  * that the device cache should be flushed before the data is executed, and
  * REQ_FUA means that the data must be on non-volatile media on request
  * completion.
@@ -20,11 +20,11 @@
  * difference.  The requests are either completed immediately if there's no
  * data or executed as normal requests otherwise.
  *
- * If the device has writeback cache and supports FUA, REQ_FLUSH is
+ * If the device has writeback cache and supports FUA, REQ_PREFLUSH is
  * translated to PREFLUSH but REQ_FUA is passed down directly with DATA.
  *
- * If the device has writeback cache and doesn't support FUA, REQ_FLUSH is
- * translated to PREFLUSH and REQ_FUA to POSTFLUSH.
+ * If the device has writeback cache and doesn't support FUA, REQ_PREFLUSH
+ * is translated to PREFLUSH and REQ_FUA to POSTFLUSH.
  *
  * The actual execution of flush is double buffered.  Whenever a request
  * needs to execute PRE or POSTFLUSH, it queues at
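Restating the translation rules from the comment above as a table (a summary only, nothing beyond what the comment says):

	device capability          REQ_PREFLUSH                REQ_FUA
	no writeback cache         ignored (empty requests     ignored
	                           complete immediately)
	writeback cache + FUA      PREFLUSH step               passed down with DATA
	writeback cache, no FUA    PREFLUSH step               POSTFLUSH step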
@@ -103,7 +103,7 @@ static unsigned int blk_flush_policy(unsigned long fflags, struct request *rq)
 		policy |= REQ_FSEQ_DATA;
 
 	if (fflags & (1UL << QUEUE_FLAG_WC)) {
-		if (rq->cmd_flags & REQ_FLUSH)
+		if (rq->cmd_flags & REQ_PREFLUSH)
 			policy |= REQ_FSEQ_PREFLUSH;
 		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
 		    (rq->cmd_flags & REQ_FUA))
@@ -391,9 +391,9 @@ void blk_insert_flush(struct request *rq)
 
 	/*
 	 * @policy now records what operations need to be done.  Adjust
-	 * REQ_FLUSH and FUA for the driver.
+	 * REQ_PREFLUSH and FUA for the driver.
 	 */
-	rq->cmd_flags &= ~REQ_FLUSH;
+	rq->cmd_flags &= ~REQ_PREFLUSH;
 	if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
 		rq->cmd_flags &= ~REQ_FUA;
 
+2 −2
@@ -1247,7 +1247,7 @@ static int blk_mq_direct_issue_request(struct request *rq, blk_qc_t *cookie)
 static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 {
 	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_rw);
-	const int is_flush_fua = bio->bi_rw & (REQ_FLUSH | REQ_FUA);
+	const int is_flush_fua = bio->bi_rw & (REQ_PREFLUSH | REQ_FUA);
 	struct blk_map_ctx data;
 	struct request *rq;
 	unsigned int request_count = 0;
@@ -1344,7 +1344,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 {
 	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_rw);
-	const int is_flush_fua = bio->bi_rw & (REQ_FLUSH | REQ_FUA);
+	const int is_flush_fua = bio->bi_rw & (REQ_PREFLUSH | REQ_FUA);
 	struct blk_plug *plug;
 	unsigned int request_count = 0;
 	struct blk_map_ctx data;