
Commit 5ccc3874 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.dk/linux-block

* 'for-linus' of git://git.kernel.dk/linux-block: (23 commits)
  Revert "cfq: Remove special treatment for metadata rqs."
  block: fix flush machinery for stacking drivers with differring flush flags
  block: improve rq_affinity placement
  blktrace: add FLUSH/FUA support
  Move some REQ flags to the common bio/request area
  allow blk_flush_policy to return REQ_FSEQ_DATA independent of *FLUSH
  xen/blkback: Make description more obvious.
  cfq-iosched: Add documentation about idling
  block: Make rq_affinity = 1 work as expected
  block: swim3: fix unterminated of_device_id table
  block/genhd.c: remove useless cast in diskstats_show()
  drivers/cdrom/cdrom.c: relax check on dvd manufacturer value
  drivers/block/drbd/drbd_nl.c: use bitmap_parse instead of __bitmap_parse
  bsg-lib: add module.h include
  cfq-iosched: Reduce linked group count upon group destruction
  blk-throttle: correctly determine sync bio
  loop: fix deadlock when sysfs and LOOP_CLR_FD race against each other
  loop: add BLK_DEV_LOOP_MIN_COUNT=%i to allow distros 0 pre-allocated loop devices
  loop: add management interface for on-demand device allocation
  loop: replace linked list of allocated devices with an idr index
  ...
parents 0c3bef61 b53d1ed7
+71 −0
@@ -43,3 +43,74 @@ If one sets slice_idle=0 and if storage supports NCQ, CFQ internally switches
to IOPS mode and starts providing fairness in terms of number of requests
dispatched. Note that this mode switching takes effect only for group
scheduling. For non-cgroup users nothing should change.

CFQ IO scheduler Idling Theory
===============================
Idling on a queue is primarily about waiting for the next request to come
on the same queue after a request completes. While idling, CFQ will not
dispatch requests from other cfq queues even if requests are pending there.

The rationale behind idling is that it can cut down on the number of seeks
on rotational media. For example, if a process is doing dependent
sequential reads (the next read is issued only after the previous one
completes), then not dispatching requests from other queues should help:
we do not move the disk head, and we keep dispatching sequential IO from
one queue.
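
To make the seek argument concrete, here is a toy user-space model (a
sketch with made-up numbers, not CFQ code) that counts head movement for
two sequential readers under interleaved versus idled dispatch:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	long head, interleaved = 0, idled = 0;
	long a, b;
	int i;

	/* interleaved dispatch: A, B, A, B, ... seeks across the disk */
	head = 0; a = 0; b = 1000000;
	for (i = 0; i < 100; i++) {
		interleaved += labs(a - head); head = a; a += 8;
		interleaved += labs(b - head); head = b; b += 8;
	}

	/* idled dispatch: serve all of A's window, then all of B's */
	head = 0; a = 0; b = 1000000;
	for (i = 0; i < 100; i++) {
		idled += labs(a - head); head = a; a += 8;
	}
	for (i = 0; i < 100; i++) {
		idled += labs(b - head); head = b; b += 8;
	}

	printf("interleaved: %ld sectors of head movement\n", interleaved);
	printf("idled:       %ld sectors of head movement\n", idled);
	return 0;
}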

CFQ has the following service trees, and the various queues are put on
one of these trees.

	sync-idle	sync-noidle	async

All cfq queues doing synchronous sequential IO go onto the sync-idle tree.
On this tree we idle on each queue individually.

All synchronous non-sequential queues go onto the sync-noidle tree, as do
any requests marked with REQ_NOIDLE. On this tree we do not idle on
individual queues; instead, we idle on the whole group of queues, i.e. the
tree. So if there are 4 queues waiting to dispatch IO, we will idle only
once the last queue has dispatched its IO and there is no more IO on this
service tree.

All async writes go onto the async service tree. There is no idling on
async queues.

CFQ has some optimizations for SSDs: if it detects non-rotational media
that supports a higher queue depth (multiple requests in flight at a
time), it cuts out idling on individual queues, all the queues move to
the sync-noidle tree, and only the tree idle remains. This tree idling
provides isolation from the buffered write queues on the async tree.
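
The placement rules above can be summarized in a short sketch
(illustrative C; the names are made up and this is not CFQ's actual
code):

enum service_tree { SYNC_IDLE, SYNC_NOIDLE, ASYNC };

static enum service_tree place_queue(int is_sync, int is_sequential,
				     int has_noidle, int ssd_deep_queue)
{
	if (!is_sync)
		return ASYNC;		/* never idled */
	if (!is_sequential || has_noidle || ssd_deep_queue)
		return SYNC_NOIDLE;	/* one idle for the whole tree */
	return SYNC_IDLE;		/* idle on each queue individually */
}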

FAQ
===
Q1. Why idle at all on queues marked with REQ_NOIDLE?

A1. We only do tree idling (one idle across all queues on the sync-noidle
    tree) for queues marked with REQ_NOIDLE. This helps in providing
    isolation from all the sync-idle queues. Otherwise, in the presence of
    many sequential readers, other synchronous IO might not get its fair
    share of the disk.

    For example, suppose there are 10 sequential readers doing IO and each
    gets a 100ms time slice. A REQ_NOIDLE request that comes in will be
    scheduled roughly 1 second later. If we do not idle after the
    REQ_NOIDLE request completes, and a couple of milliseconds later
    another REQ_NOIDLE request comes in, it will again be scheduled about
    1 second out. Repeat this and notice how such a workload can lose its
    share of the disk and suffer badly in the presence of multiple
    sequential readers.
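
    Spelling out the arithmetic of the example (a back-of-envelope
    user-space calculation, not kernel code):

#include <stdio.h>

int main(void)
{
	int readers = 10, slice_ms = 100;

	/* a REQ_NOIDLE request waits behind every reader's time slice */
	int delay_ms = readers * slice_ms;		/* ~1000 ms */

	printf("per-request delay: ~%d ms\n", delay_ms);
	printf("10 dependent requests: ~%d s total\n",
	       10 * delay_ms / 1000);
	return 0;
}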

    fsync can generate dependent IO, where a bunch of data is written in
    the context of the fsync and some journaling data is written
    afterwards. The journaling data comes in only after the fsync has
    finished its IO (at least for ext4 that seemed to be the case). Now,
    if one decides not to idle on the fsync thread because of REQ_NOIDLE,
    the next journaling write will not get scheduled for another second.
    A process doing small fsyncs will suffer badly in the presence of
    multiple sequential readers.

    Hence, doing tree idling for threads that use the REQ_NOIDLE flag on
    their requests provides isolation from multiple sequential readers
    while still not idling on individual threads.

Q2. When to specify REQ_NOIDLE?
A2. Whenever one is doing a synchronous write and does not expect more
    writes to be dispatched from the same context soon, one should be
    able to specify REQ_NOIDLE on the writes; that should work well for
    most cases.
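
    As a hedged sketch of what that might look like against the block
    APIs of this era (WRITE, REQ_SYNC and REQ_NOIDLE are the real flags;
    the helper name is made up, and the caller is assumed to own a fully
    set-up bio):

#include <linux/bio.h>
#include <linux/blk_types.h>
#include <linux/fs.h>

/* Hypothetical helper: submit a one-off synchronous write and tell
 * CFQ not to expect further IO from this context soon. */
static void submit_one_off_sync_write(struct bio *bio)
{
	submit_bio(WRITE | REQ_SYNC | REQ_NOIDLE, bio);
}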
+6 −3
@@ -1350,9 +1350,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			it is equivalent to "nosmp", which also disables
			the IO APIC.

	max_loop=	[LOOP] Maximum number of loopback devices that can
			be mounted
			Format: <1-256>
	max_loop=	[LOOP] The number of loop block devices that get
	(loop.max_loop)	unconditionally pre-created at init time. The default
			number is configured by BLK_DEV_LOOP_MIN_COUNT. Instead
			of statically allocating a predefined number, loop
			devices can be requested on-demand with the
			/dev/loop-control interface.
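
	For the on-demand path, a minimal user-space sketch (assuming the
	/dev/loop-control interface added by this series; LOOP_CTL_GET_FREE
	returns the number of a free, possibly newly created, loop device):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/loop.h>

int main(void)
{
	int ctl = open("/dev/loop-control", O_RDWR);
	if (ctl < 0) {
		perror("open /dev/loop-control");
		return 1;
	}

	/* ask the kernel for a free loop device, allocating one if needed */
	int nr = ioctl(ctl, LOOP_CTL_GET_FREE);
	if (nr < 0) {
		perror("LOOP_CTL_GET_FREE");
		close(ctl);
		return 1;
	}

	printf("/dev/loop%d is free\n", nr);
	close(ctl);
	return 0;
}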

	mcatest=	[IA-64]

+10 −0
@@ -65,6 +65,16 @@ config BLK_DEV_BSG

	  If unsure, say Y.

config BLK_DEV_BSGLIB
	bool "Block layer SG support v4 helper lib"
	default n
	select BLK_DEV_BSG
	help
	  Subsystems will normally enable this if needed. Users will not
	  normally need to manually enable this.

	  If unsure, say N.

config BLK_DEV_INTEGRITY
	bool "Block layer data integrity support"
	---help---
+1 −0
@@ -8,6 +8,7 @@ obj-$(CONFIG_BLOCK) := elevator.o blk-core.o blk-tag.o blk-sysfs.o \
			blk-iopoll.o blk-lib.o ioctl.o genhd.o scsi_ioctl.o

obj-$(CONFIG_BLK_DEV_BSG)	+= bsg.o
obj-$(CONFIG_BLK_DEV_BSGLIB)	+= bsg-lib.o
obj-$(CONFIG_BLK_CGROUP)	+= blk-cgroup.o
obj-$(CONFIG_BLK_DEV_THROTTLING)	+= blk-throttle.o
obj-$(CONFIG_IOSCHED_NOOP)	+= noop-iosched.o
+6 −2
@@ -1702,6 +1702,7 @@ EXPORT_SYMBOL_GPL(blk_rq_check_limits);
int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
{
	unsigned long flags;
	int where = ELEVATOR_INSERT_BACK;

	if (blk_rq_check_limits(q, rq))
		return -EIO;
@@ -1718,7 +1719,10 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
	 */
	BUG_ON(blk_queued_rq(rq));

	add_acct_request(q, rq, ELEVATOR_INSERT_BACK);
	if (rq->cmd_flags & (REQ_FLUSH|REQ_FUA))
		where = ELEVATOR_INSERT_FLUSH;

	add_acct_request(q, rq, where);
	spin_unlock_irqrestore(q->queue_lock, flags);

	return 0;
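
For context, blk_insert_cloned_request() is the path request-based
stacking drivers such as dm-multipath use to resubmit a cloned request
to an underlying queue. A minimal hedged sketch of such a caller (the
function name is illustrative):

/* With the fix above, a cloned REQ_FLUSH/REQ_FUA request entering
 * here is routed to the lower queue's flush machinery instead of
 * being appended behind the elevator. */
static int dispatch_clone(struct request_queue *lower_q,
			  struct request *clone)
{
	/* fails with -EIO if the clone exceeds the device's limits */
	return blk_insert_cloned_request(lower_q, clone);
}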
@@ -2275,7 +2279,7 @@ static bool blk_end_bidi_request(struct request *rq, int error,
 *     %false - we are done with this request
 *     %true  - still buffers pending for this request
 **/
static bool __blk_end_bidi_request(struct request *rq, int error,
bool __blk_end_bidi_request(struct request *rq, int error,
				   unsigned int nr_bytes, unsigned int bidi_bytes)
{
	if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))