
Commit b80fed95 authored by Linus Torvalds
Pull device mapper updates from Mike Snitzer:

 - based on Jens' 'for-4.7/core' to have DM thinp's discard support use
   bio_inc_remaining() and the block core's new async
   __blkdev_issue_discard() interface (a sketch of the pattern follows
   this list)

 - make DM multipath's fast code-paths lockless, using
   lockless_dereference(), to significantly improve large NUMA performance
   when using blk-mq.  The m->lock spinlock contention was a serious
   bottleneck (a sketch of the lockless read pattern follows the commit
   list below).

 - a few other small code cleanups and Documentation fixes
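
A minimal sketch of the discard pattern the first bullet refers to, written
against the 4.7-era block API. The function sketch_async_discard() and its
arguments are illustrative, not the dm-thin code; __blkdev_issue_discard(),
bio_chain() and bio_inc_remaining() are the real block core primitives,
though their signatures have changed in later kernels:

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Illustrative only: chain an async discard to a parent bio. */
static void sketch_async_discard(struct block_device *bdev, sector_t sector,
				 sector_t nr_sects, struct bio *parent)
{
	struct bio *discard = NULL;

	/* Build, but do not submit, a chain of discard bios. */
	if (__blkdev_issue_discard(bdev, sector, nr_sects, GFP_NOWAIT, 0,
				   &discard) || !discard)
		return;

	/*
	 * bio_chain() pins @parent (internally via bio_inc_remaining())
	 * so @parent completes only after the whole discard chain has.
	 */
	bio_chain(discard, parent);
	submit_bio(REQ_WRITE | REQ_DISCARD, discard);	/* 4.7 two-arg form */
}

Building the chain up front and submitting it once is what lets dm thinp
cover long runs of blocks with fewer, larger discard bios instead of waiting
on each discard synchronously.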

* tag 'dm-4.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm thin: unroll issue_discard() to create longer discard bio chains
  dm thin: use __blkdev_issue_discard for async discard support
  dm thin: remove __bio_inc_remaining() and switch to using bio_inc_remaining()
  dm raid: make sure no feature flags are set in metadata
  dm ioctl: drop use of __GFP_REPEAT in copy_params()'s __vmalloc() call
  dm stats: fix spelling mistake in Documentation
  dm cache: update cache-policies.txt now that mq is an alias for smq
  dm mpath: eliminate use of spinlock in IO fast-paths
  dm mpath: move trigger_event member to the end of 'struct multipath'
  dm mpath: use atomic_t for counting members of 'struct multipath'
  dm mpath: switch to using bitops for state flags
  dm thin: Remove return statement from void function
  dm: remove unused mapped_device argument from free_tio()
parents 24b9f0cf 202bae52
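
As noted in the multipath bullet above, here is a sketch of the lockless
fast-path style those mpath commits move to. The struct and field names are
illustrative stand-ins, not the real 'struct multipath';
lockless_dereference() is the real 4.7 helper from <linux/compiler.h>
(later superseded by READ_ONCE()):

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/compiler.h>

struct sketch_pg;			/* stand-in for a priority group */

struct sketch_mpath {
	struct sketch_pg *current_pg;	/* repointed only under the lock */
	atomic_t nr_valid_paths;	/* was a spinlock-guarded count */
	unsigned long flags;		/* state bits, was booleans */
};

enum { SKETCH_QUEUE_IO = 0 };		/* bit index in ->flags */

static struct sketch_pg *sketch_choose_pg(struct sketch_mpath *m)
{
	/* Dependency-ordered load: no m->lock taken on the hot path. */
	struct sketch_pg *pg = lockless_dereference(m->current_pg);

	if (!pg ||
	    test_bit(SKETCH_QUEUE_IO, &m->flags) ||
	    !atomic_read(&m->nr_valid_paths))
		return NULL;	/* fall back to the locked slow path */

	return pg;
}

Readers pay only a dependent load, a bit test and an atomic read per IO,
which is why removing the shared m->lock from the fast path helps so much
on large NUMA machines.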
+16 −18
@@ -11,7 +11,7 @@ Every bio that is mapped by the target is referred to the policy.
 The policy can return a simple HIT or MISS or issue a migration.
 
 Currently there's no way for the policy to issue background work,
-e.g. to start writing back dirty blocks that are going to be evicte
+e.g. to start writing back dirty blocks that are going to be evicted
 soon.
 
 Because we map bios, rather than requests it's easy for the policy
@@ -48,7 +48,7 @@ with the multiqueue (mq) policy.
 
 The smq policy (vs mq) offers the promise of less memory utilization,
 improved performance and increased adaptability in the face of changing
-workloads.  SMQ also does not have any cumbersome tuning knobs.
+workloads.  smq also does not have any cumbersome tuning knobs.
 
 Users may switch from "mq" to "smq" simply by appropriately reloading a
 DM table that is using the cache target.  Doing so will cause all of the
@@ -57,47 +57,45 @@ degrade slightly until smq recalculates the origin device's hotspots
 that should be cached.
 
 Memory usage:
-The mq policy uses a lot of memory; 88 bytes per cache block on a 64
+The mq policy used a lot of memory; 88 bytes per cache block on a 64
 bit machine.
 
-SMQ uses 28bit indexes to implement it's data structures rather than
+smq uses 28bit indexes to implement it's data structures rather than
 pointers.  It avoids storing an explicit hit count for each block.  It
-has a 'hotspot' queue rather than a pre cache which uses a quarter of
+has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
 the entries (each hotspot block covers a larger area than a single
 cache block).
 
-All these mean smq uses ~25bytes per cache block.  Still a lot of
+All this means smq uses ~25bytes per cache block.  Still a lot of
 memory, but a substantial improvement nontheless.
 
 Level balancing:
-MQ places entries in different levels of the multiqueue structures
-based on their hit count (~ln(hit count)).  This means the bottom
-levels generally have the most entries, and the top ones have very
-few.  Having unbalanced levels like this reduces the efficacy of the
+mq placed entries in different levels of the multiqueue structures
+based on their hit count (~ln(hit count)).  This meant the bottom
+levels generally had the most entries, and the top ones had very
+few.  Having unbalanced levels like this reduced the efficacy of the
 multiqueue.
 
-SMQ does not maintain a hit count, instead it swaps hit entries with
+smq does not maintain a hit count, instead it swaps hit entries with
 the least recently used entry from the level above.  The overall
 ordering being a side effect of this stochastic process.  With this
 scheme we can decide how many entries occupy each multiqueue level,
 resulting in better promotion/demotion decisions.
 
 Adaptability:
-The MQ policy maintains a hit count for each cache block.  For a
+The mq policy maintained a hit count for each cache block.  For a
 different block to get promoted to the cache it's hit count has to
-exceed the lowest currently in the cache.  This means it can take a
+exceed the lowest currently in the cache.  This meant it could take a
 long time for the cache to adapt between varying IO patterns.
 Periodically degrading the hit counts could help with this, but I
 haven't found a nice general solution.
 
-SMQ doesn't maintain hit counts, so a lot of this problem just goes
+smq doesn't maintain hit counts, so a lot of this problem just goes
 away.  In addition it tracks performance of the hotspot queue, which
 is used to decide which blocks to promote.  If the hotspot queue is
 performing badly then it starts moving entries more quickly between
 levels.  This lets it adapt to new IO patterns very quickly.
 
 Performance:
-Testing SMQ shows substantially better performance than MQ.
+Testing smq shows substantially better performance than mq.
 
 cleaner
 -------
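
The level-balancing paragraphs in the hunk above describe an algorithm that
is easy to model. A toy sketch (plain C; nothing like the kernel's actual
smq data structures): each level holds a fixed number of entries, and a hit
promotes an entry by swapping it with the least-recently-used entry one
level up, so no per-block hit count is needed and level populations cannot
drift out of balance.

#define NR_LEVELS  4
#define LEVEL_SIZE 8

struct toy_entry {
	unsigned block;		/* cache block this entry tracks */
};

/* toy convention: slot 0 of each level is its least-recently-used entry */
static struct toy_entry levels[NR_LEVELS][LEVEL_SIZE];

static void toy_hit(unsigned level, unsigned slot)
{
	struct toy_entry tmp;

	if (level == NR_LEVELS - 1)
		return;		/* already at the top level */

	/* promote by swapping with the LRU entry of the level above */
	tmp = levels[level + 1][0];
	levels[level + 1][0] = levels[level][slot];
	levels[level][slot] = tmp;
}

Repeated hits walk a genuinely hot block up one level at a time, while a
block promoted by luck sinks back the next time an entry below it is hit;
the overall ordering emerges from this stochastic process, exactly as the
text describes.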
+1 −1
@@ -205,7 +205,7 @@ statistics on them:
 
   dmsetup message vol 0 @stats_create - /100
 
-Set the auxillary data string to "foo bar baz" (the escape for each
+Set the auxiliary data string to "foo bar baz" (the escape for each
 space must also be escaped, otherwise the shell will consume them):
 
   dmsetup message vol 0 @stats_set_aux 0 foo\\ bar\\ baz
+1 −1
@@ -1723,7 +1723,7 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
 	if (!dmi) {
 		unsigned noio_flag;
 		noio_flag = memalloc_noio_save();
-		dmi = __vmalloc(param_kernel->data_size, GFP_NOIO | __GFP_REPEAT | __GFP_HIGH | __GFP_HIGHMEM, PAGE_KERNEL);
+		dmi = __vmalloc(param_kernel->data_size, GFP_NOIO | __GFP_HIGH | __GFP_HIGHMEM, PAGE_KERNEL);
 		memalloc_noio_restore(noio_flag);
 		if (dmi)
 			*param_flags |= DM_PARAMS_VMALLOC;
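
For context, the hunk above sits inside the memalloc_noio_save() /
memalloc_noio_restore() idiom, sketched here with a hypothetical wrapper
(the real call site is copy_params() in drivers/md/dm-ioctl.c): the saved
flag forbids reclaim from issuing IO, which is what makes a large
__vmalloc() safe on the ioctl path; dropping __GFP_REPEAT only removes the
retry-harder hint.

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>

static void *sketch_vmalloc_noio(size_t size)
{
	unsigned noio_flag;
	void *p;

	noio_flag = memalloc_noio_save();	/* reclaim may not do IO */
	p = __vmalloc(size, GFP_NOIO | __GFP_HIGH | __GFP_HIGHMEM,
		      PAGE_KERNEL);
	memalloc_noio_restore(noio_flag);	/* restore previous state */
	return p;
}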
+195 −156

File changed. (Preview size limit exceeded, changes collapsed.)

+6 −1
@@ -1037,6 +1037,11 @@ static int super_validate(struct raid_set *rs, struct md_rdev *rdev)
 	if (!mddev->events && super_init_validation(mddev, rdev))
 		return -EINVAL;
 
+	if (le32_to_cpu(sb->features)) {
+		rs->ti->error = "Unable to assemble array: No feature flags supported yet";
+		return -EINVAL;
+	}
+
 	/* Enable bitmap creation for RAID levels != 0 */
 	mddev->bitmap_info.offset = (rs->raid_type->level) ? to_sector(4096) : 0;
 	rdev->mddev->bitmap_info.default_offset = mddev->bitmap_info.offset;
@@ -1718,7 +1723,7 @@ static void raid_resume(struct dm_target *ti)
 
 static struct target_type raid_target = {
 	.name = "raid",
-	.version = {1, 7, 0},
+	.version = {1, 8, 0},
 	.module = THIS_MODULE,
 	.ctr = raid_ctr,
 	.dtr = raid_dtr,
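
The first hunk above is a common forward-compatibility idiom, sketched
generically here (hypothetical names, not the dm-raid code): the on-disk
feature field is little-endian, so it is converted with le32_to_cpu(), and
any bit this version does not understand causes assembly to fail, which is
also why the target version is bumped to 1.8.0 alongside the check.

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/byteorder.h>		/* le32_to_cpu() */

#define SKETCH_SUPPORTED_FEATURES 0x0	/* no feature bits known yet */

static int sketch_check_features(__le32 disk_features)
{
	u32 features = le32_to_cpu(disk_features);

	/* any unknown bit means a newer, incompatible on-disk format */
	if (features & ~SKETCH_SUPPORTED_FEATURES)
		return -EINVAL;
	return 0;
}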