
Commit 31e6b01f authored by Nick Piggin

fs: rcu-walk for path lookup



Perform common cases of path lookups without any stores or locking in the
ancestor dentry elements. This is called rcu-walk, as opposed to the current
algorithm which is a refcount based walk, or ref-walk.

This results in far fewer atomic operations on every path element,
significantly improving path lookup performance. It also avoids cacheline
bouncing on common dentries, significantly improving scalability.

The overall design is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
  of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
  not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
  access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
  refcounts are not required for persistence. Also we are free to perform mount
  lookups, and to assume dentry mount points and mount roots are stable up and
  down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
  so we can load this tuple atomically, and also check whether any of its
  members have changed (a minimal read-side sketch follows this list).
* Dentry lookups (based on parent, candidate string tuple) recheck the parent
  sequence after the child is found in case anything changed in the parent
  during the path walk.
* inode is also RCU protected so we can load d_inode and use the inode for
  limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.
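
A minimal read-side sketch of the per-dentry seqcount scheme from the list
above. The helper below is illustrative only and does not exist in the tree;
the real users are __d_lookup_rcu() and the rcu-walk code in fs/namei.c:

static struct inode *read_dentry_tuple(struct dentry *dentry,
				       struct dentry **parent,
				       struct qstr *name)
{
	struct inode *inode;
	unsigned seq;

	do {
		/* d_seq is the per-dentry seqcount added by this patch */
		seq = read_seqcount_begin(&dentry->d_seq);
		*parent = dentry->d_parent;
		*name = dentry->d_name;
		inode = dentry->d_inode;
	} while (read_seqcount_retry(&dentry->d_seq, seq));

	return inode;
}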

When we reach the destination dentry, we lock it, recheck lookup sequence,
and increment its refcount and mountpoint refcount. RCU and vfsmount locks
are dropped. This is termed "dropping rcu-walk". If the dentry refcount does
not match, we cannot drop rcu-walk gracefully at the current point in the
lookup, so instead return -ECHILD (for want of a better errno). This signals the
path walking code to re-do the entire lookup with a ref-walk.
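
A hedged sketch of that fallback pattern. do_path_walk() is a hypothetical
stand-in for the path-walking entry point; the real implementation is in the
fs/namei.c hunk, which is collapsed below:

static int lookup_with_fallback(int dfd, const char *name,
				unsigned int flags, struct nameidata *nd)
{
	int err;

	/* first attempt: store-free rcu-walk */
	err = do_path_walk(dfd, name, flags | LOOKUP_RCU, nd);
	if (err == -ECHILD) {
		/* rcu-walk could not be dropped gracefully: redo with ref-walk */
		err = do_path_walk(dfd, name, flags & ~LOOKUP_RCU, nd);
	}
	return err;
}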

Aside from the final dentry, there are other situations that may be encountered
where we cannot continue rcu-walk. In that case, we drop rcu-walk (ie. take
a reference on the last good dentry) and continue with a ref-walk. Again, if
we cannot drop rcu-walk gracefully, we return -ECHILD and do the whole lookup
using ref-walk. But it is very important that we can continue with ref-walk
for most cases, particularly to avoid the overhead of double lookups, and to
gain the scalability advantages on common path elements (like cwd and root).

The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* parent with d_inode->i_op->permission or ACLs
* dentries with d_revalidate
* Following links

In future patches, permission checks and d_revalidate become rcu-walk aware. It
may be possible eventually to make following links rcu-walk aware.

Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed, and objects allocated.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
parent 3c22cd57
+0 −172
RCU-based dcache locking model
==============================

On many workloads, the most common operation on dcache is to look up a
dentry, given a parent dentry and the name of the child. Typically,
for every open(), stat() etc., the dentry corresponding to the
pathname will be looked up by walking the tree starting with the first
component of the pathname and using that dentry along with the next
component to look up the next level and so on. Since it is a frequent
operation for workloads like multiuser environments and web servers,
it is important to optimize this path.

Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus in
every component during path look-up. Since 2.5.10 onwards, fast-walk
algorithm changed this by holding the dcache_lock at the beginning and
walking as many cached path component dentries as possible. This
significantly decreases the number of acquisitions of
dcache_lock. However, it also increases the lock hold time
significantly and affects performance on large SMP machines. Since
the 2.5.62 kernel, dcache has been using a new locking model that uses RCU
to make dcache look-up lock-free.

The current dcache locking model is not very different from the
previous one. Prior to the 2.5.62 kernel, dcache_lock protected
the hash chain, the d_child, d_alias and d_lru lists, as well as
d_inode and several other things like mount look-up. RCU-based changes
affect only the way the hash chain is protected. For everything else
the dcache_lock must be taken for both traversing and updating. The
hash chain updates too take the dcache_lock. The significant change
is the way d_lookup traverses the hash chain: it does not acquire the
dcache_lock for this and relies on RCU to ensure that the dentry has
not been *freed*.

dcache_lock no longer exists; dentry locking is explained in fs/dcache.c.

Dcache locking details
======================

For many multi-user workloads, open() and stat() on files are very
frequently occurring operations. Both involve walking of path names to
find the dentry corresponding to the concerned file. In the 2.4
kernel, dcache_lock was held during look-up of each path component.
Contention and cache-line bouncing of this global lock caused
significant scalability problems. With the introduction of RCU into
the Linux kernel, this was worked around by making the look-up of path
components during path walking lock-free.


Safe lock-free look-up of dcache hash table
===========================================

Dcache is a complex data structure with the hash table entries also
linked together in other lists. In the 2.4 kernel, dcache_lock protected
all the lists. RCU dentry hash walking works like this:

1. Deletion from the hash chain is done using the hlist_del_rcu() macro,
   which leaves the next pointer of the deleted dentry intact; this
   allows us to walk the chain safely, lock-free, while a deletion is
   happening. This is standard hlist RCU iteration.

2. Insertion of a dentry into the hash table is done using
   hlist_add_head_rcu(), which takes care of ordering the writes - the
   writes to the dentry must be visible before the dentry is
   inserted. This works in conjunction with hlist_for_each_rcu(),
   which has since been replaced by hlist_for_each_entry_rcu(), while
   walking the hash chain. The only requirement is that all
   initialization of the dentry must be done before
   hlist_add_head_rcu(), since we don't have lock protection
   while traversing the hash chain.

3. The dentry looked up without holding locks cannot be returned for
   walking if it is unhashed. It may then have a NULL d_inode or other
   bogosity, since RCU doesn't protect the other fields in the dentry. We
   therefore use a flag, DCACHE_UNHASHED, to indicate unhashed dentries
   and use it in conjunction with a per-dentry lock (d_lock). Once a
   dentry is found without locks, we acquire its d_lock and check if it
   is unhashed. If so, the look-up fails. If not, the reference count of
   the dentry is increased and the dentry is returned (see the sketch
   after this list).

4. Once a dentry is looked up, it must be ensured that it doesn't go
   away during the path walk for that component. In pre-2.5.10 code,
   this was done by holding a reference to the dentry. dcache_rcu does
   the same. In some sense, dcache_rcu path walking looks like the
   pre-2.5.10 version.

5. All dentry hash chain updates must take the per-dentry lock (see
   fs/dcache.c). This excludes dput() to ensure that a dentry that has
   been looked up concurrently does not get deleted before dget() can
   take a ref.

6. There are several ways to do reference counting of RCU protected
   objects. One such example is in ipv4 route cache where deferred
   freeing (using call_rcu()) is done as soon as the reference count
   goes to zero. This cannot be done in the case of dentries because
   tearing down of dentries requires blocking (dentry_iput()), which
   isn't supported from RCU callbacks. Instead, tearing down of
   dentries happens synchronously in dput(), but the actual freeing
   happens later, when the RCU grace period is over. This allows safe
   lock-free
   walking of the hash chains, but a matched dentry may have been
   partially torn down. The checking of DCACHE_UNHASHED flag with
   d_lock held detects such dentries and prevents them from being
   returned from look-up.
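
A simplified sketch of the look-up described in points 1-6 above, roughly
following the shape of __d_lookup() but omitting rename handling and custom
d_compare; it is illustrative, not the actual kernel function:

static struct dentry *lookup_sketch(struct dentry *parent, struct qstr *name,
					struct hlist_head *head)
{
	struct hlist_node *node;
	struct dentry *dentry, *found = NULL;

	rcu_read_lock();
	hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
		if (dentry->d_name.hash != name->hash)
			continue;

		spin_lock(&dentry->d_lock);
		/* recheck under d_lock: d_move() may have changed things */
		if (dentry->d_parent != parent || d_unhashed(dentry)) {
			spin_unlock(&dentry->d_lock);
			continue;
		}
		if (dentry->d_name.len == name->len &&
		    !memcmp(dentry->d_name.name, name->name, name->len)) {
			dentry->d_count++;	/* reference taken under d_lock */
			found = dentry;
			spin_unlock(&dentry->d_lock);
			break;
		}
		spin_unlock(&dentry->d_lock);
	}
	rcu_read_unlock();
	return found;
}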


Maintaining POSIX rename semantics
==================================

Since look-up of dentries is lock-free, it can race against a
concurrent rename operation. For example, during rename of file A to
B, look-up of either A or B must succeed.  So, if look-up of B happens
after A has been removed from the hash chain but not added to the new
hash chain, it may fail.  Also, a comparison while the name is being
written concurrently by a rename may result in false positive matches
violating rename semantics.  Issues related to races with rename are
handled as described below:

1. Look-up can be done in two ways - d_lookup() which is safe from
   simultaneous renames and __d_lookup() which is not.  If
   __d_lookup() fails, it must be followed up by a d_lookup() to
   correctly determine whether a dentry is in the hash table or
   not. d_lookup() protects look-ups using a sequence lock
   (rename_lock); a sketch of this retry pattern follows this list.

2. The name associated with a dentry (d_name) may be changed if a
   rename is allowed to happen simultaneously. To avoid the memcmp() in
   __d_lookup() going out of bounds due to a rename, and to avoid false
   positive comparisons, the name comparison is done while holding the
   per-dentry lock. This prevents concurrent renames during this
   operation.

3. Hash table walking during look-up may move to a different bucket as
   the current dentry is moved to a different bucket due to rename.
   But we use hlists in the dcache hash table, and they are
   null-terminated.  So, even if a dentry moves to a different bucket,
   the hash chain walk will terminate. [With a list_head list, it may not,
   since termination is when the list_head in the original bucket is
   reached].  Since we redo the d_parent check and compare name while
   holding d_lock, lock-free look-up will not race against d_move().

4. There can be a theoretical race when a dentry keeps coming back to
   the original bucket due to double moves. Due to this, the look-up may
   consider that it has never moved and can end up in an infinite loop.
   But this is not any worse than the theoretical livelocks we already
   have in the kernel.
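
The retry pattern from point 1 is essentially what d_lookup() does around
__d_lookup(); in simplified form:

struct dentry *d_lookup_sketch(struct dentry *parent, struct qstr *name)
{
	struct dentry *dentry;
	unsigned seq;

	do {
		/* rename_lock is a seqlock; retry if a rename raced with us */
		seq = read_seqbegin(&rename_lock);
		dentry = __d_lookup(parent, name);
		if (dentry)
			break;
	} while (read_seqretry(&rename_lock, seq));

	return dentry;
}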


Important guidelines for filesystem developers related to dcache_rcu
====================================================================

1. Existing dcache interfaces (pre-2.5.62) exported to filesystems
   don't change. Only the dcache internal implementation changes. However,
   filesystems *must not* delete from the dentry hash chains directly
   using the list macros as was allowed earlier. They must use dcache
   APIs like d_drop() or __d_drop(), depending on the situation (see
   the sketch after this list).

2. d_flags is now protected by a per-dentry lock (d_lock). All access
   to d_flags must be protected by it.

3. For a hashed dentry, checking of d_count needs to be protected by
   d_lock.
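
A small sketch of guideline 1: a filesystem invalidating a stale dentry goes
through d_drop() rather than touching the hash chain directly. The
->d_revalidate below is a hypothetical example (myfs_dentry_is_stale() is an
invented placeholder), using the d_revalidate prototype as it stands at this
point in the series, before later patches make it rcu-walk aware:

static int myfs_d_revalidate(struct dentry *dentry, struct nameidata *nd)
{
	if (myfs_dentry_is_stale(dentry)) {
		d_drop(dentry);	/* unhash via the dcache API (takes d_lock) */
		return 0;	/* tell the VFS this dentry is invalid */
	}
	return 1;		/* still valid */
}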


Papers and other documentation on dcache locking
================================================

1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).

2. http://lse.sourceforge.net/locking/dcache/dcache.html


+345 −0

File added.

Preview size limit exceeded, changes collapsed.

+181 −22
@@ -152,9 +152,23 @@ static void d_free(struct dentry *dentry)
		call_rcu(&dentry->d_u.d_rcu, __d_free);
}

/**
 * dentry_rcuwalk_barrier - invalidate in-progress rcu-walk lookups
 * After this call, in-progress rcu-walk path lookup will fail. This
 * should be called after unhashing, and after changing d_inode (if
 * the dentry has not already been unhashed).
 */
static inline void dentry_rcuwalk_barrier(struct dentry *dentry)
{
	assert_spin_locked(&dentry->d_lock);
	/* Go through a barrier */
	write_seqcount_barrier(&dentry->d_seq);
}

/*
 * Release the dentry's inode, using the filesystem
 * d_iput() operation if defined.
 * d_iput() operation if defined. Dentry has no refcount
 * and is unhashed.
 */
static void dentry_iput(struct dentry * dentry)
	__releases(dentry->d_lock)
@@ -178,6 +192,28 @@ static void dentry_iput(struct dentry * dentry)
	}
}

/*
 * Release the dentry's inode, using the filesystem
 * d_iput() operation if defined. dentry remains in-use.
 */
static void dentry_unlink_inode(struct dentry * dentry)
	__releases(dentry->d_lock)
	__releases(dcache_inode_lock)
{
	struct inode *inode = dentry->d_inode;
	dentry->d_inode = NULL;
	list_del_init(&dentry->d_alias);
	dentry_rcuwalk_barrier(dentry);
	spin_unlock(&dentry->d_lock);
	spin_unlock(&dcache_inode_lock);
	if (!inode->i_nlink)
		fsnotify_inoderemove(inode);
	if (dentry->d_op && dentry->d_op->d_iput)
		dentry->d_op->d_iput(dentry, inode);
	else
		iput(inode);
}

/*
 * dentry_lru_(add|del|move_tail) must be called with d_lock held.
 */
@@ -272,6 +308,7 @@ void __d_drop(struct dentry *dentry)
		spin_lock(&dcache_hash_lock);
		hlist_del_rcu(&dentry->d_hash);
		spin_unlock(&dcache_hash_lock);
		dentry_rcuwalk_barrier(dentry);
	}
}
EXPORT_SYMBOL(__d_drop);
@@ -309,6 +346,7 @@ static inline struct dentry *dentry_kill(struct dentry *dentry, int ref)
		spin_unlock(&dcache_inode_lock);
		goto relock;
	}

	if (ref)
		dentry->d_count--;
	/* if dentry was on the d_lru list delete it from there */
@@ -1221,6 +1259,7 @@ struct dentry *d_alloc(struct dentry * parent, const struct qstr *name)
	dentry->d_count = 1;
	dentry->d_flags = DCACHE_UNHASHED;
	spin_lock_init(&dentry->d_lock);
	seqcount_init(&dentry->d_seq);
	dentry->d_inode = NULL;
	dentry->d_parent = NULL;
	dentry->d_sb = NULL;
@@ -1269,6 +1308,7 @@ static void __d_instantiate(struct dentry *dentry, struct inode *inode)
	if (inode)
		list_add(&dentry->d_alias, &inode->i_dentry);
	dentry->d_inode = inode;
	dentry_rcuwalk_barrier(dentry);
	spin_unlock(&dentry->d_lock);
	fsnotify_d_instantiate(dentry, inode);
}
@@ -1610,6 +1650,111 @@ struct dentry *d_add_ci(struct dentry *dentry, struct inode *inode,
}
EXPORT_SYMBOL(d_add_ci);

/**
 * __d_lookup_rcu - search for a dentry (racy, store-free)
 * @parent: parent dentry
 * @name: qstr of name we wish to find
 * @seq: returns d_seq value at the point where the dentry was found
 * @inode: returns dentry->d_inode when the inode was found valid.
 * Returns: dentry, or NULL
 *
 * __d_lookup_rcu is the dcache lookup function for rcu-walk name
 * resolution (store-free path walking) design described in
 * Documentation/filesystems/path-lookup.txt.
 *
 * This is not to be used outside core vfs.
 *
 * __d_lookup_rcu must only be used in rcu-walk mode, ie. with vfsmount lock
 * held, and rcu_read_lock held. The returned dentry must not be stored into
 * without taking d_lock and checking d_seq sequence count against @seq
 * returned here.
 *
 * A refcount may be taken on the found dentry with the __d_rcu_to_refcount
 * function.
 *
 * Alternatively, __d_lookup_rcu may be called again to look up the child of
 * the returned dentry, so long as its parent's seqlock is checked after the
 * child is looked up. Thus, an interlocking stepping of sequence lock checks
 * is formed, giving integrity down the path walk.
 */
struct dentry *__d_lookup_rcu(struct dentry *parent, struct qstr *name,
				unsigned *seq, struct inode **inode)
{
	unsigned int len = name->len;
	unsigned int hash = name->hash;
	const unsigned char *str = name->name;
	struct hlist_head *head = d_hash(parent, hash);
	struct hlist_node *node;
	struct dentry *dentry;

	/*
	 * Note: There is significant duplication with __d_lookup_rcu which is
	 * required to prevent single threaded performance regressions
	 * especially on architectures where smp_rmb (in seqcounts) are costly.
	 * Keep the two functions in sync.
	 */

	/*
	 * The hash list is protected using RCU.
	 *
	 * Carefully use d_seq when comparing a candidate dentry, to avoid
	 * races with d_move().
	 *
	 * It is possible that concurrent renames can mess up our list
	 * walk here and result in missing our dentry, resulting in the
	 * false-negative result. d_lookup() protects against concurrent
	 * renames using rename_lock seqlock.
	 *
	 * See Documentation/vfs/dcache-locking.txt for more details.
	 */
	hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
		struct inode *i;
		const char *tname;
		int tlen;

		if (dentry->d_name.hash != hash)
			continue;

seqretry:
		*seq = read_seqcount_begin(&dentry->d_seq);
		if (dentry->d_parent != parent)
			continue;
		if (d_unhashed(dentry))
			continue;
		tlen = dentry->d_name.len;
		tname = dentry->d_name.name;
		i = dentry->d_inode;
		/*
		 * This seqcount check is required to ensure name and
		 * len are loaded atomically, so as not to walk off the
		 * edge of memory when walking. If we could load this
		 * atomically some other way, we could drop this check.
		 */
		if (read_seqcount_retry(&dentry->d_seq, *seq))
			goto seqretry;
		if (parent->d_op && parent->d_op->d_compare) {
			if (parent->d_op->d_compare(parent, *inode,
						dentry, i,
						tlen, tname, name))
				continue;
		} else {
			if (tlen != len)
				continue;
			if (memcmp(tname, str, tlen))
				continue;
		}
		/*
		 * No extra seqcount check is required after the name
		 * compare. The caller must perform a seqcount check in
		 * order to do anything useful with the returned dentry
		 * anyway.
		 */
		*inode = i;
		return dentry;
	}
	return NULL;
}

/**
 * d_lookup - search for a dentry
 * @parent: parent dentry
@@ -1623,7 +1768,7 @@ EXPORT_SYMBOL(d_add_ci);
 */
struct dentry *d_lookup(struct dentry *parent, struct qstr *name)
{
	struct dentry * dentry = NULL;
	struct dentry *dentry;
	unsigned seq;

        do {
@@ -1636,7 +1781,7 @@ struct dentry * d_lookup(struct dentry * parent, struct qstr * name)
}
EXPORT_SYMBOL(d_lookup);

/*
/**
 * __d_lookup - search for a dentry (racy)
 * @parent: parent dentry
 * @name: qstr of name we wish to find
@@ -1657,10 +1802,17 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
	unsigned int hash = name->hash;
	const unsigned char *str = name->name;
	struct hlist_head *head = d_hash(parent,hash);
	struct dentry *found = NULL;
	struct hlist_node *node;
	struct dentry *found = NULL;
	struct dentry *dentry;

	/*
	 * Note: There is significant duplication with __d_lookup_rcu which is
	 * required to prevent single threaded performance regressions
	 * especially on architectures where smp_rmb (in seqcounts) are costly.
	 * Keep the two functions in sync.
	 */

	/*
	 * The hash list is protected using RCU.
	 *
@@ -1677,24 +1829,15 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
	rcu_read_lock();
	
	hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
		struct qstr *qstr;
		const char *tname;
		int tlen;

		if (dentry->d_name.hash != hash)
			continue;
		if (dentry->d_parent != parent)
			continue;

		spin_lock(&dentry->d_lock);

		/*
		 * Recheck the dentry after taking the lock - d_move may have
		 * changed things. Don't bother checking the hash because
		 * we're about to compare the whole name anyway.
		 */
		if (dentry->d_parent != parent)
			goto next;

		/* non-existing due to RCU? */
		if (d_unhashed(dentry))
			goto next;

@@ -1702,16 +1845,17 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
		 * It is safe to compare names since d_move() cannot
		 * change the qstr (protected by d_lock).
		 */
		qstr = &dentry->d_name;
		tlen = dentry->d_name.len;
		tname = dentry->d_name.name;
		if (parent->d_op && parent->d_op->d_compare) {
			if (parent->d_op->d_compare(parent, parent->d_inode,
						dentry, dentry->d_inode,
						qstr->len, qstr->name, name))
						tlen, tname, name))
				goto next;
		} else {
			if (qstr->len != len)
			if (tlen != len)
				goto next;
			if (memcmp(qstr->name, str, len))
			if (memcmp(tname, str, tlen))
				goto next;
		}

@@ -1821,7 +1965,7 @@ void d_delete(struct dentry * dentry)
			goto again;
		}
		dentry->d_flags &= ~DCACHE_CANT_MOUNT;
		dentry_iput(dentry);
		dentry_unlink_inode(dentry);
		fsnotify_nameremove(dentry, isdir);
		return;
	}
@@ -1884,7 +2028,9 @@ void dentry_update_name_case(struct dentry *dentry, struct qstr *name)
	BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */

	spin_lock(&dentry->d_lock);
	write_seqcount_begin(&dentry->d_seq);
	memcpy((unsigned char *)dentry->d_name.name, name->name, name->len);
	write_seqcount_end(&dentry->d_seq);
	spin_unlock(&dentry->d_lock);
}
EXPORT_SYMBOL(dentry_update_name_case);
@@ -1997,6 +2143,9 @@ void d_move(struct dentry * dentry, struct dentry * target)

	dentry_lock_for_move(dentry, target);

	write_seqcount_begin(&dentry->d_seq);
	write_seqcount_begin(&target->d_seq);

	/* Move the dentry to the target hash queue, if on different bucket */
	spin_lock(&dcache_hash_lock);
	if (!d_unhashed(dentry))
@@ -2005,6 +2154,7 @@ void d_move(struct dentry * dentry, struct dentry * target)
	spin_unlock(&dcache_hash_lock);

	/* Unhash the target: dput() will then get rid of it */
	/* __d_drop does write_seqcount_barrier, but they're OK to nest. */
	__d_drop(target);

	list_del(&dentry->d_u.d_child);
@@ -2028,6 +2178,9 @@ void d_move(struct dentry * dentry, struct dentry * target)

	list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);

	write_seqcount_end(&target->d_seq);
	write_seqcount_end(&dentry->d_seq);

	dentry_unlock_parents_for_move(dentry, target);
	spin_unlock(&target->d_lock);
	fsnotify_d_move(dentry);
@@ -2110,6 +2263,9 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)

	dentry_lock_for_move(anon, dentry);

	write_seqcount_begin(&dentry->d_seq);
	write_seqcount_begin(&anon->d_seq);

	dparent = dentry->d_parent;
	aparent = anon->d_parent;

@@ -2130,6 +2286,9 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)
	else
		INIT_LIST_HEAD(&anon->d_u.d_child);

	write_seqcount_end(&dentry->d_seq);
	write_seqcount_end(&anon->d_seq);

	dentry_unlock_parents_for_move(anon, dentry);
	spin_unlock(&dentry->d_lock);

+3 −0
@@ -115,6 +115,9 @@ int unregister_filesystem(struct file_system_type * fs)
		tmp = &(*tmp)->next;
	}
	write_unlock(&file_systems_lock);

	synchronize_rcu();

	return -EINVAL;
}

+606 −137

File changed.

Preview size limit exceeded, changes collapsed.
