
Commit 7f427d3a authored by Linus Torvalds
Pull parallel filesystem directory handling update from Al Viro.

This is the main parallel directory work by Al that makes the vfs layer
able to do lookup and readdir in parallel within a single directory.
That's a big change, since this used to be all protected by the
directory inode mutex.

The inode mutex is replaced by an rwsem, and serialization of lookups of
a single name is done by an "in-progress" dentry marker.

The series begins with xattr cleanups, and then ends with switching
filesystems over to actually doing the readdir in parallel (switching to
the "iterate_shared()" that only takes the read lock).

A more detailed explanation of the process from Al Viro:
 "The xattr work starts with some acl fixes, then switches ->getxattr to
  passing inode and dentry separately.  This is the point where the
  things start to get tricky - that got merged into the very beginning
  of the -rc3-based #work.lookups, to allow untangling the
  security_d_instantiate() mess.  The xattr work itself proceeds to
  switch a lot of filesystems to generic_...xattr(); no complications
  there.

  After that initial xattr work, the series then does the following:

   - untangle security_d_instantiate()

   - convert a bunch of open-coded lookup_one_len_unlocked() to calls of
     that thing; one such place (in overlayfs) actually yields a trivial
     conflict with overlayfs fixes later in the cycle - overlayfs ended
     up switching to a variant of lookup_one_len_unlocked() sans the
     permission checks.  I would've dropped that commit (it gets
     overridden on merge from #ovl-fixes in #for-next; proper resolution
     is to use the variant in mainline fs/overlayfs/super.c), but I
     didn't want to rebase the damn thing - it was fairly late in the
     cycle...

   - some filesystems had managed to depend on lookup/lookup exclusion
     for *fs-internal* data structures in a way that would break if we
     relaxed the VFS exclusion.  Fixing hadn't been hard, fortunately.

   - core of that series - parallel lookup machinery, replacing
     ->i_mutex with rwsem, making lookup_slow() take it only shared.  At
     that point lookups happen in parallel; lookups on the same name
     wait for the in-progress one to be done with that dentry.

     Surprisingly little code, at that - almost all of it is in
     fs/dcache.c, with fs/namei.c changes limited to lookup_slow() -
     making it use the new primitive and actually switching to locking
     shared.

   - parallel readdir stuff - first of all, we provide the exclusion on
     per-struct file basis, same as we do for read() vs lseek() for
     regular files.  That takes care of most of the needed exclusion in
     readdir/readdir; however, these guys are trickier than lookups, so
     I went for switching them one-by-one.  To do that, a new method
     '->iterate_shared()' is added and filesystems are switched to it
     as they are either confirmed to be OK with shared lock on directory
     or fixed to be OK with that.  I hope to kill the original method
     come next cycle (almost all in-tree filesystems are switched
     already), but it's still not quite finished.

   - several filesystems get switched to parallel readdir.  The
     interesting part here is dealing with dcache preseeding by readdir;
     that needs minor adjustment to be safe with directory locked only
     shared.

     Most of the filesystems doing that got switched to in those
     commits.  Important exception: NFS.  Turns out that NFS folks, with
     their, er, insistence on VFS getting the fuck out of the way of the
     Smart Filesystem Code That Knows How And What To Lock(tm) have
     grown the locking of their own.  They had their own homegrown
     rwsem, with lookup/readdir/atomic_open being *writers* (sillyunlink
     is the reader there).  Of course, with VFS getting the fuck out of
     the way, as requested, the actual smarts of the smart filesystem
     code etc. had become exposed...

   - do_last/lookup_open/atomic_open cleanups.  As the result, open()
     without O_CREAT locks the directory only shared.  Including the
     ->atomic_open() case.  Backmerge from #for-linus in the middle of
     that - atomic_open() fix got brought in.

   - then comes NFS switch to saner (VFS-based ;-) locking, killing the
     homegrown "lookup and readdir are writers" kinda-sorta rwsem.  All
     exclusion for sillyunlink/lookup is done by the parallel lookups
     mechanism.  Exclusion between sillyunlink and rmdir is a real rwsem
     now - rmdir being the writer.

     Result: NFS lookups/readdirs/O_CREAT-less opens happen in parallel
     now.

   - the rest of the series consists of switching a lot of filesystems
     to parallel readdir; in a lot of cases ->llseek() gets simplified
     as well.  One backmerge in there (again, #for-linus - rockridge
     fix)"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (74 commits)
  ext4: switch to ->iterate_shared()
  hfs: switch to ->iterate_shared()
  hfsplus: switch to ->iterate_shared()
  hostfs: switch to ->iterate_shared()
  hpfs: switch to ->iterate_shared()
  hpfs: handle allocation failures in hpfs_add_pos()
  gfs2: switch to ->iterate_shared()
  f2fs: switch to ->iterate_shared()
  afs: switch to ->iterate_shared()
  befs: switch to ->iterate_shared()
  befs: constify stuff a bit
  isofs: switch to ->iterate_shared()
  get_acorn_filename(): deobfuscate a bit
  btrfs: switch to ->iterate_shared()
  logfs: no need to lock directory in lseek
  switch ecryptfs to ->iterate_shared
  9p: switch to ->iterate_shared()
  fat: switch to ->iterate_shared()
  romfs, squashfs: switch to ->iterate_shared()
  more trivial ->iterate_shared conversions
  ...
parents ede40902 0e0162bb
+53 −0
@@ -525,3 +525,56 @@ in your dentry operations instead.
	set_delayed_call() where it used to set *cookie.
	->put_link() is gone - just give the destructor to set_delayed_call()
	in ->get_link().
--
[mandatory]
	->getxattr() and xattr_handler.get() get dentry and inode passed separately.
	dentry might be yet to be attached to inode, so do _not_ use its ->d_inode
	in the instances.  Rationale: !@#!@# security_d_instantiate() needs to be
	called before we attach dentry to inode.
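
	A hedged sketch of what an instance looks like after this change
	(the foo_* names are hypothetical; the point is that the inode
	arrives as an explicit argument and ->d_inode must not be used):

	#include <linux/xattr.h>

	static int foo_xattr_get(const struct xattr_handler *handler,
				 struct dentry *dentry, struct inode *inode,
				 const char *name, void *buffer, size_t size)
	{
		/* use the inode argument; the dentry may not be attached
		 * to it yet at this point */
		return foo_getxattr_from_inode(inode, name, buffer, size);
	}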
--
[mandatory]
	symlinks are no longer the only inodes that do *not* have i_bdev/i_cdev/
	i_pipe/i_link union zeroed out at inode eviction.  As the result, you can't
	assume that non-NULL value in ->i_link at ->destroy_inode() implies that
	it's a symlink.  Checking ->i_mode is really needed now.  In-tree we had
	to fix shmem_destroy_callback() that used to take that kind of shortcut;
	watch out, since that shortcut is no longer valid.
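
	For illustration, the fixed pattern looks roughly like this
	(the foo_* names are hypothetical, mirroring the shmem fix):

	#include <linux/fs.h>
	#include <linux/slab.h>

	static void foo_destroy_inode(struct inode *inode)
	{
		/* non-NULL ->i_link no longer implies a symlink -
		 * check the mode before freeing the link body */
		if (S_ISLNK(inode->i_mode))
			kfree(inode->i_link);
		kmem_cache_free(foo_inode_cachep, FOO_I(inode));
	}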
--
[mandatory]
	->i_mutex is replaced with ->i_rwsem now.  inode_lock() et.al. work as
	they used to - they just take it exclusive.  However, ->lookup() may be
	called with parent locked shared.  Its instances must not
		* use d_instantiate() and d_rehash() separately - use d_add() or
		  d_splice_alias() instead.
		* use d_rehash() alone - call d_add(new_dentry, NULL) instead.
		* in the unlikely case when (read-only) access to filesystem
		  data structures needs exclusion for some reason, arrange it
		  yourself.  None of the in-tree filesystems needed that.
		* rely on ->d_parent and ->d_name not changing after dentry has
		  been fed to d_add() or d_splice_alias().  Again, none of the
		  in-tree instances relied upon that.
	We are guaranteed that lookups of the same name in the same directory
	will not happen in parallel ("same" in the sense of your ->d_compare()).
	Lookups on different names in the same directory can and do happen in
	parallel now.
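
	A minimal sketch of a conforming ->lookup() instance (the foo_* names
	are hypothetical; the point is the single d_splice_alias() call
	replacing the old d_instantiate()+d_rehash() pair):

	#include <linux/dcache.h>
	#include <linux/fs.h>

	static struct dentry *foo_lookup(struct inode *dir,
					 struct dentry *dentry,
					 unsigned int flags)
	{
		/* the parent may be locked only shared - read-only work */
		struct inode *inode = foo_find_inode(dir, &dentry->d_name);

		if (IS_ERR(inode))
			return ERR_CAST(inode);
		/* a NULL inode simply leaves a negative dentry behind */
		return d_splice_alias(inode, dentry);
	}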
--
[recommended]
	->iterate_shared() is added; it's a parallel variant of ->iterate().
	Exclusion on struct file level is still provided (as well as that
	between it and lseek on the same struct file), but if your directory
	has been opened several times, you can get these called in parallel.
	Exclusion between that method and all directory-modifying ones is
	still provided, of course.

	Often enough ->iterate() can serve as ->iterate_shared() without any
	changes - it is a read-only operation, after all.  If you have any
	per-inode or per-dentry in-core data structures modified by ->iterate(),
	you might need something to serialize the access to them.  If you
	do dcache pre-seeding, you'll need to switch to d_alloc_parallel() for
	that; look for in-tree examples.

	Old method is only used if the new one is absent; eventually it will
	be removed.  Switch while you still can; the old one won't stay.
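
	The switch itself is usually a one-liner in the file_operations
	table, assuming the existing readdir method is already safe under
	a shared lock (foo_readdir and foo_dir_operations are hypothetical):

	#include <linux/fs.h>

	const struct file_operations foo_dir_operations = {
		.llseek		= generic_file_llseek,
		.read		= generic_read_dir,
		.iterate_shared	= foo_readdir,	/* was: .iterate = foo_readdir */
		.fsync		= noop_fsync,
	};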
--
[mandatory]
	->atomic_open() calls without O_CREAT may happen in parallel.
+2 −2
@@ -147,7 +147,7 @@ SYSCALL_DEFINE4(osf_getdirentries, unsigned int, fd,
		long __user *, basep)
{
	int error;
-	struct fd arg = fdget(fd);
+	struct fd arg = fdget_pos(fd);
	struct osf_dirent_callback buf = {
		.ctx.actor = osf_filldir,
		.dirent = dirent,
@@ -164,7 +164,7 @@ SYSCALL_DEFINE4(osf_getdirentries, unsigned int, fd,
	if (count != buf.count)
		error = count - buf.count;

-	fdput(arg);
+	fdput_pos(arg);
	return error;
}

+1 −1
@@ -238,7 +238,7 @@ const struct file_operations spufs_context_fops = {
	.release	= spufs_dir_close,
	.llseek		= dcache_dir_lseek,
	.read		= generic_read_dir,
-	.iterate	= dcache_readdir,
+	.iterate_shared	= dcache_readdir,
	.fsync		= noop_fsync,
};
EXPORT_SYMBOL_GPL(spufs_context_fops);
+1 −3
@@ -1865,7 +1865,6 @@ static loff_t ll_dir_seek(struct file *file, loff_t offset, int origin)
	int api32 = ll_need_32bit_api(sbi);
	loff_t ret = -EINVAL;

-	inode_lock(inode);
	switch (origin) {
	case SEEK_SET:
		break;
@@ -1903,7 +1902,6 @@ static loff_t ll_dir_seek(struct file *file, loff_t offset, int origin)
	goto out;

out:
-	inode_unlock(inode);
	return ret;
}

@@ -1922,7 +1920,7 @@ const struct file_operations ll_dir_operations = {
	.open     = ll_dir_open,
	.release  = ll_dir_release,
	.read     = generic_read_dir,
-	.iterate  = ll_readdir,
+	.iterate_shared  = ll_readdir,
	.unlocked_ioctl   = ll_dir_ioctl,
	.fsync    = ll_fsync,
};
+2 −2
@@ -1042,8 +1042,8 @@ static inline __u64 ll_file_maxbytes(struct inode *inode)
/* llite/xattr.c */
int ll_setxattr(struct dentry *dentry, const char *name,
		const void *value, size_t size, int flags);
-ssize_t ll_getxattr(struct dentry *dentry, const char *name,
-		    void *buffer, size_t size);
+ssize_t ll_getxattr(struct dentry *dentry, struct inode *inode,
+		    const char *name, void *buffer, size_t size);
ssize_t ll_listxattr(struct dentry *dentry, char *buffer, size_t size);
int ll_removexattr(struct dentry *dentry, const char *name);