fs/Kconfig (+57 −79)

@@ -1544,10 +1544,6 @@ config UFS_FS
 	  The recently released UFS2 variant (used in FreeBSD 5.x) is
 	  READ-ONLY supported.
 
-	  If you only intend to mount files from some other Unix over the
-	  network using NFS, you don't need the UFS file system support (but
-	  you need NFS file system support obviously).
-
 	  Note that this option is generally not needed for floppies, since a
 	  good portable way to transport files and directories between unixes
 	  (and even other operating systems) is given by the tar program ("man

@@ -1587,6 +1583,7 @@ menuconfig NETWORK_FILESYSTEMS
 	  Say Y here to get to see options for network filesystems and
 	  filesystem-related networking code, such as NFS daemon and
 	  RPCSEC security modules.
+
 	  This option alone does not add any kernel code.
 
 	  If you say N, all options in this submenu will be skipped and

@@ -1595,76 +1592,92 @@ menuconfig NETWORK_FILESYSTEMS
 if NETWORK_FILESYSTEMS
 
 config NFS_FS
-	tristate "NFS file system support"
+	tristate "NFS client support"
 	depends on INET
 	select LOCKD
 	select SUNRPC
 	select NFS_ACL_SUPPORT if NFS_V3_ACL
 	help
-	  If you are connected to some other (usually local) Unix computer
-	  (using SLIP, PLIP, PPP or Ethernet) and want to mount files residing
-	  on that computer (the NFS server) using the Network File Sharing
-	  protocol, say Y.  "Mounting files" means that the client can access
-	  the files with usual UNIX commands as if they were sitting on the
-	  client's hard disk.  For this to work, the server must run the
-	  programs nfsd and mountd (but does not need to have NFS file system
-	  support enabled in its kernel).  NFS is explained in the Network
-	  Administrator's Guide, available from
-	  <http://www.tldp.org/docs.html#guide>, on its man page: "man nfs",
-	  and in the NFS-HOWTO.
-
-	  A superior but less widely used alternative to NFS is provided by
-	  the Coda file system; see "Coda file system support" below.
+	  Choose Y here if you want to access files residing on other
+	  computers using Sun's Network File System protocol.  To compile
+	  this file system support as a module, choose M here: the module
+	  will be called nfs.
 
-	  If you say Y here, you should have said Y to TCP/IP networking
-	  also.  This option would enlarge your kernel by about 27 KB.
+	  To mount file systems exported by NFS servers, you also need to
+	  install the user space mount.nfs command which can be found in
+	  the Linux nfs-utils package, available from http://linux-nfs.org/.
+	  Information about using the mount command is available in the
+	  mount(8) man page.  More detail about the Linux NFS client
+	  implementation is available via the nfs(5) man page.
 
-	  To compile this file system support as a module, choose M here: the
-	  module will be called nfs.
+	  Below you can choose which versions of the NFS protocol are
+	  available in the kernel to mount NFS servers.  Support for NFS
+	  version 2 (RFC 1094) is always available when NFS_FS is selected.
 
-	  If you are configuring a diskless machine which will mount its root
-	  file system over NFS at boot time, say Y here and to "Kernel
-	  level IP autoconfiguration" above and to "Root file system on NFS"
-	  below.  You cannot compile this driver as a module in this case.
-	  There are two packages designed for booting diskless machines over
-	  the net: netboot, available from
-	  <http://ftp1.sourceforge.net/netboot/>, and Etherboot, available
-	  from <http://ftp1.sourceforge.net/etherboot/>.
+	  To configure a system which mounts its root file system via NFS
+	  at boot time, say Y here, select "Kernel level IP
+	  autoconfiguration" in the NETWORK menu, and select "Root file
+	  system on NFS" below.  You cannot compile this file system as a
+	  module in this case.
 
-	  If you don't know what all this is about, say N.
+	  If unsure, say N.
 
 config NFS_V3
-	bool "Provide NFSv3 client support"
+	bool "NFS client support for NFS version 3"
 	depends on NFS_FS
 	help
-	  Say Y here if you want your NFS client to be able to speak version
-	  3 of the NFS protocol.
+	  This option enables support for version 3 of the NFS protocol
+	  (RFC 1813) in the kernel's NFS client.
 
 	  If unsure, say Y.
 
 config NFS_V3_ACL
-	bool "Provide client support for the NFSv3 ACL protocol extension"
+	bool "NFS client support for the NFSv3 ACL protocol extension"
 	depends on NFS_V3
 	help
-	  Implement the NFSv3 ACL protocol extension for manipulating POSIX
-	  Access Control Lists.  The server should also be compiled with
-	  the NFSv3 ACL protocol extension; see the CONFIG_NFSD_V3_ACL
-	  option.
+	  Some NFS servers support an auxiliary NFSv3 ACL protocol that
+	  Sun added to Solaris but never became an official part of the
+	  NFS version 3 protocol.  This protocol extension allows
+	  applications on NFS clients to manipulate POSIX Access Control
+	  Lists on files residing on NFS servers.  NFS servers enforce
+	  ACLs on local files whether this protocol is available or not.
+
+	  Choose Y here if your NFS server supports the Solaris NFSv3 ACL
+	  protocol extension and you want your NFS client to allow
+	  applications to access and modify ACLs on files on the server.
+
+	  Most NFS servers don't support the Solaris NFSv3 ACL protocol
+	  extension.  You can choose N here or specify the "noacl" mount
+	  option to prevent your NFS client from trying to use the NFSv3
+	  ACL protocol.
 
 	  If unsure, say N.
 
 config NFS_V4
-	bool "Provide NFSv4 client support (EXPERIMENTAL)"
+	bool "NFS client support for NFS version 4 (EXPERIMENTAL)"
 	depends on NFS_FS && EXPERIMENTAL
 	select RPCSEC_GSS_KRB5
 	help
-	  Say Y here if you want your NFS client to be able to speak the
-	  newer version 4 of the NFS protocol.
+	  This option enables support for version 4 of the NFS protocol
+	  (RFC 3530) in the kernel's NFS client.
 
-	  Note: Requires auxiliary userspace daemons which may be found on
-		http://www.citi.umich.edu/projects/nfsv4/
+	  To mount NFS servers using NFSv4, you also need to install user
+	  space programs which can be found in the Linux nfs-utils package,
+	  available from http://linux-nfs.org/.
 
 	  If unsure, say N.
 
+config ROOT_NFS
+	bool "Root file system on NFS"
+	depends on NFS_FS=y && IP_PNP
+	help
+	  If you want your system to mount its root file system via NFS,
+	  choose Y here.  This is common practice for managing systems
+	  without local permanent storage.  For details, read
+	  <file:Documentation/filesystems/nfsroot.txt>.
+
+	  Most people say N here.
+
 config NFSD
 	tristate "NFS server support"
 	depends on INET

@@ -1746,20 +1759,6 @@ config NFSD_V4
 	  If unsure, say N.
 
-config ROOT_NFS
-	bool "Root file system on NFS"
-	depends on NFS_FS=y && IP_PNP
-	help
-	  If you want your Linux box to mount its whole root file system (the
-	  one containing the directory /) from some other computer over the
-	  net via NFS (presumably because your box doesn't have a hard disk),
-	  say Y.  Read <file:Documentation/filesystems/nfsroot.txt> for
-	  details.  It is likely that in this case, you also want to say Y to
-	  "Kernel level IP autoconfiguration" so that your box can discover
-	  its network address at boot time.
-
-	  Most people say N here.
-
 config LOCKD
 	tristate

@@ -1800,27 +1799,6 @@ config SUNRPC_XPRT_RDMA
 	  If unsure, say N.
 
-config SUNRPC_BIND34
-	bool "Support for rpcbind versions 3 & 4 (EXPERIMENTAL)"
-	depends on SUNRPC && EXPERIMENTAL
-	default n
-	help
-	  RPC requests over IPv6 networks require support for larger
-	  addresses when performing an RPC bind.  Sun added support for
-	  IPv6 addressing by creating two new versions of the rpcbind
-	  protocol (RFC 1833).
-
-	  This option enables support in the kernel RPC client for
-	  querying rpcbind servers via versions 3 and 4 of the rpcbind
-	  protocol.  The kernel automatically falls back to version 2
-	  if a remote rpcbind service does not support versions 3 or 4.
-
-	  By themselves, these new versions do not provide support for
-	  RPC over IPv6, but the new protocol versions are necessary to
-	  support it.
-
-	  If unsure, say N to get traditional behavior (version 2 rpcbind
-	  requests only).
 
 config RPCSEC_GSS_KRB5
 	tristate "Secure RPC: Kerberos V mechanism (EXPERIMENTAL)"
 	depends on SUNRPC && EXPERIMENTAL

fs/lockd/clntproc.c (+1 −1)

@@ -430,7 +430,7 @@ nlmclnt_test(struct nlm_rqst *req, struct file_lock *fl)
 		 * Report the conflicting lock back to the application.
 		 */
 		fl->fl_start = req->a_res.lock.fl.fl_start;
-		fl->fl_end = req->a_res.lock.fl.fl_start;
+		fl->fl_end = req->a_res.lock.fl.fl_end;
 		fl->fl_type = req->a_res.lock.fl.fl_type;
 		fl->fl_pid = 0;
 		break;

fs/nfs/callback.c (+16 −18)

@@ -27,7 +27,7 @@
 struct nfs_callback_data {
 	unsigned int users;
 	struct svc_serv *serv;
+	struct svc_rqst *rqst;
 	struct task_struct *task;
 };

@@ -91,21 +91,17 @@ nfs_callback_svc(void *vrqstp)
 		svc_process(rqstp);
 	}
 	unlock_kernel();
-	nfs_callback_info.task = NULL;
-	svc_exit_thread(rqstp);
 	return 0;
 }
 
 /*
- * Bring up the server process if it is not already up.
+ * Bring up the callback thread if it is not already up.
  */
 int nfs_callback_up(void)
 {
 	struct svc_serv *serv = NULL;
-	struct svc_rqst *rqstp;
 	int ret = 0;
 
-	lock_kernel();
 	mutex_lock(&nfs_callback_mutex);
 	if (nfs_callback_info.users++ || nfs_callback_info.task != NULL)
 		goto out;

@@ -121,22 +117,23 @@ int nfs_callback_up(void)
 	nfs_callback_tcpport = ret;
 	dprintk("Callback port = 0x%x\n", nfs_callback_tcpport);
 
-	rqstp = svc_prepare_thread(serv, &serv->sv_pools[0]);
-	if (IS_ERR(rqstp)) {
-		ret = PTR_ERR(rqstp);
+	nfs_callback_info.rqst = svc_prepare_thread(serv, &serv->sv_pools[0]);
+	if (IS_ERR(nfs_callback_info.rqst)) {
+		ret = PTR_ERR(nfs_callback_info.rqst);
+		nfs_callback_info.rqst = NULL;
 		goto out_err;
 	}
 
 	svc_sock_update_bufs(serv);
 
 	nfs_callback_info.serv = serv;
-	nfs_callback_info.task = kthread_run(nfs_callback_svc, rqstp,
+	nfs_callback_info.task = kthread_run(nfs_callback_svc,
+					     nfs_callback_info.rqst,
 					     "nfsv4-svc");
 	if (IS_ERR(nfs_callback_info.task)) {
 		ret = PTR_ERR(nfs_callback_info.task);
 		nfs_callback_info.serv = NULL;
+		svc_exit_thread(nfs_callback_info.rqst);
+		nfs_callback_info.rqst = NULL;
 		nfs_callback_info.task = NULL;
-		svc_exit_thread(rqstp);
 		goto out_err;
 	}
 out:

@@ -149,7 +146,6 @@ out:
 	if (serv)
 		svc_destroy(serv);
 	mutex_unlock(&nfs_callback_mutex);
-	unlock_kernel();
 	return ret;
 out_err:
 	dprintk("Couldn't create callback socket or server thread; err = %d\n",

@@ -159,17 +155,19 @@ out_err:
 }
 
 /*
- * Kill the server process if it is not already down.
+ * Kill the callback thread if it's no longer being used.
  */
 void nfs_callback_down(void)
 {
-	lock_kernel();
 	mutex_lock(&nfs_callback_mutex);
 	nfs_callback_info.users--;
-	if (nfs_callback_info.users == 0 && nfs_callback_info.task != NULL)
+	if (nfs_callback_info.users == 0 && nfs_callback_info.task != NULL) {
 		kthread_stop(nfs_callback_info.task);
+		svc_exit_thread(nfs_callback_info.rqst);
+		nfs_callback_info.rqst = NULL;
+		nfs_callback_info.task = NULL;
+	}
 	mutex_unlock(&nfs_callback_mutex);
-	unlock_kernel();
 }
 
 static int nfs_callback_authenticate(struct svc_rqst *rqstp)

fs/nfs/client.c (+8 −5)

@@ -431,14 +431,14 @@ static void nfs_init_timeout_values(struct rpc_timeout *to, int proto,
 {
 	to->to_initval = timeo * HZ / 10;
 	to->to_retries = retrans;
-	if (!to->to_retries)
-		to->to_retries = 2;
 
 	switch (proto) {
 	case XPRT_TRANSPORT_TCP:
 	case XPRT_TRANSPORT_RDMA:
+		if (to->to_retries == 0)
+			to->to_retries = NFS_DEF_TCP_RETRANS;
 		if (to->to_initval == 0)
-			to->to_initval = 60 * HZ;
+			to->to_initval = NFS_DEF_TCP_TIMEO * HZ / 10;
 		if (to->to_initval > NFS_MAX_TCP_TIMEOUT)
 			to->to_initval = NFS_MAX_TCP_TIMEOUT;
 		to->to_increment = to->to_initval;

@@ -450,14 +450,17 @@ static void nfs_init_timeout_values(struct rpc_timeout *to, int proto,
 		to->to_exponential = 0;
 		break;
 	case XPRT_TRANSPORT_UDP:
-	default:
+		if (to->to_retries == 0)
+			to->to_retries = NFS_DEF_UDP_RETRANS;
 		if (!to->to_initval)
-			to->to_initval = 11 * HZ / 10;
+			to->to_initval = NFS_DEF_UDP_TIMEO * HZ / 10;
 		if (to->to_initval > NFS_MAX_UDP_TIMEOUT)
 			to->to_initval = NFS_MAX_UDP_TIMEOUT;
 		to->to_maxval = NFS_MAX_UDP_TIMEOUT;
 		to->to_exponential = 1;
 		break;
+	default:
+		BUG();
 	}
 }

fs/nfs/dir.c (+19 −7)

@@ -133,8 +133,11 @@ nfs_opendir(struct inode *inode, struct file *filp)
 {
 	int res;
 
-	dfprintk(VFS, "NFS: opendir(%s/%ld)\n",
-			inode->i_sb->s_id, inode->i_ino);
+	dfprintk(FILE, "NFS: open dir(%s/%s)\n",
+			filp->f_path.dentry->d_parent->d_name.name,
+			filp->f_path.dentry->d_name.name);
 
 	nfs_inc_stats(inode, NFSIOS_VFSOPEN);
 	lock_kernel();
 	/* Call generic open code in order to cache credentials */

@@ -528,7 +531,7 @@ static int nfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
 	struct nfs_fattr fattr;
 	long		res;
 
-	dfprintk(VFS, "NFS: readdir(%s/%s) starting at cookie %Lu\n",
+	dfprintk(FILE, "NFS: readdir(%s/%s) starting at cookie %llu\n",
 			dentry->d_parent->d_name.name, dentry->d_name.name,
 			(long long)filp->f_pos);
 	nfs_inc_stats(inode, NFSIOS_VFSGETDENTS);

@@ -595,7 +598,7 @@ out:
 	unlock_kernel();
 	if (res > 0)
 		res = 0;
-	dfprintk(VFS, "NFS: readdir(%s/%s) returns %ld\n",
+	dfprintk(FILE, "NFS: readdir(%s/%s) returns %ld\n",
 			dentry->d_parent->d_name.name, dentry->d_name.name,
 			res);
 	return res;

@@ -603,7 +606,15 @@ out:
 static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)
 {
-	mutex_lock(&filp->f_path.dentry->d_inode->i_mutex);
+	struct dentry *dentry = filp->f_path.dentry;
+	struct inode *inode = dentry->d_inode;
+
+	dfprintk(FILE, "NFS: llseek dir(%s/%s, %lld, %d)\n",
+			dentry->d_parent->d_name.name,
+			dentry->d_name.name,
+			offset, origin);
+
+	mutex_lock(&inode->i_mutex);
 	switch (origin) {
 	case 1:
 		offset += filp->f_pos;

@@ -619,7 +630,7 @@ static loff_t nfs_llseek_dir(struct file *filp, loff_t offset, int origin)
 			nfs_file_open_context(filp)->dir_cookie = 0;
 	}
 out:
-	mutex_unlock(&filp->f_path.dentry->d_inode->i_mutex);
+	mutex_unlock(&inode->i_mutex);
 	return offset;
 }

@@ -629,10 +640,11 @@ out:
 static int nfs_fsync_dir(struct file *filp, struct dentry *dentry, int datasync)
 {
-	dfprintk(VFS, "NFS: fsync_dir(%s/%s) datasync %d\n",
+	dfprintk(FILE, "NFS: fsync dir(%s/%s) datasync %d\n",
 			dentry->d_parent->d_name.name, dentry->d_name.name,
 			datasync);
 
+	nfs_inc_stats(dentry->d_inode, NFSIOS_VFSFSYNC);
 	return 0;
 }