
Commit a670fcb4 authored by Jeff Garzik

/spare/repo/netdev-2.6 branch 'master'

parents 327309e8 b0825488
+3 −3
@@ -1624,10 +1624,10 @@ E: ajoshi@shell.unixbox.com
D: fbdev hacking

N: Jesper Juhl
E: juhl-lkml@dif.dk
D: Various small janitor fixes, cleanups etc.
E: jesper.juhl@gmail.com
D: Various fixes, cleanups and minor features.
S: Lemnosvej 1, 3.tv
S: 2300 Copenhagen S
S: 2300 Copenhagen S.
S: Denmark

N: Jozsef Kadlecsik
+1 −0
@@ -65,6 +65,7 @@ o isdn4k-utils 3.1pre1 # isdnctrl 2>&1|grep version
o  nfs-utils              1.0.5                   # showmount --version
o  procps                 3.2.0                   # ps --version
o  oprofile               0.9                     # oprofiled --version
o  udev                   058                     # udevinfo -V

Kernel compilation
==================
+2 −0
@@ -41,6 +41,7 @@ COPYING
CREDITS
CVS
ChangeSet
Image
Kerntypes
MODS.txt
Module.symvers
@@ -103,6 +104,7 @@ logo_*.c
logo_*_clut224.c
logo_*_mono.c
lxdialog
mach-types.h
make_times_h
map
maui_boot.h
+2 −2
@@ -103,11 +103,11 @@ Who: Jody McIntyre <scjody@steamballoon.com>
---------------------------

What:	register_serial/unregister_serial
When:	December 2005
When:	September 2005
Why:	This interface does not allow serial ports to be registered against
	a struct device, and as such does not allow correct power management
	of such ports.  8250-based ports should use serial8250_register_port
	and serial8250_unregister_port instead.
	and serial8250_unregister_port, or platform devices instead.
Who:	Russell King <rmk@arm.linux.org.uk>

---------------------------
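
As a rough, unofficial illustration of the serial8250_register_port route
mentioned above (the I/O port, IRQ and clock below are the classical PC COM1
values, chosen purely for illustration; check <linux/serial_8250.h> and
<linux/serial_core.h> for the authoritative interface):

	#include <linux/init.h>
	#include <linux/module.h>
	#include <linux/string.h>
	#include <linux/serial_core.h>
	#include <linux/serial_8250.h>

	/* Hypothetical sketch: register one 8250-style port, drop it on exit. */
	static int my_line;

	static int __init my_port_init (void)
	{
		struct uart_port port;

		memset (&port, 0, sizeof (port));
		port.iobase  = 0x3f8;		/* illustration only: legacy COM1 */
		port.irq     = 4;
		port.uartclk = 1843200;
		port.iotype  = UPIO_PORT;
		port.flags   = UPF_BOOT_AUTOCONF;

		my_line = serial8250_register_port (&port);
		return my_line < 0 ? my_line : 0;
	}

	static void __exit my_port_exit (void)
	{
		serial8250_unregister_port (my_line);
	}

	module_init (my_port_init);
	module_exit (my_port_exit);
	MODULE_LICENSE ("GPL");
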
+45 −32
@@ -5,14 +5,18 @@

Document started 15 Mar 2005 by Robert Love <rml@novell.com>


(i) User Interface

Inotify is controlled by a set of three sys calls 
Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

First step in using inotify is to initialise an inotify instance
First step in using inotify is to initialise an inotify instance:

	int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.
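
As a rough sketch, initialisation with error checking might look like the
following (assuming a libc that exposes the call via <sys/inotify.h>; at the
time of writing the definitions live in <linux/inotify.h>):

	#include <stdio.h>
	#include <sys/inotify.h>

	/* Hypothetical helper: obtain an inotify instance, reporting failure. */
	static int open_inotify (void)
	{
		int fd = inotify_init ();

		if (fd < 0)
			perror ("inotify_init");
		return fd;
	}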

Change events are managed by "watches".  A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive.  See <linux/inotify.h>
@@ -22,43 +26,52 @@ Watches are added via a path to the file.

Watches on a directory will return events on any files inside of the directory.

Adding a watch is simple,
Adding a watch is simple:

	int wd = inotify_add_watch (fd, path, mask);

You can add a large number of files via something like

	for each file to watch {
		int wd = inotify_add_watch (fd, file, mask);
	}
Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).

You can update an existing watch in the same manner, by passing in a new mask.

An existing watch is removed via the INOTIFY_IGNORE ioctl, for example
An existing watch is removed via

	inotify_rm_watch (fd, wd);
	int ret = inotify_rm_watch (fd, wd);
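
Putting the watch calls together, a hypothetical sketch (the paths and masks
below are invented purely for illustration) might be:

	#include <stdio.h>
	#include <sys/inotify.h>	/* assumes libc wrappers */

	/* Hypothetical sketch: add two watches, update one mask, drop a watch. */
	static void watch_example (int fd)
	{
		int wd_home, wd_tmp;

		wd_home = inotify_add_watch (fd, "/home", IN_MODIFY | IN_CREATE);
		wd_tmp  = inotify_add_watch (fd, "/tmp", IN_DELETE);
		if (wd_home < 0 || wd_tmp < 0) {
			perror ("inotify_add_watch");
			return;
		}

		/* Updating is just another add with a new mask for the same path. */
		wd_tmp = inotify_add_watch (fd, "/tmp", IN_DELETE | IN_CREATE);
		if (wd_tmp < 0)
			perror ("inotify_add_watch (update)");

		/* Stop watching /home. */
		if (inotify_rm_watch (fd, wd_home) < 0)
			perror ("inotify_rm_watch");
	}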

Events are provided in the form of an inotify_event structure that is read(2)
from a inotify instance fd.  The filename is of dynamic length and follows the 
struct. It is of size len.  The filename is padded with null bytes to ensure 
proper alignment.  This padding is reflected in len.
from a given inotify instance.  The filename is of dynamic length and follows
the struct. It is of size len.  The filename is padded with null bytes to
ensure proper alignment.  This padding is reflected in len.

You can slurp multiple events by passing a large buffer, for example

	size_t len = read (fd, buf, BUF_LEN);

Will return as many events as are available and fit in BUF_LEN.
Where "buf" is a pointer to an array of "inotify_event" structures at least
BUF_LEN bytes in size.  The above example will return as many events as are
available and fit in BUF_LEN.
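
A sketch of walking such a buffer might look like this (BUF_LEN and the output
format are arbitrary choices for illustration):

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/inotify.h>	/* assumes libc wrappers */

	#define BUF_LEN		(16 * 1024)	/* arbitrary; big enough for a burst */

	/* Hypothetical sketch: read once, then step through the variable-length
	 * event records contained in the buffer. */
	static void drain_events (int fd)
	{
		char buf[BUF_LEN];
		ssize_t len = read (fd, buf, BUF_LEN);
		ssize_t i = 0;

		while (i < len) {
			struct inotify_event *event = (struct inotify_event *) &buf[i];

			printf ("wd=%d mask=%#x name=%s\n", event->wd,
				(unsigned int) event->mask,
				event->len ? event->name : "");

			/* Each record is the fixed struct plus event->len name bytes. */
			i += sizeof (struct inotify_event) + event->len;
		}
	}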

each inotify instance fd is also select()- and poll()-able.
Each inotify instance fd is also select()- and poll()-able.

You can find the size of the current event queue via the FIONREAD ioctl.
You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().
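
A rough sketch combining the two, using the standard poll(2) and ioctl(2)
calls:

	#include <stdio.h>
	#include <poll.h>
	#include <sys/ioctl.h>
	#include <sys/inotify.h>	/* assumes libc wrappers */

	/* Hypothetical sketch: block until the queue is non-empty, then ask how
	 * many bytes of events are pending via FIONREAD. */
	static int wait_and_measure (int fd)
	{
		struct pollfd pfd = { .fd = fd, .events = POLLIN };
		int queued = 0;

		if (poll (&pfd, 1, -1) < 0) {
			perror ("poll");
			return -1;
		}
		if (ioctl (fd, FIONREAD, &queued) < 0) {
			perror ("ioctl(FIONREAD)");
			return -1;
		}
		printf ("%d bytes of events pending\n", queued);
		return 0;
	}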

All watches are destroyed and cleaned up on close.


(ii) Internal Kernel Implementation
(ii)

Prototypes:

	int inotify_init (void);
	int inotify_add_watch (int fd, const char *path, __u32 mask);
	int inotify_rm_watch (int fd, __u32 mask);


Each open inotify instance is associated with an inotify_device structure.
(iii) Internal Kernel Implementation

Each inotify instance is associated with an inotify_device structure.

Each watch is associated with an inotify_watch structure.  Watches are chained
off of each associated device and each associated inode.
@@ -66,7 +79,7 @@ off of each associated device and each associated inode.
See fs/inotify.c for the locking and lifetime rules.


(iii) Rationale
(iv) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?
@@ -75,9 +88,9 @@ A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted.
   unmounted.  Watching a file should not require that it be open.

Q: What is the design decision behind using an-fd-per-device as opposed to
Q: What is the design decision behind using an-fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
@@ -86,8 +99,8 @@ A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file, separating the number
   spaces is thus sensible.  The current design is what user-space developers
   want: Users initialize inotify, once, and add n watches, requiring but one fd
   and no twiddling with fd limits.  Initializing an inotify instance two
   want: Users initialize inotify, once, and add n watches, requiring but one
   fd and no twiddling with fd limits.  Initializing an inotify instance two
   thousand times is silly.  If we can implement user-space's preferences 
   cleanly--and we can, the idr layer makes stuff like this trivial--then we 
   should.
@@ -111,9 +124,6 @@ A: An fd-per-watch quickly consumes more file descriptors than are allowed,
     example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
     to manage and block on 1000 fd's via select?

   - You'd have to manage the fd's, as an example: Call close() when you
     received a delete event.

   - No way to get out of band data.

   - 1024 is still too low.  ;-)
@@ -122,6 +132,11 @@ A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface.  It is too heavy.

   Additionally, it _is_ possible to  more than one instance  and
   juggle more than one queue and thus more than one associated fd.  There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
@@ -131,8 +146,6 @@ A: The poor user-space interface is the second biggest problem with dnotify.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls.  We decided to implement a
   family of system calls because that is the preffered approach for new kernel
   features and it means our user interface requirements.

   Additionally, it _is_ possible to  more than one instance  and
   juggle more than one queue and thus more than one associated fd.
   interfaces.  The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls.  System calls beat ioctls.