                                   inotify
            a powerful yet simple file change notification system



Document started 15 Mar 2005 by Robert Love <rml@novell.com>


(i) User Interface

Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

The first step in using inotify is to initialise an inotify instance:

        int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.

Change events are managed by "watches". A watch is an (object, mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive. See <linux/inotify.h>
for valid events. A watch is referenced by a watch descriptor, or wd.

Watches are added via a path to the file.

Watches on a directory will return events on any files inside of the directory.

Adding a watch is simple:

        int wd = inotify_add_watch (fd, path, mask);

Where "fd" is the return value from inotify_init(), "path" is the path to the
object to watch, and "mask" is the watch mask (see <linux/inotify.h>).
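
For example, to watch a directory for file creation and deletion, one might
write the following (the path and the IN_* bits are only illustrative; see
<linux/inotify.h> for the full set of event flags):

        int wd = inotify_add_watch (fd, "/tmp", IN_CREATE | IN_DELETE);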

You can update an existing watch in the same manner, by passing in a new mask.

An existing watch is removed via

        int ret = inotify_rm_watch (fd, wd);

Events are provided in the form of an inotify_event structure that is read(2)
from a given inotify instance. The filename is of dynamic length and follows
the struct. It is of size len. The filename is padded with null bytes to
ensure proper alignment. This padding is reflected in len.
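
The structure is defined in <linux/inotify.h> and looks roughly like the
following (the header is authoritative):

        struct inotify_event {
                __s32   wd;             /* watch descriptor */
                __u32   mask;           /* watch mask */
                __u32   cookie;         /* cookie to synchronize two events */
                __u32   len;            /* length (including nulls) of name */
                char    name[0];        /* stub for possible name */
        };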

You can slurp multiple events by passing a large buffer, for example

        ssize_t len = read (fd, buf, BUF_LEN);

Where "buf" is a pointer to a buffer, at least BUF_LEN bytes in size, that
receives the packed "inotify_event" structures. The above example will return
as many events as are available and fit in BUF_LEN.
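
A sketch of a read-and-parse loop follows, assuming <stdio.h> and the event
structure above; BUF_LEN and the printf() calls are only placeholders for real
buffer sizing and event handling. Each event occupies
sizeof(struct inotify_event) plus len bytes, so the cursor advances by that
amount:

        #define BUF_LEN (16 * 1024)     /* arbitrary, holds many events */

        char buf[BUF_LEN];
        ssize_t len = read (fd, buf, BUF_LEN);
        ssize_t i = 0;

        while (i < len) {
                struct inotify_event *event =
                        (struct inotify_event *) &buf[i];

                /* print the fixed-size fields, then the name if present */
                printf ("wd=%d mask=%x cookie=%x len=%u",
                        event->wd, event->mask, event->cookie, event->len);
                if (event->len)
                        printf (" name=%s", event->name);
                printf ("\n");

                /* step past this event, including its padded name */
                i += sizeof (struct inotify_event) + event->len;
        }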

Each inotify instance fd is also select()- and poll()-able.
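
For example, a sketch of blocking on the instance with poll(2), assuming
<poll.h>; the five second timeout and the handle_events() helper are purely
hypothetical:

        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        int ret = poll (&pfd, 1, 5000); /* timeout in milliseconds */
        if (ret < 0)
                perror ("poll");
        else if (ret > 0 && (pfd.revents & POLLIN))
                handle_events (fd);     /* read(2) will not block now */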

You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().
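
A sketch, assuming <sys/ioctl.h> and <stdio.h>; "queued" receives the number
of bytes of pending events:

        int queued;

        if (ioctl (fd, FIONREAD, &queued) < 0)
                perror ("ioctl");
        else
                printf ("%d bytes of events are pending\n", queued);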

All watches are destroyed and cleaned up on close.


(ii) Prototypes

        int inotify_init (void);
        int inotify_add_watch (int fd, const char *path, __u32 mask);
        int inotify_rm_watch (int fd, __u32 wd);


(iii) Internal Kernel Implementation

Each inotify instance is associated with an inotify_device structure.

Each watch is associated with an inotify_watch structure. Watches are chained
off of each associated device and each associated inode.
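
As a rough illustration of that chaining (simplified; the real definition
lives in fs/inotify.c), a watch effectively sits on two lists at once, one
per device and one per inode:

        struct inotify_watch {
                struct list_head        d_list; /* entry in device's watch list */
                struct list_head        i_list; /* entry in inode's watch list */
                struct inotify_device   *dev;   /* associated device */
                struct inode            *inode; /* associated inode */
                s32                     wd;     /* watch descriptor */
                u32                     mask;   /* event mask for this watch */
        };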

See fs/inotify.c for the locking and lifetime rules.


(iv) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?

A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount. Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted. Watching a file should not require that it be open.

Q: What is the design decision behind using an fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   more fd's than are feasible to manage, and more fd's than are optimally
   select()-able. Yes, root can bump the per-process fd limit and yes, users
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file; separating the number
   spaces is thus sensible. The current design is what user-space developers
   want: Users initialize inotify, once, and add n watches, requiring but one
   fd and no twiddling with fd limits. Initializing an inotify instance two
   thousand times is silly. If we can implement user-space's preferences
   cleanly--and we can, the idr layer makes stuff like this trivial--then we
   should.

   There are other good arguments. With a single fd, there is a single
   item to block on, which is mapped to a single queue of events. The single
   fd returns all watch events and also any potential out-of-band data. If
   every fd were a separate watch,

   - There would be no way to get event ordering. Events on file foo and
     file bar would pop poll() on both fd's, but there would be no way to tell
     which happened first. A single queue trivially gives you ordering. Such
     ordering is crucial to existing applications such as Beagle. Imagine
     "mv a b ; mv b a" events without ordering.

   - We'd have to maintain n fd's and n internal queues with state,
     versus just one. It is a lot messier in the kernel. A single, linear
     queue is the data structure that makes sense.

   - User-space developers prefer the current API. The Beagle guys, for
     example, love it. Trust me, I asked. It is not a surprise: Who'd want
     to manage and block on 1000 fd's via select?

   - No way to get out-of-band data.

   - 1024 is still too low. ;-)

   When you talk about designing a file change notification system that
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface. It is too heavy.

   Additionally, it _is_ possible to have more than one instance and
   juggle more than one queue and thus more than one associated fd. There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
   Signals are a terrible, terrible interface for file notification. Or for
   anything, for that matter. The ideal solution, from all perspectives, is a
   file descriptor-based one that allows basic file I/O and poll/select.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls. We decided to implement a
   family of system calls because that is the preferred approach for new kernel
   interfaces. The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls. System calls beat ioctls.
Robert Love0eeca282005-07-12 17:06:03 -0400151