fio
---

fio is a tool that will spawn a number of threads doing a particular
type of io action as specified by the user. fio takes a number of
global parameters, each inherited by the threads unless parameters
given to a specific job override that setting.


Source
------

fio resides in a git repo; the canonical place is:

git://brick.kernel.dk/data/git/fio.git
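
To follow development you can clone the repo directly; the local
directory name used here is just an example:

$ git clone git://brick.kernel.dk/data/git/fio.git fio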

Snapshots are frequently generated as well, and they include the git
metadata. You can download them here:

http://brick.kernel.dk/snaps/

Pascal Bleser <guru@unixtech.be> has fio RPMs in his repository; you
can find them here:

http://linux01.gwdg.de/~pbleser/rpm-navigation.php?cat=System/fio


Building
--------

Just type 'make' and 'make install'. If on FreeBSD, for now you have to
specify the FreeBSD Makefile with -f, e.g.:

$ make -f Makefile.FreeBSD && make -f Makefile.FreeBSD install

Likewise on OpenSolaris, use the Makefile.solaris to compile there.
This might change in the future if I opt for an autoconf type setup.
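
The OpenSolaris build would then presumably be invoked along the same
lines (assuming Makefile.solaris provides the same install target):

$ make -f Makefile.solaris && make -f Makefile.solaris install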


Options
-------

$ fio
        -s        IO is sequential
        -b        Block size in KiB for each io
        -t <sec>  Runtime in seconds
        -r        For random io, sequence must be repeatable
        -R <on>   If one thread fails to meet rate, quit all
        -o <on>   Use direct IO if 1, buffered if 0
        -l        Generate per-job latency logs
        -w        Generate per-job bandwidth logs
        -f <file> Read <file> for job descriptions
        -h        Print help info
        -v        Print version information and exit
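
As a hypothetical invocation (the job file name here is made up),
latency and bandwidth logging could be turned on for all jobs in a
file with:

$ fio -l -w -f jobfile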

The <jobs> format is as follows:

name=x          Use 'x' as the identifier for this job.
directory=x     Use 'x' as the top level directory for storing files
rw=x            'x' may be: read, randread, write, randwrite,
                rw (read-write mix), randrw (read-write random mix)
rwmixcycle=x    Base cycle for switching between read and write
                in msecs.
rwmixread=x     'x' percentage of rw mix ios will be reads. If
                rwmixwrite is also given, the one given last will
                be used if they don't add up to 100%.
rwmixwrite=x    'x' percentage of rw mix ios will be writes. See
                rwmixread.
size=x          Set file size to x bytes (x string can include k/m/g)
ioengine=x      'x' may be: aio/libaio/linuxaio for Linux aio,
                posixaio for POSIX aio, sync for regular read/write io,
                mmap for mmap'ed io, splice for using splice/vmsplice,
                or sgio for direct SG_IO io. The latter only works on
                Linux, on SCSI or SCSI-like devices (such as
                usb-storage or sata/libata driven devices).
iodepth=x       For async io, allow 'x' ios in flight
overwrite=x     If 'x', lay out a write file first.
prio=x          Run io at prio 'x'; 0-7 is the kernel allowed range
prioclass=x     Run io at prio class 'x'
bs=x            Use 'x' for thread blocksize. May include k/m postfix.
bsrange=x-y     Mix thread block sizes randomly between x and y. May
                also include k/m postfix.
direct=x        1 for direct IO, 0 for buffered IO
thinktime=x     "Think" x usec after each io
rate=x          Throttle rate to x KiB/sec
ratemin=x       Quit if rate of x KiB/sec can't be met
ratecycle=x     ratemin averaged over x msecs
cpumask=x       Only allow job to run on CPUs defined by mask.
fsync=x         If writing, fsync after every x blocks have been written
startdelay=x    Start this thread x seconds after startup
timeout=x       Terminate x seconds after startup
offset=x        Start io at offset x (x string can include k/m/g)
invalidate=x    Invalidate page cache for file prior to doing io
sync=x          Use sync writes if x and writing
mem=x           If x == malloc, use malloc for buffers. If x == shm,
                use shm for buffers. If x == mmap, use anon mmap.
exitall         When one thread quits, terminate the others
bwavgtime=x     Average bandwidth stats over an x msec window.
create_serialize=x  If 'x', serialize file creation.
create_fsync=x  If 'x', run fsync() after file creation.
end_fsync=x     If 'x', run fsync() after end-of-job.
loops=x         Run the job 'x' number of times.
verify=x        If 'x' == md5, use md5 for verifies. If 'x' == crc32,
                use crc32 for verifies. md5 is 'safer', but crc32 is
                a lot faster. Only makes sense for writing to a file.
stonewall       Wait for preceding jobs to end before running.
numjobs=x       Create 'x' similar entries for this job
thread          Use pthreads instead of forked jobs
zonesize=x
zoneskip=y      Zone options must be paired. If given, the job
                will skip y bytes for every x read/written. This
                can be used to gauge hard drive speed over the entire
                platter, without reading everything. Both x/y can
                include k/m/g suffix.
iolog=x         Open and read io pattern from file 'x'. The file must
                contain one io action per line in the following format:
                rw, offset, length
                where rw=0/1 for read/write, and the offset and length
                entries are in bytes (see the short sketch after this
                list).
write_iolog=x   Write an iolog to file 'x' in the same format as iolog.
                The iolog options are mutually exclusive; if both are
                given, the read iolog will be used.
lockmem=x       Lock down x amount of memory on the machine, to
                simulate a machine with less memory available. x can
                include k/m/g suffix.
nice=x          Run job at given nice value.
exec_prerun=x   Run 'x' before job io is begun.
exec_postrun=x  Run 'x' after job io has finished.
ioscheduler=x   Use ioscheduler 'x' for this job.
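
As a sketch of the iolog format described above (the offsets and lengths
are just made-up values), a two line iolog that reads 4096 bytes at
offset 0 and then writes 4096 bytes at offset 4096 would look like:

0, 0, 4096
1, 4096, 4096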

Examples using a job file
-------------------------

A sample job file with one read job and one write job could look like
this:

[read_file]
rw=read
bs=4096

[write_file]
rw=write
bs=16384

And fio would be invoked as:

$ fio -o1 -s -f file_with_above

A second example, running three read jobs at different priorities,
could look like this:

[rf1]
rw=read
prio=6

[rf2]
rw=read
prio=3

[rf3]
rw=read
prio=0
direct=1

And fio would be invoked as:

$ fio -o0 -s -b4096 -f file_with_above

'global' is a reserved keyword. When used as the filename, it sets the
default options for the threads following that section. It is possible
to have more than one global section in the file, as it only affects
subsequent jobs.
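
As an illustration of that, a sketch using a global section (the
directory, sizes, and job names here are only examples) might look
like this:

[global]
directory=/tmp
size=128m
bs=4k
ioengine=libaio
iodepth=8

[mixed]
rw=randrw
rwmixread=70

[verified_write]
rw=write
verify=md5

Both jobs inherit the directory, size, block size, and io engine
settings from the global section, while each sets its own read/write
pattern.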

Also see the examples/ dir for sample job files.


Interpreting the output
-----------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads now running: 2 : [ww] [5.73% done]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle    Run
----    ---
P               Thread setup, but not started.
C               Thread created and running, but not doing anything yet.
        R       Running, doing sequential reads.
        r       Running, doing random reads.
        W       Running, doing sequential writes.
        w       Running, doing random writes.
        V       Running, doing verification of written data.
E               Thread exited, not reaped by main thread yet.
_               Thread reaped.

The other values are fairly self-explanatory - the number of threads
currently running and doing io, and the estimated completion percentage.

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
    slat (msec): min=    0, max=  136, avg= 0.03, dev= 1.92
    clat (msec): min=    0, max=  631, avg=48.50, dev=86.82
    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu        : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order
listed, they denote:

io=             Number of megabytes of io performed
bw=             Average bandwidth rate
runt=           The runtime of that thread
slat=           Submission latency (avg being the average, dev being the
                standard deviation). This is the time it took to submit
                the io. For sync io, the slat is really the completion
                latency, since queue/complete is one operation there.
clat=           Completion latency. Same names as slat, this denotes the
                time from submission to completion of the io pieces. For
                sync io, clat will usually be equal (or very close) to 0,
                as the time from submit to complete is basically just
                CPU time (io has already been done, see slat explanation).
bw=             Bandwidth. Same names as the slat/clat stats, but also
                includes an approximate percentage of total aggregate
                bandwidth this thread received in this group. This last
                value is only really useful if the threads in this group
                are on the same disk, since they are then competing for
                disk access.
cpu=            CPU usage. User and system time, along with the number
                of context switches this thread went through.
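
As a quick worked check of the per= value using the sample output above:
this thread averaged 664.02 KiB/s, and the group's aggregate write
bandwidth (shown in the group statistics below) is 1302 KiB/s, so
664.02 / 1302 is roughly 0.51, which matches the per=51.00% shown.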

After each client has been listed, the group statistics are printed.
They will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=             Number of megabytes of io performed.
aggrb=          Aggregate bandwidth of threads in this group.
minb=           The minimum average bandwidth a thread saw.
maxb=           The maximum average bandwidth a thread saw.
mint=           The minimum runtime of a thread.
maxt=           The maximum runtime of a thread.
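
The sample numbers are consistent with aggrb being the total io divided
by the longest runtime: for the WRITE line, 64MiB is 65536KiB, and 65536
divided by 50.32 seconds (the maxt) gives roughly 1302, the aggrb shown.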

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=            Number of ios performed by all groups.
merge=          Number of merges in the io scheduler.
ticks=          Number of ticks we kept the disk busy.
in_queue=       Total time spent in the disk queue.
util=           The disk utilization. A value of 100% means we kept the
                disk busy constantly, 50% would be a disk idling half of
                the time.