Table of contents
-----------------

1. Overview
2. How fio works
3. Running fio
4. Job file format
5. Detailed list of parameters
6. Normal output
7. Terse output


1.0 Overview and history
------------------------
fio was originally written to save me the hassle of writing special test
case programs when I wanted to test a specific workload, either for
performance reasons or to find/reproduce a bug. The process of writing
such a test app can be tiresome, especially if you have to do it often.
Hence I needed a tool that would be able to simulate a given io workload
without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number
of processes or threads involved, and they can each be using their own
way of generating io. You could have someone dirtying large amounts of
memory in a memory mapped file, or maybe several threads issuing
reads using asynchronous io. fio needed to be flexible enough to
simulate both of these cases, and many more.

2.0 How fio works
-----------------
The first step in getting fio to simulate a desired io workload is
writing a job file describing that specific setup. A job file may contain
any number of threads and/or files - the typical contents of the job file
are a global section defining shared parameters, and one or more job
sections describing the jobs involved. When run, fio parses this file
and sets everything up as described. If we break down a job from top to
bottom, it contains the following basic parameters:

	IO type		Defines the io pattern issued to the file(s).
			We may only be reading sequentially from the
			file(s), or we may be writing randomly. Or even
			mixing reads and writes, sequentially or randomly.

	Block size	How large are the chunks of io we issue? This may
			be a single value, or it may describe a range of
			block sizes.

	IO size		How much data are we going to be reading/writing?

	IO engine	How do we issue io? We could be memory mapping the
			file, we could be using regular read/write, we
			could be using splice, async io, or even
			SG (SCSI generic sg).

	IO depth	If the io engine is async, how large a queueing
			depth do we want to maintain?

	IO mode		Should we be doing buffered io, or direct/raw io?

	Num files	How many files are we spreading the workload over?

	Num threads	How many threads or processes should we spread
			this workload over?

The above are the basic parameters defined for a workload. In addition,
there's a multitude of parameters that modify other aspects of how this
job behaves.
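
As a purely illustrative sketch, the basic parameters above might map to
a job file like the following (the option names used here are all covered
in section 5, the values are arbitrary):

; -- start job file --
[sketch]
rw=randread		; IO type
bs=4k			; Block size
size=128m		; IO size
ioengine=libaio		; IO engine
iodepth=8		; IO depth
direct=1		; IO mode (direct, non-buffered io)
nrfiles=2		; Num files
numjobs=4		; Num threads/processes
; -- end job file --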


3.0 Running fio
---------------
See the README file for command line parameters, there are only a few
of them.

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters:

$ fio job_file

and it will start doing what the job_file tells it to do. You can give
more than one job file on the command line; fio will serialize the running
of those files. Internally that is the same as using the 'stonewall'
parameter described in the parameter section.

fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
such as memory locking, io scheduler switching, and decreasing the nice value.


4.0 Job file format
-------------------
As previously described, fio accepts one or more job files describing
what it is supposed to do. The job file format is the classic ini file,
where the names enclosed in [] brackets define the job name. You are free
to use any ascii name you want, except 'global' which has special meaning.
A global section sets defaults for the jobs described in that file. A job
may override a global section parameter, and a job file may even have
several global sections if so desired. A job is only affected by a global
section residing above it. If the first character in a line is a ';', the
entire line is discarded as a comment.

So let's look at a really simple job file that defines two threads, each
randomly reading from a 128MiB file.

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]

; -- end job file --

As you can see, the job file sections themselves are empty as all the
described parameters are shared. As no filename= option is given, fio
makes up a filename for each of the jobs as it sees fit.

Let's look at an example that has a number of processes writing randomly
to files.

; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4

; -- end job file --

Here we have no global section, as we only have one job defined anyway.
We want to use async io here, with a depth of 4 for each file. We also
increased the buffer size used to 32KiB and set numjobs to 4 to
fork 4 identical jobs. The result is 4 processes each randomly writing
to their own 64MiB file.

fio ships with a few example job files; you can also look there for
inspiration.


5.0 Detailed list of parameters
-------------------------------

This section describes in detail each parameter associated with a job.
Some parameters take an option of a given type, such as an integer or
a string. The following types are used:

str	String. This is a sequence of alpha characters.
int	Integer. A whole number value, may be negative.
siint	SI integer. A whole number value, which may contain a postfix
	describing the base of the number. Accepted postfixes are k/m/g,
	meaning kilo, mega, and giga. So if you want to specify 4096,
	you could either write out '4096' or just give 4k. The postfixes
	signify base 2 values, so 1024 is 1k and 1024k is 1m and so on.
bool	Boolean. Usually parsed as an integer, however only defined for
	true and false (1 and 0).
irange	Integer range with postfix. Allows a value range to be given, such
	as 1024-4096. Also see siint.
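
For illustration, here is a hypothetical job section showing one option
of each type (the values are arbitrary):

; -- start job file --
[types-example]
name=example		; str
loops=2			; int
size=4m			; siint (4m equals 4096k equals 4194304 bytes)
direct=1		; bool
bsrange=1k-16k		; irange
; -- end job file --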

With the above in mind, here follows the complete list of fio job
parameters.

name=str	ASCII name of the job. This may be used to override the
		name printed by fio for this job. Otherwise the job
		name is used.

directory=str	Prefix filenames with this directory. Used to place files
		in a different location than "./".

filename=str	Fio normally makes up a filename based on the job name,
		thread number, and file number. If you want to share
		files between threads in a job or several jobs, specify
		a filename for each of them to override the default.
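
For instance, two jobs can be made to operate on the same file by giving
both the same filename (a sketch; 'testfile' is just a made-up name):

; -- start job file --
[global]
filename=testfile	; hypothetical name, shared by both jobs
size=64m

[reader]
rw=read

[writer]
rw=write
; -- end job file --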

rw=str		Type of io pattern. Accepted values are:

			read		Sequential reads
			write		Sequential writes
			randwrite	Random writes
			randread	Random reads
			rw		Sequential mixed reads and writes
			randrw		Random mixed reads and writes

		For the mixed io types, the default is to split them 50/50.
		For certain types of io the result may still be skewed a bit,
		since the speed may be different.
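
Combined with the rwmixread parameter described further down, a skewed
mixed workload might look like this (illustrative values):

; -- start job file --
[mixed]
rw=randrw
rwmixread=75		; roughly 75% reads, 25% writes
size=128m
; -- end job file --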

size=siint	The total size of file io for this job. This may describe
		the size of the single file the job uses, or it may be
		divided between the number of files in the job. If the
		file already exists, the file size will be adjusted to this
		size if larger than the current file size. If this parameter
		is not given and the file exists, the file size will be used.

bs=siint	The block size used for the io units. Defaults to 4k.

bsrange=irange	Instead of giving a single block size, specify a range
		and fio will mix the issued io block sizes. The issued
		io unit will always be a multiple of the minimum value
		given.

nrfiles=int	Number of files to use for this job. Defaults to 1.

ioengine=str	Defines how the job issues io to the file. The following
		types are defined:

			sync	Basic read(2) or write(2) io. lseek(2) is
				used to position the io location.

			libaio	Linux native asynchronous io.

			posixaio glibc posix asynchronous io.

			mmap	File is memory mapped and data copied
				to/from using memcpy(3).

			splice	splice(2) is used to transfer the data and
				vmsplice(2) to transfer data from user
				space to the kernel.

			sg	SCSI generic sg v3 io. May either be
				synchronous using the SG_IO ioctl, or if
				the target is an sg character device
				we use read(2) and write(2) for asynchronous
				io.

iodepth=int	This defines how many io units to keep in flight against
		the file. The default is 1 for each file defined in this
		job, but can be overridden with a larger value for higher
		concurrency.

direct=bool	If value is true, use non-buffered io. This is usually
		O_DIRECT. Defaults to true.

offset=siint	Start io at the given offset in the file. The data before
		the given offset will not be touched. This effectively
		caps the file size at real_size - offset.

fsync=int	If writing to a file, issue a sync of the dirty data
		for every number of blocks given. For example, if you give
		32 as a parameter, fio will sync the file for every 32
		writes issued. If fio is using non-buffered io, we may
		not sync the file. The exception is the sg io engine, which
		synchronizes the disk cache anyway.
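
For example, a buffered write job that syncs the dirty data every 32
writes could be sketched like so:

; -- start job file --
[syncing-writer]
rw=write
bs=4k
size=32m
direct=0		; buffered io, so the fsync option applies
fsync=32		; sync the file every 32 writes issued
; -- end job file --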

overwrite=bool	If writing to a file, set up the file first and do overwrites.

end_fsync=bool	If true, fsync file contents when the job exits.

rwmixcycle=int	Value in milliseconds describing how often to switch between
		reads and writes for a mixed workload. The default is
		500 msecs.

rwmixread=int	How large a percentage of the mix should be reads.

rwmixwrite=int	How large a percentage of the mix should be writes. If both
		rwmixread and rwmixwrite are given and the values do not add
		up to 100%, the latter of the two will be used to override
		the first.

nice=int	Run the job with the given nice value. See man nice(2).

prio=int	Set the io priority value of this job. Linux limits us to
		a positive value between 0 and 7, with 0 being the highest.
		See man ionice(1).

prioclass=int	Set the io priority class. See man ionice(1).

thinktime=int	Stall the job x microseconds after an io has completed before
		issuing the next. May be used to simulate processing being
		done by an application.

rate=int	Cap the bandwidth used by this job to this number of KiB/sec.

ratemin=int	Tell fio to do whatever it can to maintain at least this
		bandwidth.

ratecycle=int	Average bandwidth for 'rate' and 'ratemin' over this number
		of milliseconds.

cpumask=int	Set the CPU affinity of this job. The parameter given is a
		bitmask of allowed CPUs the job may run on. See man
		sched_setaffinity(2).

startdelay=int	Start this job the specified number of seconds after fio
		has started. Only useful if the job file contains several
		jobs, and you want to delay starting some jobs to a certain
		time.

timeout=int	Tell fio to terminate processing after the specified number
		of seconds. It can be quite hard to determine for how long
		a specified job will run, so this parameter is handy to
		cap the total runtime to a given time.
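
The two timing parameters can be combined to stagger jobs and bound the
total runtime, for example (illustrative values):

; -- start job file --
[early]
rw=read
size=128m
timeout=60		; stop after 60 seconds regardless of progress

[late]
rw=write
size=128m
startdelay=10		; start 10 seconds after fio starts
timeout=60
; -- end job file --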

invalidate=bool	Invalidate the buffer/page cache parts for this file prior
		to starting io. Defaults to true.

sync=bool	Use sync io for buffered writes. For the majority of the
		io engines, this means using O_SYNC.

mem=str		Fio can use various types of memory as the io unit buffer.
		The allowed values are:

			malloc	Use memory from malloc(3) as the buffers.

			shm	Use shared memory as the buffers. Allocated
				through shmget(2).

			mmap	Use anonymous memory maps as the buffers.
				Allocated through mmap(2).

		The area allocated is a function of the maximum allowed
		bs size for the job, multiplied by the io depth given.

exitall		When one job finishes, terminate the rest. The default is
		to wait for each job to finish; sometimes that is not the
		desired action.

bwavgtime=int	Average the calculated bandwidth over the given time. Value
		is specified in milliseconds.

create_serialize=bool	If true, serialize the file creation for the jobs.
			This may be handy to avoid interleaving of data
			files, which may greatly depend on the filesystem
			used and even the number of processors in the system.

create_fsync=bool	fsync the data file after creation. This is the
			default.

unlink		Unlink the job files when done. fio defaults to doing this
		if it created the file itself.

loops=int	Run the specified number of iterations of this job. Used
		to repeat the same workload a given number of times. Defaults
		to 1.

verify=str	If writing to a file, fio can verify the file contents
		after each iteration of the job. The allowed values are:

			md5	Use an md5 sum of the data area and store
				it in the header of each block.

			crc32	Use a crc32 sum of the data area and store
				it in the header of each block.

		This option can be used for repeated burnin tests of a
		system to make sure that the written data is also
		correctly read back.
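
A simple burnin job using verification could look like this (illustrative
values):

; -- start job file --
[burnin]
rw=write
bs=4k
size=256m
verify=crc32		; check file contents after each iteration
loops=4			; repeat the write/verify cycle four times
; -- end job file --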

stonewall	Wait for preceding jobs in the job file to exit before
		starting this one. Can be used to insert serialization
		points in the job file.
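
For example, stonewall can force a write job to fully complete before a
read job begins:

; -- start job file --
[writer]
rw=write
size=64m

[reader]
stonewall		; wait for the 'writer' job to exit first
rw=read
size=64m
; -- end job file --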

numjobs=int	Create the specified number of clones of this job. May be
		used to set up a larger number of threads/processes doing
		the same thing.

thread		fio defaults to forking jobs, however if this option is
		given, fio will use pthread_create(3) to create threads
		instead.

zonesize=siint	Divide a file into zones of the specified size. See zoneskip.

zoneskip=siint	Skip the specified number of bytes when zonesize data has
		been read. The two zone options can be used to only do
		io on zones of a file.
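
As a sketch of how the two zone options might be combined (my reading of
the descriptions above; values are arbitrary):

; -- start job file --
[zoned-reader]
rw=read
size=1g
zonesize=256m		; read 256MiB of data per zone
zoneskip=768m		; then skip ahead 768MiB before the next zone
; -- end job file --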

write_iolog=str	Write the issued io patterns to the specified file. See
		read_iolog.

read_iolog=str	Open an iolog with the specified file name and replay the
		io patterns it contains. This can be used to store a
		workload and replay it sometime later.

write_bw_log	If given, write a bandwidth log of the jobs in this job
		file. Can be used to store data of the bandwidth of the
		jobs in their lifetime. The included fio_generate_plots
		script uses gnuplot to turn these text files into nice
		graphs.

write_lat_log	Same as write_bw_log, except that this option stores io
		completion latencies instead.

lockmem=siint	Pin down the specified amount of memory with mlock(2). Can
		potentially be used instead of removing memory or booting
		with less memory to simulate a smaller amount of memory.

exec_prerun=str	Before running this job, issue the command specified
		through system(3).

exec_postrun=str After the job completes, issue the command specified
		 through system(3).

ioscheduler=str	Attempt to switch the device hosting the file to the specified
		io scheduler before running.

cpuload=int	If the job is a CPU cycle eater, attempt to use the specified
		percentage of CPU cycles.

cpuchunks=int	If the job is a CPU cycle eater, split the load into
		cycles of the given time. In milliseconds.


6.0 Interpreting the output
---------------------------

fio spits out a lot of output. While running, fio will display the
status of the jobs created. An example of that would be:

Threads running: 1: [_r] [24.79% done] [eta 00h:01m:31s]

The characters inside the square brackets denote the current status of
each thread. The possible values (in typical life cycle order) are:

Idle	Run
----    ---
P		Thread setup, but not started.
C		Thread created.
I		Thread initialized, waiting.
	R	Running, doing sequential reads.
	r	Running, doing random reads.
	W	Running, doing sequential writes.
	w	Running, doing random writes.
	M	Running, doing mixed sequential reads/writes.
	m	Running, doing mixed random reads/writes.
	F	Running, currently waiting for fsync().
V		Running, doing verification of written data.
E		Thread exited, not reaped by main thread yet.
_		Thread reaped.

The other values are fairly self explanatory - number of threads
currently running and doing io, and the estimated completion percentage
and time for the running group. It's impossible to estimate runtime
of the following groups (if any).

When fio is done (or interrupted by ctrl-c), it will show the data for
each thread, group of threads, and disks in that order. For each data
direction, the output looks like:

Client1 (g=0): err= 0:
  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
    slat (msec): min=    0, max=  136, avg= 0.03, dev= 1.92
    clat (msec): min=    0, max=  631, avg=48.50, dev=86.82
    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, dev=681.68
  cpu        : usr=1.49%, sys=0.25%, ctx=7969

The client number is printed, along with the group id and error of that
thread. Below are the io statistics, here for writes. In the order listed,
they denote:

io=		Number of megabytes of io performed
bw=		Average bandwidth rate
runt=		The runtime of that thread
	slat=	Submission latency (avg being the average, dev being the
		standard deviation). This is the time it took to submit
		the io. For sync io, the slat is really the completion
		latency, since queue/complete is one operation there.
	clat=	Completion latency. Same names as slat, this denotes the
		time from submission to completion of the io pieces. For
		sync io, clat will usually be equal (or very close) to 0,
		as the time from submit to complete is basically just
		CPU time (io has already been done, see slat explanation).
	bw=	Bandwidth. Same names as the xlat stats, but also includes
		an approximate percentage of total aggregate bandwidth
		this thread received in this group. This last value is
		only really useful if the threads in this group are on the
		same disk, since they are then competing for disk access.
cpu=		CPU usage. User and system time, along with the number
		of context switches this thread went through.

After each client has been listed, the group statistics are printed. They
will look like this:

Run status group 0 (all jobs):
   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec

For each data direction, it prints:

io=		Number of megabytes of io performed.
aggrb=		Aggregate bandwidth of threads in this group.
minb=		The minimum average bandwidth a thread saw.
maxb=		The maximum average bandwidth a thread saw.
mint=		The smallest runtime of the threads in that group.
maxt=		The longest runtime of the threads in that group.

And finally, the disk statistics are printed. They will look like this:

Disk stats (read/write):
  sda: ios=16398/16511, merge=30/162, ticks=6853/819634, in_queue=826487, util=100.00%

Each value is printed for both reads and writes, with reads first. The
numbers denote:

ios=		Number of ios performed by all groups.
merge=		Number of merges performed by the io scheduler.
ticks=		Number of ticks we kept the disk busy.
in_queue=	Total time spent in the disk queue.
util=		The disk utilization. A value of 100% means we kept the disk
		busy constantly; 50% would be a disk idling half of the time.


7.0 Terse output
----------------

For scripted usage where you typically want to generate tables or graphs
of the results, fio can output the results in a comma separated format.
The format is one long line of values, such as:

client1,0,0,936,331,2894,0,0,0.000000,0.000000,1,170,22.115385,34.290410,16,714,84.252874%,366.500000,566.417819,3496,1237,2894,0,0,0.000000,0.000000,0,246,6.671625,21.436952,0,2534,55.465300%,1406.600000,2008.044216,0.000000%,0.431928%,1109

Split up, the format is as follows:

	jobname, groupid, error
	READ status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	WRITE status:
		KiB IO, bandwidth (KiB/sec), runtime (msec)
		Submission latency: min, max, mean, deviation
		Completion latency: min, max, mean, deviation
		Bw: min, max, aggregate percentage of total, mean, deviation
	CPU usage: user, system, context switches