Get rid of KiB vs KB distinction

It confuses more than it helps; drop the distinction and default to just
using KB, MB, etc.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
diff --git a/HOWTO b/HOWTO
index 5099c83..2d155aa 100644
--- a/HOWTO
+++ b/HOWTO
@@ -112,7 +112,7 @@
 '#', the entire line is discarded as a comment.
 
 So let's look at a really simple job file that defines two processes, each
-randomly reading from a 128MiB file.
+randomly reading from a 128MB file.
 
 ; -- start job file --
 [global]
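[Aside: for readers of this hunk, the complete two-process job file the text
describes is roughly the sketch below. The rw= and size= lines are standard
fio job-file options; the job section names are illustrative.]

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]
; -- end job file --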
@@ -150,9 +150,9 @@
 
 Here we have no global section, as we only have one job defined anyway.
 We want to use async io here, with a depth of 4 for each file. We also
-increased the buffer size used to 32KiB and define numjobs to 4 to
+increased the buffer size used to 32KB and define numjobs to 4 to
 fork 4 identical jobs. The result is 4 processes each randomly writing
-to their own 64MiB file. Instead of using the above job file, you could
+to their own 64MB file. Instead of using the above job file, you could
 have given the parameters on the command line. For this case, you would
 specify:
 
@@ -691,7 +691,7 @@
 		that for shmhuge and mmaphuge to work, the system must have
 		free huge pages allocated. This can normally be checked
 		and set by reading/writing /proc/sys/vm/nr_hugepages on a
-		Linux system. Fio assumes a huge page is 4MiB in size. So
+		Linux system. Fio assumes a huge page is 4MB in size. So
 		to calculate the number of huge pages you need for a given
 		job file, add up the io depth of all jobs (normally one unless
 		iodepth= is used) and multiply by the maximum bs set. Then
@@ -715,7 +715,7 @@
 
 hugepage-size=int
 		Defines the size of a huge page. Must at least be equal
-		to the system setting, see /proc/meminfo. Defaults to 4MiB.
+		to the system setting, see /proc/meminfo. Defaults to 4MB.
 		Should probably always be a multiple of megabytes, so using
 		hugepage-size=Xm is the preferred way to set this to avoid
 		setting a non-pow-2 bad value.
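[Aside: as a concrete instance of the Xm form, a job file could contain

	hugepage-size=4m

to match the 4MB default mentioned above; the particular value is only an
illustration.]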
@@ -1005,10 +1005,10 @@
 direction, the output looks like:
 
 Client1 (g=0): err= 0:
-  write: io=    32MiB, bw=   666KiB/s, runt= 50320msec
+  write: io=    32MB, bw=   666KB/s, runt= 50320msec
     slat (msec): min=    0, max=  136, avg= 0.03, stdev= 1.92
     clat (msec): min=    0, max=  631, avg=48.50, stdev=86.82
-    bw (KiB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
+    bw (KB/s) : min=    0, max= 1196, per=51.00%, avg=664.02, stdev=681.68
   cpu        : usr=1.49%, sys=0.25%, ctx=7969, majf=0, minf=17
   IO depths    : 1=0.1%, 2=0.3%, 4=0.5%, 8=99.0%, 16=0.0%, 32=0.0%, >32=0.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
@@ -1068,8 +1068,8 @@
 will look like this:
 
 Run status group 0 (all jobs):
-   READ: io=64MiB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
-  WRITE: io=64MiB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
+   READ: io=64MB, aggrb=22178, minb=11355, maxb=11814, mint=2840msec, maxt=2955msec
+  WRITE: io=64MB, aggrb=1302, minb=666, maxb=669, mint=50093msec, maxt=50320msec
 
 For each data direction, it prints:
 
@@ -1112,12 +1112,12 @@
 
 	jobname, groupid, error
 	READ status:
-		KiB IO, bandwidth (KiB/sec), runtime (msec)
+		KB IO, bandwidth (KB/sec), runtime (msec)
 		Submission latency: min, max, mean, deviation
 		Completion latency: min, max, mean, deviation
 		Bw: min, max, aggregate percentage of total, mean, deviation
 	WRITE status:
-		KiB IO, bandwidth (KiB/sec), runtime (msec)
+		KB IO, bandwidth (KB/sec), runtime (msec)
 		Submission latency: min, max, mean, deviation
 		Completion latency: min, max, mean, deviation
 		Bw: min, max, aggregate percentage of total, mean, deviation