Update in-tree Google Benchmark to current ToT.

I've put some work into the Google Benchmark library to make it easier to
benchmark libc++. These changes have already been upstreamed, and this patch
applies them to the in-tree version.

The main improvement is the addition of a 'compare_bench.py' script, which
makes it easy to compare two benchmark runs. For example, to compare the
native STL to libc++ you would run:

`$ compare_bench.py ./util_smartptr.native.out ./util_smartptr.libcxx.out`

And the output would look like:

RUNNING: ./util_smartptr.native.out
Benchmark                          Time           CPU Iterations
----------------------------------------------------------------
BM_SharedPtrCreateDestroy         62 ns         62 ns   10937500
BM_SharedPtrIncDecRef             31 ns         31 ns   23972603
BM_WeakPtrIncDecRef               28 ns         28 ns   23648649
RUNNING: ./util_smartptr.libcxx.out
Benchmark                          Time           CPU Iterations
----------------------------------------------------------------
BM_SharedPtrCreateDestroy         46 ns         46 ns   14957265
BM_SharedPtrIncDecRef             31 ns         31 ns   22435897
BM_WeakPtrIncDecRef               34 ns         34 ns   21084337
Comparing ./util_smartptr.native.out to ./util_smartptr.libcxx.out
Benchmark                          Time           CPU
-----------------------------------------------------
BM_SharedPtrCreateDestroy         -0.26         -0.26
BM_SharedPtrIncDecRef             +0.00         +0.00
BM_WeakPtrIncDecRef               +0.21         +0.21
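
The Time and CPU columns of the comparison are relative changes rather than
absolute values, so a negative number means the second binary (libc++ above)
was faster for that benchmark. A minimal sketch of the arithmetic, assuming
the report divides the delta by the first run's measurement:

    # Sketch only; the real gbench.report also builds the formatted table.
    def relative_change(old_ns, new_ns):
        # -0.26 => the second run took roughly 26% less time than the first.
        return float(new_ns - old_ns) / abs(old_ns)

    print(relative_change(62, 46))  # BM_SharedPtrCreateDestroy: ~ -0.26

Any arguments given after the two test binaries are forwarded to both
benchmark invocations, so the usual Google Benchmark flags can be used to
narrow a comparison, e.g. `--benchmark_filter=BM_SharedPtr` to run only the
shared_ptr benchmarks (binary names reused from the example above).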

llvm-svn: 278147
diff --git a/libcxx/utils/google-benchmark/tools/compare_bench.py b/libcxx/utils/google-benchmark/tools/compare_bench.py
new file mode 100644
index 0000000..ed0f133e
--- /dev/null
+++ b/libcxx/utils/google-benchmark/tools/compare_bench.py
@@ -0,0 +1,30 @@
+#!/usr/bin/env python
+"""
+compare_bench.py - Compare two benchmarks or their results and report the
+                   difference.
+"""
+import sys
+import gbench
+from gbench import util, report
+
+def main():
+    # Parse the command line flags
+    def usage():
+        print('compare_bench.py <test1> <test2> [benchmark options]...')
+        exit(1)
+    if '--help' in sys.argv or len(sys.argv) < 3:
+        usage()
+    tests = sys.argv[1:3]
+    bench_opts = sys.argv[3:]
+    bench_opts = list(bench_opts)
+    # Run the benchmarks and report the results
+    json1 = gbench.util.run_or_load_benchmark(tests[0], bench_opts)
+    json2 = gbench.util.run_or_load_benchmark(tests[1], bench_opts)
+    output_lines = gbench.report.generate_difference_report(json1, json2)
+    print('Comparing %s to %s' % (tests[0], tests[1]))
+    for ln in output_lines:
+        print(ln)
+
+
+if __name__ == '__main__':
+    main()