Add bench.py, a driver script to run some benchmarks on lldb.
Also add benchmarks for expression evaluation (TestExpressionCmd.py) and disassembly (TestDoAttachThenDisassembly.py).

An example:
[17:45:55] johnny:/Volumes/data/lldb/svn/trunk/test $ ./bench.py 2>&1 | grep -P '^lldb.*benchmark:'
lldb startup delay (create fresh target) benchmark: Avg: 0.104274 (Laps: 30, Total Elapsed Time: 3.128214)
lldb startup delay (set first breakpoint) benchmark: Avg: 0.102216 (Laps: 30, Total Elapsed Time: 3.066470)
lldb frame variable benchmark: Avg: 1.649162 (Laps: 20, Total Elapsed Time: 32.983245)
lldb stepping benchmark: Avg: 0.104409 (Laps: 50, Total Elapsed Time: 5.220461)
lldb expr cmd benchmark: Avg: 0.206774 (Laps: 25, Total Elapsed Time: 5.169350)
lldb disassembly benchmark: Avg: 0.089086 (Laps: 10, Total Elapsed Time: 0.890859)
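
Note that 'grep -P' (Perl regexp) is not available everywhere (BSD grep on Mac
OS X, for example); an equivalent Python filter can be piped in instead.  A
minimal sketch, assuming the 'lldb ... benchmark:' output format shown above
(filter_bench.py is a hypothetical helper, not part of this patch):

    #!/usr/bin/env python
    # filter_bench.py: echo only the benchmark result lines from stdin.
    import re
    import sys

    pattern = re.compile(r'^lldb.*benchmark:')
    for line in sys.stdin:
        if pattern.match(line):
            sys.stdout.write(line)

It would be invoked as: ./bench.py 2>&1 | ./filter_bench.py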

llvm-svn: 142708
diff --git a/lldb/test/bench.py b/lldb/test/bench.py
new file mode 100755
index 0000000..634fb18
--- /dev/null
+++ b/lldb/test/bench.py
@@ -0,0 +1,48 @@
+#!/usr/bin/env python
+
+"""
+A simple bench runner which delegates to the ./dotest.py test driver to run the
+benchmarks defined in the list named 'benches'.
+
+You need to hand-edit 'benches' to add or modify the command lines passed to
+the test driver.
+
+Use the following to get only the benchmark results in your terminal output:
+
+    ./bench.py 2>&1 | grep -P '^lldb.*benchmark:'
+"""
+
+import os, sys
+import re
+
+# A dotest.py invocation with no '-e exe-path' option uses lldb itself as the
+# inferior program, unless a custom executable is specified for the test.
+benches = [
+    # Measure startup delays creating a target and setting a breakpoint at main.
+    './dotest.py -v +b -n -p TestStartupDelays.py',
+
+    # Measure 'frame variable' response after stopping at Driver::MainLoop().
+    './dotest.py -v +b -x "-F Driver::MainLoop()" -n -p TestFrameVariableResponse.py',
+
+    # Measure stepping speed after stopping at Driver::MainLoop().
+    './dotest.py -v +b -x "-F Driver::MainLoop()" -n -p TestSteppingSpeed.py',
+
+    # Measure expression cmd response with a simple custom executable program.
+    './dotest.py +b -n -p TestExpressionCmd.py',
+
+    # Attach to a spawned lldb process then run disassembly benchmarks.
+    './dotest.py -v +b -n -p TestDoAttachThenDisassembly.py'
+]
+
+def main():
+    """Read the items from 'benches' and run the command line one by one."""
+    print "Starting bench runner...."
+
+    for command in benches:
+        print "Running %s" % (command)
+        os.system(command)
+
+    print "Bench runner done."
+
+if __name__ == '__main__':
+    main()
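
Adding a benchmark is just a matter of appending another dotest.py command
line to 'benches'.  A minimal sketch, reusing only the options shown above
(TestMyNewBenchmark.py is a hypothetical test file):

    # Stop at main() first, then run the new benchmark.
    './dotest.py -v +b -x "-F main" -n -p TestMyNewBenchmark.py',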