camera2: Move ITS tests to CTS verifier.

Bug: 17994909

Change-Id: Ie788b1ae3a6b079e37a3472c46aed3dfdcfffe2c
diff --git a/apps/CameraITS/.gitignore b/apps/CameraITS/.gitignore
new file mode 100644
index 0000000..259969b
--- /dev/null
+++ b/apps/CameraITS/.gitignore
@@ -0,0 +1,11 @@
+# Ignore files that are created as a result of running the ITS tests.
+
+*.json
+*.yuv
+*.jpg
+*.jpeg
+*.png
+*.pyc
+its.target.cfg
+.DS_Store
+
diff --git a/apps/CameraITS/Android.mk b/apps/CameraITS/Android.mk
new file mode 100644
index 0000000..f91a2e7
--- /dev/null
+++ b/apps/CameraITS/Android.mk
@@ -0,0 +1,28 @@
+# Copyright (C) 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+its-dir-name := CameraITS
+its-dir := $(HOST_OUT)/$(its-dir-name)
+
+camera-its: $(its-dir)
+
+.PHONY: camera-its
+
+$(its-dir):
+	echo $(its-dir)
+	mkdir -p $(its-dir)
+	$(ACP) -rfp cts/apps/$(its-dir-name)/* $(its-dir)
+	rm $(its-dir)/Android.mk
+
diff --git a/apps/CameraITS/README b/apps/CameraITS/README
new file mode 100644
index 0000000..c41536a
--- /dev/null
+++ b/apps/CameraITS/README
@@ -0,0 +1,292 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+Android Camera Imaging Test Suite (ITS)
+=======================================
+
+1. Introduction
+---------------
+
+The ITS is a framework for running tests on the images produced by an Android
+camera. The general goal of each test is to configure the camera in a desired
+manner and capture one or more shots, and then examine the shots to see if
+they contain the expected image data. Many of the tests require that the
+camera be pointed at a specific target chart or that the scene be illuminated
+at a specific intensity.
+
+2. Setup
+--------
+
+There are two components to the ITS:
+1. The Android device running CtsVerifier.apk.
+2. A host machine connected to the Android device that runs Python tests.
+
+2.1. Device setup
+-----------------
+
+Build and install CtsVerifier.apk for your device. Along with the regular
+CTS verifier tests, this includes the ITS service used by the ITS Python
+scripts to capture image data. After setting up your shell for Android builds,
+run the following commands:
+
+    cd <BUILD_PATH>/cts/apps/CtsVerifier
+    mma -j32
+    adb install -r <YOUR_OUTPUT_PATH>/CtsVerifier.apk
+
+using whatever path is appropriate to your output CtsVerifier.apk file.
+
+Note: If you are installing this from the CTS Verifier bundle
+(android-cts-verifier.zip) rather than from a full Android source tree, both
+the CtsVerifier.apk and CameraITS directory can be found at the top level of
+the zipped android-cts-verifier folder.
+
+2.2. Host PC setup
+------------------
+
+The first prerequisite is the Android SDK, since adb is used to communicate
+with the device.
+
+The test framework is based on Python on the host machine. It requires
+Python 2.7 and the scipy/numpy stack, including the Python Imaging Library.
+
+(For Ubuntu users)
+
+    sudo apt-get install python-numpy python-scipy python-matplotlib
+
+(For other users)
+
+All of these pieces can be installed on your host machine separately;
+however, it is highly recommended to install a bundled distribution of
+Python that comes with these modules. Several such bundles are listed
+here:
+
+    http://www.scipy.org/install.html
+
+Of these, Anaconda has been verified to work with these scripts, and it is
+available on Mac, Linux, and Windows from here:
+
+    http://continuum.io/downloads
+
+Note that the Anaconda python executable's directory must be at the front of
+your PATH environment variable, assuming that you are using this Python
+distribution. The Anaconda installer may set this up for you automatically.
+
+Once your Python installation is ready, set up the test environment.
+
+2.2.1. Linux + Mac OS X
+-----------------------
+
+On Linux or Mac OS X, run the following command from a bash shell in the
+cts/apps/CameraITS directory:
+
+    source build/envsetup.sh
+
+This will do some basic sanity checks on your Python installation, and set up
+the PYTHONPATH environment variable.
+
+2.2.2. Windows
+--------------
+
+On Windows, the bash script won't run (unless you have Cygwin, which has not
+been tested). Instead, set the PYTHONPATH environment variable in your shell
+to the absolute path of the cts/apps/CameraITS/pymodules directory. Without
+this, you'll get "import" errors when running the test scripts.
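+
+For example, in a Windows command prompt (the path here is illustrative):
+
+    set PYTHONPATH=C:\android\cts\apps\CameraITS\pymodules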
+
+3. Python framework overview
+----------------------------
+
+The Python modules are under the pymodules directory, in the "its" package.
+
+* its.device: encapsulates communication with the ITS service included in the
+  CtsVerifier.apk running on the device
+* its.objects: contains a collection of functions for creating Python objects
+  corresponding to the Java objects which the ITS service uses
+* its.image: contains a collection of functions (built on numpy arrays) for
+  processing captured images
+* its.error: the exception/error class used in this framework
+* its.target: functions to set and measure the exposure level to use for
+  manual shots in tests, to ensure that the images are exposed well for the
+  target scene
+* its.dng: functions to work with DNG metadata
+
+All of these modules have associated unit tests; to run the unit tests, execute
+the modules (rather than importing them).
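+
+For example, from the top-level CameraITS directory:
+
+    python pymodules/its/image.py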
+
+3.1. Device control
+-------------------
+
+The its.device.ItsSession class encapsulates a session with a connected device
+under test (which is running CtsVerifier.apk). The session is over TCP, which is
+forwarded over adb.
+
+As an overview, the ItsSession.do_capture() function takes a Python dictionary
+object as an argument, converts it to JSON, and sends it to the device over
+TCP. The device deserializes the JSON into the Camera2 Java objects
+(CaptureRequests) that specify one or more captures. Once the captures are
+complete, the resultant images are copied back to the host machine (again over
+TCP), along with JSON representations of the CaptureResult and other objects
+describing the shots that were actually taken.
+
+The Python capture request object(s) can contain key/value entries corresponding
+to any of the Java CaptureRequest object fields.
+
+The output surface's width, height, and format can also be specified. Currently
+supported formats are "jpg", "raw", "raw10", "dng", and "yuv", where "yuv" is
+YUV420 fully planar. The default output surface is a full sensor YUV420 frame.
+
+The metadata that is returned along with the captured images is also in JSON
+format, serialized from the CaptureRequest and CaptureResult objects that were
+passed to the capture listener, as well as the CameraProperties object.
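+
+As a minimal sketch (the request values and output filename here are
+illustrative), a test script might drive a capture as follows:
+
+    import its.device
+    import its.image
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        req = {"android.sensor.exposureTime": 50*1000*1000,  # ns (50ms)
+               "android.sensor.sensitivity": 100}
+        cap = cam.do_capture(req)
+        img = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(img, "capture.png")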
+
+3.2. Image processing and analysis
+----------------------------------
+
+The its.image module is a collection of Python functions, built on top of numpy
+arrays, for manipulating captured images. Some functions of note include:
+
+    load_yuv420_to_rgb_image
+    apply_lut_to_image
+    apply_matrix_to_image
+    write_image
+
+The scripts in the tests directory make use of these modules.
+
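+For example, a sketch of a typical post-processing sequence for a capture
+object cap returned by its.device (the gamma LUT used here is illustrative):
+
+    import its.image
+
+    rgb = its.image.convert_capture_to_rgb_image(cap)
+    rgb = its.image.apply_lut_to_image(rgb, its.image.DEFAULT_GAMMA_LUT)
+    its.image.write_image(rgb, "out.png")
+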
+Note that it's important to do heavy image processing using the efficient numpy
+ndarray operations, rather than writing complex loops in standard Python to
+process pixels. Refer to online docs and examples of numpy for information on
+this.
+
+3.3. Tests
+----------
+
+The tests directory contains a number of self-contained test scripts. All
+tests should pass if the tree is in a good state.
+
+Most of the tests save various files in the current directory. To have all the
+output files put into a separate directory, run the script from that directory,
+for example:
+
+    mkdir out
+    cd out
+    python ../tests/scene1/test_linearity.py
+
+Any test can be made to reboot the device prior to capturing any shots, by
+adding a "reboot" or "reboot=N" command line argument, where N is the number of
+seconds to wait after rebooting the device before sending any commands; the
+default is 30 seconds.
+
+    python tests/scene1/test_linearity.py reboot
+    python tests/scene1/test_linearity.py reboot=20
+
+It's possible that a test could leave the camera in a bad state, in particular
+if there are any bugs in the HAL or the camera framework. Rebooting the device
+can be used to get it into a known clean state again.
+
+Each test assumes some sort of target or scene. There are multiple scene<N>
+folders under the tests directory, and each contains a README file which
+describes the scene for the scripts in that folder.
+
+By default, camera device id=0 is opened when the script connects to the unit;
+a different camera can be selected by adding a "camera=1" (or similar)
+argument to the script command line. On a typical device, camera=0 is the
+main (rear) camera, and camera=1 is the front-facing camera.
+
+    python tests/scene1/test_linearity.py camera=1
+
+The tools/run_all_tests.py script should be executed from the top-level
+CameraITS directory, and it will run all of the tests in an automated fashion,
+saving the generated output files along with the stdout and stderr dumps to
+a temporary directory.
+
+    python tools/run_all_tests.py
+
+This can be run with the "noinit" argument, and in general any args provided
+to this command line will be passed to each script as it is executed.
+
+The tests/inprog directory contains a mix of unfinished and in-progress
+tests. These may or may not be useful in testing a HAL implementation, and
+as these tests are completed they will be moved into the scene<N> folders.
+
+When running individual tests from the command line (as in the examples here),
+each test run will ensure that the ITS service is running on the device and is
+ready to accept TCP connections. When using a separate test harness to control
+this infrastructure, the "noinit" command line argument can be provided to
+skip this step; in this case, the test will just try to open a socket to the
+service on the device, and will fail if it's not running and ready.
+
+    python tests/scene1/test_linearity.py noinit
+
+3.4. Target exposure
+--------------------
+
+The tools/config.py script is a wrapper for the its.target module, which is
+used to set an exposure level based on the scene that the camera is imaging.
+The purpose of this is to be able to have tests which use hard-coded manual
+exposure controls, while at the same time ensuring that the captured images
+are properly exposed for the test (and aren't clamped to white or black).
+
+If no argument is provided, the script will use the camera to measure the
+scene to determine the exposure level. An argument can be provided to
+hard-code the exposure level.
+
+    python tools/config.py
+    python tools/config.py 16531519962
+
+This creates a file named its.target.cfg in the current directory, storing the
+target exposure level. Tests that use the its.target module will reuse this
+value if they are run from the same directory and if the "target" command line
+argument is given:
+
+    python tests/scene1/test_linearity.py target
+
+If the "target" argument isn't present, then the script won't use any cached
+its.target.cfg values that may be present in the current directory.
+
+3.5. Docs
+---------
+
+The pydoc tool can generate HTML docs for the ITS Python modules, using the
+following command (run after PYTHONPATH has been set up as described above):
+
+    pydoc -w its its.device its.image its.error its.objects its.dng its.target
+
+There is a tutorial script in the tests folder (named tutorial.py). It
+illustrates a number of the its.image and its.device primitives, and shows
+how to work with image data in general using this infrastructure. (Its code
+is commented with explanatory remarks.)
+
+    python tests/tutorial.py
+
+3.6. List of command line args
+------------------------------
+
+The doc sections above describe the following command line arguments that may
+be provided when running a test:
+
+    reboot
+    reboot=N
+    target
+    noinit
+    camera=N
+
+4. Known issues
+---------------
+
+The Python test scripts don't work if multiple devices are connected to the
+host machine; currently, the its.device module uses a simplistic "adb -d"
+approach to communicating with the device, assuming that there is only one
+device connected. Fixing this is a TODO.
+
diff --git a/apps/CameraITS/build/envsetup.sh b/apps/CameraITS/build/envsetup.sh
new file mode 100644
index 0000000..a95c445
--- /dev/null
+++ b/apps/CameraITS/build/envsetup.sh
@@ -0,0 +1,45 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file should be sourced from bash. It sets environment variables for
+# running tests, and also checks that a number of dependencies are present
+# and that the unit tests for the modules pass (indicating that the setup
+# is correct).
+
+[[ "${BASH_SOURCE[0]}" != "${0}" ]] || \
+    { echo ">> Script must be sourced with 'source $0'" >&2; exit 1; }
+
+command -v adb >/dev/null 2>&1 || \
+    echo ">> Require adb executable to be in path" >&2
+
+command -v python >/dev/null 2>&1 || \
+    echo ">> Require python executable to be in path" >&2
+
+python -V 2>&1 | grep -q "Python 2.7" || \
+    echo ">> Require python 2.7" >&2
+
+for M in numpy PIL Image matplotlib pylab
+do
+    python -c "import $M" >/dev/null 2>&1 || \
+        echo ">> Require Python $M module" >&2
+done
+
+export PYTHONPATH="$PWD/pymodules:$PYTHONPATH"
+
+for M in device objects image caps dng target error
+do
+    python "pymodules/its/$M.py" 2>&1 | grep -q "OK" || \
+        echo ">> Unit test for $M failed" >&2
+done
+
diff --git a/apps/CameraITS/pymodules/its/__init__.py b/apps/CameraITS/pymodules/its/__init__.py
new file mode 100644
index 0000000..59058be
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/__init__.py
@@ -0,0 +1,14 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
diff --git a/apps/CameraITS/pymodules/its/caps.py b/apps/CameraITS/pymodules/its/caps.py
new file mode 100644
index 0000000..6caebc0
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/caps.py
@@ -0,0 +1,162 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import its.objects
+
+def full(props):
+    """Returns whether a device is a FULL capability camera2 device.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return props.has_key("android.info.supportedHardwareLevel") and \
+           props["android.info.supportedHardwareLevel"] == 1
+
+def limited(props):
+    """Returns whether a device is a LIMITED capability camera2 device.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return props.has_key("android.info.supportedHardwareLevel") and \
+           props["android.info.supportedHardwareLevel"] == 0
+
+def legacy(props):
+    """Returns whether a device is a LEGACY capability camera2 device.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return props.has_key("android.info.supportedHardwareLevel") and \
+           props["android.info.supportedHardwareLevel"] == 2
+
+def manual_sensor(props):
+    """Returns whether a device supports MANUAL_SENSOR capabilities.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return (props.has_key("android.request.availableCapabilities") and
+            1 in props["android.request.availableCapabilities"]) \
+           or full(props)
+
+def manual_post_proc(props):
+    """Returns whether a device supports MANUAL_POST_PROCESSING capabilities.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return (props.has_key("android.request.availableCapabilities") and
+            2 in props["android.request.availableCapabilities"]) \
+           or full(props)
+
+def raw(props):
+    """Returns whether a device supports RAW capabilities.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return props.has_key("android.request.availableCapabilities") and \
+           3 in props["android.request.availableCapabilities"]
+
+def raw16(props):
+    """Returns whether a device supports RAW16 output.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return len(its.objects.get_available_output_sizes("raw", props)) > 0
+
+def raw10(props):
+    """Returns whether a device supports RAW10 output.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return len(its.objects.get_available_output_sizes("raw10", props)) > 0
+
+def sensor_fusion(props):
+    """Returns whether the camera and motion sensor timestamps for the device
+    are in the same time domain and can be compared directly.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return props.has_key("android.sensor.info.timestampSource") and \
+           props["android.sensor.info.timestampSource"] == 1
+
+def read_3a(props):
+    """Return whether a device supports reading out the following 3A settings:
+        sensitivity
+        exposure time
+        awb gain
+        awb cct
+        focus distance
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    # TODO: check available result keys explicitly
+    return manual_sensor(props) and manual_post_proc(props)
+
+def compute_target_exposure(props):
+    """Return whether a device supports target exposure computation in its.target module.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        Boolean.
+    """
+    return manual_sensor(props) and manual_post_proc(props)
+
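+# Example (hypothetical) gating pattern for a test script: given a "cam"
+# ItsSession and imports of sys and its.caps, a test can skip itself when a
+# required capability is absent:
+#
+#     props = cam.get_camera_properties()
+#     if not its.caps.manual_sensor(props):
+#         print "Test skipped: MANUAL_SENSOR is not supported"
+#         sys.exit(0)
+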
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+    # TODO: Add more unit tests.
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/pymodules/its/device.py b/apps/CameraITS/pymodules/its/device.py
new file mode 100644
index 0000000..51590a9
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/device.py
@@ -0,0 +1,532 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.error
+import os
+import os.path
+import sys
+import re
+import json
+import time
+import unittest
+import socket
+import subprocess
+import hashlib
+import numpy
+
+class ItsSession(object):
+    """Controls a device over adb to run ITS scripts.
+
+    The script importing this module (on the host machine) prepares JSON
+    objects encoding CaptureRequests, specifying sets of parameters to use
+    when capturing an image using the Camera2 APIs. This class encapsulates
+    sending the requests to the device, monitoring the device's progress, and
+    copying the resultant captures back to the host machine when done. TCP
+    forwarded over adb is the transport mechanism used.
+
+    The device must have CtsVerifier.apk installed.
+
+    Attributes:
+        sock: The open socket.
+    """
+
+    # Open a connection to localhost:6000, forwarded to port 6000 on the device.
+    # TODO: Support multiple devices running over different TCP ports.
+    IPADDR = '127.0.0.1'
+    PORT = 6000
+    BUFFER_SIZE = 4096
+
+    # Seconds timeout on each socket operation.
+    SOCK_TIMEOUT = 10.0
+
+    PACKAGE = 'com.android.cts.verifier.camera.its'
+    INTENT_START = 'com.android.cts.verifier.camera.its.START'
+    ACTION_ITS_RESULT = 'com.android.cts.verifier.camera.its.ACTION_ITS_RESULT'
+    EXTRA_SUCCESS = 'camera.its.extra.SUCCESS'
+
+    # TODO: Handle multiple connected devices.
+    ADB = "adb -d"
+
+    # Definitions for some of the common output format options for do_capture().
+    # Each gets images of full resolution for each requested format.
+    CAP_RAW = {"format":"raw"}
+    CAP_DNG = {"format":"dng"}
+    CAP_YUV = {"format":"yuv"}
+    CAP_JPEG = {"format":"jpeg"}
+    CAP_RAW_YUV = [{"format":"raw"}, {"format":"yuv"}]
+    CAP_DNG_YUV = [{"format":"dng"}, {"format":"yuv"}]
+    CAP_RAW_JPEG = [{"format":"raw"}, {"format":"jpeg"}]
+    CAP_DNG_JPEG = [{"format":"dng"}, {"format":"jpeg"}]
+    CAP_YUV_JPEG = [{"format":"yuv"}, {"format":"jpeg"}]
+    CAP_RAW_YUV_JPEG = [{"format":"raw"}, {"format":"yuv"}, {"format":"jpeg"}]
+    CAP_DNG_YUV_JPEG = [{"format":"dng"}, {"format":"yuv"}, {"format":"jpeg"}]
+
+    # Method to handle the case where the service isn't already running.
+    # This occurs when a test is invoked directly from the command line, rather
+    # than as a part of a separate test harness which is setting up the device
+    # and the TCP forwarding.
+    def __pre_init(self):
+
+        # This also includes the optional reboot handling: if the user
+        # provides a "reboot" or "reboot=N" arg, then reboot the device,
+        # waiting for N seconds (default 30) before returning.
+        for s in sys.argv[1:]:
+            if s[:6] == "reboot":
+                duration = 30
+                if len(s) > 7 and s[6] == "=":
+                    duration = int(s[7:])
+                print "Rebooting device"
+                _run("%s reboot" % (ItsSession.ADB));
+                _run("%s wait-for-device" % (ItsSession.ADB))
+                time.sleep(duration)
+                print "Reboot complete"
+
+        # TODO: Figure out why "--user 0" is needed, and fix the problem.
+        _run('%s shell am force-stop --user 0 %s' % (ItsSession.ADB, self.PACKAGE))
+        _run(('%s shell am startservice --user 0 -t text/plain '
+              '-a %s') % (ItsSession.ADB, self.INTENT_START))
+
+        # Wait until the socket is ready to accept a connection.
+        proc = subprocess.Popen(
+                ItsSession.ADB.split() + ["logcat"],
+                stdout=subprocess.PIPE)
+        logcat = proc.stdout
+        while True:
+            line = logcat.readline().strip()
+            if line.find('ItsService ready') >= 0:
+                break
+        proc.kill()
+
+        # Setup the TCP-over-ADB forwarding.
+        _run('%s forward tcp:%d tcp:%d' % (ItsSession.ADB,self.PORT,self.PORT))
+
+    def __init__(self):
+        if "noinit" not in sys.argv:
+            self.__pre_init()
+        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.sock.connect((self.IPADDR, self.PORT))
+        self.sock.settimeout(self.SOCK_TIMEOUT)
+        self.__close_camera()
+        self.__open_camera()
+
+    def __del__(self):
+        if hasattr(self, 'sock') and self.sock:
+            self.__close_camera()
+            self.sock.close()
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, type, value, traceback):
+        return False
+
+    def __read_response_from_socket(self):
+        # Read a line (newline-terminated) string serialization of JSON object.
+        chars = []
+        while len(chars) == 0 or chars[-1] != '\n':
+            ch = self.sock.recv(1)
+            if len(ch) == 0:
+                # Socket was probably closed; otherwise don't get empty strings
+                raise its.error.Error('Problem with socket on device side')
+            chars.append(ch)
+        line = ''.join(chars)
+        jobj = json.loads(line)
+        # Optionally read a binary buffer of a fixed size.
+        buf = None
+        if jobj.has_key("bufValueSize"):
+            n = jobj["bufValueSize"]
+            buf = bytearray(n)
+            view = memoryview(buf)
+            while n > 0:
+                nbytes = self.sock.recv_into(view, n)
+                view = view[nbytes:]
+                n -= nbytes
+            buf = numpy.frombuffer(buf, dtype=numpy.uint8)
+        return jobj, buf
+
+    def __open_camera(self):
+        # Get the camera ID to open as an argument.
+        camera_id = 0
+        for s in sys.argv[1:]:
+            if s[:7] == "camera=" and len(s) > 7:
+                camera_id = int(s[7:])
+        cmd = {"cmdName":"open", "cameraId":camera_id}
+        self.sock.send(json.dumps(cmd) + "\n")
+        data,_ = self.__read_response_from_socket()
+        if data['tag'] != 'cameraOpened':
+            raise its.error.Error('Invalid command response')
+
+    def __close_camera(self):
+        cmd = {"cmdName":"close"}
+        self.sock.send(json.dumps(cmd) + "\n")
+        data,_ = self.__read_response_from_socket()
+        if data['tag'] != 'cameraClosed':
+            raise its.error.Error('Invalid command response')
+
+    def do_vibrate(self, pattern):
+        """Cause the device to vibrate to a specific pattern.
+
+        Args:
+            pattern: Durations (ms) for which to turn on or off the vibrator.
+                The first value indicates the number of milliseconds to wait
+                before turning the vibrator on. The next value indicates the
+                number of milliseconds for which to keep the vibrator on
+                before turning it off. Subsequent values alternate between
+                durations in milliseconds to turn the vibrator off or to turn
+                the vibrator on.
+
+        Returns:
+            Nothing.
+        """
+        cmd = {}
+        cmd["cmdName"] = "doVibrate"
+        cmd["pattern"] = pattern
+        self.sock.send(json.dumps(cmd) + "\n")
+        data,_ = self.__read_response_from_socket()
+        if data['tag'] != 'vibrationStarted':
+            raise its.error.Error('Invalid command response')
+
+    def start_sensor_events(self):
+        """Start collecting sensor events on the device.
+
+        See get_sensor_events for more info.
+
+        Returns:
+            Nothing.
+        """
+        cmd = {}
+        cmd["cmdName"] = "startSensorEvents"
+        self.sock.send(json.dumps(cmd) + "\n")
+        data,_ = self.__read_response_from_socket()
+        if data['tag'] != 'sensorEventsStarted':
+            raise its.error.Error('Invalid command response')
+
+    def get_sensor_events(self):
+        """Get a trace of all sensor events on the device.
+
+        The trace starts when the start_sensor_events function is called. If
+        the test runs for a long time after this call, then the device's
+        internal memory can fill up. Calling get_sensor_events gets all events
+        from the device, and then stops the device from collecting events and
+        clears the internal buffer; to start again, the start_sensor_events
+        call must be used again.
+
+        Events from the accelerometer, compass, and gyro are returned; each
+        has a timestamp and x,y,z values.
+
+        Note that sensor events are only produced if the device isn't in its
+        standby mode (i.e., if the screen is on).
+
+        Returns:
+            A Python dictionary with three keys ("accel", "mag", "gyro") each
+            of which maps to a list of objects containing "time","x","y","z"
+            keys.
+        """
+        cmd = {}
+        cmd["cmdName"] = "getSensorEvents"
+        self.sock.send(json.dumps(cmd) + "\n")
+        data,_ = self.__read_response_from_socket()
+        if data['tag'] != 'sensorEvents':
+            raise its.error.Error('Invalid command response')
+        return data['objValue']
+
+    def get_camera_properties(self):
+        """Get the camera properties object for the device.
+
+        Returns:
+            The Python dictionary object for the CameraProperties object.
+        """
+        cmd = {}
+        cmd["cmdName"] = "getCameraProperties"
+        self.sock.send(json.dumps(cmd) + "\n")
+        data,_ = self.__read_response_from_socket()
+        if data['tag'] != 'cameraProperties':
+            raise its.error.Error('Invalid command response')
+        return data['objValue']['cameraProperties']
+
+    def do_3a(self, regions_ae=[[0,0,1,1,1]],
+                    regions_awb=[[0,0,1,1,1]],
+                    regions_af=[[0,0,1,1,1]],
+                    do_ae=True, do_awb=True, do_af=True,
+                    lock_ae=False, lock_awb=False,
+                    get_results=False):
+        """Perform a 3A operation on the device.
+
+        Triggers some or all of AE, AWB, and AF, and returns once they have
+        converged. Uses the vendor 3A that is implemented inside the HAL.
+
+        Throws an assertion if 3A fails to converge.
+
+        Args:
+            regions_ae: List of weighted AE regions.
+            regions_awb: List of weighted AWB regions.
+            regions_af: List of weighted AF regions.
+            do_ae: Trigger AE and wait for it to converge.
+            do_awb: Wait for AWB to converge.
+            do_af: Trigger AF and wait for it to converge.
+            lock_ae: Request AE lock after convergence, and wait for it.
+            lock_awb: Request AWB lock after convergence, and wait for it.
+            get_results: Return the 3A results from this function.
+
+        Region format in args:
+            Arguments are lists of weighted regions; each weighted region is a
+            list of 5 values, [x,y,w,h, wgt], and each argument is a list of
+            these 5-value lists. The coordinates are given as normalized
+            rectangles (x,y,w,h) specifying the region. For example:
+                [[0.0, 0.0, 1.0, 0.5, 5], [0.0, 0.5, 1.0, 0.5, 10]].
+            Weights are non-negative integers.
+
+        Returns:
+            Five values are returned if get_results is true:
+            * AE sensitivity; None if do_ae is False
+            * AE exposure time; None if do_ae is False
+            * AWB gains (list); None if do_awb is False
+            * AWB transform (list); None if do_awb is False
+            * AF focus position; None if do_af is False
+            Otherwise, five None values are returned.
+        """
+        print "Running vendor 3A on device"
+        cmd = {}
+        cmd["cmdName"] = "do3A"
+        cmd["regions"] = {"ae": sum(regions_ae, []),
+                          "awb": sum(regions_awb, []),
+                          "af": sum(regions_af, [])}
+        cmd["triggers"] = {"ae": do_ae, "af": do_af}
+        if lock_ae:
+            cmd["aeLock"] = True
+        if lock_awb:
+            cmd["awbLock"] = True
+        self.sock.send(json.dumps(cmd) + "\n")
+
+        # Wait for each specified 3A to converge.
+        ae_sens = None
+        ae_exp = None
+        awb_gains = None
+        awb_transform = None
+        af_dist = None
+        converged = False
+        while True:
+            data,_ = self.__read_response_from_socket()
+            vals = data['strValue'].split()
+            if data['tag'] == 'aeResult':
+                ae_sens, ae_exp = [int(i) for i in vals]
+            elif data['tag'] == 'afResult':
+                af_dist = float(vals[0])
+            elif data['tag'] == 'awbResult':
+                awb_gains = [float(f) for f in vals[:4]]
+                awb_transform = [float(f) for f in vals[4:]]
+            elif data['tag'] == '3aConverged':
+                converged = True
+            elif data['tag'] == '3aDone':
+                break
+            else:
+                raise its.error.Error('Invalid command response')
+        if converged and not get_results:
+            return None,None,None,None,None
+        if ((do_ae and ae_sens is None) or (do_awb and awb_gains is None)
+                or (do_af and af_dist is None) or not converged):
+            raise its.error.Error('3A failed to converge')
+        return ae_sens, ae_exp, awb_gains, awb_transform, af_dist
+
+    def do_capture(self, cap_request, out_surfaces=None):
+        """Issue capture request(s), and read back the image(s) and metadata.
+
+        The main top-level function for capturing one or more images using the
+        device. Captures a single image if cap_request is a single object, and
+        captures a burst if it is a list of objects.
+
+        The out_surfaces field can specify the width(s), height(s), and
+        format(s) of the captured image. The formats may be "yuv", "jpeg",
+        "dng", "raw", or "raw10". The default is a YUV420 frame ("yuv")
+        corresponding to a full sensor frame.
+
+        Note that one or more surfaces can be specified, allowing a capture to
+        request images back in multiple formats (e.g. raw+yuv, raw+jpeg,
+        yuv+jpeg, or raw+yuv+jpeg). If the size is omitted for a surface, the
+        default is the largest resolution available for the format of that
+        surface. At most one output surface can be specified for a given format,
+        and raw+dng, raw10+dng, and raw+raw10 are not supported as combinations.
+
+        Example of a single capture request:
+
+            {
+                "android.sensor.exposureTime": 100*1000*1000,
+                "android.sensor.sensitivity": 100
+            }
+
+        Example of a list of capture requests:
+
+            [
+                {
+                    "android.sensor.exposureTime": 100*1000*1000,
+                    "android.sensor.sensitivity": 100
+                },
+                {
+                    "android.sensor.exposureTime": 100*1000*1000,
+                    "android.sensor.sensitivity": 200
+                }
+            ]
+
+        Examples of output surface specifications:
+
+            {
+                "width": 640,
+                "height": 480,
+                "format": "yuv"
+            }
+
+            [
+                {
+                    "format": "jpeg"
+                },
+                {
+                    "format": "raw"
+                }
+            ]
+
+        The following variables defined in this class are shortcuts for
+        specifying one or more formats where each output is the full size for
+        that format; they can be used as values for the out_surfaces arguments:
+
+            CAP_RAW
+            CAP_DNG
+            CAP_YUV
+            CAP_JPEG
+            CAP_RAW_YUV
+            CAP_DNG_YUV
+            CAP_RAW_JPEG
+            CAP_DNG_JPEG
+            CAP_YUV_JPEG
+            CAP_RAW_YUV_JPEG
+            CAP_DNG_YUV_JPEG
+
+        If multiple formats are specified, then this function returns multiple
+        capture objects, one for each requested format. If multiple formats and
+        multiple captures (i.e. a burst) are specified, then this function
+        returns multiple lists of capture objects. In both cases, the order of
+        the returned objects matches the order of the requested formats in the
+        out_surfaces parameter. For example:
+
+            yuv_cap            = do_capture( req1                           )
+            yuv_cap            = do_capture( req1,        yuv_fmt           )
+            yuv_cap,  raw_cap  = do_capture( req1,        [yuv_fmt,raw_fmt] )
+            yuv_caps           = do_capture( [req1,req2], yuv_fmt           )
+            yuv_caps, raw_caps = do_capture( [req1,req2], [yuv_fmt,raw_fmt] )
+
+        Args:
+            cap_request: The Python dict/list specifying the capture(s), which
+                will be converted to JSON and sent to the device.
+            out_surfaces: (Optional) specifications of the output image formats
+                and sizes to use for each capture.
+
+        Returns:
+            An object, list of objects, or list of lists of objects, where each
+            object contains the following fields:
+            * data: the image data as a numpy array of bytes.
+            * width: the width of the captured image.
+            * height: the height of the captured image.
+            * format: the image format, one of ["yuv","jpeg","raw","raw10","dng"].
+            * metadata: the capture result object (Python dictionary).
+        """
+        cmd = {}
+        cmd["cmdName"] = "doCapture"
+        if not isinstance(cap_request, list):
+            cmd["captureRequests"] = [cap_request]
+        else:
+            cmd["captureRequests"] = cap_request
+        if out_surfaces is not None:
+            if not isinstance(out_surfaces, list):
+                cmd["outputSurfaces"] = [out_surfaces]
+            else:
+                cmd["outputSurfaces"] = out_surfaces
+            formats = [c["format"] if c.has_key("format") else "yuv"
+                       for c in cmd["outputSurfaces"]]
+            formats = [s if s != "jpg" else "jpeg" for s in formats]
+        else:
+            formats = ['yuv']
+        ncap = len(cmd["captureRequests"])
+        nsurf = 1 if out_surfaces is None else len(cmd["outputSurfaces"])
+        if len(formats) > len(set(formats)):
+            raise its.error.Error('Duplicate format requested')
+        if "dng" in formats and "raw" in formats or \
+                "dng" in formats and "raw10" in formats or \
+                "raw" in formats and "raw10" in formats:
+            raise its.error.Error('Different raw formats not supported')
+        print "Capturing %d frame%s with %d format%s [%s]" % (
+                  ncap, "s" if ncap>1 else "", nsurf, "s" if nsurf>1 else "",
+                  ",".join(formats))
+        self.sock.send(json.dumps(cmd) + "\n")
+
+        # Wait for ncap*nsurf images and ncap metadata responses.
+        # Assume that captures come out in the same order as requested in
+        # the burst; however, individual images of different formats can come
+        # out in any order for a given capture.
+        nbufs = 0
+        bufs = {"yuv":[], "raw":[], "raw10":[], "dng":[], "jpeg":[]}
+        mds = []
+        widths = None
+        heights = None
+        while nbufs < ncap*nsurf or len(mds) < ncap:
+            jsonObj,buf = self.__read_response_from_socket()
+            if jsonObj['tag'] in ['jpegImage', 'yuvImage', 'rawImage', \
+                    'raw10Image', 'dngImage'] and buf is not None:
+                fmt = jsonObj['tag'][:-5]
+                bufs[fmt].append(buf)
+                nbufs += 1
+            elif jsonObj['tag'] == 'captureResults':
+                mds.append(jsonObj['objValue']['captureResult'])
+                outputs = jsonObj['objValue']['outputs']
+                widths = [out['width'] for out in outputs]
+                heights = [out['height'] for out in outputs]
+            else:
+                # Just ignore other tags.
+                pass
+        rets = []
+        for j,fmt in enumerate(formats):
+            objs = []
+            for i in range(ncap):
+                obj = {}
+                obj["data"] = bufs[fmt][i]
+                obj["width"] = widths[j]
+                obj["height"] = heights[j]
+                obj["format"] = fmt
+                obj["metadata"] = mds[i]
+                objs.append(obj)
+            rets.append(objs if ncap>1 else objs[0])
+        return rets if len(rets)>1 else rets[0]
+
+    @staticmethod
+    def report_result(success):
+        _run(('%s shell am broadcast '
+              '-a %s --ez %s %s') % (ItsSession.ADB, ItsSession.ACTION_ITS_RESULT, \
+              ItsSession.EXTRA_SUCCESS, 'True' if success else 'False' ))
+
+
+def _run(cmd):
+    """Replacement for os.system, with hiding of stdout+stderr messages.
+    """
+    with open(os.devnull, 'wb') as devnull:
+        subprocess.check_call(
+                cmd.split(), stdout=devnull, stderr=subprocess.STDOUT)
+
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+
+    # TODO: Add some unit tests.
+    pass
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/pymodules/its/dng.py b/apps/CameraITS/pymodules/its/dng.py
new file mode 100644
index 0000000..f331d02
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/dng.py
@@ -0,0 +1,174 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy
+import numpy.linalg
+import unittest
+
+# Illuminant IDs
+A = 0
+D65 = 1
+
+def compute_cm_fm(illuminant, gains, ccm, cal):
+    """Compute the ColorMatrix (CM) and ForwardMatrix (FM).
+
+    Given a captured shot of a grey chart illuminated by either a D65 or a
+    standard A illuminant, the HAL will produce the WB gains and transform,
+    in the android.colorCorrection.gains and android.colorCorrection.transform
+    tags respectively. These values have both golden module and per-unit
+    calibration baked in.
+
+    This function is used to take the per-unit gains, ccm, and calibration
+    matrix, and compute the values that the DNG ColorMatrix and ForwardMatrix
+    for the specified illuminant should be. These CM and FM values should be
+    the same for all DNG files captured by all units of the same model (e.g.
+    all Nexus 5 units). The calibration matrix should be the same for all DNGs
+    saved by the same unit, but will differ unit-to-unit.
+
+    Args:
+        illuminant: 0 (A) or 1 (D65).
+        gains: White balance gains, as a list of 4 floats.
+        ccm: White balance transform matrix, as a list of 9 floats.
+        cal: Per-unit calibration matrix, as a list of 9 floats.
+
+    Returns:
+        CM: The 3x3 ColorMatrix for the specified illuminant, as a numpy array
+        FM: The 3x3 ForwardMatrix for the specified illuminant, as a numpy array
+    """
+
+    ###########################################################################
+    # Standard matrices.
+
+    # W is the matrix that maps sRGB to XYZ.
+    # See: http://www.brucelindbloom.com/
+    W = numpy.array([
+        [ 0.4124564,  0.3575761,  0.1804375],
+        [ 0.2126729,  0.7151522,  0.0721750],
+        [ 0.0193339,  0.1191920,  0.9503041]])
+
+    # HH is the chromatic adaptation matrix from D65 (since sRGB's ref white is
+    # D65) to D50 (since CIE XYZ's ref white is D50).
+    HH = numpy.array([
+        [ 1.0478112,  0.0228866, -0.0501270],
+        [ 0.0295424,  0.9904844, -0.0170491],
+        [-0.0092345,  0.0150436,  0.7521316]])
+
+    # H is a chromatic adaptation matrix from D65 (because sRGB's reference
+    # white is D65) to the calibration illuminant (which is a standard matrix
+    # depending on the illuminant). For a D65 illuminant, the matrix is the
+    # identity. For the A illuminant, the matrix uses the linear Bradford
+    # adaptation method to map from D65 to A.
+    # See: http://www.brucelindbloom.com/
+    H_D65 = numpy.array([
+        [ 1.0,        0.0,        0.0],
+        [ 0.0,        1.0,        0.0],
+        [ 0.0,        0.0,        1.0]])
+    H_A = numpy.array([
+        [ 1.2164557,  0.1109905, -0.1549325],
+        [ 0.1533326,  0.9152313, -0.0559953],
+        [-0.0239469,  0.0358984,  0.3147529]])
+    H = [H_A, H_D65][illuminant]
+
+    ###########################################################################
+    # Per-model matrices (these should be the same for all units of a
+    # particular phone/camera). These are statics in the HAL camera properties.
+
+    # G is formed by taking the R, G (even), and B white balance gains and
+    # putting them into a diagonal matrix.
+    G = numpy.array([[gains[0],0,0], [0,gains[1],0], [0,0,gains[3]]])
+
+    # S is just the CCM.
+    S = numpy.array([ccm[0:3], ccm[3:6], ccm[6:9]])
+
+    ###########################################################################
+    # Per-unit matrices.
+
+    # The per-unit calibration matrix for the given illuminant.
+    CC = numpy.array([cal[0:3],cal[3:6],cal[6:9]])
+
+    ###########################################################################
+    # Derived matrices. These should match up with DNG-related matrices
+    # provided by the HAL.
+
+    # The color matrix and forward matrix are computed as follows:
+    #   CM = inv(H * W * S * G * CC)
+    #   FM = HH * W * S
+    CM = numpy.linalg.inv(
+            numpy.dot(numpy.dot(numpy.dot(numpy.dot(H, W), S), G), CC))
+    FM = numpy.dot(numpy.dot(HH, W), S)
+
+    # The color matrix is normalized so that it maps the D50 (PCS) white
+    # point to a maximum component value of 1.
+    CM = CM / max(numpy.dot(CM, (0.9642957, 1.0, 0.8251046)))
+
+    return CM, FM
+
+def compute_asn(illuminant, cal, CM):
+    """Compute the AsShotNeutral DNG value.
+
+    This value is the only dynamic DNG value; the ForwardMatrix, ColorMatrix,
+    and CalibrationMatrix values should be the same for every DNG saved by
+    a given unit. The AsShotNeutral depends on the scene white balance
+    estimate.
+
+    This function computes what the DNG AsShotNeutral values should be, for
+    a given ColorMatrix (which is computed from the WB gains and CCM for a
+    shot taken of a grey chart under either A or D65 illuminants) and the
+    per-unit calibration matrix.
+
+    Args:
+        illuminant: 0 (A) or 1 (D65).
+        cal: Per-unit calibration matrix, as a list of 9 floats.
+        CM: The computed 3x3 ColorMatrix for the illuminant, as a numpy array.
+
+    Returns:
+        ASN: The AsShotNeutral value, as a length-3 numpy array.
+    """
+
+    ###########################################################################
+    # Standard matrices.
+
+    # XYZCAL is the XYZ coordinate of the calibration illuminant (A or D65).
+    # See: Wyszecki & Stiles, "Color Science", second edition.
+    XYZCAL_A = numpy.array([1.098675, 1.0, 0.355916])
+    XYZCAL_D65 = numpy.array([0.950456, 1.0, 1.089058])
+    XYZCAL = [XYZCAL_A, XYZCAL_D65][illuminant]
+
+    ###########################################################################
+    # Per-unit matrices.
+
+    # The per-unit calibration matrix for the given illuminant.
+    CC = numpy.array([cal[0:3],cal[3:6],cal[6:9]])
+
+    ###########################################################################
+    # Derived matrices.
+
+    # The AsShotNeutral value is then the product of the calibration and color
+    # matrices with the XYZ coordinate of the calibration illuminant:
+    #   ASN = CC * CM * XYZCAL
+    ASN = numpy.dot(numpy.dot(CC, CM), XYZCAL)
+
+    # Normalize so the max vector element is 1.0.
+    ASN = ASN / max(ASN)
+
+    return ASN
+
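+# Example usage (all numeric values below are illustrative, not from a real
+# device):
+#
+#     gains = [2.0, 1.0, 1.0, 1.5]    # android.colorCorrection.gains
+#     ccm = [1,0,0, 0,1,0, 0,0,1]     # android.colorCorrection.transform
+#     cal = [1,0,0, 0,1,0, 0,0,1]     # per-unit calibration matrix
+#     CM, FM = compute_cm_fm(D65, gains, ccm, cal)
+#     ASN = compute_asn(D65, cal, CM)
+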
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+    # TODO: Add more unit tests.
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/pymodules/its/error.py b/apps/CameraITS/pymodules/its/error.py
new file mode 100644
index 0000000..884389b
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/error.py
@@ -0,0 +1,26 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+
+class Error(Exception):
+    pass
+
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/pymodules/its/image.py b/apps/CameraITS/pymodules/its/image.py
new file mode 100644
index 0000000..a05c4e6
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/image.py
@@ -0,0 +1,745 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import matplotlib
+matplotlib.use('Agg')
+
+import its.error
+import pylab
+import sys
+import Image
+import numpy
+import math
+import unittest
+import cStringIO
+import scipy.stats
+import copy
+
+DEFAULT_YUV_TO_RGB_CCM = numpy.matrix([
+                                [1.000,  0.000,  1.402],
+                                [1.000, -0.344, -0.714],
+                                [1.000,  1.772,  0.000]])
+
+DEFAULT_YUV_OFFSETS = numpy.array([0, 128, 128])
+
+DEFAULT_GAMMA_LUT = numpy.array(
+        [math.floor(65535 * math.pow(i/65535.0, 1/2.2) + 0.5)
+         for i in xrange(65536)])
+
+DEFAULT_INVGAMMA_LUT = numpy.array(
+        [math.floor(65535 * math.pow(i/65535.0, 2.2) + 0.5)
+         for i in xrange(65536)])
+
+MAX_LUT_SIZE = 65536
+
+def convert_capture_to_rgb_image(cap,
+                                 ccm_yuv_to_rgb=DEFAULT_YUV_TO_RGB_CCM,
+                                 yuv_off=DEFAULT_YUV_OFFSETS,
+                                 props=None):
+    """Convert a captured image object to a RGB image.
+
+    Args:
+        cap: A capture object as returned by its.device.do_capture.
+        ccm_yuv_to_rgb: (Optional) the 3x3 CCM to convert from YUV to RGB.
+        yuv_off: (Optional) offsets to subtract from each of Y,U,V values.
+        props: (Optional) camera properties object (of static values);
+            required for processing raw images.
+
+    Returns:
+        RGB float-3 image array, with pixel values in [0.0, 1.0].
+    """
+    w = cap["width"]
+    h = cap["height"]
+    if cap["format"] == "raw10":
+        assert(props is not None)
+        cap = unpack_raw10_capture(cap, props)
+    if cap["format"] == "yuv":
+        y = cap["data"][0:w*h]
+        u = cap["data"][w*h:w*h*5/4]
+        v = cap["data"][w*h*5/4:w*h*6/4]
+        return convert_yuv420_to_rgb_image(y, u, v, w, h)
+    elif cap["format"] == "jpeg":
+        return decompress_jpeg_to_rgb_image(cap["data"])
+    elif cap["format"] == "raw":
+        assert(props is not None)
+        r,gr,gb,b = convert_capture_to_planes(cap, props)
+        return convert_raw_to_rgb_image(r,gr,gb,b, props, cap["metadata"])
+    else:
+        raise its.error.Error('Invalid format %s' % (cap["format"]))
+
+def unpack_raw10_capture(cap, props):
+    """Unpack a raw-10 capture to a raw-16 capture.
+
+    Args:
+        cap: A raw-10 capture object.
+        props: Camera properties object.
+
+    Returns:
+        New capture object with raw-16 data.
+    """
+    # Data is packed as 4x10b pixels in 5 bytes, with the first 4 bytes holding
+    # the MSBs of the pixels, and the 5th byte holding 4x2b LSBs.
+    w,h = cap["width"], cap["height"]
+    if w % 4 != 0:
+        raise its.error.Error('Invalid raw-10 buffer width')
+    cap = copy.deepcopy(cap)
+    cap["data"] = unpack_raw10_image(cap["data"].reshape(h,w*5/4))
+    cap["format"] = "raw"
+    return cap
+
+def unpack_raw10_image(img):
+    """Unpack a raw-10 image to a raw-16 image.
+
+    Output image will have the 10 LSBs filled in each 16b word, and the 6 MSBs
+    will be set to zero.
+
+    Args:
+        img: A raw-10 image, as a uint8 numpy array.
+
+    Returns:
+        Image as a uint16 numpy array, with all row padding stripped.
+    """
+    if img.shape[1] % 5 != 0:
+        raise its.error.Error('Invalid raw-10 buffer width')
+    w = img.shape[1]*4/5
+    h = img.shape[0]
+    # Cut out the 4x8b MSBs and shift them to bits [9:2] of 16b words.
+    msbs = numpy.delete(img, numpy.s_[4::5], 1)
+    msbs = msbs.astype(numpy.uint16)
+    msbs = numpy.left_shift(msbs, 2)
+    msbs = msbs.reshape(h,w)
+    # Cut out the 4x2b LSBs and put each in bits [1:0] of their own 8b words.
+    lsbs = img[::, 4::5].reshape(h,w/4)
+    lsbs = numpy.right_shift(
+            numpy.packbits(numpy.unpackbits(lsbs).reshape(h,w/4,4,2),3), 6)
+    lsbs = lsbs.reshape(h,w)
+    # Fuse the MSBs and LSBs back together
+    img16 = numpy.bitwise_or(msbs, lsbs).reshape(h,w)
+    return img16
+
+def convert_capture_to_planes(cap, props=None):
+    """Convert a captured image object to separate image planes.
+
+    Decompose an image into multiple images, corresponding to different planes.
+
+    For YUV420 captures ("yuv"):
+        Returns Y,U,V planes, where the Y plane is full-res and the U,V planes
+        are each 1/2 x 1/2 of the full res.
+
+    For Bayer captures ("raw" or "raw10"):
+        Returns planes in the order R,Gr,Gb,B, regardless of the Bayer pattern
+        layout. Each plane is 1/2 x 1/2 of the full res.
+
+    For JPEG captures ("jpeg"):
+        Returns R,G,B full-res planes.
+
+    Args:
+        cap: A capture object as returned by its.device.do_capture.
+        props: (Optional) camera properties object (of static values);
+            required for processing raw images.
+
+    Returns:
+        A tuple of float numpy arrays (one per plane), consisting of pixel
+            values in the range [0.0, 1.0].
+    """
+    w = cap["width"]
+    h = cap["height"]
+    if cap["format"] == "raw10":
+        assert(props is not None)
+        cap = unpack_raw10_capture(cap, props)
+    if cap["format"] == "yuv":
+        y = cap["data"][0:w*h]
+        u = cap["data"][w*h:w*h*5/4]
+        v = cap["data"][w*h*5/4:w*h*6/4]
+        return ((y.astype(numpy.float32) / 255.0).reshape(h, w, 1),
+                (u.astype(numpy.float32) / 255.0).reshape(h/2, w/2, 1),
+                (v.astype(numpy.float32) / 255.0).reshape(h/2, w/2, 1))
+    elif cap["format"] == "jpeg":
+        rgb = decompress_jpeg_to_rgb_image(cap["data"]).reshape(w*h*3)
+        return (rgb[::3].reshape(h,w,1),
+                rgb[1::3].reshape(h,w,1),
+                rgb[2::3].reshape(h,w,1))
+    elif cap["format"] == "raw":
+        assert(props is not None)
+        white_level = float(props['android.sensor.info.whiteLevel'])
+        img = numpy.ndarray(shape=(h*w,), dtype='<u2',
+                            buffer=cap["data"][0:w*h*2])
+        img = img.astype(numpy.float32).reshape(h,w) / white_level
+        imgs = [img[::2].reshape(w*h/2)[::2].reshape(h/2,w/2,1),
+                img[::2].reshape(w*h/2)[1::2].reshape(h/2,w/2,1),
+                img[1::2].reshape(w*h/2)[::2].reshape(h/2,w/2,1),
+                img[1::2].reshape(w*h/2)[1::2].reshape(h/2,w/2,1)]
+        idxs = get_canonical_cfa_order(props)
+        return [imgs[i] for i in idxs]
+    else:
+        raise its.error.Error('Invalid format %s' % (cap["format"]))
+
+def get_canonical_cfa_order(props):
+    """Returns a mapping from the Bayer 2x2 top-left grid in the CFA to
+    the standard order R,Gr,Gb,B.
+
+    Args:
+        props: Camera properties object.
+
+    Returns:
+        List of 4 integers, corresponding to the positions in the 2x2 top-
+            left Bayer grid of R,Gr,Gb,B, where the 2x2 grid is labeled as
+            0,1,2,3 in row major order.
+    """
+    # Note that raw streams aren't croppable, so the cropRegion doesn't need
+    # to be considered when determining the top-left pixel color.
+    cfa_pat = props['android.sensor.info.colorFilterArrangement']
+    if cfa_pat == 0:
+        # RGGB
+        return [0,1,2,3]
+    elif cfa_pat == 1:
+        # GRBG
+        return [1,0,3,2]
+    elif cfa_pat == 2:
+        # GBRG
+        return [2,3,0,1]
+    elif cfa_pat == 3:
+        # BGGR
+        return [3,2,1,0]
+    else:
+        raise its.error.Error("Not supported")
+
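+# Example (illustrative): for a GRBG sensor (cfa_pat == 1), the top-left 2x2
+# grid, labeled 0,1,2,3 in row-major order, is
+#
+#     0:Gr  1:R
+#     2:B   3:Gb
+#
+# so R,Gr,Gb,B sit at positions 1,0,3,2, and planes in row-major grid order
+# can be put into canonical order with:
+#
+#     idxs = get_canonical_cfa_order(props)
+#     r, gr, gb, b = [planes[i] for i in idxs]
+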
+def get_gains_in_canonical_order(props, gains):
+    """Reorders the gains tuple to the canonical R,Gr,Gb,B order.
+
+    Args:
+        props: Camera properties object.
+        gains: List of 4 values, in R,G_even,G_odd,B order.
+
+    Returns:
+        List of gains values, in R,Gr,Gb,B order.
+    """
+    cfa_pat = props['android.sensor.info.colorFilterArrangement']
+    if cfa_pat in [0,1]:
+        # RGGB or GRBG, so G_even is Gr
+        return gains
+    elif cfa_pat in [2,3]:
+        # GBRG or BGGR, so G_even is Gb
+        return [gains[0], gains[2], gains[1], gains[3]]
+    else:
+        raise its.error.Error("Not supported")
+
+def convert_raw_to_rgb_image(r_plane, gr_plane, gb_plane, b_plane,
+                             props, cap_res):
+    """Convert a Bayer raw-16 image to an RGB image.
+
+    Includes some extremely rudimentary demosaicking and color processing
+    operations; the output of this function shouldn't be used for any image
+    quality analysis.
+
+    Args:
+        r_plane,gr_plane,gb_plane,b_plane: Numpy arrays for each color plane
+            in the Bayer image, with pixels in the [0.0, 1.0] range.
+        props: Camera properties object.
+        cap_res: Capture result (metadata) object.
+
+    Returns:
+        RGB float-3 image array, with pixel values in [0.0, 1.0].
+    """
+    # Values required for the RAW to RGB conversion.
+    assert(props is not None)
+    white_level = float(props['android.sensor.info.whiteLevel'])
+    black_levels = props['android.sensor.blackLevelPattern']
+    gains = cap_res['android.colorCorrection.gains']
+    ccm = cap_res['android.colorCorrection.transform']
+
+    # Reorder black levels and gains to R,Gr,Gb,B, to match the order
+    # of the planes.
+    idxs = get_canonical_cfa_order(props)
+    black_levels = [black_levels[i] for i in idxs]
+    gains = get_gains_in_canonical_order(props, gains)
+
+    # Convert CCM from rational to float, as numpy arrays.
+    ccm = numpy.array(its.objects.rational_to_float(ccm)).reshape(3,3)
+
+    # Need to scale the image back to the full [0,1] range after subtracting
+    # the black level from each pixel.
+    scale = white_level / (white_level - max(black_levels))
+
+    # Three-channel black levels, normalized to [0,1] by white_level.
+    black_levels = numpy.array([b/white_level for b in [
+            black_levels[i] for i in [0,1,3]]])
+
+    # Three-channel gains.
+    gains = numpy.array([gains[i] for i in [0,1,3]])
+
+    h,w = r_plane.shape[:2]
+    img = numpy.dstack([r_plane,(gr_plane+gb_plane)/2.0,b_plane])
+    img = (((img.reshape(h,w,3) - black_levels) * scale) * gains).clip(0.0,1.0)
+    img = numpy.dot(img.reshape(w*h,3), ccm.T).reshape(h,w,3).clip(0.0,1.0)
+    return img
+
+def convert_yuv420_to_rgb_image(y_plane, u_plane, v_plane,
+                                w, h,
+                                ccm_yuv_to_rgb=DEFAULT_YUV_TO_RGB_CCM,
+                                yuv_off=DEFAULT_YUV_OFFSETS):
+    """Convert a YUV420 8-bit planar image to an RGB image.
+
+    Args:
+        y_plane: The packed 8-bit Y plane.
+        u_plane: The packed 8-bit U plane.
+        v_plane: The packed 8-bit V plane.
+        w: The width of the image.
+        h: The height of the image.
+        ccm_yuv_to_rgb: (Optional) the 3x3 CCM to convert from YUV to RGB.
+        yuv_off: (Optional) offsets to subtract from each of Y,U,V values.
+
+    Returns:
+        RGB float-3 image array, with pixel values in [0.0, 1.0].
+    """
+    y = numpy.subtract(y_plane, yuv_off[0])
+    u = numpy.subtract(u_plane, yuv_off[1]).view(numpy.int8)
+    v = numpy.subtract(v_plane, yuv_off[2]).view(numpy.int8)
+    u = u.reshape(h/2, w/2).repeat(2, axis=1).repeat(2, axis=0)
+    v = v.reshape(h/2, w/2).repeat(2, axis=1).repeat(2, axis=0)
+    yuv = numpy.dstack([y, u.reshape(w*h), v.reshape(w*h)])
+    flt = numpy.empty([h, w, 3], dtype=numpy.float32)
+    flt.reshape(w*h*3)[:] = yuv.reshape(h*w*3)[:]
+    flt = numpy.dot(flt.reshape(w*h,3), ccm_yuv_to_rgb.T).clip(0, 255)
+    rgb = numpy.empty([h, w, 3], dtype=numpy.uint8)
+    rgb.reshape(w*h*3)[:] = flt.reshape(w*h*3)[:]
+    return rgb.astype(numpy.float32) / 255.0
+
+def load_yuv420_to_rgb_image(yuv_fname,
+                             w, h,
+                             ccm_yuv_to_rgb=DEFAULT_YUV_TO_RGB_CCM,
+                             yuv_off=DEFAULT_YUV_OFFSETS):
+    """Load a YUV420 image file, and return as an RGB image.
+
+    Args:
+        yuv_fname: The path of the YUV420 file.
+        w: The width of the image.
+        h: The height of the image.
+        ccm_yuv_to_rgb: (Optional) the 3x3 CCM to convert from YUV to RGB.
+        yuv_off: (Optional) offsets to subtract from each of Y,U,V values.
+
+    Returns:
+        RGB float-3 image array, with pixel values in [0.0, 1.0].
+    """
+    with open(yuv_fname, "rb") as f:
+        y = numpy.fromfile(f, numpy.uint8, w*h, "")
+        v = numpy.fromfile(f, numpy.uint8, w*h/4, "")
+        u = numpy.fromfile(f, numpy.uint8, w*h/4, "")
+        return convert_yuv420_to_rgb_image(y,u,v,w,h,ccm_yuv_to_rgb,yuv_off)
+
+def load_yuv420_to_yuv_planes(yuv_fname, w, h):
+    """Load a YUV420 image file, and return separate Y, U, and V plane images.
+
+    Args:
+        yuv_fname: The path of the YUV420 file.
+        w: The width of the image.
+        h: The height of the image.
+
+    Returns:
+        Separate Y, U, and V images as float-1 Numpy arrays, pixels in [0,1].
+        Note that a pixel value of (0,0,0) is not black, since the U,V values
+        are centered at 0.5, and also that the Y and U,V plane images returned
+        are different sizes (due to chroma subsampling in the YUV420 format).
+    """
+    with open(yuv_fname, "rb") as f:
+        y = numpy.fromfile(f, numpy.uint8, w*h, "")
+        v = numpy.fromfile(f, numpy.uint8, w*h/4, "")
+        u = numpy.fromfile(f, numpy.uint8, w*h/4, "")
+        return ((y.astype(numpy.float32) / 255.0).reshape(h, w, 1),
+                (u.astype(numpy.float32) / 255.0).reshape(h/2, w/2, 1),
+                (v.astype(numpy.float32) / 255.0).reshape(h/2, w/2, 1))
+
+def decompress_jpeg_to_rgb_image(jpeg_buffer):
+    """Decompress a JPEG-compressed image, returning as an RGB image.
+
+    Args:
+        jpeg_buffer: The JPEG stream.
+
+    Returns:
+        A numpy array for the RGB image, with pixels in [0,1].
+    """
+    img = Image.open(cStringIO.StringIO(jpeg_buffer))
+    w = img.size[0]
+    h = img.size[1]
+    return numpy.array(img).reshape(h,w,3) / 255.0
+
+def apply_lut_to_image(img, lut):
+    """Applies a LUT to every pixel in a float image array.
+
+    Internally converts to a 16b integer image, since the LUT can work with up
+    to 16b->16b mappings (i.e. values in the range [0,65535]). The LUT can
+    also have fewer than 65536 entries; however, it must be sized as a power
+    of 2, and for smaller LUTs the scale must match the bit depth.
+
+    For a 16b lut of 65536 entries, the operation performed is:
+
+        lut[r * 65535] / 65535 -> r'
+        lut[g * 65535] / 65535 -> g'
+        lut[b * 65535] / 65535 -> b'
+
+    For a 10b lut of 1024 entries, the operation becomes:
+
+        lut[r * 1023] / 1023 -> r'
+        lut[g * 1023] / 1023 -> g'
+        lut[b * 1023] / 1023 -> b'
+
+    Args:
+        img: Numpy float image array, with pixel values in [0,1].
+        lut: Numpy table encoding a LUT, mapping 16b integer values.
+
+    Returns:
+        Float image array after applying LUT to each pixel.
+    """
+    n = len(lut)
+    if n <= 0 or n > MAX_LUT_SIZE or (n & (n - 1)) != 0:
+        raise its.error.Error('Invalid arg LUT size: %d' % (n))
+    m = float(n-1)
+    return (lut[(img * m).astype(numpy.uint16)] / m).astype(numpy.float32)
+
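+# Example (illustrative): a 65536-entry LUT applying a 1/2.2 encoding gamma,
+# similar in spirit to the DEFAULT_GAMMA_LUT used by write_image below:
+#
+#     lut = numpy.array([65535.0 * math.pow(i / 65535.0, 1 / 2.2)
+#                        for i in xrange(65536)])
+#     img_gamma = apply_lut_to_image(img, lut)
+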
+def apply_matrix_to_image(img, mat):
+    """Multiplies a 3x3 matrix with each float-3 image pixel.
+
+    Each pixel is considered a column vector, and is left-multiplied by
+    the given matrix:
+
+        [     ]   r    r'
+        [ mat ] * g -> g'
+        [     ]   b    b'
+
+    Args:
+        img: Numpy float image array, with pixel values in [0,1].
+        mat: Numpy 3x3 matrix.
+
+    Returns:
+        The numpy float-3 image array resulting from the matrix mult.
+    """
+    h = img.shape[0]
+    w = img.shape[1]
+    img2 = numpy.empty([h, w, 3], dtype=numpy.float32)
+    img2.reshape(w*h*3)[:] = (numpy.dot(img.reshape(h*w, 3), mat.T)
+                             ).reshape(w*h*3)[:]
+    return img2
+
+def get_image_patch(img, xnorm, ynorm, wnorm, hnorm):
+    """Get a patch (tile) of an image.
+
+    Args:
+        img: Numpy float image array, with pixel values in [0,1].
+        xnorm,ynorm,wnorm,hnorm: Normalized (in [0,1]) coords for the tile.
+
+    Returns:
+        Float image array of the patch.
+    """
+    hfull = img.shape[0]
+    wfull = img.shape[1]
+    xtile = math.ceil(xnorm * wfull)
+    ytile = math.ceil(ynorm * hfull)
+    wtile = math.floor(wnorm * wfull)
+    htile = math.floor(hnorm * hfull)
+    return img[ytile:ytile+htile,xtile:xtile+wtile,:].copy()
+
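+# Example (illustrative): many of the ITS tests measure the center 10% x 10%
+# of the frame, which corresponds to:
+#
+#     tile = get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+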
+def compute_image_means(img):
+    """Calculate the mean of each color channel in the image.
+
+    Args:
+        img: Numpy float image array, with pixel values in [0,1].
+
+    Returns:
+        A list of mean values, one per color channel in the image.
+    """
+    means = []
+    chans = img.shape[2]
+    for i in xrange(chans):
+        means.append(numpy.mean(img[:,:,i], dtype=numpy.float64))
+    return means
+
+def compute_image_variances(img):
+    """Calculate the variance of each color channel in the image.
+
+    Args:
+        img: Numpy float image array, with pixel values in [0,1].
+
+    Returns:
+        A list of variance values, one per color channel in the image.
+    """
+    variances = []
+    chans = img.shape[2]
+    for i in xrange(chans):
+        variances.append(numpy.var(img[:,:,i], dtype=numpy.float64))
+    return variances
+
+def write_image(img, fname, apply_gamma=False):
+    """Save a float-3 numpy array image to a file.
+
+    Supported formats: PNG, JPEG, and others; see PIL docs for more.
+
+    Image can be 3-channel, which is interpreted as RGB, or can be 1-channel,
+    which is greyscale.
+
+    Can optionally specify that the image should be gamma-encoded prior to
+    writing it out; this should be done if the image contains linear pixel
+    values, to make the image look "normal".
+
+    Args:
+        img: Numpy image array data.
+        fname: Path of file to save to; the extension specifies the format.
+        apply_gamma: (Optional) apply gamma to the image prior to writing it.
+    """
+    if apply_gamma:
+        img = apply_lut_to_image(img, DEFAULT_GAMMA_LUT)
+    (h, w, chans) = img.shape
+    if chans == 3:
+        Image.fromarray((img * 255.0).astype(numpy.uint8), "RGB").save(fname)
+    elif chans == 1:
+        img3 = (img * 255.0).astype(numpy.uint8).repeat(3).reshape(h,w,3)
+        Image.fromarray(img3, "RGB").save(fname)
+    else:
+        raise its.error.Error('Unsupported image type')
+
+def downscale_image(img, f):
+    """Shrink an image by a given integer factor.
+
+    This function computes output pixel values by averaging over rectangular
+    regions of the input image; it doesn't skip or sample pixels, and all input
+    image pixels are evenly weighted.
+
+    If the downscaling factor doesn't cleanly divide the width and/or height,
+    then the remaining pixels on the right or bottom edge are discarded prior
+    to the downscaling.
+
+    Args:
+        img: The input image as an ndarray.
+        f: The downscaling factor, which should be an integer.
+
+    Returns:
+        The new (downscaled) image, as an ndarray.
+    """
+    h,w,chans = img.shape
+    f = int(f)
+    assert(f >= 1)
+    h = (h/f)*f
+    w = (w/f)*f
+    img = img[0:h:,0:w:,::]
+    chs = []
+    for i in xrange(chans):
+        ch = img.reshape(h*w*chans)[i::chans].reshape(h,w)
+        ch = ch.reshape(h,w/f,f).mean(2).reshape(h,w/f)
+        ch = ch.T.reshape(w/f,h/f,f).mean(2).T.reshape(h/f,w/f)
+        chs.append(ch.reshape(h*w/(f*f)))
+    img = numpy.vstack(chs).T.reshape(h/f,w/f,chans)
+    return img
+
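+# Example (an illustrative worked case): with f=2, a 2x2 single-channel image
+# averages down to a single pixel:
+#
+#     img = numpy.array([1.0, 3.0, 5.0, 7.0]).reshape(2, 2, 1)
+#     downscale_image(img, 2)
+#     # -> array([[[ 4.]]])
+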
+def __measure_color_checker_patch(img, xc,yc, patch_size):
+    r = patch_size/2
+    tile = img[yc-r:yc+r+1:, xc-r:xc+r+1:, ::]
+    means = tile.mean(1).mean(0)
+    return means
+
+def get_color_checker_chart_patches(img, debug_fname_prefix=None):
+    """Return the center coords of each patch in a color checker chart.
+
+    Assumptions:
+    * Chart is vertical or horizontal w.r.t. camera, but not diagonal.
+    * Chart is (roughly) planar-parallel to the camera.
+    * Chart is centered in frame (roughly).
+    * Around/behind chart is white/grey background.
+    * The only black pixels in the image are from the chart.
+    * Chart is 100% visible and contained within image.
+    * No other objects within image.
+    * Image is well-exposed.
+    * Standard color checker chart with standard-sized black borders.
+
+    The values returned are in the coordinate system of the chart; that is,
+    the "origin" patch is the brown patch that is in the chart's top-left
+    corner when it is in the normal upright/horizontal orientation. (The chart
+    may be any of the four main orientations in the image.)
+
+    The chart is 6x4 patches in the normal upright orientation. The return
+    values of this function are the center coordinate of the top-left patch,
+    and the displacement vectors to the next patches to the right and below
+    the top-left patch. From these pieces of data, the center coordinates of
+    any of the patches can be computed.
+
+    Args:
+        img: Input image, as a numpy array with pixels in [0,1].
+        debug_fname_prefix: If not None, the (string) name of a file prefix to
+            use to save a number of debug images for visualizing the output of
+            this function; can be used to see if the patches are being found
+            successfully.
+
+    Returns:
+        4x6 list of lists (4 rows of 6 patches each) of integer (x,y) coords
+        of the center of each patch, ordered in row-major "chart order".
+    """
+
+    # Shrink the original image.
+    DOWNSCALE_FACTOR = 4
+    img_small = downscale_image(img, DOWNSCALE_FACTOR)
+
+    # Make a threshold image, which is 1.0 where the image is black,
+    # and 0.0 elsewhere.
+    BLACK_PIXEL_THRESH = 0.2
+    mask_img = scipy.stats.threshold(
+                img_small.max(2), BLACK_PIXEL_THRESH, 1.1, 0.0)
+    mask_img = 1.0 - scipy.stats.threshold(mask_img, -0.1, 0.1, 1.0)
+
+    if debug_fname_prefix is not None:
+        h,w = mask_img.shape
+        write_image(img, debug_fname_prefix+"_0.jpg")
+        write_image(mask_img.repeat(3).reshape(h,w,3),
+                debug_fname_prefix+"_1.jpg")
+
+    # Mask image flattened to a single row or column (by averaging).
+    # Also apply a threshold to these arrays.
+    FLAT_PIXEL_THRESH = 0.05
+    flat_row = mask_img.mean(0)
+    flat_col = mask_img.mean(1)
+    flat_row = [0 if v < FLAT_PIXEL_THRESH else 1 for v in flat_row]
+    flat_col = [0 if v < FLAT_PIXEL_THRESH else 1 for v in flat_col]
+
+    # Start and end of the non-zero region of the flattened row/column.
+    flat_row_nonzero = [i for i in range(len(flat_row)) if flat_row[i]>0]
+    flat_col_nonzero = [i for i in range(len(flat_col)) if flat_col[i]>0]
+    flat_row_min, flat_row_max = min(flat_row_nonzero), max(flat_row_nonzero)
+    flat_col_min, flat_col_max = min(flat_col_nonzero), max(flat_col_nonzero)
+
+    # Orientation of chart, and number of grid cells horizontally/vertically.
+    orient = "h" if flat_row_max-flat_row_min>flat_col_max-flat_col_min else "v"
+    xgrids = 6 if orient=="h" else 4
+    ygrids = 6 if orient=="v" else 4
+
+    # Get better bounds on the patches region, lopping off some of the excess
+    # black border.
+    HRZ_BORDER_PAD_FRAC = 0.0138
+    VERT_BORDER_PAD_FRAC = 0.0395
+    xpad = HRZ_BORDER_PAD_FRAC if orient=="h" else VERT_BORDER_PAD_FRAC
+    ypad = HRZ_BORDER_PAD_FRAC if orient=="v" else VERT_BORDER_PAD_FRAC
+    xchart = flat_row_min + (flat_row_max - flat_row_min) * xpad
+    ychart = flat_col_min + (flat_col_max - flat_col_min) * ypad
+    wchart = (flat_row_max - flat_row_min) * (1 - 2*xpad)
+    hchart = (flat_col_max - flat_col_min) * (1 - 2*ypad)
+
+    # Get the colors of the 4 corner patches, in clockwise order, by measuring
+    # the average value of a small patch at each of the 4 patch centers.
+    colors = []
+    centers = []
+    for (x,y) in [(0,0), (xgrids-1,0), (xgrids-1,ygrids-1), (0,ygrids-1)]:
+        xc = xchart + (x + 0.5)*wchart/xgrids
+        yc = ychart + (y + 0.5)*hchart/ygrids
+        xc = int(xc * DOWNSCALE_FACTOR + 0.5)
+        yc = int(yc * DOWNSCALE_FACTOR + 0.5)
+        centers.append((xc,yc))
+        chan_means = __measure_color_checker_patch(img, xc,yc, 32)
+        colors.append(sum(chan_means) / len(chan_means))
+
+    # The brightest corner is the white patch, the darkest is the black patch.
+    # The black patch should be counter-clockwise from the white patch.
+    white_patch_index = None
+    for i in range(4):
+        if colors[i] == max(colors) and \
+                colors[(i-1+4)%4] == min(colors):
+            white_patch_index = i%4
+    assert(white_patch_index is not None)
+
+    # Return the coords of the origin (top-left when the chart is in the normal
+    # upright orientation) patch's center, and the vector displacement to the
+    # center of the second patch on the first row of the chart (when in the
+    # normal upright orientation).
+    origin_index = (white_patch_index+1)%4
+    prev_index = (origin_index-1+4)%4
+    next_index = (origin_index+1)%4
+    origin_center = centers[origin_index]
+    prev_center = centers[prev_index]
+    next_center = centers[next_index]
+    vec_across = tuple([(next_center[i]-origin_center[i])/5.0 for i in [0,1]])
+    vec_down = tuple([(prev_center[i]-origin_center[i])/3.0 for i in [0,1]])
+
+    # Compute the center of each patch.
+    patches = [[],[],[],[]]
+    for yi in range(4):
+        for xi in range(6):
+            x0,y0 = origin_center
+            dxh,dyh = vec_across
+            dxv,dyv = vec_down
+            xc = int(x0 + dxh*xi + dxv*yi)
+            yc = int(y0 + dyh*xi + dyv*yi)
+            patches[yi].append((xc,yc))
+
+    # Sanity check: test that the R,G,B,black,white patches are correct.
+    patch_info = [(2,2,[0]), # Red
+                  (2,1,[1]), # Green
+                  (2,0,[2]), # Blue
+                  (3,0,[0,1,2]), # White
+                  (3,5,[])] # Black
+    for i in range(len(patch_info)):
+        yi,xi,high_chans = patch_info[i]
+        low_chans = [i for i in [0,1,2] if i not in high_chans]
+        xc,yc = patches[yi][xi]
+        means = __measure_color_checker_patch(img, xc,yc, 64)
+        if (min([means[i] for i in high_chans]+[1]) < \
+                max([means[i] for i in low_chans]+[0])):
+            print "Color patch sanity check failed: patch", i
+            # If the debug info is requested, then don't assert that the patches
+            # are matched, to allow the caller to see the output.
+            if debug_fname_prefix is None:
+                assert(0)
+
+    if debug_fname_prefix is not None:
+        for (xc,yc) in sum(patches,[]):
+            img[yc,xc] = 1.0
+        write_image(img, debug_fname_prefix+"_2.jpg")
+
+    return patches
+
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+
+    # TODO: Add more unit tests.
+
+    def test_apply_matrix_to_image(self):
+        """Unit test for apply_matrix_to_image.
+
+        Test by using a canned set of values on a 1x1 pixel image.
+
+            [ 1 2 3 ]   [ 0.1 ]   [ 1.4 ]
+            [ 4 5 6 ] * [ 0.2 ] = [ 3.2 ]
+            [ 7 8 9 ]   [ 0.3 ]   [ 5.0 ]
+               mat         x         y
+        """
+        mat = numpy.array([[1,2,3],[4,5,6],[7,8,9]])
+        x = numpy.array([0.1,0.2,0.3]).reshape(1,1,3)
+        y = apply_matrix_to_image(x, mat).reshape(3).tolist()
+        y_ref = [1.4,3.2,5.0]
+        passed = all([math.fabs(y[i] - y_ref[i]) < 0.001 for i in xrange(3)])
+        self.assertTrue(passed)
+
+    def test_apply_lut_to_image(self):
+        """ Unit test for apply_lut_to_image.
+
+        Test by using a canned set of values on a 1x1 pixel image. The LUT will
+        simply double the value of the index:
+
+            lut[x] = 2*x
+        """
+        lut = numpy.array([2*i for i in xrange(65536)])
+        x = numpy.array([0.1,0.2,0.3]).reshape(1,1,3)
+        y = apply_lut_to_image(x, lut).reshape(3).tolist()
+        y_ref = [0.2,0.4,0.6]
+        passed = all([math.fabs(y[i] - y_ref[i]) < 0.001 for i in xrange(3)])
+        self.assertTrue(passed)
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/pymodules/its/objects.py b/apps/CameraITS/pymodules/its/objects.py
new file mode 100644
index 0000000..d11ef84
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/objects.py
@@ -0,0 +1,190 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import os.path
+import sys
+import re
+import json
+import tempfile
+import time
+import unittest
+import subprocess
+import math
+
+def int_to_rational(i):
+    """Function to convert Python integers to Camera2 rationals.
+
+    Args:
+        i: Python integer or list of integers.
+
+    Returns:
+        Python dictionary or list of dictionaries representing the given int(s)
+        as rationals with denominator=1.
+    """
+    if isinstance(i, list):
+        return [{"numerator":val, "denominator":1} for val in i]
+    else:
+        return {"numerator":i, "denominator":1}
+
+def float_to_rational(f, denom=128):
+    """Function to convert Python floats to Camera2 rationals.
+
+    Args:
+        f: Python float or list of floats.
+        denom: (Optional) the denominator to use in the output rationals.
+
+    Returns:
+        Python dictionary or list of dictionaries representing the given
+        float(s) as rationals.
+    """
+    if isinstance(f, list):
+        return [{"numerator":math.floor(val*denom+0.5), "denominator":denom}
+                for val in f]
+    else:
+        return {"numerator":math.floor(f*denom+0.5), "denominator":denom}
+
+def rational_to_float(r):
+    """Function to convert Camera2 rational objects to Python floats.
+
+    Args:
+        r: Rational or list of rationals, as Python dictionaries.
+
+    Returns:
+        Float or list of floats.
+    """
+    if isinstance(r, list):
+        return [float(val["numerator"]) / float(val["denominator"])
+                for val in r]
+    else:
+        return float(r["numerator"]) / float(r["denominator"])
+
+def manual_capture_request(sensitivity, exp_time, linear_tonemap=False):
+    """Return a capture request with everything set to manual.
+
+    Uses identity/unit color correction, and the default tonemap curve.
+    Optionally, the tonemap can be specified as being linear.
+
+    Args:
+        sensitivity: The sensitivity value to populate the request with.
+        exp_time: The exposure time, in nanoseconds, to populate the request
+            with.
+        linear_tonemap: [Optional] whether a linear tonemap should be used
+            in this request.
+
+    Returns:
+        The default manual capture request, ready to be passed to the
+        its.device.do_capture function.
+    """
+    req = {
+        "android.control.mode": 0,
+        "android.control.aeMode": 0,
+        "android.control.awbMode": 0,
+        "android.control.afMode": 0,
+        "android.control.effectMode": 0,
+        "android.sensor.frameDuration": 0,
+        "android.sensor.sensitivity": sensitivity,
+        "android.sensor.exposureTime": exp_time,
+        "android.colorCorrection.mode": 0,
+        "android.colorCorrection.transform":
+                int_to_rational([1,0,0, 0,1,0, 0,0,1]),
+        "android.colorCorrection.gains": [1,1,1,1],
+        "android.tonemap.mode": 1,
+        }
+    if linear_tonemap:
+        req["android.tonemap.mode"] = 0
+        req["android.tonemap.curveRed"] = [0.0,0.0, 1.0,1.0]
+        req["android.tonemap.curveGreen"] = [0.0,0.0, 1.0,1.0]
+        req["android.tonemap.curveBlue"] = [0.0,0.0, 1.0,1.0]
+    return req
+
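+# Example (illustrative usage, assuming "cam" is an open
+# its.device.ItsSession): a fully-manual shot at ISO 100 with a 10ms exposure:
+#
+#     req = manual_capture_request(100, 10*1000*1000)
+#     cap = cam.do_capture(req)
+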
+def auto_capture_request():
+    """Return a capture request with everything set to auto.
+    """
+    return {
+        "android.control.mode": 1,
+        "android.control.aeMode": 1,
+        "android.control.awbMode": 1,
+        "android.control.afMode": 1,
+        "android.colorCorrection.mode": 1,
+        "android.tonemap.mode": 1,
+        }
+
+def get_available_output_sizes(fmt, props):
+    """Return a sorted list of available output sizes for a given format.
+
+    Args:
+        fmt: the output format, as a string in
+            ["jpg", "jpeg", "yuv", "raw", "raw10"].
+        props: the object returned from its.device.get_camera_properties().
+
+    Returns:
+        A sorted list of (w,h) tuples (sorted large-to-small).
+    """
+    fmt_codes = {"raw":0x20, "raw10":0x25, "yuv":0x23, "jpg":0x100, "jpeg":0x100}
+    configs = props['android.scaler.streamConfigurationMap']\
+                   ['availableStreamConfigurations']
+    fmt_configs = [cfg for cfg in configs if cfg['format'] == fmt_codes[fmt]]
+    out_configs = [cfg for cfg in fmt_configs if cfg['input'] == False]
+    out_sizes = [(cfg['width'],cfg['height']) for cfg in out_configs]
+    out_sizes.sort(reverse=True)
+    return out_sizes
+
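+# Example (illustrative): since the list is sorted large-to-small, the
+# largest and smallest available YUV sizes are:
+#
+#     sizes = get_available_output_sizes("yuv", props)
+#     w_full, h_full = sizes[0]
+#     w_min, h_min = sizes[-1]
+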
+def get_fastest_manual_capture_settings(props):
+    """Return a capture request and format spec for the fastest capture.
+
+    Args:
+        props: the object returned from its.device.get_camera_properties().
+
+    Returns:
+        Two values, the first is a capture request, and the second is an output
+        format specification, for the fastest possible (legal) capture that
+        can be performed on this device (with the smallest output size).
+    """
+    fmt = "yuv"
+    size = get_available_output_sizes(fmt, props)[-1]
+    out_spec = {"format":fmt, "width":size[0], "height":size[1]}
+    s = min(props['android.sensor.info.sensitivityRange'])
+    e = min(props['android.sensor.info.exposureTimeRange'])
+    req = manual_capture_request(s,e)
+    return req, out_spec
+
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+
+    def test_int_to_rational(self):
+        """Unit test for int_to_rational.
+        """
+        self.assertEqual(int_to_rational(10),
+                         {"numerator":10,"denominator":1})
+        self.assertEqual(int_to_rational([1,2]),
+                         [{"numerator":1,"denominator":1},
+                          {"numerator":2,"denominator":1}])
+
+    def test_float_to_rational(self):
+        """Unit test for float_to_rational.
+        """
+        self.assertEqual(float_to_rational(0.5001, 64),
+                        {"numerator":32, "denominator":64})
+
+    def test_rational_to_float(self):
+        """Unit test for rational_to_float.
+        """
+        self.assertTrue(
+                abs(rational_to_float({"numerator":32,"denominator":64})-0.5)
+                < 0.0001)
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/pymodules/its/target.py b/apps/CameraITS/pymodules/its/target.py
new file mode 100644
index 0000000..3715f34
--- /dev/null
+++ b/apps/CameraITS/pymodules/its/target.py
@@ -0,0 +1,266 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.image
+import its.objects
+import os
+import os.path
+import sys
+import json
+import unittest
+
+CACHE_FILENAME = "its.target.cfg"
+
+def __do_target_exposure_measurement(its_session):
+    """Use device 3A and captured shots to determine scene exposure.
+
+    Creates a new ITS device session (so this function should not be called
+    while another session to the device is open).
+
+    Assumes that the camera is pointed at a scene that is reasonably uniform
+    and reasonably lit -- that is, an appropriate target for running the ITS
+    tests that assume such uniformity.
+
+    Measures the scene using device 3A and then by taking a shot to home in on
+    the exact exposure level that will result in a center 10% by 10% patch of
+    the scene having an intensity level of 0.5 (in the pixel range of [0,1])
+    when a linear tonemap is used. That is, the pixels coming off the sensor
+    should be at approximately 50% intensity (though note that it's actually
+    the luma value in the YUV image that is being targeted to 50%).
+
+    The computed exposure value is the product of the sensitivity (ISO) and
+    exposure time (ns) to achieve that sensor exposure level.
+
+    Args:
+        its_session: Holds an open device session.
+
+    Returns:
+        The measured product of sensitivity and exposure time that results in
+            the luma channel of captured shots having an intensity of 0.5.
+    """
+    print "Measuring target exposure"
+
+    # Get AE+AWB lock first, so the auto values in the capture result are
+    # populated properly.
+    r = [[0.45, 0.45, 0.1, 0.1, 1]]
+    sens, exp_time, gains, xform, _ \
+            = its_session.do_3a(r,r,r,do_af=False,get_results=True)
+
+    # Convert the transform to rational.
+    xform_rat = [{"numerator":int(100*x),"denominator":100} for x in xform]
+
+    # Linear tonemap
+    tmap = sum([[i/63.0,i/63.0] for i in range(64)], [])
+
+    # Capture a manual shot with this exposure, using a linear tonemap.
+    # Use the gains+transform returned by the AWB pass.
+    req = its.objects.manual_capture_request(sens, exp_time)
+    req["android.tonemap.mode"] = 0
+    req["android.tonemap.curveRed"] = tmap
+    req["android.tonemap.curveGreen"] = tmap
+    req["android.tonemap.curveBlue"] = tmap
+    req["android.colorCorrection.transform"] = xform_rat
+    req["android.colorCorrection.gains"] = gains
+    cap = its_session.do_capture(req)
+
+    # Compute the mean luma of a center patch.
+    yimg,uimg,vimg = its.image.convert_capture_to_planes(cap)
+    tile = its.image.get_image_patch(yimg, 0.45, 0.45, 0.1, 0.1)
+    luma_mean = its.image.compute_image_means(tile)
+
+    # Compute the exposure value that would result in a luma of 0.5.
+    return sens * exp_time * 0.5 / luma_mean[0]
+
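+# Illustrative numbers for the return value above: a shot at sensitivity 100
+# and a 10ms (10*1000*1000 ns) exposure that measures a mean luma of 0.25
+# yields 100 * 10*1000*1000 * 0.5 / 0.25 = 2e9, i.e. the ISO*time product is
+# doubled to pull the luma up to 0.5.
+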
+def __set_cached_target_exposure(exposure):
+    """Saves the given exposure value to a cached location.
+
+    Once a value is cached, a call to __get_cached_target_exposure will return
+    the value, even from a subsequent test/script run. That is, the value is
+    persisted.
+
+    The value is persisted in a JSON file in the current directory (from which
+    the script calling this function is run).
+
+    Args:
+        exposure: The value to cache.
+    """
+    print "Setting cached target exposure"
+    with open(CACHE_FILENAME, "w") as f:
+        f.write(json.dumps({"exposure":exposure}))
+
+def __get_cached_target_exposure():
+    """Get the cached exposure value.
+
+    Returns:
+        The cached exposure value, or None if there is no valid cached value.
+    """
+    try:
+        with open(CACHE_FILENAME, "r") as f:
+            o = json.load(f)
+            return o["exposure"]
+    except:
+        return None
+
+def clear_cached_target_exposure():
+    """If there is a cached exposure value, clear it.
+    """
+    if os.path.isfile(CACHE_FILENAME):
+        os.remove(CACHE_FILENAME)
+
+def set_hardcoded_exposure(exposure):
+    """Set a hard-coded exposure value, rather than relying on measurements.
+
+    The exposure value is the product of sensitivity (ISO) and exposure time
+    (ns) that will result in a center-patch luma value of 0.5 (using a linear
+    tonemap) for the scene that the camera is pointing at.
+
+    If bringing up a new HAL implementation and the ability to use the device
+    to measure the scene isn't there yet (e.g. device 3A doesn't work), then a
+    cache file of the appropriate name can be manually created and populated
+    with a hard-coded value using this function.
+
+    Args:
+        exposure: The hard-coded exposure value to set.
+    """
+    __set_cached_target_exposure(exposure)
+
+def get_target_exposure(its_session=None):
+    """Get the target exposure to use.
+
+    If there is a cached value and if the "target" command line parameter is
+    present, then return the cached value. Otherwise, measure a new value from
+    the scene, cache it, then return it.
+
+    Args:
+        its_session: Optional, holding an open device session.
+
+    Returns:
+        The target exposure value.
+    """
+    cached_exposure = None
+    for s in sys.argv[1:]:
+        if s == "target":
+            cached_exposure = __get_cached_target_exposure()
+    if cached_exposure is not None:
+        print "Using cached target exposure"
+        return cached_exposure
+    if its_session is None:
+        with its.device.ItsSession() as cam:
+            measured_exposure = __do_target_exposure_measurement(cam)
+    else:
+        measured_exposure = __do_target_exposure_measurement(its_session)
+    __set_cached_target_exposure(measured_exposure)
+    return measured_exposure
+
+def get_target_exposure_combos(its_session=None):
+    """Get a set of legal combinations of target (exposure time, sensitivity).
+
+    Gets the target exposure value, which is a product of sensitivity (ISO) and
+    exposure time, and returns equivalent tuples of (exposure time,sensitivity)
+    that are all legal and that correspond to the four extrema in this 2D param
+    space, as well as to two "middle" points.
+
+    Will open a device session if its_session is None.
+
+    Args:
+        its_session: Optional, holding an open device session.
+
+    Returns:
+        Object containing six legal (exposure time, sensitivity) tuples, keyed
+        by the following strings:
+            "minExposureTime"
+            "midExposureTime"
+            "maxExposureTime"
+            "minSensitivity"
+            "midSensitivity"
+            "maxSensitivity
+    """
+    if its_session is None:
+        with its.device.ItsSession() as cam:
+            exposure = get_target_exposure(cam)
+            props = cam.get_camera_properties()
+    else:
+        exposure = get_target_exposure(its_session)
+        props = its_session.get_camera_properties()
+
+    sens_range = props['android.sensor.info.sensitivityRange']
+    exp_time_range = props['android.sensor.info.exposureTimeRange']
+
+    # Combo 1: smallest legal exposure time.
+    e1_expt = exp_time_range[0]
+    e1_sens = exposure / e1_expt
+    if e1_sens > sens_range[1]:
+        e1_sens = sens_range[1]
+        e1_expt = exposure / e1_sens
+
+    # Combo 2: largest legal exposure time.
+    e2_expt = exp_time_range[1]
+    e2_sens = exposure / e2_expt
+    if e2_sens < sens_range[0]:
+        e2_sens = sens_range[0]
+        e2_expt = exposure / e2_sens
+
+    # Combo 3: smallest legal sensitivity.
+    e3_sens = sens_range[0]
+    e3_expt = exposure / e3_sens
+    if e3_expt > exp_time_range[1]:
+        e3_expt = exp_time_range[1]
+        e3_sens = exposure / e3_expt
+
+    # Combo 4: largest legal sensitivity.
+    e4_sens = sens_range[1]
+    e4_expt = exposure / e4_sens
+    if e4_expt < exp_time_range[0]:
+        e4_expt = exp_time_range[0]
+        e4_sens = exposure / e4_expt
+
+    # Combo 5: middle exposure time.
+    e5_expt = (exp_time_range[0] + exp_time_range[1]) / 2.0
+    e5_sens = exposure / e5_expt
+    if e5_sens > sens_range[1]:
+        e5_sens = sens_range[1]
+        e5_expt = exposure / e5_sens
+    if e5_sens < sens_range[0]:
+        e5_sens = sens_range[0]
+        e5_expt = exposure / e5_sens
+
+    # Combo 6: middle sensitivity.
+    e6_sens = (sens_range[0] + sens_range[1]) / 2.0
+    e6_expt = exposure / e6_sens
+    if e6_expt > exp_time_range[1]:
+        e6_expt = exp_time_range[1]
+        e6_sens = exposure / e6_expt
+    if e6_expt < exp_time_range[0]:
+        e6_expt = exp_time_range[0]
+        e6_sens = exposure / e6_expt
+
+    return {
+        "minExposureTime" : (int(e1_expt), int(e1_sens)),
+        "maxExposureTime" : (int(e2_expt), int(e2_sens)),
+        "minSensitivity" : (int(e3_expt), int(e3_sens)),
+        "maxSensitivity" : (int(e4_expt), int(e4_sens)),
+        "midExposureTime" : (int(e5_expt), int(e5_sens)),
+        "midSensitivity" : (int(e6_expt), int(e6_sens))
+        }
+
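+# Example (illustrative usage, assuming "cam" is an open
+# its.device.ItsSession): each combo is an (exposure time, sensitivity) pair
+# that can seed a manual capture request:
+#
+#     combos = get_target_exposure_combos(cam)
+#     exp_time, sens = combos["midSensitivity"]
+#     req = its.objects.manual_capture_request(sens, exp_time)
+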
+class __UnitTest(unittest.TestCase):
+    """Run a suite of unit tests on this module.
+    """
+    # TODO: Add some unit tests.
+
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/apps/CameraITS/tests/inprog/scene2/README b/apps/CameraITS/tests/inprog/scene2/README
new file mode 100644
index 0000000..3a0953f
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/scene2/README
@@ -0,0 +1,8 @@
+Scene 2 requires a camera lab with controlled illuminants, for example
+light sources capable of producing D65, D50, A, TL84, etc. illumination.
+Specific charts may also be required, for example grey cards, color
+checker charts, and resolution charts. The individual tests will specify
+the setup that they require.
+
+If a test requires that the camera be in any particular orientation, it will
+specify this too. Otherwise, the camera can be in either portrait or landscape.
diff --git a/apps/CameraITS/tests/inprog/scene2/test_dng_tags.py b/apps/CameraITS/tests/inprog/scene2/test_dng_tags.py
new file mode 100644
index 0000000..0c96ca7
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/scene2/test_dng_tags.py
@@ -0,0 +1,94 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.dng
+import its.objects
+import numpy
+import os.path
+
+def main():
+    """Test that the DNG tags are internally self-consistent.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+
+        # Assumes that illuminant 1 is D65, and illuminant 2 is standard A.
+        # TODO: Generalize DNG tags check for any provided illuminants.
+        illum_code = [21, 17] # D65, A
+        illum_str = ['D65', 'A']
+        ref_str = ['android.sensor.referenceIlluminant%d'%(i) for i in [1,2]]
+        cm_str = ['android.sensor.colorTransform%d'%(i) for i in [1,2]]
+        fm_str = ['android.sensor.forwardMatrix%d'%(i) for i in [1,2]]
+        cal_str = ['android.sensor.calibrationTransform%d'%(i) for i in [1,2]]
+        dng_illum = [its.dng.D65, its.dng.A]
+
+        for i in [0,1]:
+            assert(props[ref_str[i]] == illum_code[i])
+            raw_input("\n[Point camera at grey card under %s and press ENTER]"%(
+                    illum_str[i]))
+
+            cam.do_3a(do_af=False)
+            cap = cam.do_capture(its.objects.auto_capture_request())
+            gains = cap["metadata"]["android.colorCorrection.gains"]
+            ccm = its.objects.rational_to_float(
+                    cap["metadata"]["android.colorCorrection.transform"])
+            cal = its.objects.rational_to_float(props[cal_str[i]])
+            print "HAL reported gains:\n", numpy.array(gains)
+            print "HAL reported ccm:\n", numpy.array(ccm).reshape(3,3)
+            print "HAL reported cal:\n", numpy.array(cal).reshape(3,3)
+
+            # Dump the image.
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_%s.jpg" % (NAME, illum_str[i]))
+
+            # Compute the matrices that are expected under this illuminant from
+            # the HAL-reported WB gains, CCM, and calibration matrix.
+            cm, fm = its.dng.compute_cm_fm(dng_illum[i], gains, ccm, cal)
+            asn = its.dng.compute_asn(dng_illum[i], cal, cm)
+            print "Expected ColorMatrix:\n", cm
+            print "Expected ForwardMatrix:\n", fm
+            print "Expected AsShotNeutral:\n", asn
+
+            # Get the matrices that are reported by the HAL for this
+            # illuminant.
+            cm_ref = numpy.array(its.objects.rational_to_float(
+                    props[cm_str[i]])).reshape(3,3)
+            fm_ref = numpy.array(its.objects.rational_to_float(
+                    props[fm_str[i]])).reshape(3,3)
+            asn_ref = numpy.array(its.objects.rational_to_float(
+                    cap['metadata']['android.sensor.neutralColorPoint']))
+            print "Reported ColorMatrix:\n", cm_ref
+            print "Reported ForwardMatrix:\n", fm_ref
+            print "Reported AsShotNeutral:\n", asn_ref
+
+            # The color matrix may be scaled (between the reported and
+            # expected values).
+            cm_scale = cm.mean(1).mean(0) / cm_ref.mean(1).mean(0)
+            print "ColorMatrix scale factor:", cm_scale
+
+            # Compute the deltas between reported and expected.
+            print "Ratios in ColorMatrix:\n", cm / cm_ref
+            print "Deltas in ColorMatrix (after normalizing):\n", cm/cm_scale - cm_ref
+            print "Deltas in ForwardMatrix:\n", fm - fm_ref
+            print "Deltas in AsShotNeutral:\n", asn - asn_ref
+
+            # TODO: Add pass/fail test on DNG matrices.
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_3a_remote.py b/apps/CameraITS/tests/inprog/test_3a_remote.py
new file mode 100644
index 0000000..c76ff6d
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_3a_remote.py
@@ -0,0 +1,70 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import os.path
+import pprint
+import math
+import numpy
+import matplotlib.pyplot
+import mpl_toolkits.mplot3d
+
+def main():
+    """Run 3A remotely (from this script).
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        w_map = props["android.lens.info.shadingMapSize"]["width"]
+        h_map = props["android.lens.info.shadingMapSize"]["height"]
+
+        # TODO: Test for 3A convergence, and exit this test once converged.
+
+        triggered = False
+        while True:
+            req = its.objects.auto_capture_request()
+            req["android.statistics.lensShadingMapMode"] = 1
+            req['android.control.aePrecaptureTrigger'] = (0 if triggered else 1)
+            req['android.control.afTrigger'] = (0 if triggered else 1)
+            triggered = True
+
+            cap = cam.do_capture(req)
+
+            ae_state = cap["metadata"]["android.control.aeState"]
+            awb_state = cap["metadata"]["android.control.awbState"]
+            af_state = cap["metadata"]["android.control.afState"]
+            gains = cap["metadata"]["android.colorCorrection.gains"]
+            transform = cap["metadata"]["android.colorCorrection.transform"]
+            exp_time = cap["metadata"]['android.sensor.exposureTime']
+            lsc_map = cap["metadata"]["android.statistics.lensShadingMap"]
+            foc_dist = cap["metadata"]['android.lens.focusDistance']
+            foc_range = cap["metadata"]['android.lens.focusRange']
+
+            print "States (AE,AWB,AF):", ae_state, awb_state, af_state
+            print "Gains:", gains
+            print "Transform:", [its.objects.rational_to_float(t)
+                                 for t in transform]
+            print "AE region:", cap["metadata"]['android.control.aeRegions']
+            print "AF region:", cap["metadata"]['android.control.afRegions']
+            print "AWB region:", cap["metadata"]['android.control.awbRegions']
+            print "LSC map:", w_map, h_map, lsc_map[:8]
+            print "Focus (dist,range):", foc_dist, foc_range
+            print ""
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_black_level.py b/apps/CameraITS/tests/inprog/test_black_level.py
new file mode 100644
index 0000000..37dab94
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_black_level.py
@@ -0,0 +1,99 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+import numpy
+
+def main():
+    """Black level consistence test.
+
+    Test: capture dark frames and check if black level correction is done
+    correctly.
+    1. Black level should be roughly consistent for repeating shots.
+    2. Noise distribution should be roughly centered at black level.
+
+    Shoot with the camera covered (i.e., dark/black). The test varies the
+    sensitivity parameter.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    NUM_REPEAT = 3
+    NUM_STEPS = 3
+
+    # Only check the center part, where LSC has little effect.
+    R = 200
+
+    # The most frequent pixel value in each image; assume this is the black
+    # level, since the images are all dark (shot with the lens covered).
+    ymodes = []
+    umodes = []
+    vmodes = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        sens_range = props['android.sensor.info.sensitivityRange']
+        sens_step = (sens_range[1] - sens_range[0]) / float(NUM_STEPS-1)
+        sensitivities = [sens_range[0] + i*sens_step for i in range(NUM_STEPS)]
+        print "Sensitivities:", sensitivities
+
+        for si, s in enumerate(sensitivities):
+            for rep in xrange(NUM_REPEAT):
+                req = its.objects.manual_capture_request(100, 1*1000*1000)
+                req["android.blackLevel.lock"] = True
+                req["android.sensor.sensitivity"] = s
+                cap = cam.do_capture(req)
+                yimg,uimg,vimg = its.image.convert_capture_to_planes(cap)
+                w = cap["width"]
+                h = cap["height"]
+
+                # Magnify the noise in saved images to help visualize.
+                its.image.write_image(yimg * 2,
+                                      "%s_s=%05d_y.jpg" % (NAME, s), True)
+                its.image.write_image(numpy.absolute(uimg - 0.5) * 2,
+                                      "%s_s=%05d_u.jpg" % (NAME, s), True)
+
+                yimg = yimg[h/2-R:h/2+R, w/2-R:w/2+R]
+                uimg = uimg[h/4-R/2:h/4+R/2, w/4-R/2:w/4+R/2]
+                vimg = vimg[h/4-R/2:h/4+R/2, w/4-R/2:w/4+R/2]
+                yhist,_ = numpy.histogram(yimg*255, 256, (0,256))
+                ymodes.append(numpy.argmax(yhist))
+                uhist,_ = numpy.histogram(uimg*255, 256, (0,256))
+                umodes.append(numpy.argmax(uhist))
+                vhist,_ = numpy.histogram(vimg*255, 256, (0,256))
+                vmodes.append(numpy.argmax(vhist))
+
+                # Take 32 bins from Y, U, and V.
+                # Histograms of U and V are cropped at the center of 128.
+                pylab.plot(range(32), yhist.tolist()[0:32], 'rgb'[si])
+                pylab.plot(range(32), uhist.tolist()[112:144], 'rgb'[si]+'--')
+                pylab.plot(range(32), vhist.tolist()[112:144], 'rgb'[si]+'--')
+
+    pylab.xlabel("DN: Y[0:32], U[112:144], V[112:144]")
+    pylab.ylabel("Pixel count")
+    pylab.title("Histograms for different sensitivities")
+    matplotlib.pyplot.savefig("%s_plot_histograms.png" % (NAME))
+
+    print "Y black levels:", ymodes
+    print "U black levels:", umodes
+    print "V black levels:", vmodes
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_blc_lsc.py b/apps/CameraITS/tests/inprog/test_blc_lsc.py
new file mode 100644
index 0000000..ce120a2
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_blc_lsc.py
@@ -0,0 +1,106 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that BLC and LSC look reasonable.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    r_means_center = []
+    g_means_center = []
+    b_means_center = []
+    r_means_corner = []
+    g_means_corner = []
+    b_means_corner = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        expt_range = props['android.sensor.info.exposureTimeRange']
+
+        # Get AE+AWB lock first, so the auto values in the capture result are
+        # populated properly.
+        r = [[0,0,1,1,1]]
+        ae_sen,ae_exp,awb_gains,awb_transform,_ \
+                = cam.do_3a(r,r,r,do_af=False,get_results=True)
+        print "AE:", ae_sen, ae_exp / 1000000.0
+        print "AWB:", awb_gains, awb_transform
+
+        # Set analog gain (sensitivity) to 800
+        ae_exp = ae_exp * ae_sen / 800
+        ae_sen = 800
+
+        # Capture range of exposures from 1/100x to 4x of AE estimate.
+        exposures = [ae_exp*x/100.0 for x in [1]+range(10,401,40)]
+        exposures = [e for e in exposures
+                     if e >= expt_range[0] and e <= expt_range[1]]
+
+        # Convert the transform back to rational.
+        awb_transform_rat = its.objects.float_to_rational(awb_transform)
+
+        # Linear tonemap
+        tmap = sum([[i/63.0,i/63.0] for i in range(64)], [])
+
+        reqs = []
+        for e in exposures:
+            req = its.objects.manual_capture_request(ae_sen,e)
+            req["android.tonemap.mode"] = 0
+            req["android.tonemap.curveRed"] = tmap
+            req["android.tonemap.curveGreen"] = tmap
+            req["android.tonemap.curveBlue"] = tmap
+            req["android.colorCorrection.transform"] = awb_transform_rat
+            req["android.colorCorrection.gains"] = awb_gains
+            reqs.append(req)
+
+        caps = cam.do_capture(reqs)
+        for i,cap in enumerate(caps):
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_i=%d.jpg"%(NAME, i))
+
+            tile_center = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile_center)
+            r_means_center.append(rgb_means[0])
+            g_means_center.append(rgb_means[1])
+            b_means_center.append(rgb_means[2])
+
+            tile_corner = its.image.get_image_patch(img, 0.0, 0.0, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile_corner)
+            r_means_corner.append(rgb_means[0])
+            g_means_corner.append(rgb_means[1])
+            b_means_corner.append(rgb_means[2])
+
+    fig = matplotlib.pyplot.figure()
+    pylab.plot(exposures, r_means_center, 'r')
+    pylab.plot(exposures, g_means_center, 'g')
+    pylab.plot(exposures, b_means_center, 'b')
+    pylab.ylim([0,1])
+    matplotlib.pyplot.savefig("%s_plot_means_center.png" % (NAME))
+
+    fig = matplotlib.pyplot.figure()
+    pylab.plot(exposures, r_means_corner, 'r')
+    pylab.plot(exposures, g_means_corner, 'g')
+    pylab.plot(exposures, b_means_corner, 'b')
+    pylab.ylim([0,1])
+    matplotlib.pyplot.savefig("%s_plot_means_corner.png" % (NAME))
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_burst_sameness_auto.py b/apps/CameraITS/tests/inprog/test_burst_sameness_auto.py
new file mode 100644
index 0000000..f3d49be
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_burst_sameness_auto.py
@@ -0,0 +1,93 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import os.path
+import numpy
+
+def main():
+    """Take long bursts of images and check that they're all identical.
+
+    Assumes a static scene. Can be used to identify if there are sporadic
+    frames that are processed differently or have artifacts, or if 3A isn't
+    stable; this test converges 3A at the start and then locks AE+AWB for
+    the duration of the capture.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    BURST_LEN = 50
+    BURSTS = 5
+    FRAMES = BURST_LEN * BURSTS
+
+    SPREAD_THRESH = 0.03
+
+    with its.device.ItsSession() as cam:
+
+        # Capture at the smallest resolution.
+        props = cam.get_camera_properties()
+        if not its.caps.manual_sensor(props):
+            print "Test skipped"
+            return
+
+        _, fmt = its.objects.get_fastest_manual_capture_settings(props)
+        w,h = fmt["width"], fmt["height"]
+
+        # Converge 3A prior to capture.
+        cam.do_3a(lock_ae=True, lock_awb=True)
+
+        # After 3A has converged, lock AE+AWB for the duration of the test.
+        req = its.objects.auto_capture_request()
+        req["android.blackLevel.lock"] = True
+        req["android.control.awbLock"] = True
+        req["android.control.aeLock"] = True
+
+        # Capture bursts of YUV shots.
+        # Get the mean values of a center patch for each.
+        # Also build a 4D array, which is an array of all RGB images.
+        r_means = []
+        g_means = []
+        b_means = []
+        imgs = numpy.empty([FRAMES,h,w,3])
+        for j in range(BURSTS):
+            caps = cam.do_capture([req]*BURST_LEN, [fmt])
+            for i,cap in enumerate(caps):
+                n = j*BURST_LEN + i
+                imgs[n] = its.image.convert_capture_to_rgb_image(cap)
+                tile = its.image.get_image_patch(imgs[n], 0.45, 0.45, 0.1, 0.1)
+                means = its.image.compute_image_means(tile)
+                r_means.append(means[0])
+                g_means.append(means[1])
+                b_means.append(means[2])
+
+        # Dump all images.
+        print "Dumping images"
+        for i in range(FRAMES):
+            its.image.write_image(imgs[i], "%s_frame%03d.jpg"%(NAME,i))
+
+        # The mean image.
+        img_mean = imgs.mean(0)
+        its.image.write_image(img_mean, "%s_mean.jpg"%(NAME))
+
+        # Pass/fail based on center patch similarity.
+        for means in [r_means, g_means, b_means]:
+            spread = max(means) - min(means)
+            print spread
+            assert(spread < SPREAD_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_burst_sameness_fullres_auto.py b/apps/CameraITS/tests/inprog/test_burst_sameness_fullres_auto.py
new file mode 100644
index 0000000..a8d1d45
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_burst_sameness_fullres_auto.py
@@ -0,0 +1,91 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import os.path
+import numpy
+import pylab
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Take long bursts of images and check that they're all identical.
+
+    Assumes a static scene. Can be used to identify if there are sporadic
+    frames that are processed differently or have artifacts, or if 3A isn't
+    stable; this test converges 3A at the start and then locks AE+AWB for
+    the duration of the capture.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    BURST_LEN = 6
+    BURSTS = 2
+    FRAMES = BURST_LEN * BURSTS
+
+    DELTA_THRESH = 0.1
+
+    with its.device.ItsSession() as cam:
+
+        # Capture at full resolution.
+        props = cam.get_camera_properties()
+        w,h = its.objects.get_available_output_sizes("yuv", props)[0]
+
+        # Converge 3A prior to capture.
+        cam.do_3a(lock_ae=True, lock_awb=True)
+
+        # After 3A has converged, lock AE+AWB for the duration of the test.
+        req = its.objects.auto_capture_request()
+        req["android.blackLevel.lock"] = True
+        req["android.control.awbLock"] = True
+        req["android.control.aeLock"] = True
+
+        # Capture bursts of YUV shots.
+        # Build a 4D array, which is an array of all RGB images after down-
+        # scaling them by a factor of 4x4.
+        imgs = numpy.empty([FRAMES,h/4,w/4,3])
+        for j in range(BURSTS):
+            caps = cam.do_capture([req]*BURST_LEN)
+            for i,cap in enumerate(caps):
+                n = j*BURST_LEN + i
+                imgs[n] = its.image.downscale_image(
+                        its.image.convert_capture_to_rgb_image(cap), 4)
+
+        # Dump all images.
+        print "Dumping images"
+        for i in range(FRAMES):
+            its.image.write_image(imgs[i], "%s_frame%03d.jpg"%(NAME,i))
+
+        # The mean image.
+        img_mean = imgs.mean(0)
+        its.image.write_image(img_mean, "%s_mean.jpg"%(NAME))
+
+        # Compute the deltas of each image from the mean image; this test
+        # passes if none of the deltas are large.
+        print "Computing frame differences"
+        delta_maxes = []
+        for i in range(FRAMES):
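+            # Each downscaled image holds (h/4)*(w/4)*3 = h*w*3/16 values.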
+            deltas = (imgs[i] - img_mean).reshape(h*w*3/16)
+            delta_max_pos = numpy.max(deltas)
+            delta_max_neg = numpy.min(deltas)
+            delta_maxes.append(max(abs(delta_max_pos), abs(delta_max_neg)))
+        max_delta_max = max(delta_maxes)
+        print "Frame %d has largest diff %f" % (
+                delta_maxes.index(max_delta_max), max_delta_max)
+        assert(max_delta_max < DELTA_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_crop_region.py b/apps/CameraITS/tests/inprog/test_crop_region.py
new file mode 100644
index 0000000..396603f
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_crop_region.py
@@ -0,0 +1,67 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import its.image
+import its.device
+import its.objects
+
+
+def main():
+    """Takes shots with different sensor crop regions.
+    """
+    name = os.path.basename(__file__).split(".")[0]
+
+    # Regions specified here in x,y,w,h normalized form.
+    regions = [[0.0, 0.0, 0.5, 0.5], # top left
+               [0.0, 0.5, 0.5, 0.5], # bottom left
+               [0.1, 0.9, 0.5, 1.0]] # right side (top + bottom)
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        r = props['android.sensor.info.pixelArraySize']
+        w = r['width']
+        h = r['height']
+
+        # Capture a full frame first.
+        reqs = [its.objects.auto_capture_request()]
+        print "Capturing img0 with the full sensor region"
+
+        # Capture a frame for each of the regions.
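+        # (cropRegion is specified in sensor pixel coordinates, so the
+        # normalized regions are scaled by the pixel array dimensions.)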
+        for i,region in enumerate(regions):
+            req = its.objects.auto_capture_request()
+            req['android.scaler.cropRegion'] = {
+                    "left": int(region[0] * w),
+                    "top": int(region[1] * h),
+                    "right": int((region[0]+region[2])*w),
+                    "bottom": int((region[1]+region[3])*h)}
+            reqs.append(req)
+            crop = req['android.scaler.cropRegion']
+            print "Capturing img%d with crop: %d,%d %dx%d"%(i+1,
+                    crop["left"],crop["top"],
+                    crop["right"]-crop["left"],crop["bottom"]-crop["top"])
+
+        cam.do_3a()
+        caps = cam.do_capture(reqs)
+
+        for i,cap in enumerate(caps):
+            img = its.image.convert_capture_to_rgb_image(cap)
+            crop = cap["metadata"]['android.scaler.cropRegion']
+            its.image.write_image(img, "%s_img%d.jpg"%(name,i))
+            print "Captured img%d with crop: %d,%d %dx%d"%(i,
+                    crop["left"],crop["top"],
+                    crop["right"]-crop["left"],crop["bottom"]-crop["top"])
+
+if __name__ == '__main__':
+    main()
diff --git a/apps/CameraITS/tests/inprog/test_ev_compensation.py b/apps/CameraITS/tests/inprog/test_ev_compensation.py
new file mode 100644
index 0000000..f9b0cd3
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_ev_compensation.py
@@ -0,0 +1,71 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import os.path
+import pylab
+import matplotlib
+import matplotlib.pyplot
+import numpy
+
+def main():
+    """Tests that EV compensation is applied.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    MAX_LUMA_DELTA_THRESH = 0.01
+    AVG_LUMA_DELTA_THRESH = 0.001
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        cam.do_3a()
+
+        # Capture auto shots, but with a linear tonemap.
+        req = its.objects.auto_capture_request()
+        req["android.tonemap.mode"] = 0
+        req["android.tonemap.curveRed"] = (0.0, 0.0, 1.0, 1.0)
+        req["android.tonemap.curveGreen"] = (0.0, 0.0, 1.0, 1.0)
+        req["android.tonemap.curveBlue"] = (0.0, 0.0, 1.0, 1.0)
+
+        evs = range(-4,5)
+        lumas = []
+        for ev in evs:
+            req['android.control.aeExposureCompensation'] = ev
+            cap = cam.do_capture(req)
+            y = its.image.convert_capture_to_planes(cap)[0]
+            tile = its.image.get_image_patch(y, 0.45,0.45,0.1,0.1)
+            lumas.append(its.image.compute_image_means(tile)[0])
+
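+        # With a linear tonemap, each EV compensation step is expected to
+        # scale the patch luma by 2^aeCompensationStep (the step size in
+        # stops).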
+        ev_step_size_in_stops = its.objects.rational_to_float(
+                props['android.control.aeCompensationStep'])
+        luma_increase_per_step = pow(2, ev_step_size_in_stops)
+        expected_lumas = [lumas[0] * pow(luma_increase_per_step, i) \
+                for i in range(len(evs))]
+
+        pylab.plot(evs, lumas, 'r')
+        pylab.plot(evs, expected_lumas, 'b')
+        matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+        luma_diffs = [expected_lumas[i] - lumas[i] for i in range(len(evs))]
+        max_diff = max(luma_diffs)
+        avg_diff = sum(luma_diffs) / len(luma_diffs)
+        print "Max delta between modeled and measured lumas:", max_diff
+        print "Avg delta between modeled and measured lumas:", avg_diff
+        assert(max_diff < MAX_LUMA_DELTA_THRESH)
+        assert(avg_diff < AVG_LUMA_DELTA_THRESH)
+
+if __name__ == '__main__':
+    main()
diff --git a/apps/CameraITS/tests/inprog/test_faces.py b/apps/CameraITS/tests/inprog/test_faces.py
new file mode 100644
index 0000000..228dac8
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_faces.py
@@ -0,0 +1,41 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import os.path
+
+def main():
+    """Test face detection.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        cam.do_3a()
+        req = its.objects.auto_capture_request()
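+        # Face detect mode 2 = FULL: rectangles, scores, ids, and landmarks.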
+        req['android.statistics.faceDetectMode'] = 2
+        caps = cam.do_capture([req]*5)
+        for i,cap in enumerate(caps):
+            md = cap['metadata']
+            print "Frame %d face metadata:" % i
+            print "  Ids:", md['android.statistics.faceIds']
+            print "  Landmarks:", md['android.statistics.faceLandmarks']
+            print "  Rectangles:", md['android.statistics.faceRectangles']
+            print "  Scores:", md['android.statistics.faceScores']
+            print ""
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_param_black_level_lock.py b/apps/CameraITS/tests/inprog/test_param_black_level_lock.py
new file mode 100644
index 0000000..7d0be92
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_param_black_level_lock.py
@@ -0,0 +1,76 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+import numpy
+
+def main():
+    """Test that when the black level is locked, it doesn't change.
+
+    Shoot with the camera covered, i.e. dark/black. The test varies the
+    sensitivity parameter and checks that the black level doesn't change.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    NUM_STEPS = 5
+
+    req = {
+        "android.blackLevel.lock": True,
+        "android.control.mode": 0,
+        "android.control.aeMode": 0,
+        "android.control.awbMode": 0,
+        "android.control.afMode": 0,
+        "android.sensor.frameDuration": 0,
+        "android.sensor.exposureTime": 10*1000*1000
+        }
+
+    # The most frequent pixel value in each image; assume this is the black
+    # level, since the images are all dark (shot with the lens covered).
+    modes = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        sens_range = props['android.sensor.info.sensitivityRange']
+        sensitivities = range(sens_range[0],
+                              sens_range[1]+1,
+                              int((sens_range[1] - sens_range[0]) / NUM_STEPS))
+        for si, s in enumerate(sensitivities):
+            req["android.sensor.sensitivity"] = s
+            cap = cam.do_capture(req)
+            yimg,_,_ = its.image.convert_capture_to_planes(cap)
+            hist,_ = numpy.histogram(yimg*255, 256, (0,256))
+            modes.append(numpy.argmax(hist))
+
+            # Add the low end of this histogram to the plot, to visualize
+            # where the black level sits at each sensitivity.
+            pylab.plot(range(16), hist.tolist()[:16])
+
+    pylab.xlabel("Luma DN, showing [0:16] out of full [0:256] range")
+    pylab.ylabel("Pixel count")
+    pylab.title("Histograms for different sensitivities")
+    matplotlib.pyplot.savefig("%s_plot_histograms.png" % (NAME))
+
+    # Check that the black levels are all the same.
+    print "Black levels:", modes
+    assert(all([modes[i] == modes[0] for i in range(len(modes))]))
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_param_edge_mode.py b/apps/CameraITS/tests/inprog/test_param_edge_mode.py
new file mode 100644
index 0000000..e928f21
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_param_edge_mode.py
@@ -0,0 +1,48 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that the android.edge.mode parameter is applied.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    req = {
+        "android.control.mode": 0,
+        "android.control.aeMode": 0,
+        "android.control.awbMode": 0,
+        "android.control.afMode": 0,
+        "android.sensor.frameDuration": 0,
+        "android.sensor.exposureTime": 30*1000*1000,
+        "android.sensor.sensitivity": 100
+        }
+
+    with its.device.ItsSession() as cam:
+        sens, exp, gains, xform, focus = cam.do_3a(get_results=True)
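+        # Edge enhancement modes: 0 = OFF, 1 = FAST, 2 = HIGH_QUALITY.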
+        for e in [0,1,2]:
+            req["android.edge.mode"] = e
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_mode=%d.jpg" % (NAME, e))
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/inprog/test_test_patterns.py b/apps/CameraITS/tests/inprog/test_test_patterns.py
new file mode 100644
index 0000000..f75b141
--- /dev/null
+++ b/apps/CameraITS/tests/inprog/test_test_patterns.py
@@ -0,0 +1,41 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import os.path
+
+def main():
+    """Test sensor test patterns.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        caps = []
+        for i in range(1,6):
+            req = its.objects.manual_capture_request(100, 10*1000*1000)
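+            # testPatternData supplies the per-channel (RGGB) pixel values,
+            # which are used when the pattern mode is SOLID_COLOR.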
+            req['android.sensor.testPatternData'] = [40, 100, 160, 220]
+            req['android.sensor.testPatternMode'] = i
+
+            # Capture the shot twice, and use the second one, so the pattern
+            # will have stabilized.
+            caps = cam.do_capture([req]*2)
+
+            img = its.image.convert_capture_to_rgb_image(caps[1])
+            its.image.write_image(img, "%s_pattern=%d.jpg" % (NAME, i))
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/README b/apps/CameraITS/tests/scene0/README
new file mode 100644
index 0000000..50be04a
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/README
@@ -0,0 +1,3 @@
+This scene has no requirements; scene 0 tests don't actually
+look at the image content, and the camera can be pointed at
+any target (or even flat on the desk).
diff --git a/apps/CameraITS/tests/scene0/test_camera_properties.py b/apps/CameraITS/tests/scene0/test_camera_properties.py
new file mode 100644
index 0000000..05fc364
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_camera_properties.py
@@ -0,0 +1,43 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.caps
+import its.device
+import its.objects
+import pprint
+
+def main():
+    """Basic test to query and print out camera properties.
+    """
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+
+        pprint.pprint(props)
+
+        # Test that a handful of required keys are present.
+        if its.caps.manual_sensor(props):
+            assert(props.has_key('android.sensor.info.sensitivityRange'))
+
+        assert(props.has_key('android.sensor.orientation'))
+        assert(props.has_key('android.scaler.streamConfigurationMap'))
+        assert(props.has_key('android.lens.facing'))
+
+        print "JPG sizes:", its.objects.get_available_output_sizes("jpg", props)
+        print "RAW sizes:", its.objects.get_available_output_sizes("raw", props)
+        print "YUV sizes:", its.objects.get_available_output_sizes("yuv", props)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_capture_result_dump.py b/apps/CameraITS/tests/scene0/test_capture_result_dump.py
new file mode 100644
index 0000000..c8b1f8f
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_capture_result_dump.py
@@ -0,0 +1,42 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.caps
+import its.image
+import its.device
+import its.objects
+import its.target
+import pprint
+
+def main():
+    """Test that a capture result is returned from a manual capture; dump it.
+    """
+
+    with its.device.ItsSession() as cam:
+        # Arbitrary capture request exposure values; image content is not
+        # important for this test, only the metadata.
+        props = cam.get_camera_properties()
+        if not its.caps.manual_sensor(props):
+            print "Test skipped"
+            return
+
+        req,fmt = its.objects.get_fastest_manual_capture_settings(props)
+        cap = cam.do_capture(req, fmt)
+        pprint.pprint(cap["metadata"])
+
+        # No pass/fail check; test passes if it completes.
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_gyro_bias.py b/apps/CameraITS/tests/scene0/test_gyro_bias.py
new file mode 100644
index 0000000..64a5ff0
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_gyro_bias.py
@@ -0,0 +1,80 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import time
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+import numpy
+
+def main():
+    """Test if the gyro has stable output when device is stationary.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # Number of consecutive samples averaged together (used both for the
+    # plot and for the pass/fail checks).
+    N = 20
+
+    # Pass/fail thresholds for gyro drift
+    MEAN_THRESH = 0.01
+    VAR_THRESH = 0.001
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        # Only run test if the appropriate caps are claimed.
+        if not its.caps.sensor_fusion(props):
+            print "Test skipped"
+            return
+
+        print "Collecting gyro events"
+        cam.start_sensor_events()
+        time.sleep(5)
+        gyro_events = cam.get_sensor_events()["gyro"]
+
+    nevents = (len(gyro_events) / N) * N
+    gyro_events = gyro_events[:nevents]
+    times = numpy.array([(e["time"] - gyro_events[0]["time"])/1000000000.0
+                         for e in gyro_events])
+    xs = numpy.array([e["x"] for e in gyro_events])
+    ys = numpy.array([e["y"] for e in gyro_events])
+    zs = numpy.array([e["z"] for e in gyro_events])
+
+    # Group samples into size-N groups and average each together, to get rid
+    # of individual random spikes in the data.
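+    # Use the center timestamp of each group as the group's time value.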
+    times = times[N/2::N]
+    xs = xs.reshape(nevents/N, N).mean(1)
+    ys = ys.reshape(nevents/N, N).mean(1)
+    zs = zs.reshape(nevents/N, N).mean(1)
+
+    pylab.plot(times, xs, 'r', label="x")
+    pylab.plot(times, ys, 'g', label="y")
+    pylab.plot(times, zs, 'b', label="z")
+    pylab.xlabel("Time (seconds)")
+    pylab.ylabel("Gyro readings (mean of %d samples)"%(N))
+    pylab.legend()
+    matplotlib.pyplot.savefig("%s_plot.png" % (NAME))
+
+    for samples in [xs,ys,zs]:
+        assert(abs(samples.mean()) < MEAN_THRESH)
+        assert(numpy.var(samples) < VAR_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_jitter.py b/apps/CameraITS/tests/scene0/test_jitter.py
new file mode 100644
index 0000000..29b3047
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_jitter.py
@@ -0,0 +1,67 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import os.path
+import pylab
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Measure jitter in camera timestamps.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # Pass/fail thresholds
+    MIN_AVG_FRAME_DELTA = 30 # at least 30ms delta between frames
+    MAX_VAR_FRAME_DELTA = 0.01 # variance of frame deltas, in ms^2
+    MAX_FRAME_DELTA_JITTER = 0.3 # max ms gap from the average frame delta
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.manual_sensor(props):
+            print "Test skipped"
+            return
+
+        req, fmt = its.objects.get_fastest_manual_capture_settings(props)
+        caps = cam.do_capture([req]*50, [fmt])
+
+        # Compute the millisecond deltas between consecutive frame starts.
+        tstamps = [c['metadata']['android.sensor.timestamp'] for c in caps]
+        deltas = [tstamps[i]-tstamps[i-1] for i in range(1,len(tstamps))]
+        deltas_ms = [d/1000000.0 for d in deltas]
+        avg = sum(deltas_ms) / len(deltas_ms)
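+        # Population variance computed as E[d^2] - E[d]^2.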
+        var = sum([d*d for d in deltas_ms]) / len(deltas_ms) - avg * avg
+        range0 = min(deltas_ms) - avg
+        range1 = max(deltas_ms) - avg
+        print "Average:", avg
+        print "Variance:", var
+        print "Jitter range:", range0, "to", range1
+
+        # Draw a plot.
+        pylab.plot(range(len(deltas_ms)), deltas_ms)
+        matplotlib.pyplot.savefig("%s_deltas.png" % (NAME))
+
+        # Test for pass/fail.
+        assert(avg > MIN_AVG_FRAME_DELTA)
+        assert(var < MAX_VAR_FRAME_DELTA)
+        assert(abs(range0) < MAX_FRAME_DELTA_JITTER)
+        assert(abs(range1) < MAX_FRAME_DELTA_JITTER)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_metadata.py b/apps/CameraITS/tests/scene0/test_metadata.py
new file mode 100644
index 0000000..b4ca4cb
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_metadata.py
@@ -0,0 +1,98 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import its.target
+import its.caps
+
+def main():
+    """Test the validity of some metadata entries.
+
+    Looks at capture results and at the camera characteristics objects.
+    """
+    global md, props, failed
+
+    with its.device.ItsSession() as cam:
+        # Arbitrary capture request exposure values; image content is not
+        # important for this test, only the metadata.
+        props = cam.get_camera_properties()
+        auto_req = its.objects.auto_capture_request()
+        cap = cam.do_capture(auto_req)
+        md = cap["metadata"]
+
+    print "Hardware level"
+    print "  Legacy:", its.caps.legacy(props)
+    print "  Limited:", its.caps.limited(props)
+    print "  Full:", its.caps.full(props)
+    print "Capabilities"
+    print "  Manual sensor:", its.caps.manual_sensor(props)
+    print "  Manual post-proc:", its.caps.manual_post_proc(props)
+    print "  Raw:", its.caps.raw(props)
+    print "  Sensor fusion:", its.caps.sensor_fusion(props)
+
+    # Test: hardware level should be a valid value.
+    check('props.has_key("android.info.supportedHardwareLevel")')
+    check('props["android.info.supportedHardwareLevel"] is not None')
+    check('props["android.info.supportedHardwareLevel"] in [0,1,2]')
+    full = getval('props["android.info.supportedHardwareLevel"]') == 1
+
+    # Test: the rollingShutterSkew and frameDuration tags must both be
+    # present, and rollingShutterSkew must be greater than zero and smaller
+    # than the reported frame duration.
+    check('md.has_key("android.sensor.frameDuration")')
+    check('md["android.sensor.frameDuration"] is not None')
+    check('md.has_key("android.sensor.rollingShutterSkew")')
+    check('md["android.sensor.rollingShutterSkew"] is not None')
+    check('md["android.sensor.frameDuration"] > '
+          'md["android.sensor.rollingShutterSkew"] > 0')
+
+    # Test: timestampSource must be a valid value.
+    check('props.has_key("android.sensor.info.timestampSource")')
+    check('props["android.sensor.info.timestampSource"] is not None')
+    check('props["android.sensor.info.timestampSource"] in [0,1]')
+
+    # Test: croppingType must be a valid value, and for full devices, it
+    # must be FREEFORM=1.
+    check('props.has_key("android.scaler.croppingType")')
+    check('props["android.scaler.croppingType"] is not None')
+    check('props["android.scaler.croppingType"] in [0,1]')
+    if full:
+        check('props["android.scaler.croppingType"] == 1')
+
+    assert(not failed)
+
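+# Evaluate an expression over the global metadata objects, returning a
+# default value if evaluation fails.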
+def getval(expr, default=None):
+    try:
+        return eval(expr)
+    except:
+        return default
+
+failed = False
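+# Evaluate a boolean check expression, printing pass/fail and recording any
+# failure (or exception) so that all checks run before the final assert.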
+def check(expr):
+    global md, props, failed
+    try:
+        if eval(expr):
+            print "Passed>", expr
+        else:
+            print "Failed>>", expr
+            failed = True
+    except:
+        print "Failed>>", expr
+        failed = True
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_param_sensitivity_burst.py b/apps/CameraITS/tests/scene0/test_param_sensitivity_burst.py
new file mode 100644
index 0000000..eb9a3c1
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_param_sensitivity_burst.py
@@ -0,0 +1,49 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+
+def main():
+    """Test that the android.sensor.sensitivity parameter is applied properly
+    within a burst. Inspects the output metadata only (not the image data).
+    """
+
+    NUM_STEPS = 3
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.manual_sensor(props):
+            print "Test skipped"
+            return
+
+        sens_range = props['android.sensor.info.sensitivityRange']
+        sens_step = (sens_range[1] - sens_range[0]) / NUM_STEPS
+        sens_list = range(sens_range[0], sens_range[1], sens_step)
+        e = min(props['android.sensor.info.exposureTimeRange'])
+        reqs = [its.objects.manual_capture_request(s,e) for s in sens_list]
+        _,fmt = its.objects.get_fastest_manual_capture_settings(props)
+
+        caps = cam.do_capture(reqs, fmt)
+        for i,cap in enumerate(caps):
+            s_req = sens_list[i]
+            s_res = cap["metadata"]["android.sensor.sensitivity"]
+            assert(s_req == s_res)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_sensor_events.py b/apps/CameraITS/tests/scene0/test_sensor_events.py
new file mode 100644
index 0000000..61f0383
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_sensor_events.py
@@ -0,0 +1,44 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.caps
+import time
+
+def main():
+    """Basic test to query and print out sensor events.
+
+    The test only works if the screen is on, i.e. the device isn't in standby.
+    Passes if at least one event of each type is received.
+    """
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        # Only run test if the appropriate caps are claimed.
+        if not its.caps.sensor_fusion(props):
+            print "Test skipped"
+            return
+
+        cam.start_sensor_events()
+        time.sleep(1)
+        events = cam.get_sensor_events()
+        print "Events over 1s: %d gyro, %d accel, %d mag"%(
+                len(events["gyro"]), len(events["accel"]), len(events["mag"]))
+        assert(len(events["gyro"]) > 0)
+        assert(len(events["accel"]) > 0)
+        assert(len(events["mag"]) > 0)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene0/test_unified_timestamps.py b/apps/CameraITS/tests/scene0/test_unified_timestamps.py
new file mode 100644
index 0000000..cdc9567
--- /dev/null
+++ b/apps/CameraITS/tests/scene0/test_unified_timestamps.py
@@ -0,0 +1,67 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.objects
+import its.caps
+import time
+
+def main():
+    """Test if image and motion sensor events are in the same time domain.
+    """
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+
+        # Only run test if the appropriate caps are claimed.
+        if not its.caps.sensor_fusion(props):
+            print "Test skipped"
+            return
+
+        # Get the timestamp of a captured image.
+        req, fmt = its.objects.get_fastest_manual_capture_settings(props)
+        cap = cam.do_capture(req, fmt)
+        ts_image0 = cap['metadata']['android.sensor.timestamp']
+
+        # Get the timestamps of motion events.
+        print "Reading sensor measurements"
+        cam.start_sensor_events()
+        time.sleep(0.5)
+        events = cam.get_sensor_events()
+        assert(len(events["gyro"]) > 0)
+        assert(len(events["accel"]) > 0)
+        assert(len(events["mag"]) > 0)
+        ts_gyro0 = events["gyro"][0]["time"]
+        ts_gyro1 = events["gyro"][-1]["time"]
+        ts_accel0 = events["accel"][0]["time"]
+        ts_accel1 = events["accel"][-1]["time"]
+        ts_mag0 = events["mag"][0]["time"]
+        ts_mag1 = events["mag"][-1]["time"]
+
+        # Get the timestamp of another image.
+        cap = cam.do_capture(req, fmt)
+        ts_image1 = cap['metadata']['android.sensor.timestamp']
+
+        print "Image timestamps:", ts_image0, ts_image1
+        print "Gyro timestamps:", ts_gyro0, ts_gyro1
+        print "Accel timestamps:", ts_accel0, ts_accel1
+        print "Mag timestamps:", ts_mag0, ts_mag1
+
+        # The motion timestamps must be between the two image timestamps.
+        assert ts_image0 < min(ts_gyro0, ts_accel0, ts_mag0) < ts_image1
+        assert ts_image0 < max(ts_gyro1, ts_accel1, ts_mag1) < ts_image1
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/README b/apps/CameraITS/tests/scene1/README
new file mode 100755
index 0000000..93543d9
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/README
@@ -0,0 +1,16 @@
+Scene 1 description:
+* Camera on tripod in portrait or landscape orientation
+* Scene mostly filled by grey card, with white background behind grey card
+* Illuminated by simple light source, for example a desk lamp
+* Uniformity of lighting and target positioning need not be precise
+
+This is intended to be a very simple setup that can be recreated on an
+engineer's desk without any large or expensive equipment. The tests for this
+scene in general only look at a patch in the middle of the image (which is
+assumed to be within the bounds of the grey card).
+
+Note that the scene should not be completely uniform; for example, don't have
+the grey card fill 100% of the field of view under a perfectly uniform light
+source, and don't use a diffuser on top of the camera to simulate a grey
+scene.
+
diff --git a/apps/CameraITS/tests/scene1/test_3a.py b/apps/CameraITS/tests/scene1/test_3a.py
new file mode 100644
index 0000000..b53fc73
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_3a.py
@@ -0,0 +1,42 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.caps
+
+def main():
+    """Basic test for bring-up of 3A.
+
+    To pass, 3A must converge. Check that the returned 3A values are legal.
+    """
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.read_3a(props):
+            print "Test skipped"
+            return
+
+        sens, exp, gains, xform, focus = cam.do_3a(get_results=True)
+        print "AE: sensitivity %d, exposure %dms" % (sens, exp/1000000)
+        print "AWB: gains", gains, "transform", xform
+        print "AF: distance", focus
+        assert(sens > 0)
+        assert(exp > 0)
+        assert(len(gains) == 4)
+        assert(len(xform) == 9)
+        assert(focus >= 0)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_ae_precapture_trigger.py b/apps/CameraITS/tests/scene1/test_ae_precapture_trigger.py
new file mode 100644
index 0000000..59b7db1
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_ae_precapture_trigger.py
@@ -0,0 +1,78 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.caps
+import its.objects
+import its.target
+
+def main():
+    """Test the AE state machine when using the precapture trigger.
+    """
+
+    INACTIVE = 0
+    SEARCHING = 1
+    CONVERGED = 2
+    LOCKED = 3
+    FLASHREQUIRED = 4
+    PRECAPTURE = 5
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        _,fmt = its.objects.get_fastest_manual_capture_settings(props)
+
+        # Capture 5 manual requests, with AE disabled, and the last request
+        # has an AE precapture trigger (which should be ignored since AE is
+        # disabled).
+        manual_reqs = []
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        manual_req = its.objects.manual_capture_request(s,e)
+        manual_req['android.control.aeMode'] = 0 # Off
+        manual_reqs += [manual_req]*4
+        precap_req = its.objects.manual_capture_request(s,e)
+        precap_req['android.control.aeMode'] = 0 # Off
+        precap_req['android.control.aePrecaptureTrigger'] = 1 # Start
+        manual_reqs.append(precap_req)
+        caps = cam.do_capture(manual_reqs, fmt)
+        for cap in caps:
+            assert(cap['metadata']['android.control.aeState'] == INACTIVE)
+
+        # Capture an auto request and verify the AE state; no trigger.
+        auto_req = its.objects.auto_capture_request()
+        auto_req['android.control.aeMode'] = 1  # On
+        cap = cam.do_capture(auto_req, fmt)
+        state = cap['metadata']['android.control.aeState']
+        print "AE state after auto request:", state
+        assert(state in [SEARCHING, CONVERGED])
+
+        # Capture with auto request with a precapture trigger.
+        auto_req['android.control.aePrecaptureTrigger'] = 1  # Start
+        cap = cam.do_capture(auto_req, fmt)
+        state = cap['metadata']['android.control.aeState']
+        print "AE state after auto request with precapture trigger:", state
+        assert(state in [SEARCHING, CONVERGED, PRECAPTURE])
+
+        # Capture some more auto requests, and AE should converge.
+        auto_req['android.control.aePrecaptureTrigger'] = 0
+        caps = cam.do_capture([auto_req]*5, fmt)
+        state = caps[-1]['metadata']['android.control.aeState']
+        print "AE state after auto request:", state
+        assert(state == CONVERGED)
+
+if __name__ == '__main__':
+    main()
diff --git a/apps/CameraITS/tests/scene1/test_auto_vs_manual.py b/apps/CameraITS/tests/scene1/test_auto_vs_manual.py
new file mode 100644
index 0000000..a9d5ce4
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_auto_vs_manual.py
@@ -0,0 +1,95 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import os.path
+import math
+
+def main():
+    """Capture auto and manual shots that should look the same.
+
+    Manual shots taken with just manual WB, and also with manual WB+tonemap.
+
+    In all cases, the general color/look of the shots should be the same;
+    however, there can be variations in brightness/contrast due to different
+    "auto" ISP blocks that may be disabled in the manual flows.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if (not its.caps.manual_sensor(props) or
+            not its.caps.manual_post_proc(props)):
+            print "Test skipped"
+            return
+
+        # Converge 3A and get the estimates.
+        sens, exp, gains, xform, focus = cam.do_3a(get_results=True)
+        xform_rat = its.objects.float_to_rational(xform)
+        print "AE sensitivity %d, exposure %dms" % (sens, exp/1000000.0)
+        print "AWB gains", gains
+        print "AWB transform", xform
+        print "AF distance", focus
+
+        # Auto capture.
+        req = its.objects.auto_capture_request()
+        cap_auto = cam.do_capture(req)
+        img_auto = its.image.convert_capture_to_rgb_image(cap_auto)
+        its.image.write_image(img_auto, "%s_auto.jpg" % (NAME))
+        xform_a = its.objects.rational_to_float(
+                cap_auto["metadata"]["android.colorCorrection.transform"])
+        gains_a = cap_auto["metadata"]["android.colorCorrection.gains"]
+        print "Auto gains:", gains_a
+        print "Auto transform:", xform_a
+
+        # Manual capture 1: WB
+        req = its.objects.manual_capture_request(sens, exp)
+        req["android.colorCorrection.transform"] = xform_rat
+        req["android.colorCorrection.gains"] = gains
+        cap_man1 = cam.do_capture(req)
+        img_man1 = its.image.convert_capture_to_rgb_image(cap_man1)
+        its.image.write_image(img_man1, "%s_manual_wb.jpg" % (NAME))
+        xform_m1 = its.objects.rational_to_float(
+                cap_man1["metadata"]["android.colorCorrection.transform"])
+        gains_m1 = cap_man1["metadata"]["android.colorCorrection.gains"]
+        print "Manual wb gains:", gains_m1
+        print "Manual wb transform:", xform_m1
+
+        # Manual capture 2: WB + tonemap
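+        # (The gamma curve is a flattened list of 64 (input, output) control
+        # points approximating a 1/2.2 power response.)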
+        gamma = sum([[i/63.0,math.pow(i/63.0,1/2.2)] for i in xrange(64)],[])
+        req["android.tonemap.mode"] = 0
+        req["android.tonemap.curveRed"] = gamma
+        req["android.tonemap.curveGreen"] = gamma
+        req["android.tonemap.curveBlue"] = gamma
+        cap_man2 = cam.do_capture(req)
+        img_man2 = its.image.convert_capture_to_rgb_image(cap_man2)
+        its.image.write_image(img_man2, "%s_manual_wb_tm.jpg" % (NAME))
+        xform_m2 = its.objects.rational_to_float(
+                cap_man2["metadata"]["android.colorCorrection.transform"])
+        gains_m2 = cap_man2["metadata"]["android.colorCorrection.gains"]
+        print "Manual wb+tm gains:", gains_m2
+        print "Manual wb+tm transform:", xform_m2
+
+        # Check that the WB gains and transform reported in each capture
+        # result match with the original AWB estimate from do_3a.
+        for g,x in [(gains_a,xform_a),(gains_m1,xform_m1),(gains_m2,xform_m2)]:
+            assert(all([abs(xform[i] - x[i]) < 0.05 for i in range(9)]))
+            assert(all([abs(gains[i] - g[i]) < 0.05 for i in range(4)]))
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_black_white.py b/apps/CameraITS/tests/scene1/test_black_white.py
new file mode 100644
index 0000000..e471602
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_black_white.py
@@ -0,0 +1,86 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that the device will produce full black+white images.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    r_means = []
+    g_means = []
+    b_means = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.manual_sensor(props):
+            print "Test skipped"
+            return
+
+        expt_range = props['android.sensor.info.exposureTimeRange']
+        sens_range = props['android.sensor.info.sensitivityRange']
+
+        # Take a shot with very low ISO and exposure time. Expect it to
+        # be black.
+        print "Black shot: sens = %d, exp time = %.4fms" % (
+                sens_range[0], expt_range[0]/1000000.0)
+        req = its.objects.manual_capture_request(sens_range[0], expt_range[0])
+        cap = cam.do_capture(req)
+        img = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(img, "%s_black.jpg" % (NAME))
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        black_means = its.image.compute_image_means(tile)
+        r_means.append(black_means[0])
+        g_means.append(black_means[1])
+        b_means.append(black_means[2])
+        print "Dark pixel means:", black_means
+
+        # Take a shot with very high ISO and exposure time. Expect it to
+        # be white.
+        print "White shot: sens = %d, exp time = %.2fms" % (
+                sens_range[1], expt_range[1]/1000000.0)
+        req = its.objects.manual_capture_request(sens_range[1], expt_range[1])
+        cap = cam.do_capture(req)
+        img = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(img, "%s_white.jpg" % (NAME))
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        white_means = its.image.compute_image_means(tile)
+        r_means.append(white_means[0])
+        g_means.append(white_means[1])
+        b_means.append(white_means[2])
+        print "Bright pixel means:", white_means
+
+        # Draw a plot.
+        pylab.plot([0,1], r_means, 'r')
+        pylab.plot([0,1], g_means, 'g')
+        pylab.plot([0,1], b_means, 'b')
+        pylab.ylim([0,1])
+        matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+        for val in black_means:
+            assert(val < 0.025)
+        for val in white_means:
+            assert(val > 0.975)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_burst_sameness_manual.py b/apps/CameraITS/tests/scene1/test_burst_sameness_manual.py
new file mode 100644
index 0000000..3858c0c
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_burst_sameness_manual.py
@@ -0,0 +1,86 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import numpy
+
+def main():
+    """Take long bursts of images and check that they're all identical.
+
+    Assumes a static scene. Can be used to identify if there are sporadic
+    frames that are processed differently or have artifacts. Uses manual
+    capture settings.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    BURST_LEN = 50
+    BURSTS = 5
+    FRAMES = BURST_LEN * BURSTS
+
+    SPREAD_THRESH = 0.03
+
+    with its.device.ItsSession() as cam:
+
+        # Capture at the smallest resolution.
+        props = cam.get_camera_properties()
+        if not its.caps.manual_sensor(props):
+            print "Test skipped"
+            return
+
+        _, fmt = its.objects.get_fastest_manual_capture_settings(props)
+        e, s = its.target.get_target_exposure_combos(cam)["minSensitivity"]
+        req = its.objects.manual_capture_request(s, e)
+        w,h = fmt["width"], fmt["height"]
+
+        # Capture bursts of YUV shots.
+        # Get the mean values of a center patch for each.
+        # Also build a 4D array, which is an array of all RGB images.
+        r_means = []
+        g_means = []
+        b_means = []
+        imgs = numpy.empty([FRAMES,h,w,3])
+        for j in range(BURSTS):
+            caps = cam.do_capture([req]*BURST_LEN, [fmt])
+            for i,cap in enumerate(caps):
+                n = j*BURST_LEN + i
+                imgs[n] = its.image.convert_capture_to_rgb_image(cap)
+                tile = its.image.get_image_patch(imgs[n], 0.45, 0.45, 0.1, 0.1)
+                means = its.image.compute_image_means(tile)
+                r_means.append(means[0])
+                g_means.append(means[1])
+                b_means.append(means[2])
+
+        # Dump all images.
+        print "Dumping images"
+        for i in range(FRAMES):
+            its.image.write_image(imgs[i], "%s_frame%03d.jpg"%(NAME,i))
+
+        # The mean image.
+        img_mean = imgs.mean(0)
+        its.image.write_image(img_mean, "%s_mean.jpg"%(NAME))
+
+        # Pass/fail based on center patch similarity.
+        for means in [r_means, g_means, b_means]:
+            spread = max(means) - min(means)
+            print "Patch mean spread", spread
+            assert(spread < SPREAD_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_capture_result.py b/apps/CameraITS/tests/scene1/test_capture_result.py
new file mode 100644
index 0000000..304e811
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_capture_result.py
@@ -0,0 +1,214 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import os.path
+import numpy
+import matplotlib.pyplot
+
+# Required for 3d plot to work
+import mpl_toolkits.mplot3d
+
+def main():
+    """Test that valid data comes back in CaptureResult objects.
+    """
+    global NAME, auto_req, manual_req, w_map, h_map
+    global manual_tonemap, manual_transform, manual_gains, manual_region
+    global manual_exp_time, manual_sensitivity, manual_gains_ok
+
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if (not its.caps.manual_sensor(props) or
+            not its.caps.manual_post_proc(props)):
+            print "Test skipped"
+            return
+
+        manual_tonemap = [0,0, 1,1] # Linear
+        manual_transform = its.objects.int_to_rational([1,2,3, 4,5,6, 7,8,9])
+        manual_gains = [1,2,3,4]
+        manual_region = [{"x":8,"y":8,"width":128,"height":128,"weight":1}]
+        manual_exp_time = min(props['android.sensor.info.exposureTimeRange'])
+        manual_sensitivity = min(props['android.sensor.info.sensitivityRange'])
+
+        # The camera HAL may not support different gains for two G channels.
+        manual_gains_ok = [[1,2,3,4],[1,2,2,4],[1,3,3,4]]
+
+        auto_req = its.objects.auto_capture_request()
+        auto_req["android.statistics.lensShadingMapMode"] = 1
+
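+        # Manual request: 3A modes off (0), with explicit sensor, color
+        # correction, and tonemap settings.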
+        manual_req = {
+            "android.control.mode": 0,
+            "android.control.aeMode": 0,
+            "android.control.awbMode": 0,
+            "android.control.afMode": 0,
+            "android.sensor.frameDuration": 0,
+            "android.sensor.sensitivity": manual_sensitivity,
+            "android.sensor.exposureTime": manual_exp_time,
+            "android.colorCorrection.mode": 0,
+            "android.colorCorrection.transform": manual_transform,
+            "android.colorCorrection.gains": manual_gains,
+            "android.tonemap.mode": 0,
+            "android.tonemap.curveRed": manual_tonemap,
+            "android.tonemap.curveGreen": manual_tonemap,
+            "android.tonemap.curveBlue": manual_tonemap,
+            "android.control.aeRegions": manual_region,
+            "android.control.afRegions": manual_region,
+            "android.control.awbRegions": manual_region,
+            "android.statistics.lensShadingMapMode":1
+            }
+
+        w_map = props["android.lens.info.shadingMapSize"]["width"]
+        h_map = props["android.lens.info.shadingMapSize"]["height"]
+
+        print "Testing auto capture results"
+        lsc_map_auto = test_auto(cam, w_map, h_map)
+        print "Testing manual capture results"
+        test_manual(cam, w_map, h_map, lsc_map_auto)
+        print "Testing auto capture results again"
+        test_auto(cam, w_map, h_map)
+
+# A very loose definition of two floats being close to each other;
+# different interpolation and rounding may be used to compute the two
+# values, and this test only looks for something obviously broken,
+# not for a perfect match.
+def is_close_float(n1, n2):
+    return abs(n1 - n2) < 0.05
+
+def is_close_rational(n1, n2):
+    return is_close_float(its.objects.rational_to_float(n1),
+                          its.objects.rational_to_float(n2))
+
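+# Plot each of the four color channels of the lens shading map as a 3D
+# wireframe; the map stores four interleaved gain values per grid cell.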
+def draw_lsc_plot(w_map, h_map, lsc_map, name):
+    for ch in range(4):
+        fig = matplotlib.pyplot.figure()
+        ax = fig.gca(projection='3d')
+        xs = numpy.array([range(w_map)] * h_map).reshape(h_map, w_map)
+        ys = numpy.array([[i]*w_map for i in range(h_map)]).reshape(
+                h_map, w_map)
+        zs = numpy.array(lsc_map[ch::4]).reshape(h_map, w_map)
+        ax.plot_wireframe(xs, ys, zs)
+        matplotlib.pyplot.savefig("%s_plot_lsc_%s_ch%d.png"%(NAME,name,ch))
+
+def test_auto(cam, w_map, h_map):
+    # Get 3A lock first, so the auto values in the capture result are
+    # populated properly.
+    rect = [[0,0,1,1,1]] # Full-image region: normalized (x,y,w,h) + weight.
+    cam.do_3a(rect, rect, rect, do_af=False)
+
+    cap = cam.do_capture(auto_req)
+    cap_res = cap["metadata"]
+
+    gains = cap_res["android.colorCorrection.gains"]
+    transform = cap_res["android.colorCorrection.transform"]
+    exp_time = cap_res['android.sensor.exposureTime']
+    lsc_map = cap_res["android.statistics.lensShadingMap"]
+    ctrl_mode = cap_res["android.control.mode"]
+
+    print "Control mode:", ctrl_mode
+    print "Gains:", gains
+    print "Transform:", [its.objects.rational_to_float(t)
+                         for t in transform]
+    print "AE region:", cap_res['android.control.aeRegions']
+    print "AF region:", cap_res['android.control.afRegions']
+    print "AWB region:", cap_res['android.control.awbRegions']
+    print "LSC map:", w_map, h_map, lsc_map[:8]
+
+    assert(ctrl_mode == 1)
+
+    # Color correction gain and transform must be valid.
+    assert(len(gains) == 4)
+    assert(len(transform) == 9)
+    assert(all([g > 0 for g in gains]))
+    assert(all([t["denominator"] != 0 for t in transform]))
+
+    # Color correction should not match the manual settings.
+    assert(any([not is_close_float(gains[i], manual_gains[i])
+                for i in xrange(4)]))
+    assert(any([not is_close_rational(transform[i], manual_transform[i])
+                for i in xrange(9)]))
+
+    # Exposure time must be valid.
+    assert(exp_time > 0)
+
+    # Lens shading map must be valid: correct size, with all gains >= 1.
+    assert(w_map > 0 and h_map > 0 and w_map * h_map * 4 == len(lsc_map))
+    assert(all([m >= 1 for m in lsc_map]))
+
+    draw_lsc_plot(w_map, h_map, lsc_map, "auto")
+
+    return lsc_map
+
+def test_manual(cam, w_map, h_map, lsc_map_auto):
+    cap = cam.do_capture(manual_req)
+    cap_res = cap["metadata"]
+
+    gains = cap_res["android.colorCorrection.gains"]
+    transform = cap_res["android.colorCorrection.transform"]
+    curves = [cap_res["android.tonemap.curveRed"],
+              cap_res["android.tonemap.curveGreen"],
+              cap_res["android.tonemap.curveBlue"]]
+    exp_time = cap_res['android.sensor.exposureTime']
+    lsc_map = cap_res["android.statistics.lensShadingMap"]
+    ctrl_mode = cap_res["android.control.mode"]
+
+    print "Control mode:", ctrl_mode
+    print "Gains:", gains
+    print "Transform:", [its.objects.rational_to_float(t)
+                         for t in transform]
+    print "Tonemap:", curves[0][1::16]
+    print "AE region:", cap_res['android.control.aeRegions']
+    print "AF region:", cap_res['android.control.afRegions']
+    print "AWB region:", cap_res['android.control.awbRegions']
+    print "LSC map:", w_map, h_map, lsc_map[:8]
+
+    assert(ctrl_mode == 0)
+
+    # Color correction gain and transform must be valid.
+    # Color correction gains and transform should be the same size and
+    # values as the manually set values.
+    assert(len(gains) == 4)
+    assert(len(transform) == 9)
+    assert( all([is_close_float(gains[i], manual_gains_ok[0][i])
+                 for i in xrange(4)]) or
+            all([is_close_float(gains[i], manual_gains_ok[1][i])
+                 for i in xrange(4)]) or
+            all([is_close_float(gains[i], manual_gains_ok[2][i])
+                 for i in xrange(4)]))
+    assert(all([is_close_rational(transform[i], manual_transform[i])
+                for i in xrange(9)]))
+
+    # Tonemap must be valid.
+    # The returned tonemap must be linear.
+    for c in curves:
+        assert(len(c) > 0)
+        assert(all([is_close_float(c[i], c[i+1])
+                    for i in xrange(0,len(c),2)]))
+
+    # Exposure time must be close to the requested exposure time.
+    assert(is_close_float(exp_time/1000000.0, manual_exp_time/1000000.0))
+
+    # Lens shading map must be valid: correct size, with all gains >= 1.
+    assert(w_map > 0 and h_map > 0 and w_map * h_map * 4 == len(lsc_map))
+    assert(all([m >= 1 for m in lsc_map]))
+
+    draw_lsc_plot(w_map, h_map, lsc_map, "manual")
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_crop_region_raw.py b/apps/CameraITS/tests/scene1/test_crop_region_raw.py
new file mode 100644
index 0000000..94c8e2b
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_crop_region_raw.py
@@ -0,0 +1,116 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import numpy
+import os.path
+
+def main():
+    """Test that raw streams are not croppable.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    DIFF_THRESH = 0.05
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if (not its.caps.compute_target_exposure(props) or
+            not its.caps.raw16(props)):
+            print "Test skipped"
+            return
+
+        a = props['android.sensor.info.activeArraySize']
+        ax, ay = a["left"], a["top"]
+        aw, ah = a["right"] - a["left"], a["bottom"] - a["top"]
+        print "Active sensor region: (%d,%d %dx%d)" % (ax, ay, aw, ah)
+
+        # Capture without a crop region.
+        # Use a manual request with a linear tonemap so that the YUV and RAW
+        # should look the same (once converted by the its.image module).
+        e, s = its.target.get_target_exposure_combos(cam)["minSensitivity"]
+        req = its.objects.manual_capture_request(s,e, True)
+        cap1_raw, cap1_yuv = cam.do_capture(req, cam.CAP_RAW_YUV)
+
+        # Capture with a center crop region.
+        req["android.scaler.cropRegion"] = {
+                "top": ay + ah/3,
+                "left": ax + aw/3,
+                "right": ax + 2*aw/3,
+                "bottom": ay + 2*ah/3}
+        cap2_raw, cap2_yuv = cam.do_capture(req, cam.CAP_RAW_YUV)
+
+        reported_crops = []
+        imgs = {}
+        for s,cap in [("yuv_full",cap1_yuv), ("raw_full",cap1_raw),
+                ("yuv_crop",cap2_yuv), ("raw_crop",cap2_raw)]:
+            img = its.image.convert_capture_to_rgb_image(cap, props=props)
+            its.image.write_image(img, "%s_%s.jpg" % (NAME, s))
+            r = cap["metadata"]["android.scaler.cropRegion"]
+            x, y = a["left"], a["top"]
+            w, h = a["right"] - a["left"], a["bottom"] - a["top"]
+            reported_crops.append((x,y,w,h))
+            imgs[s] = img
+            print "Crop on %s: (%d,%d %dx%d)" % (s, x,y,w,h)
+
+        # The metadata should report uncropped for all shots (since there is
+        # at least 1 uncropped stream in each case).
+        for (x,y,w,h) in reported_crops:
+            assert((ax,ay,aw,ah) == (x,y,w,h))
+
+        # Also check the image content; 3 of the 4 shots should match.
+        # Note that all the shots are RGB below; the variable names correspond
+        # to what was captured.
+        # Average the images down 4x4 -> 1 prior to comparison to smooth out
+        # noise.
+        # Shrink the YUV images an additional 2x2 -> 1 to account for the size
+        # reduction that the raw images went through in the RGB conversion.
+        imgs2 = {}
+        for s,img in imgs.iteritems():
+            h,w,ch = img.shape
+            m = 4
+            if s in ["yuv_full", "yuv_crop"]:
+                m = 8
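+            # Box-filter downscale: average over m x m pixel blocks.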
+            img = img.reshape(h/m,m,w/m,m,3).mean(3).mean(1).reshape(h/m,w/m,3)
+            imgs2[s] = img
+            print s, img.shape
+
+        # Strip any border pixels from the raw shots (since the raw images may
+        # be larger than the YUV images). Assume a symmetric padded border.
+        xpad = (imgs2["raw_full"].shape[1] - imgs2["yuv_full"].shape[1]) / 2
+        ypad = (imgs2["raw_full"].shape[0] - imgs2["yuv_full"].shape[0]) / 2
+        wyuv = imgs2["yuv_full"].shape[1]
+        hyuv = imgs2["yuv_full"].shape[0]
+        imgs2["raw_full"]=imgs2["raw_full"][ypad:ypad+hyuv:,xpad:xpad+wyuv:,::]
+        imgs2["raw_crop"]=imgs2["raw_crop"][ypad:ypad+hyuv:,xpad:xpad+wyuv:,::]
+        print "Stripping padding before comparison:", xpad, ypad
+
+        for s,img in imgs2.iteritems():
+            its.image.write_image(img, "%s_comp_%s.jpg" % (NAME, s))
+
+        # Compute image diffs.
+        diff_yuv = numpy.fabs((imgs2["yuv_full"] - imgs2["yuv_crop"])).mean()
+        diff_raw = numpy.fabs((imgs2["raw_full"] - imgs2["raw_crop"])).mean()
+        print "YUV diff (crop vs. non-crop):", diff_yuv
+        print "RAW diff (crop vs. non-crop):", diff_raw
+
+        assert(diff_yuv > DIFF_THRESH)
+        assert(diff_raw < DIFF_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_crop_regions.py b/apps/CameraITS/tests/scene1/test_crop_regions.py
new file mode 100644
index 0000000..da0cd0a
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_crop_regions.py
@@ -0,0 +1,106 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import numpy
+
+def main():
+    """Test that crop regions work.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # A list of 5 regions, specified in normalized (x,y,w,h) coords.
+    # The regions correspond to: TL, TR, BL, BR, CENT
+    REGIONS = [(0.0, 0.0, 0.5, 0.5),
+               (0.5, 0.0, 0.5, 0.5),
+               (0.0, 0.5, 0.5, 0.5),
+               (0.5, 0.5, 0.5, 0.5),
+               (0.25, 0.25, 0.5, 0.5)]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        a = props['android.sensor.info.activeArraySize']
+        ax, ay = a["left"], a["top"]
+        aw, ah = a["right"] - a["left"], a["bottom"] - a["top"]
+        e, s = its.target.get_target_exposure_combos(cam)["minSensitivity"]
+        print "Active sensor region (%d,%d %dx%d)" % (ax, ay, aw, ah)
+
+        # Uses a 2x digital zoom.
+        assert(props['android.scaler.availableMaxDigitalZoom'] >= 2)
+
+        # Capture a full frame.
+        req = its.objects.manual_capture_request(s,e)
+        cap_full = cam.do_capture(req)
+        img_full = its.image.convert_capture_to_rgb_image(cap_full)
+        its.image.write_image(img_full, "%s_full.jpg" % (NAME))
+        wfull, hfull = cap_full["width"], cap_full["height"]
+
+        # Capture a burst of crop region frames.
+        # Note that each region is 1/2x1/2 of the full frame, and is digitally
+        # zoomed into the full size output image, so must be downscaled (below)
+        # by 2x when compared to a tile of the full image.
+        reqs = []
+        for x,y,w,h in REGIONS:
+            req = its.objects.manual_capture_request(s,e)
+            req["android.scaler.cropRegion"] = {
+                    "top": int(ah * y),
+                    "left": int(aw * x),
+                    "right": int(aw * (x + w)),
+                    "bottom": int(ah * (y + h))}
+            reqs.append(req)
+        caps_regions = cam.do_capture(reqs)
+        match_failed = False
+        for i,cap in enumerate(caps_regions):
+            a = cap["metadata"]["android.scaler.cropRegion"]
+            ax, ay = a["left"], a["top"]
+            aw, ah = a["right"] - a["left"], a["bottom"] - a["top"]
+
+            # Match this crop image against each of the five regions of
+            # the full image, to find the best match (which should be
+            # the region that corresponds to this crop image).
+            img_crop = its.image.convert_capture_to_rgb_image(cap)
+            img_crop = its.image.downscale_image(img_crop, 2)
+            its.image.write_image(img_crop, "%s_crop%d.jpg" % (NAME, i))
+            min_diff = None
+            min_diff_region = None
+            for j,(x,y,w,h) in enumerate(REGIONS):
+                tile_full = its.image.get_image_patch(img_full, x,y,w,h)
+                wtest = min(tile_full.shape[1], aw)
+                htest = min(tile_full.shape[0], ah)
+                tile_full = tile_full[0:htest, 0:wtest, :]
+                tile_crop = img_crop[0:htest, 0:wtest, :]
+                its.image.write_image(tile_full, "%s_fullregion%d.jpg"%(NAME,j))
+                diff = numpy.fabs(tile_full - tile_crop).mean()
+                if min_diff is None or diff < min_diff:
+                    min_diff = diff
+                    min_diff_region = j
+            if i != min_diff_region:
+                match_failed = True
+            print "Crop image %d (%d,%d %dx%d) best match with region %d"%(
+                    i, ax, ay, aw, ah, min_diff_region)
+
+        assert(not match_failed)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_exposure.py b/apps/CameraITS/tests/scene1/test_exposure.py
new file mode 100644
index 0000000..8676358
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_exposure.py
@@ -0,0 +1,92 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import pylab
+import numpy
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that a constant exposure is seen as ISO and exposure time vary.
+
+    Take a series of shots that have ISO and exposure time chosen to balance
+    each other; result should be the same brightness, but over the sequence
+    the images should get noisier.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_OUTLIER_DIFF = 0.1
+    THRESHOLD_MIN_LEVEL = 0.1
+    THRESHOLD_MAX_LEVEL = 0.9
+    THRESHOLD_MAX_ABS_GRAD = 0.001
+
+    mults = []
+    r_means = []
+    g_means = []
+    b_means = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        e,s = its.target.get_target_exposure_combos(cam)["minSensitivity"]
+        expt_range = props['android.sensor.info.exposureTimeRange']
+        sens_range = props['android.sensor.info.sensitivityRange']
+
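+        # Step sensitivity up and exposure time down by the same factor m,
+        # holding the product (total exposure) constant.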
+        m = 1
+        while s*m < sens_range[1] and e/m > expt_range[0]:
+            mults.append(m)
+            req = its.objects.manual_capture_request(s*m, e/m)
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_mult=%02d.jpg" % (NAME, m))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile)
+            r_means.append(rgb_means[0])
+            g_means.append(rgb_means[1])
+            b_means.append(rgb_means[2])
+            m = m + 4
+
+    # Draw a plot.
+    pylab.plot(mults, r_means, 'r')
+    pylab.plot(mults, g_means, 'g')
+    pylab.plot(mults, b_means, 'b')
+    pylab.ylim([0,1])
+    matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+    # Check for linearity. For each R,G,B channel, fit a line y=mx+b, and
+    # assert that the gradient is close to 0 (flat) and that there are no
+    # large outliers. Also ensure that the images aren't clamped to 0 or 1
+    # (which would make them look like flat lines).
+    for chan in xrange(3):
+        values = [r_means, g_means, b_means][chan]
+        m, b = numpy.polyfit(mults, values, 1).tolist()
+        print "Channel %d line fit (y = mx+b): m = %f, b = %f" % (chan, m, b)
+        assert(abs(m) < THRESHOLD_MAX_ABS_GRAD)
+        assert(b > THRESHOLD_MIN_LEVEL and b < THRESHOLD_MAX_LEVEL)
+        for v in values:
+            assert(v > THRESHOLD_MIN_LEVEL and v < THRESHOLD_MAX_LEVEL)
+            assert(abs(v - b) < THRESHOLD_MAX_OUTLIER_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_format_combos.py b/apps/CameraITS/tests/scene1/test_format_combos.py
new file mode 100644
index 0000000..a021102
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_format_combos.py
@@ -0,0 +1,126 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.error
+import its.target
+import sys
+import os
+import os.path
+
+# Change this to True to have the test break at the first failure.
+stop_at_first_failure = False
+
+def main():
+    """Test different combinations of output formats.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+
+        props = cam.get_camera_properties()
+        if (not its.caps.compute_target_exposure(props) or
+            not its.caps.raw16(props)):
+            print "Test skipped"
+            return
+
+        successes = []
+        failures = []
+
+        # Two different requests: auto, and manual.
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        req_aut = its.objects.auto_capture_request()
+        req_man = its.objects.manual_capture_request(s, e)
+        reqs = [req_aut, # R0
+                req_man] # R1
+
+        # 10 different combos of output formats; some are single surfaces, and
+        # some are multiple surfaces.
+        wyuv,hyuv = its.objects.get_available_output_sizes("yuv", props)[-1]
+        wjpg,hjpg = its.objects.get_available_output_sizes("jpg", props)[-1]
+        fmt_yuv_prev = {"format":"yuv", "width":wyuv, "height":hyuv}
+        fmt_yuv_full = {"format":"yuv"}
+        fmt_jpg_prev = {"format":"jpeg","width":wjpg, "height":hjpg}
+        fmt_jpg_full = {"format":"jpeg"}
+        fmt_raw_full = {"format":"raw"}
+        fmt_combos = [
+                [fmt_yuv_prev],                             # F0
+                [fmt_yuv_full],                             # F1
+                [fmt_jpg_prev],                             # F2
+                [fmt_jpg_full],                             # F3
+                [fmt_raw_full],                             # F4
+                [fmt_yuv_prev, fmt_jpg_prev],               # F5
+                [fmt_yuv_prev, fmt_jpg_full],               # F6
+                [fmt_yuv_prev, fmt_raw_full],               # F7
+                [fmt_yuv_prev, fmt_jpg_prev, fmt_raw_full], # F8
+                [fmt_yuv_prev, fmt_jpg_full, fmt_raw_full]] # F9
+
+        # Two different burst lengths: single frame, and 3 frames.
+        burst_lens = [1, # B0
+                      3] # B1
+
+        # There are 2x10x2=40 different combinations. Run through them all.
+        n = 0
+        for r,req in enumerate(reqs):
+            for f,fmt_combo in enumerate(fmt_combos):
+                for b,burst_len in enumerate(burst_lens):
+                    try:
+                        caps = cam.do_capture([req]*burst_len, fmt_combo)
+                        successes.append((n,r,f,b))
+                        print "==> Success[%02d]: R%d F%d B%d" % (n,r,f,b)
+
+                        # Dump the captures out to jpegs.
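+                        # do_capture may return a single capture, a flat
+                        # list, or a list of lists (burst with multiple
+                        # surfaces); normalize to a flat list of captures.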
+                        if not isinstance(caps, list):
+                            caps = [caps]
+                        elif isinstance(caps[0], list):
+                            caps = sum(caps, [])
+                        for c,cap in enumerate(caps):
+                            img = its.image.convert_capture_to_rgb_image(cap,
+                                    props=props)
+                            its.image.write_image(img,
+                                    "%s_n%02d_r%d_f%d_b%d_c%d.jpg"%(NAME,n,r,f,b,c))
+
+                    except Exception as err:
+                        print err
+                        print "==> Failure[%02d]: R%d F%d B%d" % (n,r,f,b)
+                        failures.append((n,r,f,b))
+                        if stop_at_first_failure:
+                            sys.exit(0)
+                    n += 1
+
+        num_fail = len(failures)
+        num_success = len(successes)
+        num_total = len(reqs)*len(fmt_combos)*len(burst_lens)
+        num_not_run = num_total - num_success - num_fail
+
+        print "\nFailures (%d / %d):" % (num_fail, num_total)
+        for (n,r,f,b) in failures:
+            print "  %02d: R%d F%d B%d" % (n,r,f,b)
+        print "\nSuccesses (%d / %d):" % (num_success, num_total)
+        for (n,r,f,b) in successes:
+            print "  %02d: R%d F%d B%d" % (n,r,f,b)
+        if num_not_run > 0:
+            print "\nNumber of tests not run: %d / %d" % (num_not_run, num_total)
+        print ""
+
+        # The test passes if all the combinations successfully capture.
+        assert(num_fail == 0)
+        assert(num_success == num_total)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_jpeg.py b/apps/CameraITS/tests/scene1/test_jpeg.py
new file mode 100644
index 0000000..bc2d64e
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_jpeg.py
@@ -0,0 +1,64 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import math
+
+def main():
+    """Test that converted YUV images and device JPEG images look the same.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_RMS_DIFF = 0.01
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        req = its.objects.manual_capture_request(s, e, True)
+
+        # YUV
+        size = its.objects.get_available_output_sizes("yuv", props)[0]
+        out_surface = {"width":size[0], "height":size[1], "format":"yuv"}
+        cap = cam.do_capture(req, out_surface)
+        img = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(img, "%s_fmt=yuv.jpg" % (NAME))
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        rgb0 = its.image.compute_image_means(tile)
+
+        # JPEG
+        size = its.objects.get_available_output_sizes("jpg", props)[0]
+        out_surface = {"width":size[0], "height":size[1], "format":"jpg"}
+        cap = cam.do_capture(req, out_surface)
+        img = its.image.decompress_jpeg_to_rgb_image(cap["data"])
+        its.image.write_image(img, "%s_fmt=jpg.jpg" % (NAME))
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        rgb1 = its.image.compute_image_means(tile)
+
+        rms_diff = math.sqrt(
+                sum([pow(rgb0[i] - rgb1[i], 2.0) for i in range(3)]) / 3.0)
+        print "RMS difference:", rms_diff
+        assert(rms_diff < THRESHOLD_MAX_RMS_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_latching.py b/apps/CameraITS/tests/scene1/test_latching.py
new file mode 100644
index 0000000..bef41ac
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_latching.py
@@ -0,0 +1,91 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that settings latch on the right frame.
+
+    Takes a bunch of shots using back-to-back requests, varying the capture
+    request parameters between shots. Checks that the images that come back
+    have the expected properties.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.full(props):
+            print "Test skipped"
+            return
+
+        _,fmt = its.objects.get_fastest_manual_capture_settings(props)
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        e /= 2.0
+
+        r_means = []
+        g_means = []
+        b_means = []
+
+        reqs = [
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s*2,e,   True),
+            its.objects.manual_capture_request(s*2,e,   True),
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s,  e*2, True),
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s*2,e,   True),
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s,  e*2, True),
+            its.objects.manual_capture_request(s,  e,   True),
+            its.objects.manual_capture_request(s,  e*2, True),
+            its.objects.manual_capture_request(s,  e*2, True),
+            ]
+
+        caps = cam.do_capture(reqs, fmt)
+        for i,cap in enumerate(caps):
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_i=%02d.jpg" % (NAME, i))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile)
+            r_means.append(rgb_means[0])
+            g_means.append(rgb_means[1])
+            b_means.append(rgb_means[2])
+
+        # Draw a plot.
+        idxs = range(len(r_means))
+        pylab.plot(idxs, r_means, 'r')
+        pylab.plot(idxs, g_means, 'g')
+        pylab.plot(idxs, b_means, 'b')
+        pylab.ylim([0,1])
+        matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
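+        # Classify each frame as brighter or darker than the average; the
+        # expected pattern has True wherever the request above doubled the
+        # sensitivity or the exposure time.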
+        g_avg = sum(g_means) / len(g_means)
+        g_ratios = [g / g_avg for g in g_means]
+        g_hilo = [g>1.0 for g in g_ratios]
+        assert(g_hilo == [False, False, True, True, False, False, True,
+                          False, True, False, True, False, True, True])
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_linearity.py b/apps/CameraITS/tests/scene1/test_linearity.py
new file mode 100644
index 0000000..fed0324
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_linearity.py
@@ -0,0 +1,99 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import numpy
+import math
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that device processing can be inverted to linear pixels.
+
+    Captures a sequence of shots with the device pointed at a uniform
+    target. Attempts to invert all the ISP processing to get back to
+    linear R,G,B pixel data.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    RESIDUAL_THRESHOLD = 0.00005
+
+    # The HAL3.2 spec requires that curves with up to 64 control points
+    # be supported.
+    L = 64
+    LM1 = float(L-1)
+
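+    # Tonemap curves are flattened lists of (x,y) control points. Build a
+    # pow(x, 1/2.2) encoding curve and its pow(x, 2.2) inverse for undoing
+    # the encoding after capture.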
+    gamma_lut = numpy.array(
+            sum([[i/LM1, math.pow(i/LM1, 1/2.2)] for i in xrange(L)], []))
+    inv_gamma_lut = numpy.array(
+            sum([[i/LM1, math.pow(i/LM1, 2.2)] for i in xrange(L)], []))
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        e,s = its.target.get_target_exposure_combos(cam)["midSensitivity"]
+        s /= 2
+        sens_range = props['android.sensor.info.sensitivityRange']
+        sensitivities = [s*1.0/3.0, s*2.0/3.0, s, s*4.0/3.0, s*5.0/3.0]
+        sensitivities = [s for s in sensitivities
+                if s > sens_range[0] and s < sens_range[1]]
+
+        req = its.objects.manual_capture_request(0, e)
+        req["android.blackLevel.lock"] = True
+        req["android.tonemap.mode"] = 0
+        req["android.tonemap.curveRed"] = gamma_lut.tolist()
+        req["android.tonemap.curveGreen"] = gamma_lut.tolist()
+        req["android.tonemap.curveBlue"] = gamma_lut.tolist()
+
+        r_means = []
+        g_means = []
+        b_means = []
+
+        for sens in sensitivities:
+            req["android.sensor.sensitivity"] = sens
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(
+                    img, "%s_sens=%04d.jpg" % (NAME, sens))
+            img = its.image.apply_lut_to_image(img, inv_gamma_lut[1::2] * LM1)
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile)
+            r_means.append(rgb_means[0])
+            g_means.append(rgb_means[1])
+            b_means.append(rgb_means[2])
+
+        pylab.plot(sensitivities, r_means, 'r')
+        pylab.plot(sensitivities, g_means, 'g')
+        pylab.plot(sensitivities, b_means, 'b')
+        pylab.ylim([0,1])
+        matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+        # Check that each plot is actually linear.
+        for means in [r_means, g_means, b_means]:
+            line,residuals,_,_,_ = numpy.polyfit(range(len(means)),means,1,full=True)
+            print "Line: m=%f, b=%f, resid=%f"%(line[0], line[1], residuals[0])
+            assert(residuals[0] < RESIDUAL_THRESHOLD)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_locked_burst.py b/apps/CameraITS/tests/scene1/test_locked_burst.py
new file mode 100644
index 0000000..83cfa25
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_locked_burst.py
@@ -0,0 +1,89 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.device
+import its.objects
+import os.path
+import numpy
+import pylab
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test 3A lock + YUV burst (using auto settings).
+
+    This test is designed to pass even on limited devices that don't have
+    MANUAL_SENSOR or PER_FRAME_CONTROLS. (They must, however, be able to
+    capture bursts at full resolution and full frame rate to pass.)
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    BURST_LEN = 10
+    SPREAD_THRESH = 0.005
+    FPS_MAX_DIFF = 2.0
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+
+        # Converge 3A prior to capture.
+        cam.do_3a(do_af=False, lock_ae=True, lock_awb=True)
+
+        # After 3A has converged, lock AE+AWB for the duration of the test.
+        req = its.objects.auto_capture_request()
+        req["android.control.awbLock"] = True
+        req["android.control.aeLock"] = True
+
+        # Capture bursts of YUV shots.
+        # Get the mean values of a center patch for each.
+        r_means = []
+        g_means = []
+        b_means = []
+        caps = cam.do_capture([req]*BURST_LEN)
+        for i,cap in enumerate(caps):
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_frame%d.jpg"%(NAME,i))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            means = its.image.compute_image_means(tile)
+            r_means.append(means[0])
+            g_means.append(means[1])
+            b_means.append(means[2])
+
+        # Pass/fail based on center patch similarity.
+        for means in [r_means, g_means, b_means]:
+            spread = max(means) - min(means)
+            print "Patch mean spread", spread
+            assert(spread < SPREAD_THRESH)
+
+        # Also ensure that the burst was at full frame rate.
+        fmt_code = 0x23 # ImageFormat.YUV_420_888
+        configs = props['android.scaler.streamConfigurationMap']\
+                       ['availableStreamConfigurations']
+        min_duration = None
+        for cfg in configs:
+            if cfg['format'] == fmt_code and cfg['input'] == False and \
+                    cfg['width'] == caps[0]["width"] and \
+                    cfg['height'] == caps[0]["height"]:
+                min_duration = cfg["minFrameDuration"]
+        assert(min_duration is not None)
+        tstamps = [c['metadata']['android.sensor.timestamp'] for c in caps]
+        deltas = [tstamps[i]-tstamps[i-1] for i in range(1,len(tstamps))]
+        actual_fps = 1.0 / (max(deltas) / 1000000000.0)
+        max_fps = 1.0 / (min_duration / 1000000000.0)
+        print "FPS measured %.1f, max advertized %.1f" %(actual_fps, max_fps)
+        assert(max_fps - FPS_MAX_DIFF <= actual_fps <= max_fps)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_param_color_correction.py b/apps/CameraITS/tests/scene1/test_param_color_correction.py
new file mode 100644
index 0000000..82f2342
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_param_color_correction.py
@@ -0,0 +1,105 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that the android.colorCorrection.* params are applied when set.
+
+    Takes shots with different transform and gains values, and tests that
+    they look correspondingly different. The transform and gains are chosen
+    to make the output go redder or bluer.
+
+    Uses a linear tonemap.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_DIFF = 0.1
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        # Baseline request
+        e, s = its.target.get_target_exposure_combos(cam)["midSensitivity"]
+        req = its.objects.manual_capture_request(s, e, True)
+        req["android.colorCorrection.mode"] = 0
+
+        # Transforms:
+        # 1. Identity
+        # 2. Identity
+        # 3. Boost blue
+        transforms = [its.objects.int_to_rational([1,0,0, 0,1,0, 0,0,1]),
+                      its.objects.int_to_rational([1,0,0, 0,1,0, 0,0,1]),
+                      its.objects.int_to_rational([1,0,0, 0,1,0, 0,0,2])]
+
+        # Gains:
+        # 1. Unit
+        # 2. Boost red
+        # 3. Unit
+        gains = [[1,1,1,1], [2,1,1,1], [1,1,1,1]]
+
+        r_means = []
+        g_means = []
+        b_means = []
+
+        # Capture requests:
+        # 1. With unit gains, and identity transform.
+        # 2. With a higher red gain, and identity transform.
+        # 3. With unit gains, and a transform that boosts blue.
+        for i in range(len(transforms)):
+            req["android.colorCorrection.transform"] = transforms[i]
+            req["android.colorCorrection.gains"] = gains[i]
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_req=%d.jpg" % (NAME, i))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile)
+            r_means.append(rgb_means[0])
+            g_means.append(rgb_means[1])
+            b_means.append(rgb_means[2])
+            ratios = [rgb_means[0] / rgb_means[1], rgb_means[2] / rgb_means[1]]
+            print "Means = ", rgb_means, "   Ratios =", ratios
+
+        # Draw a plot.
+        domain = range(len(transforms))
+        pylab.plot(domain, r_means, 'r')
+        pylab.plot(domain, g_means, 'g')
+        pylab.plot(domain, b_means, 'b')
+        pylab.ylim([0,1])
+        matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+        # Expect G0 == G1 == G2, R0 == 0.5*R1 == R2, B0 == B1 == 0.5*B2
+        # Also need to ensure that the image is not clamped to white/black.
+        assert(all(g_means[i] > 0.2 and g_means[i] < 0.8 for i in xrange(3)))
+        assert(abs(g_means[1] - g_means[0]) < THRESHOLD_MAX_DIFF)
+        assert(abs(g_means[2] - g_means[1]) < THRESHOLD_MAX_DIFF)
+        assert(abs(r_means[2] - r_means[0]) < THRESHOLD_MAX_DIFF)
+        assert(abs(r_means[1] - 2.0 * r_means[0]) < THRESHOLD_MAX_DIFF)
+        assert(abs(b_means[1] - b_means[0]) < THRESHOLD_MAX_DIFF)
+        assert(abs(b_means[2] - 2.0 * b_means[0]) < THRESHOLD_MAX_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_param_exposure_time.py b/apps/CameraITS/tests/scene1/test_param_exposure_time.py
new file mode 100644
index 0000000..390fd3c
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_param_exposure_time.py
@@ -0,0 +1,69 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that the android.sensor.exposureTime parameter is applied.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    exp_times = []
+    r_means = []
+    g_means = []
+    b_means = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        e,s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        for i,e_mult in enumerate([0.8, 0.9, 1.0, 1.1, 1.2]):
+            req = its.objects.manual_capture_request(s, e * e_mult, True)
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(
+                    img, "%s_frame%d.jpg" % (NAME, i))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile)
+            exp_times.append(e * e_mult)
+            r_means.append(rgb_means[0])
+            g_means.append(rgb_means[1])
+            b_means.append(rgb_means[2])
+
+    # Draw a plot.
+    pylab.plot(exp_times, r_means, 'r')
+    pylab.plot(exp_times, g_means, 'g')
+    pylab.plot(exp_times, b_means, 'b')
+    pylab.ylim([0,1])
+    matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+    # Test for pass/fail: check that each shot is brighter than the previous.
+    for means in [r_means, g_means, b_means]:
+        for i in range(len(means)-1):
+            assert(means[i+1] > means[i])
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_param_flash_mode.py b/apps/CameraITS/tests/scene1/test_param_flash_mode.py
new file mode 100644
index 0000000..6d1be4f
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_param_flash_mode.py
@@ -0,0 +1,66 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+
+def main():
+    """Test that the android.flash.mode parameter is applied.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        flash_modes_reported = []
+        flash_states_reported = []
+        g_means = []
+
+        # Manually set the exposure to be a little on the dark side, so that
+        # it should be obvious whether the flash fired or not, and use a
+        # linear tonemap.
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        e /= 4
+        req = its.objects.manual_capture_request(s, e, True)
+
+        for f in [0,1,2]:
+            req["android.flash.mode"] = f
+            cap = cam.do_capture(req)
+            flash_modes_reported.append(cap["metadata"]["android.flash.mode"])
+            flash_states_reported.append(cap["metadata"]["android.flash.state"])
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_mode=%d.jpg" % (NAME, f))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb = its.image.compute_image_means(tile)
+            g_means.append(rgb[1])
+
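+        # Flash modes: 0=OFF, 1=SINGLE, 2=TORCH. Flash states 3 (FIRED)
+        # and 4 (PARTIAL) mean the flash fired for the shot.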
+        assert(flash_modes_reported == [0,1,2])
+        assert(flash_states_reported[0] not in [3,4])
+        assert(flash_states_reported[1] in [3,4])
+        assert(flash_states_reported[2] in [3,4])
+
+        print "G brightnesses:", g_means
+        assert(g_means[1] > g_means[0])
+        assert(g_means[2] > g_means[0])
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_param_noise_reduction.py b/apps/CameraITS/tests/scene1/test_param_noise_reduction.py
new file mode 100644
index 0000000..618f8a7
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_param_noise_reduction.py
@@ -0,0 +1,100 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that the android.noiseReduction.mode param is applied when set.
+
+    Capture images in a dimly lit scene. Uses a high analog gain to
+    ensure the captured image is noisy.
+
+    Captures three images, for NR off, "fast", and "high quality".
+    Also captures an image with low gain and NR off, and uses the variance
+    of this as the baseline.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # List of variances for Y,U,V.
+    variances = [[],[],[]]
+
+    # Reference (baseline) variance for each of Y,U,V.
+    ref_variance = []
+
+    nr_modes_reported = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        # NR mode 0 with low gain
+        e, s = its.target.get_target_exposure_combos(cam)["minSensitivity"]
+        req = its.objects.manual_capture_request(s, e)
+        req["android.noiseReduction.mode"] = 0
+        cap = cam.do_capture(req)
+        its.image.write_image(
+                its.image.convert_capture_to_rgb_image(cap),
+                "%s_low_gain.jpg" % (NAME))
+        planes = its.image.convert_capture_to_planes(cap)
+        for j in range(3):
+            img = planes[j]
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            ref_variance.append(its.image.compute_image_variances(tile)[0])
+        print "Ref variances:", ref_variance
+
+        for i in range(3):
+            # NR modes 0, 1, 2 with high gain
+            e, s = its.target.get_target_exposure_combos(cam)["maxSensitivity"]
+            req = its.objects.manual_capture_request(s, e)
+            req["android.noiseReduction.mode"] = i
+            cap = cam.do_capture(req)
+            nr_modes_reported.append(
+                    cap["metadata"]["android.noiseReduction.mode"])
+            its.image.write_image(
+                    its.image.convert_capture_to_rgb_image(cap),
+                    "%s_high_gain_nr=%d.jpg" % (NAME, i))
+            planes = its.image.convert_capture_to_planes(cap)
+            for j in range(3):
+                img = planes[j]
+                tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+                variance = its.image.compute_image_variances(tile)[0]
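+                # Normalize by the low-gain reference variance so values
+                # are comparable across the Y, U, and V planes.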
+                variances[j].append(variance / ref_variance[j])
+        print "Variances with NR mode [0,1,2]:", variances
+
+    # Draw a plot.
+    for j in range(3):
+        pylab.plot(range(3), variances[j], "rgb"[j])
+    matplotlib.pyplot.savefig("%s_plot_variances.png" % (NAME))
+
+    assert(nr_modes_reported == [0,1,2])
+
+    # Check that the variance of the NR=0 image is higher than for the
+    # NR=1 and NR=2 images.
+    for j in range(3):
+        for i in range(1,3):
+            assert(variances[j][i] < variances[j][0])
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_param_sensitivity.py b/apps/CameraITS/tests/scene1/test_param_sensitivity.py
new file mode 100644
index 0000000..c26e9f9
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_param_sensitivity.py
@@ -0,0 +1,74 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Test that the android.sensor.sensitivity parameter is applied.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    NUM_STEPS = 5
+
+    sensitivities = None
+    r_means = []
+    g_means = []
+    b_means = []
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        expt,_ = its.target.get_target_exposure_combos(cam)["midSensitivity"]
+        sens_range = props['android.sensor.info.sensitivityRange']
+        sens_step = (sens_range[1] - sens_range[0]) / float(NUM_STEPS-1)
+        sensitivities = [sens_range[0] + i * sens_step for i in range(NUM_STEPS)]
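+        # (e.g. for a sensitivity range of [100, 800] this gives
+        # [100.0, 275.0, 450.0, 625.0, 800.0])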
+
+        for s in sensitivities:
+            req = its.objects.manual_capture_request(s, expt)
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(
+                    img, "%s_iso=%04d.jpg" % (NAME, s))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means = its.image.compute_image_means(tile)
+            r_means.append(rgb_means[0])
+            g_means.append(rgb_means[1])
+            b_means.append(rgb_means[2])
+
+    # Draw a plot.
+    pylab.plot(sensitivities, r_means, 'r')
+    pylab.plot(sensitivities, g_means, 'g')
+    pylab.plot(sensitivities, b_means, 'b')
+    pylab.ylim([0,1])
+    matplotlib.pyplot.savefig("%s_plot_means.png" % (NAME))
+
+    # Test for pass/fail: check that each shot is brighter than the previous.
+    for means in [r_means, g_means, b_means]:
+        for i in range(len(means)-1):
+            assert(means[i+1] > means[i])
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_param_tonemap_mode.py b/apps/CameraITS/tests/scene1/test_param_tonemap_mode.py
new file mode 100644
index 0000000..fbd452c
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_param_tonemap_mode.py
@@ -0,0 +1,104 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os
+import os.path
+
+def main():
+    """Test that the android.tonemap.mode param is applied.
+
+    Applies different tonemap curves to each R,G,B channel, and checks
+    that the output images are modified as expected.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_RATIO_MIN_DIFF = 0.1
+    THRESHOLD_DIFF_MAX_DIFF = 0.05
+
+    # The HAL3.2 spec requires that curves up to 64 control points in length
+    # must be supported.
+    L = 32
+    LM1 = float(L-1)
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        e /= 2
+
+        # Test 1: that the tonemap curves have the expected effect. Take two
+        # shots, with n in [0,1], where each has a linear tonemap, with the
+        # n=1 shot having a steeper gradient. The gradient increase is larger
+        # for each successive channel, i.e. R[n=1] should be brighter than
+        # R[n=0], and G[n=1] should be brighter than G[n=0] by a larger
+        # margin, etc.
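+        # For illustration: with L=3 control points, the flattened curve
+        # format used below is [x0,y0, x1,y1, x2,y2], so the n=1 red curve
+        # would be [0.0,0.0, 0.5,0.75, 1.0,1.0].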
+        rgb_means = []
+
+        for n in [0,1]:
+            req = its.objects.manual_capture_request(s,e)
+            req["android.tonemap.mode"] = 0
+            req["android.tonemap.curveRed"] = (
+                    sum([[i/LM1, min(1.0,(1+0.5*n)*i/LM1)] for i in range(L)], []))
+            req["android.tonemap.curveGreen"] = (
+                    sum([[i/LM1, min(1.0,(1+1.0*n)*i/LM1)] for i in range(L)], []))
+            req["android.tonemap.curveBlue"] = (
+                    sum([[i/LM1, min(1.0,(1+1.5*n)*i/LM1)] for i in range(L)], []))
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(
+                    img, "%s_n=%d.jpg" %(NAME, n))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means.append(its.image.compute_image_means(tile))
+
+        rgb_ratios = [rgb_means[1][i] / rgb_means[0][i] for i in xrange(3)]
+        print "Test 1: RGB ratios:", rgb_ratios
+        assert(rgb_ratios[0] + THRESHOLD_RATIO_MIN_DIFF < rgb_ratios[1])
+        assert(rgb_ratios[1] + THRESHOLD_RATIO_MIN_DIFF < rgb_ratios[2])
+
+
+        # Test 2: that the length of the tonemap curve (i.e. number of control
+        # points) doesn't affect the output.
+        rgb_means = []
+
+        for size in [32,64]:
+            m = float(size-1)
+            curve = sum([[i/m, i/m] for i in range(size)], [])
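+            # (e.g. for size=2 this is [0.0,0.0, 1.0,1.0], the identity mapping)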
+            req = its.objects.manual_capture_request(s,e)
+            req["android.tonemap.mode"] = 0
+            req["android.tonemap.curveRed"] = curve
+            req["android.tonemap.curveGreen"] = curve
+            req["android.tonemap.curveBlue"] = curve
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(
+                    img, "%s_size=%02d.jpg" %(NAME, size))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb_means.append(its.image.compute_image_means(tile))
+
+        rgb_diffs = [rgb_means[1][i] - rgb_means[0][i] for i in xrange(3)]
+        print "Test 2: RGB diffs:", rgb_diffs
+        assert(abs(rgb_diffs[0]) < THRESHOLD_DIFF_MAX_DIFF)
+        assert(abs(rgb_diffs[1]) < THRESHOLD_DIFF_MAX_DIFF)
+        assert(abs(rgb_diffs[2]) < THRESHOLD_DIFF_MAX_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_raw_burst_sensitivity.py b/apps/CameraITS/tests/scene1/test_raw_burst_sensitivity.py
new file mode 100644
index 0000000..bf0e2ea
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_raw_burst_sensitivity.py
@@ -0,0 +1,86 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.caps
+import its.objects
+import its.image
+import os.path
+import pylab
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Capture a set of raw images with increasing gains and measure the noise.
+
+    Capture raw-only, in a burst.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # Each shot must be 1% noisier (by the variance metric) than the previous
+    # one.
+    VAR_THRESH = 1.01
+
+    NUM_STEPS = 5
+
+    with its.device.ItsSession() as cam:
+
+        props = cam.get_camera_properties()
+        if (not its.caps.raw16(props) or
+            not its.caps.manual_sensor(props) or
+            not its.caps.read_3a(props)):
+            print "Test skipped"
+            return
+
+        # Determine a scene exposure via 3A, then hold the sensitivity *
+        # exposure-time product constant across the sweep so that image
+        # brightness stays fixed as the gain increases.
+        sens_min, sens_max = props['android.sensor.info.sensitivityRange']
+        sens_step = (sens_max - sens_min) / NUM_STEPS
+        s_ae,e_ae,_,_,_  = cam.do_3a(get_results=True)
+        s_e_prod = s_ae * e_ae
+
+        reqs = []
+        settings = []
+        for s in range(sens_min, sens_max, sens_step):
+            e = int(s_e_prod / float(s))
+            req = its.objects.manual_capture_request(s, e)
+            reqs.append(req)
+            settings.append((s,e))
+
+        caps = cam.do_capture(reqs, cam.CAP_RAW)
+
+        variances = []
+        for i,cap in enumerate(caps):
+            (s,e) = settings[i]
+
+            # Measure the variance. Each shot should be noisier than the
+            # previous shot (as the gain is increasing).
+            plane = its.image.convert_capture_to_planes(cap, props)[1]
+            tile = its.image.get_image_patch(plane, 0.45,0.45,0.1,0.1)
+            var = its.image.compute_image_variances(tile)[0]
+            variances.append(var)
+
+            img = its.image.convert_capture_to_rgb_image(cap, props=props)
+            its.image.write_image(img, "%s_s=%05d_var=%f.jpg" % (NAME,s,var))
+            print "s=%d, e=%d, var=%e"%(s,e,var)
+
+        pylab.plot(range(len(variances)), variances)
+        matplotlib.pyplot.savefig("%s_variances.png" % (NAME))
+
+        # Test that each shot is noisier than the previous one.
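+        # (The check below is equivalent to requiring
+        # variances[i+1] > VAR_THRESH * variances[i], i.e. at least 1% growth
+        # per step.)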
+        for i in range(len(variances) - 1):
+            assert(variances[i] < variances[i+1] / VAR_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_raw_sensitivity.py b/apps/CameraITS/tests/scene1/test_raw_sensitivity.py
new file mode 100644
index 0000000..8e36219
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_raw_sensitivity.py
@@ -0,0 +1,79 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.caps
+import its.objects
+import its.image
+import os.path
+import pylab
+import matplotlib
+import matplotlib.pyplot
+
+def main():
+    """Capture a set of raw images with increasing gains and measure the noise.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # Each shot must be 1% noisier (by the variance metric) than the previous
+    # one.
+    VAR_THRESH = 1.01
+
+    NUM_STEPS = 5
+
+    with its.device.ItsSession() as cam:
+
+        props = cam.get_camera_properties()
+        if (not its.caps.raw16(props) or
+            not its.caps.manual_sensor(props) or
+            not its.caps.read_3a(props)):
+            print "Test skipped"
+            return
+
+        # Determine a scene exposure via 3A, then hold the sensitivity *
+        # exposure-time product constant across the sweep so that image
+        # brightness stays fixed as the gain increases.
+        sens_min, sens_max = props['android.sensor.info.sensitivityRange']
+        sens_step = (sens_max - sens_min) / NUM_STEPS
+        s_ae,e_ae,_,_,_  = cam.do_3a(get_results=True)
+        s_e_prod = s_ae * e_ae
+
+        variances = []
+        for s in range(sens_min, sens_max, sens_step):
+
+            e = int(s_e_prod / float(s))
+            req = its.objects.manual_capture_request(s, e)
+
+            # Capture raw+yuv, but only look at the raw.
+            cap,_ = cam.do_capture(req, cam.CAP_RAW_YUV)
+
+            # Measure the variance. Each shot should be noisier than the
+            # previous shot (as the gain is increasing).
+            plane = its.image.convert_capture_to_planes(cap, props)[1]
+            tile = its.image.get_image_patch(plane, 0.45,0.45,0.1,0.1)
+            var = its.image.compute_image_variances(tile)[0]
+            variances.append(var)
+
+            img = its.image.convert_capture_to_rgb_image(cap, props=props)
+            its.image.write_image(img, "%s_s=%05d_var=%f.jpg" % (NAME,s,var))
+            print "s=%d, e=%d, var=%e"%(s,e,var)
+
+        pylab.plot(range(len(variances)), variances)
+        matplotlib.pyplot.savefig("%s_variances.png" % (NAME))
+
+        # Test that each shot is noisier than the previous one.
+        for i in range(len(variances) - 1):
+            assert(variances[i] < variances[i+1] / VAR_THRESH)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_tonemap_sequence.py b/apps/CameraITS/tests/scene1/test_tonemap_sequence.py
new file mode 100644
index 0000000..7af51c5
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_tonemap_sequence.py
@@ -0,0 +1,71 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import os.path
+import numpy
+
+def main():
+    """Test a sequence of shots with different tonemap curves.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # There should be 3 identical frames followed by a different set of
+    # 3 identical frames.
+    MAX_SAME_DELTA = 0.01
+    MIN_DIFF_DELTA = 0.10
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if (not its.caps.manual_sensor(props) or
+            not its.caps.manual_post_proc(props)):
+            print "Test skipped"
+            return
+
+        sens, exp_time, _,_,_ = cam.do_3a(do_af=False,get_results=True)
+
+        means = []
+
+        # Capture 3 manual shots with a linear tonemap.
+        req = its.objects.manual_capture_request(sens, exp_time, True)
+        for i in [0,1,2]:
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_i=%d.jpg" % (NAME, i))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            means.append(tile.mean(0).mean(0))
+
+        # Capture 3 manual shots with the default tonemap.
+        req = its.objects.manual_capture_request(sens, exp_time, False)
+        for i in [3,4,5]:
+            cap = cam.do_capture(req)
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_i=%d.jpg" % (NAME, i))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            means.append(tile.mean(0).mean(0))
+
+        # Compute the delta between each consecutive frame pair.
+        deltas = [numpy.max(numpy.fabs(means[i+1]-means[i])) \
+                  for i in range(len(means)-1)]
+        print "Deltas between consecutive frames:", deltas
+
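+        # deltas[2] spans the switch from the linear to the default tonemap,
+        # so it should be large; the other consecutive pairs were captured
+        # with identical settings and should match closely.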
+        assert(all([abs(deltas[i]) < MAX_SAME_DELTA for i in [0,1,3,4]]))
+        assert(abs(deltas[2]) > MIN_DIFF_DELTA)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_yuv_jpeg_all.py b/apps/CameraITS/tests/scene1/test_yuv_jpeg_all.py
new file mode 100644
index 0000000..2367ca2
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_yuv_jpeg_all.py
@@ -0,0 +1,85 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import math
+
+def main():
+    """Test that the reported sizes and formats for image capture work.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_RMS_DIFF = 0.03
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        # Use a manual request with a linear tonemap so that the YUV and JPEG
+        # should look the same (once converted by the its.image module).
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        req = its.objects.manual_capture_request(s, e, True)
+
+        rgbs = []
+
+        for size in its.objects.get_available_output_sizes("yuv", props):
+            out_surface = {"width":size[0], "height":size[1], "format":"yuv"}
+            cap = cam.do_capture(req, out_surface)
+            assert(cap["format"] == "yuv")
+            assert(cap["width"] == size[0])
+            assert(cap["height"] == size[1])
+            print "Captured YUV %dx%d" % (cap["width"], cap["height"])
+            img = its.image.convert_capture_to_rgb_image(cap)
+            its.image.write_image(img, "%s_yuv_w%d_h%d.jpg"%(
+                    NAME,size[0],size[1]))
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb = its.image.compute_image_means(tile)
+            rgbs.append(rgb)
+
+        for size in its.objects.get_available_output_sizes("jpg", props):
+            out_surface = {"width":size[0], "height":size[1], "format":"jpg"}
+            cap = cam.do_capture(req, out_surface)
+            assert(cap["format"] == "jpeg")
+            assert(cap["width"] == size[0])
+            assert(cap["height"] == size[1])
+            img = its.image.decompress_jpeg_to_rgb_image(cap["data"])
+            its.image.write_image(img, "%s_jpg_w%d_h%d.jpg"%(
+                    NAME,size[0], size[1]))
+            assert(img.shape[0] == size[1])
+            assert(img.shape[1] == size[0])
+            assert(img.shape[2] == 3)
+            print "Captured JPEG %dx%d" % (cap["width"], cap["height"])
+            tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+            rgb = its.image.compute_image_means(tile)
+            rgbs.append(rgb)
+
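+        # Compare every captured size/format against the first YUV capture;
+        # since the same linear-tonemap request was used throughout, all the
+        # patch means should agree to within the RMS threshold.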
+        max_diff = 0
+        rgb0 = rgbs[0]
+        for rgb1 in rgbs[1:]:
+            rms_diff = math.sqrt(
+                    sum([pow(rgb0[i] - rgb1[i], 2.0) for i in range(3)]) / 3.0)
+            max_diff = max(max_diff, rms_diff)
+        print "Max RMS difference:", max_diff
+        assert(max_diff < THRESHOLD_MAX_RMS_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_yuv_plus_dng.py b/apps/CameraITS/tests/scene1/test_yuv_plus_dng.py
new file mode 100644
index 0000000..4924c7b
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_yuv_plus_dng.py
@@ -0,0 +1,49 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import os.path
+
+def main():
+    """Test capturing a single frame as both DNG and YUV outputs.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if (not its.caps.raw(props) or
+            not its.caps.read_3a(props)):
+            print "Test skipped"
+            return
+
+        cam.do_3a()
+
+        req = its.objects.auto_capture_request()
+        cap_dng, cap_yuv = cam.do_capture(req, cam.CAP_DNG_YUV)
+
+        img = its.image.convert_capture_to_rgb_image(cap_yuv)
+        its.image.write_image(img, "%s.jpg" % (NAME))
+
+        with open("%s.dng"%(NAME), "wb") as f:
+            f.write(cap_dng["data"])
+
+        # No specific pass/fail check; test is assumed to have succeeded if
+        # it completes.
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_yuv_plus_jpeg.py b/apps/CameraITS/tests/scene1/test_yuv_plus_jpeg.py
new file mode 100644
index 0000000..15aa17c
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_yuv_plus_jpeg.py
@@ -0,0 +1,63 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import math
+
+def main():
+    """Test capturing a single frame as both YUV and JPEG outputs.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_RMS_DIFF = 0.01
+
+    fmt_yuv =  {"format":"yuv"}
+    fmt_jpeg = {"format":"jpeg"}
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if not its.caps.compute_target_exposure(props):
+            print "Test skipped"
+            return
+
+        # Use a manual request with a linear tonemap so that the YUV and JPEG
+        # should look the same (once converted by the its.image module).
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        req = its.objects.manual_capture_request(s, e, True)
+
+        cap_yuv, cap_jpeg = cam.do_capture(req, [fmt_yuv, fmt_jpeg])
+
+        img = its.image.convert_capture_to_rgb_image(cap_yuv, True)
+        its.image.write_image(img, "%s_yuv.jpg" % (NAME))
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        rgb0 = its.image.compute_image_means(tile)
+
+        img = its.image.convert_capture_to_rgb_image(cap_jpeg, True)
+        its.image.write_image(img, "%s_jpeg.jpg" % (NAME))
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        rgb1 = its.image.compute_image_means(tile)
+
+        rms_diff = math.sqrt(
+                sum([pow(rgb0[i] - rgb1[i], 2.0) for i in range(3)]) / 3.0)
+        print "RMS difference:", rms_diff
+        assert(rms_diff < THRESHOLD_MAX_RMS_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_yuv_plus_raw.py b/apps/CameraITS/tests/scene1/test_yuv_plus_raw.py
new file mode 100644
index 0000000..7a345c9
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_yuv_plus_raw.py
@@ -0,0 +1,63 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import math
+
+def main():
+    """Test capturing a single frame as both RAW and YUV outputs.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_RMS_DIFF = 0.02
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+        if (not its.caps.compute_target_exposure(props) or
+            not its.caps.raw16(props)):
+            print "Test skipped"
+            return
+
+        # Use a manual request with a linear tonemap so that the YUV and RAW
+        # should look the same (once converted by the its.image module).
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        req = its.objects.manual_capture_request(s, e, True)
+
+        cap_raw, cap_yuv = cam.do_capture(req, cam.CAP_RAW_YUV)
+
+        img = its.image.convert_capture_to_rgb_image(cap_yuv)
+        its.image.write_image(img, "%s_yuv.jpg" % (NAME), True)
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        rgb0 = its.image.compute_image_means(tile)
+
+        # Raw shots are converted to RGB at 1/2 x 1/2 the size, so scale the
+        # tile appropriately.
+        img = its.image.convert_capture_to_rgb_image(cap_raw, props=props)
+        its.image.write_image(img, "%s_raw.jpg" % (NAME), True)
+        tile = its.image.get_image_patch(img, 0.475, 0.475, 0.05, 0.05)
+        rgb1 = its.image.compute_image_means(tile)
+
+        rms_diff = math.sqrt(
+                sum([pow(rgb0[i] - rgb1[i], 2.0) for i in range(3)]) / 3.0)
+        print "RMS difference:", rms_diff
+        assert(rms_diff < THRESHOLD_MAX_RMS_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/scene1/test_yuv_plus_raw10.py b/apps/CameraITS/tests/scene1/test_yuv_plus_raw10.py
new file mode 100644
index 0000000..15612c5
--- /dev/null
+++ b/apps/CameraITS/tests/scene1/test_yuv_plus_raw10.py
@@ -0,0 +1,65 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.image
+import its.caps
+import its.device
+import its.objects
+import its.target
+import os.path
+import math
+
+def main():
+    """Test capturing a single frame as both RAW10 and YUV outputs.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    THRESHOLD_MAX_RMS_DIFF = 0.02
+
+    with its.device.ItsSession() as cam:
+        props = cam.get_camera_properties()
+
+        if (not its.caps.compute_target_exposure(props) or
+            not its.caps.raw10(props)):
+            print "Test skipped"
+            return
+
+        # Use a manual request with a linear tonemap so that the YUV and RAW
+        # should look the same (once converted by the its.image module).
+        e, s = its.target.get_target_exposure_combos(cam)["midExposureTime"]
+        req = its.objects.manual_capture_request(s, e, True)
+
+        cap_raw, cap_yuv = cam.do_capture(req,
+                [{"format":"raw10"}, {"format":"yuv"}])
+
+        img = its.image.convert_capture_to_rgb_image(cap_yuv)
+        its.image.write_image(img, "%s_yuv.jpg" % (NAME), True)
+        tile = its.image.get_image_patch(img, 0.45, 0.45, 0.1, 0.1)
+        rgb0 = its.image.compute_image_means(tile)
+
+        # Raw shots are converted to RGB at 1/2 x 1/2 the size, so scale the
+        # tile appropriately.
+        img = its.image.convert_capture_to_rgb_image(cap_raw, props=props)
+        its.image.write_image(img, "%s_raw.jpg" % (NAME), True)
+        tile = its.image.get_image_patch(img, 0.475, 0.475, 0.05, 0.05)
+        rgb1 = its.image.compute_image_means(tile)
+
+        rms_diff = math.sqrt(
+                sum([pow(rgb0[i] - rgb1[i], 2.0) for i in range(3)]) / 3.0)
+        print "RMS difference:", rms_diff
+        assert(rms_diff < THRESHOLD_MAX_RMS_DIFF)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tests/tutorial.py b/apps/CameraITS/tests/tutorial.py
new file mode 100644
index 0000000..1b1999e
--- /dev/null
+++ b/apps/CameraITS/tests/tutorial.py
@@ -0,0 +1,188 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# --------------------------------------------------------------------------- #
+# The Google Python style guide should be used for scripts:                   #
+# http://google-styleguide.googlecode.com/svn/trunk/pyguide.html              #
+# --------------------------------------------------------------------------- #
+
+# The ITS modules live in the pymodules/its/ directory. To see formatted
+# docs, use the "pydoc" command:
+#
+# > pydoc its.image
+#
+import its.image
+import its.device
+import its.objects
+import its.target
+
+# Standard Python modules.
+import os.path
+import pprint
+import math
+
+# Modules from the numpy, scipy, and matplotlib libraries. These are used for
+# the image processing code, and images are represented as numpy arrays.
+import pylab
+import numpy
+import matplotlib
+import matplotlib.pyplot
+
+# Each script has a "main" function.
+def main():
+
+    # Each script has a string description of what it does. This is the first
+    # entry inside the main function.
+    """Tutorial script to show how to use the ITS infrastructure.
+    """
+
+    # A convention in each script is to use the filename (without the extension)
+    # as the name of the test, when printing results to the screen or dumping
+    # files.
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    # The standard way to open a session with a connected camera device. This
+    # creates a cam object which encapsulates the session and which is active
+    # within the scope of the "with" block; when the block exits, the camera
+    # session is closed.
+    with its.device.ItsSession() as cam:
+
+        # Get the static properties of the camera device. Returns a Python
+        # associative array object; print it to the console.
+        props = cam.get_camera_properties()
+        pprint.pprint(props)
+
+        # Grab a YUV frame with manual exposure of sensitivity = 200, exposure
+        # duration = 50ms.
+        req = its.objects.manual_capture_request(200, 50*1000*1000)
+        cap = cam.do_capture(req)
+
+        # Print the properties of the captured frame; width and height are
+        # integers, and the metadata is a Python associative array object.
+        print "Captured image width:", cap["width"]
+        print "Captured image height:", cap["height"]
+        pprint.pprint(cap["metadata"])
+
+        # The captured image is YUV420. Convert to RGB, and save as a file.
+        rgbimg = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(rgbimg, "%s_rgb_1.jpg" % (NAME))
+
+        # Can also get the Y,U,V planes separately; save these to greyscale
+        # files.
+        yimg,uimg,vimg = its.image.convert_capture_to_planes(cap)
+        its.image.write_image(yimg, "%s_y_plane_1.jpg" % (NAME))
+        its.image.write_image(uimg, "%s_u_plane_1.jpg" % (NAME))
+        its.image.write_image(vimg, "%s_v_plane_1.jpg" % (NAME))
+
+        # Run 3A on the device. In this case, just use the entire image as the
+        # 3A region, and run each of AWB,AE,AF. Can also change the region and
+        # specify independently for each of AE,AWB,AF whether it should run.
+        #
+        # NOTE: This may fail if the camera isn't pointed at a reasonable
+        # target scene. If it fails, the script will end. The logcat messages
+        # can be inspected to see the status of 3A running on the device.
+        #
+        # > adb logcat -s 'ItsService:v'
+        #
+        # If this keeps on failing, try also rebooting the device before
+        # running the test.
+        sens, exp, gains, xform, focus = cam.do_3a(get_results=True)
+        print "AE: sensitivity %d, exposure %dms" % (sens, exp/1000000.0)
+        print "AWB: gains", gains, "transform", xform
+        print "AF: distance", focus
+
+        # Grab a new manual frame, using the 3A values, and convert it to RGB
+        # and save it to a file too. Note that the "req" object is just a
+        # Python dictionary that is pre-populated by the its.objects module
+        # functions (in this case a default manual capture), and the key/value
+        # pairs in the object can be used to set any field of the capture
+        # request. Here, the AWB gains and transform (CCM) are being used.
+        # Note that the CCM transform is in a rational format in capture
+        # requests, meaning it is an object with integer numerators and
+        # denominators. The 3A routine returns simple floats instead, however,
+        # so a conversion from float to rational must be performed.
+        req = its.objects.manual_capture_request(sens, exp)
+        xform_rat = its.objects.float_to_rational(xform)
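+        # For example (assuming the its.objects rational encoding), a float
+        # gain of 1.0 might become {"numerator": 128, "denominator": 128}.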
+
+        req["android.colorCorrection.transform"] = xform_rat
+        req["android.colorCorrection.gains"] = gains
+        cap = cam.do_capture(req)
+        rgbimg = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(rgbimg, "%s_rgb_2.jpg" % (NAME))
+
+        # Print out the actual capture request object that was used.
+        pprint.pprint(req)
+
+        # Images are numpy arrays. The dimensions are (h,w,3) when indexing,
+        # in the case of RGB images. Greyscale images are (h,w,1). Pixels are
+        # generally float32 values in the [0,1] range, however some of the
+        # helper functions in its.image deal with the packed YUV420 and other
+        # formats of images that come from the device (and convert them to
+        # float32).
+        # Print the dimensions of the image, and the top-left pixel value,
+        # which is an array of 3 floats.
+        print "RGB image dimensions:", rgbimg.shape
+        print "RGB image top-left pixel:", rgbimg[0,0]
+
+        # Grab a center tile from the image; this returns a new image. Save
+        # this tile image. In this case, the tile is the middle 10% x 10%
+        # rectangle.
+        tile = its.image.get_image_patch(rgbimg, 0.45, 0.45, 0.1, 0.1)
+        its.image.write_image(tile, "%s_rgb_2_tile.jpg" % (NAME))
+
+        # Compute the mean values of the center tile image.
+        rgb_means = its.image.compute_image_means(tile)
+        print "RGB means:", rgb_means
+
+        # Apply a lookup table to the image, and save the new version. The LUT
+        # is basically a tonemap, and can be used to implement a gamma curve.
+        # In this case, the LUT is used to double the value of each pixel.
+        lut = numpy.array([2*i for i in xrange(65536)])
+        rgbimg_lut = its.image.apply_lut_to_image(rgbimg, lut)
+        its.image.write_image(rgbimg_lut, "%s_rgb_2_lut.jpg" % (NAME))
+
+        # Apply a 3x3 matrix to the image, and save the new version. The matrix
+        # is a numpy array, in row major order, and the pixel values are right-
+        # multiplied by it (when considered as column vectors). The example
+        # matrix here just boosts the blue channel by 10%.
+        mat = numpy.array([[1, 0, 0  ],
+                           [0, 1, 0  ],
+                           [0, 0, 1.1]])
+        rgbimg_mat = its.image.apply_matrix_to_image(rgbimg, mat)
+        its.image.write_image(rgbimg_mat, "%s_rgb_2_mat.jpg" % (NAME))
+
+        # Compute a histogram of the luma image, in 256 buckets.
+        yimg,_,_ = its.image.convert_capture_to_planes(cap)
+        hist,_ = numpy.histogram(yimg*255, 256, (0,256))
+
+        # Plot the histogram using matplotlib, and save as a PNG image.
+        pylab.plot(range(256), hist.tolist())
+        pylab.xlabel("Luma DN")
+        pylab.ylabel("Pixel count")
+        pylab.title("Histogram of luma channel of captured image")
+        matplotlib.pyplot.savefig("%s_histogram.png" % (NAME))
+
+        # Capture a frame to be returned as a JPEG. Load it as an RGB image,
+        # then save it back as a JPEG.
+        cap = cam.do_capture(req, cam.CAP_JPEG)
+        rgbimg = its.image.convert_capture_to_rgb_image(cap)
+        its.image.write_image(rgbimg, "%s_jpg.jpg" % (NAME))
+        r,g,b = its.image.convert_capture_to_planes(cap)
+        its.image.write_image(r, "%s_r.jpg" % (NAME))
+
+# This is the standard boilerplate in each test that allows the script to both
+# be executed directly and imported as a module.
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tools/compute_dng_noise_model.py b/apps/CameraITS/tools/compute_dng_noise_model.py
new file mode 100644
index 0000000..e089ffc
--- /dev/null
+++ b/apps/CameraITS/tools/compute_dng_noise_model.py
@@ -0,0 +1,175 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.objects
+import its.image
+import pprint
+import pylab
+import os.path
+import matplotlib
+import matplotlib.pyplot
+import numpy
+import math
+
+def main():
+    """Compute the DNG noise model from a color checker chart.
+
+    TODO: Make this more robust; some manual futzing may be needed.
+    """
+    NAME = os.path.basename(__file__).split(".")[0]
+
+    with its.device.ItsSession() as cam:
+
+        props = cam.get_camera_properties()
+
+        white_level = float(props['android.sensor.info.whiteLevel'])
+        black_levels = props['android.sensor.blackLevelPattern']
+        idxs = its.image.get_canonical_cfa_order(props)
+        black_levels = [black_levels[i] for i in idxs]
+
+        # Expose for the scene with min sensitivity
+        sens_min, sens_max = props['android.sensor.info.sensitivityRange']
+        s_ae,e_ae,awb_gains,awb_ccm,_  = cam.do_3a(get_results=True)
+        s_e_prod = s_ae * e_ae
+
+        # Make the image brighter since the script looks at linear Bayer
+        # raw patches rather than gamma-encoded YUV patches (and the AE
+        # probably under-exposes a little for this use-case).
+        s_e_prod *= 2
+
+        # Capture raw frames across the full sensitivity range.
+        NUM_SENS_STEPS = 15
+        sens_step = int((sens_max - sens_min - 1) / float(NUM_SENS_STEPS))
+        reqs = []
+        sens = []
+        for s in range(sens_min, sens_max, sens_step):
+            e = int(s_e_prod / float(s))
+            req = its.objects.manual_capture_request(s, e)
+            req["android.colorCorrection.transform"] = \
+                    its.objects.float_to_rational(awb_ccm)
+            req["android.colorCorrection.gains"] = awb_gains
+            reqs.append(req)
+            sens.append(s)
+
+        caps = cam.do_capture(reqs, cam.CAP_RAW)
+
+        # A list of the (x,y) coords of the center pixel of a collection of
+        # patches of a color checker chart. Each patch should be uniform;
+        # however, the actual color doesn't matter. Note that the coords are
+        # relative to the *converted* RGB image, which is 1/2 x 1/2 of the
+        # full size; convert back to full.
+        img = its.image.convert_capture_to_rgb_image(caps[0], props=props)
+        patches = its.image.get_color_checker_chart_patches(img, NAME+"_debug")
+        patches = [(2*x,2*y) for (x,y) in sum(patches,[])]
+
+        lines = []
+        for (s,cap) in zip(sens,caps):
+            # For each capture, compute the mean value in each patch, for each
+            # Bayer plane; discard patches where pixels are close to clamped.
+            # Also compute the variance.
+            CLAMP_THRESH = 0.2
+            planes = its.image.convert_capture_to_planes(cap, props)
+            points = []
+            for i,plane in enumerate(planes):
+                plane = (plane * white_level - black_levels[i]) / (
+                        white_level - black_levels[i])
+                for j,(x,y) in enumerate(patches):
+                    tile = plane[y/2-16:y/2+16, x/2-16:x/2+16, :]
+                    mean = its.image.compute_image_means(tile)[0]
+                    var = its.image.compute_image_variances(tile)[0]
+                    if (mean > CLAMP_THRESH and mean < 1.0-CLAMP_THRESH):
+                        # Each point is a (mean,variance) tuple for a patch;
+                        # for a given ISO, there should be a linear
+                        # relationship between these values.
+                        points.append((mean,var))
+
+            # Fit a line to the points, with a line equation: y = mx + b.
+            # This line is the relationship between mean and variance, i.e.
+            # between signal level and noise, for this particular sensor.
+            # In the DNG noise model, the gradient (m) is "S", and the offset
+            # (b) is "O".
+            points.sort()
+            xs = [x for (x,y) in points]
+            ys = [y for (x,y) in points]
+            m,b = numpy.polyfit(xs, ys, 1)
+            lines.append((s,m,b))
+            print s, "->", m, b
+
+            # TODO: Clean up these checks (which currently fail in some cases).
+            # Some sanity checks:
+            # * Noise levels should increase with brightness.
+            # * Extrapolating to a black image, the noise should be positive.
+            # Basically, the "b" value should correspnd to the read noise,
+            # which is the noise level if the sensor was operating in zero
+            # light.
+            #assert(m > 0)
+            #assert(b >= 0)
+
+            # Draw a plot.
+            pylab.plot(xs, ys, 'r')
+            pylab.plot([0,xs[-1]],[b,m*xs[-1]+b],'b')
+            matplotlib.pyplot.savefig("%s_plot_mean_vs_variance.png" % (NAME))
+
+        # Now fit a line across the (m,b) line parameters for each sensitivity.
+        # The gradient (m) params are fit to the "S" line, and the offset (b)
+        # params are fit to the "O" line, both as a function of sensitivity.
+        gains = [d[0] for d in lines]
+        Ss = [d[1] for d in lines]
+        Os = [d[2] for d in lines]
+        mS,bS = numpy.polyfit(gains, Ss, 1)
+        mO,bO = numpy.polyfit(gains, Os, 1)
+
+        # Plot curve "O" as 10x, so it fits in the same scale as curve "S".
+        pylab.plot(gains, [10*o for o in Os], 'r')
+        pylab.plot([gains[0],gains[-1]],
+                [10*mO*gains[0]+10*bO, 10*mO*gains[-1]+10*bO], 'b')
+        pylab.plot(gains, Ss, 'r')
+        pylab.plot([gains[0],gains[-1]], [mS*gains[0]+bS, mS*gains[-1]+bS], 'b')
+        matplotlib.pyplot.savefig("%s_plot_S_O.png" % (NAME))
+
+        print """
+        /* Generated test code to dump a table of data for external validation
+         * of the noise model parameters.
+         */
+        #include <stdio.h>
+        #include <assert.h>
+        double compute_noise_model_entry_S(int sens);
+        double compute_noise_model_entry_O(int sens);
+        int main(void) {
+            int sens;
+            for (sens = %d; sens <= %d; sens += 100) {
+                double o = compute_noise_model_entry_O(sens);
+                double s = compute_noise_model_entry_S(sens);
+                printf("%%d,%%lf,%%lf\\n", sens, o, s);
+            }
+            return 0;
+        }
+
+        /* Generated functions to map a given sensitivity to the O and S noise
+         * model parameters in the DNG noise model.
+         */
+        double compute_noise_model_entry_S(int sens) {
+            double s = %e * sens + %e;
+            return s < 0.0 ? 0.0 : s;
+        }
+        double compute_noise_model_entry_O(int sens) {
+            double o = %e * sens + %e;
+            return o < 0.0 ? 0.0 : o;
+        }
+        """%(sens_min,sens_max,mS,bS,mO,bO)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tools/config.py b/apps/CameraITS/tools/config.py
new file mode 100644
index 0000000..6e83412
--- /dev/null
+++ b/apps/CameraITS/tools/config.py
@@ -0,0 +1,66 @@
+# Copyright 2013 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import its.device
+import its.target
+import sys
+
+def main():
+    """Set the target exposure.
+
+    This program is just a wrapper around the its.target module, to allow the
+    functions in it to be invoked from the command line.
+
+    Usage:
+        python config.py        - Measure the target exposure, and cache it.
+        python config.py EXP    - Hard-code (and cache) the target exposure.
+
+    The "reboot" or "reboot=<N>" and "camera=<N>" arguments may also be
+    provided, just as with all the test scripts. The "target" argument may
+    also be provided, but it has no effect on this script since the cached
+    exposure value is cleared regardless.
+
+    If no exposure value is provided, the camera will be used to measure
+    the scene and set a level that will result in the luma (with linear
+    tonemap) being at the 0.5 level. This requires camera 3A and capture
+    to be functioning.
+
+    For bring-up purposes, the exposure value may be manually set to a hard-
+    coded value, without the camera having to be able to perform 3A (or even
+    capture a shot reliably).
+    """
+
+    # Command line args, ignoring any args that will be passed down to the
+    # ItsSession constructor.
+    args = [s for s in sys.argv if s[:6] not in \
+            ["reboot", "camera", "target", "noinit"]]
+
+    if len(args) == 1:
+        with its.device.ItsSession() as cam:
+            # Automatically measure target exposure.
+            its.target.clear_cached_target_exposure()
+            exposure = its.target.get_target_exposure(cam)
+    elif len(args) == 2:
+        # Hard-code the target exposure.
+        exposure = int(args[1])
+        its.target.set_hardcoded_exposure(exposure)
+    else:
+        print "Usage: python %s [EXPOSURE]"
+        sys.exit(0)
+    print "New target exposure set to", exposure
+    print "This corresponds to %dms at ISO 100" % int(exposure/100/1000000.0)
+
+if __name__ == '__main__':
+    main()
+
diff --git a/apps/CameraITS/tools/run_all_tests.py b/apps/CameraITS/tools/run_all_tests.py
new file mode 100644
index 0000000..2e1657c
--- /dev/null
+++ b/apps/CameraITS/tools/run_all_tests.py
@@ -0,0 +1,68 @@
+# Copyright 2014 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import os.path
+import tempfile
+import subprocess
+import time
+import sys
+import its.device
+
+def main():
+    """Run all the automated tests, saving intermediate files, and producing
+    a summary/report of the results.
+
+    Script should be run from the top-level CameraITS directory.
+    """
+
+    # Get all the scene0 and scene1 tests, which can be run using the same
+    # physical setup.
+    scenes = ["scene0", "scene1"]
+    tests = []
+    for d in scenes:
+        tests += [(d,s[:-3],os.path.join("tests", d, s))
+                  for s in os.listdir(os.path.join("tests",d))
+                  if s[-3:] == ".py"]
+    tests.sort()
+
+    # Make output directories to hold the generated files.
+    topdir = tempfile.mkdtemp()
+    for d in scenes:
+        os.mkdir(os.path.join(topdir, d))
+    print "Saving output files to:", topdir, "\n"
+
+    # Run each test, capturing stdout and stderr.
+    numpass = 0
+    for (scene,testname,testpath) in tests:
+        cmd = ['python', os.path.join(os.getcwd(),testpath)] + sys.argv[1:]
+        outdir = os.path.join(topdir,scene)
+        outpath = os.path.join(outdir,testname+"_stdout.txt")
+        errpath = os.path.join(outdir,testname+"_stderr.txt")
+        t0 = time.time()
+        with open(outpath,"w") as fout, open(errpath,"w") as ferr:
+            retcode = subprocess.call(cmd,stderr=ferr,stdout=fout,cwd=outdir)
+        t1 = time.time()
+        print "%s %s/%s [%.1fs]" % (
+                "PASS" if retcode==0 else "FAIL", scene, testname, t1-t0)
+        if retcode == 0:
+            numpass += 1
+    its.device.ItsSession.report_result(numpass == len(tests))
+
+    print "\n%d / %d tests passed (%.1f%%)" % (
+            numpass, len(tests), 100.0*float(numpass)/len(tests))
+
+if __name__ == '__main__':
+    main()
+