Add ability to reimage firmware

The dynamic suite framework needs to be flexible enough to allow
reimaging of the DUT with either Chrome OS (already supported) or
firmware images. This change allows certain test suites to request
firmware reimaging instead of Chrome OS.

The firmware tarball artifact is now downloaded by the devserver if
found in the build storage bucket. The name of the firmware tarball is
fixed as 'firmware_from_source.tar.bz2'; a definition for it is added
to global_config.ini.
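
As an illustration, the new firmware_url_pattern expands as sketched
below; the pattern string is the one added to global_config.ini, while
the firmware_url helper is made up for this sketch:

```python
# The pattern string mirrors the firmware_url_pattern entry added to
# global_config.ini; the helper function is illustrative only.
FIRMWARE_URL_PATTERN = '%s/static/archive/%s/firmware_from_source.tar.bz2'

def firmware_url(devserver_url, build):
    """Expand the pattern with the devserver URL and the build name."""
    return FIRMWARE_URL_PATTERN % (devserver_url, build)

# firmware_url('http://devserver:8080', 'link-release/R26-3505.0.0')
# -> 'http://devserver:8080/static/archive/link-release/R26-3505.0.0/firmware_from_source.tar.bz2'
```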

When a DUT is reimaged, a label is created to mark the appropriate
hosts with the Chrome OS version they have installed. This change adds
another label prefix, 'fw-version:', to create labels matching the
firmware version installed on the DUT.
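
A minimal sketch of how the two kinds of version labels are composed;
the prefix constants mirror those in
server/cros/dynamic_suite/constants.py, the helper is illustrative:

```python
# Prefix constants as defined in server/cros/dynamic_suite/constants.py.
VERSION_PREFIX = 'cros-version:'
FW_VERSION_PREFIX = 'fw-version:'

def version_label(prefix, build):
    """Concatenate a label prefix with a build name to form a host label,
    e.g. 'fw-version:link-release/R26-3505.0.0'."""
    return prefix + build
```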

The Reimager class is now a base for two distinct classes, OsReimager
and FwReimager. A few attributes have been added to keep track of
different imaging flows.

The Suite class is now passed the label prefix so that it creates the
proper dependencies, allowing hosts reimaged with the appropriate
firmware or Chrome OS version to be found to run the tests.

The unittests have been modified to accommodate the underlying module
changes.

The fwupdate control file can now be invoked both inside the chroot
through run_remote_tests.sh (with parameters coming from the args
dictionary) and outside the chroot with the parameters defined in the
module scope (available through locals()). It also now manages
fw-version label removal/assignment.

The fwupdate autotest is now capable of downloading the tarball from
either gs:// or http:// sources.
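
The dispatch could look roughly like the sketch below; the gsutil
command line and function names are assumptions for illustration, not
the autotest's actual implementation:

```python
import subprocess
import urllib.request  # the original code base is Python 2 (urllib2)

def url_scheme(url):
    """Return the scheme part of |url|, e.g. 'gs' or 'http'."""
    return url.split('://', 1)[0] if '://' in url else ''

def fetch_tarball(url, dest):
    """Fetch the firmware tarball from a gs:// or http(s):// source."""
    scheme = url_scheme(url)
    if scheme == 'gs':
        # Google Storage source: shell out to gsutil (assumed invocation).
        subprocess.check_call(['gsutil', 'cp', url, dest])
    elif scheme in ('http', 'https'):
        urllib.request.urlretrieve(url, dest)
    else:
        raise ValueError('unsupported URL scheme in %s' % url)
```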

test_suites/control.faft_lv1 is just a placeholder allowing the
changes to be debugged; it causes the DUT to be reimaged with the new
firmware and then runs a simple test verifying the DUT's firmware
sanity.

BUG=chromium-os:37379
TEST=manual

  . verify that unittests still pass:
     $ ./utils/unittest_suite.py  -r server/cros/dynamic_suite
     All passed!

  . verify that Chrome OS reimaging still works as expected:

     $ ./site_utils/run_suite.py -i link-release/R26-3504.0.0 -b link -p link -s smoke

   observe all tests pass and the DUT reimaged with version 3504; also
   observe the proper cros-version: based label created by the AFE
   server and assigned to the DUT host.

  . with servod running locally and the new control file
    (test_suites/control.faft_lv1) copied to the appropriate directory
    on the dev server, run the following command:

     $ ./site_utils/run_suite.py -i link-release/R26-3505.0.0 -b link -p link -s faft_lv1

   observe all tests succeed, and the target come back with the new
   firmware image:

     $ echo $(ssh 192.168.1.4 crossystem fwid)
     Google_Link.3505.0.0

   also observe the proper fw-version: based label created by the AFE
   server and assigned to the DUT host.

Change-Id: I79a7fcc2c4fbb45fdb114630649d60c6e574a619
Signed-off-by: Vadim Bendebury <vbendeb@chromium.org>
Reviewed-on: https://gerrit.chromium.org/gerrit/40279
Reviewed-by: Richard Barnette <jrbarnette@chromium.org>
Reviewed-by: Alex Miller <milleral@chromium.org>
diff --git a/global_config.ini b/global_config.ini
index 4662d88..4575fd6 100644
--- a/global_config.ini
+++ b/global_config.ini
@@ -145,11 +145,12 @@
 sharding_factor: 1
 infrastructure_users: chromeos-test
 
-image_url_pattern: %s/update/%s
-package_url_pattern: %s/static/archive/%s/autotest/packages
-log_url_pattern: http://%s/tko/retrieve_logs.cgi?job=/results/%s/
 delta_payload_url_pattern: %s/static/%s-release/%s-%s/au/%s-%s_%s/update.gz
+firmware_url_pattern: %s/static/archive/%s/firmware_from_source.tar.bz2
 full_payload_url_pattern: %s/static/%s-release/%s-%s/update.gz
+image_url_pattern: %s/update/%s
+log_url_pattern: http://%s/tko/retrieve_logs.cgi?job=/results/%s/
+package_url_pattern: %s/static/archive/%s/autotest/packages
 test_image_url_pattern: %s/static/images/%s-release/%s-%s/chromiumos_test_image.bin
 # Username and password for connecting to remote power control switches of
 # the "Sentry Switched CDU" type
diff --git a/server/cros/dynamic_suite/constants.py b/server/cros/dynamic_suite/constants.py
index 28390d6..47d9682 100644
--- a/server/cros/dynamic_suite/constants.py
+++ b/server/cros/dynamic_suite/constants.py
@@ -8,9 +8,10 @@
 JOB_SUITE_KEY = 'suite'
 
 # Job attribute and label names
+EXPERIMENTAL_PREFIX = 'experimental_'
+FW_VERSION_PREFIX = 'fw-version:'
 JOB_REPO_URL = 'job_repo_url'
 VERSION_PREFIX = 'cros-version:'
-EXPERIMENTAL_PREFIX = 'experimental_'
 
 # Timings
 ARTIFACT_FINISHED_TIME = 'artifact_finished_time'
diff --git a/server/cros/dynamic_suite/dynamic_suite.py b/server/cros/dynamic_suite/dynamic_suite.py
index af2026b..c5f4082 100644
--- a/server/cros/dynamic_suite/dynamic_suite.py
+++ b/server/cros/dynamic_suite/dynamic_suite.py
@@ -15,7 +15,8 @@
 from autotest_lib.server.cros.dynamic_suite import frontend_wrappers
 from autotest_lib.server.cros.dynamic_suite import host_lock_manager, job_status
 from autotest_lib.server.cros.dynamic_suite.job_status import Status
-from autotest_lib.server.cros.dynamic_suite.reimager import Reimager
+from autotest_lib.server.cros.dynamic_suite.reimager import FwReimager
+from autotest_lib.server.cros.dynamic_suite.reimager import OsReimager
 from autotest_lib.server.cros.dynamic_suite.suite import Suite
 from autotest_lib.server import frontend
 
@@ -431,8 +432,13 @@
     tko = frontend_wrappers.RetryingTKO(timeout_min=30, delay_sec=10,
                                         user=suite_spec.job.user, debug=False)
     manager = host_lock_manager.HostLockManager(afe=afe)
-    reimager = Reimager(suite_spec.job.autodir, afe, tko,
-                        results_dir=suite_spec.job.resultdir)
+    if dargs.get('firmware_reimage'):
+        reimager_class = FwReimager
+    else:
+        reimager_class = OsReimager
+
+    reimager = reimager_class(suite_spec.job.autodir, suite_spec.board, afe,
+                              tko, results_dir=suite_spec.job.resultdir)
 
     _perform_reimage_and_run(suite_spec, afe, tko, reimager, manager)
 
@@ -452,10 +458,10 @@
     """
     with host_lock_manager.HostsLockedBy(manager):
         tests_to_skip = []
-        if spec.skip_reimage or reimager.attempt(spec.build, spec.board,
-                spec.pool, spec.devserver, spec.job.record_entry,
-                spec.check_hosts, manager, tests_to_skip, spec.dependencies,
-                num=spec.num, timeout_mins=spec.try_job_timeout_mins):
+        if spec.skip_reimage or reimager.attempt(spec.build, spec.pool,
+                spec.devserver, spec.job.record_entry, spec.check_hosts,
+                manager, tests_to_skip, spec.dependencies, num=spec.num,
+                timeout_mins=spec.try_job_timeout_mins):
             # Ensure that the image's artifacts have completed downloading.
             try:
                 spec.devserver.finish_download(spec.build)
@@ -471,7 +477,8 @@
                 spec.name, tests_to_skip, spec.build, spec.devserver,
                 afe=afe, tko=tko, pool=spec.pool,
                 results_dir=spec.job.resultdir,
-                max_runtime_mins=spec.max_runtime_mins)
+                max_runtime_mins=spec.max_runtime_mins,
+                version_prefix=reimager.version_prefix)
 
             suite.run_and_wait(spec.job.record_entry, manager,
                                spec.add_experimental)
diff --git a/server/cros/dynamic_suite/reimager.py b/server/cros/dynamic_suite/reimager.py
index 3224ff1..aa4f5a4 100644
--- a/server/cros/dynamic_suite/reimager.py
+++ b/server/cros/dynamic_suite/reimager.py
@@ -29,25 +29,38 @@
 
 class Reimager(object):
     """
-    A class that can run jobs to reimage devices.
+    A base class that can run jobs to reimage devices.
 
+    Is subclassed to create reimagers for Chrome OS and firmware, which use
+    different autotests to perform the action.
+
+    @var _board_label: a string, label of the board type to reimage
     @var _afe: a frontend.AFE instance used to talk to autotest.
     @var _tko: a frontend.TKO instance used to query the autotest results db.
     @var _results_dir: The directory where the job can write results to.
                        This must be set if you want the 'name_job-id' tuple
                        of each per-device reimaging job listed in the
                        parent reimaging job's keyvals.
-    @var _cf_getter: a ControlFileGetter used to get the AU control file.
+    @var _cf_getter: a ControlFileGetter used to get the appropriate autotest
+                       control file.
+    @var _version_prefix: a string, prefix for storing the build version in the
+                       AFE database. Set by the derived classes' constructors.
+    @var _control_file: a string, name of the file controlling the appropriate
+                       reimaging autotest
+    @var _url_pattern: a string, format used to generate url of the image on
+                       the devserver
     """
 
     JOB_NAME = 'try_new_image'
 
 
-    def __init__(self, autotest_dir, afe=None, tko=None, results_dir=None):
+    def __init__(self, autotest_dir, board_label, afe=None, tko=None,
+                 results_dir=None):
         """
         Constructor
 
         @param autotest_dir: the place to find autotests.
+        @param board_label: a string, label of the board type to reimage
         @param afe: an instance of AFE as defined in server/frontend.py.
         @param tko: an instance of TKO as defined in server/frontend.py.
         @param results_dir: The directory where the job can write results to.
@@ -55,6 +68,7 @@
                             of each per-device reimaging job listed in the
                             parent reimaging job's keyvals.
         """
+        self._board_label = board_label
         self._afe = afe or frontend_wrappers.RetryingAFE(timeout_min=30,
                                                          delay_sec=10,
                                                          debug=False)
@@ -65,10 +79,12 @@
         self._reimaged_hosts = {}
         self._cf_getter = control_file_getter.FileSystemGetter(
             [os.path.join(autotest_dir, 'server/site_tests')])
+        self._version_prefix = None
+        self._control_file = None
 
 
-    def attempt(self, build, board, pool, devserver, record, check_hosts,
-                manager, tests_to_skip, dependencies={'':[]}, num=None,
+    def attempt(self, build, pool, devserver, record, check_hosts, manager,
+                tests_to_skip, dependencies={'':[]}, num=None,
                 timeout_mins=DEFAULT_TRY_JOB_TIMEOUT_MINS):
         """
         Synchronously attempt to reimage some machines.
@@ -98,7 +114,6 @@
 
         @param build: the build to install e.g.
                       x86-alex-release/R18-1655.0.0-a1-b1584.
-        @param board: which kind of devices to reimage.
         @param pool: Specify the pool of machines to use for scheduling
                 purposes.
         @param devserver: an instance of a devserver to use to complete this
@@ -127,11 +142,10 @@
         logging.debug("scheduling reimaging across at most %d machines", num)
         begin_time_str = datetime.datetime.now().strftime(job_status.TIME_FMT)
         try:
-            self._ensure_version_label(constants.VERSION_PREFIX + build)
-
+            self._ensure_version_label(self._version_prefix + build)
             # Figure out what kind of hosts we need to grab.
             per_test_specs = self._build_host_specs_from_dependencies(
-                board, pool, dependencies)
+                self._board_label, pool, dependencies)
 
             # Pick hosts to use, make sure we have enough (if needed).
             to_reimage = self._build_host_group(set(per_test_specs.values()),
@@ -146,8 +160,8 @@
                        begin_time_str=begin_time_str).record_all(record)
 
             # Schedule job and record job metadata.
-            canary_job = self._schedule_reimage_job(build, to_reimage,
-                                                    devserver)
+            canary_job = self._schedule_reimage_job(
+                {'image_name': build}, to_reimage, devserver)
 
             self._record_job_if_possible(Reimager.JOB_NAME, canary_job)
             logging.info('Created re-imaging job: %d', canary_job.id)
@@ -211,6 +225,12 @@
         return should_continue
 
 
+    @property
+    def version_prefix(self):
+        """Report version prefix associated with this reimaging job."""
+        return self._version_prefix
+
+
     def _build_host_specs_from_dependencies(self, board, pool, deps):
         """
         Return a dict of {test name: HostSpec}, given some test dependencies.
@@ -387,9 +407,9 @@
 
     def _ensure_version_label(self, name):
         """
-        Ensure that a label called |name| exists in the autotest DB.
+        Ensure that a label called |name| exists in the autotest DB.
 
-        @param name: the label to check for/create.
+        @param name: the label to check for/create.
         """
         try:
             self._afe.create_label(name=name)
@@ -401,28 +421,122 @@
                 raise ve
 
 
-    def _schedule_reimage_job(self, build, host_group, devserver):
+    def _schedule_reimage_job_base(self, host_group, params):
         """
-        Schedules the reimaging of |num_machines| |board| devices with |image|.
+        Schedules the reimaging of hosts in a host group.
 
         Sends an RPC to the autotest frontend to enqueue reimaging jobs on
         |num_machines| devices of type |board|.
 
-        @param build: the build to install (must be unique).
-        @param host_group: the HostGroup to be used for this reimaging job.
-        @param devserver: an instance of devserver that DUTs should use to get
-                          build artifacts from.
+        @param host_group: a HostGroup object representing the set of hosts
+                to be reimaged.
 
+        @param params: a dictionary where keys and values are strings to be
+                injected as assignments into the scheduling autotest control
+                file. The dictionary contains reimaging type specific
+                information.
         @return a frontend.Job object for the reimaging job we scheduled.
         """
-        image_url = tools.image_url_pattern() % (devserver.url(), build)
+        params['image_url'] = self._url_pattern % (
+            params['devserver_url'], params['image_name'])
+
         control_file = tools.inject_vars(
-            dict(image_url=image_url, image_name=build,
-                 devserver_url=devserver.url()),
-            self._cf_getter.get_control_file_contents_by_name('autoupdate'))
+            params,
+            self._cf_getter.get_control_file_contents_by_name(
+                self._control_file))
 
         return self._afe.create_job(control_file=control_file,
-                                     name=build + '-try',
+                                     name=params['image_name'] + '-try',
                                      control_type='Server',
                                      priority='Low',
                                      **host_group.as_args())
+
+class OsReimager(Reimager):
+    """
+    A class that can run jobs to reimage Chrome OS on devices.
+
+    See attributes' description in the parent class docstring.
+    """
+
+    def __init__(self, autotest_dir, board, afe=None, tko=None,
+                 results_dir=None):
+        """Constructor
+
+        See parameters' description in the parent class constructor docstring.
+        """
+
+        super(OsReimager, self).__init__(autotest_dir, board, afe=afe, tko=tko,
+                                         results_dir=results_dir)
+        self._version_prefix = constants.VERSION_PREFIX
+        self._control_file = 'autoupdate'
+        self._url_pattern = tools.image_url_pattern()
+
+    def _schedule_reimage_job(self, params, host_group, devserver):
+        """Schedules the reimaging of a group of hosts with a Chrome OS image.
+
+        Adds a parameter to the params dictionary and invokes the base class
+        reimaging function, which sends an RPC to the autotest frontend to
+        enqueue reimaging jobs on hosts in the host_group.
+
+        @param params: a dictionary where keys and values are strings, to be
+                  injected into the reimaging job control file as variable
+                  assignments. By the time this function is invoked the
+                  dictionary contains one element, the name of the build to
+                  use for reimaging.
+        @param host_group: the HostGroup to be used for this reimaging job.
+        @param devserver: an instance of devserver that DUTs should use to get
+                  build artifacts from.
+
+        @return a frontend.Job object for the scheduled reimaging job.
+
+        """
+        params['devserver_url'] = devserver.url()
+        return self._schedule_reimage_job_base(host_group, params)
+
+
+
+class FwReimager(Reimager):
+    """
+    A class that can run jobs to reimage firmware on devices.
+
+    See attributes' description in the parent class docstring.
+    """
+
+    def __init__(self, autotest_dir, board, afe=None, tko=None,
+                 results_dir=None):
+        """Constructor
+
+        See parameters' description in the parent class constructor docstring.
+        """
+
+        super(FwReimager, self).__init__(autotest_dir, board, afe=afe, tko=tko,
+                                         results_dir=results_dir)
+        self._version_prefix = constants.FW_VERSION_PREFIX
+        self._control_file = 'fwupdate'
+        self._url_pattern = tools.firmware_url_pattern()
+
+    def _schedule_reimage_job(self, params, host_group, devserver):
+        """Schedules the reimaging of a group of hosts with a Chrome OS image.
+
+        Makes sure that the artifacts download has been completed (firmware
+        tarball is downloaded asynchronously), then adds a few parameters to
+        the params dictionary and invokes the base class reimaging function, which
+        sends an RPC to the autotest frontend to enqueue reimaging jobs on
+        hosts in the host_group.
+
+        @param params: a dictionary where keys and values are strings, to be
+                  injected into the reimaging job control file as variable
+                  assignments. By the time this function is invoked the
+                  dictionary contains one element, the name of the build to
+                  use for reimaging.
+        @param host_group: the HostGroup to be used for this reimaging job.
+        @param devserver: an instance of devserver that DUTs should use to get
+                  build artifacts from.
+
+        @return a frontend.Job object for the scheduled reimaging job.
+
+        """
+        devserver.finish_download(params['image_name'])
+        params['devserver_url'] = devserver.url()
+        params['board'] = self._board_label.split(':')[-1]
+        return self._schedule_reimage_job_base(host_group, params)
diff --git a/server/cros/dynamic_suite/reimager_unittest.py b/server/cros/dynamic_suite/reimager_unittest.py
index 895ca40..b1d6aad 100644
--- a/server/cros/dynamic_suite/reimager_unittest.py
+++ b/server/cros/dynamic_suite/reimager_unittest.py
@@ -24,7 +24,7 @@
 from autotest_lib.server.cros.dynamic_suite.host_spec import HostGroup
 from autotest_lib.server.cros.dynamic_suite.host_spec import HostSpec
 from autotest_lib.server.cros.dynamic_suite.host_spec import MetaHostGroup
-from autotest_lib.server.cros.dynamic_suite.reimager import Reimager
+from autotest_lib.server.cros.dynamic_suite.reimager import OsReimager
 from autotest_lib.server.cros.dynamic_suite.fakes import FakeHost, FakeJob
 from autotest_lib.server import frontend
 
@@ -56,7 +56,7 @@
         self.tko = self.mox.CreateMock(frontend.TKO)
         self.devserver = dev_server.ImageServer(self._DEVSERVER_URL)
         self.manager = self.mox.CreateMock(host_lock_manager.HostLockManager)
-        self.reimager = Reimager('', afe=self.afe, tko=self.tko)
+        self.reimager = OsReimager('', self._BOARD, afe=self.afe, tko=self.tko)
         # Having these ordered by complexity is important!
         host_spec_list = [HostSpec([self._BOARD, self._POOL])]
         for dep_list in self._DEPENDENCIES.itervalues():
@@ -236,9 +236,6 @@
         cf_getter = self.mox.CreateMock(control_file_getter.ControlFileGetter)
         cf_getter.get_control_file_contents_by_name('autoupdate').AndReturn('')
         self.reimager._cf_getter = cf_getter
-        self._CONFIG.override_config_value('CROS',
-                                           'image_url_pattern',
-                                           self._URL)
 
         hosts_per_spec = {HostSpec(['l1']): [FakeHost('h1')],
                           HostSpec(['l2']): [FakeHost('h2')],
@@ -247,15 +244,15 @@
         self.afe.create_job(
             control_file=mox.And(
                 mox.StrContains(self._BUILD),
-                mox.StrContains(self._URL % (self._DEVSERVER_URL,
-                                             self._BUILD))),
+                mox.StrContains(self._UPDATE_URL)),
             name=mox.StrContains(self._BUILD),
             control_type='Server',
             hosts=mox.SameElementsAs(hostnames),
             priority='Low')
         self.mox.ReplayAll()
         self.reimager._schedule_reimage_job(
-            self._BUILD, ExplicitHostGroup(hosts_per_spec), self.devserver)
+            {'image_name': self._BUILD},
+            ExplicitHostGroup(hosts_per_spec), self.devserver)
 
 
     def testPackageUrl(self):
@@ -289,7 +286,7 @@
         """
         self.mox.StubOutWithMock(self.reimager, '_ensure_version_label')
         self.mox.StubOutWithMock(self.reimager, '_build_host_group')
-        self.mox.StubOutWithMock(self.reimager, '_schedule_reimage_job')
+        self.mox.StubOutWithMock(self.reimager, '_schedule_reimage_job_base')
         self.mox.StubOutWithMock(self.reimager, '_clear_build_state')
 
         self.mox.StubOutWithMock(job_status, 'wait_for_jobs_to_start')
@@ -308,7 +305,7 @@
         self.reimager._build_host_group(
             mox.IgnoreArg(), self._NUM, check_hosts).AndReturn(host_group)
         self.reimager._schedule_reimage_job(
-            self._BUILD,
+            {'image_name': self._BUILD},
             host_group,
             self.devserver).AndReturn(canary_job)
 
@@ -332,7 +329,7 @@
             job_status.wait_for_jobs_to_finish(self.afe, [canary_job])
             job_status.gather_per_host_results(
                 mox.IgnoreArg(), mox.IgnoreArg(), [canary_job],
-                mox.StrContains(Reimager.JOB_NAME)).AndReturn(statuses)
+                mox.StrContains(OsReimager.JOB_NAME)).AndReturn(statuses)
 
         if statuses:
             ret_val = reduce(lambda v, s: v or s.is_good(),
@@ -351,8 +348,9 @@
         rjob = self.mox.CreateMock(base_job.base_job)
         self.reimager._clear_build_state(mox.StrContains(canary.hostnames[0]))
         self.mox.ReplayAll()
-        self.assertTrue(self.reimager.attempt(self._BUILD, self._BOARD,
-                                              self._POOL, self.devserver,
+
+        self.assertTrue(self.reimager.attempt(self._BUILD, self._POOL,
+                                              self.devserver,
                                               rjob.record_entry, True,
                                               self.manager, [],
                                               self._DEPENDENCIES))
@@ -372,8 +370,8 @@
         rjob.record_entry(StatusContains.CreateFromStrings('END ABORT'))
 
         self.mox.ReplayAll()
-        self.assertFalse(self.reimager.attempt(self._BUILD, self._BOARD,
-                                               self._POOL, self.devserver,
+        self.assertFalse(self.reimager.attempt(self._BUILD, self._POOL,
+                                               self.devserver,
                                                rjob.record_entry, True,
                                                self.manager, [],
                                                self._DEPENDENCIES))
@@ -390,8 +388,8 @@
         rjob = self.mox.CreateMock(base_job.base_job)
         self.reimager._clear_build_state(mox.StrContains(canary.hostnames[0]))
         self.mox.ReplayAll()
-        self.assertTrue(self.reimager.attempt(self._BUILD, self._BOARD,
-                                              self._POOL, self.devserver,
+        self.assertTrue(self.reimager.attempt(self._BUILD, self._POOL,
+                                              self.devserver,
                                               rjob.record_entry, True,
                                               self.manager, []))
         self.reimager.clear_reimaged_host_state(self._BUILD)
@@ -411,7 +409,7 @@
         self.reimager._clear_build_state(comparator)
         self.reimager._clear_build_state(comparator)
         self.mox.ReplayAll()
-        self.assertTrue(self.reimager.attempt(self._BUILD, self._BOARD, None,
+        self.assertTrue(self.reimager.attempt(self._BUILD, None,
                                               self.devserver,
                                               rjob.record_entry, True,
                                               self.manager, []))
@@ -434,8 +432,8 @@
         self.reimager._clear_build_state(mox.StrContains(canary.hostnames[0]))
         self.mox.ReplayAll()
         tests_to_skip = []
-        self.assertTrue(self.reimager.attempt(self._BUILD, self._BOARD,
-                                              self._POOL, self.devserver,
+        self.assertTrue(self.reimager.attempt(self._BUILD, self._POOL,
+                                              self.devserver,
                                               rjob.record_entry, True,
                                               self.manager, tests_to_skip,
                                               self._DEPENDENCIES))
@@ -464,8 +462,8 @@
         self.reimager._clear_build_state(comparator)
         self.mox.ReplayAll()
         tests_to_skip = []
-        self.assertTrue(self.reimager.attempt(self._BUILD, self._BOARD,
-                                              self._POOL, self.devserver,
+        self.assertTrue(self.reimager.attempt(self._BUILD, self._POOL,
+                                              self.devserver,
                                               rjob.record_entry, True,
                                               self.manager, tests_to_skip,
                                               self._DEPENDENCIES))
@@ -483,8 +481,8 @@
         rjob = self.mox.CreateMock(base_job.base_job)
         self.reimager._clear_build_state(mox.StrContains(canary.hostnames[0]))
         self.mox.ReplayAll()
-        self.assertFalse(self.reimager.attempt(self._BUILD, self._BOARD,
-                                               self._POOL, self.devserver,
+        self.assertFalse(self.reimager.attempt(self._BUILD, self._POOL,
+                                               self.devserver,
                                                rjob.record_entry, True,
                                                self.manager, [],
                                                self._DEPENDENCIES))
@@ -499,9 +497,9 @@
 
         rjob = self.mox.CreateMock(base_job.base_job)
         self.mox.ReplayAll()
-        self.reimager.attempt(self._BUILD, self._BOARD, self._POOL,
-                              self.devserver, rjob.record_entry, True,
-                              self.manager, [], self._DEPENDENCIES)
+        self.reimager.attempt(self._BUILD, self._POOL, self.devserver,
+                              rjob.record_entry, True, self.manager, [],
+                              self._DEPENDENCIES)
         self.reimager.clear_reimaged_host_state(self._BUILD)
 
 
@@ -518,9 +516,9 @@
                                                            reason=ex_message))
         rjob.record_entry(StatusContains.CreateFromStrings('END ERROR'))
         self.mox.ReplayAll()
-        self.reimager.attempt(self._BUILD, self._BOARD, self._POOL,
-                              self.devserver, rjob.record_entry, True,
-                              self.manager, [], self._DEPENDENCIES)
+        self.reimager.attempt(self._BUILD, self._POOL, self.devserver,
+                              rjob.record_entry, True, self.manager, [],
+                              self._DEPENDENCIES)
         self.reimager.clear_reimaged_host_state(self._BUILD)
 
 
@@ -535,8 +533,8 @@
         rjob = self.mox.CreateMock(base_job.base_job)
         self.reimager._clear_build_state(mox.StrContains(canary.hostnames[0]))
         self.mox.ReplayAll()
-        self.assertTrue(self.reimager.attempt(self._BUILD, self._BOARD,
-                                              self._POOL, self.devserver,
+        self.assertTrue(self.reimager.attempt(self._BUILD, self._POOL,
+                                              self.devserver,
                                               rjob.record_entry, False,
                                               self.manager, [],
                                               self._DEPENDENCIES))
@@ -564,9 +562,9 @@
                                                            reason=alarm_string))
         rjob.record_entry(StatusContains.CreateFromStrings('END WARN'))
         self.mox.ReplayAll()
-        self.reimager.attempt(self._BUILD, self._BOARD, self._POOL,
-                              self.devserver, rjob.record_entry, True,
-                              self.manager, [], self._DEPENDENCIES)
+        self.reimager.attempt(self._BUILD, self._POOL, self.devserver,
+                              rjob.record_entry, True, self.manager, [],
+                              self._DEPENDENCIES)
         self.reimager.clear_reimaged_host_state(self._BUILD)
 
 
@@ -592,7 +590,7 @@
                                                            reason=alarm_string))
         rjob.record_entry(StatusContains.CreateFromStrings('END ERROR'))
         self.mox.ReplayAll()
-        self.reimager.attempt(self._BUILD, self._BOARD, self._POOL,
-                              self.devserver, rjob.record_entry, True,
-                              self.manager, [], self._DEPENDENCIES)
+        self.reimager.attempt(self._BUILD, self._POOL, self.devserver,
+                              rjob.record_entry, True, self.manager, [],
+                              self._DEPENDENCIES)
         self.reimager.clear_reimaged_host_state(self._BUILD)
diff --git a/server/cros/dynamic_suite/suite.py b/server/cros/dynamic_suite/suite.py
index 308449d..05eab95 100644
--- a/server/cros/dynamic_suite/suite.py
+++ b/server/cros/dynamic_suite/suite.py
@@ -108,7 +108,8 @@
     @staticmethod
     def create_from_name(name, build, devserver, cf_getter=None, afe=None,
                          tko=None, pool=None, results_dir=None,
-                         max_runtime_mins=24*60):
+                         max_runtime_mins=24*60,
+                         version_prefix=constants.VERSION_PREFIX):
         """
         Create a Suite using a predicate based on the SUITE control file var.
 
@@ -129,20 +130,26 @@
         @param results_dir: The directory where the job can write results to.
                             This must be set if you want job_id of sub-jobs
                             list in the job keyvals.
+        @param version_prefix: a string, a prefix to be concatenated with the
+                               build name to form a label which the DUT needs
+                               to be labeled with to be eligible to run this
+                               test.
         @return a Suite instance.
         """
         if cf_getter is None:
             cf_getter = Suite.create_ds_getter(build, devserver)
 
         return Suite(Suite.name_in_tag_predicate(name),
-                     name, build, cf_getter, afe, tko, pool, results_dir)
+                     name, build, cf_getter, afe, tko, pool, results_dir,
+                     version_prefix=version_prefix)
 
 
     @staticmethod
     def create_from_name_and_blacklist(name, blacklist, build, devserver,
                                        cf_getter=None, afe=None, tko=None,
                                        pool=None, results_dir=None,
-                                       max_runtime_mins=24*60):
+                                       max_runtime_mins=24*60,
+                                       version_prefix=constants.VERSION_PREFIX):
         """
         Create a Suite using a predicate based on the SUITE control file var.
 
@@ -164,6 +171,10 @@
         @param results_dir: The directory where the job can write results to.
                             This must be set if you want job_id of sub-jobs
                             list in the job keyvals.
+        @param version_prefix: a string, the prefix concatenated with the
+                               build name to form the label that a DUT must
+                               bear in order to be eligible to run this
+                               test.
         @return a Suite instance.
         """
         if cf_getter is None:
@@ -176,11 +187,12 @@
 
         return Suite(in_tag_not_in_blacklist_predicate,
                      name, build, cf_getter, afe, tko, pool, results_dir,
-                     max_runtime_mins)
+                     max_runtime_mins, version_prefix)
 
 
     def __init__(self, predicate, tag, build, cf_getter, afe=None, tko=None,
-                 pool=None, results_dir=None, max_runtime_mins=24*60):
+                 pool=None, results_dir=None, max_runtime_mins=24*60,
+                 version_prefix=constants.VERSION_PREFIX):
         """
         Constructor
 
@@ -197,6 +209,8 @@
         @param results_dir: The directory where the job can write results to.
                             This must be set if you want job_id of sub-jobs
                             list in the job keyvals.
+        @param version_prefix: a string, prefix for the database label
+                               associated with the build
         """
         self._predicate = predicate
         self._tag = tag
@@ -215,7 +229,7 @@
                                                  self._predicate,
                                                  add_experimental=True)
         self._max_runtime_mins = max_runtime_mins
-
+        self._version_prefix = version_prefix
 
     @property
     def tests(self):
@@ -251,11 +265,11 @@
         job_deps = list(test.dependencies)
         if self._pool:
             meta_hosts = self._pool
-            cros_label = constants.VERSION_PREFIX + self._build
+            cros_label = self._version_prefix + self._build
             job_deps.append(cros_label)
         else:
             # No pool specified use any machines with the following label.
-            meta_hosts = constants.VERSION_PREFIX + self._build
+            meta_hosts = self._version_prefix + self._build
         test_obj = self._afe.create_job(
             control_file=test.text,
             name='/'.join([self._build, self._tag, test.name]),
diff --git a/server/cros/dynamic_suite/tools.py b/server/cros/dynamic_suite/tools.py
index 1f8a6f2..c79e320 100644
--- a/server/cros/dynamic_suite/tools.py
+++ b/server/cros/dynamic_suite/tools.py
@@ -16,6 +16,10 @@
     return _CONFIG.get_config_value('CROS', 'image_url_pattern', type=str)
 
 
+def firmware_url_pattern():
+    return _CONFIG.get_config_value('CROS', 'firmware_url_pattern', type=str)
+
+
 def sharding_factor():
     return _CONFIG.get_config_value('CROS', 'sharding_factor', type=int)
 
diff --git a/server/cros/dynamic_suite/tools_unittest.py b/server/cros/dynamic_suite/tools_unittest.py
index 0f1f40c..9e0871e 100644
--- a/server/cros/dynamic_suite/tools_unittest.py
+++ b/server/cros/dynamic_suite/tools_unittest.py
@@ -18,7 +18,7 @@
 from autotest_lib.server.cros.dynamic_suite.host_spec import HostSpec
 from autotest_lib.server.cros.dynamic_suite import host_spec
 from autotest_lib.server.cros.dynamic_suite import tools
-from autotest_lib.server.cros.dynamic_suite.reimager import Reimager
+from autotest_lib.server.cros.dynamic_suite.reimager import OsReimager
 from autotest_lib.server import frontend
 
 
@@ -36,7 +36,7 @@
         super(DynamicSuiteToolsTest, self).setUp()
         self.afe = self.mox.CreateMock(frontend.AFE)
         self.tko = self.mox.CreateMock(frontend.TKO)
-        self.reimager = Reimager('', afe=self.afe, tko=self.tko)
+        self.reimager = OsReimager('', self._BOARD, afe=self.afe, tko=self.tko)
         # Having these ordered by complexity is important!
         host_spec_list = [HostSpec([self._BOARD, self._POOL])]
         for dep_list in self._DEPENDENCIES.itervalues():
diff --git a/server/site_tests/fwupdate/control b/server/site_tests/fwupdate/control
index 9786930..7b5fb5a 100644
--- a/server/site_tests/fwupdate/control
+++ b/server/site_tests/fwupdate/control
@@ -22,13 +22,42 @@
 args_dict = utils.args_to_dict(args)
 servo_args = hosts.SiteHost.get_servo_arguments(args_dict)
 
+# When this is run by the dynamic suite, board and url come from local
+# variables.
+
+dut_board = locals().get('board')
+if dut_board is None:
+    dut_board = args_dict['board']
+    fwurl = args_dict['fwurl']
+    AFE = None
+else:
+    from autotest_lib.server.cros.dynamic_suite import frontend_wrappers
+    from autotest_lib.server.cros.dynamic_suite import constants
+    AFE = frontend_wrappers.RetryingAFE(
+        timeout_min=1, delay_sec=10, debug=False)
+    fwurl = locals().get('image_url')
+
+def clear_version_labels(machine):
+    """Clear all build-specific labels, attributes from the target.
+
+    Following suite after server/site_tests/autoupdate/control
+
+    @param machine: the host to clear labels, attributes from.
+    """
+    labels = AFE.get_labels(name__startswith=constants.FW_VERSION_PREFIX,
+                            host__hostname__in=[machine])
+    for label in labels: label.remove_hosts(hosts=[machine])
+    AFE.set_host_attribute('job_repo_url', None, hostname=machine)
+
 def run_fwupdate(machine):
     host = hosts.create_host(machine, servo_args=servo_args)
-    # TODO(vbendeb): both board type and tarball url need to be provided in
-    # both cases, when when invoked through the command line and by the
-    # scheduler. Right now only command line invocation is supported.
-    # (http://crosbug.com/37148)
-    job.run_test('fwupdate', servo=host.servo, board=args_dict['board'],
-                 fwurl=args_dict['fwurl'])
+    if AFE:
+        clear_version_labels(machine)
+    if job.run_test('fwupdate', servo=host.servo,
+                    board=dut_board, fwurl=fwurl) and AFE:
+        label = AFE.get_labels(
+            name__startswith=constants.FW_VERSION_PREFIX + image_name)[0]
+        label.add_hosts([machine])
+        AFE.set_host_attribute('job_repo_url', fwurl, hostname=machine)
 
 parallel_simple(run_fwupdate, machines)
diff --git a/server/site_tests/fwupdate/fwupdate.py b/server/site_tests/fwupdate/fwupdate.py
index ff129b8..cf9de8a 100644
--- a/server/site_tests/fwupdate/fwupdate.py
+++ b/server/site_tests/fwupdate/fwupdate.py
@@ -14,14 +14,18 @@
 
     def initialize(self, servo, board, fwurl):
         self.tmpd = autotemp.tempdir(unique_id='fwimage')
-        tarball = os.path.basename(fwurl)
-        utils.system('gsutil cp %s %s' % (fwurl, self.tmpd.name))
+        local_tarball = os.path.join(self.tmpd.name,
+                                     os.path.basename(fwurl))
+        if fwurl.startswith('gs://'):
+            utils.system('gsutil cp %s %s' % (fwurl, local_tarball))
+        else:
+            utils.system('wget -O %s %s' % (local_tarball, fwurl))
         self._ap_image = 'image-%s.bin' % board
         self._ec_image = 'ec.bin'
         self._board = board
         self._servo = servo
         utils.system('tar xf %s -C %s %s %s' % (
-                os.path.join(self.tmpd.name, tarball), self.tmpd.name,
+                local_tarball, self.tmpd.name,
                 self._ap_image, self._ec_image), timeout=60)
 
     def cleanup(self):
diff --git a/test_suites/control.faft_lv1 b/test_suites/control.faft_lv1
new file mode 100644
index 0000000..5a0c3ed
--- /dev/null
+++ b/test_suites/control.faft_lv1
@@ -0,0 +1,40 @@
+# Copyright (c) 2012 The Chromium OS Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+AUTHOR = "Chrome OS Team"
+NAME = "faft"
+PURPOSE = "Test hard-to-automate firmware and ec scenarios."
+CRITERIA = "All tests with SUITE=faft must pass."
+
+TIME = "SHORT"
+TEST_CATEGORY = "General"
+TEST_CLASS = "suite"
+TEST_TYPE = "Server"
+
+DOC = """
+This is the faft (FULLY AUTOMATED FIRMWARE TEST) suite.  The tests in this
+suite verify that normal boot scenarios progress properly (with state progress
+checks) and that error scenarios (corrupted blobs) are caught as expected.
+Some of these test failures should close the tree, as they may imply that the
+system is unbootable and further tests will only hang or be blocked.
+Other faft tests verify that all of the features (some of them
+security-related) are functioning.
+
+@param build: The name of the image to test.
+              Ex: x86-mario-release/R17-1412.33.0-a1-b29
+@param board: The board to test on.  Ex: x86-mario
+@param pool: The pool of machines to utilize for scheduling. If pool=None,
+             the board is used.
+@param check_hosts: require appropriate live hosts to exist in the lab.
+@param SKIP_IMAGE: (optional) If present and True, don't re-image devices.
+"""
+
+import common
+from autotest_lib.server.cros.dynamic_suite import dynamic_suite
+
+dynamic_suite.reimage_and_run(
+    build=build, board=board, name='faft_lv1', job=job,
+    pool=pool, check_hosts=check_hosts, add_experimental=True, num=num,
+    file_bugs=file_bugs, skip_reimage=dynamic_suite.skip_reimage(globals()),
+    firmware_reimage=True)