commit b6aed51feee1d07fa2465fa3b2fe53cd3f247def
author:    Po-Hsien Wang <pwang@chromium.org>    Wed Jun 07 17:53:34 2017 -0700
committer: chrome-bot <chrome-bot@chromium.org>  Mon Jun 12 22:33:23 2017 -0700
tree:      25cc0a4bd2767ebbf599f81fec653fce860a74cc
parent:    ec0355f640703c1aea462a623e0afbd011717aa9
Report failures for most of the graphics_test to the Chrome perf dashboard
for regression detection.

1) Add a docstring to graphics_utils.GraphicsTest.
2) Add helper functions failure_report and failure_report_decorator to the
   GraphicsTest class to report failures to the Chrome perf dashboard when a
   function does not finish executing normally.
3) Add failure reporting to the following tests:
   graphics_WebGLAquarium, graphics_GLMark2, graphics_Gbm,
   graphics_SanAngeles, graphics_WebGLManyPlanetsDeep, graphics_GLAPICheck,
   graphics_WebGLPerformance
4) Remaining tests still to report failures:
   graphics_GLBench, graphics_WebGLClear, graphics_KernelMemory,
   graphics_dEQP, graphics_GpuReset, graphics_PerfControl, graphics_VTSwitch
5) Modify tko/perf_upload/perf_dashboard_config.json.

BUG=chromium:717664
TEST=[1] test_that graphics_WebGLAquarium
     /tmp/test_that_results_YvUmYu/results-1-graphics_WebGLAquarium [ PASSED ]
     Total PASS: 2/2 (100%)
TEST=[2] test_that graphics_GLMark2
     /tmp/test_that_results_IaISZ2/results-1-graphics_GLMark2 [ PASSED ]
     Total PASS: 2/2 (100%)
TEST=[3] test_that graphics_Gbm
     /tmp/test_that_results_Lz1rvI/results-1-graphics_Gbm [ PASSED ]
     Total PASS: 2/2 (100%)
TEST=[4] test_that graphics_SanAngeles
     /tmp/test_that_results_4600Gs/results-1-graphics_SanAngeles [ PASSED ]
     Total PASS: 2/2 (100%)
TEST=[5] test_that graphics_WebGLManyPlanetsDeep
     /tmp/test_that_results_Yw8aAt/results-1-graphics_WebG... [ PASSED ]
     Total PASS: 2/2 (100%)
TEST=[6] test_that graphics_GLAPICheck
     /tmp/test_that_results_lq1ryW/results-1-graphics_GLAP... [ PASSED ]
     Total PASS: 2/2 (100%)
TEST=[7] test_that graphics_WebGLPerformance
     /tmp/test_that_results_9xQJ3_/results-1-graphics_WebG... [ PASSED ]
     Total PASS: 2/2 (100%)
TEST= http://jsonlint.com --- Valid JSON

Change-Id: Ic3d33373f7d1903472c3951e7e915c1f1afc5fa6
Reviewed-on: https://chromium-review.googlesource.com/528366
Commit-Ready: Ilja H. Friedel <ihf@chromium.org>
Tested-by: Ilja H. Friedel <ihf@chromium.org>
Reviewed-by: Ilja H. Friedel <ihf@chromium.org>
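The failure_report_decorator pattern the commit describes can be sketched as a decorator that records a failure metric when the wrapped function does not finish normally. The class and attribute names below (FakeGraphicsTest, failures) are illustrative stand-ins, not the real API; see graphics_utils.GraphicsTest in the Autotest tree for the actual helpers.

```python
import functools


def failure_report_decorator(test_name):
    """Record test_name as failed if the wrapped method raises (sketch)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            try:
                return fn(self, *args, **kwargs)
            except Exception:
                # In the real helper this would be reported to the
                # Chrome perf dashboard; here we just record it.
                self.failures.append(test_name)
                raise
        return wrapper
    return decorator


class FakeGraphicsTest:
    """Illustrative stand-in for graphics_utils.GraphicsTest."""
    def __init__(self):
        self.failures = []

    @failure_report_decorator('graphics_GLMark2')
    def run_once(self):
        raise RuntimeError('simulated GPU hang')


t = FakeGraphicsTest()
try:
    t.run_once()
except RuntimeError:
    pass
```

After the simulated run, t.failures contains 'graphics_GLMark2', which is the signal the dashboard would use for regression detection.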
Autotest is a framework for fully automated testing. It was originally designed to test the Linux kernel, and expanded by the Chrome OS team to validate complete system images of Chrome OS and Android.
Autotest is composed of a number of modules that help you run standalone tests or set up a fully automated test grid, depending on your needs. A non-exhaustive list of functionality:
A body of code to run tests on the device under test. In this setup, test logic executes on the machine being tested, and results are written to files for later collection from a development machine or lab infrastructure.
A body of code to run tests against a remote device under test. In this setup, test logic executes on a development machine or piece of lab infrastructure, and the device under test is controlled remotely via SSH/adb/some combination of the above.
Developer tools to execute one or more tests. test_that for Chrome OS and test_droid for Android allow developers to run tests against a device connected to their development machine at their desk. These tools are written so that the same test logic that runs in the lab also runs at the desk, reducing the number of configurations under which tests are run.
Lab infrastructure to automate the running of tests. This infrastructure is capable of managing and running tests against thousands of devices in various lab environments. This includes code for both synchronous and asynchronous scheduling of tests. Tests are run against this hardware daily to validate every build of Chrome OS.
Infrastructure to set up miniature replicas of a full lab. A full lab entails a certain amount of administrative work that isn't appropriate for a workgroup interested in automated testing of a small set of devices. Since this scale is common during device bringup, a special setup, called Moblab, allows a natural progression from desk -> mini lab -> full lab.
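In all of these environments, an individual test run is described by a control file, a small Python file the framework executes with a `job` object in scope. The sketch below is illustrative only: the field values and test name are made up, and the `_StubJob` class exists solely to make the fragment self-contained (the real framework injects `job`).

```python
# Sketch of a minimal client-side Autotest control file.
# All names here are illustrative; see existing tests in the
# autotest tree for real control files.
NAME = "example_Smoke"
TIME = "SHORT"
TEST_TYPE = "client"
DOC = "Runs the client test implemented in example_Smoke.py."


class _StubJob:
    """Stand-in for the framework-provided job object."""
    def __init__(self):
        self.calls = []

    def run_test(self, url, **dargs):
        # The real job object locates and executes the named test;
        # the stub just records what would have run.
        self.calls.append((url, dargs))


job = _StubJob()  # in a real run, the framework supplies this
job.run_test('example_Smoke', iterations=1)
```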
See the guides to test_that and test_droid.
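As a rough sketch of the workflow the guides cover: test_that is typically invoked with a board, a device address, and a test name. The board, address, and test below are placeholders, and the function only echoes the command it would run, since a real invocation needs the Chrome OS chroot and a reachable device under test.

```shell
# Dry-run sketch of a typical test_that invocation. Board name,
# DUT address, and test name are placeholders.
run_test_that() {
    board="$1"; dut="$2"; test_name="$3"
    # A real environment would execute this instead of echoing it:
    echo "test_that -b ${board} ${dut} ${test_name}"
}

run_test_that eve 192.168.1.2 graphics_GLMark2
```

test_droid follows the same desk-oriented model for Android devices; see its guide for the exact arguments.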
See the best practices guide, existing tests, and comments in the code.
git clone https://chromium.googlesource.com/chromiumos/third_party/autotest
See the coding style guide for guidance on submitting patches.