commit def7d57a978ab27b90c6498c83584479f1cafe8e
author: Brian Norris <briannorris@chromium.org>  Tue Aug 28 18:43:56 2018 -0700
committer: chrome-bot <chrome-bot@chromium.org>  Thu Sep 27 19:44:20 2018 -0700
tree: 79164ed314febc9b932c675be5b435d57e75cc09
parent: 9a4a4cc5030ad28a2cb49109f9608220db45b49b
[autotest] netperf_runner: filter out 'catcher: timer popped ...'

I see a smattering of reports like these across the lab:

    MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.254 () port 12866 AF_INET
    catcher: timer popped with times_up != 0
    Recv   Send    Send
    Socket Socket  Message  Elapsed
    Size   Size    Size     Time     Throughput
    bytes  bytes   bytes    secs.    10^6bits/sec

     87380  16384  16384    2.00      207.87

Because we expected results to be on exactly the 6th or 7th line, this trips us up with an exception:

    Unhandled IndexError: list index out of range

Looking at the netperf code, it seems this can happen if, e.g., netperf gets a SIGINT at the same time as its timer alarm. Possibly for other reasons too. In any case, all the results I've seen still look valid, so let's ignore that warning.

While we're at it, make this slightly less fragile by avoiding hard-coded line numbers, and instead only processing numeric lines.

BUG=chromium:889556
TEST=network_WiFi_Perf

Change-Id: I7d3ce90019851e9370b42d3ba39ec6d8f0266457
Signed-off-by: Brian Norris <briannorris@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/1246840
Commit-Ready: ChromeOS CL Exonerator Bot <chromiumos-cl-exonerator@appspot.gserviceaccount.com>
Reviewed-by: Kirtika Ruchandani <kirtika@chromium.org>
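The hardening described in the commit message — keep only lines whose whitespace-separated fields are all numeric, instead of assuming results sit on a fixed line — can be sketched as follows. This is a simplified illustration, not the actual netperf_runner code; the function name and the sample output are assumptions for demonstration.

```python
def parse_netperf_results(output):
    """Return rows of numeric fields parsed from netperf output.

    Rather than indexing a fixed line number (which breaks when netperf
    emits extra warnings such as 'catcher: timer popped with
    times_up != 0'), skip any line containing a non-numeric field.
    """
    results = []
    for line in output.splitlines():
        fields = line.split()
        if not fields:
            continue  # blank line
        try:
            row = [float(f) for f in fields]
        except ValueError:
            continue  # banner, column header, or warning line
        results.append(row)
    return results


# Hypothetical sample mirroring the output quoted in the commit message.
sample = """\
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
catcher: timer popped with times_up != 0
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    2.00      207.87
"""

rows = parse_netperf_results(sample)
# Only the all-numeric results line survives the filter.
```

Because the filter keys on content rather than position, the stray `catcher:` warning simply falls through the `ValueError` branch along with the headers.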
Autotest is a framework for fully automated testing. It was originally designed to test the Linux kernel, and expanded by the Chrome OS team to validate complete system images of Chrome OS and Android.
Autotest is composed of a number of modules that help you run standalone tests or set up a fully automated test grid, depending on your needs. A non-exhaustive list of functionality:
A body of code to run tests on the device under test. In this setup, test logic executes on the machine being tested, and results are written to files for later collection from a development machine or lab infrastructure.
A body of code to run tests against a remote device under test. In this setup, test logic executes on a development machine or piece of lab infrastructure, and the device under test is controlled remotely via SSH/adb/some combination of the above.
Developer tools to execute one or more tests. test_that for Chrome OS and test_droid for Android allow developers to run tests against a device connected to their development machine at their desk. These tools are written so that the same test logic that runs in the lab also runs at their desk, reducing the number of configurations under which tests are run.
Lab infrastructure to automate the running of tests. This infrastructure is capable of managing and running tests against thousands of devices in various lab environments. This includes code for both synchronous and asynchronous scheduling of tests. Tests are run against this hardware daily to validate every build of Chrome OS.
Infrastructure to set up miniature replicas of a full lab. A full lab entails a certain amount of administrative work which isn't appropriate for a work group interested in automated tests against a small set of devices. Since this scale is common during device bringup, a special setup, called Moblab, allows a natural progression from desk -> mini lab -> full lab.
See the guides to test_that and test_droid.
See the best practices guide, existing tests, and comments in the code.
git clone https://chromium.googlesource.com/chromiumos/third_party/autotest
See the coding style guide for guidance on submitting patches.
You need to run utils/build_externals.py to set up the dependencies for pre-upload hook tests.