[docs] Port markdown to Docsy
This CL copies all the documentation in /site into /site2
and adds frontmatter to each page.
Additionally it adds a Hugo `config.toml` file.
Once the new documentation server is live the original /site
directory will be removed and /site2 will be renamed /site.
Bugs: skia:11799
Change-Id: Ic300cf5c2a2a8fa2f9acc3455251bf818cb96a52
Docs-Preview: https://skia.org/?cl=386116
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/386116
Reviewed-by: Joe Gregorio <jcgregorio@google.com>
diff --git a/site2/docs/dev/testing/_index.md b/site2/docs/dev/testing/_index.md
new file mode 100644
index 0000000..ca10247
--- /dev/null
+++ b/site2/docs/dev/testing/_index.md
@@ -0,0 +1,23 @@
+---
+title: "Testing"
+linkTitle: "Testing"
+
+weight: 3
+
+---
+
+
+For correctness testing, Skia relies heavily on our suite of unit and GM tests,
+which are run by our DM test tool. Tests are executed by our trybots for every
+commit, across most of our supported platforms and configurations.
+Skia [Gold](https://gold.skia.org) is a web interface for triaging these results.
+
+We also have a robust set of performance tests, served by the nanobench tool and
+accessible via the Skia [Perf](https://perf.skia.org) web interface.
+
+Cluster Telemetry is a powerful framework that helps us capture and benchmark
+SKP files, a binary format for draw commands, across up to one million websites.
+
+See the individual subpages for more details on our various test tools.
+
diff --git a/site2/docs/dev/testing/automated_testing.md b/site2/docs/dev/testing/automated_testing.md
new file mode 100644
index 0000000..48f687c
--- /dev/null
+++ b/site2/docs/dev/testing/automated_testing.md
@@ -0,0 +1,193 @@
+---
+title: "Skia Automated Testing"
+linkTitle: "Skia Automated Testing"
+
+---
+
+
+Overview
+--------
+
+Skia uses [Swarming](https://github.com/luci/luci-py/blob/master/appengine/swarming/doc/Design.md)
+to do the heavy lifting for our automated testing. It farms out tasks, which may
+consist of compiling code, running tests, or any number of other things, to our
+bots, which are virtual or real machines living in our local lab, Chrome Infra's
+lab, or in GCE.
+
+The [Skia Task Scheduler](http://go/skia-task-scheduler) determines what tasks
+should run on what bots at what time. See the link for a detailed explanation of
+how relative task priorities are derived. A *task* corresponds to a single
+Swarming task. A *job* is composed of a directed acyclic graph of one or more
+*tasks*. A job succeeds when all of its component tasks succeed and fails when
+any of its component tasks fails. The scheduler may automatically retry tasks
+within its set limits; jobs are not retried.
+Multiple jobs may share the same task, for example, tests on two different
+Android devices which use the same compiled code.
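
The job/task relationship above can be sketched as a small graph traversal. The JSON below is a toy, simplified stand-in for the `tasks.json` shape (the task and job names, and the exact field names, are hypothetical, not the real schema):

```python
import json

# Toy tasks.json-like data: jobs reference tasks, and tasks form a DAG
# via their dependencies. All names here are made up for illustration.
sample = json.loads("""
{
  "tasks": {
    "Build-Debian10-Clang": {"dependencies": []},
    "Test-Debian10-Clang-GTX660": {"dependencies": ["Build-Debian10-Clang"]}
  },
  "jobs": {
    "Test-Debian10-Clang-GTX660": {"tasks": ["Test-Debian10-Clang-GTX660"]}
  }
}
""")

def job_task_closure(job_name):
    """Return every task the job transitively depends on, including its own."""
    seen = set()
    stack = list(sample["jobs"][job_name]["tasks"])
    while stack:
        task = stack.pop()
        if task not in seen:
            seen.add(task)
            stack.extend(sample["tasks"][task]["dependencies"])
    return seen

print(sorted(job_task_closure("Test-Debian10-Clang-GTX660")))
```

A job is complete only when every task in this closure has succeeded, which is why two jobs that share a compile task both wait on (and reuse) the same Swarming task.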
+
+Each Skia repository has an `infra/bots/tasks.json` file which defines the jobs
+and tasks for the repo. Most jobs will run at every commit, but it is possible
+to specify nightly and weekly jobs as well. For convenience, most repos also
+have a `gen_tasks.go` which will generate `tasks.json`. You will need to
+[install Go](https://golang.org/doc/install). From the repository root:
+
+ $ go run infra/bots/gen_tasks.go
+
+It is necessary to run `gen_tasks.go` every time it is changed or every time an
+[asset](https://skia.googlesource.com/skia/+/master/infra/bots/assets/README.md)
+has changed. There is also a test mode which simply verifies that the `tasks.json`
+file is up to date:
+
+ $ go run infra/bots/gen_tasks.go --test
+
+
+
+Try Jobs
+--------
+
+Skia's trybots allow testing and verification of changes before they land in the
+repo. You need to have permission to trigger try jobs; if you need permission,
+ask a committer. After uploading your CL to [Gerrit](https://skia-review.googlesource.com/),
+you may trigger a try job for any job listed in `tasks.json`, either via the
+Gerrit UI or using `git cl try`, e.g.
+
+ git cl try -B skia.primary -b Some-Tryjob-Name
+
+or using `bin/try`, a small wrapper for `git cl try` which helps to choose try jobs.
+From a Skia checkout:
+
+ bin/try --list
+
+You can also search using regular expressions:
+
+ bin/try "Test.*GTX660.*Release"
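
Conceptually, `bin/try` is just matching the regular expression against the list of job names. A minimal sketch of that filtering (the job names below are hypothetical examples, not real bots):

```python
import re

# Hypothetical job names, stand-ins for entries in tasks.json.
jobs = [
    "Build-Debian10-Clang-x86_64-Release",
    "Test-Ubuntu18-Clang-GTX660-GPU-Release",
    "Perf-Mac10.15-Clang-MacBookPro-CPU-Debug",
]
pattern = re.compile("Test.*GTX660.*Release")
matches = [j for j in jobs if pattern.search(j)]
print(matches)
```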
+
+
+Status View
+------------
+
+The status view shows a table with tasks, grouped by test type and platform,
+on the X-axis and commits on the Y-axis. The cells are colored according to
+the status of the task for each commit:
+
+* green: success
+* orange: failure
+* purple: mishap (infrastructure issue)
+* black border, no fill: task in progress
+* blank: no task has started yet for a given revision
+
+Commits are listed by author, and the branch on which the commit was made is
+shown on the very left. A purple result will override an orange result.
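
The precedence rules above ("purple overrides orange") can be sketched as a small function. This logic is inferred from the description, not taken from the actual status-page code:

```python
def cell_color(statuses):
    """Pick a cell color from a task's attempt statuses (inferred logic)."""
    if not statuses:
        return "blank"            # no task has started for this revision
    if "mishap" in statuses:      # purple overrides orange
        return "purple"
    if "failure" in statuses:
        return "orange"
    if "running" in statuses:
        return "black border"
    return "green"

print(cell_color(["failure", "mishap"]))  # purple: mishap wins
```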
+
+For more detail, you can click on an individual cell to get a summary of the
+task. You can also click one of the white bars at the top of each column to see
+a summary of recent tasks with the same name.
+
+The status page has several filters which can be used to show only a subset of
+task specs:
+
+* Interesting: Task specs which have both successes and failures within the
+ visible commit window.
+* Failures: Task specs which have failures within the visible commit window.
+* Comments: Task specs which have comments.
+* Failing w/o comment: task specs which have failures within the visible commit
+ window but have no comments.
+* All: Display all tasks.
+* Search: Enter a search string. Substrings and regular expressions may be
+  used, per the JavaScript `String.match()` rules:
+ http://www.w3schools.com/jsref/jsref_match.asp
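
For instance, the "Interesting" filter keeps only task specs with a mix of outcomes. A toy sketch of that selection (the task spec names and results below are made up):

```python
# Toy results per task spec over the visible commit window.
results = {
    "Test-Win10-MSVC-GTX660": ["success", "failure", "success"],
    "Build-Debian10-Clang":   ["success", "success", "success"],
}
interesting = sorted(name for name, rs in results.items()
                     if "success" in rs and "failure" in rs)
print(interesting)
```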
+
+<a name="adding-new-jobs"></a>
+Adding new jobs
+---------------
+
+If you would like to add jobs to build or test new configurations, please file a
+[New Bot Request][new bot request].
+
+If you know that the new jobs will need new hardware or you aren't sure which
+existing bots should run the new jobs, assign to jcgregorio. Once the Infra team
+has allocated the hardware, we will assign back to you to complete the process.
+
+Generally it's possible to copy an existing job and make changes to accomplish
+what you want. You will need to add the new job to
+[infra/bots/jobs.json][jobs json]. In some cases, you will need to make changes
+to recipes:
+
+* If there are new GN flags or compiler options:
+ [infra/bots/recipe_modules/build][build recipe module], probably default.py.
+* If there are modifications to dm flags: [infra/bots/recipes/test.py][test py]
+* If there are modifications to nanobench flags:
+ [infra/bots/recipes/perf.py][perf py]
+
+After modifying any of the above files, run `make train` in the infra/bots
+directory to update generated files. Upload the CL, then run `git cl try -B
+skia.primary -b <job name>` to run the new job. (After commit, the new job will
+appear in the PolyGerrit UI after the next successful run of the
+Housekeeper-Nightly-UpdateMetaConfig task.)
+
+[new bot request]:
+ https://bugs.chromium.org/p/skia/issues/entry?template=New+Bot+Request
+[jobs json]: https://skia.googlesource.com/skia/+/master/infra/bots/jobs.json
+[build recipe module]:
+ https://skia.googlesource.com/skia/+/refs/heads/master/infra/bots/recipe_modules/build/
+[test py]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/recipes/test.py
+[perf py]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/recipes/perf.py
+
+
+Detail on Skia Tasks
+--------------------
+
+[infra/bots/gen_tasks.go][gen_tasks] reads config files:
+
+* [infra/bots/jobs.json][jobs json]
+* [infra/bots/cfg.json][cfg json]
+* [infra/bots/recipe_modules/builder_name_schema/builder_name_schema.json][builder_name_schema]
+
+Based on each job name in jobs.json, gen_tasks decides which tasks to generate (see the process
+function). Various helper functions return the task names of the job's direct dependencies.
+
+In gen_tasks, tasks are specified with a TaskSpec. A TaskSpec specifies how to generate and trigger
+a Swarming task.
+
+Most Skia tasks run a recipe with Kitchen. The arguments to the kitchenTask function specify the
+most common parameters for a TaskSpec that will run a recipe. More info on recipes at
+[infra/bots/recipes/README.md][recipes README] and
+[infra/bots/recipe_modules/README.md][recipe_modules README].
+
+The Swarming task is generated based on several parameters of the TaskSpec:
+
+* Isolate: specifies the isolate file. The isolate file specifies the files from the repo to place
+ on the bot before running the task. (For non-Kitchen tasks, the isolate also specifies the command
+ to run.) [More info][isolate user guide].
+* Command: the command to run, if not specified in the Isolate. (Generally this is a boilerplate
+ Kitchen command that runs a recipe; see below.)
+* CipdPackages: specifies the IDs of CIPD packages that will be placed on the bot before running the
+ task. See infra/bots/assets/README.md for more info.
+* Dependencies: specifies the names of other tasks that this task depends upon. The outputs of those
+ tasks will be placed on the bot before running this task.
+* Dimensions: specifies what kind of bot should run this task. Ask the Infra team how to set this.
+* ExecutionTimeout: total time the task is allowed to run before it is killed.
+* IoTimeout: amount of time the task can run without printing something to stdout/stderr before it
+ is killed.
+* Expiration: Mostly ignored. If the task happens to be scheduled when there are no bots that can
+ run it, it will remain pending for this long before being canceled.
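
The parameters above can be pictured as a plain record. This is only an illustrative Python sketch; the real TaskSpec is a Go struct in the task scheduler, and the field names and defaults here are simplified assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskSpec:
    """Simplified sketch of a TaskSpec; not the real scheduler type."""
    isolate: str
    command: List[str] = field(default_factory=list)
    cipd_packages: List[str] = field(default_factory=list)
    dependencies: List[str] = field(default_factory=list)
    dimensions: List[str] = field(default_factory=list)
    execution_timeout_s: int = 3600   # kill the task after this long
    io_timeout_s: int = 600           # kill it if silent for this long

spec = TaskSpec(isolate="test_skia.isolate",
                dependencies=["Build-Debian10-Clang-x86_64-Release"])
print(spec.dependencies[0])
```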
+
+If you need to do something more complicated, or if you are not sure how to add
+and configure the new jobs, please ask for help from borenet, benjaminwagner, or
+mtklein.
+
+[gen_tasks]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/gen_tasks.go
+[cfg json]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/cfg.json
+[builder_name_schema]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/recipe_modules/builder_name_schema/builder_name_schema.json
+[recipes README]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/recipes/README.md
+[recipe_modules README]:
+ https://skia.googlesource.com/skia/+/master/infra/bots/recipe_modules/README.md
+[isolate user guide]:
+ https://chromium.googlesource.com/infra/luci/luci-py/+/master/appengine/isolate/doc/client/Isolate-User-Guide.md
+
diff --git a/site2/docs/dev/testing/download.md b/site2/docs/dev/testing/download.md
new file mode 100644
index 0000000..45f8e21
--- /dev/null
+++ b/site2/docs/dev/testing/download.md
@@ -0,0 +1,38 @@
+---
+title: "Downloading Isolates"
+linkTitle: "Downloading Isolates"
+
+---
+
+
+The intermediate and final build products from running tests are all stored in
+[Isolate](https://github.com/luci/luci-py/blob/master/appengine/isolate/doc/Design.md),
+and can be downloaded to the desktop for inspection and debugging.
+
+First install the client:
+
+ git clone https://github.com/luci/client-py.git
+
+Add the checkout location to your $PATH.
+
+To download the isolated files for a test, first visit
+the build status page and find the "isolated output" link:
+
+<img src="Status.png" style="margin-left:30px" width=576 height=271 >
+
+
+Follow that link to find the hash of the isolated outputs:
+
+
+<img src="Isolate.png" style="margin-left:30px" width=451 height=301 >
+
+Then run `isolateserver.py` with `--isolated` set to that hash:
+
+ $ isolateserver.py \
+ download \
+ --isolate-server=https://isolateserver.appspot.com \
+ --isolated=5b85b7c382ee2a34530e33c7db20a07515ff9481 \
+ --target=./download/
+
+
diff --git a/site2/docs/dev/testing/fonts.md b/site2/docs/dev/testing/fonts.md
new file mode 100644
index 0000000..678bae0
--- /dev/null
+++ b/site2/docs/dev/testing/fonts.md
@@ -0,0 +1,40 @@
+---
+title: "Fonts and GM Tests"
+linkTitle: "Fonts and GM Tests"
+
+---
+
+
+Overview
+--------
+
+Each test in the gm directory draws a reference image. The primary purpose of
+these tests is to detect when images change unexpectedly, indicating that a
+rendering bug has been introduced.
+
+The gm tests have a secondary purpose: they detect when rendering is different
+across platforms and configurations.
+
+GM font selection
+-----------------
+
+Each gm specifies the typeface to use when drawing text. For now, to set the
+portable typeface on the paint, call:
+
+~~~~
+ToolUtils::set_portable_typeface(SkPaint*, const char* name = nullptr,
+                                 SkFontStyle style = SkFontStyle());
+~~~~
+
+To create a portable typeface, use:
+
+~~~~
+SkTypeface* typeface = ToolUtils::create_portable_typeface(const char* name,
+                                                           SkFontStyle style);
+~~~~
+
+Eventually, both `set_portable_typeface()` and `create_portable_typeface()` will be
+removed. Instead, a test-wide `SkFontMgr` will be selected to choose portable
+fonts or resource fonts.
+
diff --git a/site2/docs/dev/testing/fuzz.md b/site2/docs/dev/testing/fuzz.md
new file mode 100644
index 0000000..d827e4c
--- /dev/null
+++ b/site2/docs/dev/testing/fuzz.md
@@ -0,0 +1,93 @@
+---
+title: "Fuzzing"
+linkTitle: "Fuzzing"
+
+---
+
+
+Reproducing using `fuzz`
+------------------------
+
+We assume that you can [build Skia](/user/build). Many fuzzes only reproduce
+when building with ASAN or MSAN; see [those instructions for more details](./xsan).
+
+When building, you should set the following GN args to make reproducing
+less machine- and platform-dependent:
+
+ skia_use_fontconfig=false
+ skia_use_freetype=true
+ skia_use_system_freetype2=false
+ skia_use_wuffs=true
+ skia_enable_skottie=true
+ skia_enable_fontmgr_custom_directory=false
+ skia_enable_fontmgr_custom_embedded=false
+ skia_enable_fontmgr_custom_empty=true
+
+All that is needed to reproduce a fuzz downloaded from ClusterFuzz or oss-fuzz is to
+run something like:
+
+ out/ASAN/fuzz -b /path/to/downloaded/testcase
+
+The fuzz binary will try its best to guess what the type/name should be based on
+the name of the testcase. Manually providing type and name is also supported, like:
+
+ out/ASAN/fuzz -t filter_fuzz -b /path/to/downloaded/testcase
+ out/ASAN/fuzz -t api -n RasterN32Canvas -b /path/to/downloaded/testcase
+
+To enumerate all supported types and names, run the following:
+
+ out/ASAN/fuzz --help # will list all types
+ out/ASAN/fuzz -t api # will list all names
+
+If the crash does not reproduce, try adding the `--loops` flag:
+
+ out/ASAN/fuzz -b /path/to/downloaded/testcase --loops <times-to-run>
+
+Writing fuzzers with libfuzzer
+------------------------------
+
+libFuzzer is an easy way to write new fuzzers, and it is how we run them on oss-fuzz.
+Your fuzzer entry point should implement this API:
+
+ extern "C" int LLVMFuzzerTestOneInput(const uint8_t*, size_t);
+
+First install Clang and libfuzzer, e.g.
+
+ sudo apt install clang-10 libc++-10-dev libfuzzer-10-dev
+
+You should now be able to use `-fsanitize=fuzzer` with Clang.
+
+Set up GN args to use libfuzzer:
+
+ cc = "clang-10"
+ cxx = "clang++-10"
+ sanitize = "fuzzer"
+ extra_cflags = [ "-O1" ] # Or whatever you want.
+ ...
+
+Build Skia and your fuzzer entry point:
+
+ ninja -C out/libfuzzer skia
+ clang++-10 -I. -O1 -fsanitize=fuzzer fuzz/oss_fuzz/whatever.cpp out/libfuzzer/libskia.a
+
+Run your new fuzzer binary:
+
+ ./a.out
+
+
+Fuzzing Defines
+---------------
+There are some defines that can help guide a fuzzer to be more productive (e.g. avoid OOMs, avoid
+unnecessarily slow code).
+
+ // Required for fuzzing with afl-fuzz to prevent OOMs from adding noise.
+ SK_BUILD_FOR_AFL_FUZZ
+
+ // Required for fuzzing with libfuzzer
+ SK_BUILD_FOR_LIBFUZZER
+
+ // This define adds in guards to abort when we think some code path will take a long time or
+ // use a lot of RAM. It is set by default when either of the above defines are set.
+ SK_BUILD_FOR_FUZZER
+
diff --git a/site2/docs/dev/testing/ios.md b/site2/docs/dev/testing/ios.md
new file mode 100644
index 0000000..25ab660
--- /dev/null
+++ b/site2/docs/dev/testing/ios.md
@@ -0,0 +1,48 @@
+---
+title: "Testing on iOS"
+linkTitle: "Testing on iOS"
+
+---
+
+Before setting Skia up for automated testing from the command line, please
+follow the instructions to run Skia tests (*dm*, *nanobench*) with the
+mainstream iOS tool chain. See the [quick start guide for iOS](../../user/quick/ios).
+
+iOS doesn't lend itself well to compiling and running from the command line.
+Below are instructions on how to install a set of tools that make this possible.
+To see how they are used in automated testing please see the bash scripts
+used by the buildbot recipes: <https://github.com/google/skia/tree/master/platform_tools/ios/bin>.
+
+Installation
+------------
+The key tools are
+
+* libimobiledevice <http://www.libimobiledevice.org/>, <https://github.com/libimobiledevice/libimobiledevice>
+
+* ios-deploy <https://github.com/phonegap/ios-deploy>
+
+Follow these steps to install them:
+
+* Install Homebrew from <http://brew.sh/>
+* Install *libimobiledevice*
+ (Note: All these are part of the *libimobiledevice* project but packaged/developed
+ under different names. The *cask* extension to *brew* is necessary to install
+  *osxfuse* and *ifuse*, which allow mounting the application directory on an iOS device).
+
+```
+brew install libimobiledevice
+brew install ideviceinstaller
+brew install caskroom/cask/brew-cask
+brew install Caskroom/cask/osxfuse
+brew install ifuse
+```
+
+* Install node.js and ios-deploy
+
+```
+$ brew update
+$ brew install node
+$ npm install ios-deploy
+```
+
diff --git a/site2/docs/dev/testing/skiagold.md b/site2/docs/dev/testing/skiagold.md
new file mode 100644
index 0000000..6acf078
--- /dev/null
+++ b/site2/docs/dev/testing/skiagold.md
@@ -0,0 +1,185 @@
+---
+title: 'Skia Gold'
+linkTitle: 'Skia Gold'
+---
+
+## Overview
+
+Gold is a web application that compares the images produced by our bots against
+known baseline images.
+
+Key features:
+
+- Baselines are managed in Gold outside of Git, but in lockstep with Git
+ commits.
+- Each commit creates >500k images.
+- Deviations from the baseline are triaged after a CL lands and images are
+ triaged as either `positive` or `negative`. 'Positive' means the diff is
+ considered acceptable. 'Negative' means the diff is considered unacceptable
+  and requires a fix. If a CL causes Skia to break, it is reverted or an
+  additional CL is landed to fix the problem.
+- We test across a range of dimensions, e.g.:
+
+ - OS (Windows, Linux, Mac, Android, iOS)
+ - Architectures (Intel, ARM)
+  - Backends (CPU, OpenGL, Vulkan, etc.)
+ - etc.
+
+- Written in Go and Polymer, and deployed on Google Cloud. The code is in the
+ [Skia Infra Repository](https://github.com/google/skia-buildbot).
+
+## Recommended Workflows
+
+### How to best use Gold for commonly faced problems
+
+These instructions will refer to various views which are accessible via the left
+navigation on [gold.skia.org](https://gold.skia.org/).
+
+View access is public, triage access is granted to Skia contributors. You must
+be logged in to triage.
+
+## Problem #1: As Skia Gardener, I need to triage and “assign” many incoming new images.
+
+Solution today:
+
+- Access the By Blame view to see digests needing triage and associated
+ owners/CLs
+ - Only untriaged digests will be shown by default
+ - Blame is not sorted in any particular order
+ - Digests are clustered by runs and the most minimal set of blame
+
+<img src=BlameView.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+- Select digests for triage
+ - Digests will be listed in order with largest difference first
+ - Click to open the digest view with detailed information
+
+<img src=Digests.png style="margin-left:40px" align="left" width="780"/>
+<br clear="left">
+
+- Open bugs for identified owner(s)
+ - The digest detail view has a link to open a bug from the UI
+ - Via the Gold UI or when manually entering a bug, copy the full URL of single
+ digest into a bug report
+ - The URL reference to the digest in Issue Tracker will link the bug to the
+ digest in Gold
+
+<img src="IssueHighlight.png" style="margin-left:60px" align="left" width="720" border=1/>
+<br clear="left">
+
+<br>
+
+Future improvements:
+
+- Smarter, more granular blamelist
+
+<br>
+
+## Problem #2: As a developer, I need to land a CL that may change many images.
+
+To find your results:
+
+- Immediately following commit, access the By Blame view to find untriaged
+ digest groupings associated with your ID
+- Click on one of the clusters including your CL to triage
+- Return to the By Blame view to walk through all untriaged digests involving
+ your change
+- Note: It is not yet implemented in the UI but possible to filter the view by
+ CL. Delete hashes in the URL to only include the hash for your CL.
+
+<img src=BlameView.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+To rebaseline images:
+
+- Access the Ignores view and create a new, short-interval (hours) ignore for
+ the most affected configuration(s)
+
+<img src=Ignores.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+- Click on the Ignore to bring up a search view filtered by the affected
+ configuration(s)
+- Mark untriaged images as positive (or negative if appropriate)
+- Follow one of two options for handling former positives:
+ - Leave former positives as-is and let them fall off with time if there is low
+ risk of recurrence
+ - Mark former positives as negative if needed to verify the change moving
+ forward
+
+Future improvements:
+
+- Trybot support prior to commit, with view limited to your CL
+- Pre-triage prior to commit that will persist when the CL lands
+
+<br>
+
+## Problem #3: As a developer or infrastructure engineer, I need to add a new or updated config.
+
+(i.e. a new bot, test mode, or environment change)
+
+Solution today:
+
+- Follow the process for rebaselining images:
+ - Wait for the bot/test/config to be committed and show up in the Gold UI
+ - Access the Ignores view and create a short-interval ignore for the
+ configuration(s)
+ - Triage the ignores for that config to identify positive images
+ - Delete the ignore
+
+Future improvements:
+
+- Introduction of a new or updated test can make use of try jobs and pre-triage.
+- New configs may be able to use these features as well.
+
+<br>
+
+## Problem #4: As a developer, I need to analyze the details of a particular image digest.
+
+Solution:
+
+- Access the By Test view
+
+<img src=ByTest.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+- Click the magnifier to filter by configuration
+- Access the Cluster view to see the distribution of digest results
+ - Use control-click to select and do a direct compare between data points
+ - Click on configurations under “parameters” to highlight data points and
+ compare
+
+<img src=ClusterConfig.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+- Access the Grid view to see NxN diffs
+
+<img src=Grid.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+- Access the Dot diagram to see history of commits for the trace
+ - Each dot represents a commit
+ - Each line represents a configuration
+ - Dot colors distinguish between digests
+
+<img src=DotDiagram.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
+
+<br>
+
+Future improvements:
+
+- Large diff display of image vs image
+
+<br>
+
+## Problem #5: As a developer, I need to find results for a particular configuration.
+
+Solution:
+
+- Access the Search view
+- Select any parameters desired to search across tests
+
+<img src=Search.png style="margin-left:30px" align="left" width="800"/>
+<br clear="left">
diff --git a/site2/docs/dev/testing/skiaperf.md b/site2/docs/dev/testing/skiaperf.md
new file mode 100644
index 0000000..1e33148
--- /dev/null
+++ b/site2/docs/dev/testing/skiaperf.md
@@ -0,0 +1,45 @@
+---
+title: "Skia Perf"
+linkTitle: "Skia Perf"
+
+---
+
+
+[Skia Perf](https://perf.skia.org) is a web application for analyzing and
+viewing performance metrics produced by Skia's testing infrastructure.
+
+<img src=Perf.png style="margin-left:30px" align="left" width="800"/> <br clear="left">
+
+Skia tests across a large number of platforms and configurations, and each
+commit to Skia generates more than 400,000 individual values that are sent to
+Perf, consisting mostly of performance benchmark results, but also including
+memory and coverage data.
+
+Perf offers clustering, which is a tool to pick out trends and patterns in large sets of traces.
+
+<img src=Cluster.png style="margin-left:30px" align="left" width="400"/> <br clear="left">
+
+Perf can also generate alerts when clustering detects a regression:
+
+<img src=Regression.png style="margin-left:30px" align="left" width="800"/> <br clear="left">
+
+
+## Calculations
+
+Skia Perf has the ability to perform calculations over the test data
+allowing you to build up interesting queries.
+
+This query displays the ratio of playback time in ms to the number of ops for desk\_wowwiki.skp:
+
+ ratio(
+ ave(fill(filter("name=desk_wowwiki.skp&sub_result=min_ms"))),
+ ave(fill(filter("name=desk_wowwiki.skp&sub_result=ops")))
+ )
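
In plain Python, the calculation above amounts to an elementwise ratio of two (filled, averaged) traces. The numbers below are toy per-commit values, not real benchmark data:

```python
# Toy per-commit traces; in Perf, fill() interpolates missing points and
# ave() averages the matching traces into a single trace first.
min_ms = [10.0, 12.0, 11.0]     # playback time per commit, in ms
ops    = [100.0, 120.0, 110.0]  # op count per commit

ratio = [m / o for m, o in zip(min_ms, ops)]
print(ratio)
```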
+
+You can also use the data to answer questions like how many tests were run per commit.
+
+ count(filter(""))
+
+See Skia Perf for the [full list of functions available](https://perf.skia.org/help/).
+
diff --git a/site2/docs/dev/testing/skqp.md b/site2/docs/dev/testing/skqp.md
new file mode 100644
index 0000000..0a2db87
--- /dev/null
+++ b/site2/docs/dev/testing/skqp.md
@@ -0,0 +1,56 @@
+---
+title: "SkQP"
+linkTitle: "SkQP"
+
+---
+
+
+Development APKs of SkQP are kept in Google storage. Each file in named
+with a abbreviated Git hash that points at the commit in the Skia repository it
+was built with.
+
+These are universal APKs that contain native libraries for the armeabi-v7a,
+arm64-v8a, x86, and x86\_64 architectures.
+
+The listing can be found here, with the most recent APK listed first:
+[https://storage.googleapis.com/skia-skqp/apklist](https://storage.googleapis.com/skia-skqp/apklist)
+
+If you are looking at Android CTS failures, use the most recent commit on the
+`origin/skqp/release` branch.
+
+To run tests:
+
+ adb install -r skqp-universal-{APK_SHA_HERE}.apk
+ adb logcat -c
+ adb shell am instrument -w org.skia.skqp
+
+Monitor the output with:
+
+ adb logcat TestRunner org.skia.skqp skia DEBUG "*:S"
+
+Note the test's output path on the device. It will look something like this:
+
+ 01-23 15:22:12.688 27158 27173 I org.skia.skqp:
+ output written to "/storage/emulated/0/Android/data/org.skia.skqp/files/skqp_report_2019-02-28T102058"
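
If you are scripting this, the report path can be pulled out of the logcat line with a simple regular expression. A sketch, using the sample line above as input:

```python
import re

# The sample logcat line from above, reproduced as a string.
line = ('01-23 15:22:12.688 27158 27173 I org.skia.skqp: output written to '
        '"/storage/emulated/0/Android/data/org.skia.skqp/files/'
        'skqp_report_2019-02-28T102058"')
match = re.search(r'output written to "([^"]+)"', line)
report_path = match.group(1)
print(report_path)
```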
+
+Retrieve and view the report with:
+
+ OUTPUT_LOCATION="/storage/emulated/0/Android/data/org.skia.skqp/files/skqp_report_2019-02-28T102058"
+ adb pull "$OUTPUT_LOCATION" /tmp/
+
+(Your value of `$OUTPUT_LOCATION` will differ from mine.)
+
+Open the file `/tmp/output/skqp_report_2019-02-28T102058/report.html` .
+
+**Zip up that directory to attach to a bug report:**
+
+ cd /tmp
+ zip -r skqp_report_2019-02-28T102058.zip skqp_report_2019-02-28T102058
+ ls -l skqp_report_2019-02-28T102058.zip
+
+* * *
+
+For more information about building your own APK, refer to
+https://skia.googlesource.com/skia/+/master/tools/skqp/README.md
+
diff --git a/site2/docs/dev/testing/swarmingbots.md b/site2/docs/dev/testing/swarmingbots.md
new file mode 100644
index 0000000..436c426
--- /dev/null
+++ b/site2/docs/dev/testing/swarmingbots.md
@@ -0,0 +1,94 @@
+---
+title: "Skia Swarming Bots"
+linkTitle: "Skia Swarming Bots"
+
+---
+
+
+Overview
+--------
+
+Skia's Swarming bots are hosted in three places:
+
+* Google Compute Engine. This is the preferred location for bots which don't need to run on physical
+  hardware, i.e. anything that doesn't require a GPU or a specific hardware configuration. Most of
+ our compile bots live here, along with some non-GPU test bots on Linux and Windows. We get
+ surprisingly stable performance numbers from GCE, despite very few guarantees about the physical
+ hardware.
+* Chrome Golo. This is the preferred location for bots which require specific hardware or OS
+ configurations that are not supported by GCE. We have several Mac, Linux, and Windows bots in the
+ Golo.
+* The Skolo (local Skia lab in Chapel Hill). Anything we can't get in GCE or the Golo lives
+ here. This includes a wider variety of GPUs and all Android, ChromeOS, iOS, and other devices.
+
+[go/skbl](https://goto.google.com/skbl) lists all Skia Swarming bots.
+
+
+<a name="connecting-to-swarming-bots"></a>
+Connecting to Swarming Bots
+---------------------------
+
+If you need to make changes on a bot/device, please check with the Infra Gardener or another Infra team member. Most
+bots/devices can be flashed/imaged back to a clean state, but others cannot.
+
+- Machine name like “skia-e-gce-NNN”, “skia-ct-gce-NNN”, “skia-i-gce-NNN”, “ct-gce-NNN”, “ct-xxx-builder-NNN” -> GCE
+ * First determine the project for the bot:
+ + skia-e-gce-NNN, skia-ct-gce-NNN: [skia-swarming-bots](https://console.cloud.google.com/compute/instances?project=skia-swarming-bots)
+ + skia-i-gce-NNN: [google.com:skia-buildbots](https://console.cloud.google.com/compute/instances?project=google.com:skia-buildbots)
+ + ct-gce-NNN, ct-xxx-builder-NNN: [ct-swarming-bots](https://console.cloud.google.com/compute/instances?project=ct-swarming-bots)
+ * To log in to a Linux bot in GCE, use `gcloud compute ssh --project <project> default@<machine name>`. Choose the zone listed on the VM's detail page (see links above). You may also specify the zone using the `--zone` command-line flag.
+ * To log in to a Windows bot in GCE, first go to the VM's detail page and click the "Set Windows password"
+ button. (Alternatively, ask the Infra Team how to log in as chrome-bot.) There are two options to connect:
+ + SSH: Follow the instructions for Linux using your username rather than `default`.
+ + RDP: On the VM's detail page, click the "RDP" button. (You will be instructed to install the Chrome RDP Extension
+ for GCP if it hasn't already been installed.)
+
+- Machine name ends with “a9”, “m3”, "m5" -> Chrome Golo/Labs
+ * To log in to Golo bots, see [go/chrome-infra-build-access](https://goto.google.com/chrome-infra-build-access).
+
+- Machine name starts with “skia-e-”, “skia-i-” (other than “skia-i-gce-NNN”), “skia-rpi-” -> Chapel Hill lab (aka Skolo)<br/>
+ To log in to Skolo bots, see the [Skolo maintenance doc][remote access] remote access section. See the following for OS specific instructions:<br/>
+ * [Remotely debug an Android device in Skolo][remotely debug android]
+ * [VNC to Skolo Windows bots][vnc to skolo windows]
+ * [ChromeOS Debugging][chromeos debugging]
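
The naming rules above can be summarized as a small lookup. This is only an illustrative sketch of the conventions described in this section (the patterns are simplified and the helper is hypothetical):

```python
import re

def bot_location(name):
    """Map a machine name to its lab, per the (simplified) rules above."""
    # GCE names: skia-e-gce-NNN, skia-ct-gce-NNN, skia-i-gce-NNN,
    # ct-gce-NNN, ct-xxx-builder-NNN.
    if re.match(r"(skia-(e|ct|i)-gce-|ct-gce-|ct-.*-builder-)", name):
        return "GCE"
    # Chrome Golo names end with "a9", "m3", or "m5".
    if re.search(r"(a9|m3|m5)$", name):
        return "Chrome Golo"
    # Remaining skia-e-, skia-i-, skia-rpi- machines are in the Skolo.
    if name.startswith(("skia-e-", "skia-i-", "skia-rpi-")):
        return "Skolo"
    return "unknown"

print(bot_location("skia-rpi-047"))
```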
+
+
+Debugging
+---------
+
+If you need to run code on a specific machine/device to debug an issue, the simplest option is to
+run tryjobs (after adding debugging output to the relevant code). In some cases you may also need to
+[create or modify tryjobs](automated_testing#adding-new-jobs).
+
+For Googlers: If you need more control (e.g. to run GDB) and need to run directly on a swarming bot then you can use [leasing.skia.org](https://leasing.skia.org).<br/>
+If that does not work then the [current infra gardener][current infra gardener] can help you bring the device back to your desk and connect
+it to GoogleGuest Wifi or the [Google Test Network](http://go/gtn-criteria).
+
+If a permanent change needs to be made on the machine (such as an OS or driver update), please [file
+a bug][infra bug] and assign to jcgregorio for reassignment.
+
+For your convenience, the machine skolo-builder is available for checking out and compiling code within the Skolo. See
+more info in the [Skolo maintenance doc][remote access] remote access section.
+
+[current infra gardener]: https://rotations.corp.google.com/rotation/4617277386260480
+[remote access]:
+ https://docs.google.com/document/d/1zTR1YtrIFBo-fRWgbUgvJNVJ-s_4_sNjTrHIoX2vulo/edit#heading=h.v77cmwbwc5la
+[infra bug]: https://bugs.chromium.org/p/skia/issues/entry?template=Infrastructure+Bug
+[remotely debug android]: https://docs.google.com/document/d/1nxn7TobfaLNNfhSTiwstOnjV0jCxYUI1uwW0T_V7BYg/
+[vnc to skolo windows]:
+ https://docs.google.com/document/d/1zTR1YtrIFBo-fRWgbUgvJNVJ-s_4_sNjTrHIoX2vulo/edit#heading=h.7cqd856ft0s
+[chromeos debugging]:
+ https://docs.google.com/document/d/1yJ2LLfLzV6pXKjiameid1LHEz1mj71Ob4wySIYxlBdw/edit#heading=h.9arg79l59xrf
+
+Maintenance Tasks
+-----------------
+
+See the [Skolo maintenance doc][skolo maintenance].
+
+[skolo maintenance]:
+ https://docs.google.com/document/d/1zTR1YtrIFBo-fRWgbUgvJNVJ-s_4_sNjTrHIoX2vulo/edit
+
diff --git a/site2/docs/dev/testing/testing.md b/site2/docs/dev/testing/testing.md
new file mode 100644
index 0000000..429fa4b
--- /dev/null
+++ b/site2/docs/dev/testing/testing.md
@@ -0,0 +1,201 @@
+---
+title: 'Correctness Testing'
+linkTitle: 'Correctness Testing'
+---
+
+Skia correctness testing is primarily served by a tool named DM. This is a
+quickstart to building and running DM.
+
+<!--?prettify lang=sh?-->
+
+ python2 tools/git-sync-deps
+ bin/gn gen out/Debug
+ ninja -C out/Debug dm
+ out/Debug/dm -v -w dm_output
+
+When you run this, you may notice your CPU peg to 100% for a while, then taper
+off to 1 or 2 active cores as the run finishes. This is intentional. DM is very
+multithreaded, but some of the work, particularly GPU-backed work, is still
+forced to run on a single thread. You can use `--threads N` to limit DM to N
+threads if you like. This can sometimes be helpful on machines with many cores
+but relatively little RAM.
+
+As DM runs, you ought to see a giant spew of output that looks something like
+this.
+
+```
+Skipping nonrendering: Don't understand 'nonrendering'.
+Skipping angle: Don't understand 'angle'.
+Skipping nvprmsaa4: Could not create a surface.
+492 srcs * 3 sinks + 382 tests == 1858 tasks
+
+( 25MB 1857) 1.36ms 8888 image mandrill_132x132_12x12.astc-5-subsets
+( 25MB 1856) 1.41ms 8888 image mandrill_132x132_6x6.astc-5-subsets
+( 25MB 1855) 1.35ms 8888 image mandrill_132x130_6x5.astc-5-subsets
+( 25MB 1854) 1.41ms 8888 image mandrill_132x130_12x10.astc-5-subsets
+( 25MB 1853) 151µs 8888 image mandrill_130x132_10x6.astc-5-subsets
+( 25MB 1852) 154µs 8888 image mandrill_130x130_5x5.astc-5-subsets
+ ...
+( 748MB 5) 9.43ms unit test GLInterfaceValidation
+( 748MB 4) 30.3ms unit test HalfFloatTextureTest
+( 748MB 3) 31.2ms unit test FloatingPointTextureTest
+( 748MB 2) 32.9ms unit test DeferredCanvas_GPU
+( 748MB 1) 49.4ms unit test ClipCache
+( 748MB 0) 37.2ms unit test Blur
+```
+
+Do not panic.
+
+As you become more familiar with DM, this spew may be a bit annoying. If you
+remove `-v` from the command line, DM will spin its progress on a single line
+rather than print a new line for each status update.
+
+Don't worry about the "Skipping something: Here's why." lines at startup. DM
+supports many test configurations, which are not all appropriate for all
+machines. These lines are a sort of FYI, mostly in case DM can't run some
+configuration you might be expecting it to run.
+
+Don't worry about the "skps: Couldn't read skps." messages either; you won't
+have those by default and can do without them. If you wish to test with them
+too, you can download them separately.
+
+The next line is an overview of the work DM is about to do.
+
+```
+492 srcs * 3 sinks + 382 tests == 1858 tasks
+```
+
+DM has found 382 unit tests (code linked in from tests/), and 492 other drawing
+sources. These drawing sources may be GM integration tests (code linked in from
+gm/), image files (from `--images`, which defaults to "resources") or .skp files
+(from `--skps`, which defaults to "skps"). You can control the types of sources
+DM will use with `--src` (default, "tests gm image skp").
+
+DM has found 3 usable ways to draw those 492 sources. This is controlled by
+`--config`. The defaults are operating system dependent; on Linux they are "8888
+gl nonrendering". In the run above, DM skipped nonrendering, leaving usable
+configs such as 8888 and gl. These name different ways to draw using Skia:
+
+- 8888: draw using the software backend into a 32-bit RGBA bitmap
+- gl: draw using the OpenGL backend (Ganesh) into a 32-bit RGBA bitmap
+
+Sometimes DM calls these configs, sometimes sinks. Sorry. There are many
+possible configs but generally we pay most attention to 8888 and gl.
+
+DM always tries to draw all sources into all sinks, which is why we multiply 492
+by 3. The unit tests don't really fit into this source-sink model, so they stand
+alone. A couple thousand tasks is pretty normal. Let's look at the status line
+for one of those tasks.
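
The overview line is simple arithmetic; as a quick sanity check (plain Python, just the numbers from the line above):

```python
srcs, sinks, tests = 492, 3, 382

# Every source is drawn into every sink; unit tests stand alone.
tasks = srcs * sinks + tests
print(tasks)  # 1858
```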
+
+```
+( 25MB 1857) 1.36ms 8888 image mandrill_132x132_12x12.astc-5-subsets
+ [1] [2] [3] [4]
+```
+
+This status line tells us several things.
+
+1. The maximum amount of memory DM had ever used was 25MB. Note this is a high
+ water mark, not the current memory usage. This is mostly useful for us to
+ track on our buildbots, some of which run perilously close to the system
+ memory limit.
+
+2. The number of unfinished tasks (1857 in this example), either
+ currently running or waiting to run. We generally run one task per hardware
+ thread available, so on a typical laptop there are probably 4 or 8 running at
+ once. Sometimes the counts appear to show up out of order, particularly at DM
+ startup; it's harmless, and doesn't affect the correctness of the run.
+
+3. Next, we see this task took 1.36 milliseconds to run. Generally, the
+ precision of this timer is around 1 microsecond. The time is purely there for
+ informational purposes, to make it easier for us to find slow tests.
+
+4. The configuration and name of the test we ran. We drew the test
+ "mandrill_132x132_12x12.astc-5-subsets", which is an "image" source, into an
+ "8888" sink.
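
The out-of-order counts mentioned in point 2 are an ordinary consequence of concurrency. A toy Python sketch (this is not DM's code, just an illustration): each worker takes the next count under a lock but reports it afterwards, so reports from different threads can interleave.

```python
import threading

remaining = 10          # pretend there are 10 unfinished tasks
lock = threading.Lock()
reported = []

def run_task(i):
    global remaining
    with lock:
        remaining -= 1
        count = remaining
    # Reporting happens outside the lock, so lines from different
    # threads can interleave and counts can appear out of order.
    reported.append(count)

threads = [threading.Thread(target=run_task, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every count still shows up exactly once; only the ordering is unpredictable, which is why the scrambled counts are harmless.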
+
+When DM finishes running, you should find a directory containing a file named
+`dm.json` and some nested directories filled with lots of images.
+
+```
+$ ls dm_output
+8888 dm.json gl
+
+$ find dm_output -name '*.png'
+dm_output/8888/gm/3x3bitmaprect.png
+dm_output/8888/gm/aaclip.png
+dm_output/8888/gm/aarectmodes.png
+dm_output/8888/gm/alphagradients.png
+dm_output/8888/gm/arcofzorro.png
+dm_output/8888/gm/arithmode.png
+dm_output/8888/gm/astcbitmap.png
+dm_output/8888/gm/bezier_conic_effects.png
+dm_output/8888/gm/bezier_cubic_effects.png
+dm_output/8888/gm/bezier_quad_effects.png
+ ...
+```
+
+The directories are nested first by sink type (`--config`), then by source type
+(`--src`). The image from the task we just looked at, "8888 image
+mandrill_132x132_12x12.astc-5-subsets", can be found at
+`dm_output/8888/image/mandrill_132x132_12x12.astc-5-subsets.png`.
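
If you script against DM's output, the path for any task can be assembled the same way. A hypothetical helper (the name `dm_output_path` is ours, not DM's; paths assume a POSIX system):

```python
import os

# Mirrors the layout described above:
#   <output dir>/<config>/<source type>/<test name>.png
def dm_output_path(out_dir, config, src_type, name):
    return os.path.join(out_dir, config, src_type, name + ".png")

print(dm_output_path("dm_output", "8888", "image",
                     "mandrill_132x132_12x12.astc-5-subsets"))
```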
+
+`dm.json` is used by our automated testing system, so you can ignore it if you
+like. It contains a listing of each test run and a checksum of the image
+generated for that run.
+
+### Detail <a name="digests"></a>
+
+Boring technical detail: The checksum is not a checksum of the .png file, but
+rather a checksum of the raw pixels used to create that .png. That means it is
+possible for two different configurations to produce the same exact .png, but
+have their checksums differ.
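
A toy illustration of that distinction (not DM's actual hashing code): two raw-pixel representations of the same image can encode to identical file contents, while checksums of the raw pixels differ.

```python
import hashlib

# Hypothetical: the same gray image stored once as 8-bit pixels
# and once as 16-bit pixels.
raw_8bit = bytes([0x80]) * 16
raw_16bit = bytes([0x80, 0x80]) * 16

def encode_to_8bit_file(raw, bytes_per_pixel):
    # Stand-in for PNG encoding: both representations reduce to the
    # same 8-bit file contents.
    return raw[::bytes_per_pixel]

file_a = encode_to_8bit_file(raw_8bit, 1)
file_b = encode_to_8bit_file(raw_16bit, 2)

assert file_a == file_b  # identical encoded files...
# ...but checksums of the raw pixels differ.
assert (hashlib.md5(raw_8bit).hexdigest()
        != hashlib.md5(raw_16bit).hexdigest())
```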
+
+Unit tests don't generally output anything but a status update when they pass.
+If a test fails, DM will print out its assertion failures, both at the time they
+happen and then again all together after everything is done running. These
+failures are also included in the `dm.json` file.
+
+DM has a simple facility to compare against the results of a previous run:
+
+<!--?prettify lang=sh?-->
+
+ ninja -C out/Debug dm
+ out/Debug/dm -w good
+
+ # do some work
+
+ ninja -C out/Debug dm
+ out/Debug/dm -r good -w bad
+
+When using `-r`, DM will display a failure for any test that didn't produce the
+same image as the `good` run.
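
Conceptually, `-r` is just a digest lookup: the previous run's name-to-digest mapping is compared against the new run's. A toy sketch with hypothetical digests (not DM's code):

```python
# A run is a mapping from test name to pixel digest; -r flags
# any test whose digest changed relative to the "good" run.
good = {"aaclip": "d41d", "blur": "98f1"}
bad = {"aaclip": "d41d", "blur": "0000"}

failures = [name for name, digest in bad.items() if good.get(name) != digest]
print(failures)  # ['blur']
```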
+
+For anything fancier, I suggest using skdiff:
+
+<!--?prettify lang=sh?-->
+
+ ninja -C out/Debug dm
+ out/Debug/dm -w good
+
+ # do some work
+
+ ninja -C out/Debug dm
+ out/Debug/dm -w bad
+
+ ninja -C out/Debug skdiff
+ mkdir diff
+ out/Debug/skdiff good bad diff
+
+ # open diff/index.html in your web browser
+
+That's the basics of DM. DM supports many other modes and flags. Here are a few
+examples you might find handy.
+
+<!--?prettify lang=sh?-->
+
+ out/Debug/dm --help # Print all flags, their defaults, and a brief explanation of each.
+ out/Debug/dm --src tests # Run only unit tests.
+ out/Debug/dm --nocpu # Test only GPU-backed work.
+ out/Debug/dm --nogpu # Test only CPU-backed work.
+ out/Debug/dm --match blur # Run only work with "blur" in its name.
+ out/Debug/dm --dryRun # Don't really do anything, just print out what we'd do.
diff --git a/site2/docs/dev/testing/tests.md b/site2/docs/dev/testing/tests.md
new file mode 100644
index 0000000..1ff29fd
--- /dev/null
+++ b/site2/docs/dev/testing/tests.md
@@ -0,0 +1,129 @@
+---
+title: 'Writing Skia Tests'
+linkTitle: 'Writing Skia Tests'
+---
+
+- [Unit Tests](#test)
+- [Rendering Tests](#gm)
+- [Benchmark Tests](#bench)
+
+We assume you have already synced Skia's dependencies and set up Skia's build
+system.
+
+<!--?prettify lang=sh?-->
+
+ python2 tools/git-sync-deps
+ bin/gn gen out/Debug
+ bin/gn gen out/Release --args='is_debug=false'
+
+<span id="test"></span>
+
+## Writing a Unit Test
+
+1. Add a file `tests/NewUnitTest.cpp`:
+
+ <!--?prettify lang=cc?-->
+
+ /*
+ * Copyright ........
+ *
+ * Use of this source code is governed by a BSD-style license
+ * that can be found in the LICENSE file.
+ */
+ #include "Test.h"
+ DEF_TEST(NewUnitTest, reporter) {
+ if (1 + 1 != 2) {
+ ERRORF(reporter, "%d + %d != %d", 1, 1, 2);
+ }
+ bool lifeIsGood = true;
+ REPORTER_ASSERT(reporter, lifeIsGood);
+ }
+
+2. Add `NewUnitTest.cpp` to `gn/tests.gni`.
+
+3. Recompile and run test:
+
+ <!--?prettify lang=sh?-->
+
+ ninja -C out/Debug dm
+ out/Debug/dm --match NewUnitTest
+
+<span id="gm"></span>
+
+## Writing a Rendering Test
+
+1. Add a file `gm/newgmtest.cpp`:
+
+ <!--?prettify lang=cc?-->
+
+ /*
+ * Copyright ........
+ *
+ * Use of this source code is governed by a BSD-style license
+ * that can be found in the LICENSE file.
+ */
+ #include "gm.h"
+ DEF_SIMPLE_GM(newgmtest, canvas, 128, 128) {
+ canvas->clear(SK_ColorWHITE);
+ SkPaint p;
+ p.setStrokeWidth(2);
+ canvas->drawLine(16, 16, 112, 112, p);
+ }
+
+2. Add `newgmtest.cpp` to `gn/gm.gni`.
+
+3. Recompile and run test:
+
+ <!--?prettify lang=sh?-->
+
+ ninja -C out/Debug dm
+ out/Debug/dm --match newgmtest
+
+4. Run the GM inside Viewer:
+
+ <!--?prettify lang=sh?-->
+
+ ninja -C out/Debug viewer
+ out/Debug/viewer --slide GM_newgmtest
+
+<span id="bench"></span>
+
+## Writing a Benchmark Test
+
+1. Add a file `bench/FooBench.cpp`:
+
+ <!--?prettify lang=cc?-->
+
+ /*
+ * Copyright ........
+ *
+ * Use of this source code is governed by a BSD-style license
+ * that can be found in the LICENSE file.
+ */
+ #include "Benchmark.h"
+ #include "SkCanvas.h"
+ namespace {
+ class FooBench : public Benchmark {
+ public:
+ FooBench() {}
+ virtual ~FooBench() {}
+ protected:
+ const char* onGetName() override { return "Foo"; }
+ SkIPoint onGetSize() override { return SkIPoint{100, 100}; }
+ void onDraw(int loops, SkCanvas* canvas) override {
+ while (loops-- > 0) {
+ canvas->drawLine(0.0f, 0.0f, 100.0f, 100.0f, SkPaint());
+ }
+ }
+ };
+ } // namespace
+ DEF_BENCH(return new FooBench;)
+
+2. Add `FooBench.cpp` to `gn/bench.gni`.
+
+3. Recompile and run nanobench:
+
+ <!--?prettify lang=sh?-->
+
+ ninja -C out/Release nanobench
+ out/Release/nanobench --match Foo
diff --git a/site2/docs/dev/testing/xsan.md b/site2/docs/dev/testing/xsan.md
new file mode 100644
index 0000000..f8878df
--- /dev/null
+++ b/site2/docs/dev/testing/xsan.md
@@ -0,0 +1,104 @@
+
+---
+title: "MSAN, ASAN, & TSAN"
+linkTitle: "MSAN, ASAN, & TSAN"
+
+---
+
+
+*Testing Skia with memory, address, and thread sanitizers.*
+
+Compiling Skia with ASAN, UBSAN, or TSAN can be done with the latest version of Clang.
+
+- UBSAN works on Linux, Mac, Android, and Windows, though some checks are platform-specific.
+- ASAN works on Linux, Mac, Android, and Windows.
+- TSAN works on Linux and Mac.
+- MSAN works on Linux[1].
+
+We find that testing sanitizer builds with libc++ uncovers more issues than
+with the system-provided C++ standard library, which is usually libstdc++.
+libc++ proactively hooks into sanitizers to help their analyses.
+We ship a copy of libc++ with our Linux toolchain in /lib.
+
+[1]To compile and run with MSAN, an MSAN-instrumented version of libc++ is needed.
+It's generally easiest to take one of the following two steps to build or download
+a recent version of Clang and the instrumented libc++, located in /msan.
+
+Downloading Clang binaries (Googlers Only)
+------------------------------------------
+This requires gsutil, part of the [gcloud sdk](https://cloud.google.com/sdk/downloads).
+
+<!--?prettify lang=sh?-->
+
+ CLANGDIR="${HOME}/clang"
+ python2 infra/bots/assets/clang_linux/download.py -t "$CLANGDIR"
+
+Building Clang binaries from scratch (Other users)
+--------------------------------------------------
+
+<!--?prettify lang=sh?-->
+
+ CLANGDIR="${HOME}/clang"
+
+ python2 tools/git-sync-deps
+ CC= CXX= infra/bots/assets/clang_linux/create.py -t "$CLANGDIR"
+
+Configure and Compile Skia with MSAN
+------------------------------------
+
+<!--?prettify lang=sh?-->
+
+ CLANGDIR="${HOME}/clang"
+ mkdir -p out/msan
+ cat > out/msan/args.gn <<- EOF
+ cc = "${CLANGDIR}/bin/clang"
+ cxx = "${CLANGDIR}/bin/clang++"
+ extra_cflags = [ "-B${CLANGDIR}/bin" ]
+ extra_ldflags = [
+ "-B${CLANGDIR}/bin",
+ "-fuse-ld=lld",
+ "-L${CLANGDIR}/msan",
+ "-Wl,-rpath,${CLANGDIR}/msan" ]
+ sanitize = "MSAN"
+ skia_use_fontconfig = false
+ EOF
+ python2 tools/git-sync-deps
+ bin/gn gen out/msan
+ ninja -C out/msan
+
+Configure and Compile Skia with ASAN
+------------------------------------
+
+<!--?prettify lang=sh?-->
+
+ CLANGDIR="${HOME}/clang"
+ mkdir -p out/asan
+ cat > out/asan/args.gn <<- EOF
+ cc = "${CLANGDIR}/bin/clang"
+ cxx = "${CLANGDIR}/bin/clang++"
+ sanitize = "ASAN"
+ extra_ldflags = [ "-fuse-ld=lld", "-Wl,-rpath,${CLANGDIR}/lib" ]
+ EOF
+ python2 tools/git-sync-deps
+ bin/gn gen out/asan
+ ninja -C out/asan
+
+Configure and Compile Skia with TSAN
+------------------------------------
+
+<!--?prettify lang=sh?-->
+
+ CLANGDIR="${HOME}/clang"
+ mkdir -p out/tsan
+ cat > out/tsan/args.gn <<- EOF
+ cc = "${CLANGDIR}/bin/clang"
+ cxx = "${CLANGDIR}/bin/clang++"
+ sanitize = "TSAN"
+ is_debug = false
+ extra_ldflags = [ "-Wl,-rpath,${CLANGDIR}/lib" ]
+ EOF
+ python2 tools/git-sync-deps
+ bin/gn gen out/tsan
+ ninja -C out/tsan
+
+