To compile CanvasKit, you will first need to install Emscripten. This will set the `EMSDK` environment variable (among others), which is required for compilation.

Make sure you have Python3 installed, otherwise downloading the Emscripten toolchain can fail with errors about SSL certificates (see https://github.com/emscripten-core/emsdk/pull/273). See also https://github.com/emscripten-core/emscripten/issues/9036#issuecomment-532092743 for a fix when Python3 uses the wrong certificates.
```
make release  # make debug is much faster and has better error messages
make local-example
```
This will print a local endpoint for viewing the example. You can experiment with the CanvasKit API by modifying `./canvaskit/example.html` and refreshing the page. For some more experimental APIs, there's also `./canvaskit/extra.html`.
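For instance, a small drawing routine like the one below can be pasted into `example.html` to sanity-check a fresh build. This is a minimal sketch, not part of the stock page: it assumes a `<canvas id="foo">` element exists and that the page's `CanvasKitInit(...)` promise has already resolved to the `CanvasKit` object; the names follow the current public CanvasKit API and may differ in older builds (which used `Sk`-prefixed names).

```js
// Minimal smoke test for a fresh build. Assumes a <canvas id="foo"> element
// and a resolved CanvasKit object (example.html already calls CanvasKitInit).
function drawSmokeTest(CanvasKit) {
  const surface = CanvasKit.MakeCanvasSurface('foo');
  const canvas = surface.getCanvas();
  canvas.clear(CanvasKit.WHITE);

  const paint = new CanvasKit.Paint();
  paint.setColor(CanvasKit.Color(66, 133, 244, 1.0)); // RGBA, alpha in [0, 1]
  paint.setAntiAlias(true);
  canvas.drawCircle(120, 120, 80, paint);

  surface.flush();
  paint.delete(); // wasm-backed objects must be cleaned up manually
}
```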
For other available build targets, see `Makefile` and `compile.sh`. For example, to build a stripped-down version of CanvasKit with no text support or any of the "extras", one might run:
```
./compile.sh no_skottie no_particles no_font
```
Such a stripped-down version is about half the size of the default release build.
To run unit tests and compute test coverage on a debug GPU build:

```
make debug
make test-continuous
```
This reads `karma.conf.js`, opens a Chrome browser, and begins running all the tests in `tests/`. It will detect changes to the tests in that directory and automatically run again; however, it will not automatically rebuild and reload CanvasKit. Closing the Chrome window will just cause it to be re-opened. Kill the karma process to stop continuous monitoring for changes.
The tests are run with whichever build of CanvasKit you last made. Be sure to also test with `release`, `debug_cpu`, and `release_cpu`. Testing with release builds will expose problems in closure compilation and commonly forgotten externs.
Coverage will be automatically computed when running `test-continuous` locally. Note that the results will only be useful when testing a debug build. Open `coverage/<browser version>/index.html` for a summary and detailed line-by-line results.
To measure the runtime of all benchmarks in `perf/`:

```
make release
make perf
```
Performance benchmarks also use karma, with a different config, `karma.bench.conf.js`. It will run once and print the results.
Typically, you'd want to run these at head, and with your CL to observe the effect of some optimization.
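The benchmark files use the same Jasmine `describe`/`it` structure as the unit tests (see below). The real benchmarks report their timings through the harness in `perf/` so the numbers reach perf.skia.org; the sketch below only illustrates the shape of a benchmark body, using plain Jasmine and `performance.now()` rather than the actual reporting helpers.

```js
describe('Path building benchmark (illustrative sketch only)', () => {
  it('times constructing and discarding many paths', () => {
    // Assumes the CanvasKit object has already been loaded by the test harness.
    const start = performance.now();
    for (let i = 0; i < 1000; i++) {
      const path = new CanvasKit.Path();
      path.moveTo(0, 0);
      path.lineTo(10, i);
      path.delete();
    }
    const elapsedMs = performance.now() - start;
    console.log(`built 1000 paths in ${elapsedMs.toFixed(2)} ms`);
    expect(elapsedMs).toBeGreaterThan(0);
  });
});
```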
The tests in `tests/` and `perf/` are grouped into files by topic. Within each file there are `describe` blocks that further organize the tests, and within those, `it()` functions which test particular behaviors. `describe` and `it` are Jasmine methods which can both be temporarily renamed to `fdescribe` and `fit`, which causes Jasmine to run only those.
We have also defined `gm`, a method for defining a test which draws something to a canvas that is snapshotted and reported to gold.skia.org, where you can compare it with the snapshot at head.
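As a rough illustration (the exact `gm` signature and any required setup come from the helpers in `tests/`, so consult an existing spec file for the authoritative form), a spec file looks something like this:

```js
describe('Path Behavior', () => {
  // Renaming it() to fit() (or describe() to fdescribe()) makes Jasmine run only this.
  it('reports a freshly created path as empty', () => {
    const path = new CanvasKit.Path();
    expect(path.isEmpty()).toBeTruthy();
    path.delete();
  });

  // gm() draws into a canvas; the snapshot is uploaded to gold.skia.org
  // for comparison against the image at head.
  gm('red_circle_gm', (canvas) => {
    canvas.clear(CanvasKit.WHITE);
    const paint = new CanvasKit.Paint();
    paint.setColor(CanvasKit.Color(255, 0, 0, 1.0));
    paint.setAntiAlias(true);
    canvas.drawCircle(128, 128, 64, paint);
    paint.delete();
  });
});
```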
When submitting a CL in Gerrit, click "Choose Tryjobs" and type "canvaskit" to filter them. Select all of them, which at the time of this writing is four jobs, one for each combination of perf/test and GPU/CPU.
The performance results are reported to perf.skia.org and the gold results are reported to gold.skia.org.
Coverage is not measured while running tests this way.
The `wasm2wat` tool from the WebAssembly Binary Toolkit can be used to produce a human-readable text version of a `.wasm` file.
The output of `wasm2wat --version` should be `1.0.13 (1.0.17)`. This version has been checked to work with the tools in `wasm_tools/SIMD/`. These tools programmatically inspect the `.wasm` output of a CanvasKit build to detect the presence of wasm SIMD operations.
When dealing with CanvasKit (or PathKit) on our bots, we use Docker. Check out `$SKIA_ROOT/infra/wasm-common/docker/README.md` for more on building/editing the images used for building and testing.
This presumes you have updated Emscripten locally to a newer version of the SDK and verified/fixed any build issues that have arisen.

1. Edit `$SKIA_ROOT/infra/wasm-common/docker/emsdk-base/Dockerfile` to install and activate the desired version of Emscripten.
2. Edit `$SKIA_ROOT/infra/wasm-common/docker/Makefile` to have `EMSDK_VERSION` set to that desired version. If there is a suffix that is not `_v1`, reset it to be `_v1`. If testing the image later does not work and edits are made to the emsdk-base Dockerfile to correct that, increment to `_v2`, `_v3`, etc. to force the bots to pick up the new image.
3. In `$SKIA_ROOT/infra/wasm-common/docker/`, run `make publish_emsdk_base`.
4. Edit `$SKIA_ROOT/infra/canvaskit/docker/canvaskit-emsdk/Dockerfile` to be based off the new version from step 2. CanvasKit has its own Docker image because it needs a few extra dependencies to build with font support.
5. Edit `$SKIA_ROOT/infra/canvaskit/docker/Makefile` to have the same version from step 2. It's easiest to keep the `emsdk-base` and `canvaskit-emsdk` versions in lock-step.
6. In `$SKIA_ROOT/infra/canvaskit/docker/`, run `make publish_canvaskit_emsdk`.
7. In `$SKIA_ROOT/infra/bots/recipe_modules/build/`, update `canvaskit.py` and `pathkit.py` to have `DOCKER_IMAGE` point to the desired tagged Docker containers from steps 2 and 5 (which should be the same).
8. In `$SKIA_ROOT/infra/bots/`, run `make train` to re-train the recipes.
9. Run `git grep 1\\.38\\.` in `$SKIA_ROOT` to see if there are any other references that need updating.