This component, known as crosvm, runs untrusted operating systems along with virtualized devices. It runs VMs only through Linux's KVM interface. What makes crosvm unique is a focus on safety within the programming language and a sandbox around the virtual devices to protect the kernel from attack in case of an exploit in the devices.
Crosvm uses submodules to manage external dependencies. Initialize them via:
git submodule update --init
It is recommended to enable automatic recursive operations to keep the submodules in sync with the main repository (but do not push them, as that can conflict with `repo`):
git config --global submodule.recurse true
git config push.recurseSubmodules no
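For a fresh checkout, the clone and submodule steps can be combined; the repository URL below assumes the upstream chromiumos host:
$ git clone https://chromium.googlesource.com/chromiumos/platform/crosvm
$ cd crosvm
$ git submodule update --init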
Crosvm development works best on Debian derivatives. We provide a script to install the necessary packages on Debian:
$ ./tools/install_deps
For other systems, please see below for instructions on Using the development container.
Crosvm is built and tested on x86, aarch64 and armhf. Your host needs to be set up to allow installation of foreign architecture packages.
On Debian this is as easy as:
$ sudo dpkg --add-architecture arm64
$ sudo dpkg --add-architecture armhf
$ sudo apt update
On Ubuntu this is a little harder and requires manual modification of the APT sources.
For other systems (including gLinux), please see below for instructions on Using the development container.
With that enabled, the following scripts will install the needed packages:
$ ./tools/install_aarch64_deps
$ ./tools/install_armhf_deps
We provide a Debian container with the required packages installed. With Docker installed, it can be started with:
$ ./tools/dev_container
The container image is big and may take a while to download when first used. Once started, you can follow all instructions in this document within the container shell.
You can use cargo as usual for crosvm development to `cargo build` and `cargo test` single crates that you are working on.
If you are working on aarch64-specific code, you can use the `set_test_target` tool to instruct cargo to build for aarch64 and run tests on a VM:
$ ./tools/set_test_target vm:aarch64 && source .envrc
$ cd mycrate && cargo test
The script will start a VM for testing and write environment variables for cargo to `.envrc`. With those, `cargo build` will build for aarch64 and `cargo test` will run tests inside the VM.
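For example, once `.envrc` is sourced, ordinary cargo invocations target the VM; the crate name below is only an illustration:
$ cargo build                # cross-compiles for aarch64
$ cargo test -p devices      # runs this crate's tests inside the test VM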
The aarch64 VM can be managed with the `./tools/aarch64vm` script.
Crosvm cannot use `cargo test --workspace` because of various restrictions of cargo, so we have our own test runner:
$ ./tools/run_tests
This will run all tests locally. Since we have some architecture-dependent code, we also have the option of running tests within an aarch64 VM:
$ ./tools/run_tests --target=vm:aarch64
When working on a machine that does not support cross-compilation (e.g. gLinux), you can use the dev container to build and run the tests.
$ ./tools/dev_container ./tools/run_tests --target=vm:aarch64
Note, however, that using an interactive shell in the container is preferred, as build artifacts are not preserved between calls:
$ ./tools/dev_container
crosvm_dev$ ./tools/run_tests --target=vm:aarch64
It is also possible to run tests on a remote machine via ssh. The target architecture is automatically detected:
$ ./tools/run_tests --target=ssh:hostname
However, it is your responsibility to make sure the required libraries for crosvm are installed and passwordless authentication is set up. See `./tools/impl/testvm/cloud_init.yaml` for hints on what the VM has installed.
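If you still need to set up passwordless authentication, standard OpenSSH tooling is enough; the hostname below is a placeholder:
$ ssh-keygen -t ed25519      # skip if you already have a key
$ ssh-copy-id hostname       # install your public key on the test machine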
To verify changes before submitting, use the `presubmit` script:
$ ./tools/presubmit
or
$ ./tools/presubmit --quick
This will run clippy, formatters, and all tests. The `--quick` variant will skip some slower checks, like building for other platforms.
- For local testing it is often easiest to pass `--disable-sandbox` to run everything in a single process.
- If seccomp violations are killing your jailed devices, add `--seccomp-log-failures` to the crosvm command line to turn these into warnings. Note that this option will also stop minijail from killing processes that violate the seccomp rule, making the sandboxing much less aggressive.
- The seccomp policies are loaded from `/usr/share/policy/crosvm` by default; you can set up a symlink to your checkout with `sudo mkdir /usr/share/policy && sudo ln -s /path/to/crosvm/seccomp/x86_64 /usr/share/policy/crosvm`. We'll eventually build the precompiled policies into the crosvm binary.
- Devices can't be jailed if `/var/empty` doesn't exist. Run `sudo mkdir -p /var/empty` to work around this for now.
- You need read/write permissions for `/dev/kvm` to run tests or other crosvm instances. Usually it's owned by the `kvm` group, so run `sudo usermod -a -G kvm $USER` and then log out and back in again to fix this.
- Some utility functions require `CAP_NET_ADMIN`, so those usually need to be run as root.

crosvm is included in the ChromeOS source tree at `src/platform/crosvm`. Crosvm can be built with ChromeOS features using Portage or cargo.
If ChromeOS-specific features are not needed, or you want to run the full test suite of crosvm, the Building for Linux and Running crosvm tests workflows can be used from the crosvm repository of ChromeOS as well.
crosvm on ChromeOS is usually built with Portage, so it follows the same general workflow as any `cros_workon` package. The full package name is `chromeos-base/crosvm`.
See the Chromium OS developer guide for more on how to build and deploy with Portage.
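As a sketch of that workflow, a typical Portage iteration might look like the following; `${BOARD}` and `${DUT_IP}` are placeholders for your board name and test device:
$ cros_workon --board=${BOARD} start chromeos-base/crosvm
$ emerge-${BOARD} chromeos-base/crosvm
$ cros deploy ${DUT_IP} chromeos-base/crosvm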
NOTE: `cros_workon_make` modifies crosvm's Cargo.toml and Cargo.lock. Please be careful not to commit these changes. Moreover, with these changes in place, cargo will fail to build and the clippy preupload check will fail.
Since development using Portage can be slow, it's possible to build crosvm for ChromeOS using cargo for faster iteration times. To do so, the `Cargo.toml` file needs to be updated to point to dependencies provided by ChromeOS using `./setup_cros_cargo.sh`.
To see the usage information for your version of crosvm, run `crosvm` or `crosvm run --help`.
To run a very basic VM with just a kernel and default devices:
$ crosvm run "${KERNEL_PATH}"
The uncompressed kernel image, also known as vmlinux, can be found in your kernel build directory; in the case of x86 it is at `arch/x86/boot/compressed/vmlinux`.
In most cases, you will want to give the VM a virtual block device to use as a root file system:
$ crosvm run -r "${ROOT_IMAGE}" "${KERNEL_PATH}"
The root image must be a path to a disk image formatted in a way that the kernel can read. Typically this is a squashfs image made with `mksquashfs` or an ext4 image made with `mkfs.ext4`. By using the `-r` argument, the kernel is automatically told to use that image as the root, so the argument can only be given once. More disks can be given with `-d`, or with `--rwdisk` if a writable disk is desired.
To run crosvm with a writable rootfs:
WARNING: Writable disks are at risk of corruption by a malicious or malfunctioning guest OS.
crosvm run --rwdisk "${ROOT_IMAGE}" -p "root=/dev/vda" vmlinux
NOTE: If more disk arguments are added before the desired rootfs image, the `root=/dev/vda` kernel parameter must be adjusted to the appropriate letter.
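For example, with a data disk given before the rootfs image, the rootfs becomes the second virtio block device; the image names are placeholders:
$ crosvm run -d "${DATA_IMAGE}" --rwdisk "${ROOT_IMAGE}" -p "root=/dev/vdb" vmlinux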
Linux kernel 5.4+ is required for using virtiofs. This is convenient for testing. The file system must be named "mtd*" or "ubi*".
crosvm run --shared-dir "/:mtdfake:type=fs:cache=always" \
    -p "rootfstype=virtiofs root=mtdfake" vmlinux
If the control socket was enabled with `-s`, the main process can be controlled while crosvm is running. To tell crosvm to stop and exit, for example:
NOTE: If the socket path given is for a directory, a socket name underneath that path will be generated based on crosvm's PID.
$ crosvm run -s /run/crosvm.sock ${USUAL_CROSVM_ARGS}
<in another shell>
$ crosvm stop /run/crosvm.sock
WARNING: The guest OS will not be notified or gracefully shut down.
This will cause the original crosvm process to exit in an orderly fashion, allowing it to clean up any OS resources that might have stuck around if crosvm were terminated early.
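The same socket accepts other lifecycle commands; for example, crosvm also provides `suspend` and `resume` subcommands (check `crosvm --help` on your build to confirm what is available):
$ crosvm suspend /run/crosvm.sock
$ crosvm resume /run/crosvm.sock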
By default crosvm runs in multiprocess mode. Each device that supports running inside of a sandbox will run in a jailed child process of crosvm. The appropriate minijail seccomp policy files must be present either in `/usr/share/policy/crosvm` or in the path specified by the `--seccomp-policy-dir` argument. The sandbox can be disabled for testing with the `--disable-sandbox` option.
Virtio Wayland support requires special support on the part of the guest and as such is unlikely to work out of the box unless you are using a Chrome OS kernel along with a `termina` rootfs.
To use it, ensure that the `XDG_RUNTIME_DIR` environment variable is set and that the path `$XDG_RUNTIME_DIR/wayland-0` points to the socket of the Wayland compositor you would like the guest to use.
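A quick way to check both conditions before starting the guest:
$ echo "${XDG_RUNTIME_DIR}"
$ ls -l "${XDG_RUNTIME_DIR}/wayland-0"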
crosvm supports the GDB Remote Serial Protocol to allow developers to debug the guest kernel via GDB. You can enable the feature with the `--gdb` flag:
# Use uncompressed vmlinux
$ crosvm run --gdb <port> ${USUAL_CROSVM_ARGS} vmlinux
Then, you can start GDB in another shell.
$ gdb vmlinux
(gdb) target remote :<port>
(gdb) hbreak start_kernel
(gdb) c
<start booting in the other shell>
For general techniques for debugging the Linux kernel via GDB, see this kernel documentation.
The following are crosvm's default arguments and how to override them; an example invocation follows the list.

- 256MB of memory (set with `-m`)
- 1 virtual CPU (set with `-c`)
- no block devices (set with `-r`, `-d`, or `--rwdisk`)
- no network (set with `--host_ip`, `--netmask`, and `--mac`)
- virtio wayland support if the `XDG_RUNTIME_DIR` environment variable is set (disable with `--no-wl`)
- only the kernel arguments necessary to run the VM (add more with `-p`)
- run in multiprocess mode (run in single process mode with `--disable-sandbox`)
- no control socket (set with `-s`)
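A hypothetical invocation that overrides several of these defaults; the disk image path is a placeholder:
$ crosvm run -m 1024 -c 2 --rwdisk "${ROOT_IMAGE}" -p "root=/dev/vda" -s /run/crosvm.sock vmlinux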
A Linux kernel with KVM support (check for `/dev/kvm`) is required to run crosvm. In order to run certain devices, there are additional system requirements:
- `virtio-wayland` - The `memfd_create` syscall, introduced in Linux 3.17, and a Wayland compositor.
- `vsock` - Host Linux kernel with vhost-vsock support, introduced in Linux 4.8.
- `multiprocess` - Host Linux kernel with seccomp-bpf and Linux namespacing support.
- `virtio-net` - Host Linux kernel with TUN/TAP support (check for `/dev/net/tun`) and running with `CAP_NET_ADMIN` privileges.

Device | Description |
---|---|
CMOS/RTC | Used to get the current calendar time. |
i8042 | Used by the guest kernel to exit crosvm. |
serial | x86 I/O port driven serial devices that print to stdout and take input from stdin. |
virtio-block | Basic read/write block device. |
virtio-net | Device to interface the host and guest networks. |
virtio-rng | Entropy source used to seed guest OS's entropy pool. |
virtio-vsock | Enables VSOCK for the guests. |
virtio-wayland | Allows the guest to use the host's Wayland socket. |
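Some of the system requirements listed above can be checked quickly on the host; note that `/proc/config.gz` is only present if the kernel was built with `CONFIG_IKCONFIG_PROC`:
$ ls -l /dev/kvm /dev/net/tun
$ zgrep -E 'CONFIG_VHOST_VSOCK|CONFIG_SECCOMP' /proc/config.gz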
rustfmt

All code should be formatted with `rustfmt`. We have a script that applies rustfmt to all Rust code in the crosvm repo: please run `bin/fmt` before checking in a change. This is different from `cargo fmt --all`, which formats multiple crates but only a single workspace; crosvm consists of multiple workspaces.
clippy

The `clippy` linter is used to check for common Rust problems. The crosvm project uses a specific set of `clippy` checks; please run `bin/clippy` before checking in a change.
ChromeOS and Android both have a review process for third party dependencies to ensure that code included in the product is safe. Since crosvm needs to build on both, this means we are restricted in our usage of third party crates. When in doubt, do not add new dependencies.
The crosvm source code is written in Rust and C. To build, crosvm generally requires the most recent stable version of rustc.
Source code is organized into crates, each with their own unit tests. These crates are:
- `crosvm` - The top-level binary front-end for using crosvm.
- `devices` - Virtual devices exposed to the guest OS.
- `kernel_loader` - Loads elf64 kernel files to a slice of memory.
- `kvm_sys` - Low-level (mostly) auto-generated structures and constants for using KVM.
- `kvm` - Unsafe, low-level wrapper code for using `kvm_sys`.
- `net_sys` - Low-level (mostly) auto-generated structures and constants for creating TUN/TAP devices.
- `net_util` - Wrapper for creating TUN/TAP devices.
- `sys_util` - Mostly safe wrappers for small system facilities such as `eventfd` or `syslog`.
- `syscall_defines` - Lists of syscall numbers in each architecture used to make syscalls not supported in `libc`.
- `vhost` - Wrappers for creating vhost based devices.
- `virtio_sys` - Low-level (mostly) auto-generated structures and constants for interfacing with kernel vhost support.
- `vm_control` - IPC for the VM.
- `x86_64` - Support code specific to 64-bit Intel machines.

The `seccomp` folder contains minijail seccomp policy files for each sandboxed device. Because some syscalls vary by architecture, the seccomp policies are split by architecture.
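For reference, minijail policy files are plain text with one syscall rule per line; the excerpt below is a hypothetical example of the syntax, not a real crosvm policy:
# Allow these syscalls unconditionally.
read: 1
write: 1
ppoll: 1
# Argument filters and errno returns are also supported.
mmap: arg2 in ~PROT_EXEC
open: return ENOENT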