Test Runner (Twister)

This script scans for the set of unit test applications in the git repository and attempts to execute them. By default, it tries to build each test case on boards marked as default in the board definition file.

The default options will build the majority of the tests on a defined set of boards and will run in an emulated environment if available for the architecture or configuration being tested.

In normal use, twister runs a limited set of kernel tests (inside an emulator). Because of its limited test execution coverage, twister cannot guarantee local changes will succeed in the full build environment, but it does sufficient testing by building samples and tests for different boards and different configurations to help keep the complete code tree buildable.

When using (at least) one -v option, twister’s console output shows for every test how the test is run (qemu, native_posix, etc.) or whether the binary was just built. There are a few reasons why twister only builds a test and doesn’t run it:

  • The test is marked as build_only: true in its .yaml configuration file.

  • The test configuration has defined a harness but you don’t have it or haven’t set it up.

  • The target device is not connected and not available for flashing.

  • You or some higher level automation invoked twister with --build-only.

These also affect the outputs of --testcase-report and --detailed-report; see their respective --help sections.

To run the script in the local tree, follow the steps below:

$ source zephyr-env.sh
$ ./scripts/twister

If you have a system with a large number of cores and plenty of free storage space, you can build and run all possible tests using the following options:

$ ./scripts/twister --all --enable-slow

This will build for all available boards and run all applicable tests in a simulated (for example QEMU) environment.

The list of command line options supported by twister can be viewed using:

$ ./scripts/twister --help

Board Configuration

To build tests for a specific board and to execute some of the tests on real hardware or in an emulation environment such as QEMU, a board configuration file is required. The file is generic enough to be used for other tasks that require a board inventory with details about the board and its configuration that would otherwise only be available at build time.

The board metadata file is located in the board directory and is structured using the YAML markup language. The example below shows a board with the data required for best test coverage for this specific board:

identifier: frdm_k64f
name: NXP FRDM-K64F
type: mcu
arch: arm
toolchain:
  - zephyr
  - gnuarmemb
  - xtools
supported:
  - arduino_gpio
  - arduino_i2c
  - netif:eth
  - adc
  - i2c
  - nvs
  - spi
  - gpio
  - usb_device
  - watchdog
  - can
  - pwm
testing:
  default: true
identifier:

A string that matches how the board is defined in the build system. This same string is used when building, for example when calling west build or cmake:

# with west
west build -b reel_board
# with cmake
cmake -DBOARD=reel_board ..
name:

The actual name of the board as it appears in marketing material.

type:

Type of the board or configuration. Two types are currently supported: mcu and qemu.

arch:

Architecture of the board.

toolchain:

The list of supported toolchains that can build this board. This should match one of the values used for ‘ZEPHYR_TOOLCHAIN_VARIANT’ when building on the command line.
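
For example, to build for this board with the Zephyr SDK toolchain on the command line, the variable could be set before building (a minimal sketch; the sample path is only for illustration):

export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
west build -b frdm_k64f samples/hello_world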

ram:

Available RAM on the board (specified in KB). This is used to match testcase requirements. If not specified we default to 128KB.

flash:

Available FLASH on the board (specified in KB). This is used to match testcase requirements. If not specified we default to 512KB.
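
Both values are plain integers in the board yaml, for example (illustrative numbers, not taken from a real board definition):

ram: 256
flash: 1024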

supported:

A list of features this board supports. This can be specified as a single word feature or as a variant of a feature class. For example:

supported:
  - pci

This indicates the board does support PCI. You can make a testcase build or run only on such boards, or:

supported:
  - netif:eth
  - sensor:bmi16

A testcase can depend on ‘eth’ to test Ethernet only, or on ‘netif’ to run on any board with a networking interface.
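
As a sketch, a testcase that needs Ethernet specifically could depend on the variant, while a generic networking test could depend on the feature class (the test names below are hypothetical):

tests:
  net.eth.basic:
    depends_on: eth
  net.iface.basic:
    depends_on: netif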

testing:

Testing-related keywords that provide the best coverage for the features of this board.

default: [True|False]:

This is a default board; it will be tested with the highest priority and is covered when invoking twister without any additional arguments.

ignore_tags:

Do not attempt to build (and therefore run) tests marked with this list of tags.

only_tags:

Only execute tests with this list of tags on a specific platform.
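
Both keywords take a list of tags under the testing section of the board yaml; for example (the tags shown are only illustrative):

testing:
  default: true
  ignore_tags:
    - net
    - bluetooth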

Test Cases

Test cases are detected by the presence of a ‘testcase.yaml’ or ‘sample.yaml’ file in the application’s project directory. This file may contain one or more entries in the tests section, each identifying a test scenario.

The name of each testcase needs to be unique in the context of the overall testsuite and has to follow basic rules:

  1. The format of the test identifier shall be a string without any spaces or special characters (allowed characters: alphanumeric and [_=]) consisting of multiple sections delimited with a dot (.).

  2. Each test identifier shall start with a section followed by a subsection separated by a dot. For example, a test that covers semaphores in the kernel shall start with kernel.semaphore.

  3. All test identifiers within a testcase.yaml file need to be unique. For example, a testcase.yaml file covering semaphores in the kernel can have:

    • kernel.semaphore: For general semaphore tests

    • kernel.semaphore.stress: Stress testing semaphores in the kernel.

  4. Depending on the nature of the test, an identifier can consist of at least two sections:

    • Ztest tests: The individual testcases in the ztest testsuite will be concatenated to the identifier in the testcase.yaml file, generating unique identifiers for every testcase in the suite.

    • Standalone tests and samples: This type of test should at least have 3 sections in the test identifier in the testcase.yaml (or sample.yaml) file. The last section of the name shall signify the test itself.

Test cases are written using the YAML syntax and share the same structure as samples. The following is an example test with a few options that are explained in this document.

tests:
  bluetooth.gatt:
    build_only: true
    platform_allow: qemu_cortex_m3 qemu_x86
    tags: bluetooth
  bluetooth.gatt.br:
    build_only: true
    extra_args: CONF_FILE="prj_br.conf"
    filter: not CONFIG_DEBUG
    platform_exclude: up_squared
    platform_allow: qemu_cortex_m3 qemu_x86
    tags: bluetooth

A sample with tests will have the same structure with additional information related to the sample and what is being demonstrated:

sample:
  name: hello world
  description: Hello World sample, the simplest Zephyr application
tests:
  sample.basic.hello_world:
    build_only: true
    tags: tests
    min_ram: 16
  sample.basic.hello_world.singlethread:
    build_only: true
    extra_args: CONF_FILE=prj_single.conf
    filter: not CONFIG_BT
    tags: tests
    min_ram: 16

The full canonical name for each test case is:

<path to test case>/<test entry>
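
For example, assuming the hello world sample above lives at samples/hello_world (as used later in this document), one of its test cases would be identified as:

samples/hello_world/sample.basic.hello_world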

Each test block in the testcase metadata can define the following key/value pairs:

tags: <list of tags> (required)

A set of string tags for the testcase. Usually pertains to functional domains but can be anything. Command line invocations of this script can filter the set of tests to run based on tag.
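
For example, assuming the --tag option, the bluetooth tests defined earlier could be selected by tag:

$ ./scripts/twister --tag bluetooth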

skip: <True|False> (default False)

Skip the testcase unconditionally. This can be used for broken tests.

slow: <True|False> (default False)

Don’t run this test case unless --enable-slow was passed in on the command line. Intended for time-consuming test cases that are only run under certain circumstances, like daily builds. These test cases are still compiled.

extra_args: <list of extra arguments>

Extra arguments to pass to Make when building or running the test case.

extra_configs: <list of extra configurations>

Extra configuration options to be merged with a master prj.conf when building or running the test case. For example:

common:
  tags: drivers adc
tests:
  test:
    depends_on: adc
  test_async:
    extra_configs:
      - CONFIG_ADC_ASYNC=y
build_only: <True|False> (default False)

If true, don’t try to run the test even if the selected platform supports it.

build_on_all: <True|False> (default False)

If true, attempt to build the test on all available platforms.

depends_on: <list of features>

A board or platform can announce what features it supports; this option will enable the test only on those platforms that provide this feature.

min_ram: <integer>

Minimum amount of RAM in KB needed for this test to build and run. This is compared with information provided by the board metadata.

min_flash: <integer>

Minimum amount of ROM in KB needed for this test to build and run. This is compared with information provided by the board metadata.

timeout: <number of seconds>

Length of time to run the test in QEMU before automatically killing it. Defaults to 60 seconds.
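
For example, a long-running test could raise the limit in its testcase.yaml (the value shown is only illustrative):

tests:
  kernel.semaphore.stress:
    timeout: 300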

arch_allow: <list of arches, such as x86, arm, arc>

Set of architectures that this test case should only be run for.

arch_exclude: <list of arches, such as x86, arm, arc>

Set of architectures that this test case should not run on.

platform_allow: <list of platforms>

Set of platforms that this test case should only be run for. Do not use this option to limit testing or building in CI due to time or resource constraints; this option should only be used if the test or sample can only be run on the allowed platforms and nothing else.

integration_platforms: <YML list of platforms/boards>

This option limits the scope to the listed platforms when twister is invoked with the --integration option. Use this instead of platform_allow if the goal is to limit scope due to timing or resource constraints.
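
Unlike platform_allow, which is a space-separated string, this keyword is written as a YAML list; a sketch with illustrative platforms:

tests:
  sample.basic.hello_world:
    integration_platforms:
      - native_posix
      - qemu_x86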

platform_exclude: <list of platforms>

Set of platforms that this test case should not run on.

extra_sections: <list of extra binary sections>

When computing sizes, twister will report errors if it finds extra, unexpected sections in the Zephyr binary unless they are named here. They will not be included in the size calculation.

harness: <string>

A harness string needed to run the tests successfully. This can be as simple as a loopback wiring or a complete hardware test setup for sensor and IO testing. Usually pertains to external dependency domains but can be anything such as console, sensor, net, keyboard, Bluetooth or pytest.

harness_config: <harness configuration options>

Extra harness configuration options used to select a board and/or to handle a generic console with regex matching. A configuration can announce what features it supports; this option will enable the test to run only on those platforms that fulfill this external dependency.

The following options are currently supported:

type: <one_line|multi_line> (required)

Depends on the regex string to be matched

record: <recording options>

  regex: <expression> (required)

    Any string that the particular test case prints to record test results.

regex: <expression> (required)

Any string that the particular test case prints to confirm test runs as expected.

ordered: <True|False> (default False)

Check the regular expression strings in an ordered or random fashion.

repeat: <integer>

Number of times to validate the repeated regex expression.

fixture: <expression>

Specify a test case dependency on an external device (e.g., a sensor), and identify setups that fulfill this dependency. In an automated setup, the board selection logic relies on the “fixture” keyword to pick the particular board(s), out of multiple boards, that fulfill the dependency. Some sample fixture names are i2c_hts221, i2c_bme280, i2c_FRAM, ble_fw and gpio_loop.

Only one fixture can be defined per testcase.

pytest_root: <pytest directory> (default pytest)

Specify a pytest directory to execute when the test case starts running. The default pytest directory name is pytest. After pytest finishes, twister checks whether the case passed or failed according to the pytest report.

The following is an example yaml file with a few harness_config options.

sample:
  name: HTS221 Temperature and Humidity Monitor
common:
  tags: sensor
  harness: console
  harness_config:
    type: multi_line
    ordered: false
    regex:
      - "Temperature:(.*)C"
      - "Relative Humidity:(.*)%"
    fixture: i2c_hts221
tests:
  test:
    tags: sensors
    depends_on: i2c

The following is an example yaml file with pytest harness_config options. The default pytest_root name “pytest” will be used if pytest_root is not specified. Please refer to the example in samples/subsys/testsuite/pytest/.

tests:
  pytest.example:
    harness: pytest
    harness_config:
      pytest_root: [pytest directory name]
filter: <expression>

Filter whether the testcase should be run by evaluating an expression against an environment containing the following values:

{ ARCH : <architecture>,
  PLATFORM : <platform>,
  <all CONFIG_* key/value pairs in the test's generated defconfig>,
  *<env>: any environment variable available
}

The grammar for the expression language is as follows:

expression ::= expression "and" expression
             | expression "or" expression
             | "not" expression
             | "(" expression ")"
             | symbol "==" constant
             | symbol "!=" constant
             | symbol "<" number
             | symbol ">" number
             | symbol ">=" number
             | symbol "<=" number
             | symbol "in" list
             | symbol ":" string
             | symbol

list ::= "[" list_contents "]"

list_contents ::= constant
                | list_contents "," constant

constant ::= number
           | string

For the case where expression ::= symbol, it evaluates to true if the symbol is defined to a non-empty string.
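
For example, a filter consisting of just a Kconfig symbol is true only when that symbol is set to a non-empty value in the generated defconfig:

filter = CONFIG_BT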

Operator precedence, starting from lowest to highest:

  • or (left associative)

  • and (left associative)

  • not (right associative)

  • all comparison operators (non-associative)

arch_allow, arch_exclude, platform_allow, platform_exclude are all syntactic sugar for these expressions. For instance:

arch_exclude = x86 arc

Is the same as:

filter = not ARCH in ["x86", "arc"]

The ‘:’ operator compiles the string argument as a regular expression, and then returns a true value only if the symbol’s value in the environment matches. For example, if CONFIG_SOC="stm32f107xc" then

filter = CONFIG_SOC : "stm.*"

Would match it.

The set of test cases that actually run depends on directives in the testcase files and options passed in on the command line. If there is any confusion, running with -v or examining the discard report (twister_discard.csv) can help show why particular test cases were skipped.

Metrics (such as pass/fail state and binary size) for the last code release are stored in scripts/release/twister_last_release.csv. To update this, pass the --all --release options.

To load arguments from a file, write ‘+’ before the file name, e.g., +file_name. File content must be one or more valid arguments separated by line breaks instead of whitespace.
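
A minimal sketch, assuming an argument file named twister_args.txt with one argument per line:

$ cat twister_args.txt
--build-only
-p
frdm_k64f
$ ./scripts/twister +twister_args.txt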

Most everyday users will run with no arguments.

Running in Integration Mode

This mode is used in continuous integration (CI) and other automated environments used to give developers fast feedback on changes. The mode can be activated using the --integration option of twister and narrows down the scope of builds and tests, if applicable, to the platforms defined under the integration keyword in the testcase definition files (testcase.yaml and sample.yaml).
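
For example, a CI job could restrict a test directory to its integration platforms with:

$ ./scripts/twister --integration -T tests/kernel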

Running Tests on Hardware

Besides being able to run tests in QEMU and other simulated environments, twister supports running most of the tests on real devices and produces reports for each run with detailed FAIL/PASS results.

Executing tests on a single device

To use this feature on a single connected device, run twister with the following new options:

scripts/twister --device-testing --device-serial /dev/ttyACM0 \
--device-serial-baud 9600 -p frdm_k64f  -T tests/kernel

The --device-serial option denotes the serial device the board is connected to. This needs to be accessible by the user running twister. You can run this on only one board at a time, specified using the --platform option.

The --device-serial-baud option is only needed if your device does not run at 115200 baud.

Executing tests on multiple devices

To build and execute tests on multiple devices connected to the host PC, a hardware map needs to be created with all connected devices and their details such as the serial device, baud and their IDs if available. Run the following command to produce the hardware map:

./scripts/twister --generate-hardware-map map.yml

The generated hardware map file (map.yml) will have the list of connected devices, for example:

- connected: true
  id: OSHW000032254e4500128002ab98002784d1000097969900
  platform: unknown
  product: DAPLink CMSIS-DAP
  runner: pyocd
  serial: /dev/cu.usbmodem146114202
- connected: true
  id: 000683759358
  platform: unknown
  product: J-Link
  runner: unknown
  serial: /dev/cu.usbmodem0006837593581

Any options marked as ‘unknown’ need to be changed and set to the correct values. In the above example, both the platform names and the runners need to be replaced with the correct values corresponding to the connected hardware. In this example we are using a reel_board and an nrf52840dk_nrf52840:

- connected: true
  id: OSHW000032254e4500128002ab98002784d1000097969900
  platform: reel_board
  product: DAPLink CMSIS-DAP
  runner: pyocd
  serial: /dev/cu.usbmodem146114202
  baud: 9600
- connected: true
  id: 000683759358
  platform: nrf52840dk_nrf52840
  product: J-Link
  runner: nrfjprog
  serial: /dev/cu.usbmodem0006837593581
  baud: 9600

The baud entry is only needed if not running at 115200.

If the map file already exists, then new entries are added and existing entries will be updated. This way you can use one single master hardware map and update it for every run to get the correct serial devices and status of the devices.

With the hardware map ready, you can run any tests by pointing to the map file:

./scripts/twister --device-testing --hardware-map map.yml -T samples/hello_world/

The above command will result in twister building tests for the platforms defined in the hardware map and subsequently flashing and running the tests on those platforms.

Note

Currently only boards with support for both pyocd and nrfjprog are supported with the hardware map features. Boards that require other runners to flash the Zephyr binary are still work in progress.

Fixtures

Some tests require additional setup or special wiring specific to the test. Running the tests without this setup or test fixture may fail. A testcase can specify the fixture it needs, which can then be matched with the hardware capabilities of a board and the fixtures it supports, either via the command line or using the hardware map file.

Fixtures are defined in the hardware map file as a list:

- connected: true
  fixtures:
    - gpio_loopback
  id: 0240000026334e450015400f5e0e000b4eb1000097969900
  platform: frdm_k64f
  product: DAPLink CMSIS-DAP
  runner: pyocd
  serial: /dev/ttyACM9

When running twister with --device-testing, the configured fixture in the hardware map file will be matched to testcases requesting the same fixtures and these tests will be executed on the boards that provide this fixture.


Notes

It may be useful to annotate board descriptions in the hardware map file with additional information. Use the “notes” keyword to do this. For example:

- connected: false
  fixtures:
    - gpio_loopback
  id: 000683290670
  notes: An nrf5340dk_nrf5340 is detected as an nrf52840dk_nrf52840 with no serial
    port, and three serial ports with an unknown platform.  The board id of the serial
    ports is not the same as the board id of the development kit.  If you regenerate
    this file you will need to update serial to reference the third port, and platform
    to nrf5340dk_nrf5340_cpuapp or another supported board target.
  platform: nrf52840dk_nrf52840
  product: J-Link
  runner: jlink
  serial: null

Overriding Board Identifier

When (re-)generated, the hardware map file will contain an “id” keyword that serves as the argument to --board-id when flashing. In some cases the detected ID is not the correct one to use, for example when using an external J-Link probe. The “probe_id” keyword overrides the “id” keyword for this purpose. For example:

- connected: false
  id: 0229000005d9ebc600000000000000000000000097969905
  platform: mimxrt1060_evk
  probe_id: 000609301751
  product: DAPLink CMSIS-DAP
  runner: jlink
  serial: null

Quarantine

Twister allows using user-defined yaml files defining the list of tests to be put under quarantine. Such tests will be skipped and marked accordingly in the output reports. This feature is especially useful when running larger test suites, where a failure of one test can affect the execution of other tests (e.g. putting the physical board in a corrupted state).

To use the quarantine feature one has to add the argument --quarantine-list <PATH_TO_QUARANTINE_YAML> to a twister call. The current status of tests on the quarantine list can also be verified by adding --quarantine-verify to the above argument. This will make twister skip all tests which are not on the given list.
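
For example, assuming a quarantine file named quarantine.yaml in the current directory:

# skip quarantined tests and mark them in the reports
$ ./scripts/twister --quarantine-list quarantine.yaml

# verify the current status of the quarantined tests only
$ ./scripts/twister --quarantine-list quarantine.yaml --quarantine-verify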

A quarantine yaml has to be a sequence of dictionaries. Each dictionary has to have “scenarios” and “platforms” entries listing combinations of scenarios and platforms to put under quarantine. In addition, an optional entry “comment” can be used, where some more details can be given (e.g. link to a reported issue). These comments will also be added to the output reports.

An example of entries in a quarantine yaml:

- scenarios:
    - sample.basic.helloworld
  platforms:
    - all
  comment: "Link to the issue: https://github.com/zephyrproject-rtos/zephyr/pull/33287"

- scenarios:
    - kernel.common
    - kernel.common.misra
    - kernel.common.nano64
  platforms:
    - qemu_cortex_m3
    - native_posix