Test Framework

The Zephyr Test Framework (Ztest) provides a simple testing framework intended to be used during development. It provides basic assertion macros and a generic test structure.

The framework can be used in two ways, either as a generic framework for integration testing, or for unit testing specific modules.

To enable the latest Ztest APIs, simply set CONFIG_ZTEST_NEW_API=y. The legacy APIs will soon be deprecated and eventually removed.

Creating a test suite

Using Ztest to create a test suite is as easy as calling the ZTEST_SUITE macro. It accepts the following arguments:

  • suite_name - The name of the suite. This name must be unique within a single binary.

  • ztest_suite_predicate_t - An optional predicate function that decides whether the suite will run. The predicate gets a pointer to the global state passed in through ztest_run_all() and should return a boolean indicating whether the suite should run.

  • ztest_suite_setup_t - An optional setup function which returns a test fixture. It will be called once per test suite run.

  • ztest_suite_before_t - An optional before function which will run before every single test in this suite.

  • ztest_suite_after_t - An optional after function which will run after every single test in this suite.

  • ztest_suite_teardown_t - An optional teardown function which will run at the end of all the tests in the suite.

Below is an example of a test suite using a predicate:

#include <zephyr/ztest.h>
#include "test_state.h"

static bool predicate(const void *global_state)
{
  return ((const struct test_state*)global_state)->x == 5;
}

ZTEST_SUITE(alternating_suite, predicate, NULL, NULL, NULL, NULL);

Adding tests to a suite

There are four macros used to add a test to a suite:

  • ZTEST (suite_name, test_name) - Adds a test named test_name to the suite named suite_name.

  • ZTEST_USER (suite_name, test_name) - Behaves the same as ZTEST, except that when CONFIG_USERSPACE is enabled the test will run in a userspace thread.

  • ZTEST_F (suite_name, test_name) - Behaves the same as ZTEST, except that the test function will already include a variable named fixture with the type <suite_name>_fixture.

  • ZTEST_USER_F (suite_name, test_name) - Combines the fixture feature of ZTEST_F with the userspace threading of ZTEST_USER.

Test fixtures

Test fixtures can be used to help simplify repeated test setup operations. In many cases, tests in the same suite will require some initial setup followed by some form of reset between each test. This is achieved via fixtures in the following way:

#include <zephyr/ztest.h>

struct my_suite_fixture {
  size_t max_size;
  size_t size;
  uint8_t buff[1];
};

static void *my_suite_setup(void)
{
  /* Allocate the fixture with 256 byte buffer */
  struct my_suite_fixture *fixture = k_malloc(sizeof(struct my_suite_fixture) + 255);

  zassume_not_null(fixture, NULL);
  fixture->max_size = 256;

  return fixture;
}

static void my_suite_before(void *f)
{
  struct my_suite_fixture *fixture = (struct my_suite_fixture *)f;
  memset(fixture->buff, 0, fixture->max_size);
  fixture->size = 0;
}

static void my_suite_teardown(void *f)
{
  k_free(f);
}

ZTEST_SUITE(my_suite, NULL, my_suite_setup, my_suite_before, NULL, my_suite_teardown);

ZTEST_F(my_suite, test_feature_x)
{
  zassert_equal(0, fixture->size);
  zassert_equal(256, fixture->max_size);
}

Advanced features

Test rules

Test rules are a way to run the same logic for every test and every suite. There are a lot of cases where you might want to reset some state for every test in the binary (regardless of which suite is currently running). As an example, this could be to reset mocks, reset emulators, flush the UART, etc.

#include <zephyr/fff.h>
#include <zephyr/ztest.h>

#include "test_mocks.h"

DEFINE_FFF_GLOBALS;

DEFINE_FAKE_VOID_FUN(my_weak_func);

static void fff_reset_rule_before(const struct ztest_unit_test *test, void *fixture)
{
  ARG_UNUSED(test);
  ARG_UNUSED(fixture);

  RESET_FAKE(my_weak_func);
}

ZTEST_RULE(fff_reset_rule, fff_reset_rule_before, NULL);

A custom test_main

While the Ztest framework provides a default test_main() function, some applications will want to provide custom behavior. This is particularly true if there is some global state that the tests depend on and that state either cannot be replicated or is difficult to replicate without starting the process over. For example, one such state could be a power sequence. Assuming a board has several steps in its power-on sequence, a test suite can use a predicate to control when it runs. In that case, the test_main() function can be written as follows:

#include <zephyr/ztest.h>

#include "my_test.h"

void test_main(void)
{
  struct power_sequence_state state;

  /* Only suites that use a predicate checking for phase == PWR_PHASE_0 will run. */
  state.phase = PWR_PHASE_0;
  ztest_run_all(&state);

  /* Only suites that use a predicate checking for phase == PWR_PHASE_1 will run. */
  state.phase = PWR_PHASE_1;
  ztest_run_all(&state);

  /* Only suites that use a predicate checking for phase == PWR_PHASE_2 will run. */
  state.phase = PWR_PHASE_2;
  ztest_run_all(&state);

  /* Check that all the suites in this binary ran at least once. */
  ztest_verify_all_test_suites_ran();
}

Quick start - Integration testing

A simple working base is located at samples/subsys/testsuite/integration. Just copy the files to tests/ and edit them for your needs. The test will then be automatically built and run by the twister script. If you are testing the bar component of foo, you should copy the sample folder to tests/foo/bar. It can then be tested with:

./scripts/twister -s tests/foo/bar/test-identifier

In the example above tests/foo/bar signifies the path to the test and the test-identifier references a test defined in the testcase.yaml file.

To run all tests defined in a test project, run:

./scripts/twister -T tests/foo/bar/

The sample contains the following files:

CMakeLists.txt

# SPDX-License-Identifier: Apache-2.0

cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(integration)

FILE(GLOB app_sources src/*.c)
target_sources(app PRIVATE ${app_sources})

testcase.yaml

tests:
  # section.subsection
  testing.ztest:
    build_only: true
    platform_allow: native_posix
    tags: testing

prj.conf

CONFIG_ZTEST=y
CONFIG_ZTEST_NEW_API=y

src/main.c (see best practices)

/*
 * Copyright (c) 2016 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/ztest.h>


ZTEST_SUITE(framework_tests, NULL, NULL, NULL, NULL, NULL);

/**
 * @brief Test Asserts
 *
 * This test verifies various assert macros provided by ztest.
 *
 */
ZTEST(framework_tests, test_assert)
{
	zassert_true(1, "1 was false");
	zassert_false(0, "0 was true");
	zassert_is_null(NULL, "NULL was not NULL");
	zassert_not_null("foo", "\"foo\" was NULL");
	zassert_equal(1, 1, "1 was not equal to 1");
	zassert_equal_ptr(NULL, NULL, "NULL was not equal to NULL");
}

A test case project may consist of multiple sub-tests or smaller tests that test either functionality or APIs. Functions implementing a test should follow the guidelines below:

  • Test case function names should be prefixed with test_

  • Test cases should be documented using Doxygen

  • Test function names should be unique within the section or component being tested

An example can be seen below:

/**
 * @brief Test Asserts
 *
 * This test verifies the zassert_true macro.
 */
ZTEST(my_suite, test_assert)
{
        zassert_true(1, "1 was false");
}

Listing Tests

Tests (test projects) in the Zephyr tree consist of many testcases that run as part of a project and test similar functionality, for example an API or a feature. The twister script can parse the testcases in all test projects or a subset of them, and can generate reports on a granular level, i.e. if cases have passed or failed or if they were blocked or skipped.

Twister parses the source files looking for test case names, so you can list all kernel test cases, for example, by entering:

twister --list-tests -T tests/kernel

Skipping Tests

Special- or architecture-specific tests cannot run on all platforms and architectures; however, we still want to count them and report them as skipped. Because the test inventory and the list of tests are extracted from the code, adding conditionals inside the test suite is sub-optimal. Tests that need to be skipped for a certain platform or feature must explicitly report a skip using ztest_test_skip() or Z_TEST_SKIP_IFDEF. If the test runs, it needs to report either a pass or a fail. For example:

#ifdef CONFIG_TEST1
ZTEST(common, test_test1)
{
  zassert_true(1, "true");
}
#else
ZTEST(common, test_test1)
{
        ztest_test_skip();
}
#endif

ZTEST(common, test_test2)
{
  Z_TEST_SKIP_IFDEF(CONFIG_BUGxxxxx);
  zassert_equal(1, 0, NULL);
}

ZTEST_SUITE(common, NULL, NULL, NULL, NULL, NULL);

Quick start - Unit testing

Ztest can be used for unit testing. This means that rather than including the entire Zephyr OS for testing a single function, you can focus the testing efforts into the specific module in question. This will speed up testing since only the module will have to be compiled in, and the tested functions will be called directly.

Since you won’t be including basic kernel data structures that most code depends on, you have to provide function stubs in the test. Ztest provides some helpers for mocking functions, as demonstrated below.

In a unit test, mock objects can simulate the behavior of complex real objects and are used to decide whether a test failed or passed by verifying whether an interaction with an object occurred, and if required, to assert the order of that interaction.

Best practices for declaring the test suite

twister and other validation tools need to obtain the list of subcases that a Zephyr ztest test image will expose.

Rationale

All of this serves traceability. It is not enough to have only a semaphore test project; we also need to show that we have testpoints for all APIs and functionality, and that we can trace back to the documentation of the API and to functional requirements.

The idea is that test reports show results for every sub-testcase as passed, failed, blocked, or skipped. Reporting on only the high-level test project level, particularly when tests do too many things, is too vague.

Other questions:

  • Why not pre-scan with CPP and then parse? Or post-scan the ELF file?

    If C pre-processing or building fails because of any issue, then we won’t be able to determine the list of subcases.

  • Why not declare them in the YAML testcase description?

    A separate testcase description file would be harder to maintain than just keeping the information in the test source files themselves – only one file to update when changes are made eliminates duplication.

Stress test framework

Zephyr stress test framework (Ztress) provides an environment for executing user functions in multiple priority contexts. It can be used to validate that code is resilient to preemptions. The framework tracks the number of executions and preemptions for each context. Execution can have various completion conditions like timeout, number of executions or number of preemptions.

The framework sets up the environment by creating the requested number of threads (each at a different priority) and, optionally, starting a timer. In each context, a user function (different for each context) is called, after which the context sleeps for a randomized number of system ticks. The framework tracks CPU load and adjusts the sleep periods to achieve a higher load. To increase the probability of preemptions, the system clock frequency should be relatively high: the default 100 Hz on QEMU x86 is much too low, and it is recommended to increase it to 100 kHz.

The stress test environment is set up and executed using ZTRESS_EXECUTE, which accepts a variable number of arguments. Each argument is a context that is specified by the ZTRESS_TIMER or ZTRESS_THREAD macros. Contexts are specified in descending priority order. Each context specifies its completion conditions by providing the minimum number of executions and preemptions. When all conditions are met and execution has completed, an execution report is printed and the macro returns. While the test is executing, a progress report is printed periodically.

Execution can be prematurely completed by specifying a test timeout (ztress_set_timeout()) or an explicit abort (ztress_abort()).

The user function parameters contain an execution counter and a flag indicating whether it is the last execution.

The example below shows how to set up and run three contexts (one of which is the k_timer interrupt handler context). The completion criteria are set to at least 10000 executions of each context and 1000 preemptions of the lowest-priority context. Additionally, a timeout is configured to complete the test after 10 seconds if those conditions are not met. The last argument of each context is the initial sleep time, which will be adjusted throughout the test to achieve the highest CPU load.

ztress_set_timeout(K_MSEC(10000));
ZTRESS_EXECUTE(ZTRESS_TIMER(foo_0, user_data_0, 10000, Z_TIMEOUT_TICKS(20)),
               ZTRESS_THREAD(foo_1, user_data_1, 10000, 0, Z_TIMEOUT_TICKS(20)),
               ZTRESS_THREAD(foo_2, user_data_2, 10000, 1000, Z_TIMEOUT_TICKS(20)));

Configuration

Static configuration of Ztress contains:

  • ZTRESS_MAX_THREADS - number of supported threads.

  • ZTRESS_STACK_SIZE - Stack size of created threads.

  • ZTRESS_REPORT_PROGRESS_MS - Test progress report interval.

API reference

Running tests

group ztest_test

This module eases the testing process by providing helpful macros and other testing structures.

Defines

ZTEST(suite, fn)

Create and register a new unit test.

Calling this macro will create a new unit test and attach it to the declared suite. The suite does not need to be defined in the same compilation unit.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_USER(suite, fn)

Define a test function that should run as a user thread.

This macro behaves exactly the same as ZTEST, but calls the test function in user space if CONFIG_USERSPACE was enabled.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_F(suite, fn)

Define a test function.

This macro behaves exactly the same as ZTEST(), but the function takes an argument for the fixture of type struct suite##_fixture* named fixture.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_USER_F(suite, fn)

Define a test function that should run as a user thread.

If CONFIG_USERSPACE is not enabled, this is functionally identical to ZTEST_F(). The test function takes a single fixture argument of type struct suite##_fixture* named fixture.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_RULE(name, before_each_fn, after_each_fn)

Define a test rule that will run before/after each unit test.

Functions defined here will run before/after each unit test for every test suite. Along with the callback, the test functions are provided a pointer to the test being run, and the data. This provides a mechanism for tests to perform custom operations depending on the specific test or the data (for example logging may use the test’s name).

Ordering:

  • Test rule’s before function will run before the suite’s before function. This is done to allow the test suite’s customization to take precedence over the rule which is applied to all suites.

  • Test rule’s after function is not guaranteed to run in any particular order.

Parameters
  • name – The name for the test rule (must be unique within the compilation unit)

  • before_each_fn – The callback function (ztest_rule_cb) to call before each test (may be NULL)

  • after_each_fn – The callback function (ztest_rule_cb) to call after each test (may be NULL)

ztest_run_test_suite(suite)

Run the specified test suite.

Parameters
  • suite – Test suite to run.

Typedefs

typedef void (*ztest_rule_cb)(const struct ztest_unit_test *test, void *data)

Test rule callback function signature.

The function signature that can be used to register a test rule’s before/after callback. This provides access to the test and the fixture data (if provided).

Parameters
  • test – Pointer to the unit test in context

  • data – Pointer to the test’s fixture data (may be NULL)

Functions

void ztest_test_fail(void)

Fail the currently running test.

This is the function called from failed assertions and the like. You probably don’t need to call it yourself.

void ztest_test_pass(void)

Pass the currently running test.

Normally a test passes just by returning without an assertion failure. However, if the success case for your test involves a fatal fault, you can call this function from k_sys_fatal_error_handler to indicate that the test passed before aborting the thread.

void ztest_test_skip(void)

Skip the current test.

void ztest_simple_1cpu_before(void *data)

A ‘before’ function to use in test suites that just need to start 1cpu.

Ignores data, and calls z_test_1cpu_start()

Parameters
  • data – The test suite’s data

void ztest_simple_1cpu_after(void *data)

An ‘after’ function to use in test suites that just need to stop 1cpu.

Ignores data, and calls z_test_1cpu_stop()

Parameters
  • data – The test suite’s data

struct ztest_test_rule
struct ztest_arch_api
#include <ztest_test_new.h>

Structure for architecture specific APIs.

Assertions

These macros will instantly fail the test if the related assertion fails. When an assertion fails, it will print the current file, line and function, alongside a reason for the failure and an optional message. If the config option CONFIG_ZTEST_ASSERT_VERBOSE is 0, the assertions will only print the file and line numbers, reducing the binary size of the test.

Example output for a failed macro from zassert_equal(buf->ref, 2, "Invalid refcount"):

Assertion failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->ref not equal to 2)
Aborted at unit test function
group ztest_assert

This module provides assertions when using Ztest.

Defines

zassert(cond, default_msg, msg, ...)

Fail the test, if cond is false.

You probably don’t need to call this macro directly. You should instead use zassert_{condition} macros below.

Note that when CONFIG_MULTITHREADING=n, a failed assertion simply returns from the calling function. In that case, ztest asserts should be used only in the context of the test function.

Parameters
  • cond – Condition to check

  • msg – Optional, can be NULL. Message to print if cond is false.

  • default_msg – Message to print if cond is false

zassume(cond, default_msg, msg, ...)
zassert_unreachable(msg, ...)

Assert that this function call won’t be reached.

Parameters
  • msg – Optional message to print if the assertion fails

zassert_true(cond, msg, ...)

Assert that cond is true.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assertion fails

zassert_false(cond, msg, ...)

Assert that cond is false.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assertion fails

zassert_ok(cond, msg, ...)

Assert that cond is 0 (success)

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assertion fails

zassert_is_null(ptr, msg, ...)

Assert that ptr is NULL.

Parameters
  • ptr – Pointer to compare

  • msg – Optional message to print if the assertion fails

zassert_not_null(ptr, msg, ...)

Assert that ptr is not NULL.

Parameters
  • ptr – Pointer to compare

  • msg – Optional message to print if the assertion fails

zassert_equal(a, b, msg, ...)

Assert that a equals b.

a and b won’t be converted and will be compared directly.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assertion fails

zassert_not_equal(a, b, msg, ...)

Assert that a does not equal b.

a and b won’t be converted and will be compared directly.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assertion fails

zassert_equal_ptr(a, b, msg, ...)

Assert that a equals b.

a and b will be converted to void * before comparing.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assertion fails

zassert_within(a, b, d, msg, ...)

Assert that a is within b with delta d.

Parameters
  • a – Value to compare

  • b – Value to compare

  • d – Delta

  • msg – Optional message to print if the assertion fails

zassert_between_inclusive(a, l, u, msg, ...)

Assert that a is greater than or equal to l and less than or equal to u.

Parameters
  • a – Value to compare

  • l – Lower limit

  • u – Upper limit

  • msg – Optional message to print if the assertion fails

zassert_mem_equal(...)

Assert that 2 memory buffers have the same contents.

This macro calls the final memory comparison assertion macro. Using double expansion allows providing some arguments by macros that would expand to more than one value (C99 requires that all macro arguments be expanded before the macro call).

Parameters
zassert_mem_equal__(buf, exp, size, msg, ...)

Internal assert that 2 memory buffers have the same contents.

Note

This is an internal macro, to be used as a second expansion. See zassert_mem_equal.

Parameters
  • buf – Buffer to compare

  • exp – Buffer with expected contents

  • size – Size of buffers

  • msg – Optional message to print if the assertion fails

Assumptions

These macros will instantly skip the test or suite if the related assumption fails. When an assumption fails, it will print the current file, line, and function, alongside a reason for the failure and an optional message. If the config option CONFIG_ZTEST_ASSERT_VERBOSE is 0, the assumptions will only print the file and line numbers, reducing the binary size of the test.

Example output for a failed macro from zassume_equal(buf->ref, 2, "Invalid refcount"):

group ztest_assume

This module provides assumptions when using Ztest.

Defines

zassume_true(cond, msg, ...)

Assume that cond is true.

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assumption fails

zassume_false(cond, msg, ...)

Assume that cond is false.

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assumption fails

zassume_ok(cond, msg, ...)

Assume that cond is 0 (success)

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assumption fails

zassume_is_null(ptr, msg, ...)

Assume that ptr is NULL.

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • ptr – Pointer to compare

  • msg – Optional message to print if the assumption fails

zassume_not_null(ptr, msg, ...)

Assume that ptr is not NULL.

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • ptr – Pointer to compare

  • msg – Optional message to print if the assumption fails

zassume_equal(a, b, msg, ...)

Assume that a equals b.

a and b won’t be converted and will be compared directly. If the assumption fails, the test will be marked as “skipped”.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assumption fails

zassume_not_equal(a, b, msg, ...)

Assume that a does not equal b.

a and b won’t be converted and will be compared directly. If the assumption fails, the test will be marked as “skipped”.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assumption fails

zassume_equal_ptr(a, b, msg, ...)

Assume that a equals b.

a and b will be converted to void * before comparing. If the assumption fails, the test will be marked as “skipped”.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assumption fails

zassume_within(a, b, d, msg, ...)

Assume that a is within b with delta d.

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • a – Value to compare

  • b – Value to compare

  • d – Delta

  • msg – Optional message to print if the assumption fails

zassume_between_inclusive(a, l, u, msg, ...)

Assume that a is greater than or equal to l and less than or equal to u.

If the assumption fails, the test will be marked as “skipped”.

Parameters
  • a – Value to compare

  • l – Lower limit

  • u – Upper limit

  • msg – Optional message to print if the assumption fails

zassume_mem_equal(...)

Assume that 2 memory buffers have the same contents.

This macro calls the final memory comparison assumption macro. Using double expansion allows providing some arguments by macros that would expand to more than one value (C99 requires that all macro arguments be expanded before the macro call).

Parameters
zassume_mem_equal__(buf, exp, size, msg, ...)

Internal assume that 2 memory buffers have the same contents.

If the assumption fails, the test will be marked as “skipped”.

Note

This is an internal macro, to be used as a second expansion. See zassume_mem_equal.

Parameters
  • buf – Buffer to compare

  • exp – Buffer with expected contents

  • size – Size of buffers

  • msg – Optional message to print if the assumption fails

Mocking via FFF

Zephyr has integrated with FFF for mocking. See FFF for documentation. To use it, use the following in your source:

#include <zephyr/fff.h>

Customizing Test Output

The way output is presented when running tests can be customized. An example can be found in tests/ztest/custom_output.

Customization is enabled by setting CONFIG_ZTEST_TC_UTIL_USER_OVERRIDE to “y” and adding a file tc_util_user_override.h with your overrides.

Add the line zephyr_include_directories(my_folder) to your project’s CMakeLists.txt to let Zephyr find your header file during builds.

See the file subsys/testsuite/include/tc_util.h to see which macros and/or defines can be overridden. These will be surrounded by blocks such as:

#ifndef SOMETHING
#define SOMETHING <default implementation>
#endif /* SOMETHING */

Shuffling Test Sequence

By default the tests are sorted and run in alphanumeric order. Test cases may inadvertently depend on this sequence; enable ZTEST_SHUFFLE to randomize the order. The output from a failed run will display the seed that was used. For native posix builds you can provide the seed to twister with the --seed argument.

Static configuration of ZTEST_SHUFFLE contains:

  • ZTEST_SHUFFLE_SUITE_REPEAT_COUNT - Number of iterations the test suite will run.

  • ZTEST_SHUFFLE_TEST_REPEAT_COUNT - Number of iterations the test will run.
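A prj.conf fragment enabling shuffling might look like the following; the repeat-count values chosen here are illustrative:

```
CONFIG_ZTEST=y
CONFIG_ZTEST_NEW_API=y
CONFIG_ZTEST_SHUFFLE=y
CONFIG_ZTEST_SHUFFLE_SUITE_REPEAT_COUNT=2
CONFIG_ZTEST_SHUFFLE_TEST_REPEAT_COUNT=3
```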

Test Selection

For POSIX-enabled builds with ZTEST_NEW_API, use command line arguments to list or select tests to run. The test argument expects a comma-separated list of suite::test entries; you can substitute the test name with * to run all tests within a suite.

For example:

$ zephyr.exe -list
$ zephyr.exe -test="fixture_tests::test_fixture_pointer,framework_tests::test_assert_mem_equal"
$ zephyr.exe -test="framework_tests::*"