Commit 64e34b50 authored by Linus Torvalds


Merge tag 'linux-kselftest-kunit-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull KUnit updates from Shuah Khan:
 "Several fixes, cleanups, and enhancements to tests and framework:

   - introduce _NULL and _NOT_NULL macros for pointer error checks

   - rework the kunit_resource allocation policy to fix memory leaks when
     the caller doesn't specify a free() function to be used when
     allocating memory with kunit_add_resource() and kunit_alloc_resource()

   - add ability to specify suite-level init and exit functions"

* tag 'linux-kselftest-kunit-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (41 commits)
  kunit: tool: Use qemu-system-i386 for i386 runs
  kunit: fix executor OOM error handling logic on non-UML
  kunit: tool: update riscv QEMU config with new serial dependency
  kcsan: test: use new suite_{init,exit} support
  kunit: tool: Add list of all valid test configs on UML
  kunit: take `kunit_assert` as `const`
  kunit: tool: misc cleanups
  kunit: tool: minor cosmetic cleanups in kunit_parser.py
  kunit: tool: make parser stop overwriting status of suites w/ no_tests
  kunit: tool: remove dead parse_crash_in_log() logic
  kunit: tool: print clearer error message when there's no TAP output
  kunit: tool: stop using a shell to run kernel under QEMU
  kunit: tool: update test counts summary line format
  kunit: bail out of test filtering logic quicker if OOM
  lib/Kconfig.debug: change KUnit tests to default to KUNIT_ALL_TESTS
  kunit: Rework kunit_resource allocation policy
  kunit: fix debugfs code to use enum kunit_status, not bool
  kfence: test: use new suite_{init/exit} support, add .kunitconfig
  kunit: add ability to specify suite-level init and exit functions
  kunit: rename print_subtest_{start,end} for clarity (s/subtest/suite)
  ...
parents 1c6d2ead e7eaffce
@@ -6,6 +6,7 @@ API Reference
 .. toctree::
 
 	test
+	resource
 
 This section documents the KUnit kernel testing API. It is divided into the
 following sections:
@@ -13,3 +14,7 @@ following sections:
 Documentation/dev-tools/kunit/api/test.rst
 
  - documents all of the standard testing API
+
+Documentation/dev-tools/kunit/api/resource.rst
+
+ - documents the KUnit resource API
.. SPDX-License-Identifier: GPL-2.0

============
Resource API
============

This file documents the KUnit resource API.

Most users won't need to use this API directly; power users can use it to store
state on a per-test basis, register custom cleanup actions, and more.

.. kernel-doc:: include/kunit/resource.h
   :internal:
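As a rough illustration of what the documentation above describes, here is a hypothetical kernel-side sketch (not runnable outside a kernel build, and the names `my_state`, `my_state_free`, and `example_use_resource` are invented for illustration) of stashing per-test state with a custom cleanup action via `kunit_add_resource()`:

```c
#include <kunit/resource.h>
#include <kunit/test.h>
#include <linux/slab.h>

/* Hypothetical per-test state; freed automatically when the test exits. */
struct my_state {
	int counter;
};

static void my_state_free(struct kunit_resource *res)
{
	kfree(res->data);
}

static void example_use_resource(struct kunit *test)
{
	/* A static resource: no separate allocation for the bookkeeping. */
	static struct kunit_resource res;
	struct my_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	KUNIT_ASSERT_NOT_NULL(test, state);

	/* Register @state; my_state_free() runs when the test completes. */
	kunit_add_resource(test, NULL, my_state_free, &res, state);
}
```

This is a sketch of the intended usage only; consult the generated kernel-doc for the authoritative signatures.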
@@ -114,6 +114,7 @@ Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:
 
 	CONFIG_DEBUG_KERNEL=y
 	CONFIG_DEBUG_INFO=y
+	CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
 	CONFIG_GCOV=y
 
@@ -122,7 +123,7 @@ Putting it together into a copy-pastable sequence of commands:
 
 .. code-block:: bash
 
 	# Append coverage options to the current config
-	$ echo -e "CONFIG_DEBUG_KERNEL=y\nCONFIG_DEBUG_INFO=y\nCONFIG_GCOV=y" >> .kunit/.kunitconfig
+	$ echo -e "CONFIG_DEBUG_KERNEL=y\nCONFIG_DEBUG_INFO=y\nCONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y\nCONFIG_GCOV=y" >> .kunit/.kunitconfig
 	$ ./tools/testing/kunit/kunit.py run
 
 	# Extract the coverage information from the build dir (.kunit/)
 	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
......
@@ -125,8 +125,8 @@ We need many test cases covering all the unit's behaviors. It is common to have
 many similar tests. In order to reduce duplication in these closely related
 tests, most unit testing frameworks (including KUnit) provide the concept of a
 *test suite*. A test suite is a collection of test cases for a unit of code
-with a setup function that gets invoked before every test case and then a tear
-down function that gets invoked after every test case completes. For example:
+with optional setup and teardown functions that run before/after the whole
+suite and/or every test case. For example:
 
 .. code-block:: c
 
@@ -141,16 +141,19 @@ down function that gets invoked after every test case completes. For example:
 		.name = "example",
 		.init = example_test_init,
 		.exit = example_test_exit,
+		.suite_init = example_suite_init,
+		.suite_exit = example_suite_exit,
 		.test_cases = example_test_cases,
 	};
 	kunit_test_suite(example_test_suite);
 
-In the above example, the test suite ``example_test_suite`` would run the test
-cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``. Each
-would have ``example_test_init`` called immediately before it and
-``example_test_exit`` called immediately after it.
-``kunit_test_suite(example_test_suite)`` registers the test suite with the
-KUnit test framework.
+In the above example, the test suite ``example_test_suite`` would first run
+``example_suite_init``, then run the test cases ``example_test_foo``,
+``example_test_bar``, and ``example_test_baz``. Each would have
+``example_test_init`` called immediately before it and ``example_test_exit``
+called immediately after it. Finally, ``example_suite_exit`` would be called
+after everything else. ``kunit_test_suite(example_test_suite)`` registers the
+test suite with the KUnit test framework.
 
 .. note::
    A test case will only run if it is associated with a test suite.
......
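The lifecycle described in the documentation hunk above (suite_init once, then init/exit wrapped around each case, then suite_exit) can be sketched as a plain userspace C mock. This is not the real KUnit framework, just the call ordering; every name here (`mock_suite`, `run_suite`, `demo_order`) is invented for illustration:

```c
#include <string.h>

/* Hypothetical userspace mock of the KUnit suite lifecycle. */
struct mock_suite {
	int (*suite_init)(char *log);   /* once, before all cases */
	void (*suite_exit)(char *log);  /* once, after all cases */
	void (*init)(char *log);        /* before each case */
	void (*exit)(char *log);        /* after each case */
	void (**cases)(char *log);
	int num_cases;
};

static int run_suite(struct mock_suite *s, char *log)
{
	int i;

	if (s->suite_init && s->suite_init(log))
		return -1; /* a suite_init failure skips every case */
	for (i = 0; i < s->num_cases; i++) {
		if (s->init)
			s->init(log);
		s->cases[i](log);
		if (s->exit)
			s->exit(log);
	}
	if (s->suite_exit)
		s->suite_exit(log);
	return 0;
}

static int si(char *log) { strcat(log, "suite_init "); return 0; }
static void se(char *log) { strcat(log, "suite_exit "); }
static void ti(char *log) { strcat(log, "init "); }
static void te(char *log) { strcat(log, "exit "); }
static void case0(char *log) { strcat(log, "case0 "); }
static void case1(char *log) { strcat(log, "case1 "); }

static void (*demo_cases[])(char *) = { case0, case1 };

/* Returns 1 if the observed call order matches the documented one. */
int demo_order(void)
{
	char log[256] = "";
	struct mock_suite s = {
		.suite_init = si, .suite_exit = se,
		.init = ti, .exit = te,
		.cases = demo_cases, .num_cases = 2,
	};

	if (run_suite(&s, log))
		return 0;
	return strcmp(log,
		"suite_init init case0 exit init case1 exit suite_exit ") == 0;
}
```

The mock also captures the failure rule introduced by this series: if `suite_init` returns an error, no test case runs at all.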
@@ -1566,14 +1566,6 @@ static void test_exit(struct kunit *test)
 	torture_cleanup_end();
 }
 
-static struct kunit_suite kcsan_test_suite = {
-	.name = "kcsan",
-	.test_cases = kcsan_test_cases,
-	.init = test_init,
-	.exit = test_exit,
-};
-static struct kunit_suite *kcsan_test_suites[] = { &kcsan_test_suite, NULL };
-
 __no_kcsan
 static void register_tracepoints(struct tracepoint *tp, void *ignore)
 {
@@ -1589,11 +1581,7 @@ static void unregister_tracepoints(struct tracepoint *tp, void *ignore)
 	tracepoint_probe_unregister(tp, probe_console, NULL);
 }
 
-/*
- * We only want to do tracepoints setup and teardown once, therefore we have to
- * customize the init and exit functions and cannot rely on kunit_test_suite().
- */
-static int __init kcsan_test_init(void)
+static int kcsan_suite_init(struct kunit_suite *suite)
 {
 	/*
 	 * Because we want to be able to build the test as a module, we need to
@@ -1601,18 +1589,25 @@ static int __init kcsan_test_init(void)
 	 * won't work here.
 	 */
 	for_each_kernel_tracepoint(register_tracepoints, NULL);
-	return __kunit_test_suites_init(kcsan_test_suites);
+	return 0;
 }
 
-static void kcsan_test_exit(void)
+static void kcsan_suite_exit(struct kunit_suite *suite)
 {
-	__kunit_test_suites_exit(kcsan_test_suites);
 	for_each_kernel_tracepoint(unregister_tracepoints, NULL);
 	tracepoint_synchronize_unregister();
 }
 
-late_initcall_sync(kcsan_test_init);
-module_exit(kcsan_test_exit);
+static struct kunit_suite kcsan_test_suite = {
+	.name = "kcsan",
+	.test_cases = kcsan_test_cases,
+	.init = test_init,
+	.exit = test_exit,
+	.suite_init = kcsan_suite_init,
+	.suite_exit = kcsan_suite_exit,
+};
+
+kunit_test_suites(&kcsan_test_suite);
 
 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Marco Elver <elver@google.com>");
@@ -2142,10 +2142,11 @@ config TEST_DIV64
 	  If unsure, say N.
 
 config KPROBES_SANITY_TEST
-	tristate "Kprobes sanity tests"
+	tristate "Kprobes sanity tests" if !KUNIT_ALL_TESTS
 	depends on DEBUG_KERNEL
 	depends on KPROBES
 	depends on KUNIT
+	default KUNIT_ALL_TESTS
 	help
 	  This option provides for testing basic kprobes functionality on
 	  boot. Samples of kprobe and kretprobe are inserted and
@@ -2419,8 +2420,9 @@ config TEST_SYSCTL
 	  If unsure, say N.
 
 config BITFIELD_KUNIT
-	tristate "KUnit test bitfield functions at runtime"
+	tristate "KUnit test bitfield functions at runtime" if !KUNIT_ALL_TESTS
 	depends on KUNIT
+	default KUNIT_ALL_TESTS
 	help
 	  Enable this option to test the bitfield functions at boot.
 
@@ -2454,8 +2456,9 @@ config HASH_KUNIT_TEST
 	  optimized versions. If unsure, say N.
 
 config RESOURCE_KUNIT_TEST
-	tristate "KUnit test for resource API"
+	tristate "KUnit test for resource API" if !KUNIT_ALL_TESTS
 	depends on KUNIT
+	default KUNIT_ALL_TESTS
 	help
 	  This builds the resource API unit test.
 	  Tests the logic of API provided by resource.c and ioport.h.
@@ -2508,8 +2511,9 @@ config LINEAR_RANGES_TEST
 	  If unsure, say N.
 
 config CMDLINE_KUNIT_TEST
-	tristate "KUnit test for cmdline API"
+	tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS
 	depends on KUNIT
+	default KUNIT_ALL_TESTS
 	help
 	  This builds the cmdline API unit test.
 	  Tests the logic of API provided by cmdline.c.
@@ -2519,8 +2523,9 @@ config CMDLINE_KUNIT_TEST
 	  If unsure, say N.
 
 config BITS_TEST
-	tristate "KUnit test for bits.h"
+	tristate "KUnit test for bits.h" if !KUNIT_ALL_TESTS
 	depends on KUNIT
+	default KUNIT_ALL_TESTS
 	help
 	  This builds the bits unit test.
 	  Tests the logic of macros defined in bits.h.
......
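Every hunk above applies the same Kconfig pattern: the prompt is shown only when `KUNIT_ALL_TESTS` is disabled, and the symbol defaults to `KUNIT_ALL_TESTS`, so enabling that one option turns on every such test without per-symbol prompts. A minimal sketch for a hypothetical driver test (the symbol name `MY_DRIVER_KUNIT_TEST` is invented for illustration):

```kconfig
config MY_DRIVER_KUNIT_TEST
	tristate "KUnit test for my_driver" if !KUNIT_ALL_TESTS
	depends on KUNIT
	default KUNIT_ALL_TESTS
	help
	  This builds the my_driver unit test.

	  If unsure, say N.
```

With `CONFIG_KUNIT_ALL_TESTS=y` the symbol silently follows it; without, the user still gets the usual prompt.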
 obj-$(CONFIG_KUNIT) +=			kunit.o
 
 kunit-objs +=				test.o \
+					resource.o \
					string-stream.o \
					assert.o \
					try-catch.o \
......
@@ -52,7 +52,7 @@ static void debugfs_print_result(struct seq_file *seq,
 static int debugfs_print_results(struct seq_file *seq, void *v)
 {
 	struct kunit_suite *suite = (struct kunit_suite *)seq->private;
-	bool success = kunit_suite_has_succeeded(suite);
+	enum kunit_status success = kunit_suite_has_succeeded(suite);
 	struct kunit_case *test_case;
 
 	if (!suite || !suite->log)
......
@@ -71,9 +71,13 @@ kunit_filter_tests(struct kunit_suite *const suite, const char *test_glob)
 
 	/* Use memcpy to workaround copy->name being const. */
 	copy = kmalloc(sizeof(*copy), GFP_KERNEL);
+	if (!copy)
+		return ERR_PTR(-ENOMEM);
 	memcpy(copy, suite, sizeof(*copy));
 
 	filtered = kcalloc(n + 1, sizeof(*filtered), GFP_KERNEL);
+	if (!filtered)
+		return ERR_PTR(-ENOMEM);
 
 	n = 0;
 	kunit_suite_for_each_test_case(suite, test_case) {
@@ -106,14 +110,16 @@ kunit_filter_subsuite(struct kunit_suite * const * const subsuite,
 
 	filtered = kmalloc_array(n + 1, sizeof(*filtered), GFP_KERNEL);
 	if (!filtered)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	n = 0;
 	for (i = 0; subsuite[i] != NULL; ++i) {
 		if (!glob_match(filter->suite_glob, subsuite[i]->name))
 			continue;
 		filtered_suite = kunit_filter_tests(subsuite[i], filter->test_glob);
-		if (filtered_suite)
+		if (IS_ERR(filtered_suite))
+			return ERR_CAST(filtered_suite);
+		else if (filtered_suite)
 			filtered[n++] = filtered_suite;
 	}
 	filtered[n] = NULL;
@@ -146,7 +152,8 @@ static void kunit_free_suite_set(struct suite_set suite_set)
 }
 
 static struct suite_set kunit_filter_suites(const struct suite_set *suite_set,
-					    const char *filter_glob)
+					    const char *filter_glob,
+					    int *err)
 {
 	int i;
 	struct kunit_suite * const **copy, * const *filtered_subsuite;
@@ -166,6 +173,10 @@ static struct suite_set kunit_filter_suites(const struct suite_set *suite_set,
 
 	for (i = 0; i < max; ++i) {
 		filtered_subsuite = kunit_filter_subsuite(suite_set->start[i], &filter);
+		if (IS_ERR(filtered_subsuite)) {
+			*err = PTR_ERR(filtered_subsuite);
+			return filtered;
+		}
 		if (filtered_subsuite)
 			*copy++ = filtered_subsuite;
 	}
@@ -236,9 +247,15 @@ int kunit_run_all_tests(void)
 		.start = __kunit_suites_start,
 		.end = __kunit_suites_end,
 	};
+	int err = 0;
 
-	if (filter_glob_param)
-		suite_set = kunit_filter_suites(&suite_set, filter_glob_param);
+	if (filter_glob_param) {
+		suite_set = kunit_filter_suites(&suite_set, filter_glob_param, &err);
+		if (err) {
+			pr_err("kunit executor: error filtering suites: %d\n", err);
+			goto out;
+		}
+	}
 
 	if (!action_param)
 		kunit_exec_run_tests(&suite_set);
@@ -251,9 +268,10 @@ int kunit_run_all_tests(void)
 		kunit_free_suite_set(suite_set);
 	}
 
-	kunit_handle_shutdown();
-
-	return 0;
+out:
+	kunit_handle_shutdown();
+	return err;
 }
 
 #if IS_BUILTIN(CONFIG_KUNIT_TEST)
......
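The executor change above replaces `NULL`-on-OOM with the kernel's `ERR_PTR()`/`IS_ERR()`/`PTR_ERR()` idiom, so an allocation failure propagates up as an errno rather than looking like "no match". The sketch below re-implements that idiom in userspace with simplified stand-ins (`err_ptr`, `is_err`, `MOCK_ENOMEM`, `filter_all` are all invented names, not the kernel macros) just to show the propagation pattern:

```c
#include <stdint.h>

/* Simplified userspace stand-ins for the kernel's ERR_PTR idiom:
 * errno values are encoded into the top page of the address space. */
#define MOCK_MAX_ERRNO 4095
#define MOCK_ENOMEM 12

static void *err_ptr(long err) { return (void *)(intptr_t)err; }
static long ptr_err(const void *p) { return (long)(intptr_t)p; }
static int is_err(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MOCK_MAX_ERRNO;
}

/* A filter step that may fail with -ENOMEM, like kunit_filter_tests(). */
static void *filter_one(int simulate_oom)
{
	static int dummy = 42;

	if (simulate_oom)
		return err_ptr(-MOCK_ENOMEM);
	return &dummy;
}

/* The caller decodes the error and reports it through an out-parameter,
 * mirroring the new *err plumbing in kunit_filter_suites(). */
static long filter_all(int simulate_oom, int *err)
{
	void *res = filter_one(simulate_oom);

	if (is_err(res)) {
		*err = (int)ptr_err(res);	/* like *err = PTR_ERR(...) */
		return *err;
	}
	return 0;
}

/* Returns 1 if both the success and the OOM path behave as expected. */
int demo_filter(void)
{
	int err = 0;

	if (filter_all(0, &err) != 0 || err != 0)
		return 0;
	if (filter_all(1, &err) != -MOCK_ENOMEM || err != -MOCK_ENOMEM)
		return 0;
	return 1;
}
```

The payoff, visible in `kunit_run_all_tests()` above, is that a filtering OOM now produces a logged error and a non-zero return instead of silently running zero tests.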
@@ -137,14 +137,16 @@ static void filter_suites_test(struct kunit *test)
 		.end = suites + 2,
 	};
 	struct suite_set filtered = {.start = NULL, .end = NULL};
+	int err = 0;
 
 	/* Emulate two files, each having one suite */
 	subsuites[0][0] = alloc_fake_suite(test, "suite0", dummy_test_cases);
 	subsuites[1][0] = alloc_fake_suite(test, "suite1", dummy_test_cases);
 
 	/* Filter out suite1 */
-	filtered = kunit_filter_suites(&suite_set, "suite0");
+	filtered = kunit_filter_suites(&suite_set, "suite0", &err);
 	kfree_subsuites_at_end(test, &filtered); /* let us use ASSERTs without leaking */
+	KUNIT_EXPECT_EQ(test, err, 0);
 
 	KUNIT_ASSERT_EQ(test, filtered.end - filtered.start, (ptrdiff_t)1);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, filtered.start);
......
@@ -40,6 +40,17 @@ static int example_test_init(struct kunit *test)
 	return 0;
 }
 
+/*
+ * This is run once before all test cases in the suite.
+ * See the comment on example_test_suite for more information.
+ */
+static int example_test_init_suite(struct kunit_suite *suite)
+{
+	kunit_info(suite, "initializing suite\n");
+
+	return 0;
+}
+
 /*
  * This test should always be skipped.
  */
@@ -91,6 +102,8 @@ static void example_all_expect_macros_test(struct kunit *test)
 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test, test);
 	KUNIT_EXPECT_PTR_EQ(test, NULL, NULL);
 	KUNIT_EXPECT_PTR_NE(test, test, NULL);
+	KUNIT_EXPECT_NULL(test, NULL);
+	KUNIT_EXPECT_NOT_NULL(test, test);
 
 	/* String assertions */
 	KUNIT_EXPECT_STREQ(test, "hi", "hi");
@@ -140,17 +153,20 @@ static struct kunit_case example_test_cases[] = {
  * may be specified which runs after every test case and can be used to for
  * cleanup. For clarity, running tests in a test suite would behave as follows:
  *
+ * suite.suite_init(suite);
  * suite.init(test);
  * suite.test_case[0](test);
  * suite.exit(test);
  * suite.init(test);
  * suite.test_case[1](test);
  * suite.exit(test);
+ * suite.suite_exit(suite);
  * ...;
  */
 static struct kunit_suite example_test_suite = {
 	.name = "example",
 	.init = example_test_init,
+	.suite_init = example_test_init_suite,
 	.test_cases = example_test_cases,
 };
......
@@ -190,6 +190,40 @@ static void kunit_resource_test_destroy_resource(struct kunit *test)
 	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
 }
 
+static void kunit_resource_test_remove_resource(struct kunit *test)
+{
+	struct kunit_test_resource_context *ctx = test->priv;
+	struct kunit_resource *res = kunit_alloc_and_get_resource(
+			&ctx->test,
+			fake_resource_init,
+			fake_resource_free,
+			GFP_KERNEL,
+			ctx);
+
+	/* The resource is in the list */
+	KUNIT_EXPECT_FALSE(test, list_empty(&ctx->test.resources));
+
+	/* Remove the resource. The pointer is still valid, but it can't be
+	 * found.
+	 */
+	kunit_remove_resource(test, res);
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+	/* We haven't been freed yet. */
+	KUNIT_EXPECT_TRUE(test, ctx->is_resource_initialized);
+
+	/* Removing the resource multiple times is valid. */
+	kunit_remove_resource(test, res);
+	KUNIT_EXPECT_TRUE(test, list_empty(&ctx->test.resources));
+	/* Despite having been removed twice (from only one reference), the
+	 * resource still has not been freed.
+	 */
+	KUNIT_EXPECT_TRUE(test, ctx->is_resource_initialized);
+
+	/* Free the resource. */
+	kunit_put_resource(res);
+	KUNIT_EXPECT_FALSE(test, ctx->is_resource_initialized);
+}
+
 static void kunit_resource_test_cleanup_resources(struct kunit *test)
 {
 	int i;
@@ -387,6 +421,7 @@ static struct kunit_case kunit_resource_test_cases[] = {
 	KUNIT_CASE(kunit_resource_test_init_resources),
 	KUNIT_CASE(kunit_resource_test_alloc_resource),
 	KUNIT_CASE(kunit_resource_test_destroy_resource),
+	KUNIT_CASE(kunit_resource_test_remove_resource),
 	KUNIT_CASE(kunit_resource_test_cleanup_resources),
 	KUNIT_CASE(kunit_resource_test_proper_free_ordering),
 	KUNIT_CASE(kunit_resource_test_static),
@@ -435,7 +470,7 @@ static void kunit_log_test(struct kunit *test)
 	KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
				     strstr(suite.log, "along with this."));
 #else
-	KUNIT_EXPECT_PTR_EQ(test, test->log, (char *)NULL);
+	KUNIT_EXPECT_NULL(test, test->log);
 #endif
 }
......
// SPDX-License-Identifier: GPL-2.0
/*
 * KUnit resource API for test managed resources (allocations, etc.).
 *
 * Copyright (C) 2022, Google LLC.
 * Author: Daniel Latypov <dlatypov@google.com>
 */

#include <kunit/resource.h>
#include <kunit/test.h>
#include <linux/kref.h>

/*
 * Used for static resources and when a kunit_resource * has been created by
 * kunit_alloc_resource(). When an init function is supplied, @data is passed
 * into the init function; otherwise, we simply set the resource data field to
 * the data value passed in. Doesn't initialize res->should_kfree.
 */
int __kunit_add_resource(struct kunit *test,
			 kunit_resource_init_t init,
			 kunit_resource_free_t free,
			 struct kunit_resource *res,
			 void *data)
{
	int ret = 0;
	unsigned long flags;

	res->free = free;
	kref_init(&res->refcount);

	if (init) {
		ret = init(res, data);
		if (ret)
			return ret;
	} else {
		res->data = data;
	}

	spin_lock_irqsave(&test->lock, flags);
	list_add_tail(&res->node, &test->resources);
	/* refcount for list is established by kref_init() */
	spin_unlock_irqrestore(&test->lock, flags);

	return ret;
}
EXPORT_SYMBOL_GPL(__kunit_add_resource);

void kunit_remove_resource(struct kunit *test, struct kunit_resource *res)
{
	unsigned long flags;
	bool was_linked;

	spin_lock_irqsave(&test->lock, flags);
	was_linked = !list_empty(&res->node);
	list_del_init(&res->node);
	spin_unlock_irqrestore(&test->lock, flags);

	if (was_linked)
		kunit_put_resource(res);
}
EXPORT_SYMBOL_GPL(kunit_remove_resource);

int kunit_destroy_resource(struct kunit *test, kunit_resource_match_t match,
			   void *match_data)
{
	struct kunit_resource *res = kunit_find_resource(test, match,
							 match_data);

	if (!res)
		return -ENOENT;

	kunit_remove_resource(test, res);

	/* We have a reference also via _find(); drop it. */
	kunit_put_resource(res);

	return 0;
}
EXPORT_SYMBOL_GPL(kunit_destroy_resource);
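The new `kunit_remove_resource()` above is what fixes the double-put bug: it drops the list's reference only if the node was still linked (`was_linked`), so calling it repeatedly is safe, and the resource is freed only when the last reference is put. The userspace mock below (all names hypothetical, refcount reduced to a plain int) demonstrates just that semantics:

```c
/* Hypothetical userspace mock of the kunit_resource refcount semantics. */
struct mock_res {
	int refcount;	/* stands in for res->refcount (a kref) */
	int linked;	/* stands in for !list_empty(&res->node) */
	int freed;
};

static void put_res(struct mock_res *r)
{
	if (--r->refcount == 0)
		r->freed = 1;
}

/* Mirrors kunit_remove_resource(): drop the list's reference only if the
 * node was still linked, so repeated removal is a no-op. */
static void remove_res(struct mock_res *r)
{
	int was_linked = r->linked;

	r->linked = 0;
	if (was_linked)
		put_res(r);
}

/* Returns 1 if double removal is safe and the final put frees. */
int demo_remove_twice(void)
{
	/* two refs: one held by the "list", one by the caller */
	struct mock_res r = { .refcount = 2, .linked = 1, .freed = 0 };

	remove_res(&r);
	if (r.freed || r.refcount != 1)
		return 0;

	remove_res(&r);		/* second removal must change nothing */
	if (r.freed || r.refcount != 1)
		return 0;

	put_res(&r);		/* drop the caller's reference */
	return r.freed;
}
```

Compare with the removed `kunit_remove_resource()` further down, which did an unconditional `list_del()` plus `kunit_put_resource()` and therefore over-released on repeated calls.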
@@ -6,10 +6,10 @@
  * Author: Brendan Higgins <brendanhiggins@google.com>
  */
 
+#include <kunit/resource.h>
 #include <kunit/test.h>
 #include <kunit/test-bug.h>
 #include <linux/kernel.h>
-#include <linux/kref.h>
 #include <linux/moduleparam.h>
 #include <linux/sched/debug.h>
 #include <linux/sched.h>
@@ -134,7 +134,7 @@ size_t kunit_suite_num_test_cases(struct kunit_suite *suite)
 }
 EXPORT_SYMBOL_GPL(kunit_suite_num_test_cases);
 
-static void kunit_print_subtest_start(struct kunit_suite *suite)
+static void kunit_print_suite_start(struct kunit_suite *suite)
 {
 	kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT "# Subtest: %s",
 		  suite->name);
@@ -179,6 +179,9 @@ enum kunit_status kunit_suite_has_succeeded(struct kunit_suite *suite)
 	const struct kunit_case *test_case;
 	enum kunit_status status = KUNIT_SKIPPED;
 
+	if (suite->suite_init_err)
+		return KUNIT_FAILURE;
+
 	kunit_suite_for_each_test_case(suite, test_case) {
 		if (test_case->status == KUNIT_FAILURE)
 			return KUNIT_FAILURE;
@@ -192,7 +195,7 @@ EXPORT_SYMBOL_GPL(kunit_suite_has_succeeded);
 
 static size_t kunit_suite_counter = 1;
 
-static void kunit_print_subtest_end(struct kunit_suite *suite)
+static void kunit_print_suite_end(struct kunit_suite *suite)
 {
 	kunit_print_ok_not_ok((void *)suite, false,
 			      kunit_suite_has_succeeded(suite),
@@ -241,7 +244,7 @@ static void kunit_print_string_stream(struct kunit *test,
 }
 
 static void kunit_fail(struct kunit *test, const struct kunit_loc *loc,
-		       enum kunit_assert_type type, struct kunit_assert *assert,
+		       enum kunit_assert_type type, const struct kunit_assert *assert,
 		       const struct va_format *message)
 {
 	struct string_stream *stream;
@@ -281,7 +284,7 @@ static void __noreturn kunit_abort(struct kunit *test)
 void kunit_do_failed_assertion(struct kunit *test,
 			       const struct kunit_loc *loc,
 			       enum kunit_assert_type type,
-			       struct kunit_assert *assert,
+			       const struct kunit_assert *assert,
 			       const char *fmt, ...)
 {
 	va_list args;
@@ -498,7 +501,16 @@ int kunit_run_tests(struct kunit_suite *suite)
 	struct kunit_result_stats suite_stats = { 0 };
 	struct kunit_result_stats total_stats = { 0 };
 
-	kunit_print_subtest_start(suite);
+	if (suite->suite_init) {
+		suite->suite_init_err = suite->suite_init(suite);
+		if (suite->suite_init_err) {
+			kunit_err(suite, KUNIT_SUBTEST_INDENT
+				  "# failed to initialize (%d)", suite->suite_init_err);
+			goto suite_end;
+		}
+	}
+
+	kunit_print_suite_start(suite);
 
 	kunit_suite_for_each_test_case(suite, test_case) {
 		struct kunit test = { .param_value = NULL, .param_index = 0 };
@@ -551,8 +563,12 @@ int kunit_run_tests(struct kunit_suite *suite)
 		kunit_accumulate_stats(&total_stats, param_stats);
 	}
 
+	if (suite->suite_exit)
+		suite->suite_exit(suite);
+
 	kunit_print_suite_stats(suite, suite_stats, total_stats);
-	kunit_print_subtest_end(suite);
+suite_end:
+	kunit_print_suite_end(suite);
 
 	return 0;
 }
@@ -562,6 +578,7 @@ static void kunit_init_suite(struct kunit_suite *suite)
 {
 	kunit_debugfs_create_suite(suite);
 	suite->status_comment[0] = '\0';
+	suite->suite_init_err = 0;
 }
 
 int __kunit_test_suites_init(struct kunit_suite * const * const suites)
@@ -592,120 +609,6 @@ void __kunit_test_suites_exit(struct kunit_suite **suites)
 }
 EXPORT_SYMBOL_GPL(__kunit_test_suites_exit);
/*
* Used for static resources and when a kunit_resource * has been created by
* kunit_alloc_resource(). When an init function is supplied, @data is passed
* into the init function; otherwise, we simply set the resource data field to
* the data value passed in.
*/
int kunit_add_resource(struct kunit *test,
kunit_resource_init_t init,
kunit_resource_free_t free,
struct kunit_resource *res,
void *data)
{
int ret = 0;
unsigned long flags;
res->free = free;
kref_init(&res->refcount);
if (init) {
ret = init(res, data);
if (ret)
return ret;
} else {
res->data = data;
}
spin_lock_irqsave(&test->lock, flags);
list_add_tail(&res->node, &test->resources);
/* refcount for list is established by kref_init() */
spin_unlock_irqrestore(&test->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(kunit_add_resource);
int kunit_add_named_resource(struct kunit *test,
kunit_resource_init_t init,
kunit_resource_free_t free,
struct kunit_resource *res,
const char *name,
void *data)
{
struct kunit_resource *existing;
if (!name)
return -EINVAL;
existing = kunit_find_named_resource(test, name);
if (existing) {
kunit_put_resource(existing);
return -EEXIST;
}
res->name = name;
return kunit_add_resource(test, init, free, res, data);
}
EXPORT_SYMBOL_GPL(kunit_add_named_resource);
struct kunit_resource *kunit_alloc_and_get_resource(struct kunit *test,
kunit_resource_init_t init,
kunit_resource_free_t free,
gfp_t internal_gfp,
void *data)
{
struct kunit_resource *res;
int ret;
res = kzalloc(sizeof(*res), internal_gfp);
if (!res)
return NULL;
ret = kunit_add_resource(test, init, free, res, data);
if (!ret) {
/*
* bump refcount for get; kunit_resource_put() should be called
* when done.
*/
kunit_get_resource(res);
return res;
}
return NULL;
}
EXPORT_SYMBOL_GPL(kunit_alloc_and_get_resource);
void kunit_remove_resource(struct kunit *test, struct kunit_resource *res)
{
unsigned long flags;
spin_lock_irqsave(&test->lock, flags);
list_del(&res->node);
spin_unlock_irqrestore(&test->lock, flags);
kunit_put_resource(res);
}
EXPORT_SYMBOL_GPL(kunit_remove_resource);
int kunit_destroy_resource(struct kunit *test, kunit_resource_match_t match,
void *match_data)
{
struct kunit_resource *res = kunit_find_resource(test, match,
match_data);
if (!res)
return -ENOENT;
kunit_remove_resource(test, res);
/* We have a reference also via _find(); drop it. */
kunit_put_resource(res);
return 0;
}
EXPORT_SYMBOL_GPL(kunit_destroy_resource);
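The resource code above follows a two-reference discipline: the test's resource list holds one kref (established by `kref_init()` in `kunit_add_resource()`), and `kunit_find_resource()` hands the caller an extra reference, which is why `kunit_destroy_resource()` must put twice — once via `kunit_remove_resource()` for the list's reference and once for the reference taken by `_find()`. A toy Python model of that discipline (not kernel code; the class and method names are illustrative only):

```python
# Toy model of KUnit's resource refcounting: the resource list owns one
# reference, and find_resource() returns the resource with an extra one
# that the caller is responsible for dropping.

class Resource:
    def __init__(self, data, free=None):
        self.data = data
        self.free = free
        self.refcount = 1  # reference held by the test's resource list

class Test:
    def __init__(self):
        self.resources = []

    def add_resource(self, res):
        self.resources.append(res)

    def find_resource(self, match):
        for res in self.resources:
            if match(res):
                res.refcount += 1  # caller gets its own reference
                return res
        return None

    def put_resource(self, res):
        res.refcount -= 1
        if res.refcount == 0 and res.free:
            res.free(res)

    def remove_resource(self, res):
        self.resources.remove(res)
        self.put_resource(res)  # drop the list's reference

    def destroy_resource(self, match):
        res = self.find_resource(match)
        if res is None:
            return -1  # stands in for -ENOENT
        self.remove_resource(res)
        self.put_resource(res)  # drop the reference taken by find
        return 0

freed = []
t = Test()
t.add_resource(Resource("buf", free=lambda r: freed.append(r.data)))
assert t.destroy_resource(lambda r: r.data == "buf") == 0
assert freed == ["buf"]  # both references dropped, so free() ran
```

The same bookkeeping explains the memory-leak fix in this series: if a caller never takes an extra reference, dropping the list's single reference in `kunit_remove_resource()` is what finally frees the resource.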
struct kunit_kmalloc_array_params {
	size_t n;
	size_t size;
...
@@ -804,6 +804,401 @@ static struct kunit_suite list_test_module = {
	.test_cases = list_test_cases,
};

struct hlist_test_struct {
int data;
struct hlist_node list;
};
static void hlist_test_init(struct kunit *test)
{
/* Test the different ways of initialising a list. */
struct hlist_head list1 = HLIST_HEAD_INIT;
struct hlist_head list2;
HLIST_HEAD(list3);
struct hlist_head *list4;
struct hlist_head *list5;
INIT_HLIST_HEAD(&list2);
list4 = kzalloc(sizeof(*list4), GFP_KERNEL | __GFP_NOFAIL);
INIT_HLIST_HEAD(list4);
list5 = kmalloc(sizeof(*list5), GFP_KERNEL | __GFP_NOFAIL);
memset(list5, 0xFF, sizeof(*list5));
INIT_HLIST_HEAD(list5);
KUNIT_EXPECT_TRUE(test, hlist_empty(&list1));
KUNIT_EXPECT_TRUE(test, hlist_empty(&list2));
KUNIT_EXPECT_TRUE(test, hlist_empty(&list3));
KUNIT_EXPECT_TRUE(test, hlist_empty(list4));
KUNIT_EXPECT_TRUE(test, hlist_empty(list5));
kfree(list4);
kfree(list5);
}
static void hlist_test_unhashed(struct kunit *test)
{
struct hlist_node a;
HLIST_HEAD(list);
INIT_HLIST_NODE(&a);
/* is unhashed by default */
KUNIT_EXPECT_TRUE(test, hlist_unhashed(&a));
hlist_add_head(&a, &list);
/* is hashed once added to list */
KUNIT_EXPECT_FALSE(test, hlist_unhashed(&a));
hlist_del_init(&a);
/* is again unhashed after del_init */
KUNIT_EXPECT_TRUE(test, hlist_unhashed(&a));
}
/* Doesn't test concurrency guarantees */
static void hlist_test_unhashed_lockless(struct kunit *test)
{
struct hlist_node a;
HLIST_HEAD(list);
INIT_HLIST_NODE(&a);
/* is unhashed by default */
KUNIT_EXPECT_TRUE(test, hlist_unhashed_lockless(&a));
hlist_add_head(&a, &list);
/* is hashed once added to list */
KUNIT_EXPECT_FALSE(test, hlist_unhashed_lockless(&a));
hlist_del_init(&a);
/* is again unhashed after del_init */
KUNIT_EXPECT_TRUE(test, hlist_unhashed_lockless(&a));
}
static void hlist_test_del(struct kunit *test)
{
struct hlist_node a, b;
HLIST_HEAD(list);
hlist_add_head(&a, &list);
hlist_add_behind(&b, &a);
/* before: [list] -> a -> b */
hlist_del(&a);
/* now: [list] -> b */
KUNIT_EXPECT_PTR_EQ(test, list.first, &b);
KUNIT_EXPECT_PTR_EQ(test, b.pprev, &list.first);
}
static void hlist_test_del_init(struct kunit *test)
{
struct hlist_node a, b;
HLIST_HEAD(list);
hlist_add_head(&a, &list);
hlist_add_behind(&b, &a);
/* before: [list] -> a -> b */
hlist_del_init(&a);
/* now: [list] -> b */
KUNIT_EXPECT_PTR_EQ(test, list.first, &b);
KUNIT_EXPECT_PTR_EQ(test, b.pprev, &list.first);
/* a is now initialised */
KUNIT_EXPECT_PTR_EQ(test, a.next, NULL);
KUNIT_EXPECT_PTR_EQ(test, a.pprev, NULL);
}
/* Tests all three hlist_add_* functions */
static void hlist_test_add(struct kunit *test)
{
struct hlist_node a, b, c, d;
HLIST_HEAD(list);
hlist_add_head(&a, &list);
hlist_add_head(&b, &list);
hlist_add_before(&c, &a);
hlist_add_behind(&d, &a);
/* should be [list] -> b -> c -> a -> d */
KUNIT_EXPECT_PTR_EQ(test, list.first, &b);
KUNIT_EXPECT_PTR_EQ(test, c.pprev, &(b.next));
KUNIT_EXPECT_PTR_EQ(test, b.next, &c);
KUNIT_EXPECT_PTR_EQ(test, a.pprev, &(c.next));
KUNIT_EXPECT_PTR_EQ(test, c.next, &a);
KUNIT_EXPECT_PTR_EQ(test, d.pprev, &(a.next));
KUNIT_EXPECT_PTR_EQ(test, a.next, &d);
}
/* Tests both hlist_fake() and hlist_add_fake() */
static void hlist_test_fake(struct kunit *test)
{
struct hlist_node a;
INIT_HLIST_NODE(&a);
/* not fake after init */
KUNIT_EXPECT_FALSE(test, hlist_fake(&a));
hlist_add_fake(&a);
/* is now fake */
KUNIT_EXPECT_TRUE(test, hlist_fake(&a));
}
static void hlist_test_is_singular_node(struct kunit *test)
{
struct hlist_node a, b;
HLIST_HEAD(list);
INIT_HLIST_NODE(&a);
KUNIT_EXPECT_FALSE(test, hlist_is_singular_node(&a, &list));
hlist_add_head(&a, &list);
KUNIT_EXPECT_TRUE(test, hlist_is_singular_node(&a, &list));
hlist_add_head(&b, &list);
KUNIT_EXPECT_FALSE(test, hlist_is_singular_node(&a, &list));
KUNIT_EXPECT_FALSE(test, hlist_is_singular_node(&b, &list));
}
static void hlist_test_empty(struct kunit *test)
{
struct hlist_node a;
HLIST_HEAD(list);
/* list starts off empty */
KUNIT_EXPECT_TRUE(test, hlist_empty(&list));
hlist_add_head(&a, &list);
/* list is no longer empty */
KUNIT_EXPECT_FALSE(test, hlist_empty(&list));
}
static void hlist_test_move_list(struct kunit *test)
{
struct hlist_node a;
HLIST_HEAD(list1);
HLIST_HEAD(list2);
hlist_add_head(&a, &list1);
KUNIT_EXPECT_FALSE(test, hlist_empty(&list1));
KUNIT_EXPECT_TRUE(test, hlist_empty(&list2));
hlist_move_list(&list1, &list2);
KUNIT_EXPECT_TRUE(test, hlist_empty(&list1));
KUNIT_EXPECT_FALSE(test, hlist_empty(&list2));
}
static void hlist_test_entry(struct kunit *test)
{
struct hlist_test_struct test_struct;
KUNIT_EXPECT_PTR_EQ(test, &test_struct,
hlist_entry(&(test_struct.list),
struct hlist_test_struct, list));
}
static void hlist_test_entry_safe(struct kunit *test)
{
struct hlist_test_struct test_struct;
KUNIT_EXPECT_PTR_EQ(test, &test_struct,
hlist_entry_safe(&(test_struct.list),
struct hlist_test_struct, list));
KUNIT_EXPECT_PTR_EQ(test, NULL,
hlist_entry_safe((struct hlist_node *)NULL,
struct hlist_test_struct, list));
}
static void hlist_test_for_each(struct kunit *test)
{
struct hlist_node entries[3], *cur;
HLIST_HEAD(list);
int i = 0;
hlist_add_head(&entries[0], &list);
hlist_add_behind(&entries[1], &entries[0]);
hlist_add_behind(&entries[2], &entries[1]);
hlist_for_each(cur, &list) {
KUNIT_EXPECT_PTR_EQ(test, cur, &entries[i]);
i++;
}
KUNIT_EXPECT_EQ(test, i, 3);
}
static void hlist_test_for_each_safe(struct kunit *test)
{
struct hlist_node entries[3], *cur, *n;
HLIST_HEAD(list);
int i = 0;
hlist_add_head(&entries[0], &list);
hlist_add_behind(&entries[1], &entries[0]);
hlist_add_behind(&entries[2], &entries[1]);
hlist_for_each_safe(cur, n, &list) {
KUNIT_EXPECT_PTR_EQ(test, cur, &entries[i]);
hlist_del(&entries[i]);
i++;
}
KUNIT_EXPECT_EQ(test, i, 3);
KUNIT_EXPECT_TRUE(test, hlist_empty(&list));
}
static void hlist_test_for_each_entry(struct kunit *test)
{
struct hlist_test_struct entries[5], *cur;
HLIST_HEAD(list);
int i = 0;
entries[0].data = 0;
hlist_add_head(&entries[0].list, &list);
for (i = 1; i < 5; ++i) {
entries[i].data = i;
hlist_add_behind(&entries[i].list, &entries[i-1].list);
}
i = 0;
hlist_for_each_entry(cur, &list, list) {
KUNIT_EXPECT_EQ(test, cur->data, i);
i++;
}
KUNIT_EXPECT_EQ(test, i, 5);
}
static void hlist_test_for_each_entry_continue(struct kunit *test)
{
struct hlist_test_struct entries[5], *cur;
HLIST_HEAD(list);
int i = 0;
entries[0].data = 0;
hlist_add_head(&entries[0].list, &list);
for (i = 1; i < 5; ++i) {
entries[i].data = i;
hlist_add_behind(&entries[i].list, &entries[i-1].list);
}
/* We skip the first (zero-th) entry. */
i = 1;
cur = &entries[0];
hlist_for_each_entry_continue(cur, list) {
KUNIT_EXPECT_EQ(test, cur->data, i);
/* Stamp over the entry. */
cur->data = 42;
i++;
}
KUNIT_EXPECT_EQ(test, i, 5);
/* The first entry was not visited. */
KUNIT_EXPECT_EQ(test, entries[0].data, 0);
/* The second (and presumably others), were. */
KUNIT_EXPECT_EQ(test, entries[1].data, 42);
}
static void hlist_test_for_each_entry_from(struct kunit *test)
{
struct hlist_test_struct entries[5], *cur;
HLIST_HEAD(list);
int i = 0;
entries[0].data = 0;
hlist_add_head(&entries[0].list, &list);
for (i = 1; i < 5; ++i) {
entries[i].data = i;
hlist_add_behind(&entries[i].list, &entries[i-1].list);
}
i = 0;
cur = &entries[0];
hlist_for_each_entry_from(cur, list) {
KUNIT_EXPECT_EQ(test, cur->data, i);
/* Stamp over the entry. */
cur->data = 42;
i++;
}
KUNIT_EXPECT_EQ(test, i, 5);
/* The first entry was visited. */
KUNIT_EXPECT_EQ(test, entries[0].data, 42);
}
static void hlist_test_for_each_entry_safe(struct kunit *test)
{
struct hlist_test_struct entries[5], *cur;
struct hlist_node *tmp_node;
HLIST_HEAD(list);
int i = 0;
entries[0].data = 0;
hlist_add_head(&entries[0].list, &list);
for (i = 1; i < 5; ++i) {
entries[i].data = i;
hlist_add_behind(&entries[i].list, &entries[i-1].list);
}
i = 0;
hlist_for_each_entry_safe(cur, tmp_node, &list, list) {
KUNIT_EXPECT_EQ(test, cur->data, i);
hlist_del(&cur->list);
i++;
}
KUNIT_EXPECT_EQ(test, i, 5);
KUNIT_EXPECT_TRUE(test, hlist_empty(&list));
}
static struct kunit_case hlist_test_cases[] = {
KUNIT_CASE(hlist_test_init),
KUNIT_CASE(hlist_test_unhashed),
KUNIT_CASE(hlist_test_unhashed_lockless),
KUNIT_CASE(hlist_test_del),
KUNIT_CASE(hlist_test_del_init),
KUNIT_CASE(hlist_test_add),
KUNIT_CASE(hlist_test_fake),
KUNIT_CASE(hlist_test_is_singular_node),
KUNIT_CASE(hlist_test_empty),
KUNIT_CASE(hlist_test_move_list),
KUNIT_CASE(hlist_test_entry),
KUNIT_CASE(hlist_test_entry_safe),
KUNIT_CASE(hlist_test_for_each),
KUNIT_CASE(hlist_test_for_each_safe),
KUNIT_CASE(hlist_test_for_each_entry),
KUNIT_CASE(hlist_test_for_each_entry_continue),
KUNIT_CASE(hlist_test_for_each_entry_from),
KUNIT_CASE(hlist_test_for_each_entry_safe),
{},
};
static struct kunit_suite hlist_test_module = {
.name = "hlist",
.test_cases = hlist_test_cases,
};
kunit_test_suites(&list_test_module, &hlist_test_module);
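The `pprev` assertions in the tests above (`b.pprev == &list.first`, `d.pprev == &(a.next)`) hinge on hlist's defining trick: each node stores a pointer to the *slot* that points at it, rather than to the previous node, so deletion is O(1) without knowing the list head. A toy Python model of that invariant (not kernel code; slots are modelled as `(object, attribute)` pairs since Python has no pointers):

```python
# Toy model of hlist's pprev: each node records which slot points at it,
# so hlist_del() can unlink without a reference to the list head.

class Head:
    def __init__(self):
        self.first = None

class Node:
    def __init__(self, name):
        self.name = name
        self.next = None
        self.pprev = None  # (object, attribute) slot that points at us

def slot_set(slot, value):
    obj, attr = slot
    setattr(obj, attr, value)

def hlist_add_head(node, head):
    node.next = head.first
    if head.first:
        head.first.pprev = (node, "next")
    head.first = node
    node.pprev = (head, "first")

def hlist_del(node):
    # Make whatever pointed at us point at our successor instead.
    slot_set(node.pprev, node.next)
    if node.next:
        node.next.pprev = node.pprev

head = Head()
a, b = Node("a"), Node("b")
hlist_add_head(b, head)
hlist_add_head(a, head)   # before: [head] -> a -> b
hlist_del(a)              # now:    [head] -> b
assert head.first is b
assert b.pprev == (head, "first")  # mirrors hlist_test_del's checks
```

This mirrors `hlist_test_del()` exactly: after deleting the head element, the survivor's `pprev` must point back at the head's `first` slot.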
MODULE_LICENSE("GPL v2");
...
@@ -391,7 +391,7 @@ static void krealloc_uaf(struct kunit *test)
	kfree(ptr1);
	KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
	KUNIT_ASSERT_NULL(test, ptr2);
	KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
}
...
CONFIG_KUNIT=y
CONFIG_KFENCE=y
CONFIG_KFENCE_KUNIT_TEST=y
# Additional dependencies.
CONFIG_FTRACE=y
@@ -826,14 +826,6 @@ static void test_exit(struct kunit *test)
	test_cache_destroy();
}

static void register_tracepoints(struct tracepoint *tp, void *ignore)
{
	check_trace_callback_type_console(probe_console);
@@ -847,11 +839,7 @@ static void unregister_tracepoints(struct tracepoint *tp, void *ignore)
	tracepoint_probe_unregister(tp, probe_console, NULL);
}

static int kfence_suite_init(struct kunit_suite *suite)
{
	/*
	 * Because we want to be able to build the test as a module, we need to
@@ -859,18 +847,25 @@ static int __init kfence_test_init(void)
	 * won't work here.
	 */
	for_each_kernel_tracepoint(register_tracepoints, NULL);
	return 0;
}

static void kfence_suite_exit(struct kunit_suite *suite)
{
	for_each_kernel_tracepoint(unregister_tracepoints, NULL);
	tracepoint_synchronize_unregister();
}

static struct kunit_suite kfence_test_suite = {
	.name = "kfence",
	.test_cases = kfence_test_cases,
	.init = test_init,
	.exit = test_exit,
	.suite_init = kfence_suite_init,
	.suite_exit = kfence_suite_exit,
};
kunit_test_suites(&kfence_test_suite);

MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>");
@@ -361,7 +361,7 @@ static void mctp_test_route_input_sk(struct kunit *test)
	} else {
		KUNIT_EXPECT_NE(test, rc, 0);
		skb2 = skb_recv_datagram(sock->sk, 0, 1, &rc);
		KUNIT_EXPECT_NULL(test, skb2);
	}

	__mctp_route_test_fini(test, dev, rt, sock);
@@ -431,7 +431,7 @@ static void mctp_test_route_input_sk_reasm(struct kunit *test)
		skb_free_datagram(sock->sk, skb2);
	} else {
		KUNIT_EXPECT_NULL(test, skb2);
	}

	__mctp_route_test_fini(test, dev, rt, sock);
...
@@ -313,7 +313,7 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
	size = unpack_strdup(puf->e, &string, TEST_STRING_NAME);
	KUNIT_EXPECT_EQ(test, size, 0);
	KUNIT_EXPECT_NULL(test, string);
	KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
}
@@ -409,7 +409,7 @@ static void policy_unpack_test_unpack_u16_chunk_out_of_bounds_1(
	size = unpack_u16_chunk(puf->e, &chunk);
	KUNIT_EXPECT_EQ(test, size, (size_t)0);
	KUNIT_EXPECT_NULL(test, chunk);
	KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, puf->e->end - 1);
}
@@ -431,7 +431,7 @@ static void policy_unpack_test_unpack_u16_chunk_out_of_bounds_2(
	size = unpack_u16_chunk(puf->e, &chunk);
	KUNIT_EXPECT_EQ(test, size, (size_t)0);
	KUNIT_EXPECT_NULL(test, chunk);
	KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, puf->e->start + TEST_U16_OFFSET);
}
...
# This config enables as many tests as possible under UML.
# It is intended for use in continuous integration systems and similar for
# automated testing of as much as possible.
# The config is manually maintained, though it uses KUNIT_ALL_TESTS=y to enable
# any tests whose dependencies are already satisfied. Please feel free to add
# more options if they enable any new tests.
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=y
CONFIG_KUNIT_ALL_TESTS=y
CONFIG_IIO=y
CONFIG_EXT4_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_VIRTIO_UML=y
CONFIG_UML_PCI_OVER_VIRTIO=y
CONFIG_PCI=y
CONFIG_USB4=y
CONFIG_NET=y
CONFIG_MCTP=y
CONFIG_INET=y
CONFIG_MPTCP=y
CONFIG_DAMON=y
CONFIG_DAMON_VADDR=y
CONFIG_DAMON_PADDR=y
CONFIG_DEBUG_FS=y
CONFIG_DAMON_DBGFS=y
CONFIG_SECURITY=y
CONFIG_SECURITY_APPARMOR=y
@@ -47,11 +47,11 @@ class KunitBuildRequest(KunitConfigRequest):
@dataclass
class KunitParseRequest:
	raw_output: Optional[str]
	json: Optional[str]

@dataclass
class KunitExecRequest(KunitParseRequest):
	build_dir: str
	timeout: int
	alltests: bool
	filter_glob: str
@@ -63,8 +63,6 @@ class KunitRequest(KunitExecRequest, KunitBuildRequest):
	pass

def get_kernel_root_path() -> str:
	path = sys.argv[0] if not __file__ else __file__
	parts = os.path.realpath(path).split('tools/testing/kunit')
@@ -126,7 +124,7 @@ def _list_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest)
		lines.pop()

	# Filter out any extraneous non-test output that might have gotten mixed in.
	return [l for l in lines if re.match(r'^[^\s.]+\.[^\s.]+$', l)]
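The `_list_tests()` filter keeps only lines of the form `suite.test` — exactly one dot, no whitespace — which is what drops stray kernel output from the listing (the raw-string change also silences Python's invalid-escape-sequence warning). A quick self-contained sketch of that regex's behaviour:

```python
import re

# Keep only "<suite>.<test>" lines: exactly one dot, no spaces or extra dots,
# mirroring the filter used to strip non-test kernel output from a listing.
pattern = re.compile(r'^[^\s.]+\.[^\s.]+$')

lines = [
    "example.example_simple_test",
    "random kernel noise",   # no dot -> dropped
    "bad .name",             # contains a space -> dropped
    "a.b.c",                 # two dots -> dropped
]
tests = [l for l in lines if pattern.match(l)]
assert tests == ["example.example_simple_test"]
```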
def _suites_from_test_list(tests: List[str]) -> List[str]:
	"""Extracts all the suites from an ordered list of tests."""
@@ -155,6 +153,8 @@ def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest) -
		test_glob = request.filter_glob.split('.', maxsplit=2)[1]
		filter_globs = [g + '.' + test_glob for g in filter_globs]

	metadata = kunit_json.Metadata(arch=linux.arch(), build_dir=request.build_dir, def_config='kunit_defconfig')

	test_counts = kunit_parser.TestCounts()
	exec_time = 0.0
	for i, filter_glob in enumerate(filter_globs):
@@ -167,7 +167,7 @@ def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest) -
			filter_glob=filter_glob,
			build_dir=request.build_dir)

		_, test_result = parse_tests(request, metadata, run_result)
		# run_kernel() doesn't block on the kernel exiting.
		# That only happens after we get the last line of output from `run_result`.
		# So exec_time here actually contains parsing + execution time, which is fine.
@@ -188,10 +188,9 @@ def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest) -
def _map_to_overall_status(test_status: kunit_parser.TestStatus) -> KunitStatus:
	if test_status in (kunit_parser.TestStatus.SUCCESS, kunit_parser.TestStatus.SKIPPED):
		return KunitStatus.SUCCESS
	return KunitStatus.TEST_FAILURE

def parse_tests(request: KunitParseRequest, metadata: kunit_json.Metadata, input_data: Iterable[str]) -> Tuple[KunitResult, kunit_parser.Test]:
	parse_start = time.time()

	test_result = kunit_parser.Test()
@@ -206,8 +205,6 @@ def parse_tests(request: KunitParseRequest, input_data: Iterable[str]) -> Tuple[
		pass
	elif request.raw_output == 'kunit':
		output = kunit_parser.extract_tap_lines(output)
	for line in output:
		print(line.rstrip())
@@ -216,13 +213,16 @@ def parse_tests(request: KunitParseRequest, input_data: Iterable[str]) -> Tuple[
	parse_end = time.time()

	if request.json:
		json_str = kunit_json.get_json_result(
			test=test_result,
			metadata=metadata)
		if request.json == 'stdout':
			print(json_str)
		else:
			with open(request.json, 'w') as f:
				f.write(json_str)
			kunit_parser.print_with_timestamp("Test results stored in %s" %
				os.path.abspath(request.json))

	if test_result.status != kunit_parser.TestStatus.SUCCESS:
		return KunitResult(KunitStatus.TEST_FAILURE, parse_end - parse_start), test_result
@@ -281,10 +281,10 @@ def add_common_opts(parser) -> None:
	parser.add_argument('--build_dir',
			    help='As in the make command, it specifies the build '
			    'directory.',
			    type=str, default='.kunit', metavar='DIR')
	parser.add_argument('--make_options',
			    help='X=Y make option, can be repeated.',
			    action='append', metavar='X=Y')
	parser.add_argument('--alltests',
			    help='Run all KUnit tests through allyesconfig',
			    action='store_true')
@@ -292,11 +292,11 @@ def add_common_opts(parser) -> None:
			    help='Path to Kconfig fragment that enables KUnit tests.'
			    ' If given a directory (e.g. lib/kunit), "/.kunitconfig" '
			    'will get automatically appended.',
			    metavar='PATH')
	parser.add_argument('--kconfig_add',
			    help='Additional Kconfig options to append to the '
			    '.kunitconfig, e.g. CONFIG_KASAN=y. Can be repeated.',
			    action='append', metavar='CONFIG_X=Y')

	parser.add_argument('--arch',
			    help=('Specifies the architecture to run tests under. '
@@ -304,7 +304,7 @@ def add_common_opts(parser) -> None:
				  'string passed to the ARCH make param, '
				  'e.g. i386, x86_64, arm, um, etc. Non-UML '
				  'architectures run on QEMU.'),
			    type=str, default='um', metavar='ARCH')
	parser.add_argument('--cross_compile',
			    help=('Sets make\'s CROSS_COMPILE variable; it should '
@@ -316,18 +316,18 @@ def add_common_opts(parser) -> None:
				  'if you have downloaded the microblaze toolchain '
				  'from the 0-day website to a directory in your '
				  'home directory called `toolchains`).'),
			    metavar='PREFIX')
	parser.add_argument('--qemu_config',
			    help=('Takes a path to a file containing '
				  'a QemuArchParams object.'),
			    type=str, metavar='FILE')

def add_build_opts(parser) -> None:
	parser.add_argument('--jobs',
			    help='As in the make command, "Specifies the number of '
			    'jobs (commands) to run simultaneously."',
			    type=int, default=get_default_jobs(), metavar='N')

def add_exec_opts(parser) -> None:
	parser.add_argument('--timeout',
@@ -336,7 +336,7 @@ def add_exec_opts(parser) -> None:
			    'tests.',
			    type=int,
			    default=300,
			    metavar='SECONDS')
	parser.add_argument('filter_glob',
			    help='Filter which KUnit test suites/tests run at '
			    'boot-time, e.g. list* or list*.*del_test',
@@ -346,24 +346,24 @@ def add_exec_opts(parser) -> None:
			    metavar='filter_glob')
	parser.add_argument('--kernel_args',
			    help='Kernel command-line parameters. May be repeated.',
			    action='append', metavar='')
	parser.add_argument('--run_isolated', help='If set, boot the kernel for each '
			    'individual suite/test. This can be useful for debugging '
			    'a non-hermetic test, one that might pass/fail based on '
			    'what ran before it.',
			    type=str,
			    choices=['suite', 'test'])

def add_parse_opts(parser) -> None:
	parser.add_argument('--raw_output', help='If set don\'t format output from kernel. '
			    'If set to --raw_output=kunit, filters to just KUnit output.',
			    type=str, nargs='?', const='all', default=None, choices=['all', 'kunit'])
	parser.add_argument('--json',
			    nargs='?',
			    help='Stores test results in a JSON, and either '
			    'prints to stdout or saves to file if a '
			    'filename is specified',
			    type=str, const='stdout', default=None, metavar='FILE')

def main(argv, linux=None):
	parser = argparse.ArgumentParser(
@@ -496,16 +496,17 @@ def main(argv, linux=None):
		if result.status != KunitStatus.SUCCESS:
			sys.exit(1)
	elif cli_args.subcommand == 'parse':
		if cli_args.file is None:
			sys.stdin.reconfigure(errors='backslashreplace')  # pytype: disable=attribute-error
			kunit_output = sys.stdin
		else:
			with open(cli_args.file, 'r', errors='backslashreplace') as f:
				kunit_output = f.read().splitlines()
		# We know nothing about how the result was created!
		metadata = kunit_json.Metadata()
		request = KunitParseRequest(raw_output=cli_args.raw_output,
					    json=cli_args.json)
		result, _ = parse_tests(request, metadata, kunit_output)
		if result.status != KunitStatus.SUCCESS:
			sys.exit(1)
	else:
...
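The `--raw_output` and `--json` flags above rely on argparse's `nargs='?'` plus `const`: the bare flag takes the `const` value, an explicit argument overrides it, and an absent flag falls back to `default`. A self-contained sketch of that pattern (standalone; not the kunit.py parser itself):

```python
import argparse

# nargs='?' with const: bare flag -> const, "--flag value" -> value,
# flag absent -> default. This is the style used for --raw_output/--json.
parser = argparse.ArgumentParser()
parser.add_argument('--raw_output', type=str, nargs='?', const='all',
                    default=None, choices=['all', 'kunit'])
parser.add_argument('--json', type=str, nargs='?', const='stdout',
                    default=None, metavar='FILE')

assert parser.parse_args([]).raw_output is None                   # absent
assert parser.parse_args(['--raw_output']).raw_output == 'all'    # bare flag
assert parser.parse_args(['--raw_output=kunit']).raw_output == 'kunit'
assert parser.parse_args(['--json']).json == 'stdout'
assert parser.parse_args(['--json', 'out.json']).json == 'out.json'
```

Adding `choices=['all', 'kunit']` (as this series does) moves the "Unknown --raw_output option" check out of `parse_tests()` and into argparse itself, which rejects bad values before any parsing starts.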
...@@ -6,29 +6,29 @@ ...@@ -6,29 +6,29 @@
# Author: Felix Guo <felixguoxiuping@gmail.com> # Author: Felix Guo <felixguoxiuping@gmail.com>
# Author: Brendan Higgins <brendanhiggins@google.com> # Author: Brendan Higgins <brendanhiggins@google.com>
import collections from dataclasses import dataclass
import re import re
from typing import List, Set from typing import List, Set
CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$' CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$'
CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$' CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'
KconfigEntryBase = collections.namedtuple('KconfigEntryBase', ['name', 'value']) @dataclass(frozen=True)
class KconfigEntry:
class KconfigEntry(KconfigEntryBase): name: str
value: str
def __str__(self) -> str: def __str__(self) -> str:
if self.value == 'n': if self.value == 'n':
return r'# CONFIG_%s is not set' % (self.name) return f'# CONFIG_{self.name} is not set'
else: return f'CONFIG_{self.name}={self.value}'
return r'CONFIG_%s=%s' % (self.name, self.value)
class KconfigParseError(Exception): class KconfigParseError(Exception):
"""Error parsing Kconfig defconfig or .config.""" """Error parsing Kconfig defconfig or .config."""
class Kconfig(object): class Kconfig:
"""Represents defconfig or .config specified using the Kconfig language.""" """Represents defconfig or .config specified using the Kconfig language."""
def __init__(self) -> None: def __init__(self) -> None:
...@@ -48,7 +48,7 @@ class Kconfig(object): ...@@ -48,7 +48,7 @@ class Kconfig(object):
if a.value == 'n': if a.value == 'n':
continue continue
return False return False
elif a.value != b: if a.value != b:
return False return False
return True return True
...@@ -90,6 +90,5 @@ def parse_from_string(blob: str) -> Kconfig: ...@@ -90,6 +90,5 @@ def parse_from_string(blob: str) -> Kconfig:
if line[0] == '#': if line[0] == '#':
continue continue
else:
raise KconfigParseError('Failed to parse: ' + line) raise KconfigParseError('Failed to parse: ' + line)
return kconfig return kconfig
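The switch from a `namedtuple` subclass to `@dataclass(frozen=True)` keeps `KconfigEntry` hashable (usable in sets) while giving it typed fields and a cleaner f-string `__str__` that round-trips to `.config` syntax. A standalone sketch of the rewritten class and the properties it preserves:

```python
from dataclasses import dataclass

# Frozen dataclass version of KconfigEntry: immutable, hashable, and
# str() emits .config syntax, including the "is not set" form for 'n'.
@dataclass(frozen=True)
class KconfigEntry:
    name: str
    value: str

    def __str__(self) -> str:
        if self.value == 'n':
            return f'# CONFIG_{self.name} is not set'
        return f'CONFIG_{self.name}={self.value}'

assert str(KconfigEntry('KUNIT', 'y')) == 'CONFIG_KUNIT=y'
assert str(KconfigEntry('KASAN', 'n')) == '# CONFIG_KASAN is not set'
# frozen=True keeps entries hashable, so deduplication in sets still works:
assert len({KconfigEntry('KUNIT', 'y'), KconfigEntry('KUNIT', 'y')}) == 1
```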
...@@ -6,60 +6,58 @@ ...@@ -6,60 +6,58 @@
# Copyright (C) 2020, Google LLC. # Copyright (C) 2020, Google LLC.
# Author: Heidi Fahim <heidifahim@google.com> # Author: Heidi Fahim <heidifahim@google.com>
from dataclasses import dataclass
import json import json
import os from typing import Any, Dict
import kunit_parser
from kunit_parser import Test, TestStatus from kunit_parser import Test, TestStatus
from typing import Any, Dict, Optional
@dataclass
class Metadata:
"""Stores metadata about this run to include in get_json_result()."""
arch: str = ''
def_config: str = ''
build_dir: str = ''
JsonObj = Dict[str, Any] JsonObj = Dict[str, Any]
def _get_group_json(test: Test, def_config: str, _status_map: Dict[TestStatus, str] = {
build_dir: Optional[str]) -> JsonObj: TestStatus.SUCCESS: "PASS",
TestStatus.SKIPPED: "SKIP",
TestStatus.TEST_CRASHED: "ERROR",
}
def _get_group_json(test: Test, common_fields: JsonObj) -> JsonObj:
sub_groups = [] # List[JsonObj] sub_groups = [] # List[JsonObj]
test_cases = [] # List[JsonObj] test_cases = [] # List[JsonObj]
for subtest in test.subtests: for subtest in test.subtests:
if len(subtest.subtests): if subtest.subtests:
sub_group = _get_group_json(subtest, def_config, sub_group = _get_group_json(subtest, common_fields)
build_dir)
sub_groups.append(sub_group) sub_groups.append(sub_group)
else: continue
test_case = {"name": subtest.name, "status": "FAIL"} status = _status_map.get(subtest.status, "FAIL")
if subtest.status == TestStatus.SUCCESS: test_cases.append({"name": subtest.name, "status": status})
test_case["status"] = "PASS"
elif subtest.status == TestStatus.SKIPPED:
test_case["status"] = "SKIP"
elif subtest.status == TestStatus.TEST_CRASHED:
test_case["status"] = "ERROR"
test_cases.append(test_case)
test_group = { test_group = {
"name": test.name, "name": test.name,
"arch": "UM",
"defconfig": def_config,
"build_environment": build_dir,
"sub_groups": sub_groups, "sub_groups": sub_groups,
"test_cases": test_cases, "test_cases": test_cases,
}
test_group.update(common_fields)
return test_group
def get_json_result(test: Test, metadata: Metadata) -> str:
common_fields = {
"arch": metadata.arch,
"defconfig": metadata.def_config,
"build_environment": metadata.build_dir,
"lab_name": None, "lab_name": None,
"kernel": None, "kernel": None,
"job": None, "job": None,
"git_branch": "kselftest", "git_branch": "kselftest",
} }
return test_group
def get_json_result(test: Test, def_config: str, test_group = _get_group_json(test, common_fields)
build_dir: Optional[str], json_path: str) -> str:
test_group = _get_group_json(test, def_config, build_dir)
test_group["name"] = "KUnit Test Group" test_group["name"] = "KUnit Test Group"
json_obj = json.dumps(test_group, indent=4) return json.dumps(test_group, indent=4)
if json_path != 'stdout':
with open(json_path, 'w') as result_path:
result_path.write(json_obj)
root = __file__.split('tools/testing/kunit/')[0]
kunit_parser.print_with_timestamp(
"Test results stored in %s" %
os.path.join(root, result_path.name))
return json_obj
@@ -11,6 +11,7 @@ import importlib.util
import logging
import subprocess
import os
import shlex
import shutil
import signal
import threading
@@ -29,11 +30,6 @@ OUTFILE_PATH = 'test.log'
ABS_TOOL_PATH = os.path.abspath(os.path.dirname(__file__))
QEMU_CONFIGS_DIR = os.path.join(ABS_TOOL_PATH, 'qemu_configs')

class ConfigError(Exception):
	"""Represents an error trying to configure the Linux kernel."""
@@ -42,7 +38,7 @@ class BuildError(Exception):
	"""Represents an error trying to build the Linux kernel."""

class LinuxSourceTreeOperations:
	"""An abstraction over command line operations performed on a source tree."""

	def __init__(self, linux_arch: str, cross_compile: Optional[str]):
@@ -57,20 +53,18 @@ class LinuxSourceTreeOperations(object):
		except subprocess.CalledProcessError as e:
			raise ConfigError(e.output.decode())

	def make_arch_qemuconfig(self, base_kunitconfig: kunit_config.Kconfig) -> None:
		pass

	def make_allyesconfig(self, build_dir: str, make_options) -> None:
		raise ConfigError('Only the "um" arch is supported for alltests')

	def make_olddefconfig(self, build_dir: str, make_options) -> None:
		command = ['make', 'ARCH=' + self._linux_arch, 'O=' + build_dir, 'olddefconfig']
		if self._cross_compile:
			command += ['CROSS_COMPILE=' + self._cross_compile]
		if make_options:
			command.extend(make_options)
		print('Populating config with:\n$', ' '.join(command))
		try:
			subprocess.check_output(command, stderr=subprocess.STDOUT)
@@ -79,14 +73,12 @@ class LinuxSourceTreeOperations(object):
		except subprocess.CalledProcessError as e:
			raise ConfigError(e.output.decode())

	def make(self, jobs, build_dir: str, make_options) -> None:
		command = ['make', 'ARCH=' + self._linux_arch, 'O=' + build_dir, '--jobs=' + str(jobs)]
		if make_options:
			command.extend(make_options)
		if self._cross_compile:
			command += ['CROSS_COMPILE=' + self._cross_compile]
		print('Building with:\n$', ' '.join(command))
		try:
			proc = subprocess.Popen(command,
@@ -127,16 +119,17 @@ class LinuxSourceTreeOperationsQemu(LinuxSourceTreeOperations):
			'-nodefaults',
			'-m', '1024',
			'-kernel', kernel_path,
			'-append', ' '.join(params + [self._kernel_command_line]),
			'-no-reboot',
			'-nographic',
			'-serial', 'stdio'] + self._extra_qemu_params
		# Note: shlex.join() does what we want, but requires python 3.8+.
		print('Running tests with:\n$', ' '.join(shlex.quote(arg) for arg in qemu_command))
		return subprocess.Popen(qemu_command,
					stdin=subprocess.PIPE,
					stdout=subprocess.PIPE,
					stderr=subprocess.STDOUT,
					text=True, errors='backslashreplace')
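With `shell=True` gone, the argv list goes to QEMU directly and quoting is only needed for the human-readable echo. A sketch of why `shlex.quote` matters there (the paths and `-append` string are made-up examples):

```python
import shlex

# Hypothetical argv: a build path with a space and a multi-word -append value.
qemu_command = ['qemu-system-x86_64',
                '-kernel', '/tmp/build dir/bzImage',
                '-append', 'console=ttyS0 kunit.enable=1']

# Without per-argument quoting, the printed command line would be ambiguous
# and could not be safely copy-pasted into a shell.
printable = ' '.join(shlex.quote(arg) for arg in qemu_command)
print(printable)

# shlex.split() round-trips the quoted string back to the original argv.
assert shlex.split(printable) == qemu_command
```

Passing the list directly to `Popen` also avoids the escaping bugs that motivated the old hand-rolled `'\'' + ... + '\''` quoting around `-append`.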
class LinuxSourceTreeOperationsUml(LinuxSourceTreeOperations):
	"""An abstraction over command line operations performed on a source tree."""
@@ -144,14 +137,12 @@ class LinuxSourceTreeOperationsUml(LinuxSourceTreeOperations):
	def __init__(self, cross_compile=None):
		super().__init__(linux_arch='um', cross_compile=cross_compile)

	def make_allyesconfig(self, build_dir: str, make_options) -> None:
		kunit_parser.print_with_timestamp(
			'Enabling all CONFIGs for UML...')
		command = ['make', 'ARCH=um', 'O=' + build_dir, 'allyesconfig']
		if make_options:
			command.extend(make_options)
		process = subprocess.Popen(
			command,
			stdout=subprocess.DEVNULL,
@@ -168,30 +159,30 @@ class LinuxSourceTreeOperationsUml(LinuxSourceTreeOperations):
	def start(self, params: List[str], build_dir: str) -> subprocess.Popen:
		"""Runs the Linux UML binary. Must be named 'linux'."""
		linux_bin = os.path.join(build_dir, 'linux')
		return subprocess.Popen([linux_bin] + params,
					stdin=subprocess.PIPE,
					stdout=subprocess.PIPE,
					stderr=subprocess.STDOUT,
					text=True, errors='backslashreplace')

def get_kconfig_path(build_dir: str) -> str:
	return os.path.join(build_dir, KCONFIG_PATH)

def get_kunitconfig_path(build_dir: str) -> str:
	return os.path.join(build_dir, KUNITCONFIG_PATH)

def get_old_kunitconfig_path(build_dir: str) -> str:
	return os.path.join(build_dir, OLD_KUNITCONFIG_PATH)

def get_outfile_path(build_dir: str) -> str:
	return os.path.join(build_dir, OUTFILE_PATH)

def get_source_tree_ops(arch: str, cross_compile: Optional[str]) -> LinuxSourceTreeOperations:
	config_path = os.path.join(QEMU_CONFIGS_DIR, arch + '.py')
	if arch == 'um':
		return LinuxSourceTreeOperationsUml(cross_compile=cross_compile)
	if os.path.isfile(config_path):
		return get_source_tree_ops_from_qemu_config(config_path, cross_compile)[1]

	options = [f[:-3] for f in os.listdir(QEMU_CONFIGS_DIR) if f.endswith('.py')]
@@ -222,7 +213,7 @@ def get_source_tree_ops_from_qemu_config(config_path: str,
	return params.linux_arch, LinuxSourceTreeOperationsQemu(
		params, cross_compile=cross_compile)

class LinuxSourceTree:
	"""Represents a Linux kernel source tree with KUnit tests."""

	def __init__(
@@ -260,6 +251,8 @@ class LinuxSourceTree(object):
			kconfig = kunit_config.parse_from_string('\n'.join(kconfig_add))
			self._kconfig.merge_in_entries(kconfig)

	def arch(self) -> str:
		return self._arch

	def clean(self) -> bool:
		try:
@@ -269,7 +262,7 @@ class LinuxSourceTree(object):
			return False
		return True

	def validate_config(self, build_dir: str) -> bool:
		kconfig_path = get_kconfig_path(build_dir)
		validated_kconfig = kunit_config.parse_file(kconfig_path)
		if self._kconfig.is_subset_of(validated_kconfig):
@@ -284,7 +277,7 @@ class LinuxSourceTree(object):
		logging.error(message)
		return False

	def build_config(self, build_dir: str, make_options) -> bool:
		kconfig_path = get_kconfig_path(build_dir)
		if build_dir and not os.path.exists(build_dir):
			os.mkdir(build_dir)
@@ -312,7 +305,7 @@ class LinuxSourceTree(object):
		old_kconfig = kunit_config.parse_file(old_path)
		return old_kconfig.entries() != self._kconfig.entries()

	def build_reconfig(self, build_dir: str, make_options) -> bool:
		"""Creates a new .config if it is not a subset of the .kunitconfig."""
		kconfig_path = get_kconfig_path(build_dir)
		if not os.path.exists(kconfig_path):
@@ -327,7 +320,7 @@ class LinuxSourceTree(object):
			os.remove(kconfig_path)
		return self.build_config(build_dir, make_options)

	def build_kernel(self, alltests, jobs, build_dir: str, make_options) -> bool:
		try:
			if alltests:
				self._ops.make_allyesconfig(build_dir, make_options)
@@ -375,6 +368,6 @@ class LinuxSourceTree(object):
		waiter.join()
		subprocess.call(['stty', 'sane'])

	def signal_handler(self, unused_sig, unused_frame) -> None:
		logging.error('Build interruption occurred. Cleaning console.')
		subprocess.call(['stty', 'sane'])
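Both `start()` methods open the child with `text=True, errors='backslashreplace'`, so stray non-UTF-8 bytes in kernel output become visible escape sequences instead of raising mid-parse. The same decoding behavior, sketched directly on bytes:

```python
# Kernel/console output is not guaranteed to be valid UTF-8.
raw = b'ok 1 - example\xff\xfe test\n'

# Strict decoding (the default) raises on the first bad byte:
try:
    raw.decode('utf-8')
    print('decoded cleanly')
except UnicodeDecodeError:
    print('strict decode fails')

# 'backslashreplace' keeps going and renders the bad bytes as escapes,
# which is what the parser sees when reading Popen's text-mode stdout.
print(raw.decode('utf-8', errors='backslashreplace'))
```

This keeps the line-oriented parser robust: a corrupted console line degrades into odd-looking text rather than killing the whole run.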
@@ -11,13 +11,13 @@
from __future__ import annotations
import re
import sys
import datetime
from enum import Enum, auto
from typing import Iterable, Iterator, List, Optional, Tuple

class Test:
	"""
	A class to represent a test parsed from KTAP results. All KTAP
	results within a test log are stored in a main Test object as
@@ -45,10 +45,8 @@ class Test(object):
	def __str__(self) -> str:
		"""Returns string representation of a Test class object."""
		return (f'Test({self.status}, {self.name}, {self.expected_count}, '
			f'{self.subtests}, {self.log}, {self.counts})')

	def __repr__(self) -> str:
		"""Returns string representation of a Test class object."""
@@ -57,7 +55,7 @@ class Test(object):
	def add_error(self, error_message: str) -> None:
		"""Records an error that occurred while parsing this test."""
		self.counts.errors += 1
		print_with_timestamp(red('[ERROR]') + f' Test: {self.name}: {error_message}')

class TestStatus(Enum):
	"""An enumeration class to represent the status of a test."""
@@ -91,13 +89,12 @@ class TestCounts:
		self.errors = 0

	def __str__(self) -> str:
		"""Returns the string representation of a TestCounts object."""
		statuses = [('passed', self.passed), ('failed', self.failed),
			('crashed', self.crashed), ('skipped', self.skipped),
			('errors', self.errors)]
		return f'Ran {self.total()} tests: ' + \
			', '.join(f'{s}: {n}' for s, n in statuses if n > 0)

	def total(self) -> int:
		"""Returns the total number of test cases within a test
@@ -128,31 +125,19 @@ class TestCounts:
		"""
		if self.total() == 0:
			return TestStatus.NO_TESTS
		if self.crashed:
			# Crashes should take priority.
			return TestStatus.TEST_CRASHED
		if self.failed:
			return TestStatus.FAILURE
		if self.passed:
			# No failures or crashes, looks good!
			return TestStatus.SUCCESS
		# We have only skipped tests.
		return TestStatus.SKIPPED

	def add_status(self, status: TestStatus) -> None:
		"""Increments the count for `status`."""
		if status == TestStatus.SUCCESS:
			self.passed += 1
		elif status == TestStatus.FAILURE:
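The flattened `get_status()` above encodes a fixed precedence: crashed > failed > passed > skipped, with `NO_TESTS` when nothing ran. A minimal stand-in showing the same precedence (the names mirror the real `TestCounts`, but this is a sketch, not the kernel code):

```python
from enum import Enum, auto

class Status(Enum):  # stand-in for kunit_parser.TestStatus
    SUCCESS = auto()
    FAILURE = auto()
    SKIPPED = auto()
    TEST_CRASHED = auto()
    NO_TESTS = auto()

class Counts:
    def __init__(self, passed=0, failed=0, crashed=0, skipped=0):
        self.passed, self.failed = passed, failed
        self.crashed, self.skipped = crashed, skipped

    def total(self):
        return self.passed + self.failed + self.crashed + self.skipped

    def get_status(self):
        if self.total() == 0:
            return Status.NO_TESTS
        if self.crashed:             # crashes take priority
            return Status.TEST_CRASHED
        if self.failed:              # then any failure
            return Status.FAILURE
        if self.passed:              # no failures or crashes
            return Status.SUCCESS
        return Status.SKIPPED        # only skipped tests remain

print(Counts(passed=3, crashed=1).get_status().name)  # TEST_CRASHED
print(Counts(passed=3, skipped=2).get_status().name)  # SUCCESS
print(Counts(skipped=2).get_status().name)            # SKIPPED
```

A single crash therefore dominates an otherwise-green suite, which matches how the summary line colors the run.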
@@ -282,11 +267,9 @@ def check_version(version_num: int, accepted_versions: List[int],
	test - Test object for current test being parsed
	"""
	if version_num < min(accepted_versions):
		test.add_error(f'{version_type} version lower than expected!')
	elif version_num > max(accepted_versions):
		test.add_error(f'{version_type} version higher than expected!')

def parse_ktap_header(lines: LineStream, test: Test) -> bool:
	"""
@@ -396,7 +379,7 @@ def peek_test_name_match(lines: LineStream, test: Test) -> bool:
	if not match:
		return False
	name = match.group(4)
	return name == test.name

def parse_test_result(lines: LineStream, test: Test,
		expected_num: int) -> bool:
@@ -439,8 +422,7 @@ def parse_test_result(lines: LineStream, test: Test,
	# Check test num
	num = int(match.group(2))
	if num != expected_num:
		test.add_error(f'Expected test number {expected_num} but found {num}')

	# Set status of test object
	status = match.group(1)
@@ -474,26 +456,6 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
		log.append(lines.pop())
	return log

# Printing helper methods:
@@ -503,14 +465,20 @@ RESET = '\033[0;0m'

def red(text: str) -> str:
	"""Returns inputted string with red color code."""
	if not sys.stdout.isatty():
		return text
	return '\033[1;31m' + text + RESET

def yellow(text: str) -> str:
	"""Returns inputted string with yellow color code."""
	if not sys.stdout.isatty():
		return text
	return '\033[1;33m' + text + RESET

def green(text: str) -> str:
	"""Returns inputted string with green color code."""
	if not sys.stdout.isatty():
		return text
	return '\033[1;32m' + text + RESET

ANSI_LEN = len(red(''))
@@ -542,7 +510,7 @@ def format_test_divider(message: str, len_message: int) -> str:
	# calculate number of dashes for each side of the divider
	len_1 = int(difference / 2)
	len_2 = difference - len_1
	return ('=' * len_1) + f' {message} ' + ('=' * len_2)
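The divider centers `message` in a fixed run of `=` signs, handing the odd leftover character to the right side; `len_message` is passed separately because ANSI color codes inflate `len()` of the visible text. A runnable sketch of that arithmetic (the width of 45 is an illustrative constant, not the tool's actual value):

```python
def format_test_divider(message: str, len_message: int, width: int = 45) -> str:
    # Characters available for '=' once the message and its two
    # surrounding spaces are accounted for.
    difference = width - len_message - 2
    len_1 = difference // 2          # left side
    len_2 = difference - len_1       # right side absorbs any odd remainder
    return ('=' * len_1) + f' {message} ' + ('=' * len_2)

line = format_test_divider('example_suite', len('example_suite'))
print(line)
```

With a 13-character message, each side gets 15 `=` signs and the line comes out exactly `width` characters long.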
def print_test_header(test: Test) -> None:
	"""
@@ -558,20 +526,13 @@ def print_test_header(test: Test) -> None:
	message = test.name
	if test.expected_count:
		if test.expected_count == 1:
			message += ' (1 subtest)'
		else:
			message += f' ({test.expected_count} subtests)'
	print_with_timestamp(format_test_divider(message, len(message)))

def print_log(log: Iterable[str]) -> None:
	"""Prints all strings in saved log for test in yellow."""
	for m in log:
		print_with_timestamp(yellow(m))
@@ -590,17 +551,16 @@ def format_test_result(test: Test) -> str:
		String containing formatted test result
	"""
	if test.status == TestStatus.SUCCESS:
		return green('[PASSED] ') + test.name
	if test.status == TestStatus.SKIPPED:
		return yellow('[SKIPPED] ') + test.name
	if test.status == TestStatus.NO_TESTS:
		return yellow('[NO TESTS RUN] ') + test.name
	if test.status == TestStatus.TEST_CRASHED:
		print_log(test.log)
		return red('[CRASHED] ') + test.name
	print_log(test.log)
	return red('[FAILED] ') + test.name

def print_test_result(test: Test) -> None:
	"""
@@ -644,24 +604,11 @@ def print_summary_line(test: Test) -> None:
	"""
	if test.status == TestStatus.SUCCESS:
		color = green
	elif test.status in (TestStatus.SKIPPED, TestStatus.NO_TESTS):
		color = yellow
	else:
		color = red
	print_with_timestamp(color(f'Testing complete. {test.counts}'))

# Other methods:
@@ -675,7 +622,6 @@ def bubble_up_test_results(test: Test) -> None:
	Parameters:
	test - Test object for current test being parsed
	"""
	subtests = test.subtests
	counts = test.counts
	status = test.status
@@ -789,6 +735,9 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
	# Check for there being no tests
	if parent_test and len(subtests) == 0:
		# Don't override a bad status if this test had one reported.
		# Assumption: no subtests means CRASHED is from Test.__init__()
		if test.status in (TestStatus.TEST_CRASHED, TestStatus.SUCCESS):
			test.status = TestStatus.NO_TESTS
			test.add_error('0 tests run!')
@@ -805,7 +754,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
def parse_run_tests(kernel_output: Iterable[str]) -> Test:
	"""
	Using kernel output, extract KTAP lines, parse the lines for test
	results and print condensed test results and summary line.

	Parameters:
	kernel_output - Iterable object contains lines of kernel output
@@ -817,7 +766,8 @@ def parse_run_tests(kernel_output: Iterable[str]) -> Test:
	lines = extract_tap_lines(kernel_output)
	test = Test()
	if not lines:
		test.name = '<missing>'
		test.add_error('could not find any KTAP output!')
		test.status = TestStatus.FAILURE_TO_PARSE_TESTS
	else:
		test = parse_test(lines, 0, [])
...
...@@ -226,19 +226,10 @@ class KUnitParserTest(unittest.TestCase): ...@@ -226,19 +226,10 @@ class KUnitParserTest(unittest.TestCase):
with open(crash_log) as file: with open(crash_log) as file:
result = kunit_parser.parse_run_tests( result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines())) kunit_parser.extract_tap_lines(file.readlines()))
print_mock.assert_any_call(StrContains('invalid KTAP input!')) print_mock.assert_any_call(StrContains('could not find any KTAP output!'))
print_mock.stop() print_mock.stop()
self.assertEqual(0, len(result.subtests)) self.assertEqual(0, len(result.subtests))
def test_crashed_test(self):
crashed_log = test_data_path('test_is_test_passed-crash.log')
with open(crashed_log) as file:
result = kunit_parser.parse_run_tests(
file.readlines())
self.assertEqual(
kunit_parser.TestStatus.TEST_CRASHED,
result.status)
def test_skipped_test(self): def test_skipped_test(self):
skipped_log = test_data_path('test_skip_tests.log') skipped_log = test_data_path('test_skip_tests.log')
with open(skipped_log) as file: with open(skipped_log) as file:
...@@ -260,7 +251,7 @@ class KUnitParserTest(unittest.TestCase): ...@@ -260,7 +251,7 @@ class KUnitParserTest(unittest.TestCase):
def test_ignores_hyphen(self): def test_ignores_hyphen(self):
hyphen_log = test_data_path('test_strip_hyphen.log') hyphen_log = test_data_path('test_strip_hyphen.log')
file = open(hyphen_log) with open(hyphen_log) as file:
result = kunit_parser.parse_run_tests(file.readlines()) result = kunit_parser.parse_run_tests(file.readlines())
# A skipped test does not fail the whole suite. # A skipped test does not fail the whole suite.
...@@ -356,7 +347,7 @@ class LineStreamTest(unittest.TestCase): ...@@ -356,7 +347,7 @@ class LineStreamTest(unittest.TestCase):
called_times = 0 called_times = 0
def generator(): def generator():
nonlocal called_times nonlocal called_times
for i in range(1,5): for _ in range(1,5):
called_times += 1 called_times += 1
yield called_times, str(called_times) yield called_times, str(called_times)
...@@ -468,9 +459,7 @@ class KUnitJsonTest(unittest.TestCase): ...@@ -468,9 +459,7 @@ class KUnitJsonTest(unittest.TestCase):
test_result = kunit_parser.parse_run_tests(file) test_result = kunit_parser.parse_run_tests(file)
json_obj = kunit_json.get_json_result( json_obj = kunit_json.get_json_result(
test=test_result, test=test_result,
def_config='kunit_defconfig', metadata=kunit_json.Metadata())
build_dir=None,
json_path='stdout')
return json.loads(json_obj) return json.loads(json_obj)
def test_failed_test_json(self):
@@ -480,10 +469,10 @@ class KUnitJsonTest(unittest.TestCase):
result["sub_groups"][1]["test_cases"][0])
def test_crashed_test_json(self):
-result = self._json_for('test_is_test_passed-crash.log')
+result = self._json_for('test_kernel_panic_interrupt.log')
self.assertEqual(
-{'name': 'example_simple_test', 'status': 'ERROR'},
-result["sub_groups"][1]["test_cases"][0])
+{'name': '', 'status': 'ERROR'},
+result["sub_groups"][2]["test_cases"][1])
def test_skipped_test_json(self):
result = self._json_for('test_skip_tests.log')
@@ -559,12 +548,13 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(e.exception.code, 1)
self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
-self.print_mock.assert_any_call(StrContains('invalid KTAP input!'))
+self.print_mock.assert_any_call(StrContains('could not find any KTAP output!'))
def test_exec_no_tests(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=['TAP version 14', '1..0'])
with self.assertRaises(SystemExit) as e:
kunit.main(['run'], self.linux_source_mock)
self.assertEqual(e.exception.code, 1)
self.linux_source_mock.run_kernel.assert_called_once_with(
args=None, build_dir='.kunit', filter_glob='', timeout=300)
self.print_mock.assert_any_call(StrContains(' 0 tests run!'))
@@ -595,6 +585,12 @@ class KUnitMainTest(unittest.TestCase):
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
def test_run_raw_output_invalid(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
with self.assertRaises(SystemExit) as e:
kunit.main(['run', '--raw_output=invalid'], self.linux_source_mock)
self.assertNotEqual(e.exception.code, 0)
def test_run_raw_output_does_not_take_positional_args(self):
# --raw_output is a string flag, but we don't want it to consume
# any positional arguments, only ones after an '='
@@ -692,7 +688,7 @@ class KUnitMainTest(unittest.TestCase):
self.linux_source_mock.run_kernel.return_value = ['TAP version 14', 'init: random output'] + want
got = kunit._list_tests(self.linux_source_mock,
-kunit.KunitExecRequest(None, '.kunit', None, 300, False, 'suite*', None, 'suite'))
+kunit.KunitExecRequest(None, None, '.kunit', 300, False, 'suite*', None, 'suite'))
self.assertEqual(got, want)
# Should respect the user's filter glob when listing tests.
@@ -707,7 +703,7 @@ class KUnitMainTest(unittest.TestCase):
# Should respect the user's filter glob when listing tests.
mock_tests.assert_called_once_with(mock.ANY,
-kunit.KunitExecRequest(None, '.kunit', None, 300, False, 'suite*.test*', None, 'suite'))
+kunit.KunitExecRequest(None, None, '.kunit', 300, False, 'suite*.test*', None, 'suite'))
self.linux_source_mock.run_kernel.assert_has_calls([
mock.call(args=None, build_dir='.kunit', filter_glob='suite.test*', timeout=300),
mock.call(args=None, build_dir='.kunit', filter_glob='suite2.test*', timeout=300),
@@ -720,7 +716,7 @@ class KUnitMainTest(unittest.TestCase):
# Should respect the user's filter glob when listing tests.
mock_tests.assert_called_once_with(mock.ANY,
-kunit.KunitExecRequest(None, '.kunit', None, 300, False, 'suite*', None, 'test'))
+kunit.KunitExecRequest(None, None, '.kunit', 300, False, 'suite*', None, 'test'))
self.linux_source_mock.run_kernel.assert_has_calls([
mock.call(args=None, build_dir='.kunit', filter_glob='suite.test1', timeout=300),
mock.call(args=None, build_dir='.kunit', filter_glob='suite.test2', timeout=300),
...
@@ -5,12 +5,15 @@
# Copyright (C) 2021, Google LLC.
# Author: Brendan Higgins <brendanhiggins@google.com>
-from collections import namedtuple
+from dataclasses import dataclass
+from typing import List
-QemuArchParams = namedtuple('QemuArchParams', ['linux_arch',
-'kconfig',
-'qemu_arch',
-'kernel_path',
-'kernel_command_line',
-'extra_qemu_params'])
+@dataclass(frozen=True)
+class QemuArchParams:
+  linux_arch: str
+  kconfig: str
+  qemu_arch: str
+  kernel_path: str
+  kernel_command_line: str
+  extra_qemu_params: List[str]
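The hunk above replaces the `QemuArchParams` namedtuple with a frozen dataclass, keeping namedtuple-style immutability while gaining per-field type annotations (notably `extra_qemu_params: List[str]`). A standalone sketch of the behavior, using example field values rather than any real arch config:

```python
from dataclasses import dataclass, FrozenInstanceError
from typing import List

@dataclass(frozen=True)
class QemuArchParams:
    linux_arch: str
    kconfig: str
    qemu_arch: str
    kernel_path: str
    kernel_command_line: str
    extra_qemu_params: List[str]

# Example values only; not taken from a real qemu_config file.
params = QemuArchParams(
    linux_arch='x86_64',
    kconfig='CONFIG_SERIAL_8250=y',
    qemu_arch='x86_64',
    kernel_path='arch/x86/boot/bzImage',
    kernel_command_line='console=ttyS0',
    extra_qemu_params=[])

# frozen=True makes field assignment raise, like a namedtuple.
try:
    params.qemu_arch = 'i386'
except FrozenInstanceError:
    print('immutable')  # prints "immutable"
```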
@@ -7,4 +7,4 @@ CONFIG_SERIAL_8250_CONSOLE=y''',
qemu_arch='alpha',
kernel_path='arch/alpha/boot/vmlinux',
kernel_command_line='console=ttyS0',
-extra_qemu_params=[''])
+extra_qemu_params=[])
@@ -10,4 +10,4 @@ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y''',
qemu_arch='arm',
kernel_path='arch/arm/boot/zImage',
kernel_command_line='console=ttyAMA0',
-extra_qemu_params=['-machine virt'])
+extra_qemu_params=['-machine', 'virt'])
@@ -9,4 +9,4 @@ CONFIG_SERIAL_AMBA_PL011_CONSOLE=y''',
qemu_arch='aarch64',
kernel_path='arch/arm64/boot/Image.gz',
kernel_command_line='console=ttyAMA0',
-extra_qemu_params=['-machine virt', '-cpu cortex-a57'])
+extra_qemu_params=['-machine', 'virt', '-cpu', 'cortex-a57'])
@@ -4,7 +4,7 @@ QEMU_ARCH = QemuArchParams(linux_arch='i386',
kconfig='''
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y''',
-qemu_arch='x86_64',
+qemu_arch='i386',
kernel_path='arch/x86/boot/bzImage',
kernel_command_line='console=ttyS0',
-extra_qemu_params=[''])
+extra_qemu_params=[])
@@ -9,4 +9,4 @@ CONFIG_HVC_CONSOLE=y''',
qemu_arch='ppc64',
kernel_path='vmlinux',
kernel_command_line='console=ttyS0',
-extra_qemu_params=['-M pseries', '-cpu power8'])
+extra_qemu_params=['-M', 'pseries', '-cpu', 'power8'])
@@ -21,11 +21,12 @@ CONFIG_SOC_VIRT=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_RISCV_SBI_V01=y
CONFIG_SERIAL_EARLYCON_RISCV_SBI=y''',
qemu_arch='riscv64',
kernel_path='arch/riscv/boot/Image',
kernel_command_line='console=ttyS0',
extra_qemu_params=[
-'-machine virt',
-'-cpu rv64',
-'-bios opensbi-riscv64-generic-fw_dynamic.bin'])
+'-machine', 'virt',
+'-cpu', 'rv64',
+'-bios', 'opensbi-riscv64-generic-fw_dynamic.bin'])
@@ -10,5 +10,5 @@ CONFIG_MODULES=y''',
kernel_path='arch/s390/boot/bzImage',
kernel_command_line='console=ttyS0',
extra_qemu_params=[
-'-machine s390-ccw-virtio',
-'-cpu qemu',])
+'-machine', 's390-ccw-virtio',
+'-cpu', 'qemu',])
@@ -7,4 +7,4 @@ CONFIG_SERIAL_8250_CONSOLE=y''',
qemu_arch='sparc',
kernel_path='arch/sparc/boot/zImage',
kernel_command_line='console=ttyS0 mem=256M',
-extra_qemu_params=['-m 256'])
+extra_qemu_params=['-m', '256'])
@@ -7,4 +7,4 @@ CONFIG_SERIAL_8250_CONSOLE=y''',
qemu_arch='x86_64',
kernel_path='arch/x86/boot/bzImage',
kernel_command_line='console=ttyS0',
-extra_qemu_params=[''])
+extra_qemu_params=[])
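The repeated `extra_qemu_params` hunks above split each option string into separate list elements because kunit_tool now execs QEMU directly rather than through a shell: with `shell=False`, each list element becomes exactly one argv entry, so `'-machine virt'` would reach QEMU as a single malformed argument. A standalone sketch (using a Python child process in place of QEMU to show the argv it receives):

```python
import subprocess
import sys

# A tiny child process that just reports its argv.
child = 'import sys; print(sys.argv[1:])'

# One string element -> one argv entry containing a space.
merged = subprocess.run(
    [sys.executable, '-c', child, '-machine virt'],
    capture_output=True, text=True).stdout.strip()
print(merged)   # ['-machine virt']

# Split elements -> two argv entries, which is what QEMU expects.
split = subprocess.run(
    [sys.executable, '-c', child, '-machine', 'virt'],
    capture_output=True, text=True).stdout.strip()
print(split)    # ['-machine', 'virt']
```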
@@ -14,7 +14,7 @@ import shutil
import subprocess
import sys
import textwrap
-from typing import Dict, List, Sequence, Tuple
+from typing import Dict, List, Sequence
ABS_TOOL_PATH = os.path.abspath(os.path.dirname(__file__))
TIMEOUT = datetime.timedelta(minutes=5).total_seconds()
...
printk: console [tty0] enabled
printk: console [mc-1] enabled
TAP version 14
1..2
# Subtest: sysctl_test
1..8
# sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
ok 1 - sysctl_test_dointvec_null_tbl_data
# sysctl_test_dointvec_table_maxlen_unset: sysctl_test_dointvec_table_maxlen_unset passed
ok 2 - sysctl_test_dointvec_table_maxlen_unset
# sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
ok 3 - sysctl_test_dointvec_table_len_is_zero
# sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
ok 4 - sysctl_test_dointvec_table_read_but_position_set
# sysctl_test_dointvec_happy_single_positive: sysctl_test_dointvec_happy_single_positive passed
ok 5 - sysctl_test_dointvec_happy_single_positive
# sysctl_test_dointvec_happy_single_negative: sysctl_test_dointvec_happy_single_negative passed
ok 6 - sysctl_test_dointvec_happy_single_negative
# sysctl_test_dointvec_single_less_int_min: sysctl_test_dointvec_single_less_int_min passed
ok 7 - sysctl_test_dointvec_single_less_int_min
# sysctl_test_dointvec_single_greater_int_max: sysctl_test_dointvec_single_greater_int_max passed
ok 8 - sysctl_test_dointvec_single_greater_int_max
kunit sysctl_test: all tests passed
ok 1 - sysctl_test
# Subtest: example
1..2
init_suite
# example_simple_test: initializing
Stack:
6016f7db 6f81bd30 6f81bdd0 60021450
6024b0e8 60021440 60018bbe 16f81bdc0
00000001 6f81bd30 6f81bd20 6f81bdd0
Call Trace:
[<6016f7db>] ? kunit_try_run_case+0xab/0xf0
[<60021450>] ? set_signals+0x0/0x60
[<60021440>] ? get_signals+0x0/0x10
[<60018bbe>] ? kunit_um_run_try_catch+0x5e/0xc0
[<60021450>] ? set_signals+0x0/0x60
[<60021440>] ? get_signals+0x0/0x10
[<60018bb3>] ? kunit_um_run_try_catch+0x53/0xc0
[<6016f321>] ? kunit_run_case_catch_errors+0x121/0x1a0
[<60018b60>] ? kunit_um_run_try_catch+0x0/0xc0
[<600189e0>] ? kunit_um_throw+0x0/0x180
[<6016f730>] ? kunit_try_run_case+0x0/0xf0
[<6016f600>] ? kunit_catch_run_case+0x0/0x130
[<6016edd0>] ? kunit_vprintk+0x0/0x30
[<6016ece0>] ? kunit_fail+0x0/0x40
[<6016eca0>] ? kunit_abort+0x0/0x40
[<6016ed20>] ? kunit_printk_emit+0x0/0xb0
[<6016f200>] ? kunit_run_case_catch_errors+0x0/0x1a0
[<6016f46e>] ? kunit_run_tests+0xce/0x260
[<6005b390>] ? unregister_console+0x0/0x190
[<60175b70>] ? suite_kunit_initexample_test_suite+0x0/0x20
[<60001cbb>] ? do_one_initcall+0x0/0x197
[<60001d47>] ? do_one_initcall+0x8c/0x197
[<6005cd20>] ? irq_to_desc+0x0/0x30
[<60002005>] ? kernel_init_freeable+0x1b3/0x272
[<6005c5ec>] ? printk+0x0/0x9b
[<601c0086>] ? kernel_init+0x26/0x160
[<60014442>] ? new_thread_handler+0x82/0xc0
# example_simple_test: kunit test case crashed!
# example_simple_test: example_simple_test failed
not ok 1 - example_simple_test
# example_mock_test: initializing
# example_mock_test: example_mock_test passed
ok 2 - example_mock_test
kunit example: one or more tests failed
not ok 2 - example
List of all partitions:
@@ -3,5 +3,5 @@ TAP version 14
# Subtest: suite
1..1
# Subtest: case
-ok 1 - case # SKIP
+ok 1 - case
ok 1 - suite