Commit 45b3a376 authored by David S. Miller

Merge branch 'tc-testing-plugin-architecture'

Brenda J. Butler says:

====================
tools: tc-testing: Plugin Architecture

To make tdc.py more general, we are introducing a plugin architecture.

This patch set first organizes the command line parameters, then
introduces the plugin architecture and some example plugins.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents c402fb7e 95ce14c3
@@ -17,8 +17,8 @@ REQUIREMENTS
* The kernel must have veth support available, as a veth pair is created
prior to running the tests.
* All tc-related features being tested must be built in or available as
modules. To check what is required in current setup run:
./tdc.py -c
Note:
@@ -44,10 +44,13 @@ using the -p option when running tdc:
RUNNING TDC
-----------
To use tdc, root privileges are required. This is because the
commands being tested must be run as root. The code that enforces
execution by root uid has been moved into a plugin (see PLUGIN
ARCHITECTURE, below).
If nsPlugin is linked, all tests are executed inside a network
namespace to prevent conflicts within the host.
Running tdc without any arguments will run all tests. Refer to the section
on command line arguments for more information, or run:
@@ -59,6 +62,33 @@ output captured from the failing test will be printed immediately following
the failed test in the TAP output.
OVERVIEW OF TDC EXECUTION
-------------------------
One run of tests is considered a "test suite" (this will be refined in the
future). A test suite has one or more test cases in it.
A test case has four stages:
- setup
- execute
- verify
- teardown
The setup and teardown stages can run zero or more commands. The setup
stage does some setup if the test needs it. The teardown stage undoes
the setup and returns the system to a "neutral" state so any other test
can be run next. These two stages require any commands run to return
success, but do not otherwise verify the results.
The execute and verify stages each run one command. The execute stage
tests the return code against one or more acceptable values. The
verify stage checks the return code for success, and also compares
the stdout with a regular expression.
Each of the commands in any stage will run in a shell instance.
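The four stages can be illustrated with a toy test case. The JSON field names below match those used by tdc.py (setup, cmdUnderTest, expExitCode, verifyCmd, matchPattern, matchCount, teardown), but the commands ('true', 'echo') are harmless stand-ins for real tc commands, and the mini-runner is only a sketch of the flow, not tdc itself:

```python
import json
import re
import subprocess

case = json.loads("""
{
    "id": "abcd",
    "name": "illustrative test case",
    "category": ["example"],
    "setup": ["true"],
    "cmdUnderTest": "true",
    "expExitCode": "0",
    "verifyCmd": "echo qdisc ingress",
    "matchPattern": "qdisc ingress",
    "matchCount": "1",
    "teardown": ["true"]
}
""")

def run(cmd):
    # each command runs in its own shell instance
    p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return p.returncode, p.stdout

for cmd in case["setup"]:            # setup: commands must succeed
    assert run(cmd)[0] == 0
rc, _ = run(case["cmdUnderTest"])    # execute: check the return code
assert rc == int(case["expExitCode"])
_, out = run(case["verifyCmd"])      # verify: regex match on stdout
matches = re.findall(case["matchPattern"], out)
assert len(matches) == int(case["matchCount"])
for cmd in case["teardown"]:         # teardown: restore a neutral state
    assert run(cmd)[0] == 0
print("ok")
```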
USER-DEFINED CONSTANTS
----------------------
@@ -70,23 +100,132 @@ executed as part of the test. More will be added as test cases require.
Example:
$TC qdisc add dev $DEV1 ingress
The NAMES values are used to substitute into the commands in the test cases.
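The substitution works via Python's string.Template, as in tdc's replace_keywords helper. The NAMES values below are illustrative; the real ones come from tdc_config.py:

```python
from string import Template

# Illustrative NAMES values; the real ones are defined in tdc_config.py.
NAMES = {'TC': '/sbin/tc', 'DEV1': 'v0p1'}

# safe_substitute replaces $TC, $DEV1, etc. and leaves any unknown
# $keywords in place rather than raising.
cmd = Template('$TC qdisc add dev $DEV1 ingress').safe_substitute(NAMES)
print(cmd)  # /sbin/tc qdisc add dev v0p1 ingress
```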
COMMAND LINE ARGUMENTS
----------------------
Run tdc.py -h to see the full list of available arguments.
usage: tdc.py [-h] [-p PATH] [-D DIR [DIR ...]] [-f FILE [FILE ...]]
[-c [CATG [CATG ...]]] [-e ID [ID ...]] [-l] [-s] [-i] [-v]
[-d DEVICE] [-n NS] [-V]
Linux TC unit tests
optional arguments:
-h, --help show this help message and exit
-p PATH, --path PATH The full path to the tc executable to use
-v, --verbose Show the commands that are being run
-d DEVICE, --device DEVICE
Execute the test case in flower category
selection:
select which test cases: files plus directories; filtered by categories
plus testids
-D DIR [DIR ...], --directory DIR [DIR ...]
Collect tests from the specified directory(ies)
(default [tc-tests])
-f FILE [FILE ...], --file FILE [FILE ...]
Run tests from the specified file(s)
-c [CATG [CATG ...]], --category [CATG [CATG ...]]
Run tests only from the specified category/ies, or if
no category/ies is/are specified, list known
categories.
-e ID [ID ...], --execute ID [ID ...]
Execute the specified test cases with specified IDs
action:
select action to perform on selected test cases
-l, --list List all test cases, or those only within the
specified category
-s, --show Display the selected test cases
-i, --id Generate ID numbers for new test cases
netns:
options for nsPlugin(run commands in net namespace)
-n NS, --namespace NS
Run commands in namespace NS
valgrind:
options for valgrindPlugin (run command under test under Valgrind)
-V, --valgrind Run commands under valgrind
PLUGIN ARCHITECTURE
-------------------
There is now a plugin architecture, and some of the functionality that
was in the tdc.py script has been moved into the plugins.
The plugins are in the directory plugin-lib. They are executed from
the directory plugins. Put symbolic links from plugins to plugin-lib,
and name them according to the order you want them to run.
Example:
bjb@bee:~/work/tc-testing$ ls -l plugins
total 4
lrwxrwxrwx 1 bjb bjb 27 Oct 4 16:12 10-rootPlugin.py -> ../plugin-lib/rootPlugin.py
lrwxrwxrwx 1 bjb bjb 25 Oct 12 17:55 20-nsPlugin.py -> ../plugin-lib/nsPlugin.py
-rwxr-xr-x 1 bjb bjb 0 Sep 29 15:56 __init__.py
Each plugin is a subclass of TdcPlugin, defined in TdcPlugin.py, and
must be called "SubPlugin" so tdc can find it. The plugins are
distinguished from each other in the python program by their module
name.
This base class supplies "hooks" to run extra functions. These hooks are as follows:
pre- and post-suite
pre- and post-case
pre- and post-execute stage
adjust-command (runs in all stages and receives the stage name)
The pre-suite hook receives the number of tests and an array of test ids.
This allows you to dump out the list of skipped tests in the event of a
failure during setup or teardown stage.
The pre-case hook receives the ordinal number and test id of the current test.
The adjust-command hook receives the stage id (see list below) and the
full command to be executed. This allows for last-minute adjustment
of the command.
The stages are identified by the following strings:
- pre (pre-suite)
- setup
- command
- verify
- teardown
- post (post-suite)
To write a plugin, you need to inherit from TdcPlugin in
TdcPlugin.py. To use the plugin, you have to put the
implementation file in plugin-lib, and add a symbolic link to it from
plugins. It will be detected at run time and invoked at the
appropriate times. There are a few examples in the plugin-lib
directory:
- rootPlugin.py:
implements the enforcement of running as root
- nsPlugin.py:
sets up a network namespace and runs all commands in that namespace
- valgrindPlugin.py
runs each command in the execute stage under valgrind,
and checks for leaks.
This plugin will output an extra result for each test in the test
file: alongside the existing pass/fail output, it adds a result for
whether the command leaked memory.
(This one is a preliminary version; it may not work quite right yet,
but the overall template is there and it should only need tweaks.)
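A minimal plugin in the style described above might look like the sketch below. The stand-in base class and the 'env' prefix are illustrative only; the real base class lives in TdcPlugin.py:

```python
# Stand-in for tdc's TdcPlugin base class, reduced to the one hook
# this sketch needs; the real base class lives in TdcPlugin.py.
class TdcPlugin:
    def adjust_command(self, stage, command):
        return command

class SubPlugin(TdcPlugin):  # must be called SubPlugin so tdc finds it
    def __init__(self):
        self.sub_class = 'example/SubPlugin'
        super().__init__()

    def adjust_command(self, stage, command):
        super().adjust_command(stage, command)
        if stage == 'execute':
            # Illustrative: prefix only the command under test.
            command = 'env ' + command
        return command

p = SubPlugin()
print(p.adjust_command('execute', 'tc qdisc show'))  # env tc qdisc show
print(p.adjust_command('setup', 'ip link add'))      # ip link add
```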
ACKNOWLEDGEMENTS
@@ -5,6 +5,27 @@ tc Testing Suite To-Do list:
- Add support for multiple versions of tc to run successively
- Improve error messages when tdc aborts its run. Partially done - still
need to better handle problems in pre- and post-suite.
- Allow tdc to write its results to file.
Maybe use python logger module for this too.
- A better implementation of the "hooks". Currently, every plugin
will attempt to run a function at every hook point. Could be
changed so that plugin __init__ methods will register functions to
be run in the various predefined times. Then if a plugin does not
require action at a specific point, no penalty will be paid for
trying to run a function that will do nothing.
- Proper exception handling - make an exception class and use it
- a TestCase class, for easier testcase handling, searching, comparison
- a TestSuite class
and a way to configure a test suite,
to automate running multiple "test suites" with different requirements
- super simple test case example using ls, touch, etc
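The hook-registration idea in the to-do list above could be sketched as follows; HookRegistry and the hook-point names here are hypothetical, not part of tdc today:

```python
from collections import defaultdict

class HookRegistry:
    """Plugins register callables per hook point at __init__ time, so
    hook points a plugin does not implement cost nothing at run time."""
    def __init__(self):
        self.hooks = defaultdict(list)

    def register(self, point, func):
        self.hooks[point].append(func)

    def fire(self, point, *args):
        for func in self.hooks[point]:
            func(*args)

reg = HookRegistry()
seen = []
reg.register('pre_case', lambda testid: seen.append(testid))
reg.fire('pre_case', 'b4e1')   # only registered functions run
reg.fire('post_case')          # nothing registered: nothing happens
print(seen)  # ['b4e1']
```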
#!/usr/bin/env python3
class TdcPlugin:
def __init__(self):
super().__init__()
print(' -- {}.__init__'.format(self.sub_class))
def pre_suite(self, testcount, testidlist):
'''run commands before test_runner goes into a test loop'''
self.testcount = testcount
self.testidlist = testidlist
if self.args.verbose > 1:
print(' -- {}.pre_suite'.format(self.sub_class))
def post_suite(self, index):
'''run commands after test_runner completes the test loop
index is the last ordinal number of test that was attempted'''
if self.args.verbose > 1:
print(' -- {}.post_suite'.format(self.sub_class))
def pre_case(self, test_ordinal, testid):
'''run commands before test_runner does one test'''
if self.args.verbose > 1:
print(' -- {}.pre_case'.format(self.sub_class))
self.args.testid = testid
self.args.test_ordinal = test_ordinal
def post_case(self):
'''run commands after test_runner does one test'''
if self.args.verbose > 1:
print(' -- {}.post_case'.format(self.sub_class))
def pre_execute(self):
'''run command before test-runner does the execute step'''
if self.args.verbose > 1:
print(' -- {}.pre_execute'.format(self.sub_class))
def post_execute(self):
'''run command after test-runner does the execute step'''
if self.args.verbose > 1:
print(' -- {}.post_execute'.format(self.sub_class))
def adjust_command(self, stage, command):
'''adjust the command'''
if self.args.verbose > 1:
print(' -- {}.adjust_command {}'.format(self.sub_class, stage))
# if stage == 'pre':
# pass
# elif stage == 'setup':
# pass
# elif stage == 'execute':
# pass
# elif stage == 'verify':
# pass
# elif stage == 'teardown':
# pass
# elif stage == 'post':
# pass
# else:
# pass
return command
def add_args(self, parser):
'''Get the plugin args from the command line'''
self.argparser = parser
return self.argparser
def check_args(self, args, remaining):
'''Check that the args are set correctly'''
self.args = args
if self.args.verbose > 1:
print(' -- {}.check_args'.format(self.sub_class))
tdc - Adding plugins for tdc
Author: Brenda J. Butler - bjb@mojatatu.com
ADDING PLUGINS
--------------
A new plugin should be written in python as a class that inherits from TdcPlugin.
There are some examples in plugin-lib.
The plugin can be used to add functionality to the test framework,
such as:
- adding commands to be run before and/or after the test suite
- adding commands to be run before and/or after the test cases
- adding commands to be run before and/or after the execute phase of the test cases
- ability to alter the command to be run in any stage:
pre (the pre-suite stage)
setup
command (the execute stage)
verify
teardown
post (the post-suite stage)
- ability to add to the command line args, and use them at run time
The functions in the class should follow these interfaces:
def __init__(self)
def pre_suite(self, testcount, testidlist) # see "PRE_SUITE" below
def post_suite(self, ordinal) # see "SKIPPING" below
def pre_case(self, test_ordinal, testid) # see "PRE_CASE" below
def post_case(self)
def pre_execute(self)
def post_execute(self)
def adjust_command(self, stage, command) # see "ADJUST" below
def add_args(self, parser) # see "ADD_ARGS" below
def check_args(self, args, remaining) # see "CHECK_ARGS" below
PRE_SUITE
This method takes a testcount (number of tests to be run) and
testidlist (array of test ids for tests that will be run). This is
useful for various things, including when an exception occurs and the
rest of the tests must be skipped. The info is stored in the object,
and the post_suite method can refer to it when dumping the "skipped"
TAP output. The tdc.py script will do that for the test suite as
defined in the test case, but if the plugin is being used to run extra
tests on each test (eg, check for memory leaks on associated
co-processes) then that other tap output can be generated in the
post-suite method using this info passed in to the pre_suite method.
SKIPPING
The post_suite method will receive the ordinal number of the last
test to be attempted. It can use this info when outputting
the TAP output for the extra test cases.
PRE_CASE
The pre_case method will receive the ordinal number of the test
and the test id. Useful for outputting the extra test results.
ADJUST
The adjust_command method receives a string representing
the execution stage and a string which is the actual command to be
executed. The plugin can adjust the command, based on the stage of
execution.
The stages are represented by the following strings:
'pre'
'setup'
'command'
'verify'
'teardown'
'post'
The adjust_command method must return the adjusted command so tdc
can use it.
ADD_ARGS
The add_args method receives the argparser object and can add
arguments to it. Care should be taken that the new arguments do not
conflict with any from tdc.py or from other plugins that will be used
concurrently.
The add_args method should return the argparser object.
CHECK_ARGS
The check_args method is there so that the plugin can do validation on
the args, if needed. If there is a problem, an Exception should
be raised, with a string that explains the problem.
eg: raise Exception('plugin xxx, arg -y is wrong, fix it')
@@ -12,14 +12,18 @@ template.json for the required JSON format for test cases.
Include the 'id' field, but do not assign a value. Running tdc with the -i
option will generate a unique ID for that test case.
tdc will recursively search the 'tc-tests' subdirectory (or the
directories named with the -D option) for .json files. Any test case
files you create in these directories will automatically be included.
If you wish to store your custom test cases elsewhere, be sure to run
tdc with the -f argument and the path to your file, or the -D argument
and the path to your directory(ies).
Be aware of required escape characters in the JSON data - particularly
when defining the match pattern. Refer to the supplied json test files
for examples when in doubt. The match pattern is written in json, and
will be used by python. So the match pattern will be a python regular
expression, but should be written using json syntax.
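For instance, a regex backslash must itself be escaped in the JSON text; the pattern and the sample output line below are illustrative, not taken from a real test file:

```python
import json
import re

# The Python pattern \d+ is written "\\d+" in the JSON test file, so
# the raw JSON text below carries doubled backslashes.
json_text = r'{"matchPattern": "qdisc\\s+ingress (\\d+):"}'
pattern = json.loads(json_text)["matchPattern"]
print(pattern)  # qdisc\s+ingress (\d+):
print(bool(re.search(pattern, "qdisc ingress 100:")))  # True
```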
TEST CASE STRUCTURE
@@ -69,7 +73,8 @@ SETUP/TEARDOWN ERRORS
If an error is detected during the setup/teardown process, execution of the
tests will immediately stop with an error message and the namespace in which
the tests are run will be destroyed. This is to prevent inaccurate results
in the test cases. tdc will output a series of TAP results for the
skipped tests.
Repeated failures of the setup/teardown may indicate a problem with the test
case, or possibly even a bug in one of the commands that are not being tested.
@@ -79,3 +84,17 @@ so that it doesn't halt the script for an error that doesn't matter. Turn the
individual command into a list, with the command being first, followed by all
acceptable exit codes for the command.
Example:
A pair of setup commands. The first can have exit code 0, 1 or 255, the
second must have exit code 0.
"setup": [
[
"$TC actions flush action gact",
0,
1,
255
],
"$TC actions add action reclassify index 65536"
],
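The list-versus-string handling above mirrors how prepare_env interprets each entry; the helper name allowed_codes in this sketch is illustrative:

```python
def allowed_codes(cmdinfo):
    # A list gives the command first, then every acceptable exit code;
    # a bare string means only exit code 0 is acceptable.
    if isinstance(cmdinfo, list):
        return cmdinfo[0], cmdinfo[1:]
    return cmdinfo, [0]

print(allowed_codes(["$TC actions flush action gact", 0, 1, 255]))
print(allowed_codes("$TC actions add action reclassify index 65536"))
```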
tdc.py will look for plugins in a directory plugins off the cwd.
Make a set of numbered symbolic links from there to the actual plugins.
Eg:
tdc.py
plugin-lib/
plugins/
__init__.py
10-rootPlugin.py -> ../plugin-lib/rootPlugin.py
20-valgrindPlugin.py -> ../plugin-lib/valgrindPlugin.py
30-nsPlugin.py -> ../plugin-lib/nsPlugin.py
tdc.py will find them and use them.
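The layout above can be recreated programmatically; this sketch builds it in a scratch directory rather than a real source tree:

```python
import os
import tempfile

# Recreate the layout above in a scratch directory; in a real tree the
# links live in a plugins/ directory next to tdc.py.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'plugin-lib'))
os.makedirs(os.path.join(root, 'plugins'))
open(os.path.join(root, 'plugin-lib', 'rootPlugin.py'), 'w').close()
open(os.path.join(root, 'plugins', '__init__.py'), 'w').close()
# The numeric prefix encodes the order in which you want plugins run.
os.symlink('../plugin-lib/rootPlugin.py',
           os.path.join(root, 'plugins', '10-rootPlugin.py'))
print(sorted(os.listdir(os.path.join(root, 'plugins'))))
```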
rootPlugin
Check if the uid is root. If not, bail out.
valgrindPlugin
Run the command under test with valgrind, and produce an extra set of TAP results for the memory tests.
This plugin will write files to the cwd, called vgnd-xxx.log. These will contain
the valgrind output for test xxx. Any file matching the glob 'vgnd-*.log' will be
deleted at the end of the run.
nsPlugin
Run all the commands in a network namespace.
import os
import signal
from string import Template
import subprocess
import time
from TdcPlugin import TdcPlugin
from tdc_config import *
class SubPlugin(TdcPlugin):
def __init__(self):
self.sub_class = 'ns/SubPlugin'
super().__init__()
def pre_suite(self, testcount, testidlist):
'''run commands before test_runner goes into a test loop'''
super().pre_suite(testcount, testidlist)
if self.args.namespace:
self._ns_create()
def post_suite(self, index):
'''run commands after test_runner completes the test loop'''
super().post_suite(index)
if self.args.verbose:
print('{}.post_suite'.format(self.sub_class))
if self.args.namespace:
self._ns_destroy()
def add_args(self, parser):
super().add_args(parser)
self.argparser_group = self.argparser.add_argument_group(
'netns',
'options for nsPlugin(run commands in net namespace)')
self.argparser_group.add_argument(
'-n', '--namespace', action='store_true',
help='Run commands in namespace')
return self.argparser
def adjust_command(self, stage, command):
super().adjust_command(stage, command)
cmdform = 'list'
cmdlist = list()
if not self.args.namespace:
return command
if self.args.verbose:
print('{}.adjust_command'.format(self.sub_class))
if not isinstance(command, list):
cmdform = 'str'
cmdlist = command.split()
else:
cmdlist = command
if stage == 'setup' or stage == 'execute' or stage == 'verify' or stage == 'teardown':
if self.args.verbose:
print('adjust_command: stage is {}; inserting netns stuff in command [{}] list [{}]'.format(stage, command, cmdlist))
cmdlist.insert(0, self.args.NAMES['NS'])
cmdlist.insert(0, 'exec')
cmdlist.insert(0, 'netns')
cmdlist.insert(0, 'ip')
else:
pass
if cmdform == 'str':
command = ' '.join(cmdlist)
else:
command = cmdlist
if self.args.verbose:
print('adjust_command: return command [{}]'.format(command))
return command
def _ns_create(self):
'''
Create the network namespace in which the tests will be run and set up
the required network devices for it.
'''
if self.args.namespace:
cmd = 'ip netns add {}'.format(self.args.NAMES['NS'])
self._exec_cmd('pre', cmd)
cmd = 'ip link add $DEV0 type veth peer name $DEV1'
self._exec_cmd('pre', cmd)
cmd = 'ip link set $DEV1 netns {}'.format(self.args.NAMES['NS'])
self._exec_cmd('pre', cmd)
cmd = 'ip link set $DEV0 up'
self._exec_cmd('pre', cmd)
cmd = 'ip -n {} link set $DEV1 up'.format(self.args.NAMES['NS'])
self._exec_cmd('pre', cmd)
if self.args.device:
cmd = 'ip link set $DEV2 netns {}'.format(self.args.NAMES['NS'])
self._exec_cmd('pre', cmd)
cmd = 'ip -n {} link set $DEV2 up'.format(self.args.NAMES['NS'])
self._exec_cmd('pre', cmd)
def _ns_destroy(self):
'''
Destroy the network namespace for testing (and any associated network
devices as well)
'''
if self.args.namespace:
cmd = 'ip netns delete {}'.format(self.args.NAMES['NS'])
self._exec_cmd('post', cmd)
def _exec_cmd(self, stage, command):
'''
Perform any required modifications on an executable command, then run
it in a subprocess and return the results.
'''
if '$' in command:
command = self._replace_keywords(command)
self.adjust_command(stage, command)
if self.args.verbose:
print('_exec_cmd: command "{}"'.format(command))
proc = subprocess.Popen(command,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=ENVIR)
(rawout, serr) = proc.communicate()
if proc.returncode != 0 and len(serr) > 0:
foutput = serr.decode("utf-8")
else:
foutput = rawout.decode("utf-8")
proc.stdout.close()
proc.stderr.close()
return proc, foutput
def _replace_keywords(self, cmd):
"""
For a given executable command, substitute any known
variables contained within NAMES with the correct values
"""
tcmd = Template(cmd)
subcmd = tcmd.safe_substitute(self.args.NAMES)
return subcmd
import os
import sys
from TdcPlugin import TdcPlugin
from tdc_config import *
class SubPlugin(TdcPlugin):
def __init__(self):
self.sub_class = 'root/SubPlugin'
super().__init__()
def pre_suite(self, testcount, testidlist):
# run commands before test_runner goes into a test loop
super().pre_suite(testcount, testidlist)
if os.geteuid():
print('This script must be run with root privileges', file=sys.stderr)
exit(1)
'''
run the command under test, under valgrind and collect memory leak info
as a separate test.
'''
import os
import re
import signal
from string import Template
import subprocess
import time
from TdcPlugin import TdcPlugin
from tdc_config import *
def vp_extract_num_from_string(num_as_string_maybe_with_commas):
return int(num_as_string_maybe_with_commas.replace(',',''))
class SubPlugin(TdcPlugin):
def __init__(self):
self.sub_class = 'valgrind/SubPlugin'
self.tap = ''
super().__init__()
def pre_suite(self, testcount, testidlist):
'''run commands before test_runner goes into a test loop'''
super().pre_suite(testcount, testidlist)
if self.args.verbose > 1:
print('{}.pre_suite'.format(self.sub_class))
if self.args.valgrind:
self._add_to_tap('1..{}\n'.format(self.testcount))
def post_suite(self, index):
'''run commands after test_runner completes the test loop'''
super().post_suite(index)
self._add_to_tap('\n|---\n')
if self.args.verbose > 1:
print('{}.post_suite'.format(self.sub_class))
print('{}'.format(self.tap))
if self.args.verbose < 4:
subprocess.check_output('rm -f vgnd-*.log', shell=True)
def add_args(self, parser):
super().add_args(parser)
self.argparser_group = self.argparser.add_argument_group(
'valgrind',
'options for valgrindPlugin (run command under test under Valgrind)')
self.argparser_group.add_argument(
'-V', '--valgrind', action='store_true',
help='Run commands under valgrind')
return self.argparser
def adjust_command(self, stage, command):
super().adjust_command(stage, command)
cmdform = 'list'
cmdlist = list()
if not self.args.valgrind:
return command
if self.args.verbose > 1:
print('{}.adjust_command'.format(self.sub_class))
if not isinstance(command, list):
cmdform = 'str'
cmdlist = command.split()
else:
cmdlist = command
if stage == 'execute':
if self.args.verbose > 1:
print('adjust_command: stage is {}; inserting valgrind stuff in command [{}] list [{}]'.
format(stage, command, cmdlist))
cmdlist.insert(0, '--track-origins=yes')
cmdlist.insert(0, '--show-leak-kinds=definite,indirect')
cmdlist.insert(0, '--leak-check=full')
cmdlist.insert(0, '--log-file=vgnd-{}.log'.format(self.args.testid))
cmdlist.insert(0, '-v') # ask for summary of non-leak errors
cmdlist.insert(0, ENVIR['VALGRIND_BIN'])
else:
pass
if cmdform == 'str':
command = ' '.join(cmdlist)
else:
command = cmdlist
if self.args.verbose > 1:
print('adjust_command: return command [{}]'.format(command))
return command
def post_execute(self):
if not self.args.valgrind:
return
self.definitely_lost_re = re.compile(
r'definitely lost:\s+([,0-9]+)\s+bytes in\s+([,0-9]+)\sblocks', re.MULTILINE | re.DOTALL)
self.indirectly_lost_re = re.compile(
r'indirectly lost:\s+([,0-9]+)\s+bytes in\s+([,0-9]+)\s+blocks', re.MULTILINE | re.DOTALL)
self.possibly_lost_re = re.compile(
r'possibly lost:\s+([,0-9]+)\s+bytes in\s+([,0-9]+)\s+blocks', re.MULTILINE | re.DOTALL)
self.non_leak_error_re = re.compile(
r'ERROR SUMMARY:\s+([,0-9]+) errors from\s+([,0-9]+)\s+contexts', re.MULTILINE | re.DOTALL)
def_num = 0
ind_num = 0
pos_num = 0
nle_num = 0
# what about concurrent test runs? Maybe force them to be in different directories?
with open('vgnd-{}.log'.format(self.args.testid)) as vfd:
content = vfd.read()
def_mo = self.definitely_lost_re.search(content)
ind_mo = self.indirectly_lost_re.search(content)
pos_mo = self.possibly_lost_re.search(content)
nle_mo = self.non_leak_error_re.search(content)
if def_mo:
def_num = int(def_mo.group(2))
if ind_mo:
ind_num = int(ind_mo.group(2))
if pos_mo:
pos_num = int(pos_mo.group(2))
if nle_mo:
nle_num = int(nle_mo.group(1))
mem_results = ''
if (def_num > 0) or (ind_num > 0) or (pos_num > 0) or (nle_num > 0):
mem_results += 'not '
mem_results += 'ok {} - {}-mem # {}\n'.format(
self.args.test_ordinal, self.args.testid, 'memory leak check')
self._add_to_tap(mem_results)
if mem_results.startswith('not '):
print('{}'.format(content))
self._add_to_tap(content)
def _add_to_tap(self, more_tap_output):
self.tap += more_tap_output
@@ -11,16 +11,88 @@ import re
import os
import sys
import argparse
import importlib
import json
import subprocess
import time
from collections import OrderedDict
from string import Template
from tdc_config import *
from tdc_helper import *
import TdcPlugin
class PluginMgr:
def __init__(self, argparser):
super().__init__()
self.plugins = {}
self.plugin_instances = []
self.args = []
self.argparser = argparser
# TODO, put plugins in order
plugindir = os.getenv('TDC_PLUGIN_DIR', './plugins')
for dirpath, dirnames, filenames in os.walk(plugindir):
for fn in filenames:
if (fn.endswith('.py') and
not fn == '__init__.py' and
not fn.startswith('#') and
not fn.startswith('.#')):
mn = fn[0:-3]
foo = importlib.import_module('plugins.' + mn)
self.plugins[mn] = foo
self.plugin_instances.append(foo.SubPlugin())
def call_pre_suite(self, testcount, testidlist):
for pgn_inst in self.plugin_instances:
pgn_inst.pre_suite(testcount, testidlist)
def call_post_suite(self, index):
for pgn_inst in reversed(self.plugin_instances):
pgn_inst.post_suite(index)
def call_pre_case(self, test_ordinal, testid):
for pgn_inst in self.plugin_instances:
try:
pgn_inst.pre_case(test_ordinal, testid)
except Exception as ee:
print('exception {} in call to pre_case for {} plugin'.
format(ee, pgn_inst.__class__))
print('test_ordinal is {}'.format(test_ordinal))
print('testid is {}'.format(testid))
raise
def call_post_case(self):
for pgn_inst in reversed(self.plugin_instances):
pgn_inst.post_case()
def call_pre_execute(self):
for pgn_inst in self.plugin_instances:
pgn_inst.pre_execute()
def call_post_execute(self):
for pgn_inst in reversed(self.plugin_instances):
pgn_inst.post_execute()
def call_add_args(self, parser):
for pgn_inst in self.plugin_instances:
parser = pgn_inst.add_args(parser)
return parser
def call_check_args(self, args, remaining):
for pgn_inst in self.plugin_instances:
pgn_inst.check_args(args, remaining)
def call_adjust_command(self, stage, command):
for pgn_inst in self.plugin_instances:
command = pgn_inst.adjust_command(stage, command)
return command
@staticmethod
def _make_argparser(args):
self.argparser = argparse.ArgumentParser(
description='Linux TC unit tests')
def replace_keywords(cmd):
@@ -33,21 +105,24 @@ def replace_keywords(cmd):
return subcmd
def exec_cmd(args, pm, stage, command):
"""
Perform any required modifications on an executable command, then run
it in a subprocess and return the results.
"""
if len(command.strip()) == 0:
return None, None
if '$' in command:
command = replace_keywords(command)
command = pm.call_adjust_command(stage, command)
if args.verbose > 0:
print('command "{}"'.format(command))
proc = subprocess.Popen(command,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=ENVIR)
(rawout, serr) = proc.communicate()
if proc.returncode != 0 and len(serr) > 0:
@@ -60,116 +135,130 @@ def exec_cmd(command, nsonly=True):
return proc, foutput
def prepare_env(args, pm, stage, prefix, cmdlist):
"""
Execute the setup/teardown commands for a test case.
Optionally terminate test execution if the command fails.
"""
if args.verbose > 0:
print('{}'.format(prefix))
for cmdinfo in cmdlist:
if isinstance(cmdinfo, list):
exit_codes = cmdinfo[1:]
cmd = cmdinfo[0]
else:
exit_codes = [0]
cmd = cmdinfo
if not cmd:
continue
(proc, foutput) = exec_cmd(args, pm, stage, cmd)
if proc and (proc.returncode not in exit_codes):
print('', file=sys.stderr)
print("{} *** Could not execute: \"{}\"".format(prefix, cmd),
file=sys.stderr)
print("\n{} *** Error message: \"{}\"".format(prefix, foutput),
file=sys.stderr)
print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr)
print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr)
print("\n\n{} *** stderr ***".format(proc.stderr), file=sys.stderr)
raise Exception('"{}" did not complete successfully'.format(prefix))
def run_one_test(pm, args, index, tidx):
result = True
tresult = ""
tap = ""
if args.verbose > 0:
print("\t====================\n=====> ", end="")
print("Test " + tidx["id"] + ": " + tidx["name"])
pm.call_pre_case(index, tidx['id'])
prepare_env(args, pm, 'setup', "-----> prepare stage", tidx["setup"])
if (args.verbose > 0):
print('-----> execute stage')
pm.call_pre_execute()
(p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"])
exit_code = p.returncode
pm.call_post_execute()
if (exit_code != int(tidx["expExitCode"])):
result = False
print("exit:", exit_code, int(tidx["expExitCode"]))
print(procout)
else:
if args.verbose > 0:
print('-----> verify stage')
match_pattern = re.compile(
str(tidx["matchPattern"]), re.DOTALL | re.MULTILINE)
(p, procout) = exec_cmd(args, pm, 'verify', tidx["verifyCmd"])
match_index = re.findall(match_pattern, procout)
if len(match_index) != int(tidx["matchCount"]):
result = False
if not result:
tresult += 'not '
tresult += 'ok {} - {} # {}\n'.format(str(index), tidx['id'], tidx['name'])
tap += tresult
if result == False:
tap += procout
prepare_env(args, pm, 'teardown', '-----> teardown stage', tidx['teardown'])
pm.call_post_case()
index += 1
return tap
def test_runner(pm, args, filtered_tests):
"""
Driver function for the unit tests.
Prints information about the tests being run, executes the setup and
teardown commands and the command under test itself. Also determines
success/failure based on the information in the test case and generates
TAP output accordingly.
"""
testlist = filtered_tests
tcount = len(testlist)
index = 1
tap = str(index) + ".." + str(tcount) + "\n"
badtest = None
pm.call_pre_suite(tcount, [tidx['id'] for tidx in testlist])
if args.verbose > 1:
print('Run tests here')
for tidx in testlist:
if "flower" in tidx["category"] and args.device is None:
continue
try:
badtest = tidx # in case it goes bad
tap += run_one_test(pm, args, index, tidx)
except Exception as ee:
print('Exception {} (caught in test_runner, running test {} {} {})'.
format(ee, index, tidx['id'], tidx['name']))
break
index += 1
# if we failed in setup or teardown,
# fill in the remaining tests with not ok
count = index
tap += 'about to flush the tap output if tests need to be skipped\n'
if tcount + 1 != index:
for tidx in testlist[index - 1:]:
msg = 'skipped - previous setup or teardown failed'
tap += 'ok {} - {} # {} {} {}\n'.format(
count, tidx['id'], msg, index, badtest.get('id', '--Unknown--'))
count += 1
tap += 'done flushing skipped test tap output\n'
pm.call_post_suite(index)
return tap
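When a setup or teardown exception breaks out of the loop above, test_runner flushes the remaining cases as skipped, naming the test that failed. The fill-in logic restated as a standalone helper (the helper name and sample data are ours):

```python
def fill_skipped(tap, testlist, index, tcount, badtest):
    # after an exception at test `index`, report the rest as skipped,
    # pointing at the test whose setup/teardown failed
    count = index
    if tcount + 1 != index:
        for tidx in testlist[index - 1:]:
            msg = 'skipped - previous setup or teardown failed'
            tap += 'ok {} - {} # {} {} {}\n'.format(
                count, tidx['id'], msg, index, badtest.get('id', '--Unknown--'))
            count += 1
    return tap

tests = [{'id': 't1'}, {'id': 't2'}, {'id': 't3'}]
out = fill_skipped('', tests, 2, 3, {'id': 't2'})
```

With a failure at the second of three tests, the helper emits skip lines for tests 2 and 3.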
def has_blank_ids(idlist):
"""
......@@ -209,29 +298,50 @@ def set_args(parser):
"""
Set the command line arguments for tdc.
"""
parser.add_argument('-p', '--path', type=str,
parser.add_argument(
'-p', '--path', type=str,
help='The full path to the tc executable to use')
parser.add_argument('-c', '--category', type=str, nargs='?', const='+c',
help='Run tests only from the specified category, or if no category is specified, list known categories.')
parser.add_argument('-f', '--file', type=str,
help='Run tests from the specified file')
parser.add_argument('-l', '--list', type=str, nargs='?', const="++", metavar='CATEGORY',
sg = parser.add_argument_group(
'selection', 'select which test cases: ' +
'files plus directories; filtered by categories plus testids')
ag = parser.add_argument_group(
'action', 'select action to perform on selected test cases')
sg.add_argument(
'-D', '--directory', nargs='+', metavar='DIR',
help='Collect tests from the specified directory(ies) ' +
'(default [tc-tests])')
sg.add_argument(
'-f', '--file', nargs='+', metavar='FILE',
help='Run tests from the specified file(s)')
sg.add_argument(
'-c', '--category', nargs='*', metavar='CATG', default=['+c'],
help='Run tests only from the specified categories, ' +
'or if no category is specified, list known categories.')
sg.add_argument(
'-e', '--execute', nargs='+', metavar='ID',
help='Execute the specified test cases with specified IDs')
ag.add_argument(
'-l', '--list', action='store_true',
help='List all test cases, or those only within the specified category')
parser.add_argument('-s', '--show', type=str, nargs=1, metavar='ID', dest='showID',
help='Display the test case with specified id')
parser.add_argument('-e', '--execute', type=str, nargs=1, metavar='ID',
help='Execute the single test case with specified ID')
parser.add_argument('-i', '--id', action='store_true', dest='gen_id',
ag.add_argument(
'-s', '--show', action='store_true', dest='showID',
help='Display the selected test cases')
ag.add_argument(
'-i', '--id', action='store_true', dest='gen_id',
help='Generate ID numbers for new test cases')
parser.add_argument(
'-v', '--verbose', action='count', default=0,
help='Show the commands that are being run')
parser.add_argument('-d', '--device',
help='Execute the test case in flower category')
return parser
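The new set_args splits the options into "selection" and "action" argparse argument groups, and the nargs choices carry meaning: `-c` with no values produces an empty list (list the known categories), while omitting `-c` keeps the `['+c']` default. A trimmed, self-contained sketch covering only a subset of the real options:

```python
import argparse

# trimmed sketch of the grouped options added in set_args above
parser = argparse.ArgumentParser(description='tdc-style argument groups')
sg = parser.add_argument_group(
    'selection', 'select which test cases')
ag = parser.add_argument_group(
    'action', 'select action to perform on selected test cases')
sg.add_argument('-c', '--category', nargs='*', metavar='CATG', default=['+c'],
                help='run only these categories; bare -c lists categories')
sg.add_argument('-e', '--execute', nargs='+', metavar='ID',
                help='execute the test cases with these IDs')
ag.add_argument('-l', '--list', action='store_true',
                help='list the selected test cases')

# '-c' stops consuming values at the next option flag
args = parser.parse_args(['-c', 'gact', 'police', '-l'])
```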
def check_default_settings(args):
def check_default_settings(args, remaining, pm):
"""
Process any arguments overriding the default settings, and ensure the
settings are correct.
Process any arguments overriding the default settings,
and ensure the settings are correct.
"""
# Allow for overriding specific settings
global NAMES
......@@ -244,6 +354,8 @@ def check_default_settings(args):
print("The specified tc path " + NAMES['TC'] + " does not exist.")
exit(1)
pm.call_check_args(args, remaining)
def get_id_list(alltests):
"""
......@@ -300,40 +412,107 @@ def generate_case_ids(alltests):
json.dump(testlist, outfile, indent=4)
outfile.close()
def filter_tests_by_id(args, testlist):
'''
Remove tests from testlist that are not in the named id list.
If id list is empty, return empty list.
'''
newlist = list()
if testlist and args.execute:
target_ids = args.execute
if isinstance(target_ids, list) and (len(target_ids) > 0):
newlist = list(filter(lambda x: x['id'] in target_ids, testlist))
return newlist
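filter_tests_by_id above keeps only the cases whose 'id' appears in args.execute, and returns an empty list when no ids were requested. Restated standalone with a stand-in args object and sample test list (both ours):

```python
from types import SimpleNamespace

def filter_tests_by_id(args, testlist):
    # keep only tests whose id is in the requested id list;
    # an empty or missing id list selects nothing
    newlist = list()
    if testlist and args.execute:
        target_ids = args.execute
        if isinstance(target_ids, list) and (len(target_ids) > 0):
            newlist = list(filter(lambda x: x['id'] in target_ids, testlist))
    return newlist

tests = [{'id': 'e9a3'}, {'id': '6f5a'}, {'id': 'd052'}]
picked = filter_tests_by_id(SimpleNamespace(execute=['6f5a', 'd052']), tests)
```

The original file order is preserved in the result.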
def filter_tests_by_category(args, testlist):
'''
Remove tests from testlist that are not in a named category.
'''
answer = list()
if args.category and testlist:
test_ids = list()
for catg in set(args.category):
if catg == '+c':
continue
print('considering category {}'.format(catg))
for tc in testlist:
if catg in tc['category'] and tc['id'] not in test_ids:
answer.append(tc)
test_ids.append(tc['id'])
return answer
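filter_tests_by_category above collects every test in any requested category while deduplicating by id, and skips the '+c' sentinel that merely means "list categories". Restated standalone with a stand-in args object and sample data (both ours):

```python
from types import SimpleNamespace

def filter_tests_by_category(args, testlist):
    # collect tests matching any requested category, without duplicates
    answer = list()
    if args.category and testlist:
        test_ids = list()
        for catg in set(args.category):
            if catg == '+c':
                continue
            for tc in testlist:
                if catg in tc['category'] and tc['id'] not in test_ids:
                    answer.append(tc)
                    test_ids.append(tc['id'])
    return answer

tests = [
    {'id': '1', 'category': ['gact', 'filter']},
    {'id': '2', 'category': ['police']},
]
picked = filter_tests_by_category(SimpleNamespace(category=['gact', 'filter']), tests)
```

Test '1' matches both requested categories but appears once in the result.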
def get_test_cases(args):
"""
If a test case file is specified, retrieve tests from that file.
Otherwise, glob for all json files in subdirectories and load from
each one.
Also, if requested, filter by category, and add tests matching
certain ids.
"""
import fnmatch
if args.file != None:
if not os.path.isfile(args.file):
print("The specified test case file " + args.file + " does not exist.")
exit(1)
flist = [args.file]
else:
flist = []
for root, dirnames, filenames in os.walk('tc-tests'):
testdirs = ['tc-tests']
if args.file:
# at least one file was specified - remove the default directory
testdirs = []
for ff in args.file:
if not os.path.isfile(ff):
print("IGNORING file " + ff + "\n\tBECAUSE it does not exist.")
else:
flist.append(os.path.abspath(ff))
if args.directory:
testdirs = args.directory
for testdir in testdirs:
for root, dirnames, filenames in os.walk(testdir):
for filename in fnmatch.filter(filenames, '*.json'):
flist.append(os.path.join(root, filename))
alltests = list()
candidate = os.path.abspath(os.path.join(root, filename))
if candidate not in testdirs:
flist.append(candidate)
alltestcases = list()
for casefile in flist:
alltests = alltests + (load_from_file(casefile))
return alltests
alltestcases = alltestcases + (load_from_file(casefile))
allcatlist = get_test_categories(alltestcases)
allidlist = get_id_list(alltestcases)
testcases_by_cats = get_categorized_testlist(alltestcases, allcatlist)
idtestcases = filter_tests_by_id(args, alltestcases)
cattestcases = filter_tests_by_category(args, alltestcases)
cat_ids = [x['id'] for x in cattestcases]
if args.execute:
if args.category:
alltestcases = cattestcases + [x for x in idtestcases if x['id'] not in cat_ids]
else:
alltestcases = idtestcases
else:
if cat_ids:
alltestcases = cattestcases
else:
# just accept the existing value of alltestcases,
# which has been filtered by file/directory
pass
return allcatlist, allidlist, testcases_by_cats, alltestcases
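The tail of get_test_cases merges the id-filtered and category-filtered sets: `-e` selections win, optionally unioned with `-c` matches; otherwise category matches; otherwise everything collected from the files and directories. The decision restated as a pure helper (the helper name and explicit boolean parameters are ours):

```python
def combine_selection(idtestcases, cattestcases, have_execute, have_category,
                      alltestcases):
    # mirror the merge at the end of get_test_cases: -e wins, optionally
    # unioned with -c matches; else -c matches; else everything collected
    cat_ids = [x['id'] for x in cattestcases]
    if have_execute:
        if have_category:
            return cattestcases + [x for x in idtestcases
                                   if x['id'] not in cat_ids]
        return idtestcases
    if cat_ids:
        return cattestcases
    return alltestcases

idt = [{'id': 'a'}, {'id': 'b'}]
cat = [{'id': 'b'}, {'id': 'c'}]
allc = [{'id': 'a'}, {'id': 'b'}, {'id': 'c'}, {'id': 'd'}]
```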
def set_operation_mode(args):
def set_operation_mode(pm, args):
"""
Load the test case data and process remaining arguments to determine
what the script should do for this run, and call the appropriate
function.
"""
alltests = get_test_cases(args)
ucat, idlist, testcases, alltests = get_test_cases(args)
if args.gen_id:
idlist = get_id_list(alltests)
if (has_blank_ids(idlist)):
alltests = generate_case_ids(alltests)
else:
......@@ -347,70 +526,26 @@ def set_operation_mode(args):
print("Please correct them before continuing.")
exit(1)
ucat = get_test_categories(alltests)
if args.showID:
show_test_case_by_id(alltests, args.showID[0])
for atest in alltests:
print_test_case(atest)
exit(0)
if args.execute:
target_id = args.execute[0]
else:
target_id = ""
if args.category:
if (args.category == '+c'):
if isinstance(args.category, list) and (len(args.category) == 0):
print("Available categories:")
print_sll(ucat)
exit(0)
else:
target_category = args.category
else:
target_category = ""
testcases = get_categorized_testlist(alltests, ucat)
if args.list:
if (args.list == "++"):
if args.list:
list_test_cases(alltests)
exit(0)
elif(len(args.list) > 0):
if (args.list not in ucat):
print("Unknown category " + args.list)
print("Available categories:")
print_sll(ucat)
exit(1)
list_test_cases(testcases[args.list])
exit(0)
if (os.geteuid() != 0):
print("This script must be run with root privileges.\n")
exit(1)
ns_create()
if (len(target_category) == 0):
if (len(target_id) > 0):
alltests = list(filter(lambda x: target_id in x['id'], alltests))
if (len(alltests) == 0):
print("Cannot find a test case with ID matching " + target_id)
exit(1)
catresults = test_runner(alltests, args)
print("All test results: " + "\n\n" + catresults)
elif (len(target_category) > 0):
if (target_category == "flower") and args.device == None:
print("Please specify a NIC device (-d) to run category flower")
exit(1)
if (target_category not in ucat):
print("Specified category is not present in this file.")
exit(1)
if len(alltests):
catresults = test_runner(pm, args, alltests)
else:
catresults = test_runner(testcases[target_category], args)
print("Category " + target_category + "\n\n" + catresults)
ns_destroy()
catresults = 'No tests found\n'
print('All test results: \n\n{}'.format(catresults))
def main():
"""
......@@ -419,10 +554,15 @@ def main():
"""
parser = args_parse()
parser = set_args(parser)
pm = PluginMgr(parser)
parser = pm.call_add_args(parser)
(args, remaining) = parser.parse_known_args()
check_default_settings(args)
args.NAMES = NAMES
check_default_settings(args, remaining, pm)
if args.verbose > 2:
print('args is {}'.format(args))
set_operation_mode(args)
set_operation_mode(pm, args)
exit(0)
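main now wires a PluginMgr between the parser and the runner. The hook names invoked throughout the diff (call_add_args, call_check_args, call_pre_suite, call_pre_case, call_pre_execute, and their post counterparts) suggest roughly the per-plugin interface below. This is a hypothetical no-op skeleton only; the method signatures are inferred from the call sites, not from the plugin code, which is outside this diff:

```python
class TdcPlugin:
    # hypothetical no-op plugin skeleton; method names come from the
    # PluginMgr call sites in the hunks above, signatures are assumed
    def call_add_args(self, parser):
        # a plugin may extend the argument parser; return it unchanged here
        return parser

    def call_check_args(self, args, remaining):
        pass

    def call_pre_suite(self, testcount, testidlist):
        pass

    def call_post_suite(self, index):
        pass

    def call_pre_case(self, index, test_id):
        pass

    def call_post_case(self):
        pass

    def call_pre_execute(self):
        pass

    def call_post_execute(self):
        pass
```

A root-check or netns plugin would override the relevant hooks, e.g. refusing to run in call_pre_suite when not root.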
......
......@@ -57,20 +57,11 @@ def print_sll(items):
def print_test_case(tcase):
""" Pretty-printing of a given test case. """
print('\n==============\nTest {}\t{}\n'.format(tcase['id'], tcase['name']))
for k in tcase.keys():
if (isinstance(tcase[k], list)):
print(k + ":")
print_list(tcase[k])
else:
print(k + ": " + tcase[k])
def show_test_case_by_id(testlist, caseID):
""" Find the specified test case to pretty-print. """
if not any(d.get('id', None) == caseID for d in testlist):
print("That ID does not exist.")
exit(1)
else:
print_test_case(next((d for d in testlist if d['id'] == caseID)))
if not ((k == 'id') or (k == 'name')):
print(k + ": " + str(tcase[k]))