# CLI Tests
Each CLI is accompanied by a test script in the `test/cli` folder, which runs a set of commands (defined in the corresponding `.gen` files) in simulation mode and compares the output with the previously recorded one.
## Preparation
In order to run the CLI tests, the `sim` and `test` dependencies must be installed with icicle, usually like so:

```bash
python3 -m pip install ".[sim,test]"
```

Don't forget to include `-e` if developing.
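For example, a development install combining `-e` with the extras might look like this (standard pip syntax, shown here for convenience):

```bash
python3 -m pip install -e ".[sim,test]"
```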
In order to run the `pyvisa-sim` simulations for the MP1 instrument, `pyvisa-sim` must be patched to allow injection into setter responses. To do so, run:

```bash
cd deps/
bash patch-pyvisa-sim.sh
```

Alternatively, patch the current virtualenv installation of `pyvisa-sim` manually to match the patch files in `deps`. An MR is open on `pyvisa-sim`, so this step should no longer be necessary once the targeted version is updated.
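If patching manually, you first need to locate the installed copy of `pyvisa-sim` in the current environment; one way is pip's standard `show` command (a generic pip feature, not ICICLE-specific):

```bash
# Show the installation location and file list of pyvisa-sim
python3 -m pip show -f pyvisa-sim
```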
## Running tests locally
The CLI test scripts are run from the `test/cli` directory using `make`.
The following will run all tests sequentially:

```bash
cd test/cli
make test
```
In order to perform a single test, each script can be run using the `*.cli-test` target:

```bash
make keithley2410.cli-test
```
A test can also be run in parallel mode. This executes multiple instrument commands in parallel (up to the number of logical CPU cores), which speeds up testing and also checks locking as a side effect:
```bash
cd test/cli
make keithley2410.cli-test PARALLEL=1
```

or

```bash
make test PARALLEL=1  # for all tests
```
In addition, multiple test scripts can be run at the same time with `make -j`. The printing order is still guaranteed, but in this mode each test's output is printed in one block once all of its commands have completed.
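For example, to run up to four test scripts concurrently (the job count of 4 is illustrative):

```bash
cd test/cli
make -j4 test
```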
**Warning:** Combining `-j` and `PARALLEL=1` can lead to nCPU^2 processes. Depending on your scheduler and CPU this can be faster or slower; in any case it will put a high load on your system.
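If you do combine them, an explicitly bounded job count keeps the total process count in check (again, the value of 2 is illustrative):

```bash
make -j2 test PARALLEL=1
```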
The test scripts are evaluated by `test/cli_test.py`, which is called by `make` but can also be executed manually. The following commands are equivalent:

```bash
cd test
python3 cli_test.py cli/keithley2410
```

and

```bash
cd test/cli
make keithley2410.cli-test
```
Calling `cli_test.py -j TESTSCRIPT` activates the parallel execution of instrument commands.
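For example, the parallel equivalent of the manual invocation above is:

```bash
cd test
python3 cli_test.py -j cli/keithley2410
```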
## CI Integration
Tests are run automatically by the CI on any `git push` to the main ICICLE GitLab repository.
## Test Files
Test files contain test commands and expected output and return value in the following format:

```text
##
{test name}
# COMMAND
{test command} {test args...}
# OUTPUT
{expected output}
# RETURNS {expected return value}
##
...
```
The separator `##` is used between multiple tests in the same file.
For example, the Keithley2410 `identify` test looks like this:
```text
##
identify
# COMMAND
keithley2410 -S identify
# OUTPUT
Keithley Instruments Inc., Model 2410, 000000, SIM1.0
# RETURNS 0
##
```
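Since every test contains exactly one `# COMMAND` line, a quick way to count the tests in a file is a plain shell one-liner (a convenience, not part of the test tooling):

```bash
grep -c '^# COMMAND' keithley2410.test
```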
There is also a utility built into `cli_test.py` to generate a test file from the current state of a CLI. In order to use it, write an `<instrument>.gen` file in the following format:

```text
{test name} | {test command}
```
For the Keithley2410 `identify` test this looks like:

```text
identify | keithley2410 -S identify
```
A test file may then be generated by running `cli_test.py` with the `.gen` file passed to the `-g` flag, for example:

```bash
python3 cli_test.py -g keithley2410.gen keithley2410.test
```
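After generating, it is worth running the corresponding target to confirm that the new file passes:

```bash
cd test/cli
make keithley2410.cli-test
```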
## Test Writing Good Practice
When submitting merge requests, test files should in general be updated in a separate commit just before submitting the request, so that the changes to the test cases can be reviewed. In addition, test files should not be regenerated after initial submission, but only updated manually in most cases, since changes should be small and well-defined. An exception may be made for the addition of large numbers of new test cases.
When submitting changes, any changes to CLI behaviour should be well-motivated; changes to the expected test output of unrelated instruments will cause the request to be paused until the errant behaviour is removed or justified.
Test cases should check valid and invalid, expected and unexpected inputs, and should ensure the validity of the entirety of any new CLI interface submitted in a merge request. Exceptions may be made for non-exiting commands (e.g. `monitor`), as these are not currently supported by the test system.