How to run these tests
----------------------

### The easy version

To run all the tests that come bundled with Tor, run `make check`.

To run the Stem tests as well, fetch stem from the git repository,
set `STEM_SOURCE_DIR` to the checkout, and run `make test-stem`.

To run the Chutney tests as well, fetch chutney from the git repository,
set `CHUTNEY_PATH` to the checkout, and run `make test-network`.

To run all of the above, run `make test-full`.

To run all of the above, plus tests that require a working connection to the
internet, run `make test-full-online`.

### Running particular subtests

The Tor unit tests are divided into separate programs and a couple of
bundled unit test programs.

Separate programs are easy. For example, to run the memwipe tests in
isolation, you just run `./src/test/test-memwipe`.

To run tests within the unit test programs, you can specify the name of the
test. The string ".." can be used as a wildcard at the end of the test
name. For example, to run all the cell format tests, enter
`./src/test/test cellfmt/..`.

Many tests that need to mess with global state run in forked subprocesses
in order to keep from contaminating one another. But when debugging a
failing test, you might want to run it without forking a subprocess. To do
so, use the `--no-fork` option with a single test. (If you specify it along
with multiple tests, they might interfere.)

You can turn on logging in the unit tests by passing one of `--debug`,
`--info`, `--notice`, or `--warn`. By default only errors are displayed.

Unit tests are divided into `./src/test/test` and `./src/test/test-slow`.
The former are those that should finish in a few seconds; the latter tend
to take more time, and may include CPU-intensive operations, deliberate
delays, and stuff like that.

### Finding test coverage

Test coverage is a measurement of which lines your tests actually visit.

When you configure Tor with the `--enable-coverage` option, it should
build with support for coverage in the unit tests, and in a special
`tor-cov` binary.

Then, run the tests you'd like to see coverage from. If you have old
coverage output, you may need to run `reset-gcov` first.

Now you've got a bunch of files scattered around your build directories
called `*.gcda`. In order to extract the coverage output from them, make a
temporary directory for them and run `./scripts/test/coverage ${TMPDIR}`,
where `${TMPDIR}` is the temporary directory you made. This will create a
`.gcov` file for each source file under test, containing that file's source
annotated with the number of times the tests hit each line. (You'll need to
have gcov installed.)

You can get a summary of the test coverage for each file by running
`./scripts/test/cov-display ${TMPDIR}/*`. Each line lists the file's name,
the number of uncovered lines, and the coverage percentage.

For a summary of the test coverage for each _function_, run
`./scripts/test/cov-display -f ${TMPDIR}/*`.

### Comparing test coverage

Sometimes it's useful to compare test coverage for a branch you're working
on to coverage from another branch (such as git master, for example). But
you can't run `diff` on the two coverage outputs directly, since the actual
number of times each line is executed isn't so important, and isn't wholly
deterministic.

Instead, follow the instructions above for each branch, creating a separate
temporary directory for each. Then, run `./scripts/test/cov-diff ${D1}
${D2}`, where `D1` and `D2` are the directories you want to compare. This
will produce a diff of the two directories, with all lines normalized to be
either covered or uncovered.

To count new or modified uncovered lines in `D2`, you can run:

    ./scripts/test/cov-diff ${D1} ${D2} | grep '^+ *\#' | wc -l

What kinds of test should I write?
----------------------------------

Unit and regression tests: Does this function do what it's supposed to?
------------------------------------------------------------------------

Most of Tor's unit tests are made using the "tinytest" testing framework.
You can see a guide to using it in the tinytest manual at

    https://github.com/nmathewson/tinytest/blob/master/tinytest-manual.md

To add a new test of this kind, either edit an existing C file in
`src/test/`, or create a new C file there. Each test is a single function
that must be indexed in the table at the end of the file. We use the label
"done:" as a cleanup point for all test functions.

(Make sure you read `tinytest-manual.md` before proceeding.)

I use the terms "unit test" and "regression test" very sloppily here.

### A simple example

Here's an example of a test function for a simple function in util.c:
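A minimal test of this kind -- shown here as a sketch, with the real util.c
function `write_str_to_file()` standing in as the function under test --
might look like this:

    static void
    test_util_write_str_to_file(void *arg)
    {
      char *contents = NULL;
      const char *fname = get_fname("write_str_output");
      (void) arg;

      /* Write a short string to a file under the temporary directory
       * that the test suite manages for us. */
      tt_int_op(write_str_to_file(fname, "hello world", 0), OP_EQ, 0);

      /* Read it back, and make sure it round-tripped intact. */
      contents = read_file_to_str(fname, 0, NULL);
      tt_assert(contents);
      tt_str_op(contents, OP_EQ, "hello world");

     done:
      tor_free(contents);
    }

(As with every test function, it must also be indexed in the test table at
the end of its file.)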
This should look pretty familiar to you if you've read the tinytest
manual.

One thing to note here is that we use the testing-specific function
`get_fname` to generate a file with respect to a temporary directory that
the tests use. You don't need to delete the file; it will get removed when
the tests are done.

Also note our use of `OP_EQ` instead of `==` in the `tt_int_op()` calls.
We define `OP_*` macros to use instead of the binary comparison operators
so that analysis tools can more easily parse our code. (Coccinelle really
hates to see `==` used as a macro argument.)

Finally, remember that by convention, all `*_free()` functions that Tor
defines are defined to accept NULL harmlessly. Thus, you don't need to say
`if (contents)` in the cleanup block.

### Exposing static functions for testing

Sometimes you need to test a function, but you don't want to expose it
outside its usual module.

To support this, Tor's build system compiles a testing version of each
module, with extra identifiers exposed. If you want to declare a function
as static but available for testing, use the macro `STATIC` instead of
`static`. Then, make sure there's a macro-protected declaration of the
function in the module's header.

For example, `crypto_curve25519.h` contains:

    #ifdef CRYPTO_CURVE25519_PRIVATE
    STATIC int curve25519_impl(uint8_t *output, const uint8_t *secret,
                               const uint8_t *basepoint);
    #endif

The `crypto_curve25519.c` file and the `test_crypto.c` file both define
`CRYPTO_CURVE25519_PRIVATE`, so they can see this declaration.
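A test file can then call the `STATIC` function directly, as long as it
defines the private macro before including the header. Here is a sketch of
what that looks like; the test body is illustrative (and assumes
`curve25519_impl()` returns 0 on success), not the real `test_crypto.c`
code:

    /* Make the STATIC declaration visible before including the header. */
    #define CRYPTO_CURVE25519_PRIVATE
    #include "crypto_curve25519.h"

    static void
    test_crypto_curve25519_impl(void *arg)
    {
      uint8_t output[32], secret[32], basepoint[32];
      (void) arg;

      memset(secret, 5, sizeof(secret));
      memset(basepoint, 0, sizeof(basepoint));
      basepoint[0] = 9; /* the canonical curve25519 base point */

      /* This sketch assumes a zero return value on success. */
      tt_int_op(curve25519_impl(output, secret, basepoint), OP_EQ, 0);

     done:
      ;
    }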
One thing to note here is that we use the testing-specific -function "get_fname" to generate a file with respect to a temporary +function `get_fname` to generate a file with respect to a temporary directory that the tests use. You don't need to delete the file; it will get removed when the tests are done. -Also note our use of OP_EQ instead of == in the tt_int_op() calls. -We define OP_* macros to use instead of the binary comparison +Also note our use of `OP_EQ` instead of `==` in the `tt_int_op()` calls. +We define `OP_*` macros to use instead of the binary comparison operators so that analysis tools can more easily parse our code. -(Coccinelle really hates to see == used as a macro argument.) +(Coccinelle really hates to see `==` used as a macro argument.) -Finally, remember that by convention, all *_free() functions that +Finally, remember that by convention, all `*_free()` functions that Tor defines are defined to accept NULL harmlessly. Thus, you don't -need to say "if (contents)" in the cleanup block. +need to say `if (contents)` in the cleanup block. -=== Exposing static functions for testing +### Exposing static functions for testing Sometimes you need to test a function, but you don't want to expose it outside its usual module. @@ -193,20 +193,20 @@ it outside its usual module. To support this, Tor's build system compiles a testing version of each module, with extra identifiers exposed. If you want to declare a function as static but available for testing, use the -macro "STATIC" instead of "static". Then, make sure there's a +macro `STATIC` instead of `static`. Then, make sure there's a macro-protected declaration of the function in the module's header. -For example, crypto_curve25519.h contains: +For example, `crypto_curve25519.h` contains: -#ifdef CRYPTO_CURVE25519_PRIVATE -STATIC int curve25519_impl(uint8_t *output, const uint8_t *secret, + #ifdef CRYPTO_CURVE25519_PRIVATE + STATIC int curve25519_impl(uint8_t *output, const uint8_t *secret, const uint8_t *basepoint); -#endif + #endif -The crypto_curve25519.c file and the test_crypto.c file both define -CRYPTO_CURVE25519_PRIVATE, so they can see this declaration. +The `crypto_curve25519.c` file and the `test_crypto.c` file both define +`CRYPTO_CURVE25519_PRIVATE`, so they can see this declaration. -=== Mock functions for testing in isolation +### Mock functions for testing in isolation Often we want to test that a function works right, but the function to be tested depends on other functions whose behavior is hard to observe, @@ -216,7 +216,7 @@ To write tests for this case, you can replace the underlying functions with testing stubs while your unit test is running. You need to declare the underlying function as 'mockable', as follows: - MOCK_DECL(returntype, functionname, (argument list)); + MOCK_DECL(returntype, functionname, (argument list)); and then later implement it as: @@ -229,7 +229,7 @@ For example, if you had a 'connect to remote server' function, you could declare it as: - MOCK_DECL(int, connect_to_remote, (const char *name, status_t *status)); + MOCK_DECL(int, connect_to_remote, (const char *name, status_t *status)); When you declare a function this way, it will be declared as normal in regular builds, but when the module is built for testing, it is declared @@ -238,16 +238,16 @@ as a function pointer initialized to the actual implementation. 
For example, if you had a 'connect to remote server' function, you could
declare it as:

    MOCK_DECL(int, connect_to_remote, (const char *name, status_t *status));

When you declare a function this way, it will be declared as normal in
regular builds, but when the module is built for testing, it is declared
as a function pointer initialized to the actual implementation.

In your tests, if you want to override the function with a temporary
replacement, you say:

    MOCK(functionname, replacement_function_name);

And later, you can restore the original function with:

    UNMOCK(functionname);

For more information, see the definitions of this mocking logic in
`testsupport.h`.
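To put these together: a test can swap in a stub for the hypothetical
`connect_to_remote()` declared above, so that nothing touches the real
network. In this sketch, `status_t` and `STATUS_OK` are placeholders
carried over from the example declaration, not real Tor types:

    static int
    mock_connect_to_remote(const char *name, status_t *status)
    {
      (void) name;
      *status = STATUS_OK; /* placeholder "success" status */
      return 0;            /* always succeed, with no network I/O */
    }

    static void
    test_connect_uses_stub(void *arg)
    {
      status_t status;
      (void) arg;
      MOCK(connect_to_remote, mock_connect_to_remote);

      /* Anything called from here that uses connect_to_remote() now
       * reaches the stub above instead of the real implementation. */
      tt_int_op(connect_to_remote("www.example.com", &status), OP_EQ, 0);

     done:
      UNMOCK(connect_to_remote);
    }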
### Okay but what should my tests actually do?

We talk above about "test coverage" -- making sure that your tests visit
every line of code, or every branch of code. But visiting the code isn't
enough: we also want to check that each function behaves correctly in its
success cases and its failure cases. For example, consider testing this
function:

    /** Remove all elements E from sl such that E==element. Preserve
     * the order of any elements before E, but elements after E can be
     * rearranged.
     */
    void smartlist_remove(smartlist_t *sl, const void *element);

In order to test it well, you should write tests for at least all of the
following cases. (These would be black-box tests, since we're only looking
at the declared behavior.)

 * Removing an element that is in the smartlist.
 * Removing an element that is not in the smartlist.
 * Removing an element that appears in the smartlist more than once.

When you consider edge cases, you might try:

 * Removing an element from an empty smartlist.
 * Removing the only element of a one-element smartlist.

Now let's look at the implementation:

    void
    smartlist_remove(smartlist_t *sl, const void *element)
    {
      int i;
      if (element == NULL)
        return;
      for (i=0; i < sl->num_used; i++)
        if (sl->list[i] == element) {
          sl->list[i] = sl->list[--sl->num_used]; /* swap with the end */
          i--; /* so we process the new i'th element */
          sl->list[sl->num_used] = NULL;
        }
    }

Based on the implementation, we now see three more edge cases to test:

 * Removing NULL from the list.
 * Removing an element from the end of the list.
 * Removing an element from a position other than the end of the list.
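Each of these cases is easy to express as a short tinytest function. For
instance, here is a sketch of the last case -- removing from a position
other than the end -- which exercises the swap-with-the-end behavior
above:

    static void
    test_smartlist_remove_middle(void *arg)
    {
      smartlist_t *sl = smartlist_new();
      void *a = (void*)"a", *b = (void*)"b", *c = (void*)"c";
      (void) arg;

      smartlist_add(sl, a);
      smartlist_add(sl, b);
      smartlist_add(sl, c);

      /* Removing "b" from the middle swaps "c" into its old slot:
       * order before the removed element is preserved, order after
       * it is not. */
      smartlist_remove(sl, b);
      tt_int_op(smartlist_len(sl), OP_EQ, 2);
      tt_ptr_op(smartlist_get(sl, 0), OP_EQ, a);
      tt_ptr_op(smartlist_get(sl, 1), OP_EQ, c);

     done:
      smartlist_free(sl);
    }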
### What should my tests NOT do?

Tests shouldn't require a network connection.

Whenever possible, tests shouldn't take more than a second. Put the test
into `test-slow` if it genuinely needs to run longer.

Tests should not alter global state unless they run with `TT_FORK`: tests
should not require other tests to be run before or after them.

Tests should not leak memory or other resources. To find out if your tests
are leaking memory, run them under valgrind.

When possible, tests should not be over-fit to the implementation. That is,
the test should verify that the documented behavior is implemented, but
should not break if other permissible behavior is later implemented.

### Advanced techniques: Namespaces

Sometimes, when you're doing a lot of mocking at once, it's convenient to
isolate your identifiers within a single namespace. If this were C++, we'd
already have namespaces, but for C, we do the best we can with macros and
token-pasting.

We have some macros defined for this purpose in `src/test/test.h`. To use
them, you define `NS_MODULE` to a prefix to be used for your identifiers,
and then use other macros in place of identifier names. See
`src/test/test.h` for more documentation.

Integration tests: Calling Tor from the outside
-----------------------------------------------

Some tests need to invoke Tor from the outside, and shouldn't run from the
same process as the Tor test program. Reasons for doing this might include:

 * Testing the actual behavior of Tor when run from the command line
 * Testing that a crash-handler correctly logs a stack trace
 * Verifying that violating a sandbox or capability requirement will
   actually crash the program
 * Needing to run as root in order to test capability inheritance or
   user switching

To add one of these, you generally want a new C program in `src/test`. Add
it to `TESTS` and `noinst_PROGRAMS` if it can run on its own and return
success or failure. If it needs to be invoked multiple times, or it needs
to be wrapped, add a new shell script to `TESTS`, and the new program to
`noinst_PROGRAMS`. If you need access to any environment variable from the
makefile (e.g. `${PYTHON}` for a Python interpreter), then make sure that
the makefile exports them.

Writing integration tests with Stem
-----------------------------------

The 'stem' library includes extensive unit tests for the Tor controller
protocol.

For more information on writing new tests for stem, have a look around
the `test/*` directory in stem, and find a good example to emulate. You
might want to start with
`https://gitweb.torproject.org/stem.git/tree/test/integ/control/controller.py`
to improve Tor's test coverage.

You can run stem tests from tor with `make test-stem`, or see
`https://stem.torproject.org/faq.html#how-do-i-run-the-tests`.

System testing with Chutney
---------------------------

The 'chutney' program configures and launches a set of Tor relays,
authorities, and clients on your local host. It has a "test network"
functionality to send traffic through them and verify that the traffic
arrives correctly.

You can write new test networks by adding them to `networks`. To add
them to Tor's tests, add them to the `test-network` or `test-network-all`
targets in `Makefile.am`.

(Adding new kinds of program to chutney will still require hacking the
code.)