Testworks Reference#

See also: Testworks Usage

The testworks library exports a single module named testworks.

Suites, Tests, and Benchmarks#

test-definer Macro#

Define a new test.

Signature:

define test test-name (#key expected-to-fail-reason, expected-to-fail-test, tags) body end

Parameters:
  • test-name – Name of the test; a Dylan variable name.

  • expected-to-fail-reason (#key) – A <string> or #f. The reason this test is expected to fail.

  • expected-to-fail-test (#key) – An instance of <function>. A function to decide whether the test is expected to fail.

  • tags (#key) – A list of strings to tag this test.

Tests may contain arbitrary code and must have at least one assertion; that is, they must call one of the assert-*, expect-*, or check-* macros. If they have no assertions they are given a NOT IMPLEMENTED result. If an assert-* assertion fails, the test fails and the remainder of the test is skipped; the expect-* macros (and the deprecated check-* macros) instead record the failure and let the test continue. If code outside of an assertion signals an error, the test is marked as CRASHED and the rest of the test is skipped.

expected-to-fail-test, if supplied, is a function of no arguments that returns a true value if the test is expected to fail. Such a failure is then classified as an EXPECTED FAILURE result (which is a passing result). If the test passes rather than failing, it has an UNEXPECTED SUCCESS result (which is a type of failure). A common usage is to make expected failure conditional on the OS: method () $os-name == #"win32" end. This option has no effect on tests which are NOT IMPLEMENTED or CRASHED.

expected-to-fail-reason should be supplied if the test is expected to fail. It is good practice to reference a bug (a URL or at least a bug number).

Note

If expected-to-fail-reason is supplied without expected-to-fail-test the test is implicitly always expected to fail.

tags provide a way to select or filter out specific tests during a test run. The Testworks command line (provided by run-test-application) has a --tag option to run only tests that match (or don't match) specific tags.
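
As a sketch of how these options fit together (the test name, body, tags, and bug reference below are all illustrative, as are <time> and parse-time):

define test test-parse-time-basic
    (expected-to-fail-reason: "time zone database missing on Windows (bug 123)",
     expected-to-fail-test: method () $os-name == #"win32" end,
     tags: #("time", "smoke"))
  assert-instance?(<time>, parse-time("1970-01-01T00:00:00Z"));
end test;

A test defined this way can then be selected or excluded with the --tag option described above.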

benchmark-definer Macro#

Define a new benchmark.

Signature:

define benchmark benchmark-name (#key expected-to-fail-reason, expected-to-fail-test, tags) body end

Parameters:
  • benchmark-name – Name of the benchmark; a Dylan variable name.

  • expected-to-fail-reason (#key) – A <string> or #f. The reason this benchmark is expected to fail.

  • expected-to-fail-test (#key) – An instance of <function>. A function to decide whether the benchmark is expected to fail.

  • tags (#key) – A list of strings to tag this benchmark.

Benchmarks may contain arbitrary code and do not require any assertions. If the benchmark signals an error it is marked as CRASHED. Other than this, and minor differences in how the results are displayed, benchmarks are the same as tests.
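
A minimal benchmark might look like this (the function being measured, parse-time, is illustrative):

define benchmark benchmark-parse-time ()
  parse-time("1970-01-01T00:00:00Z");
end benchmark;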

benchmark-repeat Macro#

Repeatedly execute a block of code, recording profiling information for each execution.

Signature:

benchmark-repeat (#key iterations = 1) body end

Parameters:
  • iterations – Number of times to execute body.

Results for benchmarks that call benchmark-repeat display the min, max, mean, and median run times across all iterations.

It may be necessary to use --report=full to display detailed benchmark statistics.

At the beginning of each iteration, benchmark-repeat first collects garbage in an attempt to reduce variability across executions.
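
As a sketch, a benchmark that uses benchmark-repeat to time the same body many times (the body being measured is illustrative):

define benchmark benchmark-concatenate ()
  benchmark-repeat (iterations: 1000)
    concatenate("abc", "def");
  end;
end benchmark;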

suite-definer Macro#

Define a new test suite.

Signature:

define suite suite-name (#key setup-function, cleanup-function) body end

Parameters:
  • suite-name – Name of the suite; a Dylan variable name.

  • setup-function (#key) – A function to perform setup before the suite starts.

  • cleanup-function (#key) – A function to perform teardown after the suite finishes.

Suites provide a way to group tests and other suites into a single executable unit. Suites may be nested arbitrarily.

setup-function is executed before any tests or sub-suites are run. If setup-function signals an error the entire suite is skipped and marked as CRASHED.

cleanup-function is executed after all sub-suites and tests have completed, regardless of whether an error is signaled.
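
As a sketch (the test, sub-suite, and setup/cleanup functions named here are assumed to be defined elsewhere):

define suite time-test-suite (setup-function: create-test-database,
                              cleanup-function: destroy-test-database)
  test test-parse-time-basic;
  suite time-formatting-suite;
end suite;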

interface-specification-suite-definer Macro#

Define a test suite to verify an API.

Signature:

define interface-specification-suite suite-name () specs end;

Parameters:
  • suite-name – Name of the suite; a Dylan variable name.

This macro is useful to verify that public interfaces to your library don’t change unintentionally.

specs are clauses separated by semicolons, specifying the attributes of an exported name. Each spec looks much like the definition of the name being tested. The following example has one of each kind of spec:

define interface-specification-suite time-specification-suite ()
  sealed instantiable abstract class <time> (<object>);
  generic function parse-time (<string>, #"key") => (<time>);
  variable *foo* :: <string>;
  constant $unix-epoch :: <time>;
end;

The following sections explain the syntax of each kind of spec in detail. Note that there is no way to verify macros automatically and therefore there is no “macro” spec.

class specs

Syntax: modifiers class name (superclasses) [, test-options ];

modifiers

sealed or open, primary or free, abstract or concrete, and instantiable. Currently the first two pairs are unused, but you may want to specify them anyway, to keep the spec in sync with the code.

If instantiable is specified, Testworks will try to make an instance of name by calling make with no initialization arguments. If your class requires init arguments, you must define a method on make-test-instance:

define method make-test-instance
    (class == <my-class>) => (instance :: <my-class>)
  make(<my-class>, ...init args...)
end

name

Name of the class to verify.

superclasses

Comma-separated list of superclass names.

test-options

Any options valid for test-definer. For example, expected-to-fail-reason: "foo".

function specs

Syntax: modifiers function name (parameter-types) => (value-types) [, test-options ];

modifiers

generic

name

Name of the function. Note that function specs are used both for functions created with define function (which are essentially bare methods bound to a name, as with define constant m = method () ... end) and for generic functions.

parameter-types

Comma-separated list of parameter type names, possibly empty. Where #rest, #key, and #all-keys appear in the corresponding function definition, write them as the symbols #"rest", #"key", and #"all-keys" (i.e., with a leading # and double quotes). Keyword arguments are specified by name only, as symbols, without type qualifiers. Examples from the dylan-test-suite:

open generic function make
    (<type>, #"rest", #"key", #"all-keys") => (<object>);
open generic function copy-sequence
    (<sequence>, #"key", #"start", #"end") => (<sequence>);

value-types

Comma-separated list of return value type names, possibly empty.

test-options

Any options valid for test-definer. For example, expected-to-fail-reason: "foo".

variable specs

Syntax: variable name :: type [, test-options ];

name

Name of the variable.

type

Type of the variable.

test-options

Any options valid for test-definer. For example, expected-to-fail-reason: "foo".

constant specs

Syntax: constant name :: type [, test-options ];

name

Name of the constant.

type

Type of the constant.

test-options

Any options valid for test-definer. For example, expected-to-fail-reason: "foo".

Assertions#

Assertions are the smallest unit of verification in Testworks. They must appear within the body of a test.

Assertion macros that take both an expected value and the expression to be tested conventionally expect the value first and the expression second, although the macros do not all require this ordering:

assert-not-equal(5, 2 + 2);
assert-instance?(<integer>, 2 + 2);

All assertion macros accept a format string and format arguments at the end to use as the description of the assertion. The description should be stated in the positive sense. For example:

assert-equal(4, 2 + 2, "2 + 2 equals 4")

These are the available assertion macros:

assert-true Macro#

Assert that an expression evaluates to a true value. Skip the remainder of the test on failure.

Signature:

assert-true expression [ description … ]

Parameters:
  • expression – any expression

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “two is less than three”. If no description is supplied one is automatically generated based on the text of the expression.

Discussion:

Importantly, this does not test that the expression is exactly #t, but rather that it is not #f. If you want to test explicitly for equality with #t, use assert-equal(#t, ...).

Note

It is also possible to use assert from the common-dylan library, which will signal an error upon failure and skip the remainder of the test.

Example:
assert-true(has-fleas?(my-dog))
assert-true(has-fleas?(my-dog), "my dog has fleas")
expect Macro#

Assert that an expression evaluates to a true value. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-true but does not terminate the test. Use this only if subsequent assertions are independent of this one.
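
A minimal usage sketch, mirroring the assert-true examples above:

expect(has-fleas?(my-dog))
expect(has-fleas?(my-dog), "my dog has fleas")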

assert-false Macro#

Assert that an expression evaluates to #f. Skip the remainder of the test on failure.

Signature:

assert-false expression [ description … ]

Parameters:
  • expression – any expression

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “two is less than three”. If no description is supplied one is automatically generated based on the text of the expression.

Example:
assert-false(3 < 2)
assert-false(6 = 7, "six equals seven")
expect-false Macro#

Assert that an expression evaluates to #f. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-false but does not terminate the test. Use this only if subsequent assertions are independent of this one.

assert-equal Macro#

Assert that two values are equal using = as the comparison function. Using this macro is preferable to using assert-true(a = b) because the failure messages are much better when comparing certain types of objects, such as collections.

The expected value should always be the first expression.

Signature:

assert-equal want got [ description … ]

Parameters:
  • want – any expression; traditionally the expected result

  • got – any expression; traditionally the test result

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “two is less than three”. If no description is supplied one is automatically generated based on the text of the expression.

Example:
assert-equal(2, my-complicated-method())
assert-equal(want, f(), "f() returned %=", want)
expect-equal Macro#

Assert that two values are equal using = as the comparison function. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-equal but does not terminate the test. Use this only if subsequent assertions are independent of this one.

assert-not-equal Macro#

Assert that two values are not equal using ~= as the comparison function. Skip the remainder of the test on failure.

Signature:

assert-not-equal expression1 expression2 [ description … ]

Parameters:
  • expression1 – any expression

  • expression2 – any expression

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “two is less than three”. If no description is supplied one is automatically generated based on the text of the expression.

Discussion:

Using this macro is preferable to using assert-true(a ~= b) or assert-false(a = b) because the generated failure messages can be better.

Example:
assert-not-equal(2, my-complicated-method())
assert-not-equal(want, got, "want does not equal got")
expect-not-equal Macro#

Assert that two values are not equal using ~= as the comparison function. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-not-equal but does not terminate the test. Use this only if subsequent assertions are independent of this one.

assert-signals Macro#

Assert that an expression signals a given condition class. Skip the remainder of the test on failure.

Signature:

assert-signals condition expression [ description … ]

Parameters:
  • condition – an expression that yields a condition class

  • expression – any expression

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “f() signals <error>”. If no description is supplied one is automatically generated based on the text of the expression.

The assertion succeeds if the expected condition is signaled by the evaluation of expression.

Example:
assert-signals(<division-by-zero-error>, 3 / 0)
assert-signals(<division-by-zero-error>, 3 / 0,
               "my super special description")
expect-condition Macro#

Assert that an expression signals a given condition class. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-signals but does not terminate the test. Use this only if subsequent assertions are independent of this one.
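
A minimal usage sketch, assuming the same argument order as assert-signals:

expect-condition(<division-by-zero-error>, 3 / 0)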

assert-no-errors Macro#

Assert that an expression does not signal any errors. Skip the remainder of the test on failure.

Signature:

assert-no-errors expression [ description … ]

Parameters:
  • expression – any expression

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “f(3) does not signal <my-error>”. If no description is supplied one is automatically generated based on the text of the expression.

Discussion:

The assertion succeeds if no condition is signaled by the evaluation of expression.

Use of this macro is preferable to simply executing expression as part of the test body because it can clarify the purpose of the test by telling the reader “here’s an expression that is explicitly being tested, and not just part of the test setup.” Also, the assertion failure will show up in generated reports.

Example:
assert-no-errors(my-hairy-logic())
assert-no-errors(my-hairy-logic(),
                 "hairy logic completes without error")
expect-no-condition Macro#

Assert that an expression does not signal a condition. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-no-errors but does not terminate the test. Use this only if subsequent assertions are independent of this one.

assert-instance? Macro#

Assert that the result of an expression is an instance of a given type. Skip the remainder of the test on failure.

Signature:

assert-instance? type expression [ description … ]

Parameters:
  • type – The expected type.

  • expression – An expression.

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “f() returns an instance of <foo>”. If no description is supplied one is automatically generated based on the text of the expression.

Discussion:

Warning

The arguments to this assertion follow the typical argument ordering of Testworks assertions with the desired value before the expression that represents the test. As such, the desired type is the first parameter to this assertion while it is the second parameter for instance?.

Example:
assert-instance?(<type>, subclass(<string>));
assert-instance?(<type>, subclass(<string>), "subclass returns a type");
expect-instance? Macro#

Assert that the result of an expression is an instance of a given type. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-instance? but does not terminate the test. Use this only if subsequent assertions are independent of this one.

assert-not-instance? Macro#

Assert that the result of an expression is not an instance of a given class. Skip the remainder of the test on failure.

Signature:

assert-not-instance? type expression [ description … ]

Parameters:
  • type – The type.

  • expression – An expression.

  • description – An optional description of what the assertion tests. This may be a single value of any type or a format string and format arguments. It should be stated in positive form, such as “f() does not return a <string>”. If no description is supplied one is automatically generated based on the text of the expression.

Discussion:

Warning

The arguments to this assertion follow the typical argument ordering of Testworks assertions with the desired value before the expression that represents the test. As such, the desired type is the first parameter to this assertion while it is the second parameter for instance?.

Example:
assert-not-instance?(limited(<integer>, min: 0), -1);
assert-not-instance?(limited(<integer>, min: 0), -1,
                     "values below lower bound are not instances");
expect-not-instance? Macro#

Assert that the result of an expression is not an instance of a given class. Do not skip the remainder of the test on failure.

Discussion:

The same as assert-not-instance? but does not terminate the test. Use this only if subsequent assertions are independent of this one.

Checks#

Deprecated since version 3.0: Use the expect-* macros documented above.

Checks are like Assertions but they do not cause the test to terminate when they fail or crash. Only use them if later checks or assertions do not depend on them passing, and where a failure will not cause a cascade of further failures (for example, a failing check inside a tight loop can produce a large number of failure reports).

Checks also differ from the assert-* macros in that they require a description (or “name”) as their first argument. We intend to fix this inconsistency in the future.

These are the available checks:

check Macro#

Perform a check within a test.

Deprecated since version 3.0: Use expect instead.

Signature:

check name function #rest arguments

Parameters:
  • name – An instance of <string>.

  • function – The function to check.

  • arguments (#rest) – The arguments for function.

Example:
check("Test less than operator", \<, 2, 3)
check-condition Macro#

Check that a given condition is signalled.

Deprecated since version 3.0: Use expect-condition instead.

Signature:

check-condition name expected expression

Parameters:
  • name – An instance of <string>.

  • expected – The expected condition class.

  • expression – An expression.

Example:
check-condition("format-to-string crashes when missing an argument",
                <error>, format-to-string("Hello %s"));
check-equal Macro#

Check that 2 expressions are equal.

Deprecated since version 3.0: Use expect-equal instead.

Signature:

check-equal name expected expression

Parameters:
  • name – An instance of <string>.

  • expected – The expected value of expression.

  • expression – An expression.

Example:
check-equal("condition-to-string of an error produces the correct string",
            "Hello",
            condition-to-string(make(<simple-error>, format-string: "Hello")));
check-false Macro#

Check that an expression has a result of #f.

Deprecated since version 3.0: Use expect-false instead.

Signature:

check-false name expression

Parameters:
  • name – An instance of <string>.

  • expression – An expression.

Example:
check-false("unsupplied?(#f) == #f", unsupplied?(#f));
check-instance? Macro#

Check that the result of an expression is an instance of a given type.

Deprecated since version 3.0: Use expect-instance? instead.

Signature:

check-instance? name type expression

Parameters:
  • name – An instance of <string>.

  • type – The expected type.

  • expression – An expression.

Example:
check-instance?("subclass returns type",
                <type>, subclass(<string>));
check-true Macro#

Check that the result of an expression is not #f.

Deprecated since version 3.0: Use expect instead.

Signature:

check-true name expression

Parameters:
  • name – An instance of <string>.

  • expression – An expression.

Discussion:

Note that if you want to explicitly check if an expression evaluates to #t, you should use check-equal.

Example:
check-true("unsupplied?($unsupplied)", unsupplied?($unsupplied));

Test Execution#

run-test-application Function#

Run a test suite or test as part of a stand-alone test executable.

Signature:

run-test-application #rest suite-or-test => ()

Parameters:
  • suite-or-test – (optional) An instance of <suite> or <runnable>. If not supplied then all tests and benchmarks are run.

This is the main entry point for running a set of tests with Testworks. It parses the command line and, based on the specified options, selects the set of suites or tests to run, runs them, and generates a final report of the results.

Internally, run-test-application creates a <test-runner> based on the command-line options and then calls run-tests with the runner and suite-or-test.
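
A stand-alone test executable typically does little more than call this function; a minimal sketch (the suite name time-test-suite is illustrative, and calling run-test-application with no arguments runs all defined tests and benchmarks):

define function main ()
  run-test-application(time-test-suite)
end function;

main();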

test-option Function#

Return an option value passed on the test-application command line.

Signature:

test-option name #key default => value

Parameters:
  • name – An instance of type <string>.

  • default (#key) – An instance of type <string>.

Values:
  • value – An instance of type <string>.

Returns an option value passed to the test on the test application command line, in the form name=value. If no option value was given, the default value is returned if one was supplied; otherwise an error is signaled.

This feature allows information about external resources, such as path names of reference data files, or the hostname of a test database server, to be supplied on the command line of the test application and retrieved by the test.
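
As a sketch, a test that reads such an option (the option name db-host and the function connect-to-database are hypothetical):

define test test-database-connection ()
  // db-host may be supplied on the test application command line as db-host=somehost.
  let host = test-option("db-host", default: "localhost");
  assert-no-errors(connect-to-database(host));
end test;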

test-temp-directory Function#

Retrieve a unique temporary directory for the current test to use.

Signature:

test-temp-directory => directory

Values:
  • directory – An instance of type <directory-locator>.

Returns a directory (a <directory-locator>) that may be used for temporary files created by the test or benchmark. The directory is created the first time this function is called for each test or benchmark and is not deleted after the test run is complete in case it’s useful for post-mortem analysis. The directory is named _test/<user>-<timestamp>/<test-name> and is rooted at $DYLAN, if defined, or in the current directory otherwise.

Note

In the <test-name> component of the directory both slash (/) and backslash (\) are replaced by underscore (_).
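
As a sketch of typical usage (generate-report, the function under test, is hypothetical):

define test test-report-generation ()
  // Each test gets its own directory, so files written here cannot
  // collide with files written by other tests.
  let dir = test-temp-directory();
  assert-no-errors(generate-report(dir));
end test;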

write-test-file Function#

Write a file in the current test’s temp directory.

Signature:

write-test-file filename #key contents => locator

Parameters:
  • filename – An instance of <pathname> (i.e., a string or a locator). The name may be a relative path and if it contains the path separator character, subdirectories will be created.

  • contents (#key) – An instance of <string> to be written to the file. Defaults to the empty string.

Values:
  • locator – An instance of <file-locator> which is the full, absolute pathname of the created file.

When your test requires files to be present this is a handy utility to create them. Examples:

write-test-file("x.txt");
let locator = write-test-file("a/b/c.log", contents: "abc");