XPCShell tests
==============

xpcshell tests are quick-to-run tests that are generally used to write
unit tests. They do not have access to the full browser chrome, unlike
``browser chrome tests``, and so have much lower overhead. They are
typically run using ``./mach xpcshell-test``, which initiates a new
``xpcshell`` session with the xpcshell testing harness. Anything
available to the XPCOM layer (through scriptable interfaces) can be
tested with xpcshell. See ``Mozilla automated testing`` and ``pages
tagged "automated testing"`` for more information.

Introducing xpcshell testing
----------------------------

xpcshell test filenames must start with ``test_``.

Creating a new test directory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to create a new test directory, follow the steps here. The
test runner needs to know about the existence of the tests and how to
configure them, through the ``xpcshell.toml`` manifest file.

First, add an ``XPCSHELL_TESTS_MANIFESTS += ['xpcshell.toml']`` declaration
(with the correct relative ``xpcshell.toml`` path) to the ``moz.build``
file located in or above the directory.

Then create an empty ``xpcshell.toml`` file; it tells the build system
about the individual tests and provides any additional configuration
options.
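
A minimal manifest for a fresh directory might look like this (a sketch;
the ``head.js`` file and test name here are hypothetical):

```toml
# xpcshell.toml -- minimal sketch; file names are hypothetical
[DEFAULT]
head = "head.js"

["test_example.js"]
```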

Creating a new test in an existing directory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you're creating a new test in an existing directory, you can simply
run:

.. code:: bash

  $ ./mach addtest --suite xpcshell path/to/test/test_example.js
  $ hg add path/to/test/test_example.js

This will automatically create the test file and add it to
``xpcshell.toml``; the second line adds it to your commit.

The test file contains an empty test which will give you an idea of how
to write a test. There are plenty more examples throughout
mozilla-central.

Running tests
-------------

To run a test, execute the ``mach`` command from the root of the Gecko
source code directory:

.. code:: bash

  # Run a single test:
  $ ./mach xpcshell-test path/to/tests/test_example.js

  # Run an entire test suite in a folder:
  $ ./mach xpcshell-test path/to/tests/

  # Or run any type of test, including both xpcshell and browser chrome tests:
  $ ./mach test path/to/tests/test_example.js

The test is executed by the testing harness, which will call in turn:

-  ``run_test`` (if it exists).
-  Any functions added with ``add_task`` or ``add_test``, in the order
   they were defined in the file.

See also the notes below around ``add_task`` and ``add_test``.
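
The sequencing of ``add_test`` and ``run_next_test`` can be sketched with
a toy model of the harness queue (plain JavaScript; this is not the real
harness code, just an illustration of the hand-off protocol):

```javascript
// Toy model of the harness test queue, for illustration only.
const queue = [];
const log = [];

function add_test(fn) {
  queue.push(fn); // tests run in registration order
}

function run_next_test() {
  const next = queue.shift();
  if (next) {
    next();
  }
}

add_test(function test_first() {
  log.push("first");
  run_next_test(); // each test hands control to the next when done
});

add_test(function test_second() {
  log.push("second");
  run_next_test();
});

// In the real harness, run_test() (or the harness itself) kicks this off.
run_next_test();
console.log(log.join(", ")); // → first, second
```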

xpcshell Testing API
--------------------

xpcshell tests have access to the following functions. They are defined
in
:searchfox:`testing/xpcshell/head.js <testing/xpcshell/head.js>`
and
:searchfox:`testing/modules/Assert.sys.mjs <testing/modules/Assert.sys.mjs>`.

Assertions
^^^^^^^^^^

- ``Assert.ok(truthyOrFalsy[, message])``
- ``Assert.equal(actual, expected[, message])``
- ``Assert.notEqual(actual, expected[, message])``
- ``Assert.deepEqual(actual, expected[, message])``
- ``Assert.notDeepEqual(actual, expected[, message])``
- ``Assert.strictEqual(actual, expected[, message])``
- ``Assert.notStrictEqual(actual, expected[, message])``
- ``Assert.rejects(actual, expected[, message])``
- ``Assert.greater(actual, expected[, message])``
- ``Assert.greaterOrEqual(actual, expected[, message])``
- ``Assert.less(actual, expected[, message])``
- ``Assert.lessOrEqual(actual, expected[, message])``


These assertion methods are provided by
:searchfox:`testing/modules/Assert.sys.mjs <testing/modules/Assert.sys.mjs>`.
It implements the `CommonJS Unit Testing specification version
1.1 <http://wiki.commonjs.org/wiki/Unit_Testing/1.1>`__, which
provides a basic, standardized interface for performing in-code
logical assertions with optional, customizable error reporting. It is
*highly* recommended to use these assertion methods instead of the
ones mentioned below. On all of these methods you can drop the
``Assert.`` prefix, e.g. ``ok(true)`` rather than ``Assert.ok(true)``;
however, keeping the ``Assert.`` prefix is more descriptive and makes
it easier to spot where the assertions are.
``Assert.throws(callback, expectedException[, message])``
``Assert.throws(callback[, message])``
  Asserts that the provided callback function throws an exception. The
  ``expectedException`` argument can be an ``Error`` instance, or a
  regular expression matching part of the error message (as in
  ``Assert.throws(() => a.b, /is not defined/)``).
``Assert.rejects(promise, expectedException[, message])``
  Asserts that the provided promise is rejected. Note that this should
  be called prefixed with an ``await``. The ``expectedException``
  argument can be an ``Error`` instance, or a regular expression
  matching part of the error message. Example:
  ``await Assert.rejects(myPromise, /bad response/);``

Test case registration and execution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

``add_task([condition, ]testFunc)``
  Add an asynchronous function to the list of tests that are to be
  run asynchronously. Whenever the function ``await``\ s a
  `Promise <https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise>`__,
  the test runner waits until the promise is resolved or rejected
  before proceeding. Rejected promises are converted into exceptions,
  and resolved promises are converted into values.
  You can optionally specify a condition which causes the test function
  to be skipped; see `Adding conditions through the add_task or
  add_test
  function <#adding-conditions-through-the-add-task-or-add-test-function>`__
  for details.
  For tests that use ``add_task()``, the ``run_test()`` function is
  optional, but if present, it should also call ``run_next_test()`` to
  start execution of all asynchronous test functions. The test cases
  must not call ``run_next_test()``; it is called automatically when
  the task finishes. See `Async tests <#async-tests>`__, below, for
  more information.
``add_test([condition, ]testFunction)``
  Add a test function to the list of tests that are to be run
  asynchronously.
  You can optionally specify a condition which causes the test function
  to be skipped; see `Adding conditions through the add_task or
  add_test
  function <#adding-conditions-through-the-add-task-or-add-test-function>`__
  for details.
  Each test function must call ``run_next_test()`` when it's done. For
  tests that use ``add_test()``, the ``run_test()`` function is
  optional, but if present, it should also call ``run_next_test()`` to
  start execution of all asynchronous test functions. In most cases,
  you should rather use the more readable variant ``add_task()``. See
  `Async tests <#async-tests>`__, below, for more information.
``run_next_test()``
  Run the next test function from the list of asynchronous tests. Each
  test function must call ``run_next_test()`` when it's done.
  ``run_test()`` should also call ``run_next_test()`` to start
  execution of all asynchronous test functions. See `Async
  tests <#async-tests>`__, below, for more information.
``registerCleanupFunction(callback)``
  Executes the function ``callback`` after the current JS test file has
  finished running, regardless of whether the tests inside it pass or
  fail. You can use this to clean up anything that might otherwise
  cause problems between test runs.
  If ``callback`` returns a ``Promise``, the test will not finish until
  the promise is fulfilled or rejected (making the termination function
  asynchronous).
  Cleanup functions are called in reverse order of registration.
``do_test_pending()``
  Delay exit of the test until ``do_test_finished()`` is called.
  ``do_test_pending()`` may be called multiple times, and each call
  must be paired with a ``do_test_finished()`` call before the unit
  test will exit.
``do_test_finished()``
  Call this function to inform the test framework that an asynchronous
  operation has completed. If all asynchronous operations have
  completed (i.e., every ``do_test_pending()`` has been matched with a
  ``do_test_finished()`` in execution), then the unit test will exit.
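
The pending/finished pairing described above amounts to a simple
counter; a toy model in plain JavaScript (not the real harness, which
tracks this internally):

```javascript
// Toy model of the do_test_pending()/do_test_finished() pairing.
let pendingCount = 0;
let testExited = false;

function do_test_pending() {
  pendingCount++;
}

function do_test_finished() {
  pendingCount--;
  if (pendingCount === 0) {
    testExited = true; // harness lets the test exit once all pairs match
  }
}

do_test_pending(); // two async operations in flight
do_test_pending();

do_test_finished();
console.log(testExited); // one operation still pending → false

do_test_finished();
console.log(testExited); // all pairs matched → true
```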

Environment
^^^^^^^^^^^

``do_get_file(testdirRelativePath, allowNonexistent)``
  Returns an ``nsILocalFile`` object representing the given file (or
  directory) in the test directory. For example, if your test is
  unit/test_something.js, and you need to access unit/data/somefile,
  you would call ``do_get_file('data/somefile')``. The given path must
  be delimited with forward slashes. You can use this to access
  test-specific auxiliary files if your test requires access to
  external files. Note that you can also use this function to get
  directories.

  .. note::

     If your test needs access to one or more files that aren't in the
     test directory, you should install those files to the test
     directory in the Makefile where you specify ``XPCSHELL_TESTS``.
     For an example, see ``netwerk/test/Makefile.in#117``.
``do_get_profile()``
  Registers a directory with the profile service and returns an
  ``nsILocalFile`` object representing that directory. It also makes
  sure that the **profile-change-net-teardown**,
  **profile-change-teardown**, and **profile-before-change** `observer
  notifications </en/Observer_Notifications#Application_shutdown>`__
  are sent before the test finishes. This is useful if the components
  loaded in the test observe them to do cleanup on shutdown (e.g.,
  places).

  .. note::

     ``registerCleanupFunction`` will perform any cleanup operation
     *before* the profile and the network are shut down by the
     observer notifications.
``do_get_idle()``
  By default, xpcshell tests disable the idle service, so that idle
  time is always reported as 0. Calling this function re-enables the
  service and returns a handle to it; the idle time will then be
  correctly requested from the underlying OS. The idle-daily
  notification may be fired when requesting the idle service. It is
  suggested to always get the service through this method if the test
  has to use it.
``do_get_cwd()``
  Returns an ``nsILocalFile`` object representing the test directory.
  This is the directory containing the test file when it is currently
  being run. Your test can write to this directory as well as read any
  files located alongside your test. Your test should be careful to
  ensure that it will not fail if a file it intends to write already
  exists, however.
``load(testdirRelativePath)``
  Imports the JavaScript file referenced by ``testdirRelativePath``
  into the global script context, executing the code inside it. The
  file specified is a file within the test directory. For example, if
  your test is unit/test_something.js and you have another file
  unit/extra_helpers.js, you can load the second file from the first by
  calling ``load('extra_helpers.js')``.

Utility
^^^^^^^

``do_parse_document(path, type)``
  Parses and returns a DOM document.
``executeSoon(callback)``
  Executes the function ``callback`` on a later pass through the event
  loop. Use this when you want some code to execute after the current
  function has finished executing, but you don't care about a specific
  time delay. This function will automatically insert a
  ``do_test_pending`` / ``do_test_finished`` pair for you.
``do_timeout(delay, fun)``
  Call this function to schedule a timeout. The given function will be
  called with no arguments after the specified delay (in milliseconds).
  Note that you must call ``do_test_pending`` so that the test isn't
  completed before your timer fires, and you must call
  ``do_test_finished`` when the actions you perform in the timeout
  complete, if you have no other functionality to test. (Note: the
  function argument used to be a string argument to be passed to eval,
  and some older branches support only a string argument or support
  both string and function.)

Multiprocess communication
^^^^^^^^^^^^^^^^^^^^^^^^^^

``do_send_remote_message(name, optionalData)``
  Asynchronously send a message to all remote processes. Pairs with
  ``do_await_remote_message`` or equivalent ProcessMessageManager
  listeners.
``do_await_remote_message(name, optionalCallback)``
  Returns a promise that is resolved when the message is received. Must
  be paired with ``do_send_remote_message`` or equivalent
  ProcessMessageManager calls. If **optionalCallback** is provided, the
  callback must call ``do_test_finished``. If optionalData is passed
  to ``do_send_remote_message``, then that data is the first argument to
  **optionalCallback** or the value to which the promise resolves.


xpcshell.toml manifest
----------------------

The manifest controls what tests are included in a test suite, and the
configuration of the tests. It is loaded via the
``XPCSHELL_TESTS_MANIFESTS`` property in ``moz.build``.

The following are all of the configuration options for a test suite as
listed under the ``[DEFAULT]`` section of the manifest.

``tags``
  Tests can be filtered by tags when running multiple tests. The
  command for mach is ``./mach xpcshell-test --tag TAGNAME``.
``head``
  The relative path to the head JavaScript file, which is run once
  before a test suite is run. The variables declared in the root scope
  are available as globals in the test files. See `Test head and
  support files <#test-head-and-support-files>`__ for more information
  and usage.
``firefox-appdir``
  Set this to "browser" if your tests need access to things in the
  browser/ directory (e.g. additional XPCOM services that live there).
``skip-if`` ``run-if`` ``fail-if``
  For this entire test suite, run the tests only if they meet certain
  conditions. See `Adding conditions in the xpcshell.toml
  manifest <#adding-conditions-in-the-xpcshell-toml-manifest>`__ for how
  to use these properties.
``support-files``
  Make files available via the ``resource://test/[filename]`` path to
  the tests. The path can be relative to other directories, but it will
  be served only with the filename. See `Test head and support
  files <#test-head-and-support-files>`__ for more information and
  usage.
``[test_*]``
  Test file names must start with ``test_`` and are listed in square
  brackets.
``requesttimeoutfactor``
  A multiplier applied to the default test timeout. The default timeout for
  xpcshell tests is 30 seconds. Setting ``requesttimeoutfactor = 2`` will
  increase the timeout to 60 seconds (30 × 2). This can be set either at the
  manifest level (under ``[DEFAULT]``) to apply to all tests, or on individual
  test entries to apply only to specific tests.

  **Important:** Slower platforms (such as Android, debug builds, TSan, ASan) already
  have platform-wide timeout factors defined in ``taskcluster/kinds/test/xpcshell.yml``
  that apply to all tests running on those platforms. These platform factors are
  multiplied with any manifest-level or test-level factors you specify. Therefore,
  you should **not** set ``requesttimeoutfactor`` in manifests simply because a
  platform is generally slower; only use it when a specific test needs extra time
  beyond what the platform factor already provides.

  This should be used when tests legitimately require more time due to:

  - Specific tests being disproportionately slower on certain platforms (beyond the general platform slowness)
  - Complex operations that take longer than the default timeout (e.g., large database operations, extensive network tests)
  - Tests that run multiple time-consuming operations sequentially

  You should **not** use this for tests that are slow due to inefficient test code
  or unnecessary waits. Consider profiling and refactoring such tests instead.

  Example usage in a manifest:

  .. code:: toml

     [DEFAULT]
     # Apply a 2x timeout factor to all tests in this manifest
     requesttimeoutfactor = 2

     ["test_slow_on_windows.js"]
     # This test needs a 3x timeout (90 seconds) on Windows
     requesttimeoutfactor = 3

  When a test-level factor is specified, it replaces (not multiplies) the manifest-level
  factor. For example, if the manifest has ``requesttimeoutfactor = 2`` and a test has
  ``requesttimeoutfactor = 3``, the test will use a 3x factor, not 6x.
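
Putting several of the ``[DEFAULT]`` options above together, a manifest
might look like the following sketch (file names, tag, and conditions
are hypothetical):

```toml
# Hypothetical manifest combining the [DEFAULT] options described above.
[DEFAULT]
head = "head.js"
tags = "example-tag"
firefox-appdir = "browser"
skip-if = ["os == 'android'"]
support-files = ["data/sample.json"]

["test_example.js"]

["test_slow.js"]
requesttimeoutfactor = 2
```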


Creating a new xpcshell.toml file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When creating a new directory and new xpcshell.toml manifest file, the
following must be added to a moz.build file near that file in the
directory hierarchy:

.. code:: python

  XPCSHELL_TESTS_MANIFESTS += ['path/to/xpcshell.toml']

Typically, the moz.build containing *XPCSHELL_TESTS_MANIFESTS* is not in
the same directory as *xpcshell.toml*, but rather in a parent directory.
Common directory structures look like:

.. code:: text

  feature
  ├──moz.build
  └──tests/xpcshell
     └──xpcshell.toml

  # or

  feature
  ├──moz.build
  └──tests
     ├──moz.build
     └──xpcshell
        └──xpcshell.toml


Test head and support files
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Typically in a test suite, similar setup code and dependencies will need
to be loaded across each test. This can be done through the test head,
which is the file declared in the ``xpcshell.toml`` manifest file under
the ``head`` property. The file itself is typically called ``head.js``.
Any variable declared in the test head will be in the global scope of
each test in that test suite.

In addition to the test head, other support files can be declared in the
``xpcshell.toml`` manifest file. This is done through the
``support-files`` declaration. These files will be made available
through the URL ``resource://test`` plus the name of the file. These
files can then be loaded using the ``ChromeUtils.importESModule``
function or other loaders. The support files can be located in other
directories as well, and they will be made available by their filename.

.. code:: text

  # File structure:

  path/to/tests
  ├──head.js
  ├──module.mjs
  ├──moz.build
  ├──test_example.js
  └──xpcshell.toml

.. code:: toml

  # xpcshell.toml
  [DEFAULT]
  head = "head.js"
  support-files = [
    "./module.mjs",
    "../../some/other/file.mjs",
  ]

  ["test_example.js"]

.. code:: js

  // head.js
  var globalValue = "A global value.";

  // Import support-files.
  const { foo } = ChromeUtils.importESModule("resource://test/module.mjs");
  const { bar } = ChromeUtils.importESModule("resource://test/file.mjs");

.. code:: js

  // test_example.js
  function run_test() {
    equal(globalValue, "A global value.", "Declarations in head.js can be accessed");
  }


Additional testing considerations
---------------------------------

Async tests
^^^^^^^^^^^

Asynchronous tests (that is, those whose success cannot be determined
until after ``run_test`` finishes) can be written in a variety of ways.

Task-based asynchronous tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The easiest is using the ``add_task`` helper. ``add_task`` can take an
asynchronous function as a parameter. ``add_task`` tests are run
automatically if you don't have a ``run_test`` function.

.. code:: js

  add_task(async function test_foo() {
    let foo = await makeFoo(); // makeFoo() returns a Promise<foo>
    Assert.equal(foo, expectedFoo, "Should have received the expected object");
  });

  add_task(async function test_bar() {
    let bar = await makeBar(); // makeBar() returns a Promise<bar>
    Assert.equal(bar, expectedBar, "Should have received the expected object");
  });

Callback-based asynchronous tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also use ``add_test``, which takes a function and adds it to the
list of asynchronously-run functions. Each function given to
``add_test`` must also call ``run_next_test`` at its end. You should
normally use ``add_task`` instead of ``add_test``, but you may see
``add_test`` in existing tests.

.. code:: js

  add_test(function test_foo() {
    makeFoo(function callback(foo) { // makeFoo invokes a callback<foo> once completed
      equal(foo, expectedFoo);
      run_next_test();
    });
  });

  add_test(function test_bar() {
    makeBar(function callback(bar) {
      equal(bar, expectedBar);
      run_next_test();
    });
  });


Other tests
^^^^^^^^^^^

We can also tell the test harness not to kill the test process once
``run_test()`` is finished, but to keep spinning the event loop until
our callbacks have been called and our test has completed. This can be
achieved with ``do_test_pending()`` and ``do_test_finished()``, though
newer tests prefer ``add_task`` over this method:

.. code:: js

  function run_test() {
    // Tell the harness to keep spinning the event loop at least
    // until the next do_test_finished() call.
    do_test_pending();

    someAsyncProcess(function callback(result) {
      equal(result, expectedResult);

      // Close previous do_test_pending() call.
      do_test_finished();
    });
  }


Testing in child processes
^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, xpcshell tests run in the parent process. If you wish to run
test logic in the child, you have several ways to do it:

#. Create a regular test_foo.js test, and then write a wrapper
   test_foo_wrap.js file that uses the ``run_test_in_child()`` function
   to run an entire script file in the child. This is an easy way to
   arrange for a test to be run twice, once in chrome and then later
   (via the \_wrap.js file) in content. See /network/test/unit_ipc for
   examples. The ``run_test_in_child()`` function takes a callback, so
   you should be able to call it multiple times with different files, if
   that's useful.
#. For tests that need to run logic in both the parent and child
   processes during a single test run, you may use the poorly documented
   ``sendCommand()`` function, which takes a code string to be executed
   on the child, and a callback function to be run on the parent when it
   has completed. You will want to first call
   ``do_load_child_test_harness()`` to set up a reasonable test
   environment on the child. ``sendCommand`` returns immediately, so you
   will generally want to use ``do_test_pending``/``do_test_finished``
   with it. NOTE: this method of testing has not been used much, and
   your level of pain may be significant. Consider option #1 if
   possible.

See the documentation for ``run_test_in_child()`` and
``do_load_child_test_harness()`` in testing/xpcshell/head.js for more
information.


Platform-specific tests
^^^^^^^^^^^^^^^^^^^^^^^

Sometimes you might want a test to know what platform it's running on
(to test platform-specific features, or allow different behaviors).
Unit tests are not normally invoked from a Makefile (unlike
Mochitests), nor preprocessed (so no ``#ifdef``), so platform detection
with those methods isn't trivial.


Runtime detection
^^^^^^^^^^^^^^^^^

Some tests will want to only execute certain portions on specific
platforms. Use
`AppConstants.sys.mjs <https://searchfox.org/mozilla-central/rev/5f0a7ca8968ac5cef8846e1d970ef178b8b76dcc/toolkit/modules/AppConstants.sys.mjs#158>`__
for determining the platform, for example:

.. code:: js

  let { AppConstants } =
    ChromeUtils.importESModule("resource://gre/modules/AppConstants.sys.mjs");

  let isMac = AppConstants.platform == "macosx";


Conditionally running a test
----------------------------

There are two different ways to conditionally skip a test: through a
condition passed to ``add_task`` or ``add_test``, or through an
annotation in the ``xpcshell.toml`` manifest.


Adding conditions through the ``add_task`` or ``add_test`` function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can use conditionals on individual test functions instead of entire
files. The condition is provided as an optional first parameter passed
into ``add_task()`` or ``add_test()``. The condition is an object which
contains a function named ``skip_if()``, which is an `arrow
function </en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions>`__
returning a boolean value which is **``true``** if the test should be
skipped.

For example, you can provide a test which only runs on Mac OS X like
this:

.. code:: js

  let { AppConstants } =
    ChromeUtils.importESModule("resource://gre/modules/AppConstants.sys.mjs");

  add_task({
    skip_if: () => AppConstants.platform != "macosx"
  }, async function some_test() {
    // Test code goes here
  });

Since ``AppConstants.platform != "macosx"`` is ``true`` on every
platform except Mac OS X, the test will be skipped on all other
platforms.

.. note::

  Arrow functions are ideal here: if the condition were a plain
  expression, it would already have been evaluated before the test is
  even run, and the test output would not be able to show the
  specifics of what the condition is.


Adding conditions in the xpcshell.toml manifest
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Sometimes you may want to add conditions to specify that a test should
be skipped in certain configurations, or that a test is known to fail on
certain platforms. You can do this in xpcshell manifests by adding
annotations below the test file entry in the manifest, for example:

.. code:: toml

  ["test_example.js"]
  skip-if = ["os == 'win'"]

This example would skip running ``test_example.js`` on Windows.

.. note::

  Starting with Gecko (Firefox 40 / Thunderbird 40 / SeaMonkey 2.37),
  you can use conditionals on individual test functions instead of on
  entire files. See `Adding conditions through the add_task or add_test
  function <#adding-conditions-through-the-add-task-or-add-test-function>`__
  above for details.
    651 
    652 There are currently four conditionals you can specify:
    653 
    654 skip-if
    655 """""""
    656 
    657 ``skip-if`` tells the harness to skip running this test if the condition
    658 evaluates to true. You should use this only if the test has no meaning
    659 on a certain platform, or causes undue problems like hanging the test
    660 suite for a long time.
    661 
    662 run-if
    663 ''''''
    664 
    665 ``run-if`` tells the harness to only run this test if the condition
    666 evaluates to true. It functions as the inverse of ``skip-if``.
    667 
    668 fail-if
    669 """""""
    670 
    671 ``fail-if`` tells the harness that this test is expected to fail if the
    672 condition is true. If you add this to a test, make sure you file a bug
    673 on the failure and include the bug number in a comment in the manifest,
    674 like:
    675 
    676 .. code:: ini
    677 
    678   [test_example.js]
    679   # bug xxxxxx
    680   fail-if = os == 'linux'
    681 
    682 run-sequentially
    683 """"""""""""""""
    684 
``run-sequentially`` tells the harness to run the respective test in
isolation, rather than in parallel with other tests. This is required
for tests that are not "thread-safe". Avoid this option whenever you
can, since it hurts overall test-suite performance; however, some tests
genuinely require it, so the option is available. If you add this to a
test, make sure you specify a reason and, ideally, a bug number, like:
    692 
.. code:: toml

  ["test_example.js"]
  run-sequentially = "Has to launch Firefox binary, bug 123456."
    697 
    698 
    699 Manifest conditional expressions
    700 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    701 
For a more detailed description of the syntax of the conditional
expressions, as well as what variables are available, see
`XPCshell Test Manifest Expressions </en/XPCshell_Test_Manifest_Expressions>`__.
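
As a sketch of the general shape, conditions are boolean expressions
over harness-provided variables (``os`` and ``debug`` are among the
commonly available ones), and each entry in the list is evaluated
separately, so multiple entries act as alternatives:

.. code:: toml

  ["test_example.js"]
  # Skip on macOS, and on all debug builds.
  skip-if = [
    "os == 'mac'",
    "debug",
  ]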
    705 
    706 
    707 Running a specific test only
    708 ----------------------------
    709 
    710 When working on a specific feature or issue, it is convenient to only
    711 run a specific task from a whole test suite. Use ``.only()`` for that
    712 purpose:
    713 
    714 .. code:: js
    715 
    716   add_task(async function some_test() {
    717     // Some test.
    718   });
    719 
  add_task(async function some_interesting_test() {
    // Only this test will be executed.
  }).only();
    723 
    724 
    725 Problems with pending events and shutdown
    726 -----------------------------------------
    727 
    728 Events are not processed during test execution if not explicitly
    729 triggered. This sometimes causes issues during shutdown, when code is
    730 run that expects previously created events to have been already
    731 processed. In such cases, this code at the end of a test can help:
    732 
    733 .. code:: js
    734 
  let thread = gThreadManager.currentThread;
  while (thread.hasPendingEvents()) {
    thread.processNextEvent(true);
  }
    738 
    739 
    740 Debugging xpcshell-tests
    741 ------------------------
    742 
    743 Running unit tests under the javascript debugger via ``--jsdebugger``
    744 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    745 
    746 You can specify flags when issuing the ``xpcshell-test`` command that
    747 will cause your test to stop right before running so you can attach the
    748 `javascript debugger </docs/Tools/Tools_Toolbox>`__.
    749 
    750 Example:
    751 
    752 .. code:: bash
    753 
    754   $ ./mach xpcshell-test --jsdebugger browser/components/tests/unit/test_browserGlue_pingcentre.js
    755    0:00.50 INFO Running tests sequentially.
    756   ...
    757    0:00.68 INFO ""
    758    0:00.68 INFO "*******************************************************************"
    759    0:00.68 INFO "Waiting for the debugger to connect on port 6000"
    760    0:00.68 INFO ""
    761    0:00.68 INFO "To connect the debugger, open a Firefox instance, select 'Connect'"
    762    0:00.68 INFO "from the Developer menu and specify the port as 6000"
    763    0:00.68 INFO "*******************************************************************"
    764    0:00.68 INFO ""
    765    0:00.71 INFO "Still waiting for debugger to connect..."
    766   ...
    767 
    768 At this stage in a running Firefox instance:
    769 
    770 -  Go to the three-bar menu, then select ``More tools`` ->
    771   ``Remote Debugging``
    772 -  A new tab is opened. In the Network Location box, enter
    773   ``localhost:6000`` and select ``Connect``
    774 -  You should then get a link to *``Main Process``*, click it and the
    775   Developer Tools debugger window will open.
    776 -  It will be paused at the start of the test, so you can add
    777   breakpoints, or start running as appropriate.
    778 
    779 If you get a message such as:
    780 
    781 ::
    782 
    783    0:00.62 ERROR Failed to initialize debugging: Error: resource://devtools appears to be inaccessible from the xpcshell environment.
    784   This can usually be resolved by adding:
    785     firefox-appdir = browser
    786   to the xpcshell.toml manifest.
It is possible for this to alter test behavior by triggering additional browser code to run, so check test behavior after making this change.
    788 
This typically happens for a test in core code. You can try adding that
option to the ``xpcshell.toml``, but, as the message says, it might
affect how the test runs and cause failures. Generally,
``firefox-appdir`` should only be left in ``xpcshell.toml`` for tests
that live in the ``browser/`` directory or are otherwise Firefox-only.
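
The suggested fix is a manifest-level option, typically set in the
``DEFAULT`` section at the top of the relevant ``xpcshell.toml`` (a
sketch; as the message warns, verify test behavior after adding it):

.. code:: toml

  [DEFAULT]
  firefox-appdir = "browser"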
    794 
    795 Running unit tests with the profiler using ``--profiler``
    796 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    797 
Similarly, it's possible to run an xpcshell test with the profiler
enabled. After the test finishes running, the profiler interface will be
opened automatically:
    801 
    802 .. code:: bash
    803 
    804   $ ./mach xpcshell-test --profiler browser/components/tests/unit/test_browserGlue_migration_osauth.js
    805   ...
    806    0:00.50 INFO Running tests sequentially.
    807   ...
    808    0:00.88 profiler INFO Symbolicating the performance profile... This could take a couple of minutes.
    809    0:01.93 profiler INFO Temporarily serving the profile from: http://127.0.0.1:57737/profile_test_browserGlue_migration_osauth.js.json
    810    0:01.93 profiler INFO Opening the profile: https://profiler.firefox.com/from-url/http%3A%2F%2F127.0.0.1%3A57737%2Fprofile_test_browserGlue_migration_osauth.js.json
    811   ...
    812 
    813 Running unit tests under a C++ debugger
    814 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    815 
    816 
    817 Via ``--debugger`` and ``--debugger-interactive``
    818 """""""""""""""""""""""""""""""""""""""""""""""""
    819 
    820 You can specify flags when issuing the ``xpcshell-test`` command that
    821 will launch xpcshell in the specified debugger (implemented in
    822 `bug 382682 <https://bugzilla.mozilla.org/show_bug.cgi?id=382682>`__).
    823 Provide the full path to the debugger, or ensure that the named debugger
    824 is in your system PATH.
    825 
    826 Example:
    827 
    828 .. code:: bash
    829 
    830   $ ./mach xpcshell-test --debugger gdb --debugger-interactive netwerk/test/unit/test_resumable_channel.js
    831   # js>_execute_test();
    832   ...failure or success messages are printed to the console...
    833   # js>quit();
    834 
    835 On Windows with the VS debugger:
    836 
    837 .. code:: bash
    838 
    839   $ ./mach xpcshell-test --debugger devenv --debugger-interactive netwerk/test/test_resumable_channel.js
    840 
    841 Or with WinDBG:
    842 
    843 .. code:: bash
    844 
    845   $ ./mach xpcshell-test --debugger windbg --debugger-interactive netwerk/test/test_resumable_channel.js
    846 
    847 Or with modern WinDbg (WinDbg Preview as of April 2020):
    848 
    849 .. code:: bash
    850 
    851   $ ./mach xpcshell-test --debugger WinDbgX --debugger-interactive netwerk/test/test_resumable_channel.js
    852 
    853 
    854 Debugging xpcshell tests in a child process
    855 """""""""""""""""""""""""""""""""""""""""""
    856 
To debug code running in a child process, set
``MOZ_DEBUG_CHILD_PROCESS=1`` in your environment (or on the command
line) and run the test. The child process will emit a printf with its
process ID, then sleep. Attach a debugger to the child's pid, and when
it wakes up you can debug it:
    862 
    863 .. code:: bash
    864 
    865   $ MOZ_DEBUG_CHILD_PROCESS=1 ./mach xpcshell-test test_simple_wrap.js
    866   CHILDCHILDCHILDCHILD
    867     debug me @13476
    868 
    869 
    870 Debug both parent and child processes
    871 """""""""""""""""""""""""""""""""""""
    872 
Use ``MOZ_DEBUG_CHILD_PROCESS=1`` and attach a debugger to each process
separately. (For gdb, at least, this means running separate copies of
gdb, one for each process.)