web-platform-tests
==================

`web-platform-tests <http://web-platform-tests.org>`_ is a
cross-browser testsuite. Writing tests as web-platform-tests helps
ensure that browsers all implement the same behaviour for web-platform
features.

Upstream Documentation
----------------------

This documentation covers the integration of web-platform-tests into
the Firefox tree. For documentation on writing tests, see
`web-platform-tests.org <http://web-platform-tests.org>`_. In particular
the following documents cover common test-writing needs:

* `JavaScript Tests (testharness.js)
  <https://web-platform-tests.org/writing-tests/testharness.html>`_

  - `testharness.js API
    <https://web-platform-tests.org/writing-tests/testharness-api.html>`_

  - `testdriver.js API
    <https://web-platform-tests.org/writing-tests/testdriver.html>`_ -
    features for writing tests that require special privileges,
    e.g. user-initiated input events.

  - `Message Channels
    <https://web-platform-tests.org/writing-tests/channels.html>`_ -
    features for communicating between different globals (including
    those in different browsing context groups / processes).

* `Reftests <https://web-platform-tests.org/writing-tests/reftests.html>`_

* `Crashtests <https://web-platform-tests.org/writing-tests/crashtest.html>`_

* `Server features
  <https://web-platform-tests.org/writing-tests/server-features.html>`_ -
  e.g. multiple origins, substitutions, server-side Python scripts.

Running Tests
-------------

Tests can be run using ``mach``::

   mach wpt

To run only certain tests, pass these as additional arguments to the
command. For example, to include all tests in the dom directory::

   mach wpt testing/web-platform/tests/dom

Tests may also be passed by id; this is the path plus any query or
fragment part of a URL, and is suitable for copying directly from
logs, e.g. on treeherder::

   mach wpt /web-nfc/idlharness.https.window.html

A single file can produce multiple tests, so passing test ids rather
than paths is sometimes necessary to run exactly one test.

The testsuite contains a mix of test types, including JavaScript
(``testharness``) tests, reftests and wdspec tests. To limit the type
of tests that get run, use ``--test-type=<type>``, e.g.
``--test-type=reftest`` for reftests.
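
For example, to run only the testharness tests in the dom directory,
the type filter can be combined with a path, as in the earlier
examples::

   mach wpt --test-type=testharness testing/web-platform/tests/dom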

Note that if only a single testharness test is run, the browser will
stay open by default (matching the behaviour of mochitest). To prevent
this, pass ``--no-pause-after-test`` to ``mach wpt``.

When the source tree is configured for building Android, tests will
also be run on Android, by default using a local emulator.

Running Tests In Other Browsers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

web-platform-tests is a cross-browser testsuite, and the runner is
compatible with multiple browsers, so it's possible to check the
behaviour of tests in other browsers. By default Chrome, Edge and
Servo are supported. To run the tests in one of these browsers, use
the ``--product`` argument to wptrunner::

   mach wpt --product chrome dom/historical.html

By default these browsers run without expectation metadata, but it can
be added in the ``testing/web-platform/products/<product>``
directory. To run with the same metadata as for Firefox (so that
differences are reported as unexpected results), pass ``--meta
testing/web-platform/meta`` to the mach command.
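
For example, to run a test in Chrome and report any differences from
the Firefox expectations as unexpected results::

   mach wpt --product chrome --meta testing/web-platform/meta dom/historical.html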

Results from the upstream CI for many browsers, including Chrome and
Safari, are available on `wpt.fyi <https://wpt.fyi>`_. There is also a
`gecko dashboard <https://jgraham.github.io/wptdash/>`_ which by default
shows tests that are failing in Gecko but not in Chrome and Safari,
organised by bug component, based on the wpt.fyi data.

Directories
-----------

Under ``testing/web-platform`` are the following directories:

``tests/``
   An automatically-updated import of the web-platform-tests
   repository. Any changes to this directory are automatically
   converted into pull requests against the upstream repository, and
   merged if they pass CI.

``meta/``
   Gecko-specific metadata, including expected test results and
   configuration, e.g. prefs that are set when running the test. This
   is explained in the following section.

``mozilla/tests``
   Tests that will not be upstreamed and may make use of
   Mozilla-specific features. They can access the ``SpecialPowers``
   APIs.

``mozilla/meta``
   Metadata for the Mozilla-specific tests.

Metadata
--------

In order to separate out the shared testsuite from Firefox-specific
metadata about the tests, all the metadata is stored in separate
ini-like files in the ``meta/`` sub-directory.

There is one metadata file per test file with associated
gecko-specific data. The metadata file of a test has the same relative
path as the test file and has the suffix ``.ini``, e.g. for the
test in ``testing/web-platform/tests/example/example.html``, the
corresponding metadata file is
``testing/web-platform/meta/example/example.html.ini``.

The format of these files is similar to ``ini`` files, but with a couple
of important differences: sections can be nested using indentation,
and only ``:`` is permitted as a key-value separator. For example::

   [filename.html]
       [Subtest 1 name]
           key: value

       [Subtest 2 name]
           key: [list, value]

For cases where a single file generates multiple tests (e.g. variants
or ``.any.js`` tests), the metadata file has one top-level section for
each test, for example::

   [test.any.html]
       [Subtest name]
           key: value

   [test.any.worker.html]
       [Subtest name]
           key: other-value

Values can be made conditional using a Python-like conditional syntax::

   [filename.html]
       key:
           if os == "linux": linux-value
           default-value

The available variables for the conditions are those provided by
`mozinfo
<https://firefox-source-docs.mozilla.org/mozbase/mozinfo.html>`_, plus
some additional `wpt-specific values
<https://searchfox.org/mozilla-central/search?q=def%20run_info_extras&path=testing%2Fweb-platform%2Ftests%2Ftools%2Fwptrunner%2Fwptrunner%2Fbrowsers%2Ffirefox.py&case=false&regexp=false>`_.
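
As an illustrative sketch (the test name and values here are made up,
but ``os``, ``bits`` and ``debug`` are real mozinfo properties)::

   [filename.html]
       key:
           if os == "win" and bits == 64: win64-value
           if debug: debug-value
           default-value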

For more information on manifest files, see the `wptrunner
documentation
<https://web-platform-tests.org/tools/wptrunner/docs/expectation.html>`_.

Expectation Data
~~~~~~~~~~~~~~~~

All tests that don't pass in our CI have expectation data stored in
the metadata file, under the key ``expected``. For example the
expectation file for a test with one failing subtest and one erroring
subtest might look like::

   [filename.html]
       [Subtest name for failing test]
           expected: FAIL

       [Subtest name for erroring test]
           expected: ERROR

Expectations can be made configuration-specific using the conditional
syntax::

   [filename.html]
       expected:
           if os == "linux" and bits == 32: TIMEOUT
           if os == "win": ERROR
           FAIL

Tests that are intermittent may be marked with multiple statuses using
a list of possibilities, e.g. for a test that usually passes, but
intermittently fails::

   [filename.html]
       [Subtest name for intermittent test]
           expected: [PASS, FAIL]

Auto-generating Expectation Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After changing some code it may be necessary to update the expectation
data for the relevant tests. This can of course be done manually, but
tools are available to automate much of the process.

First, run the tests that have changed status, and save the raw log
output to a file::

   mach wpt /url/of/test.html --log-wptreport wptreport.json

Then the ``wpt-update`` command may be run using this log data to update
the expectation files::

   mach wpt-update wptreport.json

CI runs also produce ``wptreport.json`` files that can be downloaded
as artifacts. When tests are run across multiple platforms, and all
the wptreport files are processed together, the tooling will set the
appropriate conditions for any platform-specific results::

   mach wpt-update logs/*.json

For complete runs, the ``--full`` flag will cause metadata to be
removed when a) the test was updated, and b) there is a condition that
didn't match any of the configuration in the input files.
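
For example, when updating from a complete set of downloaded CI logs::

   mach wpt-update --full logs/*.json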

When tests are run more than once, the ``--update-intermittent`` flag
will cause conflicting results to be marked as intermittent (otherwise
the data is not updated in the case of conflicts).
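
For example (the report filenames here are illustrative)::

   mach wpt-update --update-intermittent wptreport-1.json wptreport-2.json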

Disabling Tests
~~~~~~~~~~~~~~~

Tests are disabled using the same manifest files used to set
expectation values. For example, if a test is unstable on Windows, it
can be disabled using an ini file with the contents::

   [filename.html]
       disabled:
           if os == "win": https://bugzilla.mozilla.org/show_bug.cgi?id=1234567

For intermittents it's generally preferable to give the test multiple
expectations rather than disable it.

Fuzzy Reftests
~~~~~~~~~~~~~~

Reftests where the test doesn't exactly match the reference can be
marked as fuzzy. If the difference is inherent to the test, it should
be encoded in a `meta element
<https://web-platform-tests.org/writing-tests/reftests.html#fuzzy-matching>`_,
but where it's a Gecko-specific difference it can be added to the
metadata file, using the same syntax::

   [filename.html]
       fuzzy: maxDifference=10-15;totalPixels=200-300

In this case we expect between 200 and 300 pixels, inclusive, to be
different, and the maximum difference in any RGB colour channel to be
between 10 and 15.
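
Like other metadata keys, ``fuzzy`` can be made conditional using the
syntax described above; for example (the values are illustrative)::

   [filename.html]
       fuzzy:
           if os == "win": maxDifference=10-15;totalPixels=200-300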

Enabling Prefs
~~~~~~~~~~~~~~

Some tests require specific prefs to be enabled before running. These
prefs can be set in the metadata using a ``prefs`` key with a
comma-separated list of ``pref.name:value`` items::

   [filename.html]
       prefs: [dom.serviceWorkers.enabled:true,
               dom.serviceWorkers.exemptFromPerDomainMax:true,
               dom.caches.enabled:true]

Disabling Leak Checks
~~~~~~~~~~~~~~~~~~~~~

When a test that leaks is imported, it may be necessary to temporarily
disable leak checking for that test in order to allow the import to
proceed. This works in basically the same way as disabling a test, but
with the key ``leaks``::

   [filename.html]
       leaks:
           if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1234567

Per-Directory Metadata
~~~~~~~~~~~~~~~~~~~~~~

Occasionally it is useful to set metadata for an entire directory of
tests, e.g. to disable them all, or to enable prefs for every test. In
that case it is possible to create a ``__dir__.ini`` file in the
metadata directory corresponding to the tests for which you want to
set this metadata. For example, to disable all the tests in
``tests/feature/unsupported/``, one might create
``meta/feature/unsupported/__dir__.ini`` with the contents::

   disabled: Feature is unsupported

Settings set in this way are inherited into sub-directories. It is
possible to unset a value that has been set in a parent using the
special token ``@Reset`` (usually used with prefs), or to force a value
to true or false using ``@True`` and ``@False``. For example, to enable
the tests in ``meta/feature/unsupported/subfeature-supported``, one might
create an ini file
``meta/feature/unsupported/subfeature-supported/__dir__.ini`` like::

   disabled: @False

Setting Metadata for Release Branches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Run info properties can be used to set metadata for release branches
that differs from nightly (e.g. when a feature depends on prefs that
are only set on nightly), for example::

   [filename.html]
     expected:
       if release_or_beta: FAIL

Note that in general the automatic metadata update will work better if
the nonstandard configuration is used explicitly in the conditional,
and placed at the top of the set of conditions, i.e. the following
would cause problems later::

   [filename.html]
     expected:
       if nightly_build: PASS
       FAIL

This is because on import the automatic metadata updates are run
against the results of nightly builds, and we remove any existing
conditions that match all the input runs to avoid building up stale
configuration options.

Test Manifest
-------------

web-platform-tests uses a large auto-generated JSON file as its
manifest. This stores data about the type of tests, their references,
if any, and their timeout, gathered by inspecting the filenames and
the contents of the test files. It is not necessary to manually add
new tests to the manifest; it is automatically kept up to date when
running ``mach wpt``.
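
If the manifest needs updating without running any tests, it can also
be regenerated directly::

   mach wpt-manifest-update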

Synchronization with Upstream
-----------------------------

Tests are automatically synchronized with upstream using the `wpt-sync
bot <https://github.com/mozilla/wpt-sync>`_. This performs the following tasks:

* Creates upstream PRs for changes in
  ``testing/web-platform/tests`` once they land on autoland, and
  automatically merges them after they reach mozilla-central.

* Runs merged upstream PRs through gecko CI to generate updated
  expectation metadata.

* Updates the copy of web-platform-tests in the gecko tree with
  changes from upstream, and the expectation metadata required to make
  CI jobs pass.

The nature of a two-way sync means that occasional merge conflicts and
other problems are inevitable. If something isn't in sync with
upstream in the way you expect, please ask on `#interop
<https://chat.mozilla.org/#/room/#interop:mozilla.org>`_ on Matrix.

wpt-serve
---------

Sometimes it's preferable to run the WPT web server on its own, and
point different browsers at the test files. For this, run::

   ./mach wpt-serve

after a short, one-time setup:

On Unix, from the root of the WPT checkout at
``testing/web-platform/tests/``, run::

   ./wpt make-hosts-file | sudo tee -a /etc/hosts

On Windows, from an administrator ``mozilla-build`` shell, run the
following from the WPT checkout::

   ./wpt make-hosts-file >> /c/Windows/System32/drivers/etc/hosts

Most of the time, browsing to http://localhost:8000 will allow running
the tests, although some tests have special requirements, such as
running on a specific domain or using https.
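
For example, assuming the default wpt configuration (which the hosts
file entries above set up), a test can typically also be loaded on the
``web-platform.test`` domain; the test path here is illustrative::

   http://web-platform.test:8000/dom/historical.html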