Performance scripts
===================

Performance scripts are programs that drive the browser to run a specific
benchmark (like a page load or a lower-level call) and produce metrics.

`perftest` currently supports three flavors (but it's easy to add new ones):

- **xpcshell**: a classical xpcshell test, turned into a performance test
- **browsertime**: a browsertime script, which runs a full browser and
  controls it via a Selenium client
- **mochitest**: a classical mochitest test, turned into a performance test

In order to qualify as performance tests, all flavors require metadata.

For the flavors that are implemented as JavaScript modules, the metadata is
provided in a `perfMetadata` mapping variable in the module, or in the
`module.exports` variable when using Node.

This is the list of fields:

- **owner**: name of the owner (person or team) [mandatory]
- **author**: author of the test
- **name**: name of the test [mandatory]
- **description**: short description [mandatory]
- **longDescription**: longer description
- **options**: options used to run the test
- **supportedBrowsers**: list of supported browsers (or "Any")
- **supportedPlatforms**: list of supported platforms (or "Any")
- **tags**: a list of tags that describe the test

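As an illustration, a minimal ``perfMetadata`` object (the values below are
hypothetical, not from a real test) could look like:

.. sourcecode:: javascript

   // Hypothetical metadata; owner, name and description are mandatory,
   // the other fields are optional.
   var perfMetadata = {
     owner: "Example Team",
     name: "example-pageload",
     description: "Illustrative metadata only",
     supportedBrowsers: "Any",
     supportedPlatforms: "Any",
     tags: ["example"],
   };
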
Most tests are registered using test manifests and the **PERFTESTS_MANIFESTS**
variable in `moz.build` files - it's good practice to name this file
`perftest.toml`. **This doesn't apply to mochitest tests**: they should use the
manifest variable of the respective flavor/subsuite the test runs in.

Example of such a file: https://searchfox.org/mozilla-central/source/testing/performance/perftest.toml


XPCShell
--------

`xpcshell` tests are plain xpcshell tests, with two more things:

- the `perfMetadata` variable, as described in the previous section
- calls to `info("perfMetrics", ...)` to send metrics to the `perftest` framework

Here's an example of such a metrics call::

   // compute some speed metrics
   let speed = 12345;
   info("perfMetrics", JSON.stringify({ speed }));

XPCShell Tests in CI
^^^^^^^^^^^^^^^^^^^^

To run your test in CI, you may need to modify the ``_TRY_MAPPING`` variable `found here <https://searchfox.org/mozilla-central/rev/7d1b5c88343879056168aa710a9ee743392604c0/python/mozperftest/mozperftest/utils.py#299>`_. This allows your test file to be found in CI, and is needed because the file mappings differ from local runs. The mapping maps the top-level folder of the test to its location in CI. To find this location/mapping, download the ``target.xpcshell.tests.tar.zst`` archive from the build task and search for your test file in it.

The XPCShell test that is written can also be run as a unit test. If this is not desired, set the `disabled = reason` flag in the test TOML file to prevent it from running there. `See here for an example <https://searchfox.org/mozilla-central/rev/7d1b5c88343879056168aa710a9ee743392604c0/toolkit/components/ml/tests/browser/perftest.toml#7>`_.

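For instance, a hypothetical manifest entry (the file name and reason string are illustrative) that keeps a performance test out of unit test runs could look like::

   ["perftest_example.js"]
   disabled = "Only meant to run as a performance test"
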
Mochitest
---------

Similar to ``xpcshell`` tests, these are standard ``mochitest`` tests with some extra things:

- the ``perfMetadata`` variable, as described in the previous section
- calls to ``info("perfMetrics", ...)`` to send metrics to the ``perftest`` framework
- no requirement to use ``PERFTESTS_MANIFESTS`` for the test manifest definition - use the variable needed for the flavor/subsuite the test runs in

Note that the ``perfMetadata`` variable can exist in any ``<script>...</script>`` element in the Mochitest HTML test file. The ``perfMetadata`` variable also needs a couple of additional settings in Mochitest tests: the ``manifest`` and ``manifest_flavor`` options::

   var perfMetadata = {
     owner: "Performance Team",
     name: "Test test",
     description: "N/A",
     options: {
       default: {
         perfherder: true,
         perfherder_metrics: [
           { name: "Registration", unit: "ms" },
         ],
         manifest: "perftest.toml",
         manifest_flavor: "plain",
         extra_args: [
           "headless",
         ]
       },
     },
   };

The ``extra_args`` setting provides a place to pass custom Mochitest command-line arguments for this test.

Here's an example of a call that will produce metrics::

   // compute some speed metrics
   let speed = 12345;
   info("perfMetrics", JSON.stringify({ speed }));

Existing Mochitest unit tests can be modified with these additions to become compatible with mozperftest, but note that some issues exist when doing this:

- unittest issues with mochitest tests running on hardware
- multiple configurations of a test running in a single manifest

At the top of this document, you can find some information about the recommended approach for adding a new manifest dedicated to running performance tests.

Locally, mozperftest uses ``./mach test`` to run your test. Always ensure that your test works in ``./mach test`` before attempting to run it through ``./mach perftest``. In CI, we use a custom "remote" run that runs Mochitest directly, skipping ``./mach test``.

If everything is set up correctly, running a performance test locally is as simple as::

   ./mach perftest <path/to/my/mochitest-test.html>

Mochitest Android Tests
^^^^^^^^^^^^^^^^^^^^^^^

Running Android tests through Mochitest is the same as on desktop, except that the ``--android`` option, and the ``--app`` option to specify the app, need to be provided.

Either a local Android build is expected, or an application preinstalled on the device being used. To ensure that the logs for performance metrics get through, **you need to run** ``SimpleTest.requestCompleteLog()`` at the start of your test. Otherwise, the performance metrics may be buffered and discarded before the test completes.

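As a sketch, a minimal Mochitest test file doing this (all names here are illustrative) might start like::

   <!DOCTYPE html>
   <html>
   <head>
     <meta charset="utf-8">
     <script src="/tests/SimpleTest/SimpleTest.js"></script>
   </head>
   <body>
   <script>
     // Ensure metric logs are not buffered away on Android.
     SimpleTest.requestCompleteLog();
     // ... test body producing info("perfMetrics", ...) calls ...
   </script>
   </body>
   </html>
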
Only the GeckoView Test Runner and GeckoView Example are currently supported in Mochitest (see `bug 1902535 <https://bugzilla.mozilla.org/show_bug.cgi?id=1902535>`_ for progress on using Fenix).

Mochitest Tests in CI
^^^^^^^^^^^^^^^^^^^^^

To run your test in CI, you may need to modify the ``_TRY_MAPPING`` variable `found here <https://searchfox.org/mozilla-central/rev/7d1b5c88343879056168aa710a9ee743392604c0/python/mozperftest/mozperftest/utils.py#299>`_. This allows your test file to be found in CI, and is needed because the file mappings differ from local runs. The mapping maps the top-level folder of the test to its location in CI. To find this location/mapping, download the ``target.mochitest.tests.tar.zst`` archive from the build task and search for your test file in it.

The Mochitest test that is written can also be run as a unit test. If this is not desired, set the `disabled = reason` flag in the test TOML file to prevent it from running there. `See here for an example <https://searchfox.org/mozilla-central/rev/7d1b5c88343879056168aa710a9ee743392604c0/toolkit/components/ml/tests/browser/perftest.toml#7>`_.

Mochitest Android Tests in CI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For Mochitest Android tests in CI, everything that applies to desktop tests also applies here. When writing a new task in ``android.yml``, ensure that the following fetches are applied to the task::

 build:
     - artifact: geckoview_example.apk
       extract: false
     - artifact: en-US/target.perftests.tests.tar.zst
     - artifact: en-US/target.condprof.tests.tar.zst
     - artifact: en-US/target.common.tests.tar.zst
     - artifact: en-US/target.mochitest.tests.tar.zst
 toolchain:
     - linux64-hostutils

Ensure that the ``runner.py`` script also runs from ``MOZ_FETCHES_DIR`` instead of ``GECKO_PATH``, like other Android mozperftest tests. Everything else is the same as for other Android mozperftest tests. Note that ``--android-install-apk`` needs to be specified to point to the ``geckoview_example.apk`` that was obtained from the build task. Fenix is not currently supported in CI for Mochitest (see `bug 1902535 <https://bugzilla.mozilla.org/show_bug.cgi?id=1902535>`_).

Custom Script
-------------

Custom Script tests use a custom/ad-hoc script to execute a test. Currently, only shell scripts are supported, through the ScriptShellRunner. In the future, other types of scripts may be supported through the addition of new test layers. These scripts support both mobile and desktop testing within the ``custom-script`` flavor.

Custom Shell Scripts
^^^^^^^^^^^^^^^^^^^^

A shell script test must contain the following fields as comments somewhere in the code::

 # Name: name-of-test
 # Owner: Name/team that owns the test
 # Description: Description of the test

Optionally, it can also contain a line that starts with ``Options:`` to denote any default options. These options are similar to those of other test layers. For these custom script tests, a valid JSON string is expected in this field.

These scripts have a `BROWSER_BINARY` defined for them which points to the binary (or package name on mobile) that is being tested. By default, this is Firefox. If a different binary is required, ``--binary`` can be used to specify it, or ``--app`` if the application is known and can be found automatically (not guaranteed).

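Putting the pieces together, a hypothetical custom shell script test (the name, owner and options are all illustrative, and the fallback binary is only there so the sketch runs standalone) could look like::

 #!/bin/bash
 # Name: example-shell-test
 # Owner: Example Team
 # Description: Hypothetical test that inspects the binary under test
 # Options: {"default": {"verbose": true}}

 # BROWSER_BINARY points to the binary (or package name) under test;
 # fall back to "firefox" so this sketch also works outside perftest.
 binary="${BROWSER_BINARY:-firefox}"
 echo "Running against: $binary"
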
Once everything is set up for your shell script test, you can run it with the following::

 ./mach perftest <path/to/custom-script.sh>


Alert
-----

This flavor/layer enables running, locally, all tests that produced a given performance alert. It can either run the basic test without any options, or it can run the exact same command that was used to run the test in CI by passing the ``--alert-exact`` option. The ``--alert-tests`` option can also be used to specify which tests from the alert should be run.

The following command can be used as a sample to run all the tests of a given alert number::

 ./mach perftest <ALERT-NUMBER>

Note that this layer has no tests available for it, and new tests should never make use of this layer.

Browsertime
-----------

With the browsertime layer, performance scenarios are Node modules that
implement at least one async function that will be called by the framework once
the browser has started. The function gets a webdriver session and can interact
with the browser.

You can write complex, interactive scenarios to simulate a user journey,
and collect various metrics.

Full documentation is available `here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/>`_.

The mozilla-central repository has a few performance test scripts in
`testing/performance` and more should be added in components in the future.

By convention, a performance test is prefixed with **perftest_** to be
recognized by the `perftest` command.

A performance test implements at least one async function published in node's
`module.exports` as `test`. The function receives two objects:

- **context**, which contains:

  - **options** - all the options sent from the CLI to Browsertime
  - **log** - an instance of the log system so you can log from your navigation script
  - **index** - the index of the runs, so you can keep track of which run you are currently on
  - **storageManager** - the Browsertime storage manager that can help you read/store files to disk
  - **selenium.webdriver** - the Selenium WebDriver public API object
  - **selenium.driver** - the instantiated version of the WebDriver driving the current version of the browser

- **command**, which provides an API to interact with the browser. It's a wrapper
  around the Selenium client. `Full documentation is available here <https://www.sitespeed.io/documentation/sitespeed.io/scripting/#commands>`_.


Below is an example of a test that visits the BBC homepage and clicks on a link.

.. sourcecode:: javascript

   "use strict";

   async function setUp(context) {
     context.log.info("setUp example!");
   }

   async function test(context, commands) {
     await commands.navigate("https://www.bbc.com/");

     // Wait for the browser to settle
     await commands.wait.byTime(10000);

     // Start the measurement
     await commands.measure.start("pageload");

     // Click on the link and wait for the page-complete check to finish.
     await commands.click.byClassNameAndWait("block-link__overlay-link");

     // Stop and collect the measurement
     await commands.measure.stop();
   }

   async function tearDown(context) {
     context.log.info("tearDown example!");
   }

   module.exports = {
     setUp,
     test,
     tearDown,
     owner: "Performance Team",
     name: "BBC",
     description: "Measures pageload performance when clicking on a link from bbc.com",
     supportedBrowsers: "Any",
     supportedPlatforms: "Any",
   };


Besides the `test` function, scripts can implement a `setUp` and a `tearDown` function to run
some code before and after the test. Those functions are called just once, whereas
the `test` function might be called several times (through the `iterations` option).


Hooks
-----

A Python module can be used to run functions during a run lifecycle. Available hooks are:

- **before_iterations(args)** runs before everything is started. Gets the args, which
  can be changed. The **args** argument also contains a **virtualenv** variable that
  can be used for installing Python packages (e.g. through `install_package <https://searchfox.org/mozilla-central/source/python/mozperftest/mozperftest/utils.py#115-144>`_).
- **before_runs(env)** runs before the test is launched. Can be used to
  change the running environment.
- **after_runs(env)** runs after the test is done.
- **on_exception(env, layer, exception)** called on any exception. Provides the
  layer in which the exception occurred, and the exception. If the hook returns `True`,
  the exception is ignored and the test resumes. If the hook returns `False`, the
  exception is ignored and the test ends immediately. The hook can also re-raise the
  exception or raise its own exception.

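For instance, a hypothetical ``on_exception`` hook that tolerates timeouts but aborts on any other failure (the ``TimeoutError`` filter is purely illustrative) could look like:

.. sourcecode:: python

   def on_exception(env, layer, exception):
       # Resume the test when the failure is a timeout...
       if isinstance(exception, TimeoutError):
           return True
       # ...and re-raise anything else to abort the run.
       raise exception
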
In the example below, the `before_runs` hook sets the options on the fly,
so users don't have to provide them on the command line::

   from mozperftest.browser.browsertime import add_options

   url = "'https://www.example.com'"

   common_options = [("processStartTime", "true"),
                     ("firefox.disableBrowsertimeExtension", "true"),
                     ("firefox.android.intentArgument", "'-a'"),
                     ("firefox.android.intentArgument", "'android.intent.action.VIEW'"),
                     ("firefox.android.intentArgument", "'-d'"),
                     ("firefox.android.intentArgument", url)]


   def before_runs(env, **kw):
       add_options(env, common_options)


To use this hook module, pass it to the `--hooks` option::

   $ ./mach perftest --hooks hooks.py perftest_example.js