
      1 =====
      2 Talos
      3 =====
      4 
Talos is a cross-platform Python performance testing framework built specifically for
Firefox on desktop. New performance tests should be added to the newer framework
`mozperftest </testing/perfdocs/mozperftest.html>`_ unless limitations
there (highly unlikely) make it absolutely necessary to add them to Talos. Talos is
named after the `bronze automaton from Greek myth <https://en.wikipedia.org/wiki/Talos>`_.
     10 
     11 .. contents::
     12   :depth: 1
     13   :local:
     14 
     15 Talos tests are run in a similar manner to xpcshell and mochitests. They are started via
     16 the command :code:`mach talos-test`. A `python script <https://searchfox.org/mozilla-central/source/testing/talos>`_
     17 then launches Firefox, which runs the tests via JavaScript special powers. The test timing
     18 information is recorded in a text log file, e.g. :code:`browser_output.txt`, and then processed
     19 into the `JSON format supported by Perfherder <https://searchfox.org/mozilla-central/source/testing/mozharness/external_tools/performance-artifact-schema.json>`_.
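
The schema linked above defines the shape of that processed output. As a rough, hand-written illustration (the suite and subtest names and values here are invented, and most optional fields are omitted):

```python
import json

# A simplified, hypothetical example of the Perfherder-style JSON that
# Talos output is processed into. Top-level field names follow the
# performance-artifact schema linked above; the values are made up.
artifact = {
    "framework": {"name": "talos"},
    "suites": [
        {
            "name": "damp",            # suite name (illustrative)
            "value": 250.5,            # summarized suite score
            "subtests": [
                {"name": "simple.netmonitor", "value": 245.1},
            ],
        }
    ],
}

print(json.dumps(artifact, indent=2))
```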
     20 
     21 Talos bugs can be filed in `Testing::Talos <https://bugzilla.mozilla.org/enter_bug.cgi?product=Testing&component=Talos>`_.
     22 
     23 Talos infrastructure is still mostly documented `on the Mozilla Wiki <https://wiki.mozilla.org/TestEngineering/Performance/Talos>`_.
     24 In addition, there are plans to surface all of the individual tests using PerfDocs.
     25 This work is tracked in `Bug 1674220 <https://bugzilla.mozilla.org/show_bug.cgi?id=1674220>`_.
     26 
     27 Examples of current Talos runs can be `found in Treeherder by searching for "Talos" <https://treeherder.mozilla.org/jobs?repo=autoland&searchStr=Talos>`_.
     28 If none are immediately available, then scroll to the bottom of the page and load more test
     29 runs. The tests all share a group symbol starting with a :code:`T`, for example
     30 :code:`T(c d damp g1)` or :code:`T-gli(webgl)`.
     31 
     32 Running Talos Locally
     33 *********************
     34 
     35 Running tests locally is most likely only useful for debugging what is going on in a test,
     36 as the test output is only reported as raw JSON. The CLI is documented via:
     37 
     38 .. code-block:: bash
     39 
     40    ./mach talos-test --help
     41 
     42 To quickly try out the :code:`./mach talos-test` command, the following can be run to do a
     43 single run of the DevTools' simple netmonitor test.
     44 
     45 .. code-block:: bash
     46 
     47    # Run the "simple.netmonitor" test very quickly with 1 cycle, and 1 page cycle.
     48    ./mach talos-test --activeTests damp --subtests simple.netmonitor --cycles 1 --tppagecycles 1
     49 
     50 
The :code:`--print-suites` and :code:`--print-tests` flags are helpful for
figuring out which suites and tests are available to run.
     53 
     54 .. code-block:: bash
     55 
     56    # Print out the suites:
     57    ./mach talos-test --print-suites
     58 
     59    # Available suites:
     60    #  bcv                          (basic_compositor_video)
     61    #  chromez                      (about_preferences_basic:tresize)
     62    #  dromaeojs                    (dromaeo_css:kraken)
     63    # ...
     64 
     65    # Run all of the tests in the "bcv" test suite:
     66    ./mach talos-test --suite bcv
     67 
     68    # Print out the tests:
     69    ./mach talos-test --print-tests
     70 
     71    # Available tests:
     72    # ================
     73    #
     74    # a11yr
     75    # -----
     76    # This test ensures basic a11y tables and permutations do not cause
     77    # performance regressions.
     78    #
     79    # ...
     80 
     81    # Run the tests in "a11yr" listed above
     82    ./mach talos-test --activeTests a11yr
     83 
     84 Running Talos on Try
     85 ********************
     86 
     87 Talos runs can be generated through the mach try fuzzy finder:
     88 
     89 .. code-block:: bash
     90 
     91    ./mach try fuzzy
     92 
     93 The following is an example output at the time of this writing. Refine the query for the
     94 platform and test suites of your choosing.
     95 
     96 .. code-block::
     97 
     98    | test-windows10-64-qr/opt-talos-bcv-swr-e10s
     99    | test-linux64-shippable/opt-talos-webgl-e10s
    100    | test-linux64-shippable/opt-talos-other-e10s
    101    | test-linux64-shippable-qr/opt-talos-g5-e10s
    102    | test-linux64-shippable-qr/opt-talos-g4-e10s
    103    | test-linux64-shippable-qr/opt-talos-g3-e10s
    104    | test-linux64-shippable-qr/opt-talos-g1-e10s
    105    | test-windows10-64/opt-talos-webgl-gli-e10s
    106    | test-linux64-shippable/opt-talos-tp5o-e10s
    107    | test-linux64-shippable/opt-talos-svgr-e10s
    108    | test-linux64-shippable/opt-talos-damp-e10s
    109    > test-windows7-32/opt-talos-webgl-gli-e10s
    110    | test-linux64-shippable/opt-talos-bcv-e10s
    111    | test-linux64-shippable/opt-talos-g5-e10s
    112    | test-linux64-shippable/opt-talos-g4-e10s
    113    | test-linux64-shippable/opt-talos-g3-e10s
    114    | test-linux64-shippable/opt-talos-g1-e10s
    115    | test-linux64-qr/opt-talos-bcv-swr-e10s
    116 
    117      For more shortcuts, see mach help try fuzzy and man fzf
    118      select: <tab>, accept: <enter>, cancel: <ctrl-c>, select-all: <ctrl-a>, cursor-up: <up>, cursor-down: <down>
    119      1379/2967
    120    > talos
    121 
    122 At a glance
    123 ***********
    124 
    125 -  Tests are defined in
    126   `testing/talos/talos/test.py <https://searchfox.org/mozilla-central/source/testing/talos/talos/test.py>`__
    127 -  Treeherder abbreviations are defined in
    128   `taskcluster/kinds/test/talos.yml <https://searchfox.org/mozilla-central/source/taskcluster/kinds/test/talos.yml>`__
    129 -  Suites are defined for production in
    130   `testing/talos/talos.json <https://searchfox.org/mozilla-central/source/testing/talos/talos.json>`__
    131 
    132 Test lifecycle
    133 **************
    134 
    135 -  Taskcluster schedules `talos
    136   jobs <https://searchfox.org/mozilla-central/source/taskcluster/kinds/test/talos.yml>`__
    137 -  Taskcluster runs a Talos job on a hardware machine when one is
    138   available - this is bootstrapped by
    139   `mozharness <https://searchfox.org/mozilla-central/source/testing/mozharness/mozharness/mozilla/testing/talos.py>`__
    140 
    141   -  mozharness downloads the build, talos.zip (found in
    142      `talos.json <https://searchfox.org/mozilla-central/source/testing/talos/talos.json>`__),
    143      and creates a virtualenv for running the test.
    144   -  mozharness `configures the test and runs
    145      it <https://wiki.mozilla.org/TestEngineering/Performance/Talos/Running#How_Talos_is_Run_in_Production>`__
    146   -  After the test is completed the data is uploaded to
    147      `Perfherder <https://treeherder.mozilla.org/perf.html#/graphs>`__
    148 
    149 -  Treeherder displays a green (all OK) status and has a link to
    150   `Perfherder <https://treeherder.mozilla.org/perf.html#/graphs>`__
-  13 pushes later,
  `analyze_talos.py <http://hg.mozilla.org/graphs/file/tip/server/analysis/analyze_talos.py>`__
  is run, which compares your push to the previous 12 pushes and next 12
  pushes to look for a
  `regression <https://wiki.mozilla.org/TestEngineering/Performance/Talos/Data#Regressions>`__
    156 
    157   -  If a regression is found, it will be posted on `Perfherder
    158      Alerts <https://treeherder.mozilla.org/perf.html#/alerts>`__
    159 
    160 Test types
    161 **********
    162 
    163 There are two different species of Talos tests:
    164 
    165 -  Startup_: Start up the browser and wait for either the load event or the paint event and exit, measuring the time
    166 -  `Page load`_: Load a manifest of pages
    167 
    168 In addition we have some variations on existing tests:
    169 
    170 -  Heavy_: Run tests with the heavy user profile instead of a blank one
-  WebExtension_: Run tests with a WebExtension to see the performance impact extensions have
    172 -  `Real-world WebExtensions`_: Run tests with a set of 5 popular real-world WebExtensions installed and enabled.
    173 
    174 Different tests measure different things:
    175 
    176 -  Paint_: These measure events from the browser like moz_after_paint, etc.
    177 -  ASAP_: These tests go really fast and typically measure how many frames we can render in a time window
    178 -  Benchmarks_: These are benchmarks that measure specific items and report a summarized score
    179 
    180 Startup
    181 =======
    182 
    183 `Startup
    184 tests <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/startup_test>`__
    185 launch Firefox and measure the time to the onload or paint events. We
    186 run this in a series of cycles (default is 20) to generate a full set of
    187 data. Tests that currently are startup tests are:
    188 
    189 -  `ts_paint <#ts_paint>`_
    190 -  tpaint_
    191 -  `tresize <#tresize>`_
    192 -  `sessionrestore <#sessionrestore>`_
    193 -  `sessionrestore_no_auto_restore <#sessionrestore_no_auto_restore>`_
    194 -  `sessionrestore_many_windows <#sessionrestore_many_windows>`_
    195 
    196 Page load
    197 =========
    198 
    199 Many of the talos tests use the page loader to load a manifest of pages.
    200 These are tests that load a specific page and measure the time it takes
    201 to load the page, scroll the page, draw the page etc. In order to run a
    202 page load test, you need a manifest of pages to run. The manifest is
    203 simply a list of URLs of pages to load, separated by carriage returns,
    204 e.g.:
    205 
    206 .. code-block:: none
    207 
    208   https://www.mozilla.org
    209   https://www.mozilla.com
    210 
    211 Example:
    212 `svgx.manifest <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/svgx/svgx.manifest>`__
    213 
    214 Manifests may also specify that a test computes its own data by
    215 prepending a ``%`` in front of the line:
    216 
    217 .. code-block:: none
    218 
    219   % https://www.mozilla.org
    220   % https://www.mozilla.com
    221 
    222 Example:
    223 `v8.manifest <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/v8_7/v8.manifest>`__
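
A minimal sketch of how such a manifest could be interpreted (this is illustrative, not the actual pageloader code; the ``parse_manifest`` helper is hypothetical):

```python
# Hypothetical sketch: lines give one URL each, and a leading "%" marks
# a page that computes and reports its own measurements.

def parse_manifest(text):
    pages = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        self_reporting = line.startswith("%")
        url = line.lstrip("% ").strip()
        pages.append({"url": url, "self_reporting": self_reporting})
    return pages

manifest = """\
https://www.mozilla.org
% https://www.mozilla.com
"""
for page in parse_manifest(manifest):
    print(page)
```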
    224 
    225 The file you created should be referenced in your test config inside of
    226 `test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l607>`__.
    227 For example, open test.py, and look for the line referring to the test
    228 you want to run:
    229 
    230 .. code-block:: python
    231 
    232   tpmanifest = '${talos}/page_load_test/svgx/svgx.manifest'
    233   tpcycles = 1 # run a single cycle
    234   tppagecycles = 25 # load each page 25 times before moving onto the next page
    235 
    236 Heavy
    237 =====
    238 
All of our testing is done with blank profiles, which is not ideal for
finding issues that affect end users. We recently undertook a task to create a
daily-updated profile so that it stays modern and relevant. It browses a
variety of web pages and accumulates history and cache, giving us a more
realistic scenario.
    244 
    245 The toolchain is documented on
    246 `github <https://github.com/tarekziade/heavy-profile>`__ and was added
    247 to Talos in `bug
    248 1407398 <https://bugzilla.mozilla.org/show_bug.cgi?id=1407398>`__.
    249 
Currently we have issues with this on Windows (it takes too long to unpack
the files from the profile), so we have turned this off there. Our goal
is to run this on basic pageload and startup tests.
    253 
    254 WebExtension
    255 =============
    256 
WebExtensions are the extension model Firefox has switched to, and they
use different code paths and APIs than legacy add-ons. Historically we
didn't test with add-ons (other than our test add-ons) and were missing
out on common slowdowns. In 2017 we started running some startup and
basic pageload tests with a WebExtension in the profile (`bug
1398974 <https://bugzilla.mozilla.org/show_bug.cgi?id=1398974>`__). We
have updated the extension to be more real-world and will continue to do
so.
    265 
    266 Real-world WebExtensions
    267 ========================
    268 
    269 We've added a variation on our test suite that automatically downloads,
    270 installs and enables 5 popular WebExtensions. This is used to measure
    271 things like the impact of real-world WebExtensions on start-up time.
    272 
    273 Currently, the following extensions are installed:
    274 
    275 -  Adblock Plus (3.5.2)
    276 -  Cisco Webex Extension (1.4.0)
    277 -  Easy Screenshot (3.67)
    278 -  NoScript (10.6.3)
    279 -  Video DownloadHelper (7.3.6)
    280 
    281 Note that these add-ons and versions are "pinned" by being held in a
    282 compressed file that's hosted in an archive by our test infrastructure
    283 and downloaded at test runtime. To update the add-ons in this set, one
    284 must provide a new ZIP file to someone on the test automation team. See
    285 `this comment in
    286 Bugzilla <https://bugzilla.mozilla.org/show_bug.cgi?id=1575089#c3>`__.
    287 
    288 Paint
    289 =====
    290 
Paint tests measure the time to receive both the
`MozAfterPaint <https://developer.mozilla.org/en-US/docs/Web/Events/MozAfterPaint>`__
and OnLoad events, instead of just the OnLoad event. Most tests now look
for this unless they are an ASAP test or an internal benchmark.
    295 
    296 ASAP
    297 ====
    298 
We have a variety of tests which we now run in ASAP mode, where we render
as fast as possible (disabling vsync and letting the rendering iterate
as fast as it can using \`requestAnimationFrame`). We have replaced some
of the original tests with 'x' versions that measure in this mode.
    304 
    305 ASAP tests are:
    306 
    307 -  `basic_compositor_video <#basic_compositor_video>`_
    308 -  `displaylist_mutate <#displaylist_mutate>`_
    309 -  `glterrain <#glterrain>`_
    310 -  `rasterflood_svg <#rasterflood_svg>`_
    311 -  `rasterflood_gradient <#rasterflood_gradient>`_
    312 -  `tsvgx <#tsvgx>`_
    313 -  `tscrollx <#tscrollx>`_
    314 -  `tp5o_scroll <#tp5o_scroll>`_
    315 -  `tabswitch <#tabswitch>`_
    316 -  `tart <#tart>`_
    317 
    318 Benchmarks
    319 ==========
    320 
    321 Many tests have internal benchmarks which we report as accurately as
    322 possible. These are the exceptions to the general rule of calculating
    323 the suite score as a geometric mean of the subtest values (which are
    324 median values of the raw data from the subtests).
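
The general (non-benchmark) rule can be sketched as follows (the subtest names and replicate values here are invented for illustration):

```python
import math
from statistics import median

# Sketch of the summarization described above: each subtest is
# summarized as the median of its raw replicates, and the suite score
# is the geometric mean of those medians. The data is illustrative.
subtest_replicates = {
    "subtest_a": [100.0, 102.0, 98.0, 101.0],
    "subtest_b": [200.0, 210.0, 205.0, 195.0],
}

subtest_values = {name: median(vals) for name, vals in subtest_replicates.items()}

# Geometric mean computed via logs to avoid overflow on many subtests.
suite_score = math.exp(
    sum(math.log(v) for v in subtest_values.values()) / len(subtest_values)
)
print(round(suite_score, 2))
```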
    325 
    326 Tests which are imported benchmarks are:
    327 
    328 -  `ARES6 <#ares6>`_
    329 -  `dromaeo <#dromaeo>`_
    330 -  `JetStream <#jetstream>`_
    331 -  `kraken <#kraken>`_
    332 -  `motionmark <#motionmark>`_
    333 -  `stylebench <#stylebench>`_
    334 
    335 Row major vs. column major
    336 ==========================
    337 
    338 To get more stable numbers, tests are run multiple times. There are two
    339 ways that we do this: row major and column major. Row major means each
    340 test is run multiple times and then we move to the next test (and run it
    341 multiple times). Column major means that each test is run once one after
    342 the other and then the whole sequence of tests is run again.
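
A small sketch of the two orderings, assuming two hypothetical tests run three times each:

```python
# Illustrative only: (test, cycle) pairs in the order each scheme runs them.
tests = ["test_a", "test_b"]
cycles = 3

# Row major: finish all runs of one test before moving to the next.
row_major = [(t, c) for t in tests for c in range(cycles)]

# Column major: run each test once, then repeat the whole sequence.
column_major = [(t, c) for c in range(cycles) for t in tests]

print(row_major)     # all runs of test_a, then all runs of test_b
print(column_major)  # test_a, test_b, test_a, test_b, ...
```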
    343 
    344 More background information about these approaches can be found in Joel
    345 Maher's `Reducing the Noise in
    346 Talos <https://elvis314.wordpress.com/2012/03/12/reducing-the-noise-in-talos/>`__
    347 blog post.
    348 
    349 Page sets
    350 *********
    351 
    352 We run our tests 100% offline, but serve pages via a webserver. Knowing
    353 this we need to store and make available the offline pages we use for
    354 testing.
    355 
    356 tp5pages
    357 ========
    358 
    359 Some tests make use of a set of 50 "real world" pages, known as the tp5n
    360 set. These pages are not part of the talos repository, but without them
    361 the tests which use them won't run.
    362 
    363 -  To add these pages to your local setup, download the latest tp5n zip
    364   from `tooltool <https://mozilla-releng.net/tooltool/>`__, and extract
    365   it such that ``tp5n`` ends up as ``testing/talos/talos/tests/tp5n``.
  You can also obtain it by running a talos test locally to get the zip
  into ``testing/talos/talos/tests/``, e.g. ``./mach talos-test --suite damp``.
-  See also `tp5o <#tp5o>`_.
    369 
    370 {documentation}
    371 
    372 Extra Talos Tests
    373 *****************
    374 
    375 .. contents::
    376    :depth: 1
    377    :local:
    378 
    379 File IO
    380 =======
    381 
    382 File IO is tested using the tp5 test set in the `xperf`_
    383 test.
    384 
    385 Possible regression causes
    386 --------------------------
    387 
-  **nonmain_startup_fileio opt (with or without e10s) windows7-32**
  `bug 1274018 <https://bugzilla.mozilla.org/show_bug.cgi?id=1274018>`__
    391   This test seems to consistently report a higher result for
    392   mozilla-central compared to Try even for an identical revision due to
    393   extension signing checks. In other words, if you are comparing Try
    394   and Mozilla-Central you may see a false-positive regression on
    395   perfherder. Graphs:
    396   `non-e10s <https://treeherder.mozilla.org/perf.html#/graphs?timerange=604800&series=%5Bmozilla-central,e5f5eaa174ef22fdd6b6e150e8c450aa827c2ff6,1,1%5D&series=%5Btry,e5f5eaa174ef22fdd6b6e150e8c450aa827c2ff6,1,1%5D>`__
    397   `e10s <https://treeherder.mozilla.org/perf.html#/graphs?series=%5B%22mozilla-central%22,%222f3af3833d55ff371ecf01c41aeee1939ef3a782%22,1,1%5D&series=%5B%22try%22,%222f3af3833d55ff371ecf01c41aeee1939ef3a782%22,1,1%5D&timerange=604800>`__
    398 
    399 Xres (X Resource Monitoring)
    400 ============================
    401 
A memory metric tracked during tp5 test runs. It is sampled every
20 seconds and collected on Linux only.
    404 
    405 `xres man page <https://linux.die.net/man/3/xres>`__.
    406 
    407 % CPU
    408 =====
    409 
CPU usage tracked during tp5 test runs. It is sampled every 20
seconds and collected on Windows only.
    412 
    413 tpaint
    414 ======
    415 
    416 .. warning::
    417 
    418   This test no longer exists
    419 
    420 -  contact: :davidb
    421 -  source:
    422   `tpaint-window.html <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/startup_test/tpaint.html>`__
    423 -  type: Startup_
    424 -  data: we load the tpaint test window 20 times, resulting in 1 set of
    425   20 data points.
    426 -  summarization:
    427 
    428   -  subtest: `ignore first`_ **5** data points, then take the `median`_ of the remaining 15; `source:
    429      test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l190>`__
    430   -  suite: identical to subtest
    431 
    432 +-----------------+---------------------------------------------------+
    433 | Talos test name | Description                                       |
    434 +-----------------+---------------------------------------------------+
    435 | tpaint          | twinopen but measuring the time after we receive  |
    436 |                 | the `MozAfterPaint and OnLoad event <#paint>`__.  |
    437 +-----------------+---------------------------------------------------+
    438 
Tests the amount of time it takes to open a new window. This test does
not include startup time. Multiple test windows are opened in
succession; the reported result is the average amount of time required
to create and display a window in the running instance of the browser.
(Measures ctrl-n performance.)
    444 
    445 **Example Data**
    446 
    447 .. code-block:: none
    448 
    449    [209.219, 222.180, 225.299, 225.970, 228.090, 229.450, 230.625, 236.315, 239.804, 242.795, 244.5, 244.770, 250.524, 251.785, 253.074, 255.349, 264.729, 266.014, 269.399, 326.190]
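
Applying the summarization described above to this example data (drop the first five replicates, then take the median of the remaining fifteen; the real harness drops the first five by run order, which here happens to coincide with sorted order):

```python
from statistics import median

# The 20 replicates from the example data above.
replicates = [209.219, 222.180, 225.299, 225.970, 228.090, 229.450,
              230.625, 236.315, 239.804, 242.795, 244.5, 244.770,
              250.524, 251.785, 253.074, 255.349, 264.729, 266.014,
              269.399, 326.190]

# Ignore the first 5 data points, take the median of the remaining 15.
value = median(replicates[5:])
print(value)  # → 250.524
```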
    450 
    451 Possible regression causes
    452 --------------------------
    453 
    454 -  None listed yet. If you fix a regression for this test and have some
    455   tips to share, this is a good place for them.
    456 
    457 xperf
    458 =====
    459 
    460 -  contact: perftest team
    461 -  source: `xperf
    462   instrumentation <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/xtalos>`__
    463 -  type: `Page load`_ (tp5n) / Startup_
    464 -  measuring: IO counters from windows (currently, only startup IO is in
    465   scope)
    466 -  reporting: Summary of read/write counters for disk, network (lower is
    467   better)
    468 
    469 These tests only run on windows builds. See `this active-data
    470 query <https://activedata.allizom.org/tools/query.html#query_id=zqlX+2Jn>`__
    471 for an updated set of platforms that xperf can be found on. If the query
    472 is not found, use the following on the query page:
    473 
    474 .. code-block:: javascript
    475 
    476   {
    477       "from":"task",
    478       "groupby":["run.name","build.platform"],
    479       "limit":2000,
    480       "where":{"regex":{"run.name":".*xperf.*"}}
    481   }
    482 
    483 Talos will turn orange for 'x' jobs on windows 7 if your changeset
    484 accesses files which are not predefined in the
    485 `allowlist <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/xtalos/xperf_allowlist.json>`__
    486 during startup; specifically, before the
    487 "`sessionstore-windows-restored <https://hg.mozilla.org/mozilla-central/file/0eebc33d8593/toolkit/components/startup/nsAppStartup.cpp#l631>`__"
    488 Firefox event. If your job turns orange, you will see a list of files in
    489 Treeherder (or in the log file) which have been accessed unexpectedly
    490 (similar to this):
    491 
    492 .. code-block:: none
    493 
    494    TEST-UNEXPECTED-FAIL : xperf: File '{profile}\secmod.db' was accessed and we were not expecting it. DiskReadCount: 6, DiskWriteCount: 0, DiskReadBytes: 16904, DiskWriteBytes: 0
    495    TEST-UNEXPECTED-FAIL : xperf: File '{profile}\cert8.db' was accessed and we were not expecting it. DiskReadCount: 4, DiskWriteCount: 0, DiskReadBytes: 33288, DiskWriteBytes: 0
    496    TEST-UNEXPECTED-FAIL : xperf: File 'c:\$logfile' was accessed and we were not expecting it. DiskReadCount: 0, DiskWriteCount: 2, DiskReadBytes: 0, DiskWriteBytes: 32768
    497    TEST-UNEXPECTED-FAIL : xperf: File '{profile}\secmod.db' was accessed and we were not expecting it. DiskReadCount: 6, DiskWriteCount: 0, DiskReadBytes: 16904, DiskWriteBytes: 0
    498    TEST-UNEXPECTED-FAIL : xperf: File '{profile}\cert8.db' was accessed and we were not expecting it. DiskReadCount: 4, DiskWriteCount: 0, DiskReadBytes: 33288, DiskWriteBytes: 0
    499    TEST-UNEXPECTED-FAIL : xperf: File 'c:\$logfile' was accessed and we were not expecting it. DiskReadCount: 0, DiskWriteCount: 2, DiskReadBytes: 0, DiskWriteBytes: 32768
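
The kind of check xperf performs can be sketched as follows (the file names, counters, and allowlist entries here are illustrative, not the real allowlist):

```python
# Hypothetical sketch: any file accessed during startup that is not in
# the allowlist is reported as a failure line like the ones above.
allowlist = {r"{profile}\places.sqlite", r"{profile}\prefs.js"}
accessed = {
    r"{profile}\prefs.js": {"DiskReadCount": 2, "DiskReadBytes": 4096},
    r"{profile}\secmod.db": {"DiskReadCount": 6, "DiskReadBytes": 16904},
}

failures = []
for path, counters in accessed.items():
    if path not in allowlist:
        failures.append(
            "TEST-UNEXPECTED-FAIL : xperf: File '%s' was accessed "
            "and we were not expecting it." % path
        )

for line in failures:
    print(line)
```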
    500 
If these files are expected to be accessed during startup by your
changeset, then we can add them to the
`allowlist <https://bugzilla.mozilla.org/enter_bug.cgi?product=Testing&component=Talos>`__.
    504 
    505 Xperf runs tp5 while collecting xperf metrics for disk IO and network
    506 IO. The providers we listen for are:
    507 
    508 -  `'PROC_THREAD', 'LOADER', 'HARD_FAULTS', 'FILENAME', 'FILE_IO',
    509   'FILE_IO_INIT' <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/xperf.config#l10>`__
    510 
    511 The values we collect during stackwalk are:
    512 
    513 -  `'FileRead', 'FileWrite',
    514   'FileFlush' <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/xperf.config#l11>`__
    515 
    516 Notes:
    517 
    518 -  Currently some runs may `return all-zeros and skew the
    519   results <https://bugzilla.mozilla.org/show_bug.cgi?id=1614805>`__
    520 -  Additionally, these runs don't have dedicated hardware and have a
    521   large variability. At least 30 runs are likely to be needed to get
    522   stable statistics (xref `bug
    523   1616236 <https://bugzilla.mozilla.org/show_bug.cgi?id=1616236>`__)
    524 
    525 Build metrics
    526 *************
    527 
    528 These are not part of the Talos code, but like Talos they are benchmarks
    529 that record data using the graphserver and are analyzed by the same
    530 scripts for regressions.
    531 
    532 Number of constructors (num_ctors)
    533 ==================================
    534 
    535 This test runs at build time and measures the number of static
    536 initializers in the compiled code. Reducing this number is helpful for
    537 `startup
    538 optimizations <https://blog.mozilla.org/tglek/2010/05/27/startup-backward-constructors/>`__.
    539 
    540 -  https://hg.mozilla.org/build/tools/file/348853aee492/buildfarm/utils/count_ctors.py
    541 
    542   -  these are run for linux 32+64 opt and pgo builds.
    543 
    544 Platform microbenchmark
    545 ***********************
    546 
    547 IsASCII and IsUTF8 gtest microbenchmarks
    548 ========================================
    549 
    550 -  contact: :hsivonen
    551 -  source:
    552   `TestStrings.cpp <https://dxr.mozilla.org/mozilla-central/source/xpcom/tests/gtest/TestStrings.cpp>`__
    553 -  type: Microbench_
    554 -  reporting: intervals in ms (lower is better)
    555 -  data: each test is run and measured 5 times
    556 -  summarization: take the `median`_ of the 5 data points; `source:
    557   MozGTestBench.cpp <https://dxr.mozilla.org/mozilla-central/source/testing/gtest/mozilla/MozGTestBench.cpp#43-46>`__
    558 
Tests whose names start with PerfIsASCII test the performance of the
XPCOM string IsASCII function with ASCII inputs of different lengths.

Tests whose names start with PerfIsUTF8 test the performance of the
XPCOM string IsUTF8 function with ASCII inputs of different lengths.
    564 
    565 Possible regression causes
    566 --------------------------
    567 
-  The --enable-rust-simd flag accidentally getting turned off in automation.
    569 -  Changes to encoding_rs internals.
    570 -  LLVM optimizations regressing between updates to the copy of LLVM
    571   included in the Rust compiler.
    572 
    573 Microbench
    574 ==========
    575 
    576 -  contact: :bholley
    577 -  source:
    578   `MozGTestBench.cpp <https://dxr.mozilla.org/mozilla-central/source/testing/gtest/mozilla/MozGTestBench.cpp>`__
    579 -  type: Custom GTest micro-benchmarking
    580 -  data: Time taken for a GTest function to execute
-  summarization: Not a Talos test. This suite provides a way to add
  low-level platform performance regression tests for things that are
  not suited to being tested by Talos.
    584 
    585 PerfStrip Tests
    586 ===============
    587 
    588 -  contact: :davidb
    589 -  source:
    590   https://dxr.mozilla.org/mozilla-central/source/xpcom/tests/gtest/TestStrings.cpp
    591 -  type: Microbench_
    592 -  reporting: execution time in ms (lower is better) for 100k function
    593   calls
    594 -  data: each test run and measured 5 times
    595 -  summarization:
    596 
-  PerfStripWhitespace - call StripWhitespace() on 5 different test cases, 20k times each
-  PerfStripCharsWhitespace - call StripChars("\f\t\r\n") on 5 different test cases, 20k times each
-  PerfStripCRLF - call StripCRLF() on 5 different test cases, 20k times each
-  PerfStripCharsCRLF - call StripChars("\r\n") on 5 different test cases, 20k times each
    608 
    609 Stylo gtest microbenchmarks
    610 ===========================
    611 
    612 -  contact: :bholley, :SimonSapin
    613 -  source:
    614   `gtest <https://dxr.mozilla.org/mozilla-central/source/layout/style/test/gtest>`__
    615 -  type: Microbench_
    616 -  reporting: intervals in ms (lower is better)
    617 -  data: each test is run and measured 5 times
    618 -  summarization: take the `median`_ of the 5 data points; `source:
    619   MozGTestBench.cpp <https://dxr.mozilla.org/mozilla-central/source/testing/gtest/mozilla/MozGTestBench.cpp#43-46>`__
    620 
    621 Servo_StyleSheet_FromUTF8Bytes_Bench parses a sample stylesheet 20 times
    622 with Stylo’s CSS parser that is written in Rust. It starts from an
    623 in-memory UTF-8 string, so that I/O or UTF-16-to-UTF-8 conversion is not
    624 measured.
    625 
    626 Gecko_nsCSSParser_ParseSheet_Bench does the same with Gecko’s previous
    627 CSS parser that is written in C++, for comparison.
    628 
    629 Servo_DeclarationBlock_SetPropertyById_Bench parses the string "10px"
    630 with Stylo’s CSS parser and sets it as the value of a property in a
    631 declaration block, a million times. This is similar to animations that
    632 are based on JavaScript code modifying Element.style instead of using
    633 CSS @keyframes.
    634 
    635 Servo_DeclarationBlock_SetPropertyById_WithInitialSpace_Bench is the
    636 same, but with the string " 10px" with an initial space. That initial
    637 space is less typical of JS animations, but is almost always there in
    638 stylesheets or full declarations like "width: 10px". This microbenchmark
    639 was used to test the effect of some specific code changes. Regressions
    640 here may be acceptable if Servo_StyleSheet_FromUTF8Bytes_Bench is not
    641 affected.
    642 
    643 History of tp tests
    644 *******************
    645 
    646 tp
    647 ==
    648 
    649 The original tp test created by Mozilla to test browser page load time.
    650 Cycled through 40 pages. The pages were copied from the live web during
    651 November, 2000. Pages were cycled by loading them within the main
    652 browser window from a script that lived in content.
    653 
    654 tp2/tp_js
    655 =========
    656 
    657 The same tp test but loading the individual pages into a frame instead
    658 of the main browser window. Still used the old 40 page, year 2000 web
    659 page test set.
    660 
    661 tp3
    662 ===
    663 
    664 An update to both the page set and the method by which pages are cycled.
    665 The page set is now 393 pages from December, 2006. The pageloader is
    666 re-built as an extension that is pre-loaded into the browser
    667 chrome/components directories.
    668 
    669 tp4
    670 ===
    671 
    672 Updated web page test set to 100 pages from February 2009.
    673 
    674 tp4m
    675 ====
    676 
    677 This is a smaller pageset (21 pages) designed for mobile Firefox. This
    678 is a blend of regular and mobile friendly pages.
    679 
    680 We landed on this on April 18th, 2011 in `bug
    681 648307 <https://bugzilla.mozilla.org/show_bug.cgi?id=648307>`__. This
    682 runs for Android and Maemo mobile builds only.
    683 
    684 tp5
    685 ===
    686 
    687 Updated web page test set to 100 pages from April 8th, 2011. Effort was
    688 made for the pages to no longer be splash screens/login pages/home pages
    689 but to be pages that better reflect the actual content of the site in
    690 question. There are two test page data sets for tp5 which are used in
    691 multiple tests (i.e. awsy, xperf, etc.): (i) an optimized data set
    692 called tp5o, and (ii) the standard data set called tp5n.
    693 
    694 tp6
    695 ===
    696 
Created June 2017 with pages recorded via mitmproxy from modern Google,
Amazon, YouTube, and Facebook sites. Ideally this would contain more
realistic user accounts with full content, and cover more than 4 sites,
up to the top 10 or maybe top 20.
    701 
    702 These were migrated to Raptor between 2018 and 2019.
    703 
    704 .. _geometric mean: https://wiki.mozilla.org/TestEngineering/Performance/Talos/Data#geometric_mean
    705 .. _ignore first: https://wiki.mozilla.org/TestEngineering/Performance/Talos/Data#ignore_first
    706 .. _median: https://wiki.mozilla.org/TestEngineering/Performance/Talos/Data#median