Vision
======

The `mozperftest` project was created to replace all the existing performance
testing frameworks in the mozilla-central source tree with a single one, and to
make performance tests standardized, first-class citizens alongside mochitests
and xpcshell tests.

We want to give any developer the ability to write performance tests in their
component, both locally and in CI, exactly as they would with `xpcshell` tests
and `mochitests`.

Historically, we had `Talos`, which provided many different tests, from
micro-benchmarks to page load tests. From there came `Raptor`, a fork of Talos
focused on page loads only. Then `mach browsertime` was added, a wrapper around
the `browsertime` tool.

All of those frameworks besides `mach browsertime` focused mainly on working
well in CI and were hard to use locally. `mach browsertime` worked locally, but
not on all platforms, and was specific to the Browsertime framework.

`mozperftest` currently provides the `mach perftest` command, which scans for
all tests declared in manifest files such as
https://searchfox.org/mozilla-central/source/netwerk/test/perf/perftest.toml and
registered under **PERFTESTS_MANIFESTS** in `moz.build` files such as
https://searchfox.org/mozilla-central/source/netwerk/test/moz.build#17

The framework loads perf tests and reads their metadata, which can be declared
within the test itself. We have a parser that currently recognizes and loads
**xpcshell** tests and **browsertime** tests, and a runner for each of them.

But the framework can be extended to support more formats. We would like to add
support for **jsshell** and any other test format we have in m-c.

A performance test is a script that perftest runs, and that returns metrics we
can use.
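As an illustration of metadata declared within a test, here is a minimal
sketch of what an xpcshell-style perf test's inline metadata could look like.
The field names and values below are illustrative assumptions for this
document, not the authoritative `mozperftest` schema::

```javascript
// Hypothetical sketch only: a perf test declaring its metadata inline so the
// framework's parser can discover it. Field names are assumptions, not the
// exact mozperftest schema.
var perfMetadata = {
  owner: "Example Team", // assumed: the team shown as owning the test
  name: "example metric test", // assumed: the name used in reports
  description: "Measures the duration of a hypothetical operation",
  options: {
    default: {
      perfherder: true, // assumed: publish the resulting metrics to Perfherder
      verbose: true,
    },
  },
};

// A runner (not shown here) would execute the test body and collect the
// metrics it reports alongside this metadata.
```

The key point is that the metadata travels with the test script itself, so
`mach perftest` can discover and describe a test without any central registry
beyond the manifest files.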
Right now we consume those metrics directly in the console, and also in
Perfherder, but other output formats could be added. For instance, a new
**influxdb** output has been added to push the data into an **InfluxDB** time
series database.

What is important is to make sure each performance test lives alongside the
component it tests in the source tree. We learned with Talos that grouping all
performance tests in a single place is problematic, because developers feel no
sense of ownership once a test is added there; it becomes the perf team's
problem. If the tests stay in each component alongside mochitests and xpcshell
tests, the component maintainers will own and maintain them.


Next steps
----------

We want to rewrite all Talos and Raptor tests as perftest tests. For Raptor, we
need the ability to use proxy records, which is a work in progress. Once that
lands, running a **raptor** test will be a simple, one-liner Browsertime
script.

For Talos, we will need to refactor the existing micro-benchmarks into xpcshell
tests and, if that does not suffice, create a new runner.

For JS benchmarks, once the **jsshell** runner is added to perftest, migration
will be straightforward.