This page is primarily intended for Mercurial's developers.
Performance tracking infrastructure
Provide a continuous integration infrastructure to measure and prevent performance regressions in Mercurial.
Discussion on devel list: http://marc.info/?t=145863695000002
Atom feed of regressions: https://buildbot.mercurial-scm.org/speed/regressions.xml
Mercurial's code changes fast, and we must detect and prevent performance regressions as soon as possible. To do so, we want to:
- Automatic execution of performance tests on a given Mercurial revision
- Store the performance results in a database
- Expose the performance results in a web application (with graphs, reports, dashboards etc.)
- Provide some regression detection notifications
We already have code that produces performance metrics:
- Commands from the perf extension in contrib/perf.py
- Revset performance tests in contrib/revsetbenchmarks.py
Unit test execution time
Another idea is to produce metrics from annotated portions of unit tests, using their execution time. These metrics will be used (after some refactoring of some of the tools that produce them) as performance metrics, but we may need some written specifically for the purpose of performance regression detection.
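The annotation idea above could be sketched as a small timing helper. This is a minimal sketch only: `benchmark_section` and the `METRICS` store are hypothetical names, not part of the Mercurial test suite.

```python
import time
from contextlib import contextmanager

# Hypothetical metric store, not part of the Mercurial test suite:
# maps each annotation label to the elapsed wall-clock seconds.
METRICS = {}

@contextmanager
def benchmark_section(label):
    """Record the wall-clock time of an annotated portion of a test."""
    start = time.perf_counter()
    try:
        yield
    finally:
        METRICS[label] = time.perf_counter() - start

# Usage inside a unit test: only the annotated portion is measured.
with benchmark_section("build-list"):
    data = [i * i for i in range(100000)]
```

The surrounding test keeps its normal assertions; the collected `METRICS` would be exported separately for the regression-detection tooling.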
3. Tool selection
After evaluating several tools, we chose Airspeed Velocity (asv), which already handles most of our needs.
Used by the http://www.astropy.org/ projects and by numpy
Presentation (2014): https://www.youtube.com/watch?v=OsxJ5O6h8s0
This tool aims to benchmark Python packages over their lifetime. It is mainly a command line tool, asv, that runs a series of benchmarks (described in a JSON configuration file) and produces a static HTML/JS report.
When running a benchmark suite, asv takes care of cloning/pulling the source repository into a virtualenv and running the configured tasks in that virtualenv.
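For illustration, a minimal asv.conf.json along these lines might look as follows; the field values are assumptions for a Mercurial setup, not the project's actual configuration:

```json
{
    "version": 1,
    "project": "mercurial",
    "project_url": "https://www.mercurial-scm.org/",
    "repo": "https://www.mercurial-scm.org/repo/hg/",
    "dvcs": "hg",
    "branches": ["default", "stable"],
    "environment_type": "virtualenv",
    "benchmark_dir": "benchmarks",
    "results_dir": "results",
    "html_dir": "html"
}
```

The "dvcs": "hg" and "branches" keys are what let asv follow a Mercurial repository rather than a git one.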
Results of each benchmark execution are stored in a "database" (consisting of JSON files). This database is used to produce plots of the evolution of the time required to run a test (or of any metric; out of the box, asv supports 4 types of benchmark: timing, memory, peak memory and tracking), and to run the regression detection algorithms.
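An asv benchmark file is plain Python, with the benchmark type selected by the function name prefix. The sketch below shows the four built-in types; the function names and bodies are illustrative stand-ins, not actual Mercurial benchmarks.

```python
def time_build_list():
    # "time_" prefix: asv measures the wall-clock time of this function.
    [i * i for i in range(10000)]

def mem_large_list():
    # "mem_" prefix: asv measures the size of the returned object.
    return list(range(10000))

def peakmem_nested_lists():
    # "peakmem_" prefix: asv records the peak memory used while running.
    [list(range(1000)) for _ in range(100)]

def track_list_length():
    # "track_" prefix: asv plots whatever value the function returns.
    return len(list(range(10000)))
```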
One key feature of this tool is that it is very easy for every developer to use in their own development environment. For example, it provides an asv compare command to compare the results of any 2 revisions.
4. Q & A
Q: What revisions of the Mercurial source code should we run the performance regression tool on? (public changesets on the main branch only? Which branches? ...)
A: Let's focus on public changesets for now, on the two main branches (default and stable).
Q: How do we manage the non-linear structure of Mercurial history?
A: The Mercurial repository is mostly linear as long as only one branch is considered; however, we don't (and have no reason to) enforce this. For now, the plan is to follow the first parent of merge changesets to enforce the linearity of each branch.
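The first-parent linearization can be sketched in a few lines of Python; the toy history below is hypothetical, not a real repository:

```python
def first_parent_chain(parents, head):
    """Walk from `head` to the root, always following the first parent."""
    chain = [head]
    while parents.get(chain[-1]):
        # For merge changesets, ignore the second parent entirely.
        chain.append(parents[chain[-1]][0])
    return chain

# Toy history: "d" is a merge whose second parent "b" is skipped.
history = {
    "d": ["c", "b"],
    "c": ["a"],
    "b": ["a"],
    "a": [],
}
```

With this data, `first_parent_chain(history, "d")` yields `["d", "c", "a"]`, giving a linear view of the branch.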
- Fix Mercurial branch handling in ASV. OK https://github.com/spacetelescope/asv/pull/394
- Use revision instead of commit date as the X axis in ASV. OK https://github.com/spacetelescope/asv/pull/429
- Provide ansible configuration to deploy the tool in the existing buildbot infrastructure and expose the results on a public website when new public changesets are pushed to the main branches. IN PROGRESS
- Parametrize benchmarks against multiple reference repositories (hg, mozilla-central, ...). OK
- Parametrize revset benchmarks with variants (first, last, min, max, ...). OK
- Implement a notification system in ASV. OK https://github.com/spacetelescope/asv/issues/397
- Add unit test execution time as a benchmark (we must handle the case where the test itself has changed)
- Write an annotation system for unit tests and get execution time metrics for the annotated portions
- Write a system of scenario-based benchmarks. They should be written as Mercurial tests (with annotations) and might be kept in a dedicated repository
- Track both improvements and regressions? A change, especially on revsets, can have a positive or negative impact on multiple benchmarks; having a global view of this information could be a good feature. OK https://github.com/spacetelescope/asv/pull/437
- Integrate the benchmark suite into the main repository, allowing developers to run asv compare locally
- Write benchmarks à la .t tests (against small purpose-built repositories, calling contrib/perf commands).
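As an illustration of the parametrized revset variants mentioned in the list above, an ASV benchmark can expose `params` and `param_names` attributes, and asv then runs the function once per parameter value. This is a sketch only: the benchmarked operations here are simplified stand-ins, not real revset evaluations.

```python
def time_variant(variant):
    # asv calls this once for each value in `params` below.
    data = list(range(1000, 0, -1))
    if variant == "first":
        sorted(data)[0]
    elif variant == "last":
        sorted(data)[-1]
    elif variant == "min":
        min(data)
    else:
        max(data)

# Real ASV attributes: each value becomes a separate plotted series.
time_variant.params = ["first", "last", "min", "max"]
time_variant.param_names = ["variant"]
```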
The current work-in-progress code can be cloned with hg clone -b default https://hg.logilab.org/review/hgperf