Note:

This page is primarily intended for developers of Mercurial.

Performance Improvement

Status: Multi Part Project

Main proponents: Pierre-YvesDavid, GregorySzorc

/!\ This is a speculative project and does not represent any firm decisions on future behavior.

The goal of this page is to gather data about known performance bottlenecks and ideas about how to solve them.

1. Goal

Make Mercurial faster.

This applies to repositories of any size, even if most of the work is usually done for large repositories. Larger repositories are more affected by pathological cases and usually get more funding toward improving performance.

Performance on various reference repositories is tracked here: http://perf.octobus.net/

2. Performance Areas

2.1. Local Access

2.1.1. Status

Getting the status of files is an important operation. We can distinguish several pieces where performance matters:

Storing the expected state of files (the dirstate). The format currently used is a flat file that has to be fully rewritten on each update. Facebook wrote a tree-based format that allows lighter updates; the storage is also meant to be mmap-friendly (see MMapPlan).

In addition, a Rust rewrite of the status computation can yield impressive performance gains. Some of that gain comes from the fact that Rust is a compiled language, and some of it from the ability to do the computation in parallel.
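
As a rough illustration of where the time goes, a status scan essentially boils down to stat()-ing every tracked file and comparing it against its dirstate entry. The sketch below is hypothetical and sequential; the real implementation tracks more fields and many more cases, which is exactly the work the Rust rewrite can parallelise.

{{{#!python
import os

def naive_status(dirstate, root):
    """Minimal status sketch: compare each tracked file's on-disk
    size/mtime with its recorded dirstate entry.  `dirstate` is assumed
    to map path -> (size, mtime); the real dirstate stores more fields
    and handles many more states."""
    modified, missing = [], []
    for path, (size, mtime) in dirstate.items():
        try:
            st = os.stat(os.path.join(root, path))
        except FileNotFoundError:
            missing.append(path)
            continue
        if st.st_size != size or int(st.st_mtime) != mtime:
            modified.append(path)  # may still need a content comparison
    return modified, missing
}}}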

2.1.2. Nodemap

Many operations need to validate that some nodes are in the repository (loading bookmarks, discovery, revsets, etc.). Building this node-to-revision mapping from scratch for each invocation of Mercurial gets slow for repositories with millions of commits.
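
A minimal sketch of the cost, assuming a changelog object that can report its length and the node of each revision (hypothetical interface): without a persistent cache, every invocation pays the full rebuild before a single lookup can be answered.

{{{#!python
def build_nodemap(changelog):
    """Rebuild the node -> revision-number mapping from scratch.
    This is O(number of commits); without an on-disk (ideally mmap-able)
    cache it is paid again on every Mercurial invocation."""
    return {changelog.node(rev): rev for rev in range(len(changelog))}

# With a persistent nodemap, a lookup such as nodemap[node] (or a prefix
# lookup for short hashes) no longer requires this full rebuild.
}}}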

2.1.3. Branchmap

Computing the branch map from scratch can be very expensive. Instead we use a cache. However, this cache can end up being quite expensive too.

So there are multiple things we could improve.

For example, we could stop explicitly listing the branch heads that are also topological heads, and instead derive them directly from the topological heads. Storing the number of heads (of each type) in the on-disk cache could still be useful. In short, there is good progress to be made with a small amount of work.
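
A rough sketch of that idea, expressed in terms of hypothetical inputs: the on-disk cache only needs to record the branch heads that are not topological heads, since the rest can be recovered cheaply at load time.

{{{#!python
from collections import defaultdict

def branch_heads(topo_heads, branch_of, cached_non_topo_heads):
    """Sketch: rebuild the full branch -> heads mapping.

    topo_heads            -- iterable of topological head revisions (cheap to list)
    branch_of             -- callable mapping a revision to its branch name
    cached_non_topo_heads -- branch name -> branch heads that are NOT
                             topological heads (the only part that still
                             has to live in the on-disk cache)
    """
    heads = defaultdict(list)
    for branch, revs in cached_non_topo_heads.items():
        heads[branch].extend(revs)
    for rev in topo_heads:
        heads[branch_of(rev)].append(rev)
    return dict(heads)
}}}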

2.1.4. Manifest Access

Many operations require manifest access, so it is important for this computation to be fast. Some work has been done in this area.
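
For reference, a manifest can be inspected through Mercurial's internal Python API. This is only a sketch using internal, unstable interfaces; the exact calls may differ between versions.

{{{#!python
from mercurial import hg, ui

# Open the repository in the current directory and look at the manifest
# of the working directory's parent revision.
repo = hg.repository(ui.ui.load(), b'.')
ctx = repo[b'.']
manifest = ctx.manifest()

print(len(manifest))                    # number of files tracked at that revision
for path in list(manifest)[:5]:         # file names are bytes
    print(path, manifest[path].hex())   # path -> file node id
}}}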

2.1.5. Copy Tracing

In some situations, using the copy information in a repository can be very slow. A new algorithm based on precomputed information is in the works.
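
As an illustration of the precomputation idea only (not the actual algorithm), per-changeset copy records can be folded together along a history instead of re-reading filelogs:

{{{#!python
def chain_copies(per_changeset_copies):
    """Sketch: fold per-changeset {destination: source} copy records,
    ordered from ancestor to descendant, into one mapping from a file's
    final name back to its original name.  Illustrative only; the real
    algorithm also has to deal with merges, deletions, and reverts."""
    combined = {}
    for copies in per_changeset_copies:
        for dst, src in copies.items():
            combined[dst] = combined.get(src, src)
    return combined

# chain_copies([{'b.txt': 'a.txt'}, {'c.txt': 'b.txt'}])
# -> {'b.txt': 'a.txt', 'c.txt': 'a.txt'}
}}}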

2.2. Rust

Various parts of Mercurial are getting a Rust implementation. This improves performance and safety. At some point, we should be able to perform some commands without ever running Python code. See OxidationPlan for details.

2.3. Exchange

2.3.1. Discovery

Discovery is an important step of any push or pull operation. This is especially a problem for repositories with many heads (thousands). The known pathological case has been solved. However, we could reduce the base time further with more work.
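
For background, set discovery is essentially a sampling loop over the changeset graph. The sketch below is greatly simplified and hypothetical (the real protocol samples more carefully, batches requests, and also propagates "unknown" answers to descendants); it only shows why many heads used to mean many round trips.

{{{#!python
import random

def discover_common(local_heads, ancestors, server_knows, sample_size=200):
    """Sketch of set discovery.

    local_heads  -- local head nodes to start from
    ancestors    -- callable: node -> set of its ancestors (including itself)
    server_knows -- callable standing in for one wire round trip: given a
                    sample of nodes, returns the subset the server knows
    """
    undecided = set()
    for head in local_heads:
        undecided |= ancestors(head)
    common, missing = set(), set()
    while undecided:
        sample = random.sample(sorted(undecided), min(sample_size, len(undecided)))
        known = server_knows(set(sample))
        for node in sample:
            if node in known:
                common |= ancestors(node)   # everything below a known node is common
            else:
                missing.add(node)
        undecided -= common | missing
    return common, missing
}}}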

2.3.2. Server-side Changegroup Performance

Servers tend to spend a lot of CPU and bandwidth computing and transferring changegroup data.

2.3.2.1. Caching bundles

The most effective way to alleviate this resource usage is by serving static, pre-generated changegroup data instead of dynamically generating it at request time. A server-side cache of changegroup data would fall into this bucket. The "clone bundles" feature, which serves initial clones from URLs, is one implementation of this, but it only addresses the initial clone case. Subsequent pulls still result in significant load on the server. There is support for a "remote changegroup" bundle2 part that allows servers to advertise the URL of a pre-generated changegroup. There is a prototype for an extension using this.
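
For the initial-clone case, the clone bundles feature works by having the server advertise pre-generated bundles in a .hg/clonebundles.manifest file, along these lines (the URLs are illustrative; see Mercurial's clonebundles documentation for the full attribute set):

{{{
https://hg.example.com/bundles/project-full.zst.hg BUNDLESPEC=zstd-v2 REQUIRESNI=true
https://hg.example.com/bundles/project-full.gz.hg BUNDLESPEC=gzip-v2
}}}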

2.3.3. Delta computation

2.3.4. Compression

There is plenty of potential to optimize the server for changegroup generation. As of Mercurial 4.0, changegroups (with the exception of the changelog) are effectively collections of single delta chains per revlog. For generaldelta repos, many deltas on disk are reused. However, the server still needs to decompress the revlog entries on disk to obtain the raw deltas, then recompress them as part of the changegroup compression context. Furthermore, if there are multiple delta chains in the revlog, the server will need to compute a new delta for those entries. This contributes to overhead, especially the decompression and recompression. Switching away from zlib for both revlog storage and wire protocol compression will help tremendously, as zstd can be 2x more efficient in both decompression and compression.
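
As a rough illustration of that switch (using the third-party zstandard package, which Mercurial vendors as python-zstandard; the input file name is made up):

{{{#!python
import zlib
import zstandard  # third-party package (python-zstandard)

with open('some-revlog-chunk.bin', 'rb') as fh:  # hypothetical input
    data = fh.read()

# zlib: what revlog storage and the wire protocol have used historically.
zlib_blob = zlib.compress(data)
assert zlib.decompress(zlib_blob) == data

# zstd: markedly cheaper to decompress and compress at comparable
# ratios, which is where the server-side CPU savings come from.
zstd_blob = zstandard.ZstdCompressor(level=3).compress(data)
assert zstandard.ZstdDecompressor().decompress(zstd_blob) == data
}}}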

2.3.5. Skipping the decompression/compression stage

While server efficiency could be increased by making compression more efficient, it would be better to avoid compression altogether. There exists a "streaming clone" feature that essentially does a file copy of revlogs from server to client. However, this only applies to the initial clone. It should be possible to extend this feature to subsequent pulls. So instead of transferring a changegroup with on-the-fly computed delta chains, the server would transfer the raw data in its revlogs, including compression. This feature would not be suitable for all environments, as the transfer size would likely increase and clients would need to support, and effectively inherit, the settings of the server. However, it would substantially reduce server-side CPU requirements.

2.4. MonoRepo Scaling

Extensions like narrow and remotefilelog make it possible to work with a subset of a huge monorepository as if it were a smaller repository. Moving them into core would help their adoption and usage.

3. See Also


CategoryDeveloper CategoryNewFeatures
