<html><head><meta http-equiv="Content-Type" content="text/html charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On Jun 27, 2016, at 5:24 AM, Philippe Pepiot <<a href="mailto:philippe.pepiot@logilab.fr" class="">philippe.pepiot@logilab.fr</a>> wrote:</div><br class="Apple-interchange-newline"><div class="">
<meta content="text/html; charset=utf-8" http-equiv="Content-Type" class="">
<div bgcolor="#FFFFFF" text="#000000" class=""><p class="">Hi,</p><p class="">I rewrote the benchmarks suite
(<a class="moz-txt-link-freetext" href="https://hg.logilab.org/review/hgperf">https://hg.logilab.org/review/hgperf</a>) using contrib/perf.py from
tested version of mercurial while keeping old helper functions to
write plain benchmarks easily.</p><p class="">Here is a summary of the current status of the project:</p><p class="">All my patches are merged upstream into AirSpeed Velocity,
especially ordering commit by graph revision instead of date which
was the big one.<br class="">
</p><p class="">I'm running a demo <a class="moz-txt-link-freetext" href="https://jenkins.philpep.org/hgperf/">https://jenkins.philpep.org/hgperf/</a> that
benchmark all revisions added to the repository since one month,
with sparse earlier commits too (since 3.4).</p></div></div></blockquote>Sweet!<br class=""><blockquote type="cite" class=""><div class=""><div bgcolor="#FFFFFF" text="#000000" class=""><p class=""> One interesting
result is that I'm benchmarking mercurial against the same
mercurial repository, i.e. the reference repository is pulled,
this explain all the small detected regressions. So benchmarking a
moving repository should be avoided (seems trivial to say but I
wasn't sure how this will be visible on the graphs)</p></div></div></blockquote><div>Yes, this makes sense. We should probably take a bit of time to define specific repositories as of specific heads for benchmarking (I think we should probably have at least one for each of small, medium, and large?)</div><br class=""><blockquote type="cite" class=""><div class=""><div bgcolor="#FFFFFF" text="#000000" class=""><p class="">ASV is actively maintained some new features are incoming:</p><p class=""><a class="moz-txt-link-freetext" href="https://github.com/spacetelescope/asv/pull/447">https://github.com/spacetelescope/asv/pull/447</a> Regressions
notifications using atom feed, this one is part of the plan. If
people prefer to receive notifications via email, there are tools
like rss2email or online services to transform new feed items into
an email.</p><p class=""><a class="moz-txt-link-freetext" href="https://github.com/spacetelescope/asv/pull/437">https://github.com/spacetelescope/asv/pull/437</a> This one (WIP) is
quite interesting because it track improvement too and provide a
useful summary page.<br class="">
</p><p class="">In the plan,
<a class="moz-txt-link-freetext" href="https://www.mercurial-scm.org/wiki/PerformanceTrackingSuitePlan">https://www.mercurial-scm.org/wiki/PerformanceTrackingSuitePlan</a> ,
the next topic is using unit test execution time as benchmark
result and related topics (Annotation system, handle changed
tests, scenario based benchmarks). Most of them are shell style
(.t) tests that spawn a lot of hg subprocesses and I wonder if
it's relevant to use them as benchmark result because they work on
very small repositories (generated during tests) and I think most
of the time is spent on hg startup. Maybe we could detect startup
time regression here, but I think we should miss others
regressions that will be insignificant comparing to the whole test
duration. I wonder if it's worth to put efforts on this topic, but
I maybe missing something, do you have already used unit test
execution time to compare performance changes introduced by a
commit ?</p><p class="">Some other topics might require more work, like improving the web
interface, for instance having a "per commit" view that summarize
regressions and improvements introduced by a single commit,
integrate the benchmark suite in mercurial, write more benchmarks,
I could start by cover all combination of benchmarks/options
offered in contrib/perf.py. Another topic could be extensions
benchmarking. I put the development in pause while waiting your
feedback, hoping (or not ;)) they will be some real regressions
detected soon to validate the tool, I'll add the atom feed once it
get merged in ASV.<br class="">
</p><p class="">Cheers,</p>
<div class="moz-cite-prefix">On 05/26/2016 11:57 AM, Philippe Pepiot
wrote:<br class="">
</div>
<blockquote cite="mid:5746C876.2030505@logilab.fr" type="cite" class="">
<br class="">
<br class="">
On 05/23/2016 11:38 PM, Kevin Bullock wrote:
<br class="">
<blockquote type="cite" class="">(resurrecting this thread now that I've
had a closer look at the plan)
<br class="">
<br class="">
</blockquote>
<br class="">
Thanks !
<br class="">
<br class="">
<blockquote type="cite" class="">
<blockquote type="cite" class="">>On Apr 12, 2016, at 05:03, Philippe
Pepiot<a class="moz-txt-link-rfc2396E" href="mailto:philippe.pepiot@logilab.fr"><philippe.pepiot@logilab.fr></a> wrote:
<br class="">
>
<br class="">
>[...]
<br class="">
>Now I've a question about writing and maintaining
benchmark code as we have multiple choices here:
<br class="">
>
<br class="">
>1) Use mercurial internal API (benefits: unlimited
possibilities without modifying mercurial and we can write
backward compatible benchmark with some 'if' statements and
benchmarks older versions, profits all ASV features
(profiling, memory benchmarks etc). Drawbacks: duplicate code
with contrib/perf.py, will break on internal API changes, need
more maintenance and more code to write/keep backward
compatible benchmarks).
<br class="">
>
<br class="">
>2) Use contrib/perf.py extension from the benchmarked
version of mercurial (benefits: de facto backward compatible,
drawbacks: limited to what the tool can do in previous
versions)
<br class="">
>
<br class="">
>3) Use contrib/perf.py extension from the latest version
of mercurial (benefits: no duplicate code, easier maintenance,
new benchmarks profits to both tools. Drawbacks: not backward
compatible for now (it works only for >= 3.7 versions)). We
could also implement some glue code, either in the tracking
tool or in contrib/perf.py, to list available benchmarks and
theirs parameters.
<br class="">
>
<br class="">
>At this stage of the project my advice it to use 1), but
we could also have a mix of 1) and 3). It depend on how fast
are internal api changes and on your short/mid/long term
objectives on the level of integration of the tracking tool.
<br class="">
</blockquote>
My first instinct tells me we should use approach #2, and teach
the benchmark suite what Mercurial release a given perf* command
first appeared in. This can be automated by reading the output
of `hg help -e perf`.
<br class="">
<br class="">
#1 sounds like a particularly bad idea given the deliberately
high level of churn in our internal APIs. If we try to maintain
an external tool that also tries to remember the details of that
churn, I'm pretty sure it will rot and fall out of use in short
order.
<br class="">
<br class="">
#3 has a similar (though lesser) disadvantage. I also don't see
a strong need to run new perf checks against old versions, with
the exception of perhaps the current stable release.
<br class="">
<br class="">
All that said, please convince me I'm wrong.:)
<br class="">
<br class="">
Regarding ASV: is there any way we can use it to instrument our
code (including profiling and memory tracking) without having it
call into the internals? Specifically, is there a way that we
can either feed it metrics that we output from the perf
extension, or integrate it into perf.py such that it gets
optionally imported (similar to how you can optionally use ipdb
via the --debugger flag, wired up in dispatch.py)?
<br class="">
</blockquote>
<br class="">
Ok if the internal API used in perf.py often change we can exclude
#1 (at least until the benchmark suite is included into mercurial
repository).
<br class="">
<br class="">
Nevertheless #1 can still be useful if you want to bisect a old
regression (and have no relevant benchmark code on the bisect
range), this can be done by just transforming the perf.py
benchmark into a regular asv benchmark (quite easy step), handle
potential backward compatibility then run the bisect.
<br class="">
<br class="">
With #2 we cannot fix potential "bugs" in perf.py when
benchmarking old revisions. For instance if an internal change
alter a benchmark that is no more computing the same thing than
above and it's fixed later on, you will keep having wrong values
even if you re-run the benchmarks (but this is not a big issue).
<br class="">
<br class="">
About #3, currently perf.py is not backward compatible with <
3.7 versions but it seems related to the extension code (not
benchmark code). After embedding commands.formatteropts and
commands.debugrevlogopts and remove the norepo parameter from
perflrucache, I can run some benchmarks on 3.0 versions.
<br class="">
<br class="">
For now, #2 (and #3) can be achieved with ASV by writing a "track"
benchmark that run the given perf.py command in a subprocess and
return the value.
<br class="">
<br class="">
To get a closer integration (and enable profiling features of ASV)
we could write a module that declare benchmarks (setup code and
body code) and let both asv and perf.py use it. Actually asv
benchmark themselves could be a candidate because this is just
declarative code (functions and class with naming convention and
"setup" attribute on the function), there is no dependency on asv
here.
<br class="">
<br class="">
Another idea could be to call perf.py commands programmatically in
ASV and eventually monkeypatch the gettimer() function to
transform the command into a regular asv "time benchmark".
<br class="">
<br class="">
As a first step, I think we can go for the simplest solution (ie.
#2) and keep in mind a future inclusion (or dedicated api) in
mercurial.
<br class="">
<br class="">
</blockquote>
<br class="">
<pre class="moz-signature" cols="72">--
Philippe Pepiot
<a class="moz-txt-link-freetext" href="https://www.logilab.fr/">https://www.logilab.fr</a></pre>
</div>
_______________________________________________<br class="">Mercurial-devel mailing list<br class=""><a href="mailto:Mercurial-devel@mercurial-scm.org" class="">Mercurial-devel@mercurial-scm.org</a><br class="">https://www.mercurial-scm.org/mailman/listinfo/mercurial-devel<br class=""></div></blockquote></div><br class=""></body></html>