Status of speed regressions...

Matt Mackall mpm at selenic.com
Sat Nov 20 11:24:16 CST 2010


On Sat, 2010-11-20 at 17:16 +0100, Jason Harris wrote:
> On Nov 19, 2010, at 11:03 PM, Matt Mackall wrote:
> > In other words, what I wanted to see is an email a month ago during our
> > code freeze that said "Hi, doing my routine pre-release test for 1.7 and
> > I see that performance for <foo> has regressed 20%, here's how I
> > reproduce it without MacHG". Then you're much more likely to get a
> > response like "Good spotting, thanks, please test this fix for 1.7."
> > 
> > That's the right way. Your email is the opposite of that.
> 
> What I wanted to see from you would have been: "Ohh... We are not aware of any
> speed regressions. Could you please send us details as soon as possible. As
> always, if it's possible for you, we would like you to do pre-release testing.
> Links and instructions are here XYZ. Cheers, Matt..."
> 
> Now *that* would have been professional, friendly and much more likely to elicit
> a positive response from me.

Well that's exactly how I would interact with an end user. If you want
to be treated like a random end user, great. But then you don't get to
say "this is a big deal for MacHG, the quality of which reflects on
Mercurial for lots of Mac users", you only get to say "this is a big
deal for my private project that's of no special concern to you". Do you
see the difference?

> > Now please give us some specifics.
> 
> Well you yourself noticed there was a startup time regression.
> 
> http://www.selenic.com/pipermail/mercurial-devel/2010-November/025903.html
> 
> Thus what transpired is that I thought this was a known issue and so I did some
> quick tests to see if I could reproduce these (which it seemed I could), and
> then emailed basically asking what is the status of these issues. (Spending an
> hour going through and testing things which you all knew about would obviously
> be a waste of time on my part...)

I see. Well then I was mostly off-base, sorry. Your note that "I think
others have noticed this as well" reads as "this issue may or may not
have been discussed here", which is a large red flag for "I've been
sitting on a bug report". But see below...

> Very happily, it turns out that with repeated testing I can no longer get such
> large regressions. In fact it looks like hg 1.7.1 is slightly faster than hg
> 1.5.4 for me. This is great news!

Well that's very strange, because nothing has changed in the area we were
talking about yet.

> So the details:  In my testing I can't always get stable times. I have seen the
> first execution in a series go some 30% slower sometimes but then after repeated
> runs things speed up.  I don't know how to reliably reproduce this, so I guess
> it's best to ignore this data point for now. (In the results below I include one
> such example of this different timing...)

Looks like what you're measuring actually has nothing to do with our
"startup time" discussion, which is about the fixed and fairly small
cost of just loading Python and the Mercurial core (i.e. timing the
version command). Some i18n tweaks have recently made that jump, but
that's still in the neighborhood of 0.05 seconds here.
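
For reference, that fixed cost can be sanity-checked with a quick loop like the one below. This is only a sketch: "true" is a portable stand-in command, and on a real install you would set CMD="hg version" to measure the actual interpreter-plus-core load time.

```shell
# Time several runs of a fixed-cost command (sketch; replace "true"
# with "hg version" to measure Mercurial's startup overhead).
CMD="true"                # stand-in; use CMD="hg version" on a real install
for i in 1 2 3 4 5; do
    start=$(date +%s%N)   # nanoseconds (GNU date)
    $CMD > /dev/null 2>&1
    end=$(date +%s%N)
    echo "run $i: $(( (end - start) / 1000000 )) ms"
done
```

Running it a few times and discarding the first (cold-cache) result gives a more stable baseline than a single `time` invocation.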

> However to indicate I had strange timings you can see the following:
> 
> [Bolt:QuickSilver/Development/sandbox/openOffice] openOffice 262741(262741) $ hg version
> Mercurial Distributed SCM (version 1.7+20101101)
> [Bolt:QuickSilver/Development/sandbox/openOffice] openOffice 262741(262741) $ time hg status --all > /dev/null
> run 1: real 0m3.803s  user 0m1.557s  sys 0m2.105s
> run 2: real 0m3.156s  user 0m1.531s  sys 0m1.607s
> run 3: real 0m3.587s  user 0m1.546s  sys 0m2.020s
> run 4: real 0m3.151s  user 0m1.531s  sys 0m1.603s
> run 5: real 0m3.525s  user 0m1.536s  sys 0m1.965s
> run 6: real 0m3.052s  user 0m1.517s  sys 0m1.531s
> 
> Note the oscillation in times. (the sys times are suspiciously
> changing in these runs). Perhaps one of the crew members has a better
> idea of what is happening here?

Very odd, might be a cache effect of some sort.

The regression from 1.5 to 1.6 may have nothing to do with the
improvement from 1.6 to 1.7, in which case we may be able to fix it and
be faster still. Perhaps you can locate the 1.6 regression with bisect.
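
If you do try bisecting, "hg bisect --command" can drive the search automatically once you have a small pass/fail script. Below is a minimal sketch of such a helper; the threshold value is hypothetical, "true" is a portable stand-in, and for real use CMD would point at the operation that regressed (e.g. "hg status --all" in the affected repo):

```shell
# Bisect helper sketch: time one run of $CMD and classify it as
# good (fast) or bad (slow) against a threshold. In a real script,
# ending with "exit $exit_code" lets "hg bisect --command" mark
# each revision automatically.
CMD="${CMD:-true}"        # stand-in; set to the regressed operation
THRESHOLD_MS=500          # hypothetical cutoff between "fast" and "slow"
start=$(date +%s%N)       # nanoseconds (GNU date)
$CMD > /dev/null 2>&1
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
if [ "$elapsed_ms" -lt "$THRESHOLD_MS" ]; then
    exit_code=0           # under threshold: bisect marks revision good
else
    exit_code=1           # over threshold: bisect marks revision bad
fi
echo "elapsed ${elapsed_ms}ms -> exit $exit_code"
```

Driving the search would then look like: hg bisect --reset; hg bisect --good 1.5; hg bisect --bad 1.6; hg bisect --command "sh check-speed.sh" (release tags assumed; since a single timed run is noisy, averaging several runs per revision would be more robust).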

-- 
Mathematics is the supreme nostalgia of our time.



