timeit and (micro)benchmarks (was Re: [PATCH] performance: disable workaround for an old bug of Python)

Augie Fackler raf at durin42.com
Mon Aug 22 10:13:07 EDT 2016


On Sun, Aug 21, 2016 at 06:49:45PM +0200, Maciej Fijalkowski wrote:
> On Sun, Aug 21, 2016 at 5:19 PM, Yuya Nishihara <yuya at tcha.org> wrote:
> > On Sat, 20 Aug 2016 23:04:06 +0200, Maciej Fijalkowski wrote:
> >> Well... you're using timeit to do GC benchmarks - that's a terrible
> >> idea in general, and for GC especially. You're taking the minimum of
> >> 10 runs - of course the results will look random; you should take
> >> the average instead and do more runs.
> >>
> >> That said, this is what I would expect, but I still strongly disagree
> >> with the methodology.
> >
> > Good point. At least, I should try to calculate the average of many runs.
> >
> > I thought 100k markers would be large enough to show a significant
> > difference even with timeit on Python 2.6.9, as described in
> > 5817f71c2336, but apparently it wasn't.
>
> never use timeit

You've got a lot more experience with Python performance than I do - is
timeit still a poor choice even for little microbenchmarks? If so, what
would you recommend for (semi-disposable) microbenchmarks?
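
For concreteness, here is a minimal sketch of the "average over many
runs" approach suggested above, using only the standard library. The
workload, statement, and repeat/number counts are placeholder
assumptions for illustration, not taken from the actual patch:

    import timeit

    # Placeholder workload standing in for the real code under test;
    # the statement and sizes here are illustrative assumptions only.
    setup = "data = list(range(100000))"
    stmt = "sorted(data, reverse=True)"

    number = 100   # calls per sample
    repeat = 50    # samples; more samples give a more stable average
    # timeit.repeat() returns one total time per sample; dividing by
    # `number` gives an average per-call time for that sample.
    totals = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    per_call = [t / number for t in totals]

    # Report the mean and sample standard deviation across samples,
    # rather than the minimum that timeit's CLI reports by default.
    mean = sum(per_call) / len(per_call)
    var = sum((t - mean) ** 2 for t in per_call) / (len(per_call) - 1)
    print("mean %.6fs  stdev %.6fs  over %d samples"
          % (mean, var ** 0.5, len(per_call)))
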

Thanks!


