Test-suite performance graphs

Anurag Goel anurag.dsps at gmail.com
Mon Sep 22 19:00:28 CDT 2014


On Tue, Sep 23, 2014 at 3:45 AM, Mads Kiilerich <mads at kiilerich.com> wrote:

> On 09/21/2014 01:25 AM, Anurag Goel wrote:
>
>> Hi all,
>>
>> For the past few weeks, I have been working on plotting graphs using
>> test result data.
>>
>> After discussing with Pierre-Yves, I came up with this:
>> http://web.iiit.ac.in/~anurag.goel/mercurial/test-suite/
>>
>> For plotting the graphs, I used the Google Charts API (
>> https://developers.google.com/chart/ ). It is free to use for both
>> commercial and non-commercial purposes.
>>
>> All four graphs are ordered by duration.
>> 1. Graph 1 compares realtime and cputime (cuser + csys), ordered by
>> realtime duration.
>> 2. Graph 2 shows the realtime performance of the test-suite.
>> 3. Graph 3 shows the 'cuser' time (time spent by the CPU in user mode)
>> for each test.
>> 4. Graph 4 shows the 'csys' time (time spent by the CPU in system mode)
>> for each test.
>>
>> Please give me your valuable feedback/suggestions if any further
>> changes are required.
>>
>
> What can these graphs tell us now or in the future?
>
Currently these are just introductory graphs that give a visual idea of
the comparison between cputime and realtime. I think the main purpose of
using graphs is to compare current test performance with earlier runs.


> Do you have plans for adding other graphs that can tell us other things?
>
Yes, as long as we keep adding test result data to our "report.json"
file, we can plot more meaningful graphs in the future. What that data
should be is something I need to discuss before starting on it myself.


> I can imagine a graph that could help us spot performance changes: A
> 3-dimensional graph with a "horizontal" plane with revision number on the X
> axis (must be linear) and test name (must be static). On that plane we plot
> the duration of the test run compared to the previous test run - but only
> if the test file didn't change. I can imagine that graph can help us spot
> trends in execution time.
>
I think the above idea requires accessing multiple "report.json" files
at a time. I will work on that to get a clearer view of this.
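The core of that idea (per-test duration deltas between consecutive runs) could be sketched like this. Again a hedged sketch: the "time" field name and the report shape are assumptions, and the "skip tests whose file changed" filter from the suggestion above is left out for brevity.

```python
def duration_deltas(old, new, field="time"):
    # Change in duration per test between two report.json snapshots;
    # only tests present in both reports are compared. Field names
    # are assumptions based on this thread.
    return {name: float(res[field]) - float(old[name][field])
            for name, res in new.items() if name in old}

# Hypothetical snapshots in the assumed report.json shape.
run1 = {"test-a.t": {"time": "2.0"}, "test-b.t": {"time": "0.5"}}
run2 = {"test-a.t": {"time": "2.6"}, "test-b.t": {"time": "0.5"},
        "test-new.t": {"time": "1.0"}}

# Flag tests that got noticeably slower between the two runs.
regressions = {t: d for t, d in duration_deltas(run1, run2).items()
               if d > 0.1}
print(regressions)
```

Computing these deltas across a whole series of reports would give the Z values for the revision-by-test plane described above.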

> revsetbenchmarks.py would also be an obvious candidate for performance
> graphs.
>

/Mads
