[PATCH V3] run-tests: added '--json' functionality to store test result data in json file

Anurag Goel anurag.dsps at gmail.com
Sat Jun 28 07:31:46 CDT 2014


On Thu, Jun 26, 2014 at 9:05 PM, Pierre-Yves David <
pierre-yves.david at ens-lyon.org> wrote:

>
>
> On 06/24/2014 10:15 AM, Anurag Goel wrote:
>
>> # HG changeset patch
>> # User anuraggoel <anurag.dsps at gmail.com>
>> # Date 1403601145 -19800
>> #      Tue Jun 24 14:42:25 2014 +0530
>> # Node ID 49d16dcd51a0e4be7079e32486021ede9fd3b175
>> # Parent  4144906e24e4ec620fa7d815b8c2eff78ec0be6c
>> run-tests: added '--json' functionality to store test result data in json
>> file
>>
>> This patch adds a new '--json' option. While testing, if '--json'
>> is enabled, then the test result data gets stored in a newly created
>> "report.json" file in the following format.
>>
>> testreport ={
>>      "test-success.t": [
>>          ".",
>>          7.2604079246521
>>      ],
>>
>
> 1. do not use a list, use a json object:
>
>   {"result": "success",
>    "time": 7.26
>   }
>
>   it makes no sense to mix unrelated items in a list.
>
>
> 2. use real words for test results: "." and "!" are obscure; "success",
> "failure", and "error" are better.
>
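A rough sketch of what the suggested per-test object could look like; the
helper name and the marker-to-word mapping below are illustrative
assumptions, not part of the patch:

```python
import json

# Map single-character markers to the readable words suggested above.
# This helper and its mapping are illustrative, not from the patch.
MARKER_TO_RESULT = {'.': 'success', '!': 'failure', 's': 'skip'}

def outcome_entry(marker, seconds):
    """Build one per-test JSON object instead of a two-item list."""
    return {'result': MARKER_TO_RESULT[marker], 'time': seconds}

report = {'test-success.t': outcome_entry('.', 7.2604079246521)}
print(json.dumps(report, indent=4, sort_keys=True))
```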
>
>>      "test-failure.t": [
>>          "!",
>>          4.783581018447876
>>      ],
>>      "test-skipped.t": [
>>          "s",
>>          3.2986509799957275
>>      ]
>> }
>>
>> The JSON format contains key-value pairs, where each key is the test name
>> and the value is a list containing the success info and the timing data.
>>
>> This "report.json" file will later be accessed by an HTML/JavaScript file
>> to generate graphs.
>>
>> diff -r 4144906e24e4 -r 49d16dcd51a0 tests/run-tests.py
>> --- a/tests/run-tests.py        Mon Jun 23 15:02:10 2014 +0530
>> +++ b/tests/run-tests.py        Tue Jun 24 14:42:25 2014 +0530
>> @@ -58,6 +58,7 @@
>>   import killdaemons as killmod
>>   import Queue as queue
>>   import unittest
>> +import json
>>
>
> Still won't work on 2.4
>

Yes, it won't work on 2.4. As we discussed in V2 of this patch, we will
gracefully disable the option on 2.4.

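A minimal sketch of what gracefully disabling the option could look like on
interpreters without the json module (such as Python 2.4); the function name
and message text are assumptions, not taken from the patch:

```python
# Fall back to None when the json module is unavailable (Python 2.4).
try:
    import json
except ImportError:
    json = None

def checkjson(options):
    """Reject --json when the json module could not be imported."""
    if getattr(options, 'json', False) and json is None:
        raise SystemExit('--json is not supported on this Python version')
```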

>
>>   processlock = threading.Lock()
>>
>> @@ -185,6 +186,8 @@
>>                " (default: $%s or %d)" % defaults['timeout'])
>>       parser.add_option("--time", action="store_true",
>>           help="time how long each test takes")
>> +    parser.add_option("--json", action="store_true",
>> +        help="store test result data in 'report.json' file")
>>       parser.add_option("--tmpdir", type="string",
>>           help="run tests in the given temporary directory"
>>                " (implies --keep-tmpdir)")
>> @@ -453,6 +456,11 @@
>>                   return
>>
>>               success = False
>> +
>> +            # This carries the test success info for the test case,
>> +            # where '!' represents failure, 's' represents skipped, and
>> +            # '.' represents success.
>> +            successinfo = '!'
>>               try:
>>                   self.runTest()
>>               except KeyboardInterrupt:
>> @@ -460,6 +468,7 @@
>>                   raise
>>               except SkipTest, e:
>>                   result.addSkip(self, str(e))
>> +                successinfo = 's'
>>               except IgnoreTest, e:
>>                   result.addIgnore(self, str(e))
>>               except WarnTest, e:
>> @@ -486,9 +495,10 @@
>>                   success = False
>>
>>               if success:
>> +                successinfo = '.'
>>                   result.addSuccess(self)
>>           finally:
>> -            result.stopTest(self, interrupted=self._aborted)
>> +            result.stopTest(self, successinfo, interrupted=self._aborted)
>>
>>       def runTest(self):
>>           """Run this test instance.
>> @@ -1075,6 +1085,13 @@
>>           self.warned = []
>>
>>           self.times = []
>> +
>> +        # Stores test file names in a list
>> +        self.testfiles = []
>> +
>> +        # Stores success info and timing data for each test case
>> +        self.outcome = []
>> +
>>
>
> Why do you need two lists here?
>
>
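One way to address the reviewer's point is a single dict keyed by test name
that holds both the success info and the timing, which also makes the later
zip() call unnecessary. The class and method names here are illustrative
only:

```python
import time

# Sketch: one dict replaces the parallel testfiles/outcome lists.
class ResultStore(object):
    def __init__(self):
        self.testreport = {}   # testname -> (successinfo, duration)
        self._started = {}

    def start(self, name):
        self._started[name] = time.time()

    def stop(self, name, successinfo):
        duration = time.time() - self._started.pop(name)
        self.testreport[name] = (successinfo, duration)
```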
>            self._started = {}
>>
>>       def addFailure(self, test, reason):
>> @@ -1167,10 +1184,14 @@
>>
>>           self._started[test.name] = time.time()
>>
>> -    def stopTest(self, test, interrupted=False):
>> +    def stopTest(self, test, successinfo, interrupted=False):
>>           super(TestResult, self).stopTest(test)
>>
>> -        self.times.append((test.name, time.time() - self._started[test.name]))
>> +        testtime = time.time() - self._started[test.name]
>> +
>> +        self.testfiles.append(test.name)
>> +        self.outcome.append((successinfo, testtime))
>> +        self.times.append((test.name, testtime))
>>           del self._started[test.name]
>>
>>           if interrupted:
>> @@ -1341,9 +1362,19 @@
>>                   os.environ['PYTHONHASHSEED'])
>>           if self._runner.options.time:
>>               self.printtimes(result.times)
>> +        if self._runner.options.json:
>> +            self.getjsonfile(result.testfiles, result.outcome)
>>
>>           return result
>>
>> +    def getjsonfile(self, testfiles, outcome):
>> +        """Store test result info in json format in report.json file."""
>> +
>> +        fp = open("report.json", "w")
>>
>
> This will write the file in the current working directory, not the test
> directory. That sounds like a bad idea.
>
>
>
>> +        testdata = dict(zip(testfiles, outcome))
>> +        fp.writelines(("testreport =", (json.dumps(testdata, indent=4))))
>> +        fp.close()
>> +
>>
>
> You need to make sure the file is closed, using a try: … finally: … clause.
>
>
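A sketch addressing both review points above: write report.json into an
explicit output directory instead of the current working directory, and
guarantee the file is closed with try/finally. The 'outputdir' parameter is
a stand-in name, not from the patch:

```python
import json
import os

def writejsonreport(outputdir, testdata):
    """Write the test report, closing the file even if dumps fails."""
    fp = open(os.path.join(outputdir, 'report.json'), 'w')
    try:
        fp.write('testreport =' + json.dumps(testdata, indent=4))
    finally:
        fp.close()
```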
>        def printtimes(self, times):
>>           self.stream.writeln('# Producing time report')
>>           times.sort(key=lambda t: (t[1], t[0]), reverse=True)
>> diff -r 4144906e24e4 -r 49d16dcd51a0 tests/test-run-tests.t
>> --- a/tests/test-run-tests.t    Mon Jun 23 15:02:10 2014 +0530
>> +++ b/tests/test-run-tests.t    Tue Jun 24 14:42:25 2014 +0530
>> @@ -201,3 +201,37 @@
>>     # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
>>     python hash seed: * (glob)
>>     [1]
>> +
>> +test for --json
>> +==================
>> +
>> +  $ $TESTDIR/run-tests.py --with-hg=`which hg` --json
>> +
>> +  --- $TESTTMP/test-failure.t
>> +  +++ $TESTTMP/test-failure.t.err
>> +  @@ -1,2 +1,2 @@
>> +     $ echo babar
>> +  -  rataxes
>> +  +  babar
>> +
>> +  ERROR: test-failure.t output changed
>> +  !.
>> +  Failed test-failure.t: output changed
>> +  # Ran 2 tests, 0 skipped, 0 warned, 1 failed.
>> +  python hash seed: * (glob)
>> +  [1]
>> +
>> +  $ cat report.json
>> +  testreport ={
>> +      "test-[a-z]{7}[\.]t": [\[] (re)
>> +          "[\.|!]",  (re)
>> +          [\d\.]* (re)
>> +      ],
>> +      "test-[a-z]{7}[\.]t": [\[] (re)
>> +          "[\.|!]",  (re)
>> +          [\d\.]* (re)
>> +      ]
>> +  } (no-eol)
>>
>
> I'm sure you can reduce this to a less generic output.
>

I have changed the JSON format as you suggested above, but I am facing
issues in writing the test for it.

Firstly, there is uncertainty in the ordering of the test file runs:
sometimes "test-success.t" runs first and sometimes "test-failure.t"
runs first.

Secondly, the keys in the JSON output appear in no particular order.

Sometimes I get this:
{
    "result": "success",
    "time": 2.04
}

and sometimes I get this:
{
    "time": 2.04,
    "result": "success"
}

As you can see, the order is different. Could you please help me with
these issues?

-- 
> Pierre-Yves David
>


More information about the Mercurial-devel mailing list