[PATCH 04 of 11] httppeer: use compression engine API for decompressing responses
Pierre-Yves David
pierre-yves.david at ens-lyon.org
Tue Jan 10 16:56:58 UTC 2017
On 11/26/2016 11:19 AM, Yuya Nishihara wrote:
> On Sun, 20 Nov 2016 14:23:41 -0800, Gregory Szorc wrote:
>> # HG changeset patch
>> # User Gregory Szorc <gregory.szorc at gmail.com>
>> # Date 1479678953 28800
>> # Sun Nov 20 13:55:53 2016 -0800
>> # Node ID da1caf5b703a641f0167ece15fdff167a1343ec1
>> # Parent 0bef0b8fb9f44ed8568df6cfeabf162aa12b211e
>> httppeer: use compression engine API for decompressing responses
>
> [...]
>
>> +def decompressresponse(response, engine):
>>      try:
>> -        for chunk in util.filechunkiter(f):
>> -            while chunk:
>> -                yield zd.decompress(chunk, 2**18)
>> -                chunk = zd.unconsumed_tail
>
> 65bd4b8e48bd says "decompress stream incrementally to reduce memory usage",
> but util._zlibengine has no limit for the decompressed size. If the 256kB
> limit is still valid, we should port it to _zlibengine.
It does not look like this ever got addressed (I might of course be
wrong) — should we create a ticket to track it?
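For context, the pattern Yuya is referring to can be sketched in plain Python. This is not Mercurial's actual `_zlibengine` code, just an illustration of the incremental-decompression idiom from the removed hunk: passing a `max_length` to `decompressobj.decompress()` bounds how much output is produced per step, with `unconsumed_tail` holding the not-yet-decompressed remainder.

```python
import zlib

def decompressiter(chunks, maxlen=2**18):
    # Sketch only: decompress a stream of compressed chunks
    # incrementally, yielding at most `maxlen` (256kB) bytes per
    # step to bound memory usage, mirroring the quoted
    # zd.unconsumed_tail loop.
    zd = zlib.decompressobj()
    for chunk in chunks:
        while chunk:
            yield zd.decompress(chunk, maxlen)
            chunk = zd.unconsumed_tail
    tail = zd.flush()
    if tail:
        yield tail

# Round-trip check: every yielded piece stays within the limit.
data = zlib.compress(b"x" * (10**6))
pieces = list(decompressiter([data]))
assert b"".join(pieces) == b"x" * (10**6)
assert all(len(p) <= 2**18 for p in pieces)
```

Without the `max_length` argument (as in an engine that just calls `zd.decompress(chunk)`), a single highly compressible chunk can expand into one very large bytestring, which is exactly the memory concern from 65bd4b8e48bd.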
Cheers,
--
Pierre-Yves David