[PATCH 04 of 11] httppeer: use compression engine API for decompressing responses
Pierre-Yves David
pierre-yves.david at ens-lyon.org
Tue Jan 10 13:46:18 EST 2017
On 01/10/2017 07:37 PM, Gregory Szorc wrote:
> On Tue, Jan 10, 2017 at 8:56 AM, Pierre-Yves David
> <pierre-yves.david at ens-lyon.org <mailto:pierre-yves.david at ens-lyon.org>>
> wrote:
>
>
>
> On 11/26/2016 11:19 AM, Yuya Nishihara wrote:
>
> On Sun, 20 Nov 2016 14:23:41 -0800, Gregory Szorc wrote:
>
> # HG changeset patch
> # User Gregory Szorc <gregory.szorc at gmail.com
> <mailto:gregory.szorc at gmail.com>>
> # Date 1479678953 28800
> # Sun Nov 20 13:55:53 2016 -0800
> # Node ID da1caf5b703a641f0167ece15fdff167a1343ec1
> # Parent 0bef0b8fb9f44ed8568df6cfeabf162aa12b211e
> httppeer: use compression engine API for decompressing responses
>
>
> [...]
>
> +def decompressresponse(response, engine):
>      try:
> -        for chunk in util.filechunkiter(f):
>              while chunk:
>                  yield zd.decompress(chunk, 2**18)
>                  chunk = zd.unconsumed_tail
>
> 65bd4b8e48bd says "decompress stream incrementally to reduce
> memory usage",
> but util._zlibengine has no limit for the decompressed size. If
> the 256kB
> limit is still valid, we should port it to _zlibengine.
>
>
> It does not look like this ever got addressed (I might of course be
> wrong). Should we create a ticket to track it?
>
>
> This was addressed in
> https://www.mercurial-scm.org/repo/hg/rev/98d7636c4729
Ha right, I looked at that code before asking, but got its meaning
wrong. Sorry for the noise :-/
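
For context, the incremental decompression pattern the old code used (and
that 98d7636c4729 restored in the compression engine API) caps each
decompressed chunk at 2**18 bytes (256 kB) so a hostile or huge response
cannot blow up memory. A rough standalone sketch of that pattern, using a
hypothetical `decompresschunks` helper rather than Mercurial's actual
code:

```python
import zlib

def decompresschunks(chunks, limit=2**18):
    # Incrementally decompress an iterable of compressed byte chunks,
    # yielding at most `limit` decompressed bytes at a time (the 256 kB
    # cap discussed above) to bound memory usage.
    zd = zlib.decompressobj()
    for chunk in chunks:
        while chunk:
            yield zd.decompress(chunk, limit)
            # Input that was not consumed because the output cap was hit
            # is stored in unconsumed_tail; feed it back in.
            chunk = zd.unconsumed_tail
    tail = zd.flush()
    if tail:
        yield tail
```

Each call to ``decompress(chunk, limit)`` stops once it has produced
`limit` output bytes, leaving the rest of the input in
``unconsumed_tail``, so the inner loop drains the stream without ever
holding more than ~256 kB of decompressed data at once.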
--
Pierre-Yves David