[PATCH] largefiles: use multiple threads for fetching largefiles remotely

Siddharth Agarwal sid at less-broken.com
Thu Oct 9 20:45:05 CDT 2014


On 10/09/2014 06:41 PM, Mads Kiilerich wrote:
> On 10/10/2014 03:10 AM, Siddharth Agarwal wrote:
>> On 10/09/2014 05:59 PM, Mads Kiilerich wrote:
>>> # HG changeset patch
>>> # User Mads Kiilerich <madski at unity3d.com>
>>> # Date 1412902786 -7200
>>> #      Fri Oct 10 02:59:46 2014 +0200
>>> # Node ID 483463c1d99ba5e5979b756fc3d1255f0a7bd854
>>> # Parent  a1eb21f5caea4366310e32aa85248791d5bbfa0c
>>> largefiles: use multiple threads for fetching largefiles remotely
>>>
>>> Largefiles are currently fetched with one request per file. Each
>>> request adds a constant per-file overhead, which gives poor network
>>> utilization.
>>>
>>> To mitigate that, run multiple worker threads when fetching
>>> largefiles remotely. The default is 2 threads, but it can be
>>> tweaked with the undocumented config setting
>>> largefiles._remotegetthreads.
>>
>> Why is this undocumented?
>
> Because I want to keep the documentation short and concise. We should 
> just pick a number/algorithm that works for everybody, and nobody 
> should need to care what it is or tweak it. But I like to have a 
> workaround, just in case someone for some reason wants another 
> number or wants to disable the feature entirely.
>
> /Mads

I think we should document this and make it an explicit knob that 
sysadmins can set for higher performance. For example, people on long 
fat pipes will probably want higher concurrency. TCP receive windows 
are tunable inside kernels for exactly this reason, and this setting 
should be too.
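To make the pattern concrete, here is a minimal sketch of the
worker-thread approach under discussion. This is not the patch itself:
fetchone and hashes are hypothetical stand-ins for the store's
per-file fetch and the list of largefile hashes to retrieve, and
numthreads plays the role of largefiles._remotegetthreads.

    import queue
    import threading

    def fetchmany(hashes, fetchone, numthreads=2):
        # Hypothetical sketch: fetch each largefile on one of
        # `numthreads` worker threads instead of serially.
        q = queue.Queue()
        for h in hashes:
            q.put(h)

        def worker():
            while True:
                try:
                    h = q.get_nowait()
                except queue.Empty:
                    return  # no work left for this thread
                fetchone(h)  # one remote request per largefile

        threads = [threading.Thread(target=worker)
                   for _ in range(numthreads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

In the extension itself the thread count would presumably be read from
the config, with something like
ui.configint('largefiles', '_remotegetthreads', 2).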
