HG clone network performance

Patrick Mézard pmezard at gmail.com
Wed Dec 19 12:09:55 CST 2007


Jonathan S. Shapiro wrote:
> On Wed, 2007-12-19 at 09:49 -0600, Matt Mackall wrote:
>> On Wed, Dec 19, 2007 at 09:38:48AM +0530, dhruva wrote:
>>> Hi Matt,
>>>
>>> On Dec 18, 2007 8:58 PM, Matt Mackall <mpm at selenic.com> wrote:
>>>> Yes, but often CPU time isn't the bottleneck. And encryption is
>>>> relatively fast here.
>>> Do you feel it is worthwhile trying multiple connection points to
>>> transfer changesets in parallel for a clone operation? From what
>>> little I know, I have found multi threaded download clients to be
>>> faster than their single threaded counterparts. Could the same analogy
>>> be extended to this case?
>> If you've got multiple connections open to the same source, it's
>> called "cheating". In particular, it's bypassing the TCP fair share
>> bandwidth allocation scheme.
> 
> Yes. And results will vary. Some routers actually try to penalize this
> behavior. Others simply aggregate them for fair share determination.
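
To put rough numbers on the fair-share point (a back-of-the-envelope
sketch, not anything from hg itself, and an equal per-flow split is of
course a simplification of real TCP behaviour):

    # Approximate a link that divides its capacity equally per TCP flow,
    # not per host: a client opening k connections takes roughly k shares.
    def fair_share(capacity, flows_per_client):
        per_flow = capacity / float(sum(flows_per_client))
        return [n * per_flow for n in flows_per_client]

    # Three clients, one connection each, on a 10 Mb/s link:
    print fair_share(10, [1, 1, 1])   # ~[3.3, 3.3, 3.3]
    # Same link, but the first client opens 4 parallel connections:
    print fair_share(10, [4, 1, 1])   # ~[6.7, 1.7, 1.7]

So the parallel downloader does finish sooner, but only by squeezing the
other flows sharing the path, which is what "cheating" means here.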
> 
> 
> I want to suggest that the network performance issue is a matter of
> perceptions, not reality. In our shop, we have a repo that includes a
> bunch of the GNU tool chain tarballs. Moving these over a DSL link (even
> in the "fast" direction) takes a while, and it can be disconcerting when
> the pull/fetch/clone doesn't report progress for so long.
> 
> A "fix" would be for hg to do a bit of progress reporting. On larger
> files, have it report progress increments and file names (at least under
> -v) as transfer occurs. Using -v --debug now gets *some* progress
> information, but not very much.

Thomas proposed a simple progress bar implementation, as well as a more sophisticated extension version, at the end of June 2007. I don't really know what happened to it.
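
The basic idea is small. A minimal sketch of it (mine, not Thomas's actual
patch, and not hooked into hg's transfer code) would be to wrap whatever
file-like object the clone reads from and emit a status line every few
hundred kilobytes:

    import sys

    class progressreader(object):
        """Wrap a file-like object and report bytes read on stderr."""
        def __init__(self, fp, total=None, step=512 * 1024):
            self.fp = fp
            self.total = total        # expected size in bytes, if known
            self.step = step          # report interval in bytes
            self.count = 0
            self.nextreport = step

        def read(self, size=-1):
            data = self.fp.read(size)
            self.count += len(data)
            if self.count >= self.nextreport:
                if self.total:
                    sys.stderr.write("transferred %d/%d bytes\r"
                                     % (self.count, self.total))
                else:
                    sys.stderr.write("transferred %d bytes\r" % self.count)
                sys.stderr.flush()
                self.nextreport += self.step
            return data

The extension version would presumably show something nicer than a raw
byte count, but even this much would address the "no output for minutes"
complaint on a slow link.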
 

