[PATCH STABLE] setdiscovery: fix hang when #heads>200 (issue2971)

Peter Arrenbrecht peter.arrenbrecht at gmail.com
Thu Aug 25 15:01:09 CDT 2011


On Thu, Aug 25, 2011 at 9:53 PM, Matt Mackall <mpm at selenic.com> wrote:
> On Thu, 2011-08-25 at 21:37 +0200, Peter Arrenbrecht wrote:
>> # HG changeset patch
>> # User Peter Arrenbrecht <peter.arrenbrecht at gmail.com>
>> # Date 1314300314 -7200
>> # Branch stable
>> # Node ID 95abeecfd5d08d3f45119ead5c9df8e1650dbbee
>> # Parent  4a43e23b8c55b4566b8200bf69fe2158485a2634
>> setdiscovery: fix hang when #heads>200 (issue2971)
>
> Queued, thanks.
>
>> When setting up the next sample, we always add all of the heads, regardless
>> of the desired max sample size. But if the number of heads exceeds this
>> size, then we don't add any more nodes from the still undecided set.
>> (This is debatable per se, and I'll investigate it, but it's how we designed
>> it at the moment.)
>>
>> The bug was that we always added the overall heads, not the heads of the
>> remaining undecided set. Thus, if #heads>200 (desired sample size), we
>> did not make progress any longer.
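A minimal sketch of the bug described above, assuming a simplified stand-in for the sample-building logic (the helper and names here are hypothetical, not the actual Mercurial code): heads are always included in the sample, then the sample is topped up from the undecided set only until it reaches the desired size.

```python
# Hypothetical sketch of the setdiscovery sampling bug; not the real
# Mercurial implementation, just an illustration of the failure mode.

def build_sample(heads, undecided, desired=200):
    """All heads go in unconditionally; then top up from `undecided`
    until the sample reaches `desired` nodes."""
    sample = set(heads)
    for node in undecided:
        if len(sample) >= desired:
            break
        sample.add(node)
    return sample

# Buggy behaviour: pass the *overall* heads every round. With 260 heads
# and a desired size of 200, the top-up loop never runs, so no nodes
# from the undecided set are ever sampled and discovery cannot progress.
overall_heads = ['h%d' % i for i in range(260)]
undecided = ['u%d' % i for i in range(1000)]
buggy = build_sample(overall_heads, undecided)
assert buggy == set(overall_heads)  # identical sample every round

# Fixed behaviour: pass the heads *of the remaining undecided set*,
# which shrinks each round, so the sample reflects progress and the
# top-up can pull in fresh undecided nodes.
undecided_heads = ['u%d' % i for i in range(50)]
fixed = build_sample(undecided_heads, undecided)
assert len(fixed) == 200
```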
>
> But it seems like this is still insufficiently probabilistic?
>
> What will happen in legitimate cases where there are >200 heads to
> discover?

The problem occurs if local has many heads. We'd just send all of
these heads in one big request, on the assumption, I guess, that we
would have to send them at some point anyway. That is where I think we
might be a bit over-eager: sending a lower node from which a lot of
heads dangle would eliminate all of them if it were reported unknown.

So it could lead to a big request, but not to an error. This is, in
fact, what happens in my test: all of the 260 heads dangle from a
common root on branches of length 3. So we see 3 requests of
260 nodes each instead of just 200.
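The test topology above can be sketched as follows (a hypothetical model, not the test itself): 260 branches of length 3 hang off one common root, and if the remote reported that root unknown, every node above it, including all 260 heads, would be eliminated at once.

```python
# Hypothetical model of the test scenario: 260 heads, each at the tip
# of a 3-node branch dangling from a single common root.
parents = {'root': None}
heads = []
for b in range(260):
    prev = 'root'
    for depth in range(3):
        node = 'b%d-%d' % (b, depth)
        parents[node] = prev
        prev = node
    heads.append(prev)

def descendants(root, parents):
    """Nodes (including `root`) whose parent chain reaches `root`."""
    out = set()
    for node in parents:
        cur = node
        while cur is not None:
            if cur == root:
                out.add(node)
                break
            cur = parents[cur]
    return out

# If the remote says `root` is unknown, every branch node is unknown
# too, so one answer would rule out all 260 heads in a single step.
unknown = descendants('root', parents)
assert all(h in unknown for h in heads)
assert len(unknown) == 1 + 260 * 3  # root plus three nodes per branch
```

This is what makes the "send all heads" strategy look over-eager: a single lower node can carry as much information as hundreds of heads.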

-parren
