EOF abort and forcing uncompressed serving
matthewturk at gmail.com
Tue Jan 18 14:26:41 CST 2011
I've been attempting to debug an issue with serving hgweb, using a
recent version of hg on both ends. Some background that might help:
* hgweb is running on shared hosting; the problem shows up with both
fcgi and Passenger hosting of the WSGI app
* The repository in question is ~40 MB post-clone
* It's the result of an svn clone, and it contains 16 MB of history
in a single file
The issue shows up as an aborted transaction with a premature EOF:
requesting all changes
adding file changes
abort: premature EOF reading chunk (got 663770 bytes, expected 3987505)
With --debug on, the error always occurs while adding changes to the
single file with the large history. I believe the server is killing
the process when it hits some timeout or CPU-time limit, which usually
happens while transferring that file. Depending on the bandwidth of
the connection, the clone sometimes fails and sometimes succeeds.
However, cloning with the --uncompressed option succeeds every time.
For this project, bandwidth is cheaper than not being able to clone
at all.
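For reference, the workaround above is just the standard clone flag
(the URL here is a placeholder, not the real repository):

```
$ hg clone --uncompressed http://example.com/hg/repo
```

In this mode the server streams the revlog files more or less directly
instead of computing a compressed changegroup, which trades extra
bandwidth for far less server-side CPU time.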
After setting server.uncompressed in the hgweb config file, it looks
to me like that option only *allows* clients to clone via the
uncompressed stream; it doesn't require it. If forcing uncompressed
serving is a reasonable way to keep the hgweb process alive, is there
a way to make the server serve *only* uncompressed clones? And if this
merely sidesteps some other underlying issue, does anyone have
suggestions for how to work around it?
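For concreteness, this is the hgweb configuration I tried (a minimal
sketch; as far as I can tell it only *permits* streaming clones, it
does not force them):

```
[server]
uncompressed = True
```

Clients still have to opt in with `hg clone --uncompressed`.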
Thanks very much for any ideas you might have.