Making large file handling a user-transparent, final solution

dukeofgaming dukeofgaming at gmail.com
Sun Nov 30 19:39:37 CST 2014


Hi,

For several years I have pondered whether or not to use the largefiles
extension, and now that a practical problem (large binary assets) is forcing
me to look at it again, I see two fundamental problems with its usage:


1) It is not transparent to the user: it requires a static size limit, which
in my opinion should be dynamic (i.e. large files could simply be cached);
file revisions are not handled by binary diffing; and it requires manual
configuration of file extensions (when, IMHO, detecting binary content could
be the sensible default, with the user only specifying additional files).
See the configuration sketch after this list.

2) It is not enabled by default: this leads to services like Bitbucket not
supporting it, and to large file handling being disregarded as a real
problem. Yes, it is said that it breaks the D in DVCS, but I think that
would not be the case if the default behavior were to download all binary
assets. If binary file revisions were handled by binary diffing, this would
probably be less of a problem, and likewise if, when cloning, Mercurial
simply suggested downloading only the latest version of each file and
pointing to a cached resource instead of reconstructing history from a
separate store.
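
To illustrate point 1, this is roughly the kind of hgrc configuration the
largefiles extension expects the user to maintain by hand today (the size
threshold and patterns below are invented example values, not
recommendations):

    [extensions]
    largefiles =

    [largefiles]
    # files larger than this many megabytes are added as largefiles
    minsize = 10
    # file patterns always treated as largefiles, listed manually
    patterns = *.png
        *.psd
        content/assets/*

Compare that with a hypothetical default where Mercurial detects binary
content on its own and the user only overrides it when needed.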

Large file handling is a fundamental problem in DVCS technology, and I
hereby propose that Mercurial exploit the solution it already has, and stop
treating large file support as a last resort and start treating it as a
feature.

Sure, there are major technical obstacles to this... but imagine... if
Mercurial *just worked* for binary/large files.

Thanks,

David