[RFC] largefiles - "system-wide" cache
mpm at selenic.com
Wed Oct 19 16:26:13 CDT 2011
On Wed, 2011-10-19 at 17:56 +0200, Na'Tosha Bard wrote:
> 2011/10/19 Carter, Eli <Eli.Carter at tektronix.com>
> > > Seems reasonable. Actually I'd like someone to explain to me why we need
> > > two complete copies of all the largefiles on the local system.
> > > ;-) If that is in fact the case... so far I've mainly been reading the
> > > code rather than actually *using* largefiles.
> > The purpose is that when you do another clone of the repo from the
> > server, you don't have to download those 1.5GB files all over again.
> > And the intent, from what I understand, was for these to be hardlinked so
> > you don't have multiple copies of it, just multiple references.
> > I don't know how well that works on Windows machines.
> I haven't looked into it in great detail, but I think it works fine on
> Windows machines with Kbfiles (which works fundamentally the same way).
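The hardlink behavior described above can be sketched as follows. This is a minimal illustration, not the actual largefiles implementation; the helper name is hypothetical. The key point is that a hard link shares the blocks on disk, and a plain copy is the fallback where links aren't supported (e.g. some Windows filesystems, or a cache and store on different volumes).

```python
import os
import shutil

def link_or_copy(src, dst):
    """Hard-link src to dst so the data exists once on disk;
    fall back to a real copy when hard links are unavailable
    (cross-device link, or a filesystem without link support)."""
    try:
        os.link(src, dst)       # multiple references, one copy of the data
    except OSError:
        shutil.copy2(src, dst)  # fallback: a genuine second copy
```

With this approach, "two complete copies" of a largefile on the local system usually costs the disk space of one.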
> > As (I think) it should be implemented:
> > ~/.largefiles (or ~/.hg-largefiles) should be a cache and only a cache.
> > $repo/.hg/largefiles should be a store.
> > I think we really need some way to say 'this repo must maintain a complete
> > store' so there is a way to assert that 'this repo is always complete'. As
> > it stands, I worry about data getting lost in a convoluted
> > backup-and-restore shuffle.
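The cache-versus-store split proposed above might look roughly like this. This is a hypothetical sketch of the lookup order, not the extension's real code: the repo store (`$repo/.hg/largefiles`) is authoritative and must stay complete, while the user-wide cache is consulted only to avoid re-downloading; the `download` callback and all paths here are assumptions for illustration.

```python
import os
import shutil

def fetch_largefile(repo_root, cache_dir, sha1, download):
    """Return the path of a largefile in the repo store, populating it
    from the user cache (or the server) if it is missing.

    repo_root -- repository root directory
    cache_dir -- user-wide cache directory (a cache and only a cache)
    sha1      -- content hash naming the largefile
    download  -- callable (sha1, dest_path) that fetches from the server
    """
    store = os.path.join(repo_root, ".hg", "largefiles", sha1)
    cache = os.path.join(cache_dir, sha1)
    if os.path.exists(store):
        return store                       # store is already complete
    os.makedirs(os.path.dirname(store), exist_ok=True)
    if not os.path.exists(cache):
        os.makedirs(cache_dir, exist_ok=True)
        download(sha1, cache)              # populate the cache from the server
    try:
        os.link(cache, store)              # share disk space when possible
    except OSError:
        shutil.copy2(cache, store)         # cross-device or no hardlink support
    return store
```

Because the store is always filled in, the cache can be deleted at any time without losing data, which is exactly the "cache and only a cache" property.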
> I think your proposal here is a good idea. It is substantially more
> consistent, and it should fix our issue with cloning largefiles repos
> between multiple local repos on different machines. And I also agree that
> we need a --backfill-largefiles option or --all-largefiles or something for
> clone, both for backup purposes, and also for the "fork" feature of some
> software tools.
> Patches to fix both of these things would be welcome (at least by me).
I'd like to see this resolved before the first release if possible.
Mathematics is the supreme nostalgia of our time.