[RFC] largefiles - "system-wide" cache
benjamin at bitquabit.com
Wed Oct 19 16:54:25 CDT 2011
On Oct 19, 2011, at 5:26 PM, Matt Mackall wrote:
> On Wed, 2011-10-19 at 17:56 +0200, Na'Tosha Bard wrote:
>> 2011/10/19 Carter, Eli <Eli.Carter at tektronix.com>
>>> As (I think) it should be implemented:
>>> ~/.largefiles (or ~/.hg-largefiles) should be a cache and only a cache.
>>> $repo/.hg/largefiles should be a store.
>>> I think we really need some way to say 'this repo must maintain a complete
>>> store' so there is a way to assert that 'this repo is always complete'. As
>>> it stands, I worry about data getting lost in a convoluted
>>> backup-and-restore shuffle.
>> I think your proposal here is a good idea. It is substantially more
>> consistent, and it should fix our issue with cloning largefiles repos
>> between multiple local repos on different machines. And I also agree that
>> we need a --backfill-largefiles option or --all-largefiles or something for
>> clone, both for backup purposes, and also for the "fork" feature of some
>> software tools.
>> Patches to fix both of these things would be welcome (at least by me).
I agree. This is also how kbfiles originally operated; it looks like the logic was reversed or dropped in the move to largefiles.
I'll submit two patches, if no one else is already working on it: one to add "hg clone --all-largefiles", and one to set things back to having a user *cache* and a canonical in-repository *store*.
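To make the proposed semantics concrete, here is a minimal sketch of the lookup order being argued for: the per-repo store ($repo/.hg/largefiles) is canonical, and the per-user directory (~/.largefiles) is only a cache that backfills the store. This is an illustration, not the actual largefiles code; the function name and signature are hypothetical.

```python
import os
import shutil

def find_largefile(repo_root, user_cache, file_hash):
    """Return the path to a largefile blob, preferring the repo store.

    Hypothetical sketch: if the blob is only in the user cache, copy it
    into the repo store so the repository stays complete (and a plain
    backup of the repo directory loses nothing).
    """
    store_path = os.path.join(repo_root, ".hg", "largefiles", file_hash)
    if os.path.exists(store_path):
        return store_path

    cache_path = os.path.join(user_cache, file_hash)
    if os.path.exists(cache_path):
        # Backfill the canonical store from the cache.
        os.makedirs(os.path.dirname(store_path), exist_ok=True)
        shutil.copyfile(cache_path, store_path)
        return store_path

    return None  # not available locally; must be fetched from a peer
```

The point of the backfill step is Eli's invariant: after any local access, the repo's own store is complete, so the user cache can be deleted at any time without data loss.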