[PATCH] largefiles: cache new largefiles for new heads when pulling

Kevin Gessner kevin at fogcreek.com
Wed Jan 18 09:13:18 CST 2012


On Tue, Jan 17, 2012 at 6:52 PM, Matt Mackall <mpm at selenic.com> wrote:

> On Tue, 2012-01-17 at 17:06 -0600, Matt Mackall wrote:
> > On Tue, 2012-01-17 at 22:10 +0100, Na'Tosha Bard wrote:
> > > On Tue, Jan 17, 2012 at 9:56 PM, Matt Mackall <mpm at selenic.com> wrote:
> > >
> > > > On Tue, 2012-01-17 at 16:36 +0100, Na'Tosha Bard wrote:
> > > > > # HG changeset patch
> > > > > # User Na'Tosha Bard <natosha at unity3d.com>
> > > > > # Date 1326814529 -3600
> > > > > # Node ID 453c8e098d89d29c725ef0c3bc609aa977c2f7c0
> > > > > # Parent  d25169284e9883bfff74faf4fdafdd74a6a5ba27
> > > > > largefiles: cache new largefiles for new heads when pulling
>


> > "I guess" was basically shorthand for "there's apparently no perfect
> > answer". The same problem exists with subrepos, but we can't do much
> > about it there because we don't have a free-standing subrepo cache:
> > everything lives in the working directory.
> >
> > If the other largefiles folks think this is the best answer here, I'm
> > fine with it.
>

This seems like a good, practical solution. +1


>
> Looks like this fails a couple tests, check-code plus this in
> test-largefiles-cache.t
>
> @@ -37,6 +37,8 @@
>   adding file changes
>   added 1 changesets with 1 changes to 1 files
>   (run 'hg update' to get a working copy)
> +  caching new largefiles
> +  0 largefiles cached
>
> Not sure if that's a correct result, so dropping for now.
>

Na'Tosha, can this be changed to print "caching new largefiles" only when it
actually has to hit the network and grab the files? Maybe a debug message
would be enough in this spot.
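
Roughly what I have in mind (just a sketch: the function name and the
numcached value are invented for illustration, and only the
ui.status()/ui.debug() calls are real Mercurial ui methods):

    def reportcaching(ui, numcached):
        """Report caching only when largefiles were actually fetched.

        numcached is assumed to be the number of largefiles the pull
        actually downloaded from the remote store.
        """
        if numcached:
            # We really hit the network, so tell the user about it.
            ui.status("caching new largefiles\n")
            ui.status("%d largefiles cached\n" % numcached)
        else:
            # Nothing was downloaded; only mention it under --debug.
            ui.debug("no new largefiles to cache\n")

That way the plain output in test-largefiles-cache.t wouldn't change when
nothing needs to be fetched, and --debug would still show what happened.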